NIPS
Title Order-Invariant Cardinality Estimators Are Differentially Private Abstract We consider privacy in the context of streaming algorithms for cardinality estimation. We show that a large class of algorithms all satisfy ε-differential privacy, so long as (a) the algorithm is combined with a simple down-sampling procedure, and (b) the input stream cardinality is Ω(k/ε). Here, k is a certain parameter of the sketch that is always at most the sketch size in bits, but is typically much smaller. We also show that, even with no modification, algorithms in our class satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our analysis applies to essentially all popular cardinality estimation algorithms, and substantially generalizes and tightens privacy bounds from earlier works. Our approach is faster and exhibits a better utility-space tradeoff than prior art. 1 Introduction Cardinality estimation, or the distinct counting problem, is a fundamental data analysis task. Typical applications are found in network traffic monitoring [9], query optimization [20], and counting unique search engine queries [14]. A key challenge is to perform this estimation in small space while processing each data item quickly. Typical approaches for solving this problem at scale involve data sketches such as the Flajolet-Martin (FM85) sketch [12], HyperLogLog (HLL) [11], and Bottom-k [2, 6, 3]. All of these provide approximate cardinality estimates while using bounded space. While research has historically focused on the accuracy, speed, and space usage of these sketches, recent work examines their privacy guarantees. These privacy-preserving properties have grown in importance as companies have built tools that can grant an appropriate level of privacy to different people and scenarios. The tools aid in satisfying users' demand for better data stewardship, while also ensuring compliance with regulatory requirements. We show that all cardinality estimators in a class of hash-based, order-invariant sketches with bounded size are ε-differentially private (DP) so long as the algorithm is combined with a simple down-sampling procedure and the true cardinality satisfies a mild lower bound. This lower bound requirement can be guaranteed to hold by inserting sufficiently many "phantom elements" into the stream when initializing the sketch. We also show that, even with no modification, algorithms in our class satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our novel analysis has significant benefits. First, prior works on differentially private cardinality estimation have analyzed only specific sketches [23, 25, 5, 22]. Moreover, many of the sketches analyzed (e.g., [23, 22]), while reminiscent of sketches used in practice, in fact differ from practical sketches in important ways. For example, Smith et al. [22] analyze a variant of HLL that Section 4 shows has an update time that can be k times slower than an HLL sketch with k buckets. While our analysis covers an entire class of sketches at once, our error analysis improves upon prior work in many cases when specialized to specific sketches. For example, our analysis yields tighter privacy bounds for HLL than the one given in [5], yielding both an ε-DP guarantee, rather than an (ε, δ)-DP guarantee, as well as tighter bounds on the failure probability δ; see Section 4 for details.
Crucially, the class of sketches we analyze captures many (in fact, almost all to our knowledge) of the sketches that are actually used in practice. This means that existing systems can be used in contexts requiring privacy, either without modification if streams are guaranteed to satisfy the mild cardinality lower bound we require, or with the simple pre-processing step we describe if such cardinality lower bounds may not hold. Thus, existing data infrastructure can be easily modified to provide DP guarantees, and in fact existing sketches can be easily migrated to DP summaries. 1.1 Related work One perspective is that cardinality estimators cannot simultaneously preserve privacy and offer good utility [7]. However, this impossibility result applies only when an adversary can create and merge an arbitrary number of sketches, effectively observing an item's value many times. It does not address the privacy of one sketch itself. Other works have studied more realistic models where either the hashes are public, but private noise is added to the sketch [23, 17, 25], or the hashes are secret [5] (i.e., not known to the adversary who is trying to "break" privacy). This latter setting turns out to permit less noisy cardinality estimates. Past works study specific sketches or a variant of a sketch. For example, Smith et al. [22] show that an HLL-type sketch is ε-DP, while [25] modifies the FM85 sketch using coordinated sampling, which is also based on a private hash. Variants of both models are analyzed by Choi et al. [5], and they show (amongst other contributions) a similar result to [22], establishing that an FM85-type sketch is differentially private. Like these prior works, we focus on the setting when the hash functions are kept secret from the adversary. A related problem of differentially private estimation of cardinalities under set operations is studied by [18], but they assume the inputs to each sketch are already de-duplicated. There is one standard caveat: following prior works [22, 5], our privacy analysis assumes a perfectly random hash function. One can remove this assumption both in theory and practice by using a cryptographic hash function. This will yield a sketch that satisfies either a computational variant of differential privacy called SIM-CDP, or standard information-theoretic notions of differential privacy under the assumption that the hash function fools space-bounded computations [22, Section 2.3]. Other works also consider the privacy-preserving properties of common Lp functions over data streams. For p = 2, these include fast dimensionality reduction [4, 24] and least squares regression [21]. Meanwhile, for 0 < p ≤ 1, frequency-moment estimation has also been studied [26]. Our focus is solely the cardinality estimation problem when p = 0. 1.2 Preliminaries More formally, we consider the following problem. Problem Definition Let D = {x1, . . . , xn} denote a stream of samples with each identifier xi coming from a large universe U, e.g., of size 2^64. The objective is to estimate the cardinality, or number of distinct identifiers, of D using an algorithm S which is given privacy parameters ε, δ ≥ 0 and a space bound b, measured in bits. Definition 1.1 (Differential Privacy [8]).
A randomized algorithm S is (ε, δ)-differentially private ((ε, δ)-DP for short, or, if δ = 0, pure ε-DP) if for any pair of data sets D, D′ that differ in one record and for all S in the range of S, Pr(S(D′) ∈ S) ≤ e^ε Pr(S(D) ∈ S) + δ, where the probability is over the internal randomness of the algorithm S. Rather than analyzing any specific sketching algorithm, we analyze a natural class of randomized distinct counting sketches. Algorithms in this class operate in the following manner: each time a new stream item i arrives, i is hashed using some uniform random hash function h, and then h(i) is used to update the sketch, i.e., the update procedure depends only on h(i), and is otherwise independent of i. Our analysis applies to any such algorithm that depends only on the set of observed hash values. Equivalently, the sketch state is invariant both to the order in which stream items arrive, and to item duplication (see footnote 2). We call this class of algorithms hash-based, order-invariant cardinality estimators. Note that for any hash-based, order-invariant cardinality estimator, the distribution of the sketch depends only on the cardinality of the stream. All distinct counting sketches of which we are aware that are invariant to permutations of the input data are included in this class. This includes FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL, as shown in Section 4. Definition 1.2 (Hash-Based, Order-Invariant Cardinality Estimators). Any sketching algorithm that depends only on the set of hash values of stream items using a uniform random hash function is a hash-based, order-invariant cardinality estimator. We denote this class of algorithms by C. We denote a sketching algorithm with internal randomness r by Sr (for hash-based algorithms, r specifies the random hash function used). The algorithm takes a data set D and generates a data structure Sr(D) that is used to estimate the cardinality. We refer to this structure as the state of the sketch, or simply the sketch, and the values it can take by s ∈ Ω. Sketches are first initialized, and then items are inserted into the sketch with an add operation that may or may not change the sketch state. The size of the sketch is a crucial constraint, and we denote the space consumption in bits by b. For example, FM85 consists of k bitmaps of length ℓ. Thus, its state s ∈ Ω = {0, 1}^{k×ℓ}. Typically, ℓ = 32, so that b = 32k. Further examples are given in Section 4. Our goal is to prove such sketches are differentially private. 2 Hash-Based Order-Invariant Estimators are Private The distribution of any hash-based, order-invariant cardinality estimator depends only on the cardinality of the input stream, so without loss of generality we assume the input is D = {1, . . . , n}. Denote the set D\{i} by D−i for i ∈ D and a sketching algorithm with internal randomness r by Sr(D). By definition, for an ε-differential privacy guarantee, we must show that the Bayes factor comparing the hypothesis i ∈ D versus i /∈ D is appropriately bounded: e^{−ε} < Pr_r(Sr(D) = s) / Pr_r(Sr(D−i) = s) < e^ε for all s ∈ Ω, i ∈ D. (1) Overview of privacy results. The main result in our analysis bounds the privacy loss of a hash-based, order-invariant sketch in terms of just two sketch-specific quantities. Both quantities intuitively capture how sensitive the sketch is to the removal or insertion of a single item from the data stream. The first quantity is a bound kmax on the number of items that would change the sketch if removed from the stream.
Denote the items whose removal from the data set changes the sketch by K_r := {i ∈ D : Sr(D−i) ≠ Sr(D)}. (2) Denote its cardinality by Kr := |K_r| and the upper bound by kmax = sup_r Kr. The second quantity is a bound on a "sampling" probability. Let π(s) be the probability that a newly inserted item would change a sketch in state s, π(s) := Pr_r(Sr(D) ≠ Sr(D−i) | Sr(D−i) = s). (3) Although a sketch generally does not store explicit samples, conceptually, it can be helpful to think of π(s) as the probability that an as-yet-unseen item i gets "sampled" by a sketch in state s. We upper bound π∗ := sup_{s∈Ω} π(s) to limit the influence of items added to the stream. The main sub-result in our analysis (Theorem 2.4) roughly states that the sketch is ε-DP so long as (a) the sampling probability π∗ < 1 − e^{−ε} is small enough, and (b) the stream cardinality n > kmax/(1 − e^{−ε}) = Θ(kmax/ε) is large enough. We show Property (a) is a necessary condition for any ε-DP algorithm if the algorithm works over data universes of unbounded size. Unfortunately, Property (a) does not directly hold for natural sketching algorithms. (Footnote 2: A sketch is duplication-invariant if and only if its state when run on any stream σ is identical to its state when run on the stream σ′ in which all elements of σ appear exactly once.) But we show (Section 2.2) that, by applying a simple down-sampling procedure, any hash-based, order-invariant algorithm can be modified to satisfy (a). Furthermore, Section 4 shows common sketches satisfy Property (a) with high probability, thus providing (ε, δ)-DP guarantees for sufficiently large cardinalities. Compared to [5], these guarantees are tighter, more precise, and more general, as they establish that the failure probability δ decays exponentially with n, provide explicit formulas for δ, and apply to a range of sketches rather than just HLL. Overview of the analysis. The definition of ε-DP requires bounding the Bayes factor in equation (1). The challenge is that the numerator and denominator may not be easy to compute by themselves. However, it is similar to the form of a conditional probability involving only one insertion. Our main trick re-expresses this Bayes factor as a sum of conditional probabilities involving a single insertion. Since the denominator Pr_r(Sr(D−i) = s) involves a specific item i which may change the sketch, we instead consider the smallest item Jr whose removal does not change the sketch. This allows us to re-express the numerator in terms of a conditional probability Pr_r(Sr(D) = s ∧ Jr = j) = Pr_r(Jr = j | Sr(D−j) = s) Pr_r(Sr(D−j) = s), involving only a single insertion plus a nuisance term Pr_r(Sr(D−j) = s). The symmetry of items gives that the nuisance term is equal to the denominator, Pr_r(Sr(D−j) = s) = Pr_r(Sr(D−i) = s), thus allowing us to eliminate it. Lemma 2.1. Suppose n > sup_r Kr. Then Pr_r(Kr = n) = 0, and Pr_r(Sr(D) = s) / Pr_r(Sr(D−i) = s) = Σ_{j∈D} Pr_r(Jr = j | Sr(D−j) = s). (4) By further conditioning on the total number of items that, when removed, can change the sketch, we obtain conditional probabilities that are simple to calculate. A combinatorial argument simplifies the resulting expression and gives us two factors in Lemma 2.2, one involving the sampling probability for new items π(s) given a sketch in state s, and the other being an expectation involving Kr. This identifies the two quantities that must be controlled in order for a sketch to be ε-DP. Lemma 2.2. Under the same assumptions as Lemma 2.1, Σ_{j∈D} Pr_r(Jr = j | Sr(D−j) = s) = (1 − π(s)) · E_r(1 + Kr/(n − Kr + 1) | Sr(D−1) = s). (5)
To show that all hash-based, order-invariant sketching algorithms can be made ε-DP, we show that Kr can always be bounded by the maximum size of the sketch in bits. Thus, if a sketch is combined with a downsampling procedure to ensure π(s) is sufficiently small, one satisfies both of the properties that are sufficient for an ε-DP guarantee. Having established (5), we can derive a result showing that a hash-based, order-invariant sketch is ε-DP so long as the stream cardinality is large enough and sup_{s∈Ω} π(s) is not too close to 1. Corollary 2.3. Let Ω denote the set of all possible states of a hash-based, order-invariant distinct counting sketching algorithm. When run on a stream of cardinality n > sup_r Kr, the sketch output by the algorithm satisfies ε-DP if π0 := 1 − e^{−ε} > sup_{s∈Ω} π(s), (6) and e^ε > 1 + E_r(Kr/(n − Kr + 1) | Sr(D−1) = s) for all sketch states s ∈ Ω. (7) Furthermore, if the data stream D consists of items from a universe U of unbounded size, Condition (6) is necessarily satisfied by any sketching algorithm satisfying ε-DP. The above corollary may be difficult to apply directly, since the expectation in Condition (7) is often difficult to compute and depends on the unknown cardinality n. Our main result provides sufficient criteria to ensure that Condition (7) holds. The criteria are expressed in terms of a minimum cardinality n0 and a sketch-dependent constant kmax. This constant kmax is a bound on the maximum number of items which change the sketch when removed. That is, for all input streams D and all r, kmax ≥ |K_r|. We derive kmax for a number of popular sketch algorithms in Section 4. Theorem 2.4. Consider any hash-based, order-invariant distinct counting sketch. The sketch output by the algorithm satisfies an ε-DP guarantee if sup_{s∈Ω} π(s) < π0 := 1 − e^{−ε}, (8) and there are strictly greater than n0 := kmax/(1 − e^{−ε}) unique items in the stream. (9) Later, we explain how to modify existing sketching algorithms in a black-box way to satisfy these conditions. If left unmodified, most sketching algorithms used in practice allow for some sketch values s ∈ Ω which violate Condition (8), i.e., π(s) > 1 − e^{−ε}. We call such sketch values "privacy-violating". Fortunately, such values turn out to arise with only tiny probability. The next theorem states that, so long as this probability is smaller than δ, the sketch satisfies (ε, δ)-DP without modification. The proof of Theorem 2.5 follows immediately from Theorem 2.4. Theorem 2.5. Let n0 be as in Theorem 2.4. Given a hash-based, order-invariant distinct counting sketch with bounded size, let Ω′ be the set of sketch states such that π(s) ≥ π0. If the input stream D has cardinality n > n0, then the sketch is (ε, δ)-differentially private, where δ = Pr_r(Sr(D) ∈ Ω′). 2.1 Constructing Sketches Satisfying Approximate Differential Privacy: Algorithm 1a Theorem 2.5 states that, when run on a stream with n ≥ n0 distinct items, any hash-based, order-invariant algorithm (see Algorithm 1a) automatically satisfies (ε, δ)-differential privacy, where δ denotes the probability that the final sketch state s is "privacy-violating", i.e., π(s) > π0 = 1 − e^{−ε}. In Section 4, we provide concrete bounds on δ for specific algorithms. In all cases considered, δ falls exponentially with respect to the cardinality n. Thus, high privacy is achieved with high probability so long as the stream is large. We now outline how to derive a bound for a specific sketch. We can prove the desired bound on δ by analyzing sketches in a manner similar to the coupon collector problem.
Assuming a perfect, random hash function, the hash values of a universe of items define a probability space. We can identify v ≤ kmax events, or coupons, C1, . . . , Cv, such that π(s) is guaranteed to be less than π0 after all events have occurred. Thus, if all coupons are collected, the sketch satisfies the requirement to be ε-DP. As the cardinality n grows, the probability that a particular coupon remains missing decreases exponentially. A simple union bound shows that the probability δ that any coupon is missing decreases exponentially with n. For more intuition as to why unmodified sketches satisfy an (ε, δ)-DP guarantee when the cardinality is large, we note that the inclusion probability π(s) is closely tied to the cardinality estimate in most sketching algorithms. For example, the cardinality estimators used in HLL and KMV are inversely proportional to the sampling probability π(s), i.e., N̂(s) ∝ 1/π(s), while for LPCA and Adaptive Sampling, the cardinality estimators are monotonically decreasing with respect to π(s). Thus, for most sketching algorithms, when run on a stream of sufficiently large cardinality, the resulting sketch is privacy-violating only when the cardinality estimate is also inaccurate. Theorem 2.6 is useful when analyzing the privacy of such algorithms, as it characterizes the probability δ of a "privacy violation" in terms of the probability that the returned estimate, N̂(Sr(D)), is lower than some threshold Ñ(π0). Theorem 2.6. Let Sr be a sketching algorithm with estimator N̂(Sr). Suppose n ≥ n0 and the estimate returned on sketch s is a strictly decreasing function of π(s), so that N̂(s) = Ñ(π(s)) for a function Ñ. Then Sr is (ε, δ)-DP, where δ = Pr_r(N̂(Sr(D)) < Ñ(π0)). 2.2 Constructing Sketches Satisfying Pure Differential Privacy: Algorithms 1b-1c Theorem 2.4 guarantees an ε-DP sketch if (8) and (9) hold. Condition (8) requires that sup_{s∈Ω} π(s) < 1 − e^{−ε}, i.e., the "sampling probability" of the sketching algorithm is sufficiently small regardless of the sketch's state s. Meanwhile, (9) requires that the input cardinality is sufficiently large. We show that any hash-based, order-invariant distinct counting sketching algorithm can satisfy these two conditions by adding a simple pre-processing step which does two things. First, it "downsamples" the input stream by hashing each input, interpreting the hash values as numbers in [0, 1], and simply ignoring items whose hashes are larger than π0. The downsampling hash must be independent of that used by the sketching algorithm itself. This ensures that Condition (8) is satisfied, as each input item has maximum sampling probability π0. BASE(items, ε): S ← InitSketch(); for x ∈ items do S.add(x); return N̂(S). (a) (ε, δ)-DP for n ≥ n0. DPSKETCHLARGESET(items, ε): S ← InitSketch(); π0 ← 1 − e^{−ε}; for x ∈ items do if hash(x) < π0 then S.add(x); return N̂(S)/π0. (b) (ε, 0)-DP for n ≥ n0. DPSKETCHANYSET(items, ε): S, n0 ← DPInitSketch(ε); π0 ← 1 − e^{−ε}; for x ∈ items do if hash(x) < π0 then S.add(x); return N̂(S)/π0 − n0. (c) (ε, 0)-DP for n ≥ 1. Algorithms 1: Differentially private cardinality estimation algorithms from black-box sketches. The function InitSketch() initializes a black-box sketch. The uniform random hash function hash(x) is chosen independently of any hash in the black-box sketch and is interpreted as a real in [0, 1]. The cardinality estimate returned by sketch S is denoted N̂(S). DPInitSketch is given in Algorithm 2a.
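To make the black-box construction concrete, the following is a minimal Python sketch of the pre-processing wrapper in the spirit of Algorithm 1c, assuming a generic base sketch object exposing add() and estimate() (stand-ins for InitSketch and N̂), and a hash-to-[0, 1) mapping and phantom-item identifiers that are illustrative choices rather than the paper's exact DPInitSketch implementation.

```python
import math
import hashlib

def uniform_hash(x, seed):
    """Map an item to a pseudo-uniform real in [0, 1) using a seeded hash."""
    digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def dp_sketch_any_set(items, eps, base_sketch, k_max, seed=0):
    """Sketch of Algorithm 1c: downsample at rate pi_0 = 1 - e^{-eps} so that
    Condition (8) holds, add n_0 phantom items so the cardinality lower bound
    (9) holds, then return an unbiased corrected estimate."""
    pi0 = 1.0 - math.exp(-eps)           # maximum allowed sampling probability
    n0 = math.ceil(k_max / pi0)          # minimum cardinality from Condition (9)
    # Phantom items: distinct from any real item and downsampled like real items.
    for j in range(n0):
        if uniform_hash(("__phantom__", j), seed) < pi0:
            base_sketch.add(("__phantom__", j))
    # Real items: keep only those whose (independent) downsampling hash is below pi_0.
    for x in items:
        if uniform_hash(x, seed) < pi0:
            base_sketch.add(x)
    # Undo the downsampling and subtract the phantom items.
    return base_sketch.estimate() / pi0 - n0
```

The downsampling hash is seeded separately from the base sketch's own hash, reflecting the requirement that the two be independent; dropping the phantom-item loop and the final "− n0" correction recovers Algorithm 1b for streams already known to have n ≥ n0.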
If there is an a priori guarantee that the number of distinct items n is greater than n0 = kmax/(1 − e^{−ε}), then (9) is trivially satisfied. Pseudocode for the resulting ε-DP algorithm is given in Algorithm 1b. If there is no such guarantee, then the preprocessing step adds n0 items to the input stream to satisfy (9). To ensure unbiasedness, these n0 items must (i) be distinct from any items in the "real" stream, and (ii) be downsampled as per the first modification. An unbiased estimate of the cardinality of the unmodified stream can then be easily recovered from the sketch via a post-processing correction. Pseudocode for the modified algorithm, which is guaranteed to satisfy ε-DP, is given in Algorithm 1c. Corollary 2.7. The functions DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c) yield ε-DP distinct counting sketches provided that n ≥ n0 and n ≥ 1, respectively. 2.3 Constructing ε-DP Sketches from Existing Sketches: Algorithm 3, Appendix A.1 As regulations change and new ones are added, existing data may need to be appropriately anonymized. However, if the data has already been sketched, the underlying data may no longer be available, and even if it is retained, it may be too costly to reprocess it all. Our theory allows these sketches to be directly converted into differentially private sketches when the sketch has a merge procedure. Using the merge procedure to achieve ε-differential privacy yields more useful estimates than the naive approach of simply adding Laplace noise to cardinality estimates in proportion to the global sensitivity. The algorithm assumes it is possible to take a sketch Sr(D1) of a stream D1 and a sketch Sr(D2) of a stream D2, and "merge" them to get a sketch of the concatenation of the two streams D1 ◦ D2. This is the case for most practical hash-based, order-invariant distinct count sketches. Denote the merge of sketches Sr(D1) and Sr(D2) by Sr(D1) ∪ Sr(D2). In this setting, we think of the existing non-private sketch Sr(D1) as being converted to a sketch that satisfies ε-DP by Algorithm 3 (see pseudocode in Appendix A.1). Since sketch Sr(D1) is already constructed, items cannot be first downsampled in the build phase the way they are in Algorithms 1b-1c. To achieve ε-DP, Algorithm 3 constructs a noisily initialized sketch, Sr(D2), which satisfies both the downsampling condition (Condition (8)) and the minimum stream cardinality requirement (Condition (9)), and returns the merged sketch Sr(D1) ∪ Sr(D2). Hence, the sketch will satisfy both conditions for ε-DP, as shown in Corollary A.3. This merge-based procedure typically adds no additional error to the estimates for large cardinalities. In contrast, the naive approach of adding Laplace noise can add significant noise since the sensitivity can be very large. For example, HLL's estimator is of the form N̂_HLL(s) = α/π(s), where α is a constant and s is the sketch. One item can update a bin to the maximum value, so that the updated sketch s′ has sampling probability π(s′) < π(s)(1 − 1/k). The sensitivity of the cardinality estimate is thus at least N̂_HLL(s)/k. Given that the cardinality estimate, and hence the sensitivity, can be arbitrarily large when n ≥ k, the naive approach is unworkable to achieve ε-DP. 3 The Utility of Private Sketches When processing a data set with n unique items, denote the expectation and variance of a sketch and its estimator by E_n(N̂) and Var_n(N̂), respectively. We show that our algorithms all yield unbiased estimates.
Furthermore, we show that for Algorithms 1a-1c, if the base sketch satisfies a relative error guarantee (defined below), the DP sketches add no additional error asymptotically. Establishing unbiasedness. To analyze the expectation and variance of each algorithm's estimator, N̂(S(D)), note that each estimator uses a 'base estimate' N̂_base from the base sketch S and has the form N̂(S(D)) = N̂_base/p − V, where p is the downsampling probability and V is the number of artificial items added. This allows us to express expectations and variances via the variance of the base estimator. Theorem 3.1. Consider a base sketching algorithm S ∈ C with an unbiased estimator N̂_base for the cardinality of items added to the base sketch. Algorithms 1(a)-(c) and 3 yield unbiased estimators. Bounding the variance. Theorem 3.1 yields a clean expression for the variance of our private algorithms. Namely, Var[N̂(Sr(D))] = E[Var(N̂_base/p | V)], which is shown in Corollary B.1. The expression is a consequence of the law of total variance and the fact that the estimators are unbiased. We say that the base sketch satisfies a relative-error guarantee if, with high probability, the estimate returned by the sketching algorithm when run on a stream of cardinality n is (1 ± 1/√c)n for some constant c > 0. Let N̂_{base,n} denote the cardinality estimate when the base algorithm is run on a stream of cardinality n, as opposed to N̂_base, which denotes the cardinality estimate produced by the base sketch on the sub-sampled stream used in our private sketches DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c). The relative error guarantee is satisfied when Var_n(N̂_{base,n}) < n^2/c; this is an immediate consequence of Chebyshev's inequality. When the number of artificially added items V is constant, as in Algorithms 1b and 1c, Corollary B.1 provides a precise expression for the variance of the differentially private sketch. In Theorem 3.2 below, we use this expression to establish that the modifications of the base algorithm to an ε-DP sketch as per Algorithms 1b and 1c satisfy the exact same relative error guarantee asymptotically. In other words, the additional error due to any pre-processing (down-sampling and possibly adding artificial items) is insignificant for large cardinalities n. Theorem 3.2. Suppose N̂_{base,n} satisfies a relative error guarantee, Var_n(N̂_{base,n}) < n^2/c, for all n and for some constant c. Let v = 0 for Algorithm 1b and v = n0 for Algorithm 1c. Then Algorithms 1b and 1c satisfy Var_n(N̂) ≤ (n + v)^2/c + (n + v)(v + π0^{−1})/kmax = (n + v)^2/c + O(n), (10) so that Var_n(N̂)/Var_n(N̂_{base,n}) → 1 as n → ∞. In Corollary B.2 we prove an analogous result for Algorithm 3, which merges non-private and noisy sketches to produce a private sketch. Informally, the result is comparable to (10), albeit with v ≥ n0. This is because, in Algorithm 3, the number of artificial items added, V, is a random variable. We ensure that the algorithm satisfies a utility guarantee by bounding V with high probability. This is equivalent to showing that the base sketching algorithm satisfies an (ε, δ)-DP guarantee, as for any n∗ ≥ n0 and dataset D∗ with |D∗| = n∗, (ε, δ_{n∗})-DP ensures δ_{n∗} > Pr_r(π(Sr(D∗)) > π0) = Pr_r(V > n∗), which follows from the definition of V in Algorithm 2b. 4 Examples of Hash-based, Order-Invariant Cardinality Estimators We now provide (ε, δ)-DP results for a select group of sketches: FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL.
The (ε, δ)-DP results in this section operate in the Algorithm 1a setting with no modification to the base sketching algorithm. Recall that the quantities of interest are the number of bins used in the sketch k, the size of the sketch in bits b, and the number of items whose absence changes the sketch kmax. From Section 2 and Lemma A.1 we know that kmax ≤ b, but for several common sketches we show a stronger bound of kmax = k. The relationship between these parameters for various sketching algorithms is summarized in Table 1. Table 2, Appendix C, details our improvements over [22, 5] in both privacy and utility. We remind the reader that, per (6), π0 = 1 − e^{−ε}, and per (9), n0 = kmax/(1 − e^{−ε}). Furthermore, recall that once we bound the parameter kmax for any given hash-based, order-invariant sketching algorithm, Corollary 2.7 states that the derived Algorithms 1b-1c satisfy ε-DP provided that n ≥ n0 and n ≥ 1, respectively. Accordingly, in the rest of this section, we bound kmax for each example sketch of interest, which has the consequences for pure ε-differential privacy delineated above. Flajolet-Martin '85 The FM85 sketch, often called Probabilistic Counting with Stochastic Averaging (PCSA), consists of k bitmaps Bi of length ℓ. Each item is hashed into a bitmap and index (Bi, Gi) and sets the indexed bit in the bitmap to 1. The chosen bitmap is uniform amongst the k bitmaps and the index Gi ∼ Geometric(1/2). If ℓ is the length of each bitmap, then the total number of bits used by the sketch is b = kℓ and kmax = kℓ for all seeds r. A typical value for ℓ is 32 bits, as used in Table 1. Past work [25] proposed an ε-DP version of FM85 using a similar subsampling idea combined with random bit flips. Theorem 4.1. Let v = ⌈− log2 π0⌉ and π̃0 := 2^{−v} ∈ (π0/2, π0]. If n ≥ n0, then the FM85 sketch is (ε, δ)-DP with δ ≤ kv exp(−π̃0 n/k). For any k, FM85 has kmax ∈ {32k, 64k}. This is worse than all other sketches we study, which have kmax = k, so FM85 needs a larger minimum number of items n0 to ensure the sketch is (ε, δ)-DP. LPCA The Linear Probabilistic Counting Algorithm (LPCA) consists of a length-k bitmap. Each item is hashed to an index and sets its bit to 1. If B is the number of 1 bits, the LPCA cardinality estimate is N̂_LPCA = −k log(1 − B/k) = −k log π(Sr(D)). Trivially, kmax = k. Since all bits are expected to be 1 after processing roughly k log k distinct items, the capacity of the sketch is bounded. To estimate larger cardinalities, one first downsamples the distinct items with some sampling probability p. To ensure the sketch satisfies an ε-DP guarantee, one simply ensures p ≤ π0. In this case, our analysis shows that LPCA is differentially private with no modifications if the cardinality is sufficiently large. Otherwise, since the estimator N̂(s) is a function of the sampling probability π(s), Theorem 2.6 provides an (ε, δ) guarantee in terms of N̂. Theorem 4.2. Consider an LPCA sketch with k bits and downsampling probability p. If p < π0 and n > k/(1 − e^{−ε}), then LPCA is ε-DP. Otherwise, let b0 = ⌈k(1 − π0/p)⌉, π̃0 = b0/k, and µ0 be the expected number of items inserted to fill b0 bits in the sketch. Then, LPCA is (ε, δ)-DP if n > µ0, with δ = Pr_r(B < b0) < (µ0/n) · exp(−π̃0 n/µ0) / exp(−π̃0), (11) where B is the number of filled bits in the sketch. Furthermore, µ0 < Ñ(π̃0), where Ñ(π̃) = −kp log(1 − π̃) is the cardinality estimate of the sketch when the sampling probability is π̃. Bottom-k (also known as MinCount or KMV) sketches store the k smallest hash values.
Removing an item changes the sketch if and only if (1) the item's hash value is one of these k and (2) it does not collide with another item's hash value. Thus, kmax = k. Typically, the output size of the hash function is large enough to ensure that the collision probability is negligible, so for practical purposes kmax = k exactly. Since the Bottom-k estimator N̂(s) = (k − 1)/π(s) is a function of the update probability π(s), Theorem 2.6 gives an (ε, δ)-DP guarantee in terms of the cardinality estimate by coupon collecting; Theorem 4.3 tightens this bound on δ for a stronger (ε, δ)-DP guarantee. (Footnote 3: This approximation holds for n < k. A better approximation of the error is √(k(exp(n/k) − n/k − 1)).) Theorem 4.3. Consider Bottom-k with k minimum values. Given ε > 0, let π0, n0 be the corresponding subsampling probability and minimum cardinality needed to ensure the modified Bottom-k sketch is (ε, 0)-DP. When run on streams of cardinality n ≥ n0, the unmodified sketch is (ε, δ)-DP, where δ = P(X ≤ k) < exp(−n αn), where X ∼ Binomial(n, π0) and αn = (1/2) (π0 − k/n)^2 / (π0(1 − π0) + 1/(3n^2)) → (1/2) π0/(1 − π0) as n → ∞. The closely related Adaptive Sampling sketch has the same privacy behavior as a Bottom-k sketch. Rather than storing exactly k hashes, the algorithm maintains a threshold p and stores up to k hash values beneath p. Once the sketch size exceeds k, the threshold is halved and only hashes less than p/2 are kept. Since at most k hashes are stored, and the sketch is modified only if one of these hashes is removed, the maximum number of items that can modify the sketch by removal is kmax = k. Corollary 4.4. For any size k and cardinality n, if a Bottom-k sketch is (ε, δ)-DP, then a maximum size k Adaptive Sampling sketch is (ε, δ)-DP with the same ε and δ. HyperLogLog (HLL) hashes each item to a bin and value (Bi, Gi). Within each bin, it takes the maximum value, so each bin is a form of Bottom-1 sketch. If there are k bins, then kmax = k. Our results uniformly improve upon existing DP results on the HLL sketch and its variants. One variation of the HLL sketch achieves ε-DP but is far slower than HLL, as it requires every item to be independently hashed once for each of the k bins, rather than just one time [22]. In other words, [22] needs O(k) update time compared to O(1) for our algorithms. Another provides an (ε, δ) guarantee for streams of cardinality n ≥ n′0, for an n′0 that is larger than our n0 by a factor of roughly (at least) 8, with δ falling exponentially with n [5]. In contrast, for streams with cardinality n ≥ n0, we provide a pure ε-DP guarantee using Algorithms 1b-1c. HLL also has the following (ε, δ) guarantee. Theorem 4.5. If n ≥ n0, then HLL satisfies an (ε, δ)-DP guarantee where δ ≤ k exp(−π0 n/k). HLL's estimator is only a function of π(s) for medium to large cardinalities, as it has the form N̂(s) = Ñ(π(s)) when Ñ(π(s)) > 5k/2. Thus, if π0 is sufficiently small so that Ñ(π0) > 5k/2, then Theorem 2.6 can still be applied, and HLL satisfies (ε, δ)-DP with δ = P(N̂(Sr(D)) < Ñ(π0)). 5 Empirical Evaluation We provide two experiments highlighting the practical benefits of our approach. Of past works, only [5, 22] are comparable, and both differ from our approach in significant ways. We empirically compare only to [22], since [5] is simply an analysis of HLL. Our improvement over [5] for HLL consists of providing significantly tighter privacy bounds in Section 4 and providing a fully ε-DP sketch in the secret hash setting. We denote our ε-DP version of HLL using Algorithm 1b by PHLL (private-HLL) and that of [22] by QLL.
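Before turning to the experiments, it may help to see how the privacy parameters from Section 4 translate into concrete numbers. The following small Python snippet evaluates π0 = 1 − e^{−ε}, the minimum cardinality n0 = kmax/π0 (with kmax = k for HLL), and the Theorem 4.5 bound δ ≤ k exp(−π0 n/k); the specific parameter values in the example are illustrative only and are not claimed to match the paper's experimental configuration.

```python
import math

def hll_privacy_params(eps, k, n):
    """Compute pi_0, the minimum cardinality n_0 (using k_max = k for HLL),
    and the Theorem 4.5 bound on delta for an unmodified HLL sketch."""
    pi0 = 1.0 - math.exp(-eps)
    n0 = k / pi0                         # Theorem 2.4: need n > k_max / (1 - e^{-eps})
    delta = k * math.exp(-pi0 * n / k)   # Theorem 4.5 bound, meaningful when n >= n0
    return pi0, n0, delta

# Example with hypothetical values: eps = ln 2 and k = 4096 bins, n = 2^20 items.
pi0, n0, delta = hll_privacy_params(math.log(2), k=4096, n=2**20)
print(f"pi0 = {pi0:.3f}, n0 = {n0:.0f}, delta <= {delta:.2e}")
```

For these values the bound gives π0 = 0.5, n0 = 8192, and a δ bound of roughly k·e^{−128}, illustrating how quickly the failure probability of the unmodified sketch decays once the stream cardinality is well above n0.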
Details of the experimental setup are in Appendix D. Experiment 1: Update Time (Figure 1a). We implemented regular, non-private HLL, our PHLL, and QLL, and recorded the time to populate every sketch over 2^10 updates with k ∈ {2^7, 2^8, . . . , 2^12} buckets. For HLL, these bucket sizes correspond to relative standard errors ranging from ≈ 9% down to ≈ 1.6%. Each marker represents the mean update time over all updates, and the curves are the evaluated mean update time over 10 trials. As expected from theory, the update time of [22] grows as O(k). In contrast, our method PHLL has a constant update time and is similar in magnitude to HLL. Both are roughly 500× faster than [22] when k = 2^12. Thus, Figure 1a shows that [22] is not a scalable solution and the speedup from achieving O(1) updates is substantial. Experiment 2: Space Comparison (Figure 1b). In addition to having a worse update time, we also show that QLL has lower utility in the sense that it requires more space than PHLL to achieve the same error. Fixing the input cardinality at n = 2^20 and the privacy budget at ε = ln(2), we vary the number of buckets k ∈ {2^7, 2^8, . . . , 2^12} and simulate the ε-DP methods, PHLL and QLL [22]. The number of buckets controls the error, and we found that both methods obtained very similar mean relative error for a given number of bins,4 so we plot the space usage against the expected relative error for a given number of buckets. For QLL, since the error guarantees tie the parameter γ to the number of buckets, we modify γ accordingly as well. We compare the sizes of each sketch as the error varies. Since the number of bits required for each bin depends on the range of values the bin can take, we record the simulated total sketch size := k · log2(max_i s_i), using the space required for the largest bin value over the k buckets. Although QLL achieves similar utility, it does so using a larger sketch: when k = 2^7, where we expect an error of roughly 9%, QLL is roughly 1.1× larger. This increases to about 1.6× larger than our PHLL sketch when k = 2^12, achieving an error of roughly 1.6%. We see that the average increase in space when using QLL compared to PHLL grows exponentially in the desired accuracy of the sketch; when lower relative error is necessary, we obtain a greater space improvement over QLL than at higher relative errors. This supports the behavior expected by comparing the space bounds of [22] with (P)HLL. 6 Conclusion We have studied the (differential) privacy of a class of cardinality estimation sketches that includes most popular algorithms. Two examples are the HLL and KMV (Bottom-k) sketches that have been deployed in large systems [14, 1]. We have shown that the sketches returned by these algorithms are ε-differentially private when run on streams of cardinality greater than n0 = kmax/(1 − e^{−ε}) and when combined with a simple downsampling procedure. Moreover, even without downsampling, these algorithms satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality n once n is larger than the threshold n0. Our results are more general and yield better privacy guarantees than prior work for small-space cardinality estimators that preserve differential privacy. Our empirical validations show that our approach is practical and scalable, being much faster than the previous state-of-the-art while consuming much less space. Acknowledgments and Disclosure of Funding We are grateful to Graham Cormode for valuable comments on an earlier version of this manuscript.
Justin Thaler was supported by NSF SPX award CCF-1918989 and NSF CAREER award CCF1845125. 4This is shown in Figure 3, Appendix D. 7 Paper Checklist 1. (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] (b) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] (c) Did you discuss any potential negative societal impacts of your work? See answer to next question. (d) Did you describe the limitations of your work? Our work shows that existing algorithms, or mild variants thereof, preserve privacy. Therefore, there should not be any negative societal impacts that are consequences of positive privacy results unless users/readers incorrectly apply the results to their systems. Any mathematical limitations from the theory are clearly outlined through the formal technical statements. 2. (a) Did you state the full set of assumptions of all theoretical results?[Yes] (b) Did you include complete proofs of all theoretical results?[Yes] 3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? A small repository containing the experimental scripts and figure plotting has been provided. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? For Figure 1a the standard deviations have been plotted in shaded regions but these are too small in magnitude to be seen on the scale of the plot, indicating that there is very little variation. For Figure 1b we have plotted the entire distribution over all trials. (d) Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] 4. (a) If your work uses existing assets, did you cite the creators? [N/A] (b) Did you mention the license of the assets? [N/A] (c) Did you include any new assets either in the supplemental material or as a URL? [No] (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A] (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] 5. (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What are the main contributions and strengths of the paper regarding hash-based sketching algorithms for cardinality? 2. What are the weaknesses or limitations of the proposed technique, particularly concerning its privacy guarantees and potential applicability to other settings? 3. How do the authors derive tighter privacy guarantees for specific streaming algorithms, and how effective are these approaches? 4. Can you provide examples or explanations to help illustrate the privacy benefits and utility costs of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a general technique for proving that hash-based, order-invariant sketching algorithms for cardinality (which encompasses most known sketching algorithms for the problem) satisfy DP. The DP parameters depend on an unconditional upper bound on the probability that a new element causes the state of the algorithm to change, as well as a condition that the number of unique elements in the stream be higher than some value. When the sketching algorithms do not satisfy the conditions, the authors propose a lightweight, blackbox algorithm that downsamples the stream and adds in "fake" unique elements, and show this has no utility drawback and will satisfy privacy. They derive tighter privacy guarantees for specific streaming algorithms and validate their privacy and utility results. Strengths And Weaknesses The results are general: they are able to make sketching algorithms for cardinality satisfy DP. The privacy overhead is small in terms of utility cost and computational cost. Their algorithm is even able to take non-private sketches which are costly to recompute and combine them with private sketches such that the entire computation will satisfy DP. One possible drawback is that the problem intuitively seems slightly "easy"; see the limitations section. Questions In the introduction, what are the parameters b, k? They should be explained here. Is the privacy analysis close to being optimal, or can a more fine-grained approach be made, such as not having an unconditional upper bound on the stream update chance π0 but rather a time-dependent one? Limitations Sketching algorithms by their very nature seem to store a tiny amount of data and heavily use randomness. Thus, they should be close to being "private" already. It would be helpful if the authors could provide a toy example where blatant non-privacy of the sketching algorithm is exposed. This also raises the issue that perhaps the privacy analysis and techniques only work in this particular sketching setting. It would be helpful if the authors could compare and contrast these methods to other privacy work in sketching in the related work section.
NIPS
Title Order-Invariant Cardinality Estimators Are Differentially Private Abstract We consider privacy in the context of streaming algorithms for cardinality estimation. We show that a large class of algorithms all satisfy -differential privacy, so long as (a) the algorithm is combined with a simple down-sampling procedure, and (b) the input stream cardinality is Ω(k/ ). Here, k is a certain parameter of the sketch that is always at most the sketch size in bits, but is typically much smaller. We also show that, even with no modification, algorithms in our class satisfy ( , δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our analysis applies to essentially all popular cardinality estimation algorithms, and substantially generalizes and tightens privacy bounds from earlier works. Our approach is faster and exhibits a better utility-space tradeoff than prior art. 1 Introduction Cardinality estimation, or the distinct counting problem, is a fundamental data analysis task. Typical applications are found in network traffic monitoring [9], query optimization [20], and counting unique search engine queries [14]. A key challenge is to perform this estimation in small space while processing each data item quickly. Typical approaches for solving this problem at scale involve data sketches such as the Flajolet-Martin (FM85) sketch [12], HyperLogLog (HLL) [11], Bottom-k [2, 6, 3]. All these provide approximate cardinality estimates but use bounded space. While research has historically focused on the accuracy, speed, and space usage of these sketches, recent work examines their privacy guarantees. These privacy-preserving properties have grown in importance as companies have built tools that can grant an appropriate level of privacy to different people and scenarios. The tools aid in satisfying users’ demand for better data stewardship, while also ensuring compliance with regulatory requirements. We show that all cardinality estimators in a class of hash-based, order-invariant sketches with bounded size are -differentially private (DP) so long as the algorithm is combined with a simple down-sampling procedure and the true cardinality satisfies a mild lower bound. This lower bound requirement can be guaranteed to hold by inserting sufficiently many “phantom elements” into the stream when initializing the sketch. We also show that, even with no modification, algorithms in our class satisfy ( , δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our novel analysis has significant benefits. First, prior works on differentially private cardinality estimation have analyzed only specific sketches [23, 25, 5, 22]. Moreover, many of the sketches analyzed (e.g., [23, 22]), while reminiscent of sketches used in practice, in fact differ from practical 36th Conference on Neural Information Processing Systems (NeurIPS 2022). sketches in important ways. For example, Smith et al. [22] analyze a variant of HLL that Section 4 shows has an update time that can be k times slower than an HLL sketch with k buckets. While our analysis covers an entire class of sketches at once, our error analysis improves upon prior work in many cases when specialized to specific sketches. For example, our analysis yields tighter privacy bounds for HLL than the one given in [5], yielding both an -DP guarantee, rather than an ( , δ)-DP guarantee, as well as tighter bounds on the failure probability δ—see Section 4 for details. 
Crucially, the class of sketches we analyze captures many (in fact, almost all to our knowledge) of the sketches that are actually used in practice. This means that existing systems can be used in contexts requiring privacy, either without modification if streams are guaranteed to satisfy the mild cardinality lower bound we require, or with a simple pre-processing step described if such cardinality lower bounds may not hold. Thus, existing data infrastructure can be easily modified to provide DP guarantees, and in fact existing sketches can be easily migrated to DP summaries. 1.1 Related work One perspective is that cardinality estimators cannot simultaneously preserve privacy and offer good utility [7]. However, this impossibility result applies only when an adversary However, this impossibility result applies only when an adversary can create and merge an arbitrary number of sketches, effectively observing an item’s value many times. It does not address the privacy of one sketch itself. Other works have studied more realistic models where either the hashes are public, but private noise is added to the sketch [23, 17, 25], or the hashes are secret [5] (i.e., not known to the adversary who is trying to “break” privacy). This latter setting turns out to permit less noisy cardinality estimates. Past works study specific sketches or a variant of a sketch. For example, Smith et al. [22] show that an HLL-type sketch is -DP while [25] modifies the FM85 sketch using coordinated sampling, which is also based on a private hash. Variants of both models are analyzed by Choi et al. [5], and they show (amongst other contributions) a similar result to [22], establishing that an FM85-type sketch is differentially private. Like these prior works, we focus on the setting when the hash functions are kept secret from the adversary. A related problem of differentially private estimation of cardinalities under set operations is studied by [18], but they assume the inputs to each sketch are already de-duplicated. There is one standard caveat: following prior works [22, 5] our privacy analysis assumes a perfectly random hash function. One can remove this assumption both in theory and practice by using a cryptographic hash function. This will yield a sketch that satisfies either a computational variant of differential privacy called SIM-CDP, or standard information-theoretic notions of differential privacy under the assumption that the hash function fools space-bounded computations [22, Section 2.3]. Other works also consider the privacy-preserving properties of common Lp functions over data streams. For p = 2, these include fast dimensionality reduction [4, 24] and least squares regression [21]. Meanwhile, for 0 < p ≤ 1, frequency-moment estimation has also been studied [26]. Our focus is solely the cardinality estimation problem when p = 0. 1.2 Preliminaries More formally, we consider the following problem. Problem Definition Let D = {x1, . . . , xn} denote a stream of samples with each identifier xi coming from a large universe U , e.g., of size 264. The objective is to estimate the cardinality, or number of distinct identifiers, of D using an algorithm S which is given privacy parameters , δ ≥ 0 and a space bound b, measured in bits. Definition 1.1 (Differential Privacy [8]). 
A randomized algorithm S is ( , δ)-differentially private (( , δ)-DP for short or if δ = 0, pure -DP) if for any pair of data sets D,D′ that differ in one record and for all S in the range of S, Pr(S(D′) ∈ S) ≤ e Pr(S(D) ∈ S) + δ with probability over the internal randomness of the algorithm S. Rather than analyzing any specific sketching algorithm, we analyze a natural class of randomized distinct counting sketches. Algorithms in this class operate in the following manner: each time a new stream item i arrives, i is hashed using some uniform random hash function h, and then h(i) is used to update the sketch, i.e., the update procedure depends only on h(i), and is otherwise independent of i. Our analysis applies to any such algorithm that depends only on the set of observed hash values. Equivalently, the sketch state is invariant both to the order in which stream items arrive, and to item duplication.2 We call this class of algorithms hash-based, order-invariant cardinality estimators. Note that for any hash-based, order-invariant cardinality estimator, the distribution of the sketch depends only on the cardinality of the stream. All distinct counting sketches of which we are aware that are invariant to permutations of the input data are included in this class. This includes FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL as shown in Section 4. Definition 1.2 (Hash-Based, Order-invariant Cardinality Estimators). Any sketching algorithm that depends only on the set of hash values of stream items using a uniform random hash function is a hash-based order-invariant cardinality estimator. We denote this class of algorithms by C. We denote a sketching algorithm with internal randomness r by Sr (for hash-based algorithms, r specifies the random hash function used). The algorithm takes a data set D and generates a data structure Sr(D) that is used to estimate the cardinality. We refer to this structure as the state of the sketch, or simply the sketch, and the values it can take by s ∈ Ω. Sketches are first initialized and then items are inserted into the sketch with an add operation that may or may not change the sketch state. The size of the sketch is a crucial constraint, and we denote the space consumption in bits by b. For example, FM85 consists of k bitmaps of length `. Thus, its state s ∈ Ω = {0, 1}k×`. Typically, ` = 32, so that b = 32k. Further examples are given in Section 4. Our goal is to prove such sketches are differentially private. 2 Hash-Based Order-Invariant Estimators are Private The distribution of any hash-based, order-invariant cardinality estimator depends only on the cardinality of the input stream, so without loss of generality we assume the input is D = {1, . . . , n}. Denote the set D\{i} by D−i for i ∈ D and a sketching algorithm with internal randomness r by Sr(D). By definition, for an -differential privacy guarantee, we must show that the Bayes factor comparing the hypothesis i ∈ D versus i /∈ D is appropriately bounded: e− < Prr(Sr(D) = s) Prr(Sr(D−i) = s) < e ∀s ∈ Ω, i ∈ D. (1) Overview of privacy results. The main result in our analysis bounds the privacy loss of a hash-based, order-invariant sketch in terms of just two sketch-specific quantities. Both quantities intuitively capture how sensitive the sketch is to the removal or insertion of a single item from the data stream. The first quantity is a bound kmax on the number of items that would change the sketch if removed from the stream. 
Denote the items whose removal from the data set changes the sketch by Kr := {i ∈ D : Sr(D−i) 6= Sr(D)}. (2) Denote its cardinality by Kr := |Kr| and the upper bound by kmax = suprKr. The second quantity is a bound on a "sampling" probability. Let π(s) be the probability that a newly inserted item would change a sketch in state s, π(s) := Pr r (Sr(D) 6= Sr(D−i) |Sr(D−i) = s). (3) Although a sketch generally does not store explicit samples, conceptually, it can be helpful to think of π(s) as the probability that an as-yet-unseen item i gets “sampled” by a sketch in state s. We upper bound π∗ := sups∈Ω π(s) to limit the influence of items added to the stream. The main sub-result in our analysis (Theorem 2.4) roughly states that the sketch is -DP so long as (a) the sampling probability π∗ < 1 − e− is small enough, and (b) the stream cardinality n > kmaxe −1 = Θ(kmax/ ) is large enough. We show Property (a) is a necessary condition for any -DP algorithm if the algorithm works over data universes of unbounded size. Unfortunately, Property (a) does not directly hold for natural 2A sketch is duplication-invariant if and only if its state when run on any stream σ is identical to its state when run on the stream σ′, in which all elements of the stream σ appear exactly once. sketching algorithms. But we show (Section 2.2) by applying a simple down-sampling procedure, any hash-based, order-invariant algorithm can be modified to satisfy (a). Furthermore, Section 4 shows common sketches satisfy Property (a) with high probability, thus providing ( , δ)-DP guarantees for sufficiently large cardinalities. Compared to [5], these guarantees are tighter, more precise, and more general as they establish the failure probability δ decays exponentially with n, provide explicit formulas for δ, and apply to a range of sketches rather than just HLL. Overview of the analysis. The definition of -DP requires bounding the Bayes factor in equation 1. The challenge is that the numerator and denominator may not be easy to compute by themselves. However, it is similar to the form of a conditional probability involving only one insertion. Our main trick re-expresses this Bayes factor as a sum of conditional probabilities involving a single insertion. Since the denominator Prr(Sr(D−i) = s) involves a specific item i which may change the sketch, we instead consider the smallest item Jr whose removal does not change the sketch. This allows us to re-express the numerator in terms of a conditional probability Prr(S(D) = s ∧ Jr = j) = Prr(Jr = j|S(D−j) = s) Prr(S(D−j) = s) involving only a single insertion plus a nuisance term Prr(S(D−j) = s). The symmetry of items gives that the nuisance term is equal to denominator Prr(S(D−j) = s) = Prr(S(D−i) = s), thus allowing us to eliminate it. Lemma 2.1. Suppose n > suprKr. Then Prr(Kr = n) = 0, and Prr(Sr(D) = s) Prr(Sr(D−i) = s) = ∑ j∈D Pr r (Jr = j |Sr(D−j) = s). (4) By further conditioning on the total number of items that, when removed, can change the sketch, we obtain conditional probabilities that are simple to calculate. A combinatorial argument simplifies the resulting expression and gives us two factors in Lemma 2.2, one involving the sampling probability for new items π(s) given a sketch in state s and the other being an expectation involving Kr. This identifies the two quantities that must be controlled in order for a sketch to be -DP. Lemma 2.2. Under the same assumptions as Lemma 2.1 ∑ j∈D Pr r (Jr = j |Sr(D−j) = s) = (1− π(s))Er ( 1 + Kr n−Kr + 1 ∣∣∣∣Sr(D−1) = s ) . 
To show that all hash-based, order-invariant sketching algorithms can be made ε-DP, we show that Kr can always be bounded by the maximum size of the sketch in bits. Thus, if a sketch is combined with a downsampling procedure to ensure π(s) is sufficiently small, one satisfies both of the properties that are sufficient for an ε-DP guarantee. Having established (5), we can derive a result showing that a hash-based, order-invariant sketch is ε-DP so long as the stream cardinality is large enough and sup_{s∈Ω} π(s) is not too close to 1.

Corollary 2.3. Let Ω denote the set of all possible states of a hash-based, order-invariant distinct counting sketching algorithm. When run on a stream of cardinality n > sup_r Kr, the sketch output by the algorithm satisfies ε-DP if

π0 := 1 − e^{−ε} > sup_{s∈Ω} π(s), and (6)

e^{ε} > 1 + E_r[Kr/(n − Kr + 1) | Sr(D−1) = s] for all sketch states s ∈ Ω. (7)

Furthermore, if the data stream D consists of items from a universe U of unbounded size, Condition (6) is necessarily satisfied by any sketching algorithm satisfying ε-DP.

The above corollary may be difficult to apply directly since the expectation in Condition (7) is often difficult to compute and depends on the unknown cardinality n. Our main result provides sufficient criteria to ensure that Condition (7) holds. The criteria are expressed in terms of a minimum cardinality n0 and a sketch-dependent constant kmax. This constant kmax is a bound on the maximum number of items which change the sketch when removed. That is, for all input streams D and all r, kmax ≥ |Kr|. We derive kmax for a number of popular sketch algorithms in Section 4.

Theorem 2.4. Consider any hash-based, order-invariant distinct counting sketch. The sketch output by the algorithm satisfies an ε-DP guarantee if

sup_{s∈Ω} π(s) < π0 := 1 − e^{−ε}, and (8)

there are strictly greater than n0 := kmax/(1 − e^{−ε}) unique items in the stream. (9)

Later, we explain how to modify existing sketching algorithms in a black-box way to satisfy these conditions. If left unmodified, most sketching algorithms used in practice allow for some sketch values s ∈ Ω which violate Condition (8), i.e., π(s) > 1 − e^{−ε}. We call such sketch values "privacy-violating". Fortunately, such values turn out to arise with only tiny probability. The next theorem states that, so long as this probability is smaller than δ, the sketch satisfies (ε, δ)-DP without modification. The proof of Theorem 2.5 follows immediately from Theorem 2.4.

Theorem 2.5. Let n0 be as in Theorem 2.4. Given a hash-based, order-invariant distinct counting sketch with bounded size, let Ω′ be the set of sketch states such that π(s) ≥ π0. If the input stream D has cardinality n > n0, then the sketch is (ε, δ)-differentially private where δ = Pr_r(Sr(D) ∈ Ω′).

2.1 Constructing Sketches Satisfying Approximate Differential Privacy: Algorithm 1a

Theorem 2.5 states that, when run on a stream with n ≥ n0 distinct items, any hash-based, order-invariant algorithm (see Algorithm 1a) automatically satisfies (ε, δ)-differential privacy, where δ denotes the probability that the final sketch state s is "privacy-violating", i.e., π(s) > π0 = 1 − e^{−ε}. In Section 4, we provide concrete bounds on δ for specific algorithms. In all cases considered, δ falls exponentially with respect to the cardinality n. Thus, high privacy is achieved with high probability so long as the stream is large. We now outline how to derive a bound for a specific sketch. We can prove the desired bound on δ by analyzing sketches in a manner similar to the coupon collector problem.
Assuming a perfect, random hash function, the hash values of a universe of items define a probability space. We can identify v ≤ kmax events or coupons, C1, . . . , Cv, such that π(s) is guaranteed to be less than π0 after all events have occurred. Thus, if all coupons are collected, the sketch satisfies the requirement to be ε-DP. As the cardinality n grows, the probability that a particular coupon remains missing decreases exponentially. A simple union bound shows that the probability δ that any coupon is missing decreases exponentially with n.

For more intuition as to why unmodified sketches satisfy an (ε, δ)-DP guarantee when the cardinality is large, we note that the inclusion probability π(s) is closely tied to the cardinality estimate in most sketching algorithms. For example, the cardinality estimators used in HLL and KMV are inversely proportional to the sampling probability π(s), i.e., N̂(s) ∝ 1/π(s), while for LPCA and Adaptive Sampling, the cardinality estimators are monotonically decreasing with respect to π(s). Thus, for most sketching algorithms, when run on a stream of sufficiently large cardinality, the resulting sketch is privacy-violating only when the cardinality estimate is also inaccurate. Theorem 2.6 is useful when analyzing the privacy of such algorithms, as it characterizes the probability δ of a "privacy violation" in terms of the probability that the returned estimate, N̂(Sr(D)), is lower than some threshold Ñ(π0).

Theorem 2.6. Let Sr be a sketching algorithm with estimator N̂(Sr). Suppose n ≥ n0 and the estimate returned on sketch s is a strictly decreasing function of π(s), so that N̂(s) = Ñ(π(s)) for a function Ñ. Then Sr is (ε, δ)-DP where δ = Pr_r(N̂(Sr(D)) < Ñ(π0)).

2.2 Constructing Sketches Satisfying Pure Differential Privacy: Algorithms 1b–1c

Theorem 2.4 guarantees an ε-DP sketch if (8) and (9) hold. Condition (8) requires that sup_{s∈Ω} π(s) < 1 − e^{−ε}, i.e., the "sampling probability" of the sketching algorithm is sufficiently small regardless of the sketch's state s. Meanwhile, (9) requires that the input cardinality is sufficiently large. We show that any hash-based, order-invariant distinct counting sketching algorithm can satisfy these two conditions by adding a simple pre-processing step which does two things. First, it "downsamples" the input stream by hashing each input, interpreting the hash values as numbers in [0, 1], and simply ignoring items whose hashes are larger than π0. The downsampling hash must be independent of that used by the sketching algorithm itself. This ensures that Condition (8) is satisfied, as each input item has maximum sampling probability π0.

BASE(items, ε)
  S ← InitSketch()
  for x ∈ items do
    S.add(x)
  return N̂(S)
(a) (ε, δ)-DP for n ≥ n0.

DPSKETCHLARGESET(items, ε)
  S ← InitSketch()
  π0 ← 1 − e^{−ε}
  for x ∈ items do
    if hash(x) < π0 then S.add(x)
  return N̂(S)/π0
(b) (ε, 0)-DP for n ≥ n0.

DPSKETCHANYSET(items, ε)
  S, n0 ← DPInitSketch(ε)
  π0 ← 1 − e^{−ε}
  for x ∈ items do
    if hash(x) < π0 then S.add(x)
  return N̂(S)/π0 − n0
(c) (ε, 0)-DP for n ≥ 1.

Algorithms 1: Differentially private cardinality estimation algorithms from black-box sketches. The function InitSketch() initializes a black-box sketch. The uniform random hash function hash(x) is chosen independently of any hash in the black-box sketch and is interpreted as a real in [0, 1]. The cardinality estimate returned by sketch S is denoted N̂(S). DPInitSketch is given in Algorithm 2a.
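The following Python rendering of Algorithms 1b and 1c may help readers wrap an existing implementation; it is our sketch rather than the paper's reference code. The black-box interface (init_sketch, S.add, estimate) and the way phantom items are materialized in dp_sketch_any_set are assumptions; in the paper that role is played by DPInitSketch (Algorithm 2a).

import hashlib, math

def downsample_hash(x, salt="downsample"):
    # Independent of any hash used inside the black-box sketch; value in [0, 1).
    return int(hashlib.sha256(f"{salt}:{x}".encode()).hexdigest(), 16) / 2**256

def dp_sketch_large_set(items, eps, init_sketch, estimate):
    # Algorithm 1b: (eps, 0)-DP when the stream has at least n0 distinct items.
    S = init_sketch()
    pi0 = 1.0 - math.exp(-eps)
    for x in items:
        if downsample_hash(x) < pi0:      # keep each item with probability pi0
            S.add(x)
    return estimate(S) / pi0              # rescale to undo the downsampling

def dp_sketch_any_set(items, eps, init_sketch, estimate, k_max):
    # Algorithm 1c: (eps, 0)-DP for any n >= 1. Phantom items, distinct from the
    # real universe, are fed through the same downsampling filter to guarantee
    # condition (9); the final estimate subtracts them back out.
    pi0 = 1.0 - math.exp(-eps)
    n0 = math.ceil(k_max / pi0)
    S = init_sketch()
    for j in range(n0):
        phantom = ("__phantom__", j)      # assumed disjoint from real identifiers
        if downsample_hash(phantom) < pi0:
            S.add(phantom)
    for x in items:
        if downsample_hash(x) < pi0:
            S.add(x)
    return estimate(S) / pi0 - n0         # post-processing correction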
If there is an a priori guarantee that the number of distinct items n is greater than n0 = kmax/(1 − e^{−ε}), then (9) is trivially satisfied. Pseudocode for the resulting ε-DP algorithm is given in Algorithm 1b. If there is no such guarantee, then the preprocessing step adds n0 items to the input stream to satisfy (9). To ensure unbiasedness, these n0 items must (i) be distinct from any items in the "real" stream, and (ii) be downsampled as per the first modification. An unbiased estimate of the cardinality of the unmodified stream can then be easily recovered from the sketch via a post-processing correction. Pseudocode for the modified algorithm, which is guaranteed to satisfy ε-DP, is given in Algorithm 1c.

Corollary 2.7. The functions DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c) yield ε-DP distinct counting sketches provided that n ≥ n0 and n ≥ 1, respectively.

2.3 Constructing ε-DP Sketches from Existing Sketches: Algorithm 3, Appendix A.1

As regulations change and new ones are added, existing data may need to be appropriately anonymized. However, if the data has already been sketched, the underlying data may no longer be available, and even if it is retained, it may be too costly to reprocess it all. Our theory allows these sketches to be directly converted into differentially private sketches when the sketch has a merge procedure. Using the merge procedure to achieve ε-differential privacy yields more useful estimates than the naive approach of simply adding Laplace noise to cardinality estimates in proportion to the global sensitivity.

The algorithm assumes it is possible to take a sketch Sr(D1) of a stream D1 and a sketch Sr(D2) of a stream D2, and "merge" them to get a sketch of the concatenation of the two streams D1 ◦ D2. This is the case for most practical hash-based, order-invariant distinct counting sketches. Denote the merge of sketches Sr(D1) and Sr(D2) by Sr(D1) ∪ Sr(D2). In this setting, we think of the existing non-private sketch Sr(D1) as being converted to a sketch that satisfies ε-DP by Algorithm 3 (see pseudocode in Appendix A.1). Since the sketch Sr(D1) is already constructed, items cannot be first downsampled in the build phase the way they are in Algorithms 1b–1c. To achieve ε-DP, Algorithm 3 constructs a noisily initialized sketch, Sr(D2), which satisfies both the downsampling condition (Condition (8)) and the minimum stream cardinality requirement (Condition (9)), and returns the merged sketch Sr(D1) ∪ Sr(D2). Hence, the sketch will satisfy both conditions for ε-DP, as shown in Corollary A.3.

This merge-based procedure typically adds no additional error to the estimates for large cardinalities. In contrast, the naive approach of adding Laplace noise can add significant noise since the sensitivity can be very large. For example, HLL's estimator is of the form N̂HLL(s) = α/π(s), where α is a constant and s is the sketch. One item can update a bin to the maximum value, so that the updated sketch s′ has sampling probability π(s′) < π(s)(1 − 1/k). The sensitivity of the cardinality estimate is thus at least N̂HLL(s)/k. Given that the cardinality estimate, and hence the sensitivity, can be arbitrarily large when n ≥ k, the naive approach is unworkable for achieving ε-DP.
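To spell out the sensitivity bound in the example above (this one-line expansion is ours, using only the two displayed facts):

N̂HLL(s′) − N̂HLL(s) = α/π(s′) − α/π(s) > α/(π(s)(1 − 1/k)) − α/π(s) = (α/π(s)) · 1/(k − 1) = N̂HLL(s)/(k − 1) ≥ N̂HLL(s)/k.

Because the global sensitivity must cover the worst case over all reachable sketch states, and N̂HLL(s) grows without bound with the stream cardinality, Laplace noise calibrated to the global sensitivity would have to have unbounded scale.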
3 The Utility of Private Sketches

When processing a data set with n unique items, denote the expectation and variance of a sketch and its estimator by E_n(N̂) and Var_n(N̂), respectively. We show that our algorithms all yield unbiased estimates. Furthermore, we show that for Algorithms 1a–1c, if the base sketch satisfies a relative error guarantee (defined below), the DP sketches add no additional error asymptotically.

Establishing unbiasedness. To analyze the expectation and variance of each algorithm's estimator, N̂(S(D)), note that each estimator uses a 'base estimate' N̂base from the base sketch S and has the form N̂(S(D)) = N̂base/p − V, where p is the downsampling probability and V is the number of artificial items added. This allows us to express expectations and variances via the variance of the base estimator.

Theorem 3.1. Consider a base sketching algorithm S ∈ C with an unbiased estimator N̂base for the cardinality of items added to the base sketch. Algorithms 1(a)–(c) and 3 yield unbiased estimators.

Bounding the variance. Theorem 3.1 yields a clean expression for the variance of our private algorithms. Namely, Var[N̂(Sr(D))] = E[Var(N̂base/p | V)], which is shown in Corollary B.1. The expression is a consequence of the law of total variance and the fact that the estimators are unbiased. We say that the base sketch satisfies a relative-error guarantee if, with high probability, the estimate returned by the sketching algorithm when run on a stream of cardinality n is (1 ± 1/√c)·n for some constant c > 0. Let N̂base,n denote the cardinality estimate when the base algorithm is run on a stream of cardinality n, as opposed to N̂base denoting the cardinality estimate produced by the base sketch on the sub-sampled stream used in our private sketches DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c). The relative error guarantee is satisfied when Var_n(N̂base,n) < n²/c; this is an immediate consequence of Chebyshev's inequality. When the number of artificially added items V is constant, as in Algorithms 1b and 1c, Corollary B.1 provides a precise expression for the variance of the differentially private sketch. In Theorem 3.2 below, we use this expression to establish that the modification of the base algorithm to an ε-DP sketch as per Algorithms 1b and 1c satisfies the exact same relative error guarantee asymptotically. In other words, the additional error due to any pre-processing (down-sampling and possibly adding artificial items) is insignificant for large cardinalities n.

Theorem 3.2. Suppose N̂base,n satisfies a relative error guarantee, Var_n(N̂base,n) < n²/c, for all n and for some constant c. Let v = 0 for Algorithm 1b and v = n0 for Algorithm 1c. Then Algorithms 1b and 1c satisfy

Var_n(N̂) ≤ (n + v)²/c + (n + v)(v + π0^{−1})/kmax = (n + v)²/c + O(n), (10)

so that Var_n(N̂)/Var_n(N̂base,n) → 1 as n → ∞.

In Corollary B.2 we prove an analogous result for Algorithm 3, which merges non-private and noisy sketches to produce a private sketch. Informally, the result is comparable to (10), albeit with v ≥ n0. This is because, in Algorithm 3, the number of artificial items added, V, is a random variable. We ensure that the algorithm satisfies a utility guarantee by bounding V with high probability. This is equivalent to showing that the base sketching algorithm satisfies an (ε, δ)-DP guarantee since, for any n* ≥ n0 and dataset D* with |D*| = n*, (ε, δ_{n*})-DP ensures δ_{n*} > Pr_r(π(Sr(D*)) > π0) = Pr_r(V > n*), which follows from the definition of V in Algorithm 2b.
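As a quick sanity check on Theorem 3.1 (our simulation, not from the paper), the snippet below uses an exact distinct counter as a stand-in for the black-box base sketch, applies the downsampling and phantom items of Algorithm 1c, and confirms that the corrected estimate N̂base/π0 − n0 averages to the true cardinality n.

import math, random

def private_estimate(n, eps, k_max, rng):
    pi0 = 1.0 - math.exp(-eps)
    n0 = math.ceil(k_max / pi0)
    # Stand-in base sketch: an exact distinct counter over the downsampled items.
    kept_real = sum(rng.random() < pi0 for _ in range(n))       # real items kept
    kept_phantom = sum(rng.random() < pi0 for _ in range(n0))   # phantom items kept
    n_hat_base = kept_real + kept_phantom
    return n_hat_base / pi0 - n0       # N_hat = N_hat_base / p - V with p = pi0, V = n0

rng = random.Random(1)
n, eps, k_max, trials = 5_000, math.log(2), 64, 2_000
avg = sum(private_estimate(n, eps, k_max, rng) for _ in range(trials)) / trials
print(f"true n = {n}; mean private estimate over {trials} trials = {avg:.1f}")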
4 Examples of Hash-Based, Order-Invariant Cardinality Estimators

We now provide (ε, δ)-DP results for a select group of sketches: FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL. The (ε, δ)-DP results in this section operate in the Algorithm 1a setting, with no modification to the base sketching algorithm. Recall that the quantities of interest are the number of bins used in the sketch k, the size of the sketch in bits b, and the number of items whose absence changes the sketch kmax. From Section 2 and Lemma A.1 we know that kmax ≤ b, but for several common sketches we show a stronger bound of kmax = k. The relationship between these parameters for various sketching algorithms is summarized in Table 1. Table 2, Appendix C, details our improvements over [22, 5] in both privacy and utility. We remind the reader that, per (6), π0 = 1 − e^{−ε}, and per (9), n0 = kmax/(1 − e^{−ε}). Furthermore, recall that once we bound the parameter kmax for any given hash-based, order-invariant sketching algorithm, Corollary 2.7 states that the derived Algorithms 1b–1c satisfy ε-DP provided that n ≥ n0 and n ≥ 1, respectively. Accordingly, in the rest of this section, we bound kmax for each example sketch of interest, which has the consequences for pure ε-differential privacy delineated above.

Flajolet-Martin '85. The FM85 sketch, often called Probabilistic Counting with Stochastic Averaging (PCSA), consists of k bitmaps Bi of length ℓ. Each item is hashed into a bitmap and index (Bi, Gi) and sets the indexed bit in the bitmap to 1. The chosen bitmap is uniform amongst the k bitmaps and the index Gi ∼ Geometric(1/2). If ℓ is the length of each bitmap, then the total number of bits used by the sketch is b = kℓ and kmax = kℓ for all seeds r. A typical value for ℓ is 32 bits, as used in Table 1. Past work [25] proposed an ε-DP version of FM85 using a similar subsampling idea combined with random bit flips.

Theorem 4.1. Let v = ⌈−log₂ π0⌉ and π̃0 := 2^{−v} ∈ (π0/2, π0]. If n ≥ n0, then the FM85 sketch is (ε, δ)-DP with δ ≤ kv · exp(−π̃0 · n/k).

For any k, FM85 has kmax ∈ {32k, 64k}. This is worse than all other sketches we study, which have kmax = k, so FM85 needs a larger minimum number of items n0 to ensure the sketch is (ε, δ)-DP.

LPCA. The Linear Probabilistic Counting Algorithm (LPCA) consists of a length-k bitmap. Each item is hashed to an index and sets its bit to 1. If B is the number of 1 bits, the LPCA cardinality estimate is N̂LPCA = −k log(1 − B/k) = −k log π(Sr(D)). Trivially, kmax = k. Since all bits are expected to be 1 after processing roughly k log k distinct items, the capacity of the sketch is bounded. To estimate larger cardinalities, one first downsamples the distinct items with some sampling probability p. To ensure the sketch satisfies an ε-DP guarantee, one simply ensures p ≤ π0. In this case, our analysis shows that LPCA is differentially private with no modifications if the cardinality is sufficiently large. Otherwise, since the estimator N̂(s) is a function of the sampling probability π(s), Theorem 2.6 provides an (ε, δ) guarantee in terms of N̂.

Theorem 4.2. Consider an LPCA sketch with k bits and downsampling probability p. If p < π0 and n > k/(1 − e^{−ε}), then LPCA is ε-DP. Otherwise, let b0 = ⌈k(1 − π0/p)⌉, π̃0 = b0/k, and let µ0 be the expected number of items inserted to fill b0 bits in the sketch. Then, LPCA is (ε, δ)-DP if n > µ0, with

δ = Pr_r(B < b0) < (µ0/n) · exp(−(π̃0/µ0) · n) / exp(−π̃0), (11)

where B is the number of filled bits in the sketch. Furthermore, µ0 < Ñ(π̃0), where Ñ(π̃) = −(k/p) log(1 − π̃) is the cardinality estimate of the sketch when the sampling probability is π̃.
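A minimal LPCA implementation along the lines described above (our sketch; the class, method names, and SHA-256 stand-in hash are ours) makes the privacy-relevant pieces explicit: the downsampling rate p, the estimator −(k/p)·log(1 − B/k), and the state's sampling probability π(s) = p·(1 − B/k), which stays below π0 whenever p ≤ π0.

import hashlib, math

class LPCA:
    def __init__(self, k, p=1.0, salt="r0"):
        self.k, self.p, self.salt = k, p, salt
        self.bits = [0] * k

    def _u(self, x, tag):
        # Uniform hash in [0, 1); the downsampling hash (tag "ds") is independent
        # of the bit-position hash (tag "bit").
        h = hashlib.sha256(f"{self.salt}:{tag}:{x}".encode()).hexdigest()
        return int(h, 16) / 2**256

    def add(self, x):
        if self._u(x, "ds") < self.p:                     # downsample at rate p
            self.bits[int(self._u(x, "bit") * self.k)] = 1

    def estimate(self):
        B = sum(self.bits)                                # number of set bits
        return -(self.k / self.p) * math.log(1 - B / self.k)

    def sampling_prob(self):
        # pi(s): a new distinct item changes the state iff it survives
        # downsampling and lands on a still-zero bit.
        return self.p * (1 - sum(self.bits) / self.k)

eps = math.log(2)
pi0 = 1 - math.exp(-eps)
sketch = LPCA(k=1024, p=min(0.25, pi0))   # p <= pi0, so condition (8) always holds
for x in range(10_000):
    sketch.add(f"user-{x}")
print(round(sketch.estimate()), sketch.sampling_prob() < pi0)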
Bottom-k (also known as MinCount or KMV). Bottom-k sketches store the k smallest hash values. Removing an item changes the sketch if and only if (1) the item's hash value is one of these k and (2) it does not collide with another item's hash value. Thus, kmax = k. Typically, the output size of the hash function is large enough to ensure that the collision probability is negligible, so for practical purposes kmax = k exactly. Since the Bottom-k estimator N̂(s) = (k − 1)/π(s) is a function of the update probability π(s), Theorem 2.6 gives an (ε, δ)-DP guarantee in terms of the cardinality estimate by coupon collecting; Theorem 4.3 tightens this bound on δ for a stronger (ε, δ)-DP guarantee.

[Footnote 3: This approximation holds for n < k. A better approximation of the error is √(k(exp(n/k) − n/k − 1)).]

Theorem 4.3. Consider Bottom-k with k minimum values. Given ε > 0, let π0, n0 be the corresponding subsampling probability and minimum cardinality that ensure the modified Bottom-k sketch is (ε, 0)-DP. When run on streams of cardinality n ≥ n0, the unmodified sketch is (ε, δ)-DP, where δ = P(X ≤ k) < exp(−n·αn), where X ∼ Binomial(n, π0) and αn = (1/2) · (π0 − k/n)² / (π0(1 − π0) + 1/(3n²)) → (1/2) · π0/(1 − π0) as n → ∞.

The closely related Adaptive Sampling sketch has the same privacy behavior as a Bottom-k sketch. Rather than storing exactly k hashes, the algorithm maintains a threshold p and stores up to k hash values beneath p. Once the sketch size exceeds k, the threshold is halved and only hashes less than p/2 are kept. Since at most k hashes are stored, and the sketch is modified only if one of these hashes is removed, the maximum number of items that can modify the sketch by removal is kmax = k.

Corollary 4.4. For any size k and cardinality n, if a Bottom-k sketch is (ε, δ)-DP, then a maximum-size-k Adaptive Sampling sketch is (ε, δ)-DP with the same ε and δ.

HyperLogLog (HLL). HLL hashes each item to a bin and value (Bi, Gi). Within each bin, it takes the maximum value, so each bin is a form of Bottom-1 sketch. If there are k bins, then kmax = k. Our results uniformly improve upon existing DP results on the HLL sketch and its variants. One variation of the HLL sketch achieves ε-DP but is far slower than HLL, as it requires every item to be independently hashed once for each of the k bins, rather than just one time [22]. In other words, [22] needs O(k) update time compared to O(1) for our algorithms. Another provides an (ε, δ) guarantee for streams of cardinality n ≥ n′0, for an n′0 that is larger than our n0 by a factor of roughly (at least) 8, with δ falling exponentially with n [5]. In contrast, for streams with cardinality n ≥ n0, we provide a pure ε-DP guarantee using Algorithms 1b–1c. HLL also has the following (ε, δ) guarantee.

Theorem 4.5. If n ≥ n0, then HLL satisfies an (ε, δ)-DP guarantee where δ ≤ k · exp(−π0 · n/k).

HLL's estimator is only a function of π(s) for medium to large cardinalities, as it has the form N̂(s) = Ñ(π(s)) when Ñ(π(s)) > 5k/2. Thus, if π0 is sufficiently small so that Ñ(π0) > 5k/2, then Theorem 2.6 can still be applied, and HLL satisfies (ε, δ)-DP with δ = P(N̂(Sr(D)) < Ñ(π0)).
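To see what these bounds mean at realistic scales, the small helper below (ours, for convenience) evaluates π0, n0, and the Theorem 4.5 bound on δ for HLL. Note that Theorem 4.5 requires n ≥ n0, and the bound is only informative once it drops below 1.

import math

def hll_privacy_params(eps, k, n):
    pi0 = 1.0 - math.exp(-eps)           # downsampling rate required by condition (8)
    n0 = k / pi0                          # minimum cardinality from (9), using k_max = k
    delta = k * math.exp(-pi0 * n / k)    # Theorem 4.5 bound for the unmodified sketch
    return pi0, n0, delta

for n in (100_000, 1_000_000, 10_000_000):
    pi0, n0, delta = hll_privacy_params(eps=math.log(2), k=4096, n=n)
    print(f"n = {n:>10,}  pi0 = {pi0:.2f}  n0 = {n0:,.0f}  delta <= {delta:.3g}")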
5 Empirical Evaluation

We provide two experiments highlighting the practical benefits of our approach. Of past works, only [5, 22] are comparable, and both differ from our approach in significant ways. We empirically compare only to [22] since [5] is simply an analysis of HLL. Our improvement over [5] for HLL consists of providing significantly tighter privacy bounds in Section 4 and providing a fully ε-DP sketch in the secret-hash setting. We denote our ε-DP version of HLL using Algorithm 1b by PHLL (private-HLL) and that of [22] by QLL. Details of the experimental setup are in Appendix D.

Experiment 1: Update Time (Figure 1a). We implemented regular, non-private HLL, our PHLL, and QLL, and recorded the time to populate every sketch over 2^10 updates with k ∈ {2^7, 2^8, . . . , 2^12} buckets. For HLL, these numbers of buckets correspond to relative standard errors ranging from ≈ 9% down to ≈ 1.6%. Each marker represents the mean update time over all updates, and the curves are the evaluated mean update time over 10 trials. As expected from theory, the update time of [22] grows as O(k). In contrast, our method PHLL has a constant update time and is similar in magnitude to HLL. Both are roughly 500× faster than [22] when k = 2^12. Thus, Figure 1a shows that [22] is not a scalable solution and the speedup from achieving O(1) updates is substantial.

Experiment 2: Space Comparison (Figure 1b). In addition to having a worse update time, we also show that QLL has lower utility in the sense that it requires more space than PHLL to achieve the same error. Fixing the input cardinality at n = 2^20 and the privacy budget at ε = ln(2), we vary the number of buckets k ∈ {2^7, 2^8, . . . , 2^12} and simulate the ε-DP methods, PHLL and QLL [22]. The number of buckets controls the error, and we found that both methods obtained very similar mean relative error for a given number of bins⁴, so we plot the space usage against the expected relative error for a given number of buckets. For QLL, since the error guarantees tie the parameter γ to the number of buckets, we modify γ accordingly as well. We compare the sizes of each sketch as the error varies. Since the number of bits required for each bin depends on the range of values the bin can take, we record the simulated total sketch size := k · log₂(max_i s_i), using the space required for the largest bin value over the k buckets. Although QLL achieves similar utility, it does so using a larger sketch: when k = 2^7, where we expect an error of roughly 9%, QLL is roughly 1.1× larger. This increases to about 1.6× larger than our PHLL sketch when k = 2^12, which achieves an error of roughly 1.6%. We see that the average increase in space when using QLL compared to PHLL grows exponentially in the desired accuracy of the sketch; when lower relative error is necessary, we obtain a greater space improvement over QLL than at higher relative errors. This supports the behavior expected from comparing the space bounds of [22] with those of (P)HLL.

6 Conclusion

We have studied the (differential) privacy of a class of cardinality estimation sketches that includes most popular algorithms. Two examples are the HLL and KMV (Bottom-k) sketches that have been deployed in large systems [14, 1]. We have shown that the sketches returned by these algorithms are ε-differentially private when run on streams of cardinality greater than n0 = kmax/(1 − e^{−ε}) and when combined with a simple downsampling procedure. Moreover, even without downsampling, these algorithms satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality n once n is larger than the threshold n0. Our results are more general and yield better privacy guarantees than prior work for small-space cardinality estimators that preserve differential privacy. Our empirical validations show that our approach is practical and scalable, being much faster than the previous state-of-the-art while consuming much less space.

Acknowledgments and Disclosure of Funding

We are grateful to Graham Cormode for valuable comments on an earlier version of this manuscript.
Justin Thaler was supported by NSF SPX award CCF-1918989 and NSF CAREER award CCF1845125. 4This is shown in Figure 3, Appendix D. 7 Paper Checklist 1. (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] (b) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] (c) Did you discuss any potential negative societal impacts of your work? See answer to next question. (d) Did you describe the limitations of your work? Our work shows that existing algorithms, or mild variants thereof, preserve privacy. Therefore, there should not be any negative societal impacts that are consequences of positive privacy results unless users/readers incorrectly apply the results to their systems. Any mathematical limitations from the theory are clearly outlined through the formal technical statements. 2. (a) Did you state the full set of assumptions of all theoretical results?[Yes] (b) Did you include complete proofs of all theoretical results?[Yes] 3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? A small repository containing the experimental scripts and figure plotting has been provided. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? For Figure 1a the standard deviations have been plotted in shaded regions but these are too small in magnitude to be seen on the scale of the plot, indicating that there is very little variation. For Figure 1b we have plotted the entire distribution over all trials. (d) Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] 4. (a) If your work uses existing assets, did you cite the creators? [N/A] (b) Did you mention the license of the assets? [N/A] (c) Did you include any new assets either in the supplemental material or as a URL? [No] (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A] (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] 5. (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
Summary Of The Paper: The paper studies the general problem of cardinality estimation in insertion-only streams with differential privacy. The authors present a novel and general analysis framework that establishes the privacy of a vast set of algorithms. The framework shows that any algorithm that is order-invariant, based on a purely random hash, limited in space, and satisfying a certain sensitivity property is DP. They show that many standard algorithms respect this property with a simple analysis. This results in improved approximation/privacy tradeoffs. More precisely, they show that existing algorithms (not designed for privacy) are private on large enough streams, or can be made private with a simple downsampling of the input. This is very useful in practice as it allows reuse of the original implementations. The analysis framework seems easy to use and general. The authors also experiment with the derived algorithms. The results hold in a slightly relaxed setting for DP that involves cryptographic assumptions, but this framework seems very reasonable.

Strengths And Weaknesses:
- General framework for the analysis of algorithms.
- Results hold for well-known algorithms with no or limited modifications to their implementations.
- Slightly less strong setting for DP (requires cryptographic assumptions on secrecy of hash functions and indistinguishability from a random distribution).
- Works in the single-release model, not in the continual-release model, which is more applicable in stream settings.

Questions:
- Would it be possible to clarify in a table what novel (or improved) tradeoffs between accuracy and privacy the method allows for the algorithms presented (vs. the best known result)?
- Do you think the work can be adapted to the continual release model?
- Can you add a short statement that the algorithm gets a better bound than simply running the non-private algorithm and then perturbing the output?

Limitations: No negative impact on society.
NIPS
Title Order-Invariant Cardinality Estimators Are Differentially Private Abstract We consider privacy in the context of streaming algorithms for cardinality estimation. We show that a large class of algorithms all satisfy -differential privacy, so long as (a) the algorithm is combined with a simple down-sampling procedure, and (b) the input stream cardinality is Ω(k/ ). Here, k is a certain parameter of the sketch that is always at most the sketch size in bits, but is typically much smaller. We also show that, even with no modification, algorithms in our class satisfy ( , δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our analysis applies to essentially all popular cardinality estimation algorithms, and substantially generalizes and tightens privacy bounds from earlier works. Our approach is faster and exhibits a better utility-space tradeoff than prior art. 1 Introduction Cardinality estimation, or the distinct counting problem, is a fundamental data analysis task. Typical applications are found in network traffic monitoring [9], query optimization [20], and counting unique search engine queries [14]. A key challenge is to perform this estimation in small space while processing each data item quickly. Typical approaches for solving this problem at scale involve data sketches such as the Flajolet-Martin (FM85) sketch [12], HyperLogLog (HLL) [11], Bottom-k [2, 6, 3]. All these provide approximate cardinality estimates but use bounded space. While research has historically focused on the accuracy, speed, and space usage of these sketches, recent work examines their privacy guarantees. These privacy-preserving properties have grown in importance as companies have built tools that can grant an appropriate level of privacy to different people and scenarios. The tools aid in satisfying users’ demand for better data stewardship, while also ensuring compliance with regulatory requirements. We show that all cardinality estimators in a class of hash-based, order-invariant sketches with bounded size are -differentially private (DP) so long as the algorithm is combined with a simple down-sampling procedure and the true cardinality satisfies a mild lower bound. This lower bound requirement can be guaranteed to hold by inserting sufficiently many “phantom elements” into the stream when initializing the sketch. We also show that, even with no modification, algorithms in our class satisfy ( , δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our novel analysis has significant benefits. First, prior works on differentially private cardinality estimation have analyzed only specific sketches [23, 25, 5, 22]. Moreover, many of the sketches analyzed (e.g., [23, 22]), while reminiscent of sketches used in practice, in fact differ from practical 36th Conference on Neural Information Processing Systems (NeurIPS 2022). sketches in important ways. For example, Smith et al. [22] analyze a variant of HLL that Section 4 shows has an update time that can be k times slower than an HLL sketch with k buckets. While our analysis covers an entire class of sketches at once, our error analysis improves upon prior work in many cases when specialized to specific sketches. For example, our analysis yields tighter privacy bounds for HLL than the one given in [5], yielding both an -DP guarantee, rather than an ( , δ)-DP guarantee, as well as tighter bounds on the failure probability δ—see Section 4 for details. 
Crucially, the class of sketches we analyze captures many (in fact, almost all to our knowledge) of the sketches that are actually used in practice. This means that existing systems can be used in contexts requiring privacy, either without modification if streams are guaranteed to satisfy the mild cardinality lower bound we require, or with a simple pre-processing step described if such cardinality lower bounds may not hold. Thus, existing data infrastructure can be easily modified to provide DP guarantees, and in fact existing sketches can be easily migrated to DP summaries. 1.1 Related work One perspective is that cardinality estimators cannot simultaneously preserve privacy and offer good utility [7]. However, this impossibility result applies only when an adversary However, this impossibility result applies only when an adversary can create and merge an arbitrary number of sketches, effectively observing an item’s value many times. It does not address the privacy of one sketch itself. Other works have studied more realistic models where either the hashes are public, but private noise is added to the sketch [23, 17, 25], or the hashes are secret [5] (i.e., not known to the adversary who is trying to “break” privacy). This latter setting turns out to permit less noisy cardinality estimates. Past works study specific sketches or a variant of a sketch. For example, Smith et al. [22] show that an HLL-type sketch is -DP while [25] modifies the FM85 sketch using coordinated sampling, which is also based on a private hash. Variants of both models are analyzed by Choi et al. [5], and they show (amongst other contributions) a similar result to [22], establishing that an FM85-type sketch is differentially private. Like these prior works, we focus on the setting when the hash functions are kept secret from the adversary. A related problem of differentially private estimation of cardinalities under set operations is studied by [18], but they assume the inputs to each sketch are already de-duplicated. There is one standard caveat: following prior works [22, 5] our privacy analysis assumes a perfectly random hash function. One can remove this assumption both in theory and practice by using a cryptographic hash function. This will yield a sketch that satisfies either a computational variant of differential privacy called SIM-CDP, or standard information-theoretic notions of differential privacy under the assumption that the hash function fools space-bounded computations [22, Section 2.3]. Other works also consider the privacy-preserving properties of common Lp functions over data streams. For p = 2, these include fast dimensionality reduction [4, 24] and least squares regression [21]. Meanwhile, for 0 < p ≤ 1, frequency-moment estimation has also been studied [26]. Our focus is solely the cardinality estimation problem when p = 0. 1.2 Preliminaries More formally, we consider the following problem. Problem Definition Let D = {x1, . . . , xn} denote a stream of samples with each identifier xi coming from a large universe U , e.g., of size 264. The objective is to estimate the cardinality, or number of distinct identifiers, of D using an algorithm S which is given privacy parameters , δ ≥ 0 and a space bound b, measured in bits. Definition 1.1 (Differential Privacy [8]). 
A randomized algorithm S is ( , δ)-differentially private (( , δ)-DP for short or if δ = 0, pure -DP) if for any pair of data sets D,D′ that differ in one record and for all S in the range of S, Pr(S(D′) ∈ S) ≤ e Pr(S(D) ∈ S) + δ with probability over the internal randomness of the algorithm S. Rather than analyzing any specific sketching algorithm, we analyze a natural class of randomized distinct counting sketches. Algorithms in this class operate in the following manner: each time a new stream item i arrives, i is hashed using some uniform random hash function h, and then h(i) is used to update the sketch, i.e., the update procedure depends only on h(i), and is otherwise independent of i. Our analysis applies to any such algorithm that depends only on the set of observed hash values. Equivalently, the sketch state is invariant both to the order in which stream items arrive, and to item duplication.2 We call this class of algorithms hash-based, order-invariant cardinality estimators. Note that for any hash-based, order-invariant cardinality estimator, the distribution of the sketch depends only on the cardinality of the stream. All distinct counting sketches of which we are aware that are invariant to permutations of the input data are included in this class. This includes FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL as shown in Section 4. Definition 1.2 (Hash-Based, Order-invariant Cardinality Estimators). Any sketching algorithm that depends only on the set of hash values of stream items using a uniform random hash function is a hash-based order-invariant cardinality estimator. We denote this class of algorithms by C. We denote a sketching algorithm with internal randomness r by Sr (for hash-based algorithms, r specifies the random hash function used). The algorithm takes a data set D and generates a data structure Sr(D) that is used to estimate the cardinality. We refer to this structure as the state of the sketch, or simply the sketch, and the values it can take by s ∈ Ω. Sketches are first initialized and then items are inserted into the sketch with an add operation that may or may not change the sketch state. The size of the sketch is a crucial constraint, and we denote the space consumption in bits by b. For example, FM85 consists of k bitmaps of length `. Thus, its state s ∈ Ω = {0, 1}k×`. Typically, ` = 32, so that b = 32k. Further examples are given in Section 4. Our goal is to prove such sketches are differentially private. 2 Hash-Based Order-Invariant Estimators are Private The distribution of any hash-based, order-invariant cardinality estimator depends only on the cardinality of the input stream, so without loss of generality we assume the input is D = {1, . . . , n}. Denote the set D\{i} by D−i for i ∈ D and a sketching algorithm with internal randomness r by Sr(D). By definition, for an -differential privacy guarantee, we must show that the Bayes factor comparing the hypothesis i ∈ D versus i /∈ D is appropriately bounded: e− < Prr(Sr(D) = s) Prr(Sr(D−i) = s) < e ∀s ∈ Ω, i ∈ D. (1) Overview of privacy results. The main result in our analysis bounds the privacy loss of a hash-based, order-invariant sketch in terms of just two sketch-specific quantities. Both quantities intuitively capture how sensitive the sketch is to the removal or insertion of a single item from the data stream. The first quantity is a bound kmax on the number of items that would change the sketch if removed from the stream. 
Denote the items whose removal from the data set changes the sketch by Kr := {i ∈ D : Sr(D−i) 6= Sr(D)}. (2) Denote its cardinality by Kr := |Kr| and the upper bound by kmax = suprKr. The second quantity is a bound on a "sampling" probability. Let π(s) be the probability that a newly inserted item would change a sketch in state s, π(s) := Pr r (Sr(D) 6= Sr(D−i) |Sr(D−i) = s). (3) Although a sketch generally does not store explicit samples, conceptually, it can be helpful to think of π(s) as the probability that an as-yet-unseen item i gets “sampled” by a sketch in state s. We upper bound π∗ := sups∈Ω π(s) to limit the influence of items added to the stream. The main sub-result in our analysis (Theorem 2.4) roughly states that the sketch is -DP so long as (a) the sampling probability π∗ < 1 − e− is small enough, and (b) the stream cardinality n > kmaxe −1 = Θ(kmax/ ) is large enough. We show Property (a) is a necessary condition for any -DP algorithm if the algorithm works over data universes of unbounded size. Unfortunately, Property (a) does not directly hold for natural 2A sketch is duplication-invariant if and only if its state when run on any stream σ is identical to its state when run on the stream σ′, in which all elements of the stream σ appear exactly once. sketching algorithms. But we show (Section 2.2) by applying a simple down-sampling procedure, any hash-based, order-invariant algorithm can be modified to satisfy (a). Furthermore, Section 4 shows common sketches satisfy Property (a) with high probability, thus providing ( , δ)-DP guarantees for sufficiently large cardinalities. Compared to [5], these guarantees are tighter, more precise, and more general as they establish the failure probability δ decays exponentially with n, provide explicit formulas for δ, and apply to a range of sketches rather than just HLL. Overview of the analysis. The definition of -DP requires bounding the Bayes factor in equation 1. The challenge is that the numerator and denominator may not be easy to compute by themselves. However, it is similar to the form of a conditional probability involving only one insertion. Our main trick re-expresses this Bayes factor as a sum of conditional probabilities involving a single insertion. Since the denominator Prr(Sr(D−i) = s) involves a specific item i which may change the sketch, we instead consider the smallest item Jr whose removal does not change the sketch. This allows us to re-express the numerator in terms of a conditional probability Prr(S(D) = s ∧ Jr = j) = Prr(Jr = j|S(D−j) = s) Prr(S(D−j) = s) involving only a single insertion plus a nuisance term Prr(S(D−j) = s). The symmetry of items gives that the nuisance term is equal to denominator Prr(S(D−j) = s) = Prr(S(D−i) = s), thus allowing us to eliminate it. Lemma 2.1. Suppose n > suprKr. Then Prr(Kr = n) = 0, and Prr(Sr(D) = s) Prr(Sr(D−i) = s) = ∑ j∈D Pr r (Jr = j |Sr(D−j) = s). (4) By further conditioning on the total number of items that, when removed, can change the sketch, we obtain conditional probabilities that are simple to calculate. A combinatorial argument simplifies the resulting expression and gives us two factors in Lemma 2.2, one involving the sampling probability for new items π(s) given a sketch in state s and the other being an expectation involving Kr. This identifies the two quantities that must be controlled in order for a sketch to be -DP. Lemma 2.2. Under the same assumptions as Lemma 2.1 ∑ j∈D Pr r (Jr = j |Sr(D−j) = s) = (1− π(s))Er ( 1 + Kr n−Kr + 1 ∣∣∣∣Sr(D−1) = s ) . 
(5) To show that all hash-based, order invariant sketching algorithms can be made -DP, we show that Kr can always be bounded by the maximum size of the sketch in bits. Thus, if a sketch is combined with a downsampling procedure to ensure π(s) is sufficiently small, one satisfies both of the properties that are sufficient for an -DP guarantee. Having established (5), we can derive a result showing that a hash-based, order-invariant sketch is -DP so long as the stream cardinality is large enough and sups∈Ω π(s) is not too close to 1. Corollary 2.3. Let Ω denote the set of all possible states of a hash-based order-invariant distinct counting sketching algorithm. When run on a stream of cardinality n > suprKr, the sketch output by the algorithm satisfies -DP if π0 := 1− e− > sup s∈Ω π(s) and (6) e > 1 + Er ( Kr n−Kr + 1 ∣∣∣∣Sr(D−1) = s ) for all sketch states s ∈ Ω. (7) Furthermore, if the data stream D consists of items from a universe U of unbounded size, Condition 6 is necessarily satisfied by any sketching algorithm satisfying -DP. The above corollary may be difficult to apply directly since the expectation in Condition (7) is often difficult to compute and depends on the unknown cardinality n. Our main result provides sufficient criteria to ensure that Condition (7) holds. The criteria is expressed in terms of a minimum cardinality n0 and sketch-dependent constant kmax. This constant kmax is a bound on the maximum number of items which change the sketch when removed. That is, for all input streamsD and all r, kmax ≥ |Kr|. We derive kmax for a number of popular sketch algorithms in Section 4. Theorem 2.4. Consider any hash-based, order-invariant distinct counting sketch. The sketch output by the algorithm satisfies an -DP guarantee if sup s∈Ω π(s) < π0 := 1− e− and there are strictly greater than (8) n0 := kmax/(1− e− ) unique items in the stream. (9) Later, we explain how to modify existing sketching algorithms in a black-box way to satisfy these conditions. If left unmodified, most sketching algorithms used in practice allow for some sketch values s ∈ Ω which violate Condition 8, i.e π(s) > 1− e− . We call such sketch values “privacy-violating”. Fortunately, such values turn out to arise with only tiny probability. The next theorem states that, so long as this probability is smaller than δ, the sketch satisfies ( , δ)-DP without modification. The proof of Theorem 2.5 follows immediately from Theorem 2.4. Theorem 2.5. Let n0 be as in Theorem 2.4. Given a hash-based, order-invariant distinct counting sketch with bounded size, let Ω′ be the set of sketch states such that π(s) ≥ π0. If the input stream D has cardinality n > n0, then the sketch is ( , δ) differentially private where δ = Prr(Sr(D) ∈ Ω′). 2.1 Constructing Sketches Satisfying Approximate Differential Privacy: Algorithm 1a Theorem 2.5 states that, when run on a stream with n ≥ n0 distinct items, any hash-based orderinvariant algorithm (see Algorithm 1a) automatically satisfies ( , δ)-differential privacy where δ denotes the probability that the final sketch state s is “privacy-violating”, i.e., π(s) > π0 = 1− e− . In Section 4, we provide concrete bounds of δ for specific algorithms. In all cases considered, δ falls exponentially with respect to the cardinality n. Thus, high privacy is achieved with high probability so long as the stream is large. We now outline how to derive a bound for a specific sketch. We can prove the desired bound on δ by analyzing sketches in a manner similar to the coupon collector problem. 
Assuming a perfect, random hash function, the hash values of a universe of items defines a probability space. We can identify v ≤ kmax events or coupons, C1, . . . , Cv, such that π(s) is guaranteed to be less than π0 after all events have occurred. Thus, if all coupons are collected, the sketch satisfies the requirement to be -DP. As the cardinality n grows, the probability that a particular coupons remains missing decreases exponentially. A simple union bound shows that the probability δ that any coupon is missing decreases exponentially with n. For more intuition as to why unmodified sketches satisfy an ( , δ)-DP guarantee when the cardinality is large, we note that the inclusion probability π(s) is closely tied to the cardinality estimate in most sketching algorithms. For example, the cardinality estimators used in HLL and KMV are inversely proportional to the sampling probability π(s), i.e., N̂(s) ∝ 1/π(s), while for LPCA and Adaptive Sampling, the cardinality estimators are monotonically decreasing with respect to π(s). Thus, for most sketching algorithms, when run on a stream of sufficiently large cardinality, the resulting sketch is privacy-violating only when the cardinality estimate is also inaccurate. Theorem 2.6 is useful when analyzing the privacy of such algorithms, as it characterizes the probability δ of a “privacy violation” in terms of the probability the returned estimate, N̂(Sr(D)), is lower than some threshold Ñ(π0). Theorem 2.6. Let Sr be a sketching algorithm with estimator N̂(Sr). If n ≥ n0 and the estimate returned on sketch s is a strictly decreasing function of π(s), so that N̂(s) = Ñ(π(s)) for a function Ñ . Then, Sr is ( , δ)-DP where δ = Prr(N̂(Sr(D)) < Ñ(π0)). 2.2 Constructing Sketches Satisfying Pure Differential Privacy: Algorithm 1b - 1c Theorem 2.4 guarantees an -DP sketch if (8), (9) hold. Condition (8) requires that sups∈Ω π(s) < 1− e− , i.e., the “sampling probability” of the sketching algorithm is sufficiently small regardless of the sketch’s state s. Meanwhile, (9) requires that the input cardinality is sufficiently large. We show that any hash-based, order-invariant distinct counting sketching algorithm can satisfy these two conditions by adding a simple pre-processing step which does two things. First, it “downsamples” the input stream by hashing each input, interpreting the hash values as numbers in [0, 1], and simply ignoring numbers whose hashes are larger than π0. The downsampling hash must be independent to that used by the sketching algorithm itself. This ensures that Condition (8) is satisfied, as each input item has maximum sampling probability π0. BASE(items, ) S ← InitSketch() for x ∈ items do S.add(x) return N̂(S) (a) ( , δ)-DP for n ≥ n0. DPSKETCHLARGESET(items, ) S ← InitSketch() π0 ← 1− e− for x ∈ items do if hash(x) < π0 then S.add(x) return N̂(S)/π0 (b) ( , 0)-DP for n ≥ n0. DPSKETCHANYSET(items, ) S, n0 ← DPInitSketch( ) π0 ← 1− e− for x ∈ items do if hash(x) < π0 then S.add(x) return N̂(S)/π0 − n0 (c) ( , 0)-DP for n ≥ 1. Algorithms 1: Differentially private cardinality estimation algorithms from black box sketches. The function InitSketch() initializes a black-box sketch. The uniform random hash function hash(x) is chosen independently of any hash in the black-box sketch and is interpreted as a real in [0, 1]. The cardinality estimate returned by sketch S is denoted N̂(S). DPInitSketch is given in Algorithm 2a. 
If there is an a priori guarantee that the number of distinct items n is greater than n0 = kmax1−e− , then (9) is trivially satisfied. Pseudocode for the resulting -DP algorithm is given in Algorithm 1b. If there is no such guarantee, then the preprocessing step adds n0 items to the input stream to satisfy (9). To ensure unbiasedness, these n0 items must (i) be distinct from any items in the “real” stream, and (ii) be downsampled as per the first modification. An unbiased estimate of the cardinality of the unmodified stream can then be easily recovered from the sketch via a post-processing correction. Pseudocode for the modified algorithm, which is guaranteed to satisfy -DP, is given in Algorithm 1c. Corollary 2.7. The functions DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c) yield -DP distinct counting sketches provided that n ≥ n0 and n ≥ 1, respectively. 2.3 Constructing -DP Sketches from Existing Sketches: Algorithm 3, Appendix A.1 As regulations change and new ones are added, existing data may need to be appropriately anonymized. However, if the data has already been sketched, the underlying data may no longer be available, and even if it is retained, it may be too costly to reprocess it all. Our theory allows these sketches to be directly converted into differentially private sketches when the sketch has a merge procedure. Using the merge procedure to achieve -differential privacy yields more useful estimates than the naive approach of simply adding Laplace noise to cardinality estimates in proportion to the global sensitivity. The algorithm assumes it is possible to take a sketch Sr(D1) of a stream D1 and a sketch Sr(D2) of a stream D2, and “merge” them to get a sketch of the concatenation of the two streams D1 ◦ D2. This is the case for most practical hash-based order-invariant distinct count sketches. Denote the merge of sketches Sr(D1) and Sr(D2) by Sr(D1) ∪ Sr(D2). In this setting, we think of the existing nonprivate sketch Sr(D1) being converted to a sketch that satisfies -DP by Algorithm 3 (see pseudocode in Appendix A.1). Since sketch Sr(D1) is already constructed, items cannot be first downsampled in the build phase the way they are in Algorithms 1b-1c. To achieve -DP, Algorithm 3 constructs a noisily initialized sketch, Sr(D2), which satisfies both the downsampling condition (Condition (8)) and the minimum stream cardinality requirement (Condition (9)) and returns the merged sketch Sr(D1)∪ Sr(D2). Hence, the sketch will satisfy both conditions for -DP, as shown in Corollary A.3 This merge based procedure typically adds no additional error to the estimates for large cardinalities. In contrast, the naive approach of adding Laplace noise can add significant noise since the sensitivity can be very large. For example, HLL’s estimator is of the form N̂HLL(s) = α/π(s) where α is a constant and s is the sketch. One item can update a bin to the maximum value, so that the updated sketch s′ has sampling probability π(s′) < π(s)(1− 1/k). The sensitivity of cardinality estimate is thus at least N̂HLL(s)/k. Given that the cardinality estimate, and hence sensitivity, can be arbitrarily large when n ≥ k, the naive approach is unworkable to achieve -DP. 3 The Utility of Private Sketches When processing a data set with n unique items, denote the expectation and variance of a sketch and its estimator by En(N̂) and Varn(N̂) respectively. We show that our algorithms all yield unbiased estimates. 
Furthermore, we show that for Algorithms 1a-1c, if the base sketch satisfies a relative error guarantee (defined below), the DP sketches add no additional error asymptotically. Establishing unbiasedness. To analyze the expectation and variance of each algorithm’s estimator, N̂(S(D)), note that each estimator uses a ‘base estimate’ N̂base from the base sketch S and has the form N̂(S(D)) = N̂basep − V ; p is the downsampling probability and V is the number of artificial items added. This allows us to express expectations and variance via the variance of the base estimator. Theorem 3.1. Consider a base sketching algorithm S ∈ C with an unbiased estimator N̂base for the cardinality of items added to the base sketch. Algorithms 1 (a)-(c) and 3 yield unbiased estimators. Bounding the variance. Theorem 3.1 yields a clean expression for the variance of our private algorithms. Namely, Var[N̂(Sr(D))] = E[Var( N̂basep |V )] which is shown in Corollary B.1. The expression is a consequence of the law of total variance and that the estimators are unbiased. We say that the base sketch satisfies a relative-error guarantee if with high probability, the estimate returned by the sketching algorithm when run on a stream of cardinality n is (1 ± 1/√c)n for some constant c > 0. Let N̂base,n denote the cardinality estimate when the base algorithm is run on a stream of cardinality n, as opposed to N̂base denoting the cardinality estimate produced by the base sketch on the sub-sampled stream used in our private sketches DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c). The relative error guarantee is satisfied when Varn(N̂base,n) < n 2/c; this is an immediate consequence of Chebyshev’s inequality. When the number of artificially added items V is constant as in Algorithms 1b and 1c, Corollary B.1 provides a precise expression for the variance of the differentially private sketch. In Theorem 3.2 below, we use this expression to establish that the modification of the base algorithm to an -DP sketch as per Algorithms 1b and 1c satisfy the exact same relative error guarantee asymptotically. In other words, the additional error due to any pre-processing (down-sampling and possibly adding artificial items) is insignificant for large cardinalities n. Theorem 3.2. Suppose N̂base,n satisfies a relative error guarantee, Varn(N̂base,n) < n2/c, for all n and for some constant c. Let v = 0 for Algorithm 1b and v = n0 for Algorithm 1c. Then Algorithms 1b and 1c satisfy Varn(N̂) ≤ (n+ v)2 c + (n+ v)(v + π−10 ) kmax = (n+ v)2 c +O(n), (10) so that Varn(N̂)/Varn(N̂base,n)→ 1 as n→∞. In Corollary B.2 we prove an analagous result for Algorithm 3, which merges non-private and noisy sketches to produce a private sketch. Informally, the result is comparable to (10), albeit with v ≥ n0. This is because, in Algorithm 3, the number of artificial items added V is a random variable. We ensure that the algorithm satisfies a utility guarantee by bounding V with high probability. This is equivalent to showing that the base sketching algorithm satisfies an ( , δ)-DP guarantee as for any n∗ ≥ n0 and dataset D∗ with |D∗| = n∗, ( , δn∗)-DP ensures δn∗ > Prr(π(Sr(D∗)) > π0) = Prr(V > n∗) which follows from the definition of V in Algorithm 2b. 4 Examples of Hash-based, Order-Invariant Cardinality Estimators We now provide ( , δ)-DP results for a select group of samples: FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL. 
4 Examples of Hash-based, Order-Invariant Cardinality Estimators
We now provide (ε, δ)-DP results for a selection of sketches: FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL. The (ε, δ)-DP results in this section operate in the Algorithm 1a setting, with no modification to the base sketching algorithm. Recall that the quantities of interest are the number of bins used in the sketch k, the size of the sketch in bits b, and the number of items whose absence changes the sketch kmax. From Section 2 and Lemma A.1 we know that kmax ≤ b, but for several common sketches we show a stronger bound of kmax = k. The relationship between these parameters for various sketching algorithms is summarized in Table 1. Table 2, Appendix C, details our improvements over [22, 5] in both privacy and utility.
We remind the reader that, per (6), π0 = 1 − e^{−ε}, and, per (9), n0 = kmax/(1 − e^{−ε}). Furthermore, recall that once we bound the parameter kmax for any given hash-based order-invariant sketching algorithm, Corollary 2.7 states that the derived Algorithms 1b-1c satisfy ε-DP provided that n ≥ n0 and n ≥ 1, respectively. Accordingly, in the rest of this section, we bound kmax for each example sketch of interest, with the consequences for pure ε-differential privacy delineated above.
Flajolet-Martin '85. The FM85 sketch, often called Probabilistic Counting with Stochastic Averaging (PCSA), consists of k bitmaps Bi of length ℓ. Each item is hashed into a bitmap and index (Bi, Gi) and sets the indexed bit in the bitmap to 1. The chosen bitmap is uniform amongst the k bitmaps and the index Gi ∼ Geometric(1/2). If ℓ is the length of each bitmap, then the total number of bits used by the sketch is b = kℓ and kmax = kℓ for all seeds r. A typical value for ℓ is 32 bits, as used in Table 1. Past work [25] proposed an ε-DP version of FM85 using a similar subsampling idea combined with random bit flips.
Theorem 4.1. Let v = ⌈− log2 π0⌉ and π̃0 := 2^{−v} ∈ (π0/2, π0]. If n ≥ n0, then the FM85 sketch is (ε, δ)-DP with δ ≤ kv·exp(−π̃0·n/k).
For any k, FM85 has kmax ∈ {32k, 64k}. This is worse than all other sketches we study, which have kmax = k, so FM85 needs a larger minimum number of items n0 to ensure the sketch is (ε, δ)-DP.
LPCA. The Linear Probabilistic Counting Algorithm (LPCA) consists of a length-k bitmap. Each item is hashed to an index and sets its bit to 1. If B is the number of 1 bits, the LPCA cardinality estimate is N̂LPCA = −k log(1 − B/k) = −k log π(Sr(D)). Trivially, kmax = k. Since all bits are expected to be 1 after processing roughly k log k distinct items, the capacity of the sketch is bounded. To estimate larger cardinalities, one first downsamples the distinct items with some sampling probability p. To ensure the sketch satisfies an ε-DP guarantee, one simply ensures p ≤ π0. In this case, our analysis shows that LPCA is differentially private with no modifications if the cardinality is sufficiently large. Otherwise, since the estimator N̂(s) is a function of the sampling probability π(s), Theorem 2.6 provides an (ε, δ) guarantee in terms of N̂.
Theorem 4.2. Consider an LPCA sketch with k bits and downsampling probability p. If p < π0 and n > k/(1 − e^{−ε}), then LPCA is ε-DP. Otherwise, let b0 = ⌈k(1 − π0/p)⌉, π̃0 = b0/k, and let µ0 be the expected number of items inserted to fill b0 bits in the sketch. Then, LPCA is (ε, δ)-DP if n > µ0, with
δ = Prr(B < b0) < (µ0/n)·exp(−π̃0·µ0/n)·exp(−π̃0),   (11)
where B is the number of filled bits in the sketch. Furthermore, µ0 < Ñ(π̃0), where Ñ(π̃) = −(k/p)·log(1 − π̃) is the cardinality estimate of the sketch when the sampling probability is π̃.
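For concreteness, here is a minimal LPCA implementation matching the description above (illustrative only; the hash function and parameter names are placeholders). With downsampling probability p the estimate is rescaled by 1/p, and choosing p ≤ π0 = 1 − e^{−ε} is what the ε-DP argument for LPCA relies on.

import math
import hashlib

class LPCA:
    def __init__(self, k, p=1.0, seed="lpca-seed"):
        self.k, self.p, self.seed = k, p, seed
        self.bits = [0] * k                          # length-k bitmap
    def _hash01(self, item, salt):
        h = hashlib.sha256((self.seed + salt + str(item)).encode()).digest()
        return int.from_bytes(h[:8], "big") / 2**64
    def add(self, item):
        if self._hash01(item, "sample") < self.p:    # hash-based downsampling
            idx = int(self._hash01(item, "bucket") * self.k)
            self.bits[idx] = 1                       # each sampled item sets one bit
    def estimate(self):
        b = sum(self.bits)                           # number of set bits B
        if b == self.k:
            return float("inf")                      # sketch saturated: capacity exceeded
        return -self.k * math.log(1.0 - b / self.k) / self.p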
Bottom-k (also known as MinCount or KMV) sketches store the k smallest hash values. Removing an item changes the sketch if and only if 1) the item's hash value is one of these k and 2) it does not collide with another item's hash value. Thus, kmax = k. Typically, the output size of the hash function is large enough to ensure that the collision probability is negligible, so for practical purposes kmax = k exactly. Since the Bottom-k estimator N̂(s) = (k − 1)/π(s) is a function of the update probability π(s), Theorem 2.6 gives an (ε, δ)-DP guarantee in terms of the cardinality estimate by coupon collecting; Theorem 4.3 tightens this bound on δ for a stronger (ε, δ)-DP guarantee.
Theorem 4.3. Consider Bottom-k with k minimum values. Given ε > 0, let π0, n0 be the corresponding subsampling probability and minimum cardinality that ensure the modified Bottom-k sketch is (ε, 0)-DP. When run on streams of cardinality n ≥ n0, the unmodified sketch is (ε, δ)-DP, where δ = P(X ≤ k) < exp(−n·αn), with X ∼ Binomial(n, π0) and αn = (π0 − k/n)²/(2(π0(1 − π0) + 1/(3n²))) → π0/(2(1 − π0)) as n → ∞.
The closely related Adaptive Sampling sketch has the same privacy behavior as a Bottom-k sketch. Rather than storing exactly k hashes, the algorithm maintains a threshold p and stores up to k hash values beneath p. Once the sketch size exceeds k, the threshold is halved and only hashes less than p/2 are kept. Since at most k hashes are stored, and the sketch is modified only if one of these hashes is removed, the maximum number of items that can modify the sketch by removal is kmax = k.
Corollary 4.4. For any size k and cardinality n, if a Bottom-k sketch is (ε, δ)-DP, then a maximum-size-k adaptive sampling sketch is (ε, δ)-DP with the same ε and δ.
HyperLogLog (HLL) hashes each item to a bin and value (Bi, Gi). Within each bin, it takes the maximum value, so each bin is a form of Bottom-1 sketch. If there are k bins, then kmax = k. Our results uniformly improve upon existing DP results on the HLL sketch and its variants. One variation of the HLL sketch achieves ε-DP but is far slower than HLL, as it requires every item to be independently hashed once for each of the k bins, rather than just one time [22]. In other words, [22] needs O(k) update time compared to O(1) for our algorithms. Another provides an (ε, δ) guarantee for streams of cardinality n ≥ n′0, for an n′0 that is larger than our n0 by a factor of roughly (at least) 8, with δ falling exponentially with n [5]. In contrast, for streams with cardinality n ≥ n0, we provide a pure ε-DP guarantee using Algorithms 1b-1c. HLL also has the following (ε, δ) guarantee.
Theorem 4.5. If n ≥ n0, then HLL satisfies an (ε, δ)-DP guarantee where δ ≤ k·exp(−π0·n/k).
HLL's estimator is only a function of π(s) for medium to large cardinalities, as it has the form N̂(s) = Ñ(π(s)) when Ñ(π(s)) > 5k/2. Thus, if π0 is sufficiently small so that Ñ(π0) > 5k/2, then Theorem 2.6 can still be applied, and HLL satisfies (ε, δ)-DP with δ = P(N̂(Sr(D)) < Ñ(π0)).
³This approximation holds for n < k. A better approximation of the error is √(k(exp(n/k) − n/k − 1)).
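To make the Bottom-k discussion concrete before moving to the experiments, here is a minimal KMV implementation (illustrative only; the hash and parameter names are placeholders). The k-th smallest normalized hash plays the role of the update probability π(s), the estimator is (k − 1)/π(s) as quoted above, and only the k stored hashes can change the sketch when an item is removed, which is the kmax = k property.

import hashlib
import heapq

class BottomK:
    def __init__(self, k, seed="kmv-seed"):
        self.k, self.seed = k, seed
        self.neg_heap = []            # max-heap of the k smallest hashes, stored negated
        self.stored = set()           # the hash values currently kept
    def _hash01(self, item):
        h = hashlib.sha256((self.seed + str(item)).encode()).digest()
        return int.from_bytes(h[:8], "big") / 2**64
    def add(self, item):
        x = self._hash01(item)
        if x in self.stored:
            return                                   # duplicate item: sketch unchanged
        if len(self.neg_heap) < self.k:
            heapq.heappush(self.neg_heap, -x)
            self.stored.add(x)
        elif x < -self.neg_heap[0]:                  # smaller than the current k-th smallest hash
            evicted = -heapq.heapreplace(self.neg_heap, -x)
            self.stored.discard(evicted)
            self.stored.add(x)
    def estimate(self):
        if len(self.neg_heap) < self.k:
            return float(len(self.neg_heap))         # fewer than k distinct items: exact count
        return (self.k - 1) / (-self.neg_heap[0])    # (k - 1) / pi(s)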
5 Empirical Evaluation
We provide two experiments highlighting the practical benefits of our approach. Of past works, only [5, 22] are comparable, and both differ from our approach in significant ways. We empirically compare only to [22], since [5] is simply an analysis of HLL. Our improvement over [5] for HLL consists of providing significantly tighter privacy bounds in Section 4 and providing a fully ε-DP sketch in the secret hash setting. We denote our ε-DP version of HLL using Algorithm 1b by PHLL (private-HLL) and that of [22] by QLL. Details of the experimental setup are in Appendix D.
Experiment 1: Update Time (Figure 1a). We implemented regular, non-private HLL, our PHLL, and QLL, and recorded the time to populate every sketch over 2^10 updates with k ∈ {2^7, 2^8, ..., 2^12} buckets. For HLL, these bucket counts correspond to relative standard errors ranging from ≈ 9% down to ≈ 1.6%. Each marker represents the mean update time over all updates, and the curves are the evaluated mean update time over 10 trials. As expected from theory, the update time of [22] grows as O(k). In contrast, our method PHLL has a constant update time and is similar in magnitude to HLL. Both are roughly 500× faster than [22] when k = 2^12. Thus, Figure 1a shows that [22] is not a scalable solution, and the speedup from achieving O(1) updates is substantial.
Experiment 2: Space Comparison (Figure 1b). In addition to having a worse update time, QLL also has lower utility in the sense that it requires more space than PHLL to achieve the same error. Fixing the input cardinality at n = 2^20 and the privacy budget at ε = ln(2), we vary the number of buckets k ∈ {2^7, 2^8, ..., 2^12} and simulate the ε-DP methods, PHLL and QLL [22]. The number of buckets controls the error, and we found that both methods obtained very similar mean relative error for a given number of bins,⁴ so we plot the space usage against the expected relative error for a given number of buckets. For QLL, since the error guarantees tie the parameter γ to the number of buckets, we modify γ accordingly as well. We compare the sizes of each sketch as the error varies. Since the number of bits required for each bin depends on the range of values the bin can take, we record the simulated total sketch size := k · log2(maxi si), i.e., the space required for the largest bin value over the k buckets. Although QLL achieves similar utility, it does so using a sketch that is larger: when k = 2^7, where we expect an error of roughly 9%, QLL is roughly 1.1× larger. This increases to about 1.6× larger than our PHLL sketch when k = 2^12, which achieves an error of roughly 1.6%. We see that the average increase in space when using QLL compared to PHLL grows exponentially in the desired accuracy of the sketch; when lower relative error is necessary, we obtain a greater space improvement over QLL than at higher relative errors. This supports the behavior expected from comparing the space bounds of [22] with those of (P)HLL.
6 Conclusion
We have studied the (differential) privacy of a class of cardinality estimation sketches that includes most popular algorithms. Two examples are the HLL and KMV (bottom-k) sketches that have been deployed in large systems [14, 1]. We have shown that the sketches returned by these algorithms are ε-differentially private when run on streams of cardinality greater than n0 = kmax/(1 − e^{−ε}) and when combined with a simple downsampling procedure. Moreover, even without downsampling, these algorithms satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality n once n is larger than the threshold n0. Our results are more general and yield better privacy guarantees than prior work on small-space cardinality estimators that preserve differential privacy. Our empirical validations show that our approach is practical and scalable, being much faster than the previous state of the art while consuming much less space.
Acknowledgments and Disclosure of Funding
We are grateful to Graham Cormode for valuable comments on an earlier version of this manuscript.
Justin Thaler was supported by NSF SPX award CCF-1918989 and NSF CAREER award CCF-1845125.
⁴This is shown in Figure 3, Appendix D.
7 Paper Checklist
1. (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? See answer to next question.
(d) Did you describe the limitations of your work? Our work shows that existing algorithms, or mild variants thereof, preserve privacy. Therefore, there should not be any negative societal impacts that are consequences of positive privacy results unless users/readers incorrectly apply the results to their systems. Any mathematical limitations from the theory are clearly outlined through the formal technical statements.
2. (a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes]
3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? A small repository containing the experimental scripts and figure plotting has been provided.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? For Figure 1a the standard deviations have been plotted in shaded regions, but these are too small in magnitude to be seen on the scale of the plot, indicating that there is very little variation. For Figure 1b we have plotted the entire distribution over all trials.
(d) Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
4. (a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper regarding differential privacy? 2. What are the strengths and weaknesses of the proposed algorithm compared to prior works? 3. How does the reviewer assess the accuracy and trade-offs of the proposed method? 4. What are the suggestions for improving the paper's content and relevance? 5. Are there any concerns or limitations regarding the paper's claims and methods?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors prove that nearly all hash-based, order-invariant cardinality estimators are differentially private once combined with a down-sampling procedure. This class includes several well-celebrated cardinality streaming algorithms such as Flajolet-Martin, HyperLogLog, and Bottom-k.
Strengths And Weaknesses
I really enjoyed reading the paper. The text is well-organized and the authors did a good job explaining the tedious math proofs in an intuitive and easy-to-follow way. Overall I think the paper makes a solid contribution and passes the bar of acceptance. Concrete strengths and weaknesses are listed below, and I am willing to further improve the score if the authors can address most of the weaknesses in their rebuttal.
Strengths: The paper considers a class of cardinality estimators and proves DP for them as long as they satisfy several conditions, compared to prior works that only consider one specific sketching algorithm. The paper evaluates the update time and space ratio, which exhibits the superiority of the proposed algorithm.
Weaknesses: I am a little bit worried about the accuracy of the proposed sketching with down-sampling. In Section 5, only update time and space ratio are evaluated. However, it is unclear how QLL and PHLL compare in terms of accuracy. If QLL is much better in accuracy, then a more thorough discussion of the different trade-offs between QLL and PHLL is needed. Subsampling is also known to provide privacy amplification for DP guarantees. In the proposed algorithm, down-sampling is used to eliminate the δ term. Does it also contribute to privacy amplification? The related work section ignores some recent progress on DP streaming algorithms. These works consider a broader class of streaming algorithms in which cardinality estimation is the special case of order 0. Specifically, [1]-[3] consider another special case of order 2, and [4] considers real orders from 0 (exclusive) to 1.
[1] "The Johnson-Lindenstrauss transform itself preserves differential privacy." Jeremiah Blocki, Avrim Blum, Anupam Datta, and Or Sheffet. FOCS 2012.
[2] "Randomness Efficient Fast-Johnson-Lindenstrauss Transform with Applications in Differential Privacy and Compressed Sensing." Jalaj Upadhyay. CoRR, abs/1410.2470, 2014.
[3] "Differentially private ordinary least squares." Or Sheffet. ICML 2017.
[4] "Differentially Private Fractional Frequency Moments Estimation with Polylogarithmic Space." Lun Wang, Iosif Pinelis, and Dawn Song. ICLR 2022.
The title needs a change. Maybe "Nearly all hash-based order-invariant cardinality estimators are differentially private"? The current one seems to be an overclaim. Line 135: The sentence "By further conditioning on the total number of items K_r that can be removed without changing the sketch" seems misleading. K_r is the total number of items that change the sketch once removed, if I understand correctly according to Equation (2)?
Questions
Please address the below questions in the rebuttal. Give an empirical comparison of accuracy between QLL and PHLL. Discuss the reason for the different trade-offs between space, update time, and accuracy. Change the title to avoid the overclaim. Discuss the relevant works mentioned above in the Related Work section. Fix line 135. (Optional) Discuss whether the down-sampling procedure can be used for privacy amplification.
Limitations
The authors should exhibit accuracy evaluation results and discuss the limitations if it is worse than QLL.
NIPS
Title An Improved Analysis of Stochastic Gradient Descent with Momentum
Abstract
SGD with momentum (SGDM) has been widely applied in many machine learning tasks, and it is often applied with dynamic stepsizes and momentum weights tuned in a stagewise manner. Despite its empirical advantage over SGD, the role of momentum is still unclear in general, since previous analyses of SGDM either provide worse convergence bounds than those of SGD, or assume Lipschitz or quadratic objectives, which fail to hold in practice. Furthermore, the role of dynamic parameters has not been addressed. In this work, we show that SGDM converges as fast as SGD for smooth objectives under both strongly convex and nonconvex settings. We also prove that the multistage strategy is beneficial for SGDM compared to using fixed parameters. Finally, we verify these theoretical claims by numerical experiments.
1 Introduction
Stochastic gradient methods have been a widespread practice in machine learning. They aim to minimize the following empirical risk:
min_{x∈R^d} f(x) := (1/n) ∑_{i=1}^n ℓ(x, qi),   (1)
where ℓ is a loss function, {qi}_{i=1}^n denotes the training data, and x denotes the trainable parameters of the machine learning model, e.g., the weight matrices in a neural network. In general, stochastic gradient methods can be written as
mk = βmk−1 + (1 − β)g̃k,  xk+1 = xk − αmk,   (2)
where α > 0 is a stepsize, β ∈ [0, 1) is called the momentum weight, and m0 = 0. The classical Stochastic Gradient Descent (SGD) method [21] uses β = 0 and mk = g̃k, where g̃k is a stochastic gradient of f(x) at xk. To boost the practical performance, one often applies a momentum weight of β > 0, and the resulting algorithm is often called SGD with momentum (SGDM). SGDM is very popular for training neural networks with remarkable empirical successes, and has been implemented as the default SGD optimizer in Pytorch [19] and Tensorflow [1].¹
The idea behind SGDM originates from Polyak's heavy-ball method [20] for deterministic optimization. For strongly convex and smooth objectives, the heavy-ball method enjoys an accelerated linear convergence rate over gradient descent [7]. However, the theoretical understanding of its stochastic counterpart is far from being complete. In the case of fixed stepsize and momentum weight, most of the current results only apply to restrictive settings. In [15, 16] and [12], the behavior of SGDM on least squares regression is analyzed and linear convergence is established. [9] analyzes the local convergence rate of SGDM for strongly convex and smooth functions, where the initial point x0 is assumed to be close enough to the minimizer x∗. [25] provides global convergence of SGDM, but only for objectives with uniformly bounded gradients, thus excluding many machine learning models such as Ridge regression. Very recently, [26] presents a convergence bound of O(1/(kα) + α/(1 − β)) for general smooth nonconvex objectives.³ When β = 0, this recovers the classical convergence bound of O(1/(kα) + α) of SGD [4]. However, the size of the stationary distribution, O(α/(1 − β)), is 1/(1 − β) times larger than that of SGD. This factor is not negligible, especially when large β values such as 0.99 and 0.995 are applied [24].
¹Their implementation of SGDM does not have the (1 − β) before g̃k, which gives mk = ∑_{i=1}^k β^{k−i} g̃i, while mk = (1 − β)∑_{i=1}^k β^{k−i} g̃i for (2). Therefore, they only differ by a constant scaling.
Therefore, their result does not explain the competitiveness of SGDM compared to SGD. Concurrently with this work, [22] shows that SGDM converges as fast as SGD under convexity and strong convexity, and that it is asymptotically faster than SGD for overparameterized models. Remarkably, their analysis considers a different stepsize and momentum weight schedule from this work, and applies to arbitrary sampling without assuming bounded variance of the gradient noise.
In deep learning, SGDM is often applied with various parameter tuning rules to achieve efficient training. One of the most widely adopted rules is called “constant and drop", where a constant stepsize is applied for a long period and is dropped by some constant factor to allow for refined training, while the momentum weight is either kept unchanged (usually 0.9) or gradually increasing. We call this strategy Multistage SGDM and summarize it in Algorithm 1. Practically, (multistage) SGDM was successfully applied to training large-scale neural networks [13, 11], and it was found that appropriate parameter tuning leads to superior performance [24]. Since then, (multistage) SGDM has become increasingly popular [23].
At each stage, Multistage SGDM (Algorithm 1) requires three parameters: stepsize, momentum weight, and stage length. In [8] and [10], doubling-argument-based rules are analyzed for SGD on strongly convex objectives, where the stage length is doubled whenever the stepsize is halved. Recently, certain stepsize schedules were shown to yield faster convergence for SGD on nonconvex objectives satisfying growth conditions [27, 5], and a nearly optimal stepsize schedule was provided for SGD on least squares regression [6]. These results consider only the momentum-free case. Another recent work focuses on the asymptotic convergence of SGDM (i.e., without convergence rate) [9], which requires the momentum weights to approach either 0 or 1, and therefore contradicts the common practice in neural network training. In summary, the convergence rate of Multistage SGDM (Algorithm 1) has not been established except for the momentum-free case, and the role of the parameters in different stages is unclear.
Algorithm 1 Multistage SGDM
Input: problem data f(x) as in (1), number of stages n, momentum weights {βi}_{i=1}^n ⊆ [0, 1), stepsizes {αi}_{i=1}^n, and stage lengths {Ti}_{i=1}^n at the n stages, initialization x1 ∈ R^d and m0 = 0, iteration counter k = 1.
1: for i = 1, 2, ..., n do
2:   α ← αi, β ← βi;
3:   for j = 1, 2, ..., Ti do
4:     Sample a minibatch ζk uniformly from the training data;
5:     g̃k ← ∇x ℓ(xk, ζk);
6:     mk ← βmk−1 + (1 − β)g̃k;
7:     xk+1 ← xk − αmk;
8:     k ← k + 1;
9:   end for
10: end for
11: return x̃, which is generated by first choosing a stage l ∈ {1, 2, ..., n} uniformly at random, and then choosing x̃ ∈ {x^{T1+...+T_{l−1}+1}, x^{T1+...+T_{l−1}+2}, ..., x^{T1+...+T_l}} uniformly at random;
³Here k is the number of iterations. Note that in [26], a different but equivalent formulation of SGDM is analyzed; their stepsize γ is effectively α/(1 − β) in our setting.
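For concreteness, the following is a minimal Python rendering of Algorithm 1 (a sketch only: the gradient oracle, data, and batching are placeholders, and no attempt is made at efficiency). The inner update is exactly (2): m ← βm + (1 − β)g̃, x ← x − αm.

import random
import numpy as np

def multistage_sgdm(x0, grad_fn, data, alphas, betas, lengths, batch_size=64, seed=0):
    # grad_fn(x, batch) should return a stochastic gradient of f at x on the given minibatch.
    rng = random.Random(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    stage_iterates = []                                  # iterates kept per stage for the output rule
    for alpha, beta, T in zip(alphas, betas, lengths):
        iterates = []
        for _ in range(T):
            batch = rng.sample(data, batch_size)         # minibatch sampled uniformly from the data
            g_tilde = grad_fn(x, batch)
            m = beta * m + (1.0 - beta) * g_tilde        # momentum update, eq. (2)
            x = x - alpha * m
            iterates.append(x.copy())
        stage_iterates.append(iterates)
    # Output rule: choose a stage uniformly at random, then an iterate within it uniformly.
    return rng.choice(rng.choice(stage_iterates))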
1.1 Our contributions
In this work, we provide new convergence analysis for SGDM and Multistage SGDM that resolves the aforementioned issues. A comparison of our results with prior work can be found in Table 1.
1. We show that for both strongly convex and nonconvex objectives, SGDM (2) enjoys the same convergence bound as SGD. This helps explain the empirical observations that SGDM is at least as fast as SGD [23]. Our analysis relies on a new observation: the update direction mk of SGDM (2) has a controllable deviation from the current full gradient ∇f(xk) and enjoys a smaller variance. Inspired by this, we construct a new Lyapunov function that properly handles this deviation and exploits an auxiliary sequence to take advantage of the reduced variance. Compared to the aforementioned previous work, our analysis is not restricted to least squares, does not assume uniformly bounded gradients, and improves the convergence bound.
2. For the more popular SGDM in the multistage setting (Algorithm 1), we establish its convergence and demonstrate that the multistage strategy is faster in the initial stages. Specifically, we allow larger stepsizes in the first few stages to boost initial performance, and smaller stepsizes in the final stages to decrease the size of the stationary distribution. Theoretically, we properly redefine the aforementioned auxiliary sequence and Lyapunov function to incorporate the stagewise parameters. To the best of our knowledge, this is the first convergence guarantee for SGDM in the multistage setting.
1.2 Other related work
Nesterov's momentum achieves the optimal convergence rate in deterministic optimization [18], and has also been combined with SGD for neural network training [24]. Recently, its multistage version has been analyzed for convex or strongly convex objectives [3, 14]. Other forms of momentum for stochastic optimization include PID Control-based methods [2], Accelerated SGD [12], and Quasi-Hyperbolic Momentum [17]. In this work, we restrict ourselves to heavy-ball momentum, which is arguably the most popular form of momentum in current deep learning practice.
2 Notation and Preliminaries
Throughout this paper, we use ‖ · ‖ for the vector ℓ2-norm and 〈·, ·〉 for the dot product. Let gk denote the full gradient of f at xk, i.e., gk := ∇f(xk), and f∗ := min_{x∈R^d} f(x).
Definition 1. We say that f : R^d → R is L-smooth with L ≥ 0 if it is differentiable and satisfies f(y) ≤ f(x) + 〈∇f(x), y − x〉 + (L/2)‖y − x‖² for all x, y ∈ R^d. We say that f : R^d → R is µ-strongly convex with µ ≥ 0 if it satisfies f(y) ≥ f(x) + 〈∇f(x), y − x〉 + (µ/2)‖y − x‖² for all x, y ∈ R^d.
The following assumption is effective throughout and is standard in stochastic optimization.
Assumption 1. 1. Smoothness: the objective f(x) in (1) is L-smooth. 2. Unbiasedness: at each iteration k, g̃k satisfies Eζk[g̃k] = gk. 3. Independent samples: the random samples {ζk}_{k=1}^∞ are independent. 4. Bounded variance: the variance of g̃k with respect to ζk satisfies Varζk(g̃k) = Eζk[‖g̃k − gk‖²] ≤ σ² for some σ² > 0.
Unless otherwise noted, all the proofs in the paper are deferred to the appendix.
3 Key Ingredients of Convergence Theory
In this section, we present some key insights for the analysis of stochastic momentum methods. For simplicity, we first focus on the case of fixed stepsize and momentum weight, and make proper generalizations for Multistage SGDM in App. C.
3.1 A key observation on momentum
In this section, we make the following observation on the role of momentum: with a momentum weight β ∈ [0, 1), the update vector mk enjoys a reduced “variance" of (1 − β)σ², while having a controllable deviation from the full gradient gk in expectation. First, without loss of generality, we can take m0 = 0 and express mk as
mk = (1 − β) ∑_{i=1}^k β^{k−i} g̃^i.   (3)
Thus mk is a moving average of the past stochastic gradients, with smaller weights for older ones.¹
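The following quick numerical check (illustrative only; the "stochastic gradients" are i.i.d. noisy copies of a fixed vector, which isolates the variance effect) confirms that the closed form (3) matches the recursive update (2), and that the squared weights sum to the (1 − β)/(1 + β)·(1 − β^{2k}) factor appearing in Lemma 1 below.

import numpy as np

rng = np.random.default_rng(0)
beta, sigma, k, d = 0.9, 1.0, 200, 5
g_true = np.ones(d)
g_tilde = g_true + sigma * rng.standard_normal((k, d))     # unbiased noisy gradients

m = np.zeros(d)
for i in range(k):
    m = beta * m + (1.0 - beta) * g_tilde[i]               # recursion (2)
weights = (1.0 - beta) * beta ** np.arange(k - 1, -1, -1)  # coefficients of g_tilde^1, ..., g_tilde^k
print(np.allclose(m, weights @ g_tilde))                   # True: the closed form (3) holds

# The weights sum to 1 - beta^k, and their squared sum is (1-beta)/(1+beta)*(1-beta^{2k}),
# roughly 0.05 here instead of 1 -- the per-coordinate variance of mk shrinks accordingly.
print(weights.sum(), (weights ** 2).sum())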
¹Note the sum of weights satisfies (1 − β)∑_{i=1}^k β^{k−i} = 1 − β^k → 1 as k → ∞.
We have the following result regarding the “variance" of mk, which is measured between mk and its deterministic version (1 − β)∑_{i=1}^k β^{k−i} g^i.
Lemma 1. Under Assumption 1, the update vector mk in SGDM (2) satisfies
E‖mk − (1 − β)∑_{i=1}^k β^{k−i} g^i‖² ≤ ((1 − β)/(1 + β))·(1 − β^{2k})·σ².
Lemma 1 follows directly from the properties of the moving average. On the other hand, (1 − β)∑_{i=1}^k β^{k−i} g^i is a moving average of all past gradients, which is in contrast to SGD. It seems unclear how far (1 − β)∑_{i=1}^k β^{k−i} g^i is from the ideal descent direction gk, and the gap could be unbounded unless stronger assumptions are imposed. Previous analyses such as [25] and [9] make the blanket assumption of bounded ∇f to circumvent this difficulty. In this work, we provide a different perspective to resolve this issue.
Lemma 2. Under Assumption 1, we have
E‖(1/(1 − β^k))·(1 − β)∑_{i=1}^k β^{k−i} g^i − gk‖² ≤ ∑_{i=1}^{k−1} a_{k,i}·E[‖x^{i+1} − x^i‖²], where a_{k,i} = (L²β^{k−i}/(1 − β^k))·(k − i + β/(1 − β)).   (4)
From Lemma 2, we know the deviation of (1/(1 − β^k))·(1 − β)∑_{i=1}^k β^{k−i} g^i from gk is a controllable sum of past successive iterate differences, in the sense that the coefficients a_{k,i} decay linearly for older ones. This inspires the construction of a new Lyapunov function to handle the deviation brought by the momentum, as we shall see next.
3.2 A new Lyapunov function
Let us construct the following Lyapunov function for SGDM:
Lk = (f(zk) − f*) + ∑_{i=1}^{k−1} ci·‖x^{k+1−i} − x^{k−i}‖².   (5)
In the Lyapunov function (5), {ci}_{i=1}^∞ are positive constants to be specified later, corresponding to the deviation described in Lemma 2. Since the coefficients in (4) converge linearly to 0 as k → ∞, we can choose {ci}_{i=1}^∞ in a diminishing fashion such that this deviation can be controlled, and Lk defined in (5) is indeed a Lyapunov function under strongly convex and nonconvex settings (see Propositions 1 and 2). In (5), zk is an auxiliary sequence defined as
zk = xk for k = 1, and zk = (1/(1 − β))·xk − (β/(1 − β))·x^{k−1} for k ≥ 2.   (6)
This auxiliary sequence first appeared in the analysis of deterministic heavy-ball methods in [7], and was later applied in the analysis of SGDM [26, 25]. It enjoys the following property.
Lemma 3. zk defined in (6) satisfies z^{k+1} − zk = −αg̃k.
Lemma 3 indicates that it is more convenient to analyze zk than xk, since zk behaves more like an SGD iterate, although the stochastic gradient g̃k is not taken at zk. Remarkably, we shall see in Sec. 4 that with c1 = O(L/(1 − β)), Lk defined in (5) is indeed a Lyapunov function under strongly convex and nonconvex settings, and that SGDM converges as fast as SGD.
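As a quick sanity check of Lemma 3 (illustrative only), the snippet below runs the SGDM recursion (2), forms zk as in (6), and verifies that z^{k+1} − z^k reproduces the plain step −αg̃^k at every iteration.

import numpy as np

rng = np.random.default_rng(1)
alpha, beta, d, steps = 0.1, 0.9, 4, 50

def z(x_cur, x_old):
    # definition (6): z^1 = x^1, and z^k = x^k/(1-beta) - beta*x^{k-1}/(1-beta) for k >= 2
    return x_cur if x_old is None else (x_cur - beta * x_old) / (1.0 - beta)

xs = [None, rng.standard_normal(d)]        # xs[1] = x^1; xs[0] is a dummy so indices match the paper
m = np.zeros(d)
all_match = True
for k in range(1, steps + 1):
    g_tilde = rng.standard_normal(d)                            # stand-in stochastic gradient at x^k
    m = beta * m + (1.0 - beta) * g_tilde                       # SGDM update (2)
    xs.append(xs[k] - alpha * m)                                # x^{k+1}
    z_next = z(xs[k + 1], xs[k])
    z_cur = z(xs[k], xs[k - 1] if k >= 2 else None)
    all_match = all_match and np.allclose(z_next - z_cur, -alpha * g_tilde)
print(all_match)                                                # True: Lemma 3 holds step by step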
Now, let us turn to Multistage SGDM (Algorithm 1), which has been very successful in neural network training. However, its convergence still remains unclear except for the momentum-free case. To establish convergence, we require the parameters of Multistage SGDM to satisfy
αiβi/(1 − βi) ≡ A1 for i = 1, 2, ..., n;  αiTi ≡ A2 for i = 1, 2, ..., n;  0 ≤ β1 ≤ β2 ≤ ... ≤ βn < 1,   (7)
where αi, βi, and Ti are the stepsize, momentum weight, and stage length of the ith stage, respectively, and A1, A2 are properly chosen constants. In principle, one applies larger stepsizes αi at the initial stages, which accelerates initial convergence, and smaller stepsizes for the final stages, which shrinks the size of the final stationary distribution. As a result, (7) stipulates that fewer iterations are required for stages with large stepsizes and more iterations for stages with small stepsizes. Finally, (7) requires the momentum weights to be monotonically increasing, which is consistent with what is done in practice [24]; often, using a constant momentum weight also works.
Under the parameter choices in (7), let us define the auxiliary sequence zk by
zk = xk − A1·m^{k−1}.   (8)
This {zk}_{k=1}^∞ sequence reduces to (6) when a constant stepsize and momentum weight are applied. Furthermore, the observations made in Lemmas 1, 2, and 3 can also be generalized (see Lemmas 4, 5, 6, and 7 in App. C). In Sec. 5, we shall see that with (7) and appropriately chosen {ci}_{i=1}^∞, Lk in (5) also defines a Lyapunov function in the multistage setting, which in turn leads to the convergence of Multistage SGDM.
4 Convergence of SGDM
In this section, we proceed to establish the convergence of SGDM described in (2). First, by following the idea presented in Sec. 3, we can show that Lk defined in (5) is a Lyapunov function.
Proposition 1. Let Assumption 1 hold. In (2), let α ≤ (1 − β)/(2√2·L·√(β + β²)). Let {ci}_{i=1}^∞ in (5) be defined by
c1 = [((β + β²)/(1 − β)³)·L³α²] / [1 − 4α²·((β + β²)/(1 − β)²)·L²],
c_{i+1} = ci − (4c1α² + Lα²/(1 − β))·β^i·(i + β/(1 − β))·L² for all i ≥ 1.
Then, ci > 0 for all i ≥ 1, and
E[L^{k+1} − Lk] ≤ (−α + ((3 − β + β²)/(2(1 − β)))·Lα² + 4c1α²)·E[‖gk‖²] + ((β²/(2(1 + β)))·Lα²σ² + (1/2)·Lα²σ² + 2c1·((1 − β)/(1 + β))·α²σ²).   (9)
By telescoping (9), we obtain the stationary convergence of SGDM under nonconvex settings.
Theorem 1. Let Assumption 1 hold. In (2), let α ≤ min{(1 − β)/(L(4 − β + β²)), (1 − β)/(2√2·L·√(β + β²))}. Then,
(1/k)∑_{i=1}^k E[‖g^i‖²] ≤ 2(f(x¹) − f∗)/(kα) + ((β + 3β²)/(2(1 + β)) + 1)·Lασ² = O((f(x¹) − f∗)/(kα) + Lασ²).   (10)
Now let us turn to the strongly convex setting, for which we have
Proposition 2. Let Assumption 1 hold. Assume in addition that f is µ-strongly convex. In (2), let α ≤ min{(1 − β)/(5L), (1 − β)/(L(3 − β + 2β² + (48√β/25)·(2L + 18µ)/L))}. Then, there exist positive constants ci for (5) such that for all k ≥ k0 := ⌊log 0.5 / log β⌋, we have
E[L^{k+1} − Lk] ≤ −(αµ/(1 + 8µ/L))·E[Lk] + (((1 + β + β²)/(2(1 + β)))·L + 2c1·(1 − β)/(1 + β))·α²σ² + ((β² + Lα²β²/(1 − β))/((1 + 8µ/L)(1 + β)))·2µα²σ².
The choices of {ci}_{i=1}^∞ are similar to those of Proposition 1 and can be found in App. B.4. With Proposition 2, we immediately have
Theorem 2. Let Assumption 1 hold and assume in addition that f is µ-strongly convex. Under the same settings as in Proposition 2, for all k ≥ k0 = ⌊log 0.5 / log β⌋ we have
E[f(zk) − f∗] ≤ (1 − αµ/(1 + 8µ/L))^{k−k0}·E[L^{k0}] + (1 + 8µ/L)·((1 + β + β²)/(2(1 + β)))·(L/µ)·ασ² + (1 + 8µ/L)·((1/(1 + β))·(12√β/25)·((2L + 18µ)/µ)·ασ² + ((β² + Lα·10β²)/(1 + 8µ/L))·(2/(1 + β))·ασ²) = O((1 − αµ)^k + (L/µ)·ασ²).
Corollary 1. Let Assumption 1 hold and assume in addition that f is µ-strongly convex. Under the same settings as in Proposition 2, for all k ≥ k0 = ⌊log 0.5 / log β⌋ we have E[f(xk) − f∗] = O(r^k + (L/µ)·ασ²), where r = max{1 − αµ, β}.
Remark 1. 1. Under nonconvex settings, the classical convergence bound of SGD is O((f(x¹) − f∗)/(kα) + Lασ²) with α = O(1/L) (see, e.g., Theorem 4.8 of [4]). Therefore, Theorem 1 tells us that with α = O((1 − β)/L), SGDM achieves the same convergence bound as SGD. 2. In contrast, the radius of the stationary distribution for SGDM in [26] and [25] is O(ασ²/(1 − β)), and the latter also assumes that ∇f is uniformly bounded. 3. In Theorem 2 and Corollary 1, the convergence bounds hold for k ≥ k0 = ⌊log 0.5 / log β⌋, where k0 is a mild constant.¹ (¹For example, k0 = 6 for the popular choice β = 0.9.)
When r = 1 − αµ, the O(r^k) part in Corollary 1 matches the lower bound established in Proposition 3 of [12]. 4. The convergence bound of SGD under strong convexity is O((1 − αµ)^k + (L/µ)·ασ²) (see, e.g., Theorem 4.6 of [4]); our result for SGDM in Corollary 1 recovers this when β = 0.
5 Convergence of Multistage SGDM
In this section, we switch to Multistage SGDM (Algorithm 1). Let us first show that when (7) is applied, we can define the constants ci properly so that (5) still produces a Lyapunov function.
Proposition 3. Let Assumption 1 hold. In Algorithm 1, let the parameters satisfy (7) with A1 = 1/(24√2·L). In addition, let
(1 − β1)/β1 ≤ 12(1 − βn)/√(βn + βn²),
c1 = [(α1²/(1 − β1))·((βn + βn²)/(1 − βn)²)·L³] / [1 − 4α1²·((βn + βn²)/(1 − βn)²)·L²],
and for any i ≥ 1, let c_{i+1} = ci − (4c1α1² + Lα1²/(1 − β1))·βn^i·(i + βn/(1 − βn))·L². Then, we have ci > 0 for any i ≥ 1. Furthermore, with zk defined in (8), for any k ≥ 1, we have
E[L^{k+1} − Lk] ≤ (−α(k) + ((3 − β(k) + 2β²(k))/(2(1 − β(k))))·Lα²(k) + 4c1α²(k))·E[‖gk‖²] + (β²(k)·Lα²(k)·(12β1/√(βn + βn²))·σ² + (1/2)·Lα²(k)·σ² + 4c1(1 − β1)·α²(k)·σ²),
where α(k), β(k) are the stepsize and momentum weight applied at the kth iteration, respectively.
Theorem 3. Let Assumption 1 hold. Under the same settings as in Proposition 3, let β1 ≥ 1/2 and let A2 be large enough such that βi^{2Ti} ≤ 1/2 for i = 1, 2, ..., n. Then, we have
(1/n)∑_{l=1}^n (1/Tl)∑_{i=T1+...+T_{l−1}+1}^{T1+...+Tl} E[‖g^i‖²] ≤ 2(f(x¹) − f∗)/(nA2) + (1/n)∑_{l=1}^n (24βl²·(β1/√(βn + βn²))·L + 3L)·αl·σ² = O((f(x¹) − f∗)/(nA2) + (1/n)∑_{l=1}^n L·αl·σ²).   (11)
Remark 2. 1. On the left-hand side of (11), we have the average over the n stages of the averaged squared gradient norm within each stage. 2. On the right-hand side of (11), the first term dominates in the initial stages, so we can apply large αi in these stages to accelerate initial convergence, and use smaller αi in later stages so that the size of the stationary distribution is small. In contrast, (static) SGDM needs to use a small stepsize α to make the size of the stationary distribution small. 3. It is unclear whether the iteration complexity of Multistage SGDM is better than that of SGDM or not. However, we do observe that Multistage SGDM is faster numerically. We leave a possible improved analysis of Multistage SGDM to future work.
6 Experiments
In this section, we verify our theoretical claims by numerical experiments. For each combination of algorithm and training task, training is performed with 3 random seeds 1, 2, 3. Unless otherwise stated, we report the average of the losses over the past m batches, where m is the number of batches for the whole dataset. Our implementation is available on GitHub.¹ Additional implementation details can be found in App. E.
6.1 Logistic regression
Setup. The MNIST dataset consists of n = 60000 labeled examples of 28 × 28 gray-scale images of handwritten digits in K = 10 classes 0, 1, ..., 9. For all algorithms, we use batch size s = 64 (and hence the number of batches per epoch is m = 1874) and number of epochs T = 50. The regularization parameter is λ = 5 × 10^{−4}.
The effect of α in (static) SGDM. By Theorem 2 we know that, with a fixed β, a larger α leads to a faster loss decrease toward the stationary distribution. However, the size of the stationary distribution is also larger. This is well illustrated in Figure 1. For example, α = 1.0 and α = 0.5 make the losses decrease more rapidly than α = 0.1. During later iterations, α = 0.1 leads to a lower final loss.
Multistage SGDM. We take 3 stages for Multistage SGDM.
The parameters are chosen according to (7): T1 = 3, T2 = 6, T3 = 21, αi = A2/Ti, βi = A1/(A1 + αi), where A2 = 2.0 and A1 = 1.0.¹ We compare Multistage SGDM with SGDM with (α, β) = (0.66, 0.9) and (α, β) = (0.095, 0.9), where 0.66 and 0.095 are the stepsizes of the first and last stage of Multistage SGDM, respectively. The training losses of initial and later iterations are shown in Figure 2. We can see that SGDM with (α, β) = (0.66, 0.9) converges faster initially but has a higher final loss, while SGDM with (α, β) = (0.095, 0.9) behaves the other way. Multistage SGDM takes advantage of both, as predicted by Theorem 3. The performances of SGDM and vanilla SGD with the same stepsize are similar.
¹https://github.com/gao-yuan-hangzhou/improved-analysis-sgdm
¹Here, A1 is not set to its theoretical value 1/(24L), since the dataset is very large and the gradient Lipschitz constant L cannot be computed easily.
6.2 Image classification
For the task of training ResNet-18 on the CIFAR-10 dataset, we compare Multistage SGDM, a baseline SGDM, and YellowFin [28], an automatic momentum tuner based on heuristics from optimizing strongly convex quadratics. The initial learning rate of YellowFin is set to 0.1,¹ and the other parameters are set to their default values. All algorithms are run for T = 50 epochs and the batch size is fixed at s = 128. For Multistage SGDM, the parameter choices are governed by (7): the stage lengths are T1 = 5, T2 = 10, and T3 = 35; we take A1 = 1.0, A2 = 2.0, and set the per-stage stepsizes and momentum weights as αi = A2/Ti and βi = A1/(A1 + αi), for stages i = 1, 2, 3. For the baseline SGDM, the stepsize schedule of Multistage SGDM is applied, but with a fixed momentum β = 0.9. In Figure 3, we present training losses and end-of-epoch validation accuracy of the tested algorithms. We can see that Multistage SGDM performs the best. Baseline SGDM is slightly worse, possibly because of its fixed momentum weight. Finally, Multistage SGDM can reach a test accuracy of 93% around 200 epochs.
¹We have experimented with initial learning rates 0.001 (default), 0.01, 0.1, and 0.5, each repeated 3 times; we found that the choice 0.1 is the best in terms of the final training loss.
7 Summary and Future Directions
In this work, we provide new theoretical insights into the convergence behavior of SGDM and Multistage SGDM. For SGDM, we show that it is as fast as plain SGD in both nonconvex and strongly convex settings. For the widely adopted multistage SGDM, we establish its convergence and show the advantage of stagewise training. There are still open problems to be addressed. For example, (a) Is it possible to show that SGDM converges faster than SGD for special objectives such as quadratic ones? (b) Are there more efficient parameter choices than (7) that guarantee even faster convergence?
Broader Impact
The results of this paper improve the performance of stochastic gradient descent with momentum as well as its multistage version. Our study will also benefit the machine learning community. We do not believe that the results in this work will cause any ethical issues, or put anyone at a disadvantage in our society.
Acknowledgements
Yanli Liu and Wotao Yin were partially supported by ONR Grant N000141712162. Yanli Liu was also supported by a UCLA Dissertation Year Fellowship. Yuan Gao was supported by the Columbia University School of Engineering and Applied Science.
1. What are the main contributions and strengths of the paper regarding the convergence analysis of Stochastic Gradient Descent with momentum? 2. What are the weaknesses and limitations of the paper, particularly in terms of presentation and missing references? 3. Do you have any questions or concerns regarding the proposed Lyapunov function and its application in the analysis? 4. How does the reviewer assess the novelty and significance of the multistage variant analysis? 5. Are there any issues with the notation and terminology used in the paper, such as the use of $L^k$ for the Lyapunov function and the update rule for stochastic heavy ball method? 6. What is the reviewer's overall assessment of the paper's quality and potential impact on the field?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper presents a convergence analysis of the popular Stochastic Gradient Descent method with momentum (SGDM) and its multistage variant. The paper focuses on two classes of smooth functions: non-convex and strongly convex. It was shown that SGDM converges as fast as SGD for the above classes of functions. The theoretical convergence is verified through preliminary numerical testing.
---- Post Rebuttal ----
I have read the authors' response and the other reviews and decided to keep the overall score unchanged. I would strongly suggest that the authors improve the presentation of their work as I suggested in my original review, include the missing references, and highlight the benefits of the proposed approach more. Regarding the issue with Lemma 1 that R3 mentioned, I trust the claim of the authors that they will be able to fix it. In the opposite scenario (if it turns out that the issue is not fixable), I expect that the authors will withdraw their submission, as the whole theory depends on this result.
Strengths
The theoretical understanding of SGDM is very limited and the paper tries to close this gap. The proposed analysis is made through a novel Lyapunov function that takes advantage of the reduced variance of the SGDM update. The analysis of multistage SGDM is also novel (to the best of my knowledge there is no previous analysis of the multistage variant).
Weaknesses
The ideas of the paper could be interesting; however, the paper loses some points in terms of presentation. Also, some claims are not really justified. For example, the title mentions "An Improved Analysis" but it was never really explained in detail why the proposed analysis justifies the word "improved". There are some limitations of existing papers in the Intro, but this should be made clearer in the main contributions of the work. In line 74, the authors mentioned: "To the best of our knowledge, this is the first convergence (and acceleration) guarantee for SGDM in the multistage setting." How is the acceleration justified? I understand that in the initial stage of the algorithm one can use a larger step-size and thus the method could be faster, but it is not clear how one can claim an accelerated rate. I think a table with a summary of convergence rates is needed, where the results of previous works are presented in comparison with the rates of the proposed analysis. This will make the presentation much clearer. Important: Normally the update of the stochastic heavy ball method (SGD with momentum) has no $1-\beta$ before the gradient in update (2). See for example the update (3) in [2]. How is this different from the standard presentation of SGD with heavy-ball momentum: $x^{k+1}=x^k-\alpha g^k+\beta (x^k-x^{k-1})$? The authors denote the Lyapunov function by $L^k$, which could be confused with the smoothness parameter $L$. The presentation of Theorem 1 looks informal. One should state the properties of the function in a theorem; in this case, f(x) is nonconvex and smooth. For Theorem 2 it was assumed that $k>k_0$. In my opinion this means that the rate is asymptotic (especially if it turns out that $k_0$ is large) and this needs to be highlighted. In Remark 1 point 2 the following is mentioned: "assumes uniformly gradient of f". What does this mean? In the appendix, the proofs of Lemma 1 and Lemma 2 should be given in the opposite order. Also, in line 314 it is mentioned that in the last step item 2 of Assumption 1 is used. Can you elaborate more on this? Why is that the case?
NIPS
Title An Improved Analysis of Stochastic Gradient Descent with Momentum Abstract SGD with momentum (SGDM) has been widely applied in many machine learning tasks, and it is often applied with dynamic stepsizes and momentum weights tuned in a stagewise manner. Despite of its empirical advantage over SGD, the role of momentum is still unclear in general since previous analyses on SGDM either provide worse convergence bounds than those of SGD, or assume Lipschitz or quadratic objectives, which fail to hold in practice. Furthermore, the role of dynamic parameters has not been addressed. In this work, we show that SGDM converges as fast as SGD for smooth objectives under both strongly convex and nonconvex settings. We also prove that multistage strategy is beneficial for SGDM compared to using fixed parameters. Finally, we verify these theoretical claims by numerical experiments. 1 Introduction Stochastic gradient methods have been a widespread practice in machine learning. They aim to minimize the following empirical risk: min x∈Rd f(x) := 1 n n∑ i=1 `(x, qi), (1) where ` is a loss function and {qi}ni=1 denotes the training data, x denotes the trainable parameters of the machine learning model, e.g., the weight matrices in a neural network. In general, stochastic gradient methods can be written as mk = βmk−1 + (1− β)g̃k, xk+1 = xk − αmk. (2) where α > 0 is a stepsize, β ∈ [0, 1) is called momentum weight, and m0 = 0. The classical Stochastic Gradient Descent(SGD) method [21] uses β = 0 and mk = g̃k, where g̃k is a stochastic gradient of f(x) at xk. To boost the practical performance, one often applies a momentum weight of β > 0. and the resulting algorithm is often called SGD with momentum (SGDM). SGDM is very popular for training neural networks with remarkable empirical successes, and has been implemented as the default SGD optimizer in Pytorch [19] and Tensorflow [1]1. The idea behind SGDM originates from Polyak’s heavy-ball method [20] for deterministic optimization. For strongly convex and smooth objectives, heavy-ball method enjoys an accelerated linear 1Their implementation of SGDM does not have the (1− β) before g̃k, which gives mk = ∑k i=1 β k−ig̃i, while mk = (1− β) ∑k i=1 β k−ig̃i for (2). Therefore, they only differ by a constant scaling. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. convergence rate over gradient descent [7]. However, the theoretical understanding of its stochastic counterpart is far from being complete. In the case of fixed stepsize and momentum weight, most of the current results only apply to restrictive settings. In [15, 16] and [12], the behavior of SGDM on least square regression is analyzed and linear convergence is established. [9] analyzes the local convergence rate of SGDM for strongly convex and smooth functions, where the initial point x0 is assumed to be close enough to the minimizer x∗. [25] provides global convergence of SGDM, but only for objectives with uniformly bounded gradients, thus excluding many machine learning models such as Ridge regression. Very recently, [26] presents a convergence bound of O( 1kα + α 1−β ) for general smooth nonconvex objectives 3. When β = 0, this recovers the classical convergence bound of O( 1kα + α) of SGD [4]. However, the size of stationary distribution O( α1−β ) is 1 1−β times larger than that of SGD. This factor is not negligible, especially when large β values such as 0.99 and 0.995 is applied [24]. 
Therefore, their result does not explain the competitiveness of SGDM compared to SGD. Concurrent to this work, [22] shows that SGDM converges as fast as SGD under convexity and strong convexity, and that it is asymptotically faster than SGD for overparameterized models. Remarkably, their analysis considers a different stepsize and momentum weight schedule from this work, and applies to arbitrary sampling without assuming the bounded variance of the gradient noise. In deep learning, SGDM is often applied with various parameter tuning rules to achieve efficient training. One of the most widely adopted rules is called “constant and drop", where a constant stepsize is applied for a long period and is dropped by some constant factor to allow for refined training, while the momentum weight is either kept unchanged (usually 0.9) or gradually increasing. We call this strategy Multistage SGDM and summarize it in Algorithm 1. Practically, (multistage) SGDM was successfully applied to training large-scale neural networks [13, 11], and it was found that appropriate parameter tuning leads to superior performance [24]. Since then, (multistage) SGDM has become increasingly popular [23]. At each stage, Multistage SGDM (Algorithm 1) requires three parameters: stepsize, momentum weight, and stage length. In [8] and [10], doubling argument based rules are analyzed for SGD on strongly convex objectives, where the stage length is doubled whenever the stepsize is halved. Recently, certain stepsize schedules are shown to yield faster convergence for SGD on nonconvex objectives satisfying growth conditions [27, 5], and a nearly optimal stepsize schedule is provided for SGD on least square regression [6]. These results consider only the momentum-free case. Another recent work focuses on the asymptotic convergence of SGDM (i.e., without convergence rate) [9], which requires the momentum weights to approach either 0 or 1, and therefore contradicts the common practice in neural network training. In summary, the convergence rate of Multistage SGDM (Algorithm 1) has not been established except for the momentum-free case, and the role of parameters in different stages is unclear. Algorithm 1 Multistage SGDM Input: problem data f(x) as in (1), number of stages n, momentum weights {βi}ni=1 ⊆ [0, 1), step sizes {αi}ni=1, and stage lengths {Ti}ni=1 at n stages, initialization x1 ∈ Rd and m0 = 0, iteration counter k = 1. 1: for i = 1, 2, ..., n do 2: α← αi, β ← βi; 3: for j = 1, 2, ..., Ti do 4: Sample a minibatch ζk uniformly from the training data; 5: g̃k ← ∇xl(xk, ζk); 6: mk ← βmk−1 + (1− β)g̃k; 7: xk+1 ← xk − αmk; 8: k ← k + 1; 9: end for 10: end for 11: return x̃, which is generated by first choosing a stage l ∈ {1, 2, ...n} uniformly at random, and then choosing x̃ ∈ {xT1+...+Tl−1+1, xT1+...+Tl−1+2, ..., xT1+...+Tl} uniformly at random; 3Here k is the number of iterations. Note that in [26], a different but equivalent formulation of SGDM is analyzed; their stepsize γ is effectively α 1−β in our setting. 1.1 Our contributions In this work, we provide new convergence analysis for SGDM and Multistage SGDM that resolve the aforementioned issues. A comparison of our results with prior work can be found in Table 1. 1. We show that for both strongly convex and nonconvex objectives, SGDM (2) enjoys the same convergence bound as SGD. This helps explain the empirical observations that SGDM is at least as fast as SGD [23]. 
Our analysis relies on a new observation that, the update direction mk of SGDM (2) has a controllable deviation from the current full gradient ∇f(xk), and enjoys a smaller variance. Inspired by this, we construct a new Lyapunov function that properly handles this deviation and exploits an auxiliary sequence to take advantage of the reduced variance. Compared to aforementioned previous work, our analysis applies to not only least squares, does not assume uniformly bounded gradient, and improves the convergence bound. 2. For the more popular SGDM in the multistage setting (Algorithm 1), we establish its convergence and demonstrate that the multistage strategy are faster at initial stages. Specifically, we allow larger stepsizes in the first few stages to boost initial performance, and smaller stepsizes in the final stages decrease the size of stationary distribution. Theoretically, we properly redefine the aforementioned auxiliary sequence and Lyapunov function to incorporate the stagewise parameters. To the best of our knowledge, this is the first convergence guarantee for SGDM in the multistage setting. 1.2 Other related work Nesterov’s momentum achieves optimal convergence rate in deterministic optimization [18], and has also been combined with SGD for neural network training [24]. Recently, its multistage version has been analyzed for convex or strongly convex objectives [3, 14]. Other forms of momentum for stochastic optimization include PID Control-based methods [2], Accelerated SGD [12], and Quasi-Hyperbolic Momentum [17]. In this work, we restrict ourselves to heavy-ball momentum, which is arguably the most popular form of momentum in current deep learning practice. 2 Notation and Preliminaries Throughout this paper, we use ‖ · ‖ for vector `2-norm, 〈·, ·〉 stands for dot product. Let gk denote the full gradient of f at xk, i.e., gk := ∇f(xk), and f∗ := minx∈Rd f(x). Definition 1. We say that f : Rd → R is L−smooth with L ≥ 0, if it is differentiable and satisfies f(y) ≤ f(x) + 〈∇f(x), y − x〉+ L 2 ‖y − x‖2,∀x, y ∈ Rd . We say that f : Rd → R is µ−strongly convex with µ ≥ 0, if it satisfies f(y) ≥ f(x) + 〈∇f(x), y − x〉+ µ 2 ‖y − x‖2,∀x, y ∈ Rd . The following assumption is effective throughout, which is standard in stochastic optimization. Assumption 1. 1. Smoothness: The objective f(x) in (1) is L−smooth. 2. Unbiasedness: At each iteration k, g̃k satisfies Eζk [g̃k] = gk. 3. Independent samples: the random samples {ζk}∞k=1 are independent. 4. Bounded variance: the variance of g̃k with respect to ζk satisfies Varζk(g̃k) = Eζk [‖g̃k − gk‖2] ≤ σ2 for some σ2 > 0. Unless otherwise noted, all the proof in the paper are deferred to the appendix. 3 Key Ingredients of Convergence Theory In this section, we present some key insights for the analysis of stochastic momentum methods. For simplicity, we first focus on the case of fixed stepsize and momentum weight, and make proper generalizations for Multistage SGDM in App. C. 3.1 A key observation on momentum In this section, we make the following observation on the role of momentum: With a momentum weight β ∈ [0, 1), the update vector mk enjoys a reduced “variance" of (1− β)σ2, while having a controllable deviation from the full gradient gk in expectation. First, without loss of generality, we can take m0 = 0, and express mk as mk = (1− β) k∑ i=1 βk−ig̃i. (3) mk is a moving average of the past stochastic gradients, with smaller weights for older ones1. 
we have the following result regarding the “variance" of mk, which is measured between mk and its deterministic version (1− β) ∑k i=1 β k−igi. Lemma 1. Under Assumption 1, the update vector mk in SGDM (2) satisfies E ∥∥∥∥∥mk − (1− β) k∑ i=1 βk−igi ∥∥∥∥∥ 2 ≤ 1− β 1 + β (1− β2k)σ2. Lemma 1 follows directly from the property of the moving average. On the other hand, (1−β) ∑k i=1 β k−igi is a moving average of all past gradients, which is in contrast to SGD. It seems unclear how far is (1− β) ∑k i=1 β k−igi from the ideal descent direction gk, which could be unbounded unless stronger assumptions are imposed. Previous analysis such as [25] and [9] make the blanket assumption of bounded∇f to circumvent this difficulty. In this work, we provide a different perspective to resolve this issue. Lemma 2. Under Assumption 1, we have E ∥∥∥∥∥ 11− βk (1− β) k∑ i=1 βk−igi − gk ∥∥∥∥∥ 2 ≤ k−1∑ i=1 ak,i E[‖xi+1 − xi‖2], where ak,i = L2βk−i 1− βk ( k − i+ β 1− β ) . (4) 1Note the sum of weights (1− β) ∑k i=1 β k−i = 1− βk → 1 as k →∞. From Lemma 2, we know the deviation of 1 1−βk (1− β) ∑k i=1 β k−igi from gk is controllable sum of past successive iterate differences, in the sense that the coefficients ak,i decays linearly for older ones. This inspires the construction of a new Lyapunov function to handle the deviation brought by the momentum, as we shall see next. 3.2 A new Lyapunov function Let us construct the following Lyapunov function for SGDM: Lk = ( f(zk)− f? ) + k−1∑ i=1 ci‖xk+1−i − xk−i‖2. (5) In the Lyapunov function (5), {ci}∞i=1 are positive constants to be specified later corresponding to the deviation described in Lemma 2. Since the coefficients in (4) converges linearly to 0 as k →∞, we can choose {ci}∞i=1 in a diminishing fashion, such that this deviation can be controlled, and Lk defined in (5) is indeed a Lyapunov function under strongly convex and nonconvex settings (see Propositions 1 and 2). In (5), zk is an auxiliary sequence defined as zk = { xk k = 1, 1 1−βx k − β1−βx k−1 k ≥ 2. (6) This auxiliary sequence first appeared in the analysis of deterministic heavy ball methods in [7], and later applied in the analysis of SGDM [26, 25]. It enjoys the following property. Lemma 3. zk defined in (6) satisfies zk+1 − zk = −αg̃k. Lemma 3 indicates that it is more convenient to analyze zk than xk since zk behaves more like a SGD iterate, although the stochastic gradient g̃k is not taken at zk. Since the coefficients of the deviation in Lemma 2 converges linearly to 0 as k →∞, we can choose {ci}∞i=1 in a diminishing fashion, such that this deviation can be controlled. Remarkably, we shall see in Sec. 4 that with c1 = O ( L 1−β ) , Lk defined in (5) is indeed a Lyapunov function under strongly convex and nonconvex settings, and that SGDM converges as fast as SGD. Now, let us turn to the Multistage SGDM (Algorithm 1), which has been very successful in neural network training. However, its convergence still remains unclear except for the momentum-free case. To establish convergence, we require the parameters of Multistage SGDM to satisfy αiβi 1− βi ≡ A1, for i = 1, 2, ...n. αiTi ≡ A2, for i = 1, 2, ...n. 0 ≤ β1 ≤ β2 ≤ ... ≤ βn < 1. (7) where αi, βi, and Ti are the stepsize, momentum weight, and stage length of ith stage, respectively, and A1, A2 are properly chosen constants. In principle, one applies larger stepsizes αi at the initial stages, which will accelerate initial convergence, and smaller stepsizes for the final stages, which will shrink the size of final stationary distribution. 
Now, let us turn to Multistage SGDM (Algorithm 1), which has been very successful in neural network training; however, its convergence still remains unclear except for the momentum-free case. To establish convergence, we require the parameters of Multistage SGDM to satisfy
$$\frac{\alpha_i\beta_i}{1-\beta_i} \equiv A_1 \ \text{ and } \ \alpha_i T_i \equiv A_2 \ \text{ for } i=1,2,\dots,n, \qquad 0\le\beta_1\le\beta_2\le\dots\le\beta_n<1, \qquad (7)$$
where $\alpha_i$, $\beta_i$, and $T_i$ are the stepsize, momentum weight, and stage length of the $i$th stage, respectively, and $A_1$, $A_2$ are properly chosen constants. In principle, one applies larger stepsizes $\alpha_i$ in the initial stages, which accelerates initial convergence, and smaller stepsizes in the final stages, which shrinks the size of the final stationary distribution. As a result, (7) stipulates that fewer iterations are required for stages with large stepsizes and more iterations for stages with small stepsizes. Finally, (7) requires the momentum weights to be monotonically increasing, which is consistent with what is done in practice [24]; often, using a constant momentum weight also works.

Under the parameter choices in (7), let us define the auxiliary sequence $z_k$ by
$$z_k = x_k - A_1 m_{k-1}. \qquad (8)$$
This sequence $\{z_k\}_{k=1}^\infty$ reduces to (6) when a constant stepsize and momentum weight are applied. Furthermore, the observations made in Lemmas 1, 2, and 3 can also be generalized (see Lemmas 4, 5, 6, and 7 in App. C). In Sec. 5, we shall see that with (7) and appropriately chosen $\{c_i\}_{i=1}^\infty$, $L_k$ in (5) also defines a Lyapunov function in the multistage setting, which in turn leads to the convergence of Multistage SGDM.

4 Convergence of SGDM

In this section, we proceed to establish the convergence of SGDM described in (2). First, by following the idea presented in Sec. 3, we show that $L_k$ defined in (5) is a Lyapunov function.

Proposition 1. Let Assumption 1 hold. In (2), let $\alpha \le \frac{1-\beta}{2\sqrt{2}L\sqrt{\beta+\beta^2}}$. Let $\{c_i\}_{i=1}^\infty$ in (5) be defined by
$$c_1 = \frac{\frac{\beta+\beta^2}{(1-\beta)^3}L^3\alpha^2}{1-4\alpha^2\frac{\beta+\beta^2}{(1-\beta)^2}L^2}, \qquad c_{i+1} = c_i - \Big(4c_1\alpha^2 + \frac{L\alpha^2}{1-\beta}\Big)\beta^i\Big(i+\frac{\beta}{1-\beta}\Big)L^2 \ \text{ for all } i\ge 1.$$
Then $c_i > 0$ for all $i\ge 1$, and
$$\mathbb{E}[L_{k+1}-L_k] \le \Big(-\alpha + \frac{3-\beta+\beta^2}{2(1-\beta)}L\alpha^2 + 4c_1\alpha^2\Big)\mathbb{E}[\|g_k\|^2] + \Big(\frac{\beta^2}{2(1+\beta)}L\alpha^2\sigma^2 + \frac{1}{2}L\alpha^2\sigma^2 + 2c_1\frac{1-\beta}{1+\beta}\alpha^2\sigma^2\Big). \qquad (9)$$

By telescoping (9), we obtain the stationary convergence of SGDM in the nonconvex setting.

Theorem 1. Let Assumption 1 hold. In (2), let $\alpha \le \min\Big\{\frac{1-\beta}{L(4-\beta+\beta^2)},\ \frac{1-\beta}{2\sqrt{2}L\sqrt{\beta+\beta^2}}\Big\}$. Then,
$$\frac{1}{k}\sum_{i=1}^{k}\mathbb{E}[\|g_i\|^2] \le \frac{2\big(f(x_1)-f^*\big)}{k\alpha} + \Big(\frac{\beta+3\beta^2}{2(1+\beta)}+1\Big)L\alpha\sigma^2 = O\Big(\frac{f(x_1)-f^*}{k\alpha} + L\alpha\sigma^2\Big). \qquad (10)$$

Now let us turn to the strongly convex setting, for which we have

Proposition 2. Let Assumption 1 hold. Assume in addition that $f$ is $\mu$-strongly convex. In (2), let $\alpha \le \min\Big\{\frac{1-\beta}{5L},\ \frac{1-\beta}{L\big(3-\beta+2\beta^2+\frac{48\sqrt{\beta}}{25}\cdot\frac{2L+18\mu}{L}\big)}\Big\}$. Then, there exist positive constants $c_i$ for (5) such that for all $k \ge k_0 := \big\lfloor \frac{\log 0.5}{\log\beta}\big\rfloor$, we have
$$\mathbb{E}[L_{k+1}-L_k] \le -\frac{\alpha\mu}{1+\frac{8\mu}{L}}\mathbb{E}[L_k] + \Big(\frac{1+\beta+\beta^2}{2(1+\beta)}L + \frac{1-\beta}{1+\beta}\,2c_1\Big)\alpha^2\sigma^2 + \frac{\beta^2+L\alpha^2\frac{\beta^2}{1-\beta}}{\big(1+\frac{8\mu}{L}\big)(1+\beta)}\,2\mu\alpha^2\sigma^2.$$
The choices of $\{c_i\}_{i=1}^\infty$ are similar to those of Proposition 1 and can be found in App. B.4. With Proposition 2, we immediately have

Theorem 2. Let Assumption 1 hold and assume in addition that $f$ is $\mu$-strongly convex. Under the same settings as in Proposition 2, for all $k \ge k_0 = \big\lfloor \frac{\log 0.5}{\log\beta}\big\rfloor$ we have
$$\mathbb{E}[f(z_k)-f^*] \le \Big(1-\frac{\alpha\mu}{1+\frac{8\mu}{L}}\Big)^{k-k_0}\mathbb{E}[L_{k_0}] + \Big(1+\frac{8\mu}{L}\Big)\frac{1+\beta+\beta^2}{2(1+\beta)}\frac{L}{\mu}\alpha\sigma^2 + \Big(1+\frac{8\mu}{L}\Big)\Big(\frac{1}{1+\beta}\cdot\frac{12\sqrt{\beta}}{25}\cdot\frac{2L+18\mu}{\mu}\alpha\sigma^2 + \frac{\beta^2+L\alpha^2\frac{\beta^2}{1-\beta}}{1+\frac{8\mu}{L}}\cdot\frac{2}{1+\beta}\alpha\sigma^2\Big) = O\Big((1-\alpha\mu)^k + \frac{L}{\mu}\alpha\sigma^2\Big).$$

Corollary 1. Let Assumption 1 hold and assume in addition that $f$ is $\mu$-strongly convex. Under the same settings as in Proposition 2, for all $k \ge k_0 = \big\lfloor \frac{\log 0.5}{\log\beta}\big\rfloor$ we have $\mathbb{E}[f(x_k)-f^*] = O\big(r^k + \frac{L}{\mu}\alpha\sigma^2\big)$, where $r = \max\{1-\alpha\mu,\ \beta\}$.

Remark 1. 1. In the nonconvex setting, the classical convergence bound of SGD is $O\big(\frac{f(x_1)-f^*}{k\alpha} + L\alpha\sigma^2\big)$ with $\alpha = O(\frac{1}{L})$ (see, e.g., Theorem 4.8 of [4]). Therefore, Theorem 1 tells us that with $\alpha = O(\frac{1-\beta}{L})$, SGDM achieves the same convergence bound as SGD. 2. In contrast, the radius of the stationary distribution for SGDM in [26] and [25] is $O(\frac{\alpha\sigma^2}{1-\beta})$, and the latter also assumes that $\nabla f$ is uniformly bounded. 3. In Theorem 2 and Corollary 1, the convergence bounds hold for $k \ge k_0 = \lfloor\frac{\log 0.5}{\log\beta}\rfloor$, where $k_0$ is a mild constant (for example, $k_0 = 6$ for the popular choice $\beta = 0.9$). When $r = 1-\alpha\mu$, the $O(r^k)$ part in Corollary 1 matches the lower bound established in Proposition 3 of [12]. 4. The convergence bound of SGD under strong convexity is $O\big((1-\alpha\mu)^k + \frac{L}{\mu}\alpha\sigma^2\big)$ (see, e.g., Theorem 4.6 of [4]); our result for SGDM in Corollary 1 recovers this bound when $\beta = 0$.
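Before moving on to the multistage analysis, here is a small numerical check (on a toy quadratic, purely illustrative) of Lemma 3 and of the claim below (8) that the multistage auxiliary sequence coincides with (6) when a constant stepsize and momentum weight are used.

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, alpha, beta, sigma = 4, 50, 0.05, 0.9, 0.1
H = rng.normal(size=(d, d)); H = H.T @ H + np.eye(d)    # Hessian of a toy quadratic f(x) = 0.5 x^T H x

x_prev, x, m = None, rng.normal(size=d), np.zeros(d)
A1 = alpha * beta / (1 - beta)               # the constant alpha*beta/(1-beta) from (7), fixed here
for k in range(1, K + 1):
    g_tilde = H @ x + sigma * rng.normal(size=d)         # stochastic gradient of the quadratic
    m_prev = m
    m = beta * m + (1 - beta) * g_tilde
    x_new = x - alpha * m

    z_k  = x - A1 * m_prev                   # z_k from (8)
    z_k1 = x_new - A1 * m                    # z_{k+1} from (8)
    assert np.allclose(z_k1 - z_k, -alpha * g_tilde)      # Lemma 3
    if x_prev is not None:                   # for k >= 2, (8) coincides with (6)
        assert np.allclose(z_k, x / (1 - beta) - beta * x_prev / (1 - beta))

    x_prev, x = x, x_new

print("Lemma 3 and the equivalence of (6) and (8) hold on this run.")
```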
5 Convergence of Multistage SGDM

In this section, we switch to Multistage SGDM (Algorithm 1). Let us first show that when (7) is applied, we can define the constants $c_i$ properly so that (5) still produces a Lyapunov function.

Proposition 3. Let Assumption 1 hold. In Algorithm 1, let the parameters satisfy (7) with $A_1 = \frac{1}{24\sqrt{2}L}$. In addition, let
$$\frac{1-\beta_1}{\beta_1} \le 12\,\frac{1-\beta_n}{\sqrt{\beta_n+\beta_n^2}}, \qquad c_1 = \frac{\frac{\alpha_1^2}{1-\beta_1}\cdot\frac{\beta_n+\beta_n^2}{(1-\beta_n)^2}L^3}{1-4\alpha_1^2\frac{\beta_n+\beta_n^2}{(1-\beta_n)^2}L^2},$$
and for any $i\ge 1$, let
$$c_{i+1} = c_i - \Big(4c_1\alpha_1^2 + \frac{L\alpha_1^2}{1-\beta_1}\Big)\beta_n^i\Big(i+\frac{\beta_n}{1-\beta_n}\Big)L^2.$$
Then, we have $c_i > 0$ for any $i\ge 1$. Furthermore, with $z_k$ defined in (8), for any $k\ge 1$ we have
$$\mathbb{E}[L_{k+1}-L_k] \le \Big(-\alpha(k) + \frac{3-\beta(k)+2\beta^2(k)}{2(1-\beta(k))}L\alpha^2(k) + 4c_1\alpha^2(k)\Big)\mathbb{E}[\|g_k\|^2] + \Big(\beta^2(k)L\alpha^2(k)\,\frac{12\beta_1}{\sqrt{\beta_n+\beta_n^2}}\,\sigma^2 + \frac{1}{2}L\alpha^2(k)\sigma^2 + 4c_1(1-\beta_1)\alpha^2(k)\sigma^2\Big),$$
where $\alpha(k)$ and $\beta(k)$ are the stepsize and momentum weight applied at the $k$th iteration, respectively.

Theorem 3. Let Assumption 1 hold. Under the same settings as in Proposition 3, let $\beta_1 \ge \frac{1}{2}$ and let $A_2$ be large enough that $\beta_i^{2T_i} \le \frac{1}{2}$ for $i=1,2,\dots,n$. Then, we have
$$\frac{1}{n}\sum_{l=1}^{n}\frac{1}{T_l}\sum_{i=T_1+\dots+T_{l-1}+1}^{T_1+\dots+T_l}\mathbb{E}[\|g_i\|^2] \le \frac{2(f(x_1)-f^*)}{nA_2} + \frac{1}{n}\sum_{l=1}^{n}\Big(24\beta_l^2\,\frac{\beta_1}{\sqrt{\beta_n+\beta_n^2}}\,L + 3L\Big)\alpha_l\sigma^2 = O\Big(\frac{f(x_1)-f^*}{nA_2} + \frac{1}{n}\sum_{l=1}^{n}L\alpha_l\sigma^2\Big). \qquad (11)$$

Remark 2. 1. The left-hand side of (11) is the average over the $n$ stages of the per-stage averaged squared gradient norm. 2. On the right-hand side of (11), the first term dominates in the initial stages; we can therefore apply large $\alpha_i$ in these stages to accelerate initial convergence, and use smaller $\alpha_i$ in later stages so that the size of the stationary distribution is small. In contrast, (static) SGDM needs to use a small stepsize $\alpha$ throughout to keep the size of the stationary distribution small. 3. It is unclear whether the iteration complexity of Multistage SGDM is better than that of SGDM. However, we do observe that Multistage SGDM is faster numerically. We leave a possible improved analysis of Multistage SGDM to future work.

6 Experiments

In this section, we verify our theoretical claims by numerical experiments. For each combination of algorithm and training task, training is performed with the 3 random seeds 1, 2, and 3. Unless otherwise stated, we report the average of the losses over the past $m$ batches, where $m$ is the number of batches in the whole dataset. Our implementation is available at https://github.com/gao-yuan-hangzhou/improved-analysis-sgdm. Additional implementation details can be found in App. E.

6.1 Logistic regression

Setup. The MNIST dataset consists of $n = 60000$ labeled examples of $28\times 28$ gray-scale images of handwritten digits in $K = 10$ classes $0, 1, \dots, 9$. For all algorithms, we use batch size $s = 64$ (and hence the number of batches per epoch is $m = 1874$) and $T = 50$ epochs. The regularization parameter is $\lambda = 5\times 10^{-4}$.

The effect of α in (static) SGDM. By Theorem 2 we know that, with a fixed $\beta$, a larger $\alpha$ leads to a faster decrease of the loss toward the stationary distribution; however, the size of the stationary distribution is also larger. This is well illustrated in Figure 1. For example, $\alpha = 1.0$ and $\alpha = 0.5$ make the losses decrease more rapidly than $\alpha = 0.1$, while during later iterations $\alpha = 0.1$ leads to a lower final loss.
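Before turning to the multistage experiments, here is a compact sketch of Algorithm 1 with the parameter coupling (7) on a toy least-squares problem. It is purely illustrative: the objective, the constants, and the stage lengths are placeholders and not the MNIST setup used in the paper.

```python
import numpy as np

def multistage_sgdm(grad_fn, x0, A1=1.0, A2=2.0, stage_lengths=(30, 60, 210)):
    """Algorithm 1 under the coupling (7): alpha_i = A2 / T_i and beta_i = A1 / (A1 + alpha_i)."""
    x, m = x0.copy(), np.zeros_like(x0)
    for T_i in stage_lengths:
        alpha = A2 / T_i                     # alpha_i * T_i = A2
        beta = A1 / (A1 + alpha)             # equivalent to alpha_i * beta_i / (1 - beta_i) = A1
        for _ in range(T_i):
            g = grad_fn(x)
            m = beta * m + (1 - beta) * g    # heavy-ball momentum as in (2)
            x = x - alpha * m
    return x

# Toy least-squares objective with minibatch gradients (illustrative only).
rng = np.random.default_rng(3)
n, d, s = 200, 10, 16
Q = rng.normal(size=(n, d))
y = Q @ rng.normal(size=d)

def grad_fn(x):
    idx = rng.integers(0, n, size=s)                      # sample a minibatch of size s
    return Q[idx].T @ (Q[idx] @ x - y[idx]) / s

x_hat = multistage_sgdm(grad_fn, np.zeros(d))
print(0.5 * np.mean((Q @ x_hat - y) ** 2))                # final training loss
```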
Multistage SGDM. We take 3 stages for Multistage SGDM. The parameters are chosen according to (7): $T_1 = 3$, $T_2 = 6$, $T_3 = 21$, $\alpha_i = A_2/T_i$, $\beta_i = A_1/(A_1+\alpha_i)$, where $A_2 = 2.0$ and $A_1 = 1.0$ (here $A_1$ is not set to its theoretical value $\frac{1}{24\sqrt{2}L}$ from Proposition 3, since the dataset is very large and the gradient Lipschitz constant $L$ cannot be computed easily). We compare Multistage SGDM with SGDM with $(\alpha,\beta) = (0.66, 0.9)$ and with $(\alpha,\beta) = (0.095, 0.9)$, where 0.66 and 0.095 are the stepsizes of the first and last stage of Multistage SGDM, respectively. The training losses of the initial and later iterations are shown in Figure 2. We can see that SGDM with $(\alpha,\beta) = (0.66, 0.9)$ converges faster initially but has a higher final loss, while SGDM with $(\alpha,\beta) = (0.095, 0.9)$ behaves the other way around. Multistage SGDM takes advantage of both, as predicted by Theorem 3. The performances of SGDM and vanilla SGD with the same stepsize are similar.

6.2 Image classification

For the task of training ResNet-18 on the CIFAR-10 dataset, we compare Multistage SGDM, a baseline SGDM, and YellowFin [28], an automatic momentum tuner based on heuristics from optimizing strongly convex quadratics. The initial learning rate of YellowFin is set to 0.1 (we experimented with initial learning rates 0.001 (the default), 0.01, 0.1, and 0.5, each repeated 3 times, and found 0.1 to be the best in terms of the final training loss), and the other parameters are set to their default values. All algorithms are run for $T = 50$ epochs and the batch size is fixed at $s = 128$. For Multistage SGDM, the parameter choices are governed by (7): the stage lengths are $T_1 = 5$, $T_2 = 10$, and $T_3 = 35$; we take $A_1 = 1.0$, $A_2 = 2.0$ and set the per-stage stepsizes and momentum weights as $\alpha_i = A_2/T_i$ and $\beta_i = A_1/(A_1+\alpha_i)$ for stages $i = 1, 2, 3$. For the baseline SGDM, the stepsize schedule of Multistage SGDM is applied, but with a fixed momentum $\beta = 0.9$.

In Figure 3, we present the training losses and end-of-epoch validation accuracy of the tested algorithms. We can see that Multistage SGDM performs the best. The baseline SGDM is slightly worse, possibly because of its fixed momentum weight. Finally, Multistage SGDM can reach a test accuracy of 93% in around 200 epochs.

7 Summary and Future Directions

In this work, we provide new theoretical insights into the convergence behavior of SGDM and Multistage SGDM. For SGDM, we show that it is as fast as plain SGD in both nonconvex and strongly convex settings. For the widely adopted Multistage SGDM, we establish its convergence and show the advantage of stagewise training. There are still open problems to be addressed. For example: (a) Is it possible to show that SGDM converges faster than SGD for special objectives such as quadratic ones? (b) Are there more efficient parameter choices than (7) that guarantee even faster convergence?

Broader Impact

The results of this paper improve the performance of stochastic gradient descent with momentum as well as its multistage version. Our study will also benefit the machine learning community. We do not believe that the results in this work will cause any ethical issue or put anyone at a disadvantage in our society.

Acknowledgements

Yanli Liu and Wotao Yin were partially supported by ONR Grant N000141712162. Yanli Liu was also supported by a UCLA Dissertation Year Fellowship. Yuan Gao was supported by the Columbia University School of Engineering and Applied Science.
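For reference, the stagewise parameters used in Sections 6.1 and 6.2 follow mechanically from the coupling (7). The small helper below reproduces them (an illustrative snippet, not part of the authors' released implementation).

```python
def stage_params(A1, A2, stage_lengths):
    """Per-stage (T_i, alpha_i, beta_i) induced by (7): alpha_i = A2 / T_i, beta_i = A1 / (A1 + alpha_i)."""
    return [(T, round(A2 / T, 3), round(A1 / (A1 + A2 / T), 3)) for T in stage_lengths]

print(stage_params(1.0, 2.0, (3, 6, 21)))    # MNIST choices of Sec. 6.1: alpha ~ 0.667, 0.333, 0.095
print(stage_params(1.0, 2.0, (5, 10, 35)))   # CIFAR-10 choices of Sec. 6.2: alpha ~ 0.4, 0.2, 0.057
```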
1. What is the focus of the paper regarding SGD with momentum? 2. What are the strengths of the proposed approach, particularly in terms of potential functions and convergence rates? 3. Do you have any concerns or questions regarding the analysis of SGDM, specifically when using $z_k$ instead of $x_k$?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper provides an improved analysis of SGD with momentum (SGDM). To be specific, the authors introduce a new potential function and show that SGDM can converge as fast as SGD for smooth strongly-convex/nonconvex objectives. In addition, they establish the faster convergence of SGDM in the multistage setting. Strengths - The paper introduces a new potential function to analyze the convergence. This technique might be able to gain a great deal of attention in the optimization community, especially for those who are interested in understanding acceleration. - The paper presents the first convergence results for SGDM in the multistage setting. Weaknesses - For the final claims in the strongly convex setting, the authors use $z_k$ instead of $x_k$. However, $z_k$ is not the output of SGDM. None of the previous analyses establishes their convergence based on the auxiliary sequence $z_k$. Since $z_k$ is not a convex combination of $x_k$, I am curious about how it will influence our results.
NIPS
1. What are the concerns regarding the proof of Lemma 1 in the paper? 2. What are the strengths and weaknesses of the paper, particularly in its contributions, assumptions, and empirical evaluations? 3. How does the reviewer assess the significance and novelty of the proposed approach compared to prior works? 4. Are there any questions or suggestions regarding the paper's content, such as improvements or modifications to the proposed method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Based on the authors' response after the rebuttal, it is possible to fix the error. -------------------------------------------- Thanks for your feedback. After carefully reading the response, I still have a concern about the proof of Lemma 1. ----------------------------------------------------------------------- This paper analyzes SGD with momentum under different settings via Lyapunov functions. For strongly convex and nonconvex functions, SGDM enjoys the same convergence rate as SGD, while the multistage setting leads to acceleration. Strengths This work is based on sound assumptions, and the empirical evaluation seems reasonable to me. Weaknesses - (7) requires that \alpha_i and \beta_i change at the same time, which makes the multistage setting less practical. - Since none of the theorems on SGDM improve the convergence rate of SGD, the results seem less interesting.
NIPS
Our analysis relies on a new observation that, the update direction mk of SGDM (2) has a controllable deviation from the current full gradient ∇f(xk), and enjoys a smaller variance. Inspired by this, we construct a new Lyapunov function that properly handles this deviation and exploits an auxiliary sequence to take advantage of the reduced variance. Compared to aforementioned previous work, our analysis applies to not only least squares, does not assume uniformly bounded gradient, and improves the convergence bound. 2. For the more popular SGDM in the multistage setting (Algorithm 1), we establish its convergence and demonstrate that the multistage strategy are faster at initial stages. Specifically, we allow larger stepsizes in the first few stages to boost initial performance, and smaller stepsizes in the final stages decrease the size of stationary distribution. Theoretically, we properly redefine the aforementioned auxiliary sequence and Lyapunov function to incorporate the stagewise parameters. To the best of our knowledge, this is the first convergence guarantee for SGDM in the multistage setting. 1.2 Other related work Nesterov’s momentum achieves optimal convergence rate in deterministic optimization [18], and has also been combined with SGD for neural network training [24]. Recently, its multistage version has been analyzed for convex or strongly convex objectives [3, 14]. Other forms of momentum for stochastic optimization include PID Control-based methods [2], Accelerated SGD [12], and Quasi-Hyperbolic Momentum [17]. In this work, we restrict ourselves to heavy-ball momentum, which is arguably the most popular form of momentum in current deep learning practice. 2 Notation and Preliminaries Throughout this paper, we use ‖ · ‖ for vector `2-norm, 〈·, ·〉 stands for dot product. Let gk denote the full gradient of f at xk, i.e., gk := ∇f(xk), and f∗ := minx∈Rd f(x). Definition 1. We say that f : Rd → R is L−smooth with L ≥ 0, if it is differentiable and satisfies f(y) ≤ f(x) + 〈∇f(x), y − x〉+ L 2 ‖y − x‖2,∀x, y ∈ Rd . We say that f : Rd → R is µ−strongly convex with µ ≥ 0, if it satisfies f(y) ≥ f(x) + 〈∇f(x), y − x〉+ µ 2 ‖y − x‖2,∀x, y ∈ Rd . The following assumption is effective throughout, which is standard in stochastic optimization. Assumption 1. 1. Smoothness: The objective f(x) in (1) is L−smooth. 2. Unbiasedness: At each iteration k, g̃k satisfies Eζk [g̃k] = gk. 3. Independent samples: the random samples {ζk}∞k=1 are independent. 4. Bounded variance: the variance of g̃k with respect to ζk satisfies Varζk(g̃k) = Eζk [‖g̃k − gk‖2] ≤ σ2 for some σ2 > 0. Unless otherwise noted, all the proof in the paper are deferred to the appendix. 3 Key Ingredients of Convergence Theory In this section, we present some key insights for the analysis of stochastic momentum methods. For simplicity, we first focus on the case of fixed stepsize and momentum weight, and make proper generalizations for Multistage SGDM in App. C. 3.1 A key observation on momentum In this section, we make the following observation on the role of momentum: With a momentum weight β ∈ [0, 1), the update vector mk enjoys a reduced “variance" of (1− β)σ2, while having a controllable deviation from the full gradient gk in expectation. First, without loss of generality, we can take m0 = 0, and express mk as mk = (1− β) k∑ i=1 βk−ig̃i. (3) mk is a moving average of the past stochastic gradients, with smaller weights for older ones1. 
we have the following result regarding the “variance" of mk, which is measured between mk and its deterministic version (1− β) ∑k i=1 β k−igi. Lemma 1. Under Assumption 1, the update vector mk in SGDM (2) satisfies E ∥∥∥∥∥mk − (1− β) k∑ i=1 βk−igi ∥∥∥∥∥ 2 ≤ 1− β 1 + β (1− β2k)σ2. Lemma 1 follows directly from the property of the moving average. On the other hand, (1−β) ∑k i=1 β k−igi is a moving average of all past gradients, which is in contrast to SGD. It seems unclear how far is (1− β) ∑k i=1 β k−igi from the ideal descent direction gk, which could be unbounded unless stronger assumptions are imposed. Previous analysis such as [25] and [9] make the blanket assumption of bounded∇f to circumvent this difficulty. In this work, we provide a different perspective to resolve this issue. Lemma 2. Under Assumption 1, we have E ∥∥∥∥∥ 11− βk (1− β) k∑ i=1 βk−igi − gk ∥∥∥∥∥ 2 ≤ k−1∑ i=1 ak,i E[‖xi+1 − xi‖2], where ak,i = L2βk−i 1− βk ( k − i+ β 1− β ) . (4) 1Note the sum of weights (1− β) ∑k i=1 β k−i = 1− βk → 1 as k →∞. From Lemma 2, we know the deviation of 1 1−βk (1− β) ∑k i=1 β k−igi from gk is controllable sum of past successive iterate differences, in the sense that the coefficients ak,i decays linearly for older ones. This inspires the construction of a new Lyapunov function to handle the deviation brought by the momentum, as we shall see next. 3.2 A new Lyapunov function Let us construct the following Lyapunov function for SGDM: Lk = ( f(zk)− f? ) + k−1∑ i=1 ci‖xk+1−i − xk−i‖2. (5) In the Lyapunov function (5), {ci}∞i=1 are positive constants to be specified later corresponding to the deviation described in Lemma 2. Since the coefficients in (4) converges linearly to 0 as k →∞, we can choose {ci}∞i=1 in a diminishing fashion, such that this deviation can be controlled, and Lk defined in (5) is indeed a Lyapunov function under strongly convex and nonconvex settings (see Propositions 1 and 2). In (5), zk is an auxiliary sequence defined as zk = { xk k = 1, 1 1−βx k − β1−βx k−1 k ≥ 2. (6) This auxiliary sequence first appeared in the analysis of deterministic heavy ball methods in [7], and later applied in the analysis of SGDM [26, 25]. It enjoys the following property. Lemma 3. zk defined in (6) satisfies zk+1 − zk = −αg̃k. Lemma 3 indicates that it is more convenient to analyze zk than xk since zk behaves more like a SGD iterate, although the stochastic gradient g̃k is not taken at zk. Since the coefficients of the deviation in Lemma 2 converges linearly to 0 as k →∞, we can choose {ci}∞i=1 in a diminishing fashion, such that this deviation can be controlled. Remarkably, we shall see in Sec. 4 that with c1 = O ( L 1−β ) , Lk defined in (5) is indeed a Lyapunov function under strongly convex and nonconvex settings, and that SGDM converges as fast as SGD. Now, let us turn to the Multistage SGDM (Algorithm 1), which has been very successful in neural network training. However, its convergence still remains unclear except for the momentum-free case. To establish convergence, we require the parameters of Multistage SGDM to satisfy αiβi 1− βi ≡ A1, for i = 1, 2, ...n. αiTi ≡ A2, for i = 1, 2, ...n. 0 ≤ β1 ≤ β2 ≤ ... ≤ βn < 1. (7) where αi, βi, and Ti are the stepsize, momentum weight, and stage length of ith stage, respectively, and A1, A2 are properly chosen constants. In principle, one applies larger stepsizes αi at the initial stages, which will accelerate initial convergence, and smaller stepsizes for the final stages, which will shrink the size of final stationary distribution. 
As a result, (7) stipulates that less iterations are required for stages with large stepsizes and more iterations for stages with small stepsizes. Finally, (7) requires the momentum weights to be monotonically increasing, which is consistent with what’s done in practice [24]. often, using constant momentum weight also works. Under the parameter choices in (7), let us define the auxiliary sequence zk by zk = xk −A1mk−1. (8) This {zk}∞k=1 sequence reduces to (6) when a constant stepsize and momentum weight are applied. Furthermore, the observations made in Lemmas 1, 2, and 3 can also be generalized (see Lemmas 4, 5, 6, and 7 in App. C). In Sec. 5. we shall see that with (7) and appropriately chosen {ci}∞i=1, Lk in (5) also defines a Lyapunov function in the multistage setting, which in turn leads to the convergence of Multistage SGDM. 4 Convergence of SGDM In this section, we proceed to establish the convergence of SGDM described in (2). First, by following the idea presented in Sec. 3, we can show that Lk defined in (5) is a Lyapunov function. Proposition 1. Let Assumption 1 hold. In (2), let α ≤ 1−β 2 √ 2L √ β+β2 . Let {ci}∞i=1 in (5) be defined by c1 = β+β2 (1−β)3L 3α2 1− 4α2 β+β2(1−β)2L2 , ci+1 = ci − ( 4c1α 2 + Lα2 1− β ) βi(i+ β 1− β )L2 for all i ≥ 1. Then, ci > 0 for all i ≥ 1, and E[Lk+1 − Lk] ≤ ( −α+ 3− β + β 2 2(1− β) Lα2 + 4c1α 2 ) E[‖gk‖2] + ( β2 2(1 + β) Lα2σ2 + 1 2 Lα2σ2 + 2c1 1− β 1 + β α2σ2 ) . (9) By telescoping (9), we obtain the stationary convergence of SGDM under nonconvex settings. Theorem 1. Let Assumption 1 hold. In (2), let α ≤ α ≤ min{ 1−βL(4−β+β2) , 1−β 2 √ 2L √ β+β2 }. Then, 1 k k∑ i=1 E[‖gi‖2] ≤ 2 ( f(x1)− f∗ ) kα + ( β + 3β2 2(1 + β) + 1 ) Lασ2 = O ( f(x1)− f∗ kα + Lασ2 ) . (10) Now let us turn to the strongly convex setting, for which we have Proposition 2. Let Assumption 1 hold. Assume in addition that f is µ−strongly convex. In (2), let α ≤ min{ 1−β5L , 1−β L ( 3−β+2β2+ 48 √ β 25 2L+18µ L )}. Then, there exists positive constants ci for (5) such that for all k ≥ k0 := b log 0.5log β c, we have E[Lk+1 − Lk] ≤ − αµ 1 + 8µL E[Lk] + ( 1 + β + β2 2(1 + β) L+ 1− β 1 + β 2c1)α 2σ2 + β2 + Lα2 β2 1−β (1 + 8µL )(1 + β) 2µα2σ2. The choices of {ci}∞i=1 is similar to those of Proposition 1 and can be found in App. B.4. With Proposition 2, we immediately have Theorem 2. Let Assumption 1 hold and assume in addition that f is µ−strongly convex. Under the same settings as in Proposition 2, for all k ≥ k0 = b log 0.5log β c we have E[f(zk)− f∗] ≤ ( 1− αµ 1 + 8µL )k−k0 E[Lk0 ] + ( 1 + 8µ L ) 1 + β + β2 2(1 + β) L µ ασ2 + ( 1 + 8µ L )( 1 1 + β 12 √ β 25 2L+ 18µ µ ασ2 + β2 + Lα10 β 2 1 + 8µL 2 1 + β ασ2 ) = O ( (1− αµ)k + L µ ασ2 ) . Corollary 1. Let Assumption 1 hold and assume in addition that f is µ−strongly convex. Under the same settings as in Proposition 2, for all k ≥ k0 = b log 0.5log β c we have E[f(xk)− f∗] = O ( rk + L µ ασ2 ) , where r = max{1− αµ, β}. Remark 1. 1. Under nonconvex settings, the classical convergence bound of SGD is O ( f(x1)−f∗ kα + Lασ 2 ) with α = O( 1L ) (see, e.g., Theorem 4.8 of [4]). Therefore, Theorem 1 tells us that with α = O( 1−βL ), SGDM achieves the same convergence bound as SGD. 2. In contrast, the radius of the stationary distribution for SGDM in [26] and [25] is O( ασ 2 1−β ), and the latter one also assumes that∇f is uniformly bounded. 3. In Theorem 2 and Corollary 1, the convergence bounds hold for k ≥ k0 = b log 0.5log β c, where k0 is a mild constant1. 
when r = 1− αµ, the O(rk) part in Corollary 1 matches the lower bound established in Proposition 3 of [12]. 4. The convergence bound of SGD under strong convexity is O ( (1− αµ)k + Lµασ 2 ) (see, e.g, Theorem 4.6 of [4]), our result for SGDM in Corollary 1 recovers this when β = 0. 5 Convergence of Multistage SGDM In this section, we switch to the Multistage SGDM (Algorithm 1). Let us first show that when the (7) is applied, we can define the constants ci properly so that (5) still produces a Lyapunov function. Proposition 3. Let Assumption 1 hold. In Algorithm 1, let the parameters satisfy (7) with A1 = 1 24 √ 2L . In addition, let 1− β1 β1 ≤ 12 1− βn√ βn + β2n , c1 = α21 1−β1 βn+β 2 n (1−βn)2L 3 1− 4α21 βn+β2n (1−βn)2L 2 , and for any i ≥ 1, let ci+1 = ci − ( 4c1α 2 1 + L α21 1− β1 ) βin(i+ βn 1− βn )L2. Then, we have ci > 0 for any i ≥ 1. Furthermore, with zk defined in (8), for any k ≥ 1, we have E[Lk+1 − Lk] ≤ ( − α(k) + 3− β(k) + 2β 2(k) 2(1− β(k)) Lα2(k) + 4c1α 2(k) ) E[‖gk‖2] + ( β2(k)Lα2(k)12 β1√ βn + β2n σ2 + 1 2 Lα2(k)σ2 + 4c1(1− β1)α2(k)σ2 ) . where α(k), β(k) are the stepsize and momentum weight applied at kth iteration, respectively. Theorem 3. Let Assumption 1 hold. Under the same settings as in Proposition 3, let β1 ≥ 12 and let A2 be large enough such that β2Tii ≤ 1 2 for i = 1, 2, ...n. Then, we have 1 n n∑ l=1 1 Tl T1+..+Tl∑ i=T1+..+Tl−1+1 E[‖gi‖2] ≤ 2(f(x 1)− f∗) nA2 + 1 n n∑ l=1 ( 24β2l β1√ βn + β2n L+ 3L ) αlσ 2 = O ( f(x1)− f∗ nA2 + 1 n n∑ l=1 Lαlσ 2 ) . (11) Remark 2. 1. On the left hand side of (11), we have the average of the averaged squared gradient norm of n stages. 1For example, we have k0 = 6 for the popular choice β = 0.9. 2. On the right hand side of (11), the first term dominates at initial stages, we can apply large αi for these stages to accelerate initial convergence, and use smaller αi for later stages so that the size of stationary distribution is small. In contrast, (static) SGDM need to use a small stepsize α to make the size of stationary distribution with small. 3. It is unclear whether the iteration complexity of Multistage SGDM is better than SGDM or not. However, we do observe that Multistage SGDM is faster numerically. We leave the possible improved analysis of Multistage SGDM to future work. 6 Experiments In this section, we verify our theoretical claims by numerical experiments. For each combination of algorithm and training task, training is performed with 3 random seeds 1, 2, 3. Unless otherwise stated, we report the average of losses of the past m batches, where m is the number of batches for the whole dataset. Our implementation is available at GitHub1. Additional implementation details can be found in App. E. 6.1 Logistic regression Setup. The MNIST dataset consists of n = 60000 labeled examples of 28× 28 gray-scale images of handwritten digits in K = 10 classes 0, 1, . . . , 9. For all algorithms, we use batch size s = 64 (and hence number batches per epoch is m = 1874), number of epochs T = 50. The regularization parameter is λ = 5× 10−4. The effect of α in (static) SGDM. By Theorem 2 we know that, with a fixed β, a larger α leads to faster loss decrease to the stationary distribution. However, the size of the stationary distribution is also larger. This is well illustrated in Figure 1. For example, α = 1.0 and α = 0.5 make losses decrease more rapidly than α = 0.1. During later iterations, α = 0.1 leads to a lower final loss. Multistage SGDM. We take 3 stages for Multistage SGDM. 
The parameters are chosen according to (7): T1 = 3, T2 = 6, T3 = 21, αi = A2/Ti, βi = A1/(A1 + αi), where A2 = 2.0 and A1 = 1.0.¹ We compare Multistage SGDM with SGDM with (α, β) = (0.66, 0.9) and (α, β) = (0.095, 0.9), where 0.66 and 0.095 are the stepsizes of the first and last stage of Multistage SGDM, respectively. The training losses of the initial and later iterations are shown in Figure 2. We can see that SGDM with (α, β) = (0.66, 0.9) converges faster initially, but has a higher final loss, while SGDM with (α, β) = (0.095, 0.9) behaves the other way. Multistage SGDM takes advantage of both, as predicted by Theorem 3. The performances of SGDM and vanilla SGD with the same stepsize are similar. 6.2 Image classification For the task of training ResNet-18 on the CIFAR-10 dataset, we compare Multistage SGDM, a baseline SGDM, and YellowFin [28], an automatic momentum tuner based on heuristics from optimizing strongly convex quadratics. ¹https://github.com/gao-yuan-hangzhou/improved-analysis-sgdm ¹Here, A1 is not set to its theoretical value 1/(24L), since the dataset is very large and the gradient Lipschitz constant L cannot be computed easily. The initial learning rate of YellowFin is set to 0.1¹, and the other parameters are set to their default values. All algorithms are run for T = 50 epochs and the batch size is fixed as s = 128. For Multistage SGDM, the parameter choices are governed by (7): the stage lengths are T1 = 5, T2 = 10, and T3 = 35. Take A1 = 1.0, A2 = 2.0, and set the per-stage stepsizes and momentum weights as αi = A2/Ti and βi = A1/(A1 + αi), for stages i = 1, 2, 3. For the baseline SGDM, the stepsize schedule of Multistage SGDM is applied, but with a fixed momentum β = 0.9. In Figure 3, we present the training losses and end-of-epoch validation accuracy of the tested algorithms. We can see that Multistage SGDM performs the best. Baseline SGDM is slightly worse, possibly because of its fixed momentum weight. Finally, Multistage SGDM can reach a test accuracy of 93% around 200 epochs. 7 Summary and Future Directions In this work, we provide new theoretical insights into the convergence behavior of SGDM and Multistage SGDM. For SGDM, we show that it is as fast as plain SGD in both nonconvex and strongly convex settings. For the widely adopted multistage SGDM, we establish its convergence and show the advantage of stagewise training. There are still open problems to be addressed. For example, (a) Is it possible to show that SGDM converges faster than SGD for special objectives such as quadratic ones? (b) Are there more efficient parameter choices than (7) that guarantee even faster convergence? ¹We have experimented with initial learning rates 0.001 (default), 0.01, 0.1 and 0.5, each repeated 3 times; we found that the choice 0.1 is the best in terms of the final training loss. Broader Impact The results of this paper improve the performance of stochastic gradient descent with momentum as well as its multistage version. Our study will also benefit the machine learning community. We do not believe that the results in this work will cause any ethical issue, or put anyone at a disadvantage in our society. Acknowledgements Yanli Liu and Wotao Yin were partially supported by ONR Grant N000141712162. Yanli Liu was also supported by a UCLA Dissertation Year Fellowship. Yuan Gao was supported by the Columbia University School of Engineering and Applied Science.
1. What are the main contributions and improvements offered by the paper regarding SGD with momentum? 2. What are the strengths of the proposed theoretical analysis? 3. What are the weaknesses or limitations of the paper's analysis and comparisons with other works? 4. Are there any questions or concerns regarding the practicality or effectiveness of the proposed multi-stage SGDM method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions after rebuttal: I have read the rebuttal and the other reviews. I keep my score unchanged. More formal derivations showing that multistage SGDM can achieve epsilon accuracy faster than SGDM would improve the understanding of this paper. I believe this is possible to do, since multistage SGDM allows for larger stepsizes at the initial optimization stages. ------------------------------------------------------------------ The paper presents an improved theoretical analysis of SGD with momentum (SGDM) for non-convex and strongly convex functions, which matches the convergence rate of SGD when the stepsize is small. The paper also proposes and analyses a new multi-stage SGD with momentum: a momentum SGD with a learning rate schedule, where the learning rate is dropped several times during training. Strengths The theoretical analysis is interesting and gives new insights about training SGD with momentum. Weaknesses - In the provided theoretical analysis, SGDM requires smaller stepsizes than standard SGD, which means that in some cases SGD is still faster; this was not discussed in the paper. - The theoretical comparison of multi-stage SGDM to standard SGDM is missing: is the rate of multi-stage SGDM better than the best rate of SGDM to reach a fixed accuracy epsilon? - There is no experimental comparison to the SGD-without-momentum baseline.
NIPS
Title Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback Abstract The ensemble method is a promising way to mitigate the overestimation issue in Q-learning, where multiple function approximators are used to estimate the action values. It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the ‘right’ ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process. To tackle this challenge, we first derive an upper bound and a lower bound on the estimation bias, based on which the ensemble size is adapted to drive the bias to be nearly zero, thereby coping with the impact of the time-varying approximation errors accordingly. Motivated by the theoretic findings, we advocate that the ensemble method can be combined with Model Identification Adaptive Control (MIAC) for effective ensemble size adaptation. Specifically, we devise Adaptive Ensemble Q-learning (AdaEQ), a generalized ensemble method with two key steps: (a) approximation error characterization which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias. Extensive experiments are carried out to show that AdaEQ can improve the learning performance than the existing methods for the MuJoCo benchmark. 1 Introduction Thanks to recent advances in function approximation methods using deep neural networks [20], Q-learning [35] has been widely used to solve reinforcement learning (RL) problems in a variety of applications, e.g., robotic control [23, 13], path planning [15, 24] and production scheduling [34, 21]. Despite the great success, it is well recognized that Q-learning may suffer from the notorious overestimation bias [29, 33, 32, 10, 37], which would significantly impede the learning efficiency. Recent work [9, 11] indicates that this problem also persists in the actor-critic setting. To address this issue, the ensemble method [16, 1, 26, 7] has emerged as a promising solution in which multiple Q-function approximators are used to get better estimation of the action values. Needless to say, the ensemble size, i.e., the number of Q-function approximators used in the target, has intrinsic impact on Q-learning. Notably, it is shown in [6, 17] that while a large ensemble size could completely remove the overestimation bias, it may go to the other extreme and result in underestimation bias and unstable training, which is clearly not desirable. Therefore, instead of simply increasing the ensemble 35th Conference on Neural Information Processing Systems (NeurIPS 2021). size to mitigate the overestimation issue, a fundamental question to ask is:“ Is it possible to determine the right ensemble size on the fly so as to minimize the estimation bias?” Some existing ensemble methods [2, 19, 17] adopt a trial-and-error strategy to search for the ensemble size, which would be time-consuming and require a lot of human engineering for different RL tasks. The approximation error of the Q-function during the learning process plays a nontrivial role in the selection of the ensemble size, since it directly impacts the Q-target estimation accuracy. This however remains not well understood. 
In particular, the fact that the approximation error is time-varying, due to the iterative nature of Q-learning [36, 5], gives rise to the question that whether a fixed ensemble size should be used in the learning process. To answer this question, we show in Section 2.2 that using a fixed ensemble size is likely to lead to either overestimation or underestimation bias, and the bias may shift between overestimation and underestimation because of the time-varying approximation error, calling for an adaptive ensemble size so as to drive the bias close to zero based on the underlying learning dynamics. Thus motivated, in this work we study effective ensemble size adaptation to minimize the estimation bias that hinges heavily on the time-varying approximation errors during the learning process. To this end, we first characterize the relationship among the ensemble size, the function approximation errors, and the estimation bias, by deriving an upper bound and a lower bound on the estimation bias. Our findings reveal that the ensemble size should be selected adaptively in a way to cope with the impact of the time-varying approximation errors. Building upon the theoretic results, we cast the estimation bias minimization as an adaptive control problem where the approximation error during the learning process is treated as the control object, and the ensemble size is adapted based on the feedback of the control output, i.e., the value of the approximation error from the last iteration. The key idea in this approach is inspired from the classic Model Identification Adaptive Control (MIAC) framework [3, 25], where at each step the current system identification of the control object is fed back to adjust the controller, and consequently a new control signal is devised following the updated control law. One main contribution of this work lies in the development of AdaEQ, a generalized ensemble method for the ensemble size adaptation, aiming to minimize the estimation bias during the learning process. Specifically, the approximation error in each iteration is quantified by comparing the difference between the Q-estimates and the Monte Carlo return using the current learned policy over a testing trajectory [29, 17]. Inspired by MIAC, the approximation error serves as the feedback to adapt the ensemble size. Besides, we introduce a ‘tolerance’ parameter in the adaptation mechanism to balance the control tendency towards positive or negative bias during the learning process. In this way, AdaEQ can encompass other existing ensemble methods as special cases, including Maxmin [17], by properly setting this hyperparameter. A salient feature of the feedback-adaptation mechanism is that it can be used effectively in conjunction with both standard Q-learning [22] and actor-critic methods [28, 11]. Experimental results on the continuous-control MuJoCo benchmark [30] show that AdaEQ is robust to the initial ensemble size in different environments, and achieves higher average return, thanks to keeping the estimation bias close to zero, when compared to the state-of-the-art ensemble methods such as REDQ [6] and Average-DQN [2]. Related Work. Bias-corrected Q-learning [18] introduces the bias correction term to reduce the overestimation bias. Double Q-learning is proposed in [12, 33] to address the overestimation issue in vanilla Q-learning, by leveraging two independent Q-function approximators to estimate the maximum Q-function value in the target. 
S-DQN and S-DDQN use the softmax operator instead of the max operator to further reduce the overestimation bias [27]. Self-correcting Q-learning aims to balance the underestimation in double Q-learning and the overestimation in classic Q-learning by introducing a new self-correcting estimator [38]. Weighted Q-learning proposes a new estimator based on the weighted average of the sample means, and conducts the empirical analysis in the discrete action space [8]. Weighted Double Q-learning [37] uses the Q-approximator together with the double Q-approximator to balance the overestimation and underestimation bias. Nevertheless, acquiring independent approximators is often intractable for large-scale tasks. To resolve this issue, the Twin-Delayed Deep Deterministic policy gradient algorithm (TD3) [9] and Soft Actor-Critic (SAC) [11] have been devised to take the minimum over two approximators in the target network. Along a different avenue, the ensemble-based methods generalize double Q-learning to correct the overestimation bias by increasing the number of Q-function approximators. Particularly, Average-DQN [2] takes the average of multiple approximators in the target to reduce the overestimation error, and Random Ensemble Mixture (REM) [1] estimates the target value using a random convex combination of the approximators. It is worth noting that both Average-DQN and REM cannot completely eliminate the overestimation bias. Most recently, Maxmin Q-learning [17] defines a proxy Q-function by choosing the minimum Q-value for each action among all approximators. Similar to Maxmin, Random Ensembled Q-learning (REDQ) [6] formulates the proxy Q-function by choosing only a subset of the ensemble. Nevertheless, both Maxmin and REDQ use a fixed ensemble size. In this study, we introduce an adaptation mechanism for the ensemble size to drive the estimation bias to be close to zero, thereby mitigating the possible overestimation and underestimation issues. 2 Impact of Ensemble Size on Estimation Bias 2.1 Ensemble Q-learning As is standard, we consider a Markov decision process (MDP) defined by the tuple $\langle\mathcal{S},\mathcal{A},P,r,\gamma\rangle$, where $\mathcal{S}$ and $\mathcal{A}$ denote the state space and the action space, respectively. $P(s'|s,a): \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ denotes the probability transition function from the current state $s$ to the next state $s'$ by taking action $a\in\mathcal{A}$, and $r(s,a): \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is the corresponding reward. $\gamma\in(0,1]$ is the discount factor. At each step $t$, the agent observes the state $s_t$, takes an action $a_t$ following a policy $\pi: \mathcal{S}\to\mathcal{A}$, receives the reward $r_t$, and evolves to a new state $s_{t+1}$. The objective is to find an optimal policy $\pi^*$ to maximize the discounted return $R = \sum_{t=0}^{\infty}\gamma^t r_t$. By definition, the Q-function is the expected return when choosing action $a$ in state $s$ and following the policy $\pi$ thereafter: $Q^\pi(s,a) = \mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^t r_t(s_t,a_t)\,\middle|\,s_0 = s, a_0 = a\right]$. Q-learning is an off-policy value-based method that aims at learning the optimal Q-function $Q^*: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$, where the optimal Q-function is a fixed point of the Bellman optimality equation [4]: $\mathcal{T}Q^*(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(s'|s,a)}\left[\max_{a'\in\mathcal{A}} Q^*(s',a')\right]$. (1) Given a transition sample $(s,a,r,s')$, the Bellman operator can be employed to update the Q-function as follows: $Q(s,a) \leftarrow (1-\alpha)Q(s,a) + \alpha y$, with $y := r + \gamma\max_{a'\in\mathcal{A}} Q(s',a')$, (2) where $\alpha$ is the step size and $y$ is the target. Under some conditions, Q-learning can converge to the optimal fixed-point solution asymptotically [31].
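As a concrete illustration of the Bellman update in (2), here is a minimal tabular sketch (not the paper's implementation); Q is assumed to be a NumPy array indexed by discrete states and actions, and the epsilon-greedy helper is purely illustrative.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """One tabular Q-learning step following (2):
    Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * (r + gamma * max_a' Q(s', a'))."""
    y = r + gamma * np.max(Q[s_next])              # target in (2)
    Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * y
    return Q

def epsilon_greedy(Q, s, epsilon, rng):
    """Behavior policy used to collect transitions (illustrative)."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))       # random action
    return int(np.argmax(Q[s]))                    # greedy action
```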
In deep Q-learning, the Q-function is approximated by a neural network, and it has been shown [33] that the approximation error, amplified by the max operator in the target, results in the overestimation phenomena. One promising approach to address this issue is the ensemble Q-learning method, which is the main subject of this study. The Ensemble Method. Specifically, the ensemble method maintains N separate approximators Q1, Q2, · · · , QN of the Q-function, based on which a subset of these approximators is used to devise a proxy Q-function. For example, in Average-DQN [2], the proxy Q-function is obtained by computing the average value over all N approximators to reduce the overestimation bias: Qave(·) = 1N ∑N i=1Q i(·). However, the average operation cannot completely eliminate the overestimation bias, since the average of the overestimation bias is still positive. To tackle this challenge, Maxmin [17] and REDQ [6] take the ‘min’ operation over a subsetM ( size M ) of the ensemble: Qproxy(·) = mini∈MQi(·). (3) The target value in the ensemble-based Q-learning is then computed as y = r +maxa′∈AQproxy. It is worth noting that in the existing studies, the in-target ensemble size M , pre-determined for a given environment, remain fixed in the learning process. 2.2 An Illustrative Example It is known that the determination of the optimal ensemble size is highly nontrivial, and a poor choice of the ensemble size would degrade the performance of ensemble Q-learning significantly [17]. As mentioned earlier, it is unclear a priori if a fixed ensemble size should be used in the learning process. (a) Function approximation. (b) Five function approximators. (c) Ensemble via ‘min’ operator. (d) Estimation error. Figure 2: Illustration of estimation bias in the ensemble method. (a) Each approximator is fitted to the noisy values (green dots) at the sampled states independently. (b) Five Q-function approximators are obtained for both actions (green lines and blue lines). (c) Apply the min operator over M (M = 3) randomly selected approximators to obtain a proxy approximator for each action. (d) The estimation error is obtained by comparing the underlying true value (purple line in (a)) and the target value using the proxy approximator. (a) Estimation bias vs. τ . (b) Estimation bias vs. numbers of actions. Figure 3: Illustration of overestimation and underestimation phenomena for different ensemble sizes. In what follows, we use an example to illustrate the potential pitfalls in the ensemble methods by examining the sensitivity of the estimation bias to the ensemble size [6, 17]. Along the same line as in [33], we consider an example with a real-valued continuous state space. In this example, there are two discrete actions available at each state and the optimal action values depend only on the state, i.e., in each state both actions result in the same optimal value Q∗(s, ·), which is assumed to be Q∗(s, ·) = sin(s). Figure 2 demonstrates how the ensemble method is carried out in four stages: (I) For each Q-function approximator Qi, i = 1, 2, · · · , 5, we first generate 10 noisy action-value samples independently (green dots in Figure 2(a)). Let ei(s, a) denote the approximation error of Qi: Qi(s, a) = Q∗(s, a) + ei(s, a), with ei(s, a) ∼ U(−τi, τi), (4) where τi ∼ U(0, τ) models the approximation error distribution for the i-th approximator. 
Note that the assumption on the uniform error distribution is commonly used to indicate that both positive and negative approximation error are possible in Q-function approximators [29][17][6]. (II) Next, Figure 2(b) illustrates the ensemble (N = 5) of approximators for two actions, where each approximator is a 6-degree polynomial that fits the noisy values at sampled states. (III) Following the same ensemble approach in [6][17], we randomly choose M approximators from the ensemble and take the minimum over them to obtain a proxy approximator for each action, resulting in the dashed lines in Figure 2(c). (IV) Finally, the maximum action value of the proxy approximator is used as the target to update the current approximators. To evaluate the target value estimation error, Figure 2(d) depicts the difference between the obtained target value and the underlying true value when using different ensemble size M . As in [33], we utilize the average estimation error (i.e., estimation bias) to quantify the performance of current approximators. For example, when the ensemble size M = 2, the red line is above zero for most states, implying the overestimation tendency in the target. Clearly, Figure 2(d) indicates that the estimation bias is highly dependent on the ensemble size, and even a change of M can lead the shift from overestimation to underestimation. Since the Q-function approximation error of each approximator changes over time in the training process [5] (examples for this phenomenon can be found in Appendix B.3), we next analyze the impact of the ensemble size on the estimation bias under different approximation error distributions. As shown in Figure 3(a), with a fixed ensemble size M , the estimation bias may shift between positive and negative and be ‘dramatically’ large for some error distributions. In light of this observation, departing from using a fixed size, we advocate to adapt the in-target ensemble size, e.g., set M = 4 when the noise parameter τ > 1.5 and M = 3 otherwise. The estimation bias resulted by this adaptation mechanism is much closer to zero. Besides, Figure 3(b) characterizes the estimation bias under different action spaces, which is also important considering that different tasks normally have different action spaces and the number of available actions may vary in different states even for the same task. The adaptive ensemble approach is clearly more robust in our setting. In a nutshell, both Figure 3(a) and 3(b) suggest that a fixed ensemble size would not work well to minimize the estimation bias during learning for different tasks. This phenomenon has also been observed in the empirical results [17]. In stark contrast, adaptively changing the ensemble size based on the approximation error indeed can help to reduce the estimation bias in different settings. 3 Adaptive Ensemble Q-learning (AdaEQ) Motivated by the illustrative example above, we next devise a generalized ensemble method with ensemble size adaptation to drive the estimation bias to be close to zero, by taking into consideration the time-varying feature of the approximation error during the learning process. Formally, we consider an ensemble of N Q-function approximators, i.e., {Qi}Ni=1, with each approximator initialized independently and randomly. We use the minimum of a subsetM of the N approximators in the Q-learning target as in (3), where the size of subset |M| =M ≤ N . 
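A minimal sketch of the in-target ensemble 'min' operation in (3) and the resulting target value is given below; q_values is assumed to hold the N approximators' action-value estimates at the next state, and the subset is drawn uniformly at random, in the spirit of Maxmin/REDQ-style methods. Names and shapes are illustrative.

```python
import numpy as np

def ensemble_min_target(q_values, r, gamma, M, rng=None):
    """Target y = r + gamma * max_a min_{i in subset} Q^i(s', a), following (3).

    q_values: array of shape (N, A) with Q^i(s', a) for all N approximators
              and all A actions at the next state s'.
    M:        in-target ensemble size, i.e., the size of the random subset.
    """
    rng = rng or np.random.default_rng()
    N = q_values.shape[0]
    subset = rng.choice(N, size=M, replace=False)   # random subset of approximators
    q_proxy = q_values[subset].min(axis=0)          # 'min' over the chosen approximators
    return r + gamma * q_proxy.max()                # 'max' over actions
```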
3.1 Lower Bound and Upper Bound on Estimation Bias We first answer the following key question: “How does the approximation error, together with the ensemble size, impact the estimation bias?” To this end, based on [29], we characterize the intrinsic relationship among the ensemble size $M$, the Q-function approximation error, and the estimation bias, and derive an upper bound and a lower bound on the bias in the tabular case. Without loss of generality, we assume that for each state $s$, there are $A$ available actions. Let $e_i(s,a) := Q^i(s,a) - Q^\pi(s,a)$ be the approximation error for the $i$-th Q-function approximator, where $Q^\pi(s,a)$ is the ground truth of the Q-value for the current policy $\pi$. By using (3) to compute the target Q-value, we define the estimation error in the Bellman equation for transition $(s,a,r,s')$ as $Z_M$: $Z_M := r + \gamma\max_{a'\in\mathcal{A}}\min_{i\in\mathcal{M}} Q^i(s',a') - \left(r + \gamma\max_{a'\in\mathcal{A}} Q^\pi(s',a')\right)$. Here a positive $\mathbb{E}[Z_M]$ implies overestimation bias while a negative $\mathbb{E}[Z_M]$ implies underestimation bias. Note that we use the subscript $M$ to emphasize that the estimation bias is intimately related to $M$. The case with two distributions for Q-function approximation errors. For ease of exposition, we first consider the case when the approximation errors follow one of two uniform distributions, as illustrated in Figure 4(a). Specifically, assume that for $i\in\mathcal{K}\subset\mathcal{M}$ with $|\mathcal{K}| = K$, $e_i(s,a)\sim U(-\tau_1,\tau_1)$, and for $i\in\mathcal{M}\setminus\mathcal{K}$, $e_i(s,a)\sim U(-\tau_2,\tau_2)$. Without loss of generality, we assume that $\tau_1 > \tau_2 > 0$. It is worth noting that in [29, 17, 6], the approximation error for all approximators is assumed to follow the same uniform distribution, i.e., $\tau_1 = \tau_2$, which is clearly more restrictive than the case here with two error distributions. For instance, when only one approximator is chosen to be updated at each step [17], the approximation error distribution of this approximator would change over time and hence differ from the others. We have the following results on the upper bound and lower bound of the estimation bias $\mathbb{E}[Z_M]$. Theorem 1. For the case with two distributions for Q-function approximation errors, the estimation bias $\mathbb{E}[Z_M]$ satisfies $\mathbb{E}[Z_M] \ge \gamma\left(\tau_1(1 - f_{A,K} - 2f_{A,M}) + \tau_2(1 - f_{A,M})\right)$; (5) $\mathbb{E}[Z_M] \le \gamma\left(\tau_1 + \tau_2\left(1 - 2f_{A,M-K} - (1-\beta_K)^A\right)\right)$, (6) where $\beta_K = \left(\frac{1}{2} - \frac{\tau_2}{2\tau_1}\right)^K$ and $f_{A,K} = \frac{1}{K}B\left(\frac{1}{K}, A+1\right) = \frac{\Gamma(A+1)\Gamma(1+\frac{1}{K})}{\Gamma(A+\frac{1}{K}+1)} = \frac{A(A-1)\cdots 1}{(A+\frac{1}{K})(A+\frac{1}{K}-1)\cdots(1+\frac{1}{K})}$, with $B(\cdot,\cdot)$ being the Beta function. (a) Q-function approximation error distributions. (b) Lower bound and upper bound on the estimation bias. (c) Impact of the approximation error on the estimation bias: overestimation vs. underestimation. Figure 4: Illustration of the upper bounds and lower bounds on the estimation bias in Theorem 1. (a) The case where the approximation errors of the Q-approximators can be categorized into two uniform distributions. (b) The lower bound and the upper bound corresponding to (5) and (6), for given $\tau_1, \tau_2, A$: the blue point represents the ‘critical’ point where decreasing the ensemble size may lead to overestimation (the lower bound is positive), and the red point denotes the ‘critical’ point where increasing the ensemble size may lead to underestimation (the upper bound is negative). (c) Due to the time-varying feature of the approximation errors, the blue curve and the red curve depict the ‘critical’ points for the lower bound and the upper bound, respectively. The proof of Theorem 1 is relegated to Appendix A.1. Theorem 1 reveals that the estimation bias depends on the ensemble size as well as the approximation error distributions.
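To make the bounds in Theorem 1 easy to explore numerically (e.g., to locate the critical points discussed next), the following sketch evaluates f_{A,K}, beta_K, and the bounds (5)-(6) as transcribed above. The specific values of tau1, tau2, A, K, and gamma in the scan are illustrative, and SciPy is assumed to be available; this is not the authors' code.

```python
from scipy.special import beta as beta_fn

def f(A, K):
    """f_{A,K} = (1/K) * B(1/K, A + 1), with B the Beta function."""
    return beta_fn(1.0 / K, A + 1) / K

def theorem1_bounds(tau1, tau2, A, M, K, gamma=0.99):
    """Lower and upper bounds (5)-(6) on E[Z_M], as transcribed above."""
    beta_K = (0.5 - tau2 / (2.0 * tau1)) ** K
    lower = gamma * (tau1 * (1 - f(A, K) - 2 * f(A, M)) + tau2 * (1 - f(A, M)))
    upper = gamma * (tau1 + tau2 * (1 - 2 * f(A, M - K) - (1 - beta_K) ** A))
    return lower, upper

# Scan the ensemble size M to look for sign changes of the bounds (illustrative values)
for M in range(2, 11):
    lo, up = theorem1_bounds(tau1=0.5, tau2=0.4, A=2, M=M, K=1)
    print(M, round(lo, 3), round(up, 3))
```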
To get a more concrete sense of Theorem 1, we consider an example where $\tau_1 = 0.5$ and $\tau_2 = 0.4$, as depicted in Figure 4(b), and characterize the relationship between the estimation bias and the ensemble size $M$. Notably, the estimation bias turns negative when the ensemble size $M > M_u = 9$ (red point: the value of $M$ where the upper bound is 0) and becomes positive when $M < M_l = 4$ (blue point: the value of $M$ where the lower bound is 0). In Figure 4(c), we fix $\tau_2 = 0.4$ and show how these two critical points ($M_u$ and $M_l$) change along with $\tau_1$. Here the red shaded area indicates underestimation bias when $M > M_u$, and the blue shaded area indicates overestimation bias when $M < M_l$. Clearly, in order to avoid the positive bias (blue shaded area), it is desirable to increase the ensemble size when the approximation error is large, e.g., $\tau_1 > 0.6$. On the other hand, decreasing the ensemble size is preferable to avoid underestimation (red shaded area) when the approximation error is small, e.g., $\tau_1 < 0.6$. The general case with heterogeneous distributions for Q-function approximation errors. Next, we consider a general case, in which the approximation errors for different approximators $\{Q^i\}$ are independently but non-identically distributed. Specifically, we assume that the approximation error $e_i(s,a)$ for $Q^i(s,a)$, $i = 1, 2, \cdots, M$, follows the uniform distribution $U(-\tau_i, \tau_i)$, where $\tau_i > 0$. We use a multitude of tools to devise the upper bound and lower bound on the estimation bias $\mathbb{E}[Z_M]$. As expected, this general case is technically more challenging and the bounds would not be as sharp as in the special case with two distributions. Theorem 2. For the general case with heterogeneous error distributions, the estimation bias $\mathbb{E}[Z_M]$ satisfies $\mathbb{E}[Z_M] \ge \gamma\left(\tau_{\min} - \tau_{\max}(f_{A,M-1} + 2f_{A,M})\right)$; (7) $\mathbb{E}[Z_M] \le \gamma\left(2\tau_{\min} - \tau_{\max}(f_{A,M} - 2g_{A,M})\right)$, (8) where $\tau_{\min} = \min_i \tau_i$ and $\tau_{\max} = \max_i \tau_i$, and $g_{A,M} = \frac{1}{M}I_{0.5}\left(\frac{1}{M}, A+1\right)$ with $I_{0.5}(\cdot,\cdot)$ being the regularized incomplete Beta function. Observe from Theorem 2 that the lower bound in (7) is positive when $\tau_{\min}(1 - 2f_{A,M}) > \tau_{\max}f_{A,M-1}$, indicating the existence of the overestimation issue. On the contrary, the upper bound in (8) is negative when $2\tau_{\min} < \tau_{\max}(1 + f_{A,M} - 2g_{A,M})$, pointing to the underestimation issue. In general, when $\tau_{\min}$ is large enough, decreasing the ensemble size $M$ is likely to cause overestimation, e.g., $\mathbb{E}[Z_M] \ge 0$ when $M < 2$. On the other hand, when $\tau_{\max}$ is small enough, increasing the ensemble size $M$ is likely to cause underestimation, e.g., $\mathbb{E}[Z_M] \le 0$ when $M$ is sufficiently large. Determination of parameter $c$. As illustrated in Figure 4(c), for a given approximation error characterization, a threshold $c$ can be chosen such that increasing the ensemble size would help to correct the overestimation bias when $\tau_{\max} > c$, and decreasing the ensemble size is more conducive to mitigating the underestimation bias when $\tau_{\max} < c$. Specifically, parameter $c$ is determined in two steps. Step 1: Estimate the approximation error distribution parameters $\tau_{\min}$ and $\tau_{\max}$ by running an ensemble-based algorithm (e.g., Algorithm 1) for a few epochs with a fixed ensemble size. In particular, a testing trajectory is generated from a random initial state using the current policy to compute the (discounted) MC return $Q^\pi$ and the estimated Q-function values $Q^i$, $i = 1, 2, \cdots, N$. We next fit a uniform distribution model $U(-\tau_i, \tau_i)$ of the approximation error $(Q^i - Q^\pi)$ for each Q-function approximator $Q^i$.
Then, $\tau_{\min}$ and $\tau_{\max}$ can be obtained by choosing the minimum and maximum values among $\tau_i$, $i = 1, 2, \cdots, N$. Step 2: Obtain the upper bound and the lower bound in Theorem 2 by using $\{\tau_{\min}, \tau_{\max}, A, \gamma\}$. We investigate the relationship between the ensemble size $M$ and the estimation bias by studying the bounds and identifying the ‘critical’ points as illustrated in Figure 4(b). Observe that a ‘proper’ ensemble size should be chosen between the ‘critical’ points, so as to reduce the overestimation and underestimation bias as much as possible. Since the approximation error is time-varying during the learning process, these two ‘critical’ points vary along with $\{\tau_{\max}\}$ and $\{\tau_{\min}\}$ (as shown in Figure 4(c)). Intuitively, it is desirable to drive the system to avoid both the red region (underestimation) and the blue region (overestimation). It can be clearly observed that there is a wide range of choices for parameter $c$ (e.g., [0.5, 0.7] in Figure 4(c)) for which the algorithm stays in the white region, indicating that even though the pre-determined $c$ above is not optimized, it can still serve the purpose well. The proof of Theorem 2 and a numerical illustration can be found in Appendix A.3. Summarizing, both Theorem 1 and Theorem 2 indicate that the approximation error characterization plays a critical role in controlling the estimation bias. In fact, both the lower bound and the upper bound in Theorem 2 depend on $\tau_{\min}$ and $\tau_{\max}$, which are time-varying due to the iterative nature of the learning process, indicating that it is sensible to use an adaptive ensemble size to drive the estimation bias to be close to zero, as much as possible. 3.2 Practical Implementation Based on the theoretic findings above, we next propose AdaEQ, which adapts the ensemble size based on the approximation error feedback on the fly, so as to drive the estimation bias close to zero. Particularly, as summarized in Algorithm 1, AdaEQ introduces two important steps at each iteration $t$, i.e., approximation error characterization (line 3) and ensemble size adaptation (line 4), which can be combined with the framework of either Q-learning or actor-critic methods. Characterization of the time-varying approximation error. As outlined in Algorithm 1, the first key step is to quantify the time-varying approximation error at each iteration $t$ (for ease of exposition, we omit the subscript $t$ when it is clear from the context). Along the same line as in [9, 33, 6], we run a testing trajectory of length $H$, $\mathcal{T} = (s_0, a_0, s_1, a_1, \cdots, s_H, a_H)$, from a random initial state using the current policy $\pi$, and compute the discounted Monte Carlo return $Q^\pi(s,a)$ and the estimated Q-function values $Q^i(s,a)$, $i = 1, \cdots, N$, for each visited state-action pair $(s,a)$. The empirical standard deviation of $Q^i(s,a) - Q^\pi(s,a)$ can then be obtained to quantify the approximation error of each approximator $Q^i$. Then, we take the average of the empirical standard deviations over all approximators to characterize the approximation error at the current iteration $t$, i.e., $\tilde{\tau}_t = \frac{1}{N}\sum_{i=1}^{N}\mathrm{std}\left(Q^i(s,a) - Q^\pi(s,a)\right),\ (s,a)\in\mathcal{T}$. (9)
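The error characterization in (9), together with the ensemble-size update it feeds (described in the next paragraph and in (10)), can be sketched as follows; q_funcs, mc_returns, and sa_pairs are illustrative stand-ins for the learned approximators, the Monte Carlo returns along the testing trajectory, and its visited state-action pairs, and the inclusive-endpoint interpretation of rand(·,·) is an assumption.

```python
import numpy as np

def characterize_error(q_funcs, mc_returns, sa_pairs):
    """Sketch of (9): average over approximators of the empirical std of Q^i - Q^pi
    along the testing trajectory T."""
    stds = []
    for q in q_funcs:                                   # one entry per approximator Q^i
        diffs = [q(s, a) - mc_returns[(s, a)] for (s, a) in sa_pairs]
        stds.append(np.std(diffs))
    return float(np.mean(stds))

def adapt_ensemble_size(M_prev, tau_tilde, c, N, rng=None):
    """Sketch of the error-feedback size update (10) described in the next paragraph."""
    rng = rng or np.random.default_rng()
    if tau_tilde > c and M_prev + 1 <= N:               # large error: enlarge the subset
        return int(rng.integers(M_prev + 1, N + 1))     # uniform in {M_prev+1, ..., N}
    if tau_tilde < c and M_prev - 1 >= 2:               # small error: shrink the subset
        return int(rng.integers(2, M_prev))             # uniform in {2, ..., M_prev-1}
    return M_prev
```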
Error-feedback based ensemble size adaptation. Based on the theoretic results and Figure 4(c), we update the ensemble size $M$ at each iteration $t$ based on the approximation error (9), using the following piecewise function: $M_t = \begin{cases} \mathrm{rand}(M_{t-1}+1, N) & \tilde{\tau}_{t-1} > c,\ M_{t-1}+1 \le N \\ \mathrm{rand}(2, M_{t-1}-1) & \tilde{\tau}_{t-1} < c,\ M_{t-1}-1 \ge 2 \\ M_{t-1} & \text{otherwise}, \end{cases}$ (10) where $\mathrm{rand}(\cdot,\cdot)$ is a uniform random function and $c$ is a pre-determined parameter to capture the ‘tolerance’ of the estimation bias during the adaptation process. Recall that parameter $c$ can be determined by using the upper bound and the lower bound in Theorem 2 (Theorem 1). Particularly, a larger $c$ implies that more tolerance of the underestimation bias is allowed when adapting the ensemble size $M_t$. A smaller $c$, on the other hand, admits more tolerance of the overestimation. In this way, AdaEQ can be viewed as a generalization of Maxmin and REDQ with ensemble size adaptation. In particular, when $c = 0$ and $M_{t+1} \le N$, the adaptation mechanism would increase the ensemble size until it is equal to $N$. Consequently, AdaEQ degenerates to Maxmin [17] where $M = N$, leading to possible underestimation bias. Meantime, when $c$ is set sufficiently large, the ensemble size $M$ would decrease until reaching the minimal value 2 during the learning process, where the estimation bias would be positive according to Theorem 2. In this case, AdaEQ degenerates to REDQ [6] with ensemble size $M = 2$. We show the convergence analysis of AdaEQ in Appendix A.5.
Algorithm 1 Adaptive Ensemble Q-learning (AdaEQ)
1: Empty replay buffer $\mathcal{D}$, step size $\alpha$, number of approximators $N$, initial in-target ensemble size $M_0 \le N$, initial state $s$. Initialize the $N$ approximators with different training samples.
2: for iteration $t = 1, 2, 3, \cdots$ do
3: Identify the approximation error parameter $\tilde{\tau}_t$ using (9)
4: Update the ensemble size $M_t$ according to (10)
5: Sample a set $\mathcal{M}$ of $M_t$ different indices from $\{1, 2, \cdots, N\}$
6: Obtain the proxy approximator $Q_{\mathrm{proxy}}(s,a) \leftarrow \min_{i\in\mathcal{M}} Q^i(s,a),\ \forall a\in\mathcal{A}$
7: Choose action $a$ from the current state $s$ using the policy derived from $Q_{\mathrm{proxy}}$ (e.g., $\varepsilon$-greedy)
8: Take action $a$, observe $r$ and the next state $s'$
9: Update the replay buffer $\mathcal{D} \leftarrow \mathcal{D}\cup\{s, a, r, s'\}$
10: for $i = 1, 2, \cdots, N$ do
11: Sample a random mini-batch $B$ from $\mathcal{D}$
12: Compute the target: $y(s,a,r,s') \leftarrow r + \gamma\max_{a'\in\mathcal{A}} Q_{\mathrm{proxy}}(s',a')$, $(s,a,r,s')\in B$
13: Update the Q-function $Q^i$: $Q^i(s,a) \leftarrow (1-\alpha)Q^i(s,a) + \alpha y(s,a,r,s')$, $(s,a,r,s')\in B$
14: end for
15: $s \leftarrow s'$
16: end for
Remark. We use random sampling in Eqn. (10) for two reasons. Firstly, the characterization of the approximation error in Eqn. (9) is noisy in nature. In particular, Monte Carlo returns with a finite-length testing trajectory may introduce empirical errors when estimating the underlying ground-truth value of $Q^\pi$. This noisy estimation often arises when the policy or the environment is not deterministic. Thus, we use random sampling to ‘capture’ the impact of this noisy estimation. Secondly, in general it is infeasible to characterize the exact relationship between the estimation bias $Z_M$ and the ensemble size $M$. Without any further prior information beyond the bounds we obtained in Theorem 1 and Theorem 2 about the approximation error, the random sampling can be viewed as the ‘exploration’ in AdaEQ. 4 Experimental Results In this section, we evaluate the effectiveness of AdaEQ by answering the following questions: 1) Can AdaEQ minimize the estimation bias and further improve the performance in comparison to existing ensemble methods? 2) How does AdaEQ perform given different initial ensemble sizes?
3) How does the ‘tolerance’ parameter c affect the performance? To make a fair comparison, we follow the setup of [6] and use the same code base to compare the performance of AdaEQ with REDQ [6] and Average-DQN (AVG) [2], on three MuJoCo continuous control tasks: Hopper, Ant and Walker2d. The same hyperparameters are used for all the algorithms. Specifically, we consider N = 10 Q-function approximators in total. The ensemble size M = N = 10 for AVG, while the initial M for AdaEQ is set as 4. The ensemble size for REDQ is set as M = 2, which is the fine-tuned result from [6]. For all the experiments, we set the ‘tolerance’ parameter c in (10) as 0.3 and the length of the testing trajectories as H = 500. The ensemble size is updated according to (10) every 10 epochs in AdaEQ. The discount factor is 0.99. Implementation details and hyperparamter settings are fully described in Appendix B.1. Evaluation of estimation bias. To investigate the impact of the adaptation mechanism in AdaEQ, we begin by examining how the estimation bias changes in the training process. After each epoch, we run an evaluation episode of length H = 500, starting from an initial state sampled from the replay buffer. We calculate the estimation error based on the difference between the Monte Carlo return value and the Q-estimates as in [33, 6, 14]. For each experiment, the shaded area represents a standard deviation of the average evaluation over 3 training seeds. As shown in the first row of Figure 5, AdaEQ can reduce the estimation bias to nearly zero in all three benchmark environments, in contrast to REDQ and AVG. The AVG approach tends to result in positive bias in all three environments (a) Hopper-v2 task. Average returns over different initial ensemble size M = 2, 3, 5. (b) Hopper-v2 task. Estimation bias over different initial ensemble size M = 2, 3, 5. (c) Ant task. Average returns over different initial ensemble size M = 3, 5, 7. (d) Ant task. Estimation bias over different initial ensemble size M = 3, 5, 7. Figure 6: Impacts of the initial ensemble size M on the performance of AdaEQ in Hopper-v2 and Ant task. The solid lines are the mean values and the shaded areas are the standard derivations across three ensemble size settings. during the learning procedure, which is consistent with the results obtained in [6]. Notably, it can be clearly observed from Hopper and Walker2d tasks that the estimation bias for AdaEQ is driven to be close to zero, thanks to the dynamic ensemble size adjustment during the learning process. Meantime, in the Ant task, even though the fine-tuned REDQ can mitigate the overestimation bias, it tends to have underestimation bias, whereas AdaEQ is able to keep the bias closer to zero (gray dashed line) even under a ‘non-optimal’ choice of the initial ensemble size. Performance on MuJoCo benchmark. We evaluate the policy return after each epoch by calculating the undiscounted sum of rewards when running the current learnt policy [6, 14]. The second row of Figure 5 demonstrates the average return during the learning process for AdaEQ, AVG and REDQ, respectively. Especially, we choose the fine-tune ensemble size for REDQ [6]. As observed in Figure 5, AdaEQ can efficiently learn a better policy and achieve higher average return in all three challenging MuJoCo tasks, without searching the optimal parameters beforehand for each of them. Meantime, AdaEQ only incurs slightly more computation time than REDQ in most MuJoCo tasks. 
Due to space limitation, we have relegated the wall-clock training time comparison to Table 2 in Appendix B.2. Robustness to the initial ensemble size. Next, we investigate the performance of AdaEQ under different settings of the initial ensemble size in the Hopper-v2 and Ant environment, i.e.,M = (2, 3, 5) (a) Hopper-v2 task. Average returns over different parameter c in AdaEQ. (b) Hopper-v2 task. Estimation bias over different parameter c in AdaEQ. (c) Ant task. Average returns over different parameter c in AdaEQ. (d) Ant task. Estimation bias over different parameter c in AdaEQ. Figure 7: Impacts of parameter c on the performance of AdaEQ in Hopper-v2 and Ant task. The initial ensemble size is set to be M = 4. The mean value and the standard derivation are evaluated across three training seeds. and M = (3, 5, 7). As shown in Figure 6, AdaEQ consistently outperforms the others in terms of the average performance over different setups, which implies the benefit of adjusting the in-target ensemble size based on the error feedback. It can be seen from the shaded area that the performance of AVG and REDQ, may vary significantly when the ensemble size changes. Robustness to parameter c in a wide range. As illustrated in Figure 7, we conduct the ablation study by setting c = 0.001, 0.3, 0.5, 0.7, 1.5 on the Hopper-v2 and Ant tasks. Clearly, AdaEQ works better for c ∈ [0.3, 0.7]. The experiment results corroborate our analysis in Section 3.1 that our algorithm is not sensitive to parameter c in a wide range. As mentioned in Section 3.2, when parameter c is close to zero, AdaEQ degenerates to Maxmin, which is known to suffer from underestimation bias when the ensemble size is large [6]. Further, as illustrated in Figure 7(b), when c is large, e.g., c = 1.5, the ensemble size would gradually decrease to the minimum and hence would not be able to throttle the overestimation tendency during the learning process. 5 Conclusion Determining the right ensemble size is highly nontrivial for the ensemble Q-learning to correct the overestimation without introducing significant underestimation bias. In this paper, we devise AdaEQ, a generalized ensemble Q-learning method for the ensemble size adaptation, aiming to minimize the estimation bias during the learning process. More specifically, by establishing the upper bound and the lower bound of the estimation bias, we first characterize the impact of both the ensemble size and the time-varying approximation error on the estimation bias. Building upon the theoretic results, we treat the estimation bias minimization as an adaptive control problem, and take the approximation error as feedback to adjust the ensemble size adaptively during the learning process. Our experiments show that AdaEQ consistently and effectively outperforms the existing ensemble methods, such as REDQ and AVG in MuJoCo tasks, corroborating the benefit of using AdaEQ to drive the estimation bias close to zero. There are many important avenues for future work. In terms of the bounds of the estimation bias, our analysis builds upon the standard independent assumption as in previous works. It’s worth noting that in practice, the errors are generally correlated [29] and the theoretical analysis for this case remains not well understood. Additionally, in this work, we use a heuristic tolerance parameter in the adaptation mechanism to strike the balance in controlling the positive bias and negative bias. 
It is of great interest to develop a systematic approach to optimize this tolerance parameter. Acknowledgments and Disclosure of Funding We thank the anonymous reviewers for their constructive comments. This work was supported in part by NSF grants CNS-2130125, CCSS-2121222 and CNS-2003081.
1. What is the main contribution of the paper regarding reinforcement learning? 2. What are the strengths of the proposed method, particularly in addressing the overestimation bias? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. What are the weaknesses of the paper, especially in terms of its assumptions and limitations? 5. Do you have any concerns or suggestions regarding the paper's experimental evaluation?
Summary Of The Paper Review
Summary Of The Paper Some Reinforcement Learning algorithms like Q-learning are subject to a bias known as the maximization or overestimation bias. This is due to the fact that value iteration approximates the value of the next state as the max over actions of Q-values. As a consequence, if the Q-values are approximate, the maximum over such approximations tends to be skewed towards high values. Ensembling many independent estimates of the Q-values helps mitigating the overestimation bias, but often at the risk of achieving the opposite effect, i.e. an underestimation bias, if the ensemble is too large. The point of this paper is to propose and develop a method to estimate the current bias, and use that estimate to dynamically choose the size of the ensemble so as to find a trade-off between over- and underestimation that minimizes the estimation bias. The method is demonstrated on a simple toy task and then on control tasks with continuous action spaces simulated with the MuJoCo physics engine. Review The paper starts with an illustrative synthetic example of how an approximation error in the Q-values gives rise to an estimation bias that persists in the ensemble case, motivating that an adaptive ensemble size could potentially mitigate this problem. It then develops a theory that bounds the estimation bias of an ensemble of Q-values under the assumption of an approximation error that is uniform across states and actions, an assumption that allows the authors to postulate an analytic expression for the approximation error that is used in developing the theory. This results in analytical bounds for the bias as a function of the ensemble size M and the approximation error tau. They then go on to develop a way to estimate the approximation error tau, relying on Monte Carlo returns computed over rollouts of the current policy. The bias mitigation algorithm AdaEQ then basically consists in plugging this estimate of the approximation error tau into the expressions for the bias in order to recover a value of the ensemble size M that results in a bias between the estimated upper bound (indicating overestimation) and the lower bound (indicating underestimation). The authors then demonstrate their algorithm against other ensemble-based bias mitigation strategies (REDQ and Average-Q) on three tasks from MuJoCo, demonstrating competitive performance both in faithfully minimizing the estimation bias and in achieving higher average returns.
NIPS
Title Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback Abstract The ensemble method is a promising way to mitigate the overestimation issue in Q-learning, where multiple function approximators are used to estimate the action values. It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the ‘right’ ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process. To tackle this challenge, we first derive an upper bound and a lower bound on the estimation bias, based on which the ensemble size is adapted to drive the bias to be nearly zero, thereby coping with the impact of the time-varying approximation errors accordingly. Motivated by the theoretic findings, we advocate that the ensemble method can be combined with Model Identification Adaptive Control (MIAC) for effective ensemble size adaptation. Specifically, we devise Adaptive Ensemble Q-learning (AdaEQ), a generalized ensemble method with two key steps: (a) approximation error characterization which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias. Extensive experiments are carried out to show that AdaEQ can improve the learning performance than the existing methods for the MuJoCo benchmark. 1 Introduction Thanks to recent advances in function approximation methods using deep neural networks [20], Q-learning [35] has been widely used to solve reinforcement learning (RL) problems in a variety of applications, e.g., robotic control [23, 13], path planning [15, 24] and production scheduling [34, 21]. Despite the great success, it is well recognized that Q-learning may suffer from the notorious overestimation bias [29, 33, 32, 10, 37], which would significantly impede the learning efficiency. Recent work [9, 11] indicates that this problem also persists in the actor-critic setting. To address this issue, the ensemble method [16, 1, 26, 7] has emerged as a promising solution in which multiple Q-function approximators are used to get better estimation of the action values. Needless to say, the ensemble size, i.e., the number of Q-function approximators used in the target, has intrinsic impact on Q-learning. Notably, it is shown in [6, 17] that while a large ensemble size could completely remove the overestimation bias, it may go to the other extreme and result in underestimation bias and unstable training, which is clearly not desirable. Therefore, instead of simply increasing the ensemble 35th Conference on Neural Information Processing Systems (NeurIPS 2021). size to mitigate the overestimation issue, a fundamental question to ask is:“ Is it possible to determine the right ensemble size on the fly so as to minimize the estimation bias?” Some existing ensemble methods [2, 19, 17] adopt a trial-and-error strategy to search for the ensemble size, which would be time-consuming and require a lot of human engineering for different RL tasks. The approximation error of the Q-function during the learning process plays a nontrivial role in the selection of the ensemble size, since it directly impacts the Q-target estimation accuracy. This however remains not well understood. 
In particular, the fact that the approximation error is time-varying, due to the iterative nature of Q-learning [36, 5], gives rise to the question that whether a fixed ensemble size should be used in the learning process. To answer this question, we show in Section 2.2 that using a fixed ensemble size is likely to lead to either overestimation or underestimation bias, and the bias may shift between overestimation and underestimation because of the time-varying approximation error, calling for an adaptive ensemble size so as to drive the bias close to zero based on the underlying learning dynamics. Thus motivated, in this work we study effective ensemble size adaptation to minimize the estimation bias that hinges heavily on the time-varying approximation errors during the learning process. To this end, we first characterize the relationship among the ensemble size, the function approximation errors, and the estimation bias, by deriving an upper bound and a lower bound on the estimation bias. Our findings reveal that the ensemble size should be selected adaptively in a way to cope with the impact of the time-varying approximation errors. Building upon the theoretic results, we cast the estimation bias minimization as an adaptive control problem where the approximation error during the learning process is treated as the control object, and the ensemble size is adapted based on the feedback of the control output, i.e., the value of the approximation error from the last iteration. The key idea in this approach is inspired from the classic Model Identification Adaptive Control (MIAC) framework [3, 25], where at each step the current system identification of the control object is fed back to adjust the controller, and consequently a new control signal is devised following the updated control law. One main contribution of this work lies in the development of AdaEQ, a generalized ensemble method for the ensemble size adaptation, aiming to minimize the estimation bias during the learning process. Specifically, the approximation error in each iteration is quantified by comparing the difference between the Q-estimates and the Monte Carlo return using the current learned policy over a testing trajectory [29, 17]. Inspired by MIAC, the approximation error serves as the feedback to adapt the ensemble size. Besides, we introduce a ‘tolerance’ parameter in the adaptation mechanism to balance the control tendency towards positive or negative bias during the learning process. In this way, AdaEQ can encompass other existing ensemble methods as special cases, including Maxmin [17], by properly setting this hyperparameter. A salient feature of the feedback-adaptation mechanism is that it can be used effectively in conjunction with both standard Q-learning [22] and actor-critic methods [28, 11]. Experimental results on the continuous-control MuJoCo benchmark [30] show that AdaEQ is robust to the initial ensemble size in different environments, and achieves higher average return, thanks to keeping the estimation bias close to zero, when compared to the state-of-the-art ensemble methods such as REDQ [6] and Average-DQN [2]. Related Work. Bias-corrected Q-learning [18] introduces the bias correction term to reduce the overestimation bias. Double Q-learning is proposed in [12, 33] to address the overestimation issue in vanilla Q-learning, by leveraging two independent Q-function approximators to estimate the maximum Q-function value in the target. 
S-DQN and S-DDQN use the softmax operator instead of the max operator to further reduce the overestimation bias [27]. Self-correcting Q-learning aims to balance the underestimation in double Q-learning and overestimation in classic Q learning by introducing a new self-correcting estimator [38]. Weighted Q-learning proposes a new estimator based on the weighted average of the sample means, and conducts the empirical analysis in the discrete action space [8]. Weighted Double Q-learning [37] uses the Q-approximator together with the double Q-approximator to balance the overestimation and underestimation bias. Nevertheless, acquiring independent approximators is often intractable for large-scale tasks. To resolve this issue, the Twin-Delayed Deep Deterministic policy gradient algorithm (TD3) [9] and Soft Actor-Critic (SAC) [11] have been devised to take the minimum over two approximators in the target network. Along a different avenue, the ensemble-based methods generalize double Q-learning to correct the overestimation bias by increasing the number of Q-function approximators. Particularly, AverageDQN [2] takes the average of multiple approximators in the target to reduce the overestimation error, and Random Ensemble Mixture (REM) [1] estimates the target value using the random convex combination of the approximators. It is worth noting that both Average-DQN and REM cannot completely eliminate the overestimation bias. Most recently, Maxmin Q-learning [17] defines a proxy Q-function by choosing the minimum Q-value for each action among all approximators. Similar to Maxmin, Random Ensembled Q-learning (REDQ) [6] formulates the proxy Q-function by choosing only a subset of the ensemble. Nevertheless, both Maxmin and REDQ use a fixed ensemble size. In this study, we introduce an adaptation mechanism for the ensemble size to drive the estimation bias to be close to zero, thereby mitigating the possible overestimation and underestimation issues. 2 Impact of Ensemble Size on Estimation Bias 2.1 Ensemble Q-learning As is standard, we consider a Markov decision process (MDP) defined by the tuple 〈S,A, P, r, γ〉, where S and A denote the state space and the action space, respectively. P (s′|s, a) : S ×A× S → [0, 1] denotes the probability transition function from current state s to the next state s′ by taking action a ∈ A, and r(s, a) : S ×A → R is the corresponding reward. γ ∈ (0, 1] is the discount factor. At each step t, the agent observes the state st, takes an action at following a policy π : S → A, receives the reward rt, and evolves to a new state st+1. The objective is to find an optimal policy π∗ to maximize the discounted return R = ∑∞ t=0 γ trt. By definition, Q-function is the expected return when choosing action a in state s and following with the policy π: Qπ = E[ ∑∞ t=0 γ trt(st, at)|s0 = s, a0 = a]. Q-learning is an off-policy value-based method that aims at learning the optimal Q-function Q∗ : S ×A → R, where the optimal Q-function is a fixed point of the Bellman optimality equation [4]: T Q∗(s, a) = r(s, a) + γEs′∼P (s′|s,a) [maxa′∈AQ∗(s′, a′)] . (1) Given a transition sample (s, a, r, s′), the Bellman operator can be employed to update the Q-function as follows: Q(s, a)← (1− α)Q(s, a) + αy, y := r + γmaxa′∈AQ(s′, a′). (2) where α is the step size and y is the target. Under some conditions, Q-learning can converge to the optimal fixed-point solution asymptotically [31]. 
In deep Q-learning, the Q-function is approximated by a neural network, and it has been shown [33] that the approximation error, amplified by the max operator in the target, results in the overestimation phenomena. One promising approach to address this issue is the ensemble Q-learning method, which is the main subject of this study. The Ensemble Method. Specifically, the ensemble method maintains N separate approximators Q1, Q2, · · · , QN of the Q-function, based on which a subset of these approximators is used to devise a proxy Q-function. For example, in Average-DQN [2], the proxy Q-function is obtained by computing the average value over all N approximators to reduce the overestimation bias: Qave(·) = 1N ∑N i=1Q i(·). However, the average operation cannot completely eliminate the overestimation bias, since the average of the overestimation bias is still positive. To tackle this challenge, Maxmin [17] and REDQ [6] take the ‘min’ operation over a subsetM ( size M ) of the ensemble: Qproxy(·) = mini∈MQi(·). (3) The target value in the ensemble-based Q-learning is then computed as y = r +maxa′∈AQproxy. It is worth noting that in the existing studies, the in-target ensemble size M , pre-determined for a given environment, remain fixed in the learning process. 2.2 An Illustrative Example It is known that the determination of the optimal ensemble size is highly nontrivial, and a poor choice of the ensemble size would degrade the performance of ensemble Q-learning significantly [17]. As mentioned earlier, it is unclear a priori if a fixed ensemble size should be used in the learning process. (a) Function approximation. (b) Five function approximators. (c) Ensemble via ‘min’ operator. (d) Estimation error. Figure 2: Illustration of estimation bias in the ensemble method. (a) Each approximator is fitted to the noisy values (green dots) at the sampled states independently. (b) Five Q-function approximators are obtained for both actions (green lines and blue lines). (c) Apply the min operator over M (M = 3) randomly selected approximators to obtain a proxy approximator for each action. (d) The estimation error is obtained by comparing the underlying true value (purple line in (a)) and the target value using the proxy approximator. (a) Estimation bias vs. τ . (b) Estimation bias vs. numbers of actions. Figure 3: Illustration of overestimation and underestimation phenomena for different ensemble sizes. In what follows, we use an example to illustrate the potential pitfalls in the ensemble methods by examining the sensitivity of the estimation bias to the ensemble size [6, 17]. Along the same line as in [33], we consider an example with a real-valued continuous state space. In this example, there are two discrete actions available at each state and the optimal action values depend only on the state, i.e., in each state both actions result in the same optimal value Q∗(s, ·), which is assumed to be Q∗(s, ·) = sin(s). Figure 2 demonstrates how the ensemble method is carried out in four stages: (I) For each Q-function approximator Qi, i = 1, 2, · · · , 5, we first generate 10 noisy action-value samples independently (green dots in Figure 2(a)). Let ei(s, a) denote the approximation error of Qi: Qi(s, a) = Q∗(s, a) + ei(s, a), with ei(s, a) ∼ U(−τi, τi), (4) where τi ∼ U(0, τ) models the approximation error distribution for the i-th approximator. 
Note that the assumption of a uniform error distribution is commonly used to indicate that both positive and negative approximation errors are possible in Q-function approximators [29, 17, 6]. (II) Next, Figure 2(b) illustrates the ensemble (N = 5) of approximators for the two actions, where each approximator is a degree-6 polynomial that fits the noisy values at the sampled states. (III) Following the same ensemble approach as in [6, 17], we randomly choose M approximators from the ensemble and take the minimum over them to obtain a proxy approximator for each action, resulting in the dashed lines in Figure 2(c). (IV) Finally, the maximum action value of the proxy approximator is used as the target to update the current approximators.

To evaluate the target value estimation error, Figure 2(d) depicts the difference between the obtained target value and the underlying true value when using different ensemble sizes M. As in [33], we utilize the average estimation error (i.e., the estimation bias) to quantify the performance of the current approximators. For example, when the ensemble size M = 2, the red line is above zero for most states, implying an overestimation tendency in the target. Clearly, Figure 2(d) indicates that the estimation bias is highly dependent on the ensemble size, and even a small change in M can shift the bias from overestimation to underestimation.

Since the Q-function approximation error of each approximator changes over time in the training process [5] (examples of this phenomenon can be found in Appendix B.3), we next analyze the impact of the ensemble size on the estimation bias under different approximation error distributions. As shown in Figure 3(a), with a fixed ensemble size M, the estimation bias may shift between positive and negative and be ‘dramatically’ large for some error distributions. In light of this observation, departing from using a fixed size, we advocate adapting the in-target ensemble size, e.g., setting M = 4 when the noise parameter τ > 1.5 and M = 3 otherwise. The estimation bias resulting from this adaptation mechanism is much closer to zero. Besides, Figure 3(b) characterizes the estimation bias under different action spaces, which is also important considering that different tasks normally have different action spaces and the number of available actions may vary across states even for the same task. The adaptive ensemble approach is clearly more robust in our setting. In a nutshell, both Figure 3(a) and 3(b) suggest that a fixed ensemble size would not work well to minimize the estimation bias during learning for different tasks. This phenomenon has also been observed in the empirical results of [17]. In stark contrast, adaptively changing the ensemble size based on the approximation error can indeed help to reduce the estimation bias in different settings.

3 Adaptive Ensemble Q-learning (AdaEQ)

Motivated by the illustrative example above, we next devise a generalized ensemble method with ensemble size adaptation to drive the estimation bias to be close to zero, by taking into consideration the time-varying nature of the approximation error during the learning process. Formally, we consider an ensemble of N Q-function approximators, i.e., $\{Q^i\}_{i=1}^{N}$, with each approximator initialized independently and randomly. We use the minimum over a subset M of the N approximators in the Q-learning target as in (3), where the size of the subset is |M| = M ≤ N.
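As a companion to Eq. (3), the following is a minimal sketch of how the min-over-a-random-subset proxy target can be formed from an ensemble of tabular approximators; the array-based ensemble representation, shapes, and names are illustrative assumptions, not the authors' implementation.

    # A tabular sketch of the min-over-subset proxy in Eq. (3) and the resulting target;
    # the ensemble is stored as an (N, |S|, |A|) array. Shapes and names are illustrative.
    import numpy as np

    def ensemble_target(Q_ensemble, r, s_next, M, gamma=0.99, rng=None):
        """y = r + gamma * max_a' min_{i in M} Q^i(s', a'), with |M| = M."""
        rng = rng or np.random.default_rng()
        N = Q_ensemble.shape[0]
        subset = rng.choice(N, size=M, replace=False)         # random subset M of the ensemble
        q_proxy = Q_ensemble[subset, s_next, :].min(axis=0)    # min over the subset, per action
        return r + gamma * q_proxy.max()

    # Example usage with N = 5 approximators on a toy MDP with 4 states and 2 actions.
    rng = np.random.default_rng(1)
    Q_ensemble = rng.normal(size=(5, 4, 2))
    y = ensemble_target(Q_ensemble, r=0.5, s_next=3, M=3, gamma=0.99, rng=rng)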
3.1 Lower Bound and Upper Bound on Estimation Bias

We first answer the following key question: “How does the approximation error, together with the ensemble size, impact the estimation bias?” To this end, based on [29], we characterize the intrinsic relationship among the ensemble size M, the Q-function approximation error, and the estimation bias, and derive an upper bound and a lower bound on the bias in the tabular case. Without loss of generality, we assume that for each state s, there are A available actions. Let $e^i(s, a) \triangleq Q^i(s, a) - Q^\pi(s, a)$ be the approximation error for the i-th Q-function approximator, where Qπ(s, a) is the ground truth of the Q-value for the current policy π. By using (3) to compute the target Q-value, we define the estimation error in the Bellman equation for a transition (s, a, r, s′) as
$$Z_M \triangleq \left(r + \gamma \max_{a' \in \mathcal{A}} \min_{i \in \mathcal{M}} Q^i(s', a')\right) - \left(r + \gamma \max_{a' \in \mathcal{A}} Q^\pi(s', a')\right).$$
Here a positive E[ZM] implies overestimation bias, while a negative E[ZM] implies underestimation bias. Note that we use the subscript M to emphasize that the estimation bias is intimately related to M.

The case with two distributions for Q-function approximation errors. For ease of exposition, we first consider the case where the approximation errors follow one of two uniform distributions, as illustrated in Figure 4(a). Specifically, assume that for i ∈ K ⊂ M with |K| = K, ei(s, a) ∼ U(−τ1, τ1), and for i ∈ M \ K, ei(s, a) ∼ U(−τ2, τ2). Without loss of generality, we assume that τ1 > τ2 > 0. It is worth noting that in [29, 17, 6], the approximation error for all approximators is assumed to follow the same uniform distribution, i.e., τ1 = τ2, which is clearly more restrictive than the two-distribution case considered here. For instance, when only one approximator is chosen to be updated at each step [17], the approximation error distribution of this approximator would change over time and hence differ from the others. We have the following results on the upper bound and the lower bound of the estimation bias E[ZM].

Theorem 1. For the case with two distributions for Q-function approximation errors, the estimation bias E[ZM] satisfies
$$\mathbb{E}[Z_M] \ge \gamma\left(\tau_1\left(1 - f_{A,K} - 2 f_{A,M}\right) + \tau_2\left(1 - f_{A,M}\right)\right); \tag{5}$$
$$\mathbb{E}[Z_M] \le \gamma\left(\tau_1 + \tau_2\left(1 - 2 f_{A,(M-K)} - (1 - \beta_K)^A\right)\right), \tag{6}$$
where $\beta_K = \left(\frac{1}{2} - \frac{\tau_2}{2\tau_1}\right)^K$ and $f_{A,K} = \frac{1}{K} B\!\left(\frac{1}{K}, A+1\right) = \frac{\Gamma(A+1)\,\Gamma(1+\frac{1}{K})}{\Gamma(A+\frac{1}{K}+1)} = \frac{A(A-1)\cdots 1}{(A+\frac{1}{K})(A+\frac{1}{K}-1)\cdots(1+\frac{1}{K})}$, with $B(\cdot, \cdot)$ being the Beta function.

Figure 4: Illustration of the upper bounds and lower bounds on the estimation bias in Theorem 1. (a) Q-function approximation error distributions: the case where the approximation errors of the Q-approximators can be categorized into two uniform distributions. (b) The lower bound and the upper bound corresponding to (5) and (6), for given τ1, τ2, A: the blue point represents the ‘critical’ point below which decreasing the ensemble size may lead to overestimation (the lower bound is positive), and the red point denotes the ‘critical’ point beyond which increasing the ensemble size may lead to underestimation (the upper bound is negative). (c) Due to the time-varying feature of the approximation errors, the blue curve and the red curve depict the ‘critical’ points for the lower bound and the upper bound, respectively.

The proof of Theorem 1 is relegated to Appendix A.1. Theorem 1 reveals that the estimation bias depends on the ensemble size as well as the approximation error distributions.
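The bounds in Theorem 1 can be evaluated numerically to locate the ‘critical’ ensemble sizes discussed next (cf. Figure 4(b)). The sketch below transcribes (5) and (6) as stated above; the number of actions A = 10, the subset size K, and the discount factor are assumed example values, so the resulting critical points need not match the figure exactly.

    # A sketch that evaluates the bounds (5)-(6) numerically, in the spirit of Figure 4(b).
    # Formulas are transcribed from Theorem 1 as stated; A, K, and the discount are
    # illustrative example values.
    from math import gamma as Gamma

    def f(A, K):
        """f_{A,K} = Gamma(A+1) * Gamma(1 + 1/K) / Gamma(A + 1/K + 1)."""
        return Gamma(A + 1) * Gamma(1 + 1.0 / K) / Gamma(A + 1.0 / K + 1)

    def theorem1_bounds(tau1, tau2, A, M, K, disc=0.99):
        beta_K = (0.5 - tau2 / (2.0 * tau1)) ** K
        lower = disc * (tau1 * (1 - f(A, K) - 2 * f(A, M)) + tau2 * (1 - f(A, M)))
        upper = disc * (tau1 + tau2 * (1 - 2 * f(A, M - K) - (1 - beta_K) ** A))
        return lower, upper

    # Scan M to see where the bounds change sign, i.e., the 'critical' points.
    tau1, tau2, A, K = 0.5, 0.4, 10, 2
    for M in range(K + 1, 16):
        lo, up = theorem1_bounds(tau1, tau2, A, M, K)
        print(M, round(lo, 3), round(up, 3))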
To get a more concrete sense of Theorem 1, we consider an example where τ1 = 0.5 and τ2 = 0.4, as depicted in Figure 4(b), and characterize the relationship between the estimation bias and the ensemble size M. Notably, the estimation bias turns negative when the ensemble size M > Mu = 9 (red point: the value of M where the upper bound is 0) and becomes positive when M < Ml = 4 (blue point: the value of M where the lower bound is 0). In Figure 4(c), we fix τ2 = 0.4 and show how those two critical points (Mu and Ml) change along with τ1. Here the red shaded area indicates underestimation bias when M > Mu, and the blue shaded area indicates overestimation bias when M < Ml. Clearly, in order to avoid the positive bias (blue shaded area), it is desirable to increase the ensemble size when the approximation error is large, e.g., τ1 > 0.6. On the other hand, decreasing the ensemble size is preferable to avoid underestimation (red shaded area) when the approximation error is small, e.g., τ1 < 0.6.

The general case with heterogeneous distributions for Q-function approximation errors. Next, we consider a general case, in which the approximation errors for different approximators {Qi} are independently but non-identically distributed. Specifically, we assume that the approximation error ei(s, a) for Qi(s, a), i = 1, 2, · · · , M, follows the uniform distribution U(−τi, τi), where τi > 0. We use a multitude of tools to devise the upper bound and lower bound on the estimation bias E[ZM]. As expected, this general case is technically more challenging, and the bounds are not as sharp as in the special case with two distributions.

Theorem 2. For the general case with heterogeneous error distributions, the estimation bias E[ZM] satisfies
$$\mathbb{E}[Z_M] \ge \gamma\left(\tau_{\min} - \tau_{\max}\left(f_{A,(M-1)} + 2 f_{A,M}\right)\right); \tag{7}$$
$$\mathbb{E}[Z_M] \le \gamma\left(2\tau_{\min} - \tau_{\max}\left(f_{A,M} - 2 g_{A,M}\right)\right), \tag{8}$$
where $\tau_{\min} = \min_i \tau_i$, $\tau_{\max} = \max_i \tau_i$, and $g_{A,M} = \frac{1}{M} I_{0.5}\!\left(\frac{1}{M}, A+1\right)$ with $I_{0.5}(\cdot, \cdot)$ being the regularized incomplete Beta function.

Observe from Theorem 2 that the lower bound in (7) is positive when $\tau_{\min}(1 - 2 f_{A,M}) > \tau_{\max} f_{A,(M-1)}$, indicating the existence of the overestimation issue. On the contrary, the upper bound in (8) is negative when $2\tau_{\min} < \tau_{\max}\left(1 + f_{A,M} - 2 g_{A,M}\right)$, pointing to the underestimation issue. In general, when τmin is large enough, decreasing the ensemble size M is likely to cause overestimation, e.g., E[ZM] ≥ 0 when M < 2. On the other hand, when τmax is small enough, increasing the ensemble size M is likely to cause underestimation, e.g., E[ZM] ≤ 0 when M is sufficiently large.

Determination of parameter c. As illustrated in Figure 4(c), for a given approximation error characterization, a threshold c can be chosen such that increasing the ensemble size helps to correct the overestimation bias when τmax > c, and decreasing the ensemble size is more conducive to mitigating the underestimation bias when τmax < c. Specifically, parameter c is determined in two steps. Step 1: Estimate the approximation error distribution parameters τmin and τmax by running an ensemble-based algorithm (e.g., Algorithm 1) for a few epochs with a fixed ensemble size. In particular, a testing trajectory is generated from a random initial state using the current policy to compute the (discounted) MC return Qπ and the estimated Q-function values Qi, i = 1, 2, · · · , N. We next fit a uniform distribution model U(−τi, τi) of the approximation error (Qi − Qπ) for each Q-function approximator Qi.
Then, τmin and τmax can be obtained by choosing the minimum and maximum values among τi, i = 1, 2, · · · , N. Step 2: Obtain the upper bound and the lower bound in Theorem 2 by using {τmin, τmax, A, γ}. We investigate the relationship between the ensemble size M and the estimation bias by studying the bounds and identifying the ‘critical’ points as illustrated in Figure 4(b). Observe that a ‘proper’ ensemble size should be chosen between the ‘critical’ points, so as to reduce the overestimation and underestimation bias as much as possible. Since the approximation error is time-varying during the learning process, these two ‘critical’ points vary along with {τmax} and {τmin} (as shown in Figure 4(c)). Intuitively, it is desirable to drive the system to avoid both the red region (underestimation) and the blue region (overestimation). It can be clearly observed that there is a wide range of choices for parameter c (e.g., [0.5, 0.7] in Figure 4(c)) for the algorithm to stay in the white region, indicating that even though the pre-determined c above is not optimized, it can still serve the purpose well. The proof of Theorem 2 and a numerical illustration can be found in Appendix A.3.

Summarizing, both Theorem 1 and Theorem 2 indicate that the approximation error characterization plays a critical role in controlling the estimation bias. In fact, both the lower bound and the upper bound in Theorem 2 depend on τmin and τmax, which are time-varying due to the iterative nature of the learning process, indicating that it is sensible to use an adaptive ensemble size to drive the estimation bias to be close to zero, as much as possible.

3.2 Practical Implementation

Based on the theoretical findings above, we next propose AdaEQ, which adapts the ensemble size based on the approximation error feedback on the fly, so as to drive the estimation bias close to zero. Particularly, as summarized in Algorithm 1, AdaEQ introduces two important steps at each iteration t, i.e., approximation error characterization (line 3) and ensemble size adaptation (line 4), which can be combined with the framework of either Q-learning or actor-critic methods.

Characterization of the time-varying approximation error. As outlined in Algorithm 1, the first key step is to quantify the time-varying approximation error at each iteration t (for ease of exposition, we omit the subscript t when it is clear from the context). Along the same line as in [9, 33, 6], we run a testing trajectory of length H, T = (s0, a0, s1, a1, · · · , sH, aH), from a random initial state using the current policy π, and compute the discounted Monte Carlo return Qπ(s, a) and the estimated Q-function value Qi(s, a), i = 1, · · · , N, for each visited state-action pair (s, a). The empirical standard deviation of Qi(s, a) − Qπ(s, a) can then be obtained to quantify the approximation error of each approximator Qi. We then take the average of the empirical standard deviations over all approximators to characterize the approximation error at the current iteration t, i.e.,
$$\tilde{\tau}_t = \frac{1}{N} \sum_{i=1}^{N} \mathrm{std}\left(Q^i(s, a) - Q^\pi(s, a)\right), \quad (s, a) \in \mathcal{T}. \tag{9}$$
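The error characterization in Eq. (9) can be sketched as follows; the environment, policy, and Q-function interfaces are placeholders assumed for illustration, not the authors' code.

    # A sketch of the error characterization in Eq. (9). The env/policy/Q-function
    # interfaces (env.reset, env.step, policy, q_funcs[i]) are placeholder assumptions.
    import numpy as np

    def characterize_error(env, policy, q_funcs, H=500, gamma=0.99):
        """Return tau_tilde = (1/N) * sum_i std_{(s,a) in T}(Q^i(s,a) - Q^pi(s,a))."""
        traj, rewards = [], []
        s = env.reset()
        for _ in range(H):                       # roll out one testing trajectory T
            a = policy(s)
            s_next, r, done = env.step(a)
            traj.append((s, a))
            rewards.append(r)
            s = s_next
            if done:
                break
        # Discounted Monte Carlo return Q^pi(s_t, a_t) for every visited pair.
        mc_returns = np.zeros(len(rewards))
        running = 0.0
        for t in reversed(range(len(rewards))):
            running = rewards[t] + gamma * running
            mc_returns[t] = running
        # Average (over approximators) of the empirical std of Q^i - Q^pi along T.
        stds = []
        for q in q_funcs:
            est = np.array([q(s, a) for (s, a) in traj])
            stds.append(np.std(est - mc_returns))
        return float(np.mean(stds))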
Error-feedback based ensemble size adaptation. Based on the theoretical results and Figure 4(c), we update the ensemble size M at each iteration t based on the approximation error (9), using the following piecewise function:
$$M_t = \begin{cases} \mathrm{rand}(M_{t-1}+1,\ N) & \text{if } \tilde{\tau}_{t-1} > c \text{ and } M_{t-1}+1 \le N, \\ \mathrm{rand}(2,\ M_{t-1}-1) & \text{if } \tilde{\tau}_{t-1} < c \text{ and } M_{t-1}-1 \ge 2, \\ M_{t-1} & \text{otherwise}, \end{cases} \tag{10}$$
where rand(·, ·) is a uniform random function and c is a pre-determined parameter that captures the ‘tolerance’ of the estimation bias during the adaptation process. Recall that parameter c can be determined by using the upper bound and the lower bound in Theorem 2 (Theorem 1). Particularly, a larger c implies that more tolerance of the underestimation bias is allowed when adapting the ensemble size Mt. A smaller c, on the other hand, admits more tolerance of the overestimation. In this way, AdaEQ can be viewed as a generalization of Maxmin and REDQ with ensemble size adaptation. In particular, when c = 0 and Mt + 1 ≤ N, the adaptation mechanism would increase the ensemble size until it is equal to N. Consequently, AdaEQ degenerates to Maxmin [17] where M = N, leading to possible underestimation bias. Meanwhile, when c is set sufficiently large, the ensemble size M would decrease until reaching the minimal value of 2 during the learning process, where the estimation bias would be positive according to Theorem 2. In this case, AdaEQ degenerates to REDQ [6] with ensemble size M = 2. We show the convergence analysis of AdaEQ in Appendix A.5.

Algorithm 1 Adaptive Ensemble Q-learning (AdaEQ)
1: Input: empty replay buffer D, step size α, number of approximators N, initial in-target ensemble size M0 ≤ N, initial state s. Initialize the N approximators with different training samples.
2: for iteration t = 1, 2, 3, · · · do
3:   Identify the approximation error parameter τ̃t using (9)
4:   Update the ensemble size Mt according to (10)
5:   Sample a set M of Mt different indices from {1, 2, · · · , N}
6:   Obtain the proxy approximator Qproxy(s, a) ← min_{i∈M} Qi(s, a), ∀a ∈ A
7:   Choose action a from the current state s using the policy derived from Qproxy (e.g., ε-greedy)
8:   Take action a, observe r and the next state s′
9:   Update the replay buffer D ← D ∪ {s, a, r, s′}
10:  for i = 1, 2, · · · , N do
11:    Sample a random mini-batch B from D
12:    Compute the target: y(s, a, r, s′) ← r + γ max_{a′∈A} Qproxy(s′, a′), for (s, a, r, s′) ∈ B
13:    Update the Q-function: Qi(s, a) ← (1 − α) Qi(s, a) + α y(s, a, r, s′), for (s, a, r, s′) ∈ B
14:  end for
15:  s ← s′
16: end for

Remark. We use random sampling in Eqn. (10) for two reasons. First, the characterization of the approximation error in Eqn. (9) is noisy in nature. In particular, Monte Carlo returns computed over a finite-length testing trajectory may introduce empirical errors when estimating the underlying ground-truth value of Qπ, especially when the policy or the environment is stochastic. Thus, we use random sampling to ‘capture’ the impact of this noisy estimation. Second, in general it is infeasible to characterize the exact relationship between the estimation bias ZM and the ensemble size M. Without any further prior information beyond the bounds on the approximation error obtained in Theorem 1 and Theorem 2, the random sampling can be viewed as the ‘exploration’ in AdaEQ.
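For concreteness, a minimal sketch of the size-adaptation rule in Eq. (10) is given below; the uniform integer sampling mirrors rand(·, ·) in the text, while the function and argument names are illustrative assumptions.

    # A sketch of the ensemble-size adaptation rule in Eq. (10); c is the tolerance
    # parameter from the text, and the integer draws implement rand(a, b) over [a, b].
    import numpy as np

    def adapt_ensemble_size(M_prev, tau_tilde, c, N, rng=None):
        rng = rng or np.random.default_rng()
        if tau_tilde > c and M_prev + 1 <= N:        # large error: grow the in-target ensemble
            return int(rng.integers(M_prev + 1, N + 1))
        if tau_tilde < c and M_prev - 1 >= 2:        # small error: shrink it (but keep M >= 2)
            return int(rng.integers(2, M_prev))
        return M_prev                                 # otherwise keep the previous size

    # Example: with tolerance c = 0.3, a measured error of 0.8 triggers an increase.
    M = adapt_ensemble_size(M_prev=4, tau_tilde=0.8, c=0.3, N=10)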
4 Experimental Results

In this section, we evaluate the effectiveness of AdaEQ by answering the following questions: 1) Can AdaEQ minimize the estimation bias and further improve the performance in comparison to existing ensemble methods? 2) How does AdaEQ perform given different initial ensemble sizes? 3) How does the ‘tolerance’ parameter c affect the performance?

To make a fair comparison, we follow the setup of [6] and use the same code base to compare the performance of AdaEQ with REDQ [6] and Average-DQN (AVG) [2], on three MuJoCo continuous control tasks: Hopper, Ant and Walker2d. The same hyperparameters are used for all the algorithms. Specifically, we consider N = 10 Q-function approximators in total. The ensemble size is M = N = 10 for AVG, while the initial M for AdaEQ is set to 4. The ensemble size for REDQ is set to M = 2, which is the fine-tuned result from [6]. For all the experiments, we set the ‘tolerance’ parameter c in (10) to 0.3 and the length of the testing trajectories to H = 500. The ensemble size is updated according to (10) every 10 epochs in AdaEQ. The discount factor is 0.99. Implementation details and hyperparameter settings are fully described in Appendix B.1.

Evaluation of estimation bias. To investigate the impact of the adaptation mechanism in AdaEQ, we begin by examining how the estimation bias changes in the training process. After each epoch, we run an evaluation episode of length H = 500, starting from an initial state sampled from the replay buffer. We calculate the estimation error based on the difference between the Monte Carlo return value and the Q-estimates, as in [33, 6, 14]. For each experiment, the shaded area represents one standard deviation of the average evaluation over 3 training seeds. As shown in the first row of Figure 5, AdaEQ can reduce the estimation bias to nearly zero in all three benchmark environments, in contrast to REDQ and AVG. The AVG approach tends to result in positive bias in all three environments during the learning procedure, which is consistent with the results obtained in [6]. Notably, it can be clearly observed from the Hopper and Walker2d tasks that the estimation bias for AdaEQ is driven to be close to zero, thanks to the dynamic ensemble size adjustment during the learning process. Meanwhile, in the Ant task, even though the fine-tuned REDQ can mitigate the overestimation bias, it tends to have underestimation bias, whereas AdaEQ is able to keep the bias closer to zero (gray dashed line) even under a ‘non-optimal’ choice of the initial ensemble size.

Figure 6: Impacts of the initial ensemble size M on the performance of AdaEQ in the Hopper-v2 and Ant tasks. (a) Hopper-v2 task: average returns over different initial ensemble sizes M = 2, 3, 5. (b) Hopper-v2 task: estimation bias over different initial ensemble sizes M = 2, 3, 5. (c) Ant task: average returns over different initial ensemble sizes M = 3, 5, 7. (d) Ant task: estimation bias over different initial ensemble sizes M = 3, 5, 7. The solid lines are the mean values and the shaded areas are the standard deviations across the three ensemble size settings.

Performance on the MuJoCo benchmark. We evaluate the policy return after each epoch by calculating the undiscounted sum of rewards when running the current learnt policy [6, 14]. The second row of Figure 5 demonstrates the average return during the learning process for AdaEQ, AVG and REDQ, respectively. In particular, we use the fine-tuned ensemble size for REDQ [6]. As observed in Figure 5, AdaEQ can efficiently learn a better policy and achieve a higher average return in all three challenging MuJoCo tasks, without searching for the optimal parameters beforehand for each of them. Meanwhile, AdaEQ only incurs slightly more computation time than REDQ in most MuJoCo tasks.
Due to space limitation, we have relegated the wall-clock training time comparison to Table 2 in Appendix B.2.

Robustness to the initial ensemble size. Next, we investigate the performance of AdaEQ under different settings of the initial ensemble size in the Hopper-v2 and Ant environments, i.e., M = (2, 3, 5) and M = (3, 5, 7). As shown in Figure 6, AdaEQ consistently outperforms the others in terms of the average performance over the different setups, which implies the benefit of adjusting the in-target ensemble size based on the error feedback. It can be seen from the shaded area that the performance of AVG and REDQ may vary significantly when the ensemble size changes.

Figure 7: Impacts of parameter c on the performance of AdaEQ in the Hopper-v2 and Ant tasks. (a) Hopper-v2 task: average returns over different parameter c in AdaEQ. (b) Hopper-v2 task: estimation bias over different parameter c in AdaEQ. (c) Ant task: average returns over different parameter c in AdaEQ. (d) Ant task: estimation bias over different parameter c in AdaEQ. The initial ensemble size is set to M = 4. The mean value and the standard deviation are evaluated across three training seeds.

Robustness to parameter c in a wide range. As illustrated in Figure 7, we conduct an ablation study by setting c = 0.001, 0.3, 0.5, 0.7, 1.5 on the Hopper-v2 and Ant tasks. Clearly, AdaEQ works better for c ∈ [0.3, 0.7]. The experiment results corroborate our analysis in Section 3.1 that our algorithm is not sensitive to parameter c over a wide range. As mentioned in Section 3.2, when parameter c is close to zero, AdaEQ degenerates to Maxmin, which is known to suffer from underestimation bias when the ensemble size is large [6]. Further, as illustrated in Figure 7(b), when c is large, e.g., c = 1.5, the ensemble size would gradually decrease to the minimum and hence would not be able to throttle the overestimation tendency during the learning process.

5 Conclusion

Determining the right ensemble size is highly nontrivial for ensemble Q-learning to correct the overestimation without introducing significant underestimation bias. In this paper, we devise AdaEQ, a generalized ensemble Q-learning method for ensemble size adaptation, aiming to minimize the estimation bias during the learning process. More specifically, by establishing the upper bound and the lower bound of the estimation bias, we first characterize the impact of both the ensemble size and the time-varying approximation error on the estimation bias. Building upon the theoretical results, we treat the estimation bias minimization as an adaptive control problem, and take the approximation error as feedback to adjust the ensemble size adaptively during the learning process. Our experiments show that AdaEQ consistently and effectively outperforms the existing ensemble methods, such as REDQ and AVG, in MuJoCo tasks, corroborating the benefit of using AdaEQ to drive the estimation bias close to zero.

There are many important avenues for future work. In terms of the bounds on the estimation bias, our analysis builds upon the standard independence assumption, as in previous works. It is worth noting that in practice the errors are generally correlated [29], and the theoretical analysis for this case remains not well understood. Additionally, in this work we use a heuristic tolerance parameter in the adaptation mechanism to strike a balance between controlling the positive bias and the negative bias.
It is of great interest to develop a systematic approach to optimize this tolerance parameter.

Acknowledgments and Disclosure of Funding

We thank the anonymous reviewers for their constructive comments. This work was supported in part by NSF grants CNS-2130125, CCSS-2121222 and CNS-2003081.
1. What is the focus and contribution of the paper regarding Q-function estimation in DRL? 2. What are the strengths of the proposed approach, particularly in addressing the overestimation problem? 3. Do you have any concerns or questions about the paper's experiments and their setup? 4. How does the reviewer assess the clarity, quality, originality, and reproducibility of the paper's content? 5. Are there any specific suggestions or requests for additional information or ablation studies that the reviewer would like to see in the paper?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a DRL method, AdaEQ, to solve the overestimation problem of the Q-function. The experiment results show that AdaEQ can outperform the baseline methods.

Review
The overestimation of Q is the problem targeted by this paper. The proposed AdaEQ solves this problem in a two-stage procedure: approximation error characterization and ensemble size adaptation to minimize the bias. This is a natural solution for the targeted problem.

Originality: The authors propose to optimize the estimation of the Q-value through modifying the number of the ensembled Q_{n}. The authors provide the guarantees of estimation error on Q_i in the proposed method. Unlike the existing ensemble methods, AdaEQ can adaptively choose the ensemble size during the learning process. The experiment results prove that AdaEQ works.

Clarity: The theoretical elaboration part (section 3) of this paper is very detailed and logical. But the exposition of the experiment is relatively less comprehensive. The introduction of the environment and the description of the task are unclear. It is unclear whether the authors have made any adjustments and modifications to the environment of MuJoCo. It would be better to add some necessary description in the experimental background. Overall, the paper is well-written and easy to understand.

Quality: The theoretical part of this article is comprehensive. The experimental part should be enhanced by adding some necessary ablation studies, for example, the effect of the choice of c (in Eq. 10) on the result. Similarly for Eq. 10, whether different sampling methods for M affect the result.

Below are some questions and suggestions for this paper. In Fig. 5, in the Ant environment, AdaEQ has a pronounced fluctuation. Considering that Ant is a relatively stable environment, this phenomenon should not happen. How to explain this phenomenon? Does AdaEQ need more compute time than REDQ? If yes, how much more time does AdaEQ take than REDQ in each environment step? Is AdaEQ sensitive to c (in Eq. 10)? How does AdaEQ perform on other MuJoCo tasks? Such as humanoid, hammock? If you have done this experiment, you can also show the results, even if the results are not competitive.

Overall, the experimental results can support the claim that AdaEQ can indeed improve performance on some tasks. Although this method is straightforward, it also provides a very general idea for optimization, which is very impressive. The experimental part of this article could be enhanced to provide strong support for the claim.

=== Post Rebuttal ===
The paper proposes a novel idea with a comprehensive theoretical analysis. The experiment can provide basic support to its claims.
NIPS
Title Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback

Abstract The ensemble method is a promising way to mitigate the overestimation issue in Q-learning, where multiple function approximators are used to estimate the action values. It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the ‘right’ ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process. To tackle this challenge, we first derive an upper bound and a lower bound on the estimation bias, based on which the ensemble size is adapted to drive the bias to be nearly zero, thereby coping with the impact of the time-varying approximation errors accordingly. Motivated by the theoretic findings, we advocate that the ensemble method can be combined with Model Identification Adaptive Control (MIAC) for effective ensemble size adaptation. Specifically, we devise Adaptive Ensemble Q-learning (AdaEQ), a generalized ensemble method with two key steps: (a) approximation error characterization, which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias. Extensive experiments are carried out to show that AdaEQ can improve the learning performance over the existing methods on the MuJoCo benchmark.

1 Introduction

Thanks to recent advances in function approximation methods using deep neural networks [20], Q-learning [35] has been widely used to solve reinforcement learning (RL) problems in a variety of applications, e.g., robotic control [23, 13], path planning [15, 24] and production scheduling [34, 21]. Despite the great success, it is well recognized that Q-learning may suffer from the notorious overestimation bias [29, 33, 32, 10, 37], which would significantly impede the learning efficiency. Recent work [9, 11] indicates that this problem also persists in the actor-critic setting. To address this issue, the ensemble method [16, 1, 26, 7] has emerged as a promising solution in which multiple Q-function approximators are used to obtain better estimates of the action values. Needless to say, the ensemble size, i.e., the number of Q-function approximators used in the target, has an intrinsic impact on Q-learning. Notably, it is shown in [6, 17] that while a large ensemble size could completely remove the overestimation bias, it may go to the other extreme and result in underestimation bias and unstable training, which is clearly not desirable. Therefore, instead of simply increasing the ensemble size to mitigate the overestimation issue, a fundamental question to ask is: “Is it possible to determine the right ensemble size on the fly so as to minimize the estimation bias?”

Some existing ensemble methods [2, 19, 17] adopt a trial-and-error strategy to search for the ensemble size, which would be time-consuming and require a lot of human engineering for different RL tasks. The approximation error of the Q-function during the learning process plays a nontrivial role in the selection of the ensemble size, since it directly impacts the Q-target estimation accuracy. This, however, remains not well understood.
In particular, the fact that the approximation error is time-varying, due to the iterative nature of Q-learning [36, 5], gives rise to the question of whether a fixed ensemble size should be used in the learning process. To answer this question, we show in Section 2.2 that using a fixed ensemble size is likely to lead to either overestimation or underestimation bias, and the bias may shift between overestimation and underestimation because of the time-varying approximation error, calling for an adaptive ensemble size so as to drive the bias close to zero based on the underlying learning dynamics.

Thus motivated, in this work we study effective ensemble size adaptation to minimize the estimation bias, which hinges heavily on the time-varying approximation errors during the learning process. To this end, we first characterize the relationship among the ensemble size, the function approximation errors, and the estimation bias, by deriving an upper bound and a lower bound on the estimation bias. Our findings reveal that the ensemble size should be selected adaptively in a way that copes with the impact of the time-varying approximation errors. Building upon the theoretical results, we cast the estimation bias minimization as an adaptive control problem where the approximation error during the learning process is treated as the control object, and the ensemble size is adapted based on the feedback of the control output, i.e., the value of the approximation error from the last iteration. The key idea in this approach is inspired by the classic Model Identification Adaptive Control (MIAC) framework [3, 25], where at each step the current system identification of the control object is fed back to adjust the controller, and consequently a new control signal is devised following the updated control law.

One main contribution of this work lies in the development of AdaEQ, a generalized ensemble method for ensemble size adaptation, aiming to minimize the estimation bias during the learning process. Specifically, the approximation error in each iteration is quantified by computing the difference between the Q-estimates and the Monte Carlo return using the current learned policy over a testing trajectory [29, 17]. Inspired by MIAC, the approximation error serves as the feedback to adapt the ensemble size. Besides, we introduce a ‘tolerance’ parameter in the adaptation mechanism to balance the control tendency towards positive or negative bias during the learning process. In this way, AdaEQ can encompass other existing ensemble methods as special cases, including Maxmin [17], by properly setting this hyperparameter. A salient feature of the feedback-adaptation mechanism is that it can be used effectively in conjunction with both standard Q-learning [22] and actor-critic methods [28, 11]. Experimental results on the continuous-control MuJoCo benchmark [30] show that AdaEQ is robust to the initial ensemble size in different environments, and achieves higher average return, thanks to keeping the estimation bias close to zero, when compared to state-of-the-art ensemble methods such as REDQ [6] and Average-DQN [2].

Related Work. Bias-corrected Q-learning [18] introduces a bias correction term to reduce the overestimation bias. Double Q-learning is proposed in [12, 33] to address the overestimation issue in vanilla Q-learning, by leveraging two independent Q-function approximators to estimate the maximum Q-function value in the target.
S-DQN and S-DDQN use the softmax operator instead of the max operator to further reduce the overestimation bias [27]. Self-correcting Q-learning aims to balance the underestimation in double Q-learning and overestimation in classic Q learning by introducing a new self-correcting estimator [38]. Weighted Q-learning proposes a new estimator based on the weighted average of the sample means, and conducts the empirical analysis in the discrete action space [8]. Weighted Double Q-learning [37] uses the Q-approximator together with the double Q-approximator to balance the overestimation and underestimation bias. Nevertheless, acquiring independent approximators is often intractable for large-scale tasks. To resolve this issue, the Twin-Delayed Deep Deterministic policy gradient algorithm (TD3) [9] and Soft Actor-Critic (SAC) [11] have been devised to take the minimum over two approximators in the target network. Along a different avenue, the ensemble-based methods generalize double Q-learning to correct the overestimation bias by increasing the number of Q-function approximators. Particularly, AverageDQN [2] takes the average of multiple approximators in the target to reduce the overestimation error, and Random Ensemble Mixture (REM) [1] estimates the target value using the random convex combination of the approximators. It is worth noting that both Average-DQN and REM cannot completely eliminate the overestimation bias. Most recently, Maxmin Q-learning [17] defines a proxy Q-function by choosing the minimum Q-value for each action among all approximators. Similar to Maxmin, Random Ensembled Q-learning (REDQ) [6] formulates the proxy Q-function by choosing only a subset of the ensemble. Nevertheless, both Maxmin and REDQ use a fixed ensemble size. In this study, we introduce an adaptation mechanism for the ensemble size to drive the estimation bias to be close to zero, thereby mitigating the possible overestimation and underestimation issues. 2 Impact of Ensemble Size on Estimation Bias 2.1 Ensemble Q-learning As is standard, we consider a Markov decision process (MDP) defined by the tuple 〈S,A, P, r, γ〉, where S and A denote the state space and the action space, respectively. P (s′|s, a) : S ×A× S → [0, 1] denotes the probability transition function from current state s to the next state s′ by taking action a ∈ A, and r(s, a) : S ×A → R is the corresponding reward. γ ∈ (0, 1] is the discount factor. At each step t, the agent observes the state st, takes an action at following a policy π : S → A, receives the reward rt, and evolves to a new state st+1. The objective is to find an optimal policy π∗ to maximize the discounted return R = ∑∞ t=0 γ trt. By definition, Q-function is the expected return when choosing action a in state s and following with the policy π: Qπ = E[ ∑∞ t=0 γ trt(st, at)|s0 = s, a0 = a]. Q-learning is an off-policy value-based method that aims at learning the optimal Q-function Q∗ : S ×A → R, where the optimal Q-function is a fixed point of the Bellman optimality equation [4]: T Q∗(s, a) = r(s, a) + γEs′∼P (s′|s,a) [maxa′∈AQ∗(s′, a′)] . (1) Given a transition sample (s, a, r, s′), the Bellman operator can be employed to update the Q-function as follows: Q(s, a)← (1− α)Q(s, a) + αy, y := r + γmaxa′∈AQ(s′, a′). (2) where α is the step size and y is the target. Under some conditions, Q-learning can converge to the optimal fixed-point solution asymptotically [31]. 
In deep Q-learning, the Q-function is approximated by a neural network, and it has been shown [33] that the approximation error, amplified by the max operator in the target, results in the overestimation phenomena. One promising approach to address this issue is the ensemble Q-learning method, which is the main subject of this study. The Ensemble Method. Specifically, the ensemble method maintains N separate approximators Q1, Q2, · · · , QN of the Q-function, based on which a subset of these approximators is used to devise a proxy Q-function. For example, in Average-DQN [2], the proxy Q-function is obtained by computing the average value over all N approximators to reduce the overestimation bias: Qave(·) = 1N ∑N i=1Q i(·). However, the average operation cannot completely eliminate the overestimation bias, since the average of the overestimation bias is still positive. To tackle this challenge, Maxmin [17] and REDQ [6] take the ‘min’ operation over a subsetM ( size M ) of the ensemble: Qproxy(·) = mini∈MQi(·). (3) The target value in the ensemble-based Q-learning is then computed as y = r +maxa′∈AQproxy. It is worth noting that in the existing studies, the in-target ensemble size M , pre-determined for a given environment, remain fixed in the learning process. 2.2 An Illustrative Example It is known that the determination of the optimal ensemble size is highly nontrivial, and a poor choice of the ensemble size would degrade the performance of ensemble Q-learning significantly [17]. As mentioned earlier, it is unclear a priori if a fixed ensemble size should be used in the learning process. (a) Function approximation. (b) Five function approximators. (c) Ensemble via ‘min’ operator. (d) Estimation error. Figure 2: Illustration of estimation bias in the ensemble method. (a) Each approximator is fitted to the noisy values (green dots) at the sampled states independently. (b) Five Q-function approximators are obtained for both actions (green lines and blue lines). (c) Apply the min operator over M (M = 3) randomly selected approximators to obtain a proxy approximator for each action. (d) The estimation error is obtained by comparing the underlying true value (purple line in (a)) and the target value using the proxy approximator. (a) Estimation bias vs. τ . (b) Estimation bias vs. numbers of actions. Figure 3: Illustration of overestimation and underestimation phenomena for different ensemble sizes. In what follows, we use an example to illustrate the potential pitfalls in the ensemble methods by examining the sensitivity of the estimation bias to the ensemble size [6, 17]. Along the same line as in [33], we consider an example with a real-valued continuous state space. In this example, there are two discrete actions available at each state and the optimal action values depend only on the state, i.e., in each state both actions result in the same optimal value Q∗(s, ·), which is assumed to be Q∗(s, ·) = sin(s). Figure 2 demonstrates how the ensemble method is carried out in four stages: (I) For each Q-function approximator Qi, i = 1, 2, · · · , 5, we first generate 10 noisy action-value samples independently (green dots in Figure 2(a)). Let ei(s, a) denote the approximation error of Qi: Qi(s, a) = Q∗(s, a) + ei(s, a), with ei(s, a) ∼ U(−τi, τi), (4) where τi ∼ U(0, τ) models the approximation error distribution for the i-th approximator. 
Note that the assumption on the uniform error distribution is commonly used to indicate that both positive and negative approximation error are possible in Q-function approximators [29][17][6]. (II) Next, Figure 2(b) illustrates the ensemble (N = 5) of approximators for two actions, where each approximator is a 6-degree polynomial that fits the noisy values at sampled states. (III) Following the same ensemble approach in [6][17], we randomly choose M approximators from the ensemble and take the minimum over them to obtain a proxy approximator for each action, resulting in the dashed lines in Figure 2(c). (IV) Finally, the maximum action value of the proxy approximator is used as the target to update the current approximators. To evaluate the target value estimation error, Figure 2(d) depicts the difference between the obtained target value and the underlying true value when using different ensemble size M . As in [33], we utilize the average estimation error (i.e., estimation bias) to quantify the performance of current approximators. For example, when the ensemble size M = 2, the red line is above zero for most states, implying the overestimation tendency in the target. Clearly, Figure 2(d) indicates that the estimation bias is highly dependent on the ensemble size, and even a change of M can lead the shift from overestimation to underestimation. Since the Q-function approximation error of each approximator changes over time in the training process [5] (examples for this phenomenon can be found in Appendix B.3), we next analyze the impact of the ensemble size on the estimation bias under different approximation error distributions. As shown in Figure 3(a), with a fixed ensemble size M , the estimation bias may shift between positive and negative and be ‘dramatically’ large for some error distributions. In light of this observation, departing from using a fixed size, we advocate to adapt the in-target ensemble size, e.g., set M = 4 when the noise parameter τ > 1.5 and M = 3 otherwise. The estimation bias resulted by this adaptation mechanism is much closer to zero. Besides, Figure 3(b) characterizes the estimation bias under different action spaces, which is also important considering that different tasks normally have different action spaces and the number of available actions may vary in different states even for the same task. The adaptive ensemble approach is clearly more robust in our setting. In a nutshell, both Figure 3(a) and 3(b) suggest that a fixed ensemble size would not work well to minimize the estimation bias during learning for different tasks. This phenomenon has also been observed in the empirical results [17]. In stark contrast, adaptively changing the ensemble size based on the approximation error indeed can help to reduce the estimation bias in different settings. 3 Adaptive Ensemble Q-learning (AdaEQ) Motivated by the illustrative example above, we next devise a generalized ensemble method with ensemble size adaptation to drive the estimation bias to be close to zero, by taking into consideration the time-varying feature of the approximation error during the learning process. Formally, we consider an ensemble of N Q-function approximators, i.e., {Qi}Ni=1, with each approximator initialized independently and randomly. We use the minimum of a subsetM of the N approximators in the Q-learning target as in (3), where the size of subset |M| =M ≤ N . 
3.1 Lower Bound and Upper Bound on Estimation Bias We first answer the following key question:“How does the approximation error, together with the ensemble size, impact the estimation bias?". To this end, based on [29], we characterize the intrinsic relationship among the ensemble size M , the Q-function approximation error and the estimation bias, and derive an upper bound and a lower bound on the bias in the tabular case. Without loss of generality, we assume that for each state s, there are A available actions. Let ei(s, a) , Qi(s, a)−Qπ(s, a) be the approximation error for the i-th Q-function approximator, where Qπ(s, a) is the ground-truth of the Q-value for the current policy π. By using (3) to compute the target Q-value, we define the estimation error in the Bellman equation for transition (s, a, r, s′) as ZM : ZM ,r + γmaxa′∈Amini∈MQi(s′, a′)− (r +maxa′∈AQπ(s′, a′)) . Here a positive E[ZM ] implies overestimation bias while a negative E[ZM ] implies underestimation bias. Note that we use the subscription M to emphasize that the estimation bias is intimately related to M . The case with two distributions for Q-function approximation errors. For ease of exposition, we first consider the case when the approximation errors follow one of the two uniform distributions, as illustrated in Figure 4(a). Specifically, assume that for i ∈ K ⊂ M with |K| = K, ei(s, a) ∼ U(−τ1, τ1) , and for i ∈M \ K, ei(s, a) ∼ U(−τ2, τ2). Without loss of generality, we assume that τ1 > τ2 > 0. It is worth noting that in [29][17][6], the approximation error for all approximators is assumed to follow the same uniform distribution, i.e., τ1 = τ2, which is clearly more restrictive than the case here with two error distributions. For instance, when only one approximator is chosen to be updated at each step [17], the approximation error distribution of this approximator would change over time and hence differ from the others. We have the following results on the upper bound and lower bound of the estimation bias E[ZM ]. Theorem 1. For the case with two distributions for Q-function approximation errors, the estimation bias E[ZM ] satisfies that E[ZM ] ≥ γ (τ1(1− fAK − 2fAM ) + τ2(1− fAM )) ; (5) E[ZM ] ≤ γ ( τ1 + τ2(1− 2fA(M−K) − (1− βK)A) ) , (6) where βK = ( 12 − τ2 2τ1 )K , fAK = 1KB( 1 K , A + 1) = Γ(A+1)Γ(1+ 1K ) Γ(A+ 1K +1) = A(A−1)···1 (A+ 1K )(A+ 1 K−1)···(1+ 1 K ) with B(·, ·) being the Beta function. (a) Q-function approximation error distributions. (b) Lower bound and upper bound on Estimation bias. (c) Impact of approximation error on the estimation bias: overestimation vs. underestimation. Figure 4: Illustration of upper bounds and lower bounds on estimation bias in Theorem 1. (a) The case where the approximation errors of the Q-approximators can be categorized into two uniform distributions. (b) The lower bound and the upper bound corresponding to (5) and (6), for given τ1, τ2, A: The blue point represents the ‘critical’ point where decreasing the ensemble size may lead overestimation (the lower bound is positive); and the red point denotes the ‘critical’ point where increasing ensemble size may lead underestimation (the upper bound is negative). (c) Due to time-varying feature of the approximation errors, the blue curve and the red curve depict the ‘critical’ points for the lower bound and the upper bound, respectively. The proof of Theorem 1 is relegated to the Appendix A.1. Theorem 1 reveals that the estimation bias depends on the ensemble size as well as the approximation error distributions. 
To get a more concrete sense of Theorem 1, we consider an example where τ1 = 0.5 and τ2 = 0.4, as depicted in Figure 4(b), and characterize the relationship between the estimation bias and the ensemble size M . Notably, the estimation bias turns negative when the ensemble size M > Mu = 9 (red point: the value of M where the upper bound is 0) and becomes positive when M < Ml = 4 (blue point: the value of M where the lower bound is 0). In Figure 4(c), we fix τ2 = 0.4 and show how those two critical points (Mu and Ml) change along with τ1. Here the red shaded area indicates underestimation bias when M > Mu, and the blue shaded area indicates overestimation bias when M < Ml. Clearly, in order to avoid the positive bias (blue shaded area), it is desirable to increase the ensemble size when the approximation error is large, e.g., τ1 > 0.6. On the other hand, decreasing the ensemble size is more preferred to avoid underestimation (red shaded area) when the approximation error is small, e.g., τ1 < 0.6. The general case with heterogeneous distributions for Q-function approximation errors. Next, we consider a general case, in which the approximation errors for different approximators {Qi} are independently but non-identically distributed. Specifically, we assume that the approximation error ei(s, a) for Qi(s, a), i = 1, 2, · · · ,M , follows the uniform distribution U(−τi, τi), where τi > 0. We use a multitude of tools to devise the upper bound and lower bound on the estimation bias E[ZM ]. As expected, this general case is technically more challenging and the bounds would be not as sharp as in the special case with two distributions. Theorem 2. For the general case with heterogeneous error distributions, the estimation bias E[ZM ] satisfies that E[ZM ] ≥γ ( τmin − τmax(fA(M−1) + 2fAM ) ) ; (7) E[ZM ] ≤γ (2τmin − τmax (fAM − 2gAM )) , (8) where τmin = mini τi and τmax = maxi τi. gAM = 1M I0.5( 1 M , A + 1) with I0.5(·, ·) being the regularized incomplete Beta function. Observe from Theorem 2 that the lower bound in (7) is positive when τmin(1 − 2fAM ) > τmaxfA(M−1), indicating the existence of the overestimation issue. On thew contrary, the upper bound in (8) is negative when 2τmin < τmax (1 + fAM − 2gAM ), pointing to the underestimation issue. In general, when τmin is large enough, decreasing ensemble size M is likely to cause overestimation, e.g., E[ZM ] ≥ 0 when M < 2. On the other hand, when τmax is small enough, increasing ensemble size M is likely to cause underestimation, e.g., E[ZM ] ≤ 0 when M is sufficiently large. Determination of parameter c. As illustrated in Figure 4(c), for given approximation error characterization, a threshold c can be chosen such that increasing the ensemble size would help to correct the overestimation bias when τmax > c, and decreasing the ensemble size is more conductive to mitigate the underestimation bias when τmax < c. Specifically, parameter c is determined in two steps. Step 1: To estimate approximation error distribution parameters τmin and τmax by running an ensemble based algorithm (e.g., Algorithm 1) for a few epochs with a fixed ensemble size. In particular, a testing trajectory is generated from a random initial state using the current policy to compute the (discounted) MC return Qπ and the estimated Q-function value Qi, i = 1, 2, · · · , N . We next fit a uniform distribution model U(−τi, τi) of the approximation error (Qi −Qπ) for each Q-function approximator Qi. 
Then, τmin and τmax can be obtained by choosing the minimum and maximum values among τi, i = 1, 2, · · · , N . Step 2: To obtain the upper bound and the lower bound in Theorem 2 by using {τmin, τmax, A, γ}. We investigate the relationship between ensemble size M and the estimation bias by studying the bounds and identifying the ‘critical’ points as illustrated in Figure 4(b). Observe that a ‘proper’ ensemble size should be chosen between the ‘critical’ points, so as to reduce the overestimation and underestimation bias as much as possible. Since the approximation error is time-varying during the learning process, these two ‘critical’ points vary along with {τmax} and {τmin} (as shown in Figure 4(c)). Intuitively, it is desirable to drive the system to avoid both the red region (underestimation) and the blue region (overestimation). It can be clearly observed that there is a wide range of choice for parameter c (e.g., [0.5, 0.7] in Figure 4(c)) for the algorithm to stay in the white region, indicating that even though the pre-determined c above is not optimized, it can still serve the purpose well. The proof of Theorem 2 and numerical illustration can be found in the Appendix A.3. Summarizing, both Theorem 1 and Theorem 2 indicate that the approximation error characterization plays a critical role in controlling the estimation bias. In fact, both the lower bound and the upper bound in Theorem 2 depends on τmin and τmax, which are time-varying due to the iterative nature of the learning process, indicating that it is sensible to use an adaptive ensemble size to drive the estimation bias to be close to zero, as much as possible. 3.2 Practical Implementation Based on the theoretic findings above, we next propose AdaEQ that adapts the ensemble size based on the approximation error feedback on the fly, so as to drive the estimation bias close to zero. Particularly, as summarized in Algorithm 1, AdaEQ introduces two important steps at each iteration t, i.e., approximation error characterization (line 3) and ensemble size adaptation (line 4), which can be combined with the framework of either Q-learning or actor-critic methods. Characterization of the time-varying approximation error. As outlined in Algorithm 1, the first key step is to quantify the time-varying approximation error at each iteration t (for ease of exposition, we omit the subscript t when it is clear from the context). Along the same line as in [9, 33, 6], we run a testing trajectory of length H , T = (s0, a0, s1, a1, · · · , sH , aH), from a random initial state using the current policy π, and compute the discounted Monte Carlo return Qπ(s, a) and the estimated Q-function value Qi(s, a), i = 1, · · · , N for each visited state-action pair (s, a). The empirical standard derivation of Qi(s, a)−Qπ(s, a) can be then obtained to quantify the approximation error of each approximator Qi. Then, we take the average of the empirical standard derivation over all approximators to characterize the approximation error at the current iteration t, i.e., τ̃t = 1 N ∑N i=1 std(Q i(s, a)−Qπ(s, a)), (s, a) ∈ T . (9) Error-feedback based ensemble size adaptation. 
Based on the theoretic results and Figure 4(c), we update the ensemble size M at each iteration t based on the approximation error (9), using the following piecewise function: Mt = rand(Mt−1 + 1, N) τ̃t−1 > c, Mt−1 + 1 ≤ N rand(2,Mt−1 − 1) τ̃t−1 < c, Mt−1 − 1 ≥ 2 Mt−1 otherwise, (10) where rand(·, ·) is a uniform random function and c is a pre-determined parameter to capture the ‘tolerance’ of the estimation bias during the adaptation process. Recall that parameter c can be determined by using the upper bound and the lower bound in Theorem 2 (Theorem 1). Particularly, a larger c implies that more tolerance of the underestimation bias is allowed when adapting the ensemble size Mt. A smaller c, on the other hand, admits more tolerance of the overestimation. In this way, AdaEQ can be viewed as a generalization of Maxmin and REDQ with ensemble size adaptation. In particular, when c = 0 and Mt+1 ≤ N , the adaptation mechanism would increase the ensemble size until it is equal to N . Consequently, AdaEQ degenerates to Maxmin [17] where M = N , leading Algorithm 1 Adaptive Ensemble Q-learning (AdaEQ) 1: Empty replay buffer D, step size α, number of the approximators N , initial in-target ensemble size M0 ≤ N , initial state s. Initialize N approximators with different training samples. 2: for Iteration t = 1, 2, 3, · · · do 3: Identify approximation error parameter τ̃t using (9) 4: Update ensemble size Mt according to (10) 5: Sample a setM of Mt different indices from {1, 2, · · · , N} 6: Obtain the proxy approximator Qproxy(s, a)← mini∈MQi(s, a), ∀a ∈ A 7: Choose action a from current state s using policy derived from Qproxy (e.g., ε-greedy) 8: Take action a, observe r and next state s′ 9: Update replay buffer D ← D ∪ {s, a, r, s′} 10: for i = 1, 2, · · · , N do 11: Sample a random mini-batch B from D 12: Compute the target: y(s, a, r, s′)← r + γmaxa′∈AQproxy(s′, a′), (s, a, r, s′) ∈ B 13: Update Q-function Qi: Qi(s, a)← (1− α)Qi(s, a) + αy(s, a, r, s′), (s, a, r, s′) ∈ B 14: end for 15: s← s′ 16: end for to possible underestimation bias. Meantime, when c is set sufficiently large, the ensemble size M would decrease until reaching the minimal value 2 during the learning process, where the estimation bias would be positive according to Theorem 2. In this case, AdaEQ is degenerated to REDQ [6] with ensemble size M = 2. We show the convergence analysis of AdaEQ in Appendix A.5. Remark. We use random sampling in Eqn. (10) for two reasons. Firstly, the characterization of the approximation error in Eqn. (9) is noisy in nature. In particular, Monte Carlo returns with finite-length testing trajectory may introduce empirical errors when estimating the underlying ground true value of Qπ . This noisy estimation is often the case when the policy is not deterministic, or the environment is not deterministic. Thus, we use random sampling to ‘capture’ the impact of this noisy estimation. Secondly, in general it is infeasible to characterize the exact relationship between estimation bias ZM and ensemble size M . Without any further prior information except from the bounds we obtained in Theorem 1 and Theorem 2 about the approximation error, the random sampling can be viewed as the ‘exploration’ in AdaEQ. 4 Experimental Results In this section, we evaluate the effectiveness of AdaEQ by answering the following questions: 1) Can AdaEQ minimize the estimation bias and further improve the performance in comparison to existing ensemble methods? 2) How does AdaEQ perform given different initial ensemble sizes? 
4 Experimental Results

In this section, we evaluate the effectiveness of AdaEQ by answering the following questions: 1) Can AdaEQ minimize the estimation bias and further improve the performance in comparison to existing ensemble methods? 2) How does AdaEQ perform given different initial ensemble sizes? 3) How does the 'tolerance' parameter c affect the performance?

To make a fair comparison, we follow the setup of [6] and use the same code base to compare the performance of AdaEQ with REDQ [6] and Average-DQN (AVG) [2] on three MuJoCo continuous control tasks: Hopper, Ant and Walker2d. The same hyperparameters are used for all the algorithms. Specifically, we consider N = 10 Q-function approximators in total. The ensemble size is M = N = 10 for AVG, while the initial M for AdaEQ is set to 4. The ensemble size for REDQ is set to M = 2, which is the fine-tuned result from [6]. For all the experiments, we set the 'tolerance' parameter c in (10) to 0.3 and the length of the testing trajectories to H = 500. The ensemble size is updated according to (10) every 10 epochs in AdaEQ. The discount factor is 0.99. Implementation details and hyperparameter settings are fully described in Appendix B.1.

Evaluation of estimation bias. To investigate the impact of the adaptation mechanism in AdaEQ, we begin by examining how the estimation bias changes in the training process. After each epoch, we run an evaluation episode of length H = 500, starting from an initial state sampled from the replay buffer. We calculate the estimation error based on the difference between the Monte Carlo return value and the Q-estimates, as in [33, 6, 14]. For each experiment, the shaded area represents one standard deviation of the average evaluation over 3 training seeds. As shown in the first row of Figure 5, AdaEQ can reduce the estimation bias to nearly zero in all three benchmark environments, in contrast to REDQ and AVG. The AVG approach tends to result in positive bias in all three environments during the learning procedure, which is consistent with the results obtained in [6]. Notably, it can be clearly observed from the Hopper and Walker2d tasks that the estimation bias for AdaEQ is driven close to zero, thanks to the dynamic ensemble size adjustment during the learning process. Meanwhile, in the Ant task, even though the fine-tuned REDQ can mitigate the overestimation bias, it tends to exhibit underestimation bias, whereas AdaEQ is able to keep the bias closer to zero (gray dashed line) even under a 'non-optimal' choice of the initial ensemble size.

Figure 6: Impacts of the initial ensemble size M on the performance of AdaEQ in the Hopper-v2 and Ant tasks. (a) Hopper-v2: average returns over different initial ensemble sizes M = 2, 3, 5. (b) Hopper-v2: estimation bias over different initial ensemble sizes M = 2, 3, 5. (c) Ant: average returns over different initial ensemble sizes M = 3, 5, 7. (d) Ant: estimation bias over different initial ensemble sizes M = 3, 5, 7. The solid lines are the mean values and the shaded areas are the standard deviations across the three ensemble size settings.

Performance on MuJoCo benchmark. We evaluate the policy return after each epoch by calculating the undiscounted sum of rewards when running the current learned policy [6, 14]. The second row of Figure 5 shows the average return during the learning process for AdaEQ, AVG and REDQ, respectively. In particular, we use the fine-tuned ensemble size for REDQ [6]. As observed in Figure 5, AdaEQ can efficiently learn a better policy and achieve a higher average return in all three challenging MuJoCo tasks, without searching for the optimal parameters beforehand for each of them. Meanwhile, AdaEQ only incurs slightly more computation time than REDQ in most MuJoCo tasks. Due to space limitations, the wall-clock training time comparison is relegated to Table 2 in Appendix B.2.
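For concreteness, the sketch below mirrors the bias-evaluation protocol described above: it forms the discounted Monte Carlo return along a recorded evaluation episode via the backward recursion G_t = r_t + γ G_{t+1} and reports the mean gap between the Q-estimates and these returns. The array layout and function name are our own assumptions; the experiments above use the REDQ code base rather than this sketch.

```python
import numpy as np

def estimation_bias(rewards, q_values, gamma=0.99):
    """Mean (Q-estimate minus discounted Monte Carlo return) over one evaluation episode.

    rewards:  array of shape (H,), reward r_t at each step of the episode.
    q_values: array of shape (H,), Q-estimate for the (s_t, a_t) actually taken.
    """
    H = len(rewards)
    mc_returns = np.zeros(H)
    running = 0.0
    for t in reversed(range(H)):          # G_t = r_t + gamma * G_{t+1}
        running = rewards[t] + gamma * running
        mc_returns[t] = running
    return float(np.mean(q_values - mc_returns))   # > 0: overestimation, < 0: underestimation

# Toy usage with synthetic data (not the paper's results).
rng = np.random.default_rng(1)
print(estimation_bias(rewards=rng.uniform(size=500), q_values=60 * rng.uniform(size=500)))
```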
Robustness to the initial ensemble size. Next, we investigate the performance of AdaEQ under different settings of the initial ensemble size in the Hopper-v2 and Ant environments, i.e., M = (2, 3, 5) and M = (3, 5, 7), respectively. As shown in Figure 6, AdaEQ consistently outperforms the others in terms of the average performance over the different setups, which implies the benefit of adjusting the in-target ensemble size based on the error feedback. It can be seen from the shaded areas that the performance of AVG and REDQ may vary significantly when the ensemble size changes.

Figure 7: Impacts of parameter c on the performance of AdaEQ in the Hopper-v2 and Ant tasks. (a) Hopper-v2: average returns over different parameter c. (b) Hopper-v2: estimation bias over different parameter c. (c) Ant: average returns over different parameter c. (d) Ant: estimation bias over different parameter c. The initial ensemble size is set to M = 4. The mean value and the standard deviation are evaluated across three training seeds.

Robustness to parameter c in a wide range. As illustrated in Figure 7, we conduct an ablation study by setting c = 0.001, 0.3, 0.5, 0.7, 1.5 on the Hopper-v2 and Ant tasks. Clearly, AdaEQ works better for c ∈ [0.3, 0.7]. The experimental results corroborate our analysis in Section 3.1 that our algorithm is not sensitive to parameter c over a wide range. As mentioned in Section 3.2, when parameter c is close to zero, AdaEQ degenerates to Maxmin, which is known to suffer from underestimation bias when the ensemble size is large [6]. Further, as illustrated in Figure 7(b), when c is large, e.g., c = 1.5, the ensemble size gradually decreases to the minimum and hence cannot throttle the overestimation tendency during the learning process.

5 Conclusion

Determining the right ensemble size is highly nontrivial for ensemble Q-learning to correct the overestimation without introducing significant underestimation bias. In this paper, we devise AdaEQ, a generalized ensemble Q-learning method for ensemble size adaptation, aiming to minimize the estimation bias during the learning process. More specifically, by establishing the upper bound and the lower bound on the estimation bias, we first characterize the impact of both the ensemble size and the time-varying approximation error on the estimation bias. Building upon the theoretical results, we treat estimation bias minimization as an adaptive control problem, and take the approximation error as feedback to adjust the ensemble size adaptively during the learning process. Our experiments show that AdaEQ consistently and effectively outperforms existing ensemble methods, such as REDQ and AVG, in MuJoCo tasks, corroborating the benefit of using AdaEQ to drive the estimation bias close to zero. There are many important avenues for future work. In terms of the bounds on the estimation bias, our analysis builds upon the standard independence assumption as in previous works. It is worth noting that in practice the errors are generally correlated [29], and the theoretical analysis for this case remains not well understood. Additionally, in this work we use a heuristic tolerance parameter in the adaptation mechanism to strike a balance between controlling the positive bias and the negative bias.
It is of great interest to develop a systematic approach to optimize this tolerance parameter. Acknowledgments and Disclosure of Funding We thank the anonymous reviewers for their constructive comments. This work was supported in part by NSF grants CNS-2130125, CCSS-2121222 and CNS-2003081.
1. What is the focus of the paper regarding Q-learning? 2. What are the strengths of the proposed approach, particularly in terms of its adaptive nature? 3. How does the paper address the issue of estimation bias in Q-learning? 4. Can you provide more details about the experimental results demonstrated in the paper? 5. How does the reviewer assess the originality, quality, clarity, and significance of the paper's content?
Summary Of The Paper Review
Summary Of The Paper To mitigate the overestimation issue in Q-learning, this paper proposes an adaptive way to control the ensemble size of Q-function approximators. Experimentally, this paper demonstrates how the ensemble size affects the estimation bias. Theoretically, this paper gives the upper bound and lower bound of the estimation bias according to the ensemble size. Finally, this paper evaluates the proposed algorithm in the MuJoCo environment. Review Originality: This paper studies the effect of ensemble size on the estimation bias. There are several works studying the ensemble size of Q-function approximators. But this is the first to study how to adaptively control ensemble size on the fly. The originality is incremental. Quality: In response to the estimation bias, this paper discusses the upper and lower bounds experimentally and theoretically. Clarity: This paper is well written, clear and easy to read. Significance: The ensemble size heavily affects the estimation bias. This paper proposes a randomized algorithm which updates the ensemble size according to a pre-determined "tolerance". It is a rough way to adjust the ensemble size. In total, this paper makes an incremental contribution.
NIPS
Title Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback

Abstract The ensemble method is a promising way to mitigate the overestimation issue in Q-learning, where multiple function approximators are used to estimate the action values. It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the 'right' ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process. To tackle this challenge, we first derive an upper bound and a lower bound on the estimation bias, based on which the ensemble size is adapted to drive the bias to be nearly zero, thereby coping with the impact of the time-varying approximation errors accordingly. Motivated by the theoretical findings, we advocate that the ensemble method can be combined with Model Identification Adaptive Control (MIAC) for effective ensemble size adaptation. Specifically, we devise Adaptive Ensemble Q-learning (AdaEQ), a generalized ensemble method with two key steps: (a) approximation error characterization, which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias. Extensive experiments are carried out to show that AdaEQ improves the learning performance over existing methods on the MuJoCo benchmark.

35th Conference on Neural Information Processing Systems (NeurIPS 2021).

1 Introduction

Thanks to recent advances in function approximation methods using deep neural networks [20], Q-learning [35] has been widely used to solve reinforcement learning (RL) problems in a variety of applications, e.g., robotic control [23, 13], path planning [15, 24] and production scheduling [34, 21]. Despite the great success, it is well recognized that Q-learning may suffer from the notorious overestimation bias [29, 33, 32, 10, 37], which would significantly impede the learning efficiency. Recent work [9, 11] indicates that this problem also persists in the actor-critic setting. To address this issue, the ensemble method [16, 1, 26, 7] has emerged as a promising solution in which multiple Q-function approximators are used to obtain better estimates of the action values. Needless to say, the ensemble size, i.e., the number of Q-function approximators used in the target, has an intrinsic impact on Q-learning. Notably, it is shown in [6, 17] that while a large ensemble size could completely remove the overestimation bias, it may go to the other extreme and result in underestimation bias and unstable training, which is clearly not desirable. Therefore, instead of simply increasing the ensemble size to mitigate the overestimation issue, a fundamental question to ask is: "Is it possible to determine the right ensemble size on the fly so as to minimize the estimation bias?" Some existing ensemble methods [2, 19, 17] adopt a trial-and-error strategy to search for the ensemble size, which would be time-consuming and require a lot of human engineering for different RL tasks. The approximation error of the Q-function during the learning process plays a nontrivial role in the selection of the ensemble size, since it directly impacts the Q-target estimation accuracy. This, however, remains not well understood.
In particular, the fact that the approximation error is time-varying, due to the iterative nature of Q-learning [36, 5], raises the question of whether a fixed ensemble size should be used in the learning process. To answer this question, we show in Section 2.2 that using a fixed ensemble size is likely to lead to either overestimation or underestimation bias, and the bias may shift between overestimation and underestimation because of the time-varying approximation error, calling for an adaptive ensemble size so as to drive the bias close to zero based on the underlying learning dynamics. Thus motivated, in this work we study effective ensemble size adaptation to minimize the estimation bias, which hinges heavily on the time-varying approximation errors during the learning process. To this end, we first characterize the relationship among the ensemble size, the function approximation errors, and the estimation bias, by deriving an upper bound and a lower bound on the estimation bias. Our findings reveal that the ensemble size should be selected adaptively in a way to cope with the impact of the time-varying approximation errors. Building upon the theoretical results, we cast the estimation bias minimization as an adaptive control problem where the approximation error during the learning process is treated as the control object, and the ensemble size is adapted based on the feedback of the control output, i.e., the value of the approximation error from the last iteration. The key idea in this approach is inspired by the classic Model Identification Adaptive Control (MIAC) framework [3, 25], where at each step the current system identification of the control object is fed back to adjust the controller, and consequently a new control signal is devised following the updated control law.

One main contribution of this work lies in the development of AdaEQ, a generalized ensemble method for ensemble size adaptation, aiming to minimize the estimation bias during the learning process. Specifically, the approximation error in each iteration is quantified by computing the difference between the Q-estimates and the Monte Carlo return under the current learned policy over a testing trajectory [29, 17]. Inspired by MIAC, the approximation error serves as the feedback to adapt the ensemble size. Besides, we introduce a 'tolerance' parameter in the adaptation mechanism to balance the control tendency towards positive or negative bias during the learning process. In this way, AdaEQ can encompass other existing ensemble methods as special cases, including Maxmin [17], by properly setting this hyperparameter. A salient feature of the feedback-adaptation mechanism is that it can be used effectively in conjunction with both standard Q-learning [22] and actor-critic methods [28, 11]. Experimental results on the continuous-control MuJoCo benchmark [30] show that AdaEQ is robust to the initial ensemble size in different environments, and achieves higher average return, thanks to keeping the estimation bias close to zero, when compared to state-of-the-art ensemble methods such as REDQ [6] and Average-DQN [2].

Related Work. Bias-corrected Q-learning [18] introduces a bias correction term to reduce the overestimation bias. Double Q-learning is proposed in [12, 33] to address the overestimation issue in vanilla Q-learning, by leveraging two independent Q-function approximators to estimate the maximum Q-function value in the target.
S-DQN and S-DDQN use the softmax operator instead of the max operator to further reduce the overestimation bias [27]. Self-correcting Q-learning aims to balance the underestimation in double Q-learning and the overestimation in classic Q-learning by introducing a new self-correcting estimator [38]. Weighted Q-learning proposes a new estimator based on the weighted average of the sample means, and conducts an empirical analysis in the discrete action space [8]. Weighted Double Q-learning [37] uses the Q-approximator together with the double Q-approximator to balance the overestimation and underestimation bias. Nevertheless, acquiring independent approximators is often intractable for large-scale tasks. To resolve this issue, the Twin-Delayed Deep Deterministic policy gradient algorithm (TD3) [9] and Soft Actor-Critic (SAC) [11] have been devised to take the minimum over two approximators in the target network. Along a different avenue, ensemble-based methods generalize double Q-learning to correct the overestimation bias by increasing the number of Q-function approximators. In particular, Average-DQN [2] takes the average of multiple approximators in the target to reduce the overestimation error, and Random Ensemble Mixture (REM) [1] estimates the target value using a random convex combination of the approximators. It is worth noting that both Average-DQN and REM cannot completely eliminate the overestimation bias. Most recently, Maxmin Q-learning [17] defines a proxy Q-function by choosing the minimum Q-value for each action among all approximators. Similar to Maxmin, Randomized Ensembled Double Q-learning (REDQ) [6] formulates the proxy Q-function by choosing only a subset of the ensemble. Nevertheless, both Maxmin and REDQ use a fixed ensemble size. In this study, we introduce an adaptation mechanism for the ensemble size to drive the estimation bias to be close to zero, thereby mitigating the possible overestimation and underestimation issues.

2 Impact of Ensemble Size on Estimation Bias

2.1 Ensemble Q-learning

As is standard, we consider a Markov decision process (MDP) defined by the tuple 〈S, A, P, r, γ〉, where S and A denote the state space and the action space, respectively. P(s′|s, a) : S × A × S → [0, 1] denotes the transition probability from the current state s to the next state s′ when taking action a ∈ A, and r(s, a) : S × A → R is the corresponding reward. γ ∈ (0, 1] is the discount factor. At each step t, the agent observes the state s_t, takes an action a_t following a policy π : S → A, receives the reward r_t, and evolves to a new state s_{t+1}. The objective is to find an optimal policy π∗ that maximizes the discounted return R = ∑_{t=0}^{∞} γ^t r_t. By definition, the Q-function is the expected return when choosing action a in state s and thereafter following the policy π: Q^π(s, a) = E[∑_{t=0}^{∞} γ^t r_t(s_t, a_t) | s_0 = s, a_0 = a]. Q-learning is an off-policy value-based method that aims at learning the optimal Q-function Q^∗ : S × A → R, where the optimal Q-function is a fixed point of the Bellman optimality equation [4]:

T Q^∗(s, a) = r(s, a) + γ E_{s′∼P(s′|s,a)}[max_{a′∈A} Q^∗(s′, a′)].   (1)

Given a transition sample (s, a, r, s′), the Bellman operator can be employed to update the Q-function as follows:

Q(s, a) ← (1 − α) Q(s, a) + α y,  y := r + γ max_{a′∈A} Q(s′, a′),   (2)

where α is the step size and y is the target. Under some conditions, Q-learning converges to the optimal fixed-point solution asymptotically [31].
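As a minimal illustration of the update rule (2), the sketch below applies one tabular Bellman update. The table layout and the toy numbers are our own assumptions; deep Q-learning replaces this table with a neural network, as discussed next.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular application of Eq. (2): Q(s,a) <- (1-alpha)*Q(s,a) + alpha*y."""
    y = r + gamma * np.max(Q[s_next])      # target: r + gamma * max_a' Q(s', a')
    Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * y
    return Q

# Toy usage on a 5-state, 2-action table (illustrative numbers only).
Q = np.zeros((5, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=3)
```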
In deep Q-learning, the Q-function is approximated by a neural network, and it has been shown [33] that the approximation error, amplified by the max operator in the target, results in the overestimation phenomenon. One promising approach to address this issue is the ensemble Q-learning method, which is the main subject of this study.

The Ensemble Method. Specifically, the ensemble method maintains N separate approximators Q^1, Q^2, · · · , Q^N of the Q-function, based on which a subset of these approximators is used to devise a proxy Q-function. For example, in Average-DQN [2], the proxy Q-function is obtained by averaging all N approximators to reduce the overestimation bias: Q_ave(·) = (1/N) ∑_{i=1}^{N} Q^i(·). However, the averaging operation cannot completely eliminate the overestimation bias, since the average of the overestimation bias is still positive. To tackle this challenge, Maxmin [17] and REDQ [6] take the 'min' operation over a subset M (of size M) of the ensemble:

Q_proxy(·) = min_{i∈M} Q^i(·).   (3)

The target value in ensemble-based Q-learning is then computed as y = r + max_{a′∈A} Q_proxy(s′, a′). It is worth noting that in the existing studies, the in-target ensemble size M, pre-determined for a given environment, remains fixed in the learning process.

2.2 An Illustrative Example

It is known that the determination of the optimal ensemble size is highly nontrivial, and a poor choice of the ensemble size can degrade the performance of ensemble Q-learning significantly [17]. As mentioned earlier, it is unclear a priori if a fixed ensemble size should be used in the learning process.

Figure 2: Illustration of estimation bias in the ensemble method. (a) Function approximation: each approximator is fitted to the noisy values (green dots) at the sampled states independently. (b) Five Q-function approximators are obtained for both actions (green lines and blue lines). (c) Ensemble via the 'min' operator: the min operator is applied over M (M = 3) randomly selected approximators to obtain a proxy approximator for each action. (d) Estimation error: obtained by comparing the underlying true value (purple line in (a)) and the target value computed from the proxy approximator.

Figure 3: Illustration of overestimation and underestimation phenomena for different ensemble sizes. (a) Estimation bias vs. τ. (b) Estimation bias vs. number of actions.

In what follows, we use an example to illustrate the potential pitfalls in ensemble methods by examining the sensitivity of the estimation bias to the ensemble size [6, 17]. Along the same lines as in [33], we consider an example with a real-valued continuous state space. In this example, there are two discrete actions available at each state and the optimal action values depend only on the state, i.e., in each state both actions result in the same optimal value Q^∗(s, ·), which is assumed to be Q^∗(s, ·) = sin(s). Figure 2 demonstrates how the ensemble method is carried out in four stages: (I) For each Q-function approximator Q^i, i = 1, 2, · · · , 5, we first generate 10 noisy action-value samples independently (green dots in Figure 2(a)). Let e^i(s, a) denote the approximation error of Q^i:

Q^i(s, a) = Q^∗(s, a) + e^i(s, a),  with e^i(s, a) ∼ U(−τ_i, τ_i),   (4)

where τ_i ∼ U(0, τ) models the approximation error distribution for the i-th approximator.
Note that the assumption of a uniform error distribution is commonly used to indicate that both positive and negative approximation errors are possible in Q-function approximators [29, 17, 6]. (II) Next, Figure 2(b) illustrates the ensemble (N = 5) of approximators for the two actions, where each approximator is a degree-6 polynomial that fits the noisy values at the sampled states. (III) Following the same ensemble approach as in [6, 17], we randomly choose M approximators from the ensemble and take the minimum over them to obtain a proxy approximator for each action, resulting in the dashed lines in Figure 2(c). (IV) Finally, the maximum action value of the proxy approximator is used as the target to update the current approximators. To evaluate the target value estimation error, Figure 2(d) depicts the difference between the obtained target value and the underlying true value for different ensemble sizes M. As in [33], we utilize the average estimation error (i.e., the estimation bias) to quantify the performance of the current approximators. For example, when the ensemble size is M = 2, the red line is above zero for most states, implying an overestimation tendency in the target. Clearly, Figure 2(d) indicates that the estimation bias is highly dependent on the ensemble size, and even a change of M can lead to a shift from overestimation to underestimation.

Since the Q-function approximation error of each approximator changes over time in the training process [5] (examples of this phenomenon can be found in Appendix B.3), we next analyze the impact of the ensemble size on the estimation bias under different approximation error distributions. As shown in Figure 3(a), with a fixed ensemble size M, the estimation bias may shift between positive and negative and be 'dramatically' large for some error distributions. In light of this observation, departing from using a fixed size, we advocate adapting the in-target ensemble size, e.g., setting M = 4 when the noise parameter τ > 1.5 and M = 3 otherwise. The estimation bias resulting from this adaptation mechanism is much closer to zero. Besides, Figure 3(b) characterizes the estimation bias under different action spaces, which is also important considering that different tasks normally have different action spaces and the number of available actions may vary in different states even for the same task. The adaptive ensemble approach is clearly more robust in our setting. In a nutshell, both Figure 3(a) and 3(b) suggest that a fixed ensemble size would not work well to minimize the estimation bias during learning for different tasks. This phenomenon has also been observed in the empirical results of [17]. In stark contrast, adaptively changing the ensemble size based on the approximation error can indeed help to reduce the estimation bias in different settings.

3 Adaptive Ensemble Q-learning (AdaEQ)

Motivated by the illustrative example above, we next devise a generalized ensemble method with ensemble size adaptation to drive the estimation bias to be close to zero, taking into consideration the time-varying nature of the approximation error during the learning process. Formally, we consider an ensemble of N Q-function approximators, i.e., {Q^i}_{i=1}^{N}, with each approximator initialized independently and randomly. We use the minimum over a subset M of the N approximators in the Q-learning target as in (3), where the subset size is |M| = M ≤ N.
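Before turning to the formal bounds, the following Monte Carlo sketch estimates the bias E[Z_M] of the max-min target under per-approximator uniform errors, in the spirit of the Section 2.2 example. It is our own illustration, not the authors' code; the τ values, the assumption that the true Q-value is identical across actions, and the sample count are arbitrary choices.

```python
import numpy as np

def maxmin_bias(taus, M, num_actions, trials=200_000, gamma=0.99, seed=0):
    """Monte Carlo estimate of E[Z_M] = gamma * E[max_a min_{i in M} e^i(a)] when
    Q^i(s', a) = Q^pi(s', a) + e^i(a), e^i(a) ~ U(-tau_i, tau_i), and the true
    Q^pi(s', a) is the same for every action (as in the Section 2.2 example)."""
    rng = np.random.default_rng(seed)
    taus = np.asarray(taus)[:M]                             # use the first M approximators
    errors = rng.uniform(-1.0, 1.0, size=(trials, M, num_actions)) * taus[None, :, None]
    z = gamma * np.max(np.min(errors, axis=1), axis=1)      # min over the ensemble, max over actions
    return float(z.mean())

# Bias as a function of the in-target ensemble size M (illustrative numbers only).
taus = [0.5, 0.5, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
for M in range(2, 11):
    print(M, round(maxmin_bias(taus, M, num_actions=2), 4))
```

The sign change of the printed estimates as M grows is the overestimation-to-underestimation transition that Section 3.1 bounds analytically.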
3.1 Lower Bound and Upper Bound on Estimation Bias

We first answer the following key question: "How does the approximation error, together with the ensemble size, impact the estimation bias?" To this end, based on [29], we characterize the intrinsic relationship among the ensemble size M, the Q-function approximation error and the estimation bias, and derive an upper bound and a lower bound on the bias in the tabular case. Without loss of generality, we assume that for each state s, there are A available actions. Let e^i(s, a) := Q^i(s, a) − Q^π(s, a) be the approximation error of the i-th Q-function approximator, where Q^π(s, a) is the ground-truth Q-value under the current policy π. Using (3) to compute the target Q-value, we define the estimation error in the Bellman equation for a transition (s, a, r, s′) as

Z_M := r + γ max_{a′∈A} min_{i∈M} Q^i(s′, a′) − (r + max_{a′∈A} Q^π(s′, a′)).

Here a positive E[Z_M] implies overestimation bias while a negative E[Z_M] implies underestimation bias. Note that we use the subscript M to emphasize that the estimation bias is intimately related to M.

The case with two distributions for Q-function approximation errors. For ease of exposition, we first consider the case where the approximation errors follow one of two uniform distributions, as illustrated in Figure 4(a). Specifically, assume that for i ∈ K ⊂ M with |K| = K, e^i(s, a) ∼ U(−τ1, τ1), and for i ∈ M \ K, e^i(s, a) ∼ U(−τ2, τ2). Without loss of generality, we assume that τ1 > τ2 > 0. It is worth noting that in [29, 17, 6], the approximation errors of all approximators are assumed to follow the same uniform distribution, i.e., τ1 = τ2, which is clearly more restrictive than the case here with two error distributions. For instance, when only one approximator is chosen to be updated at each step [17], the approximation error distribution of this approximator would change over time and hence differ from the others. We have the following results on the upper bound and lower bound of the estimation bias E[Z_M].

Theorem 1. For the case with two distributions for Q-function approximation errors, the estimation bias E[Z_M] satisfies

E[Z_M] ≥ γ (τ1 (1 − f_{AK} − 2 f_{AM}) + τ2 (1 − f_{AM}));   (5)
E[Z_M] ≤ γ (τ1 + τ2 (1 − 2 f_{A(M−K)} − (1 − β_K)^A)),   (6)

where β_K = (1/2 − τ2/(2τ1))^K and f_{AK} = (1/K) B(1/K, A + 1) = Γ(A + 1) Γ(1 + 1/K) / Γ(A + 1/K + 1) = [A(A − 1) · · · 1] / [(A + 1/K)(A + 1/K − 1) · · · (1 + 1/K)], with B(·, ·) being the Beta function.

Figure 4: Illustration of the upper and lower bounds on the estimation bias in Theorem 1. (a) The case where the approximation errors of the Q-approximators can be categorized into two uniform distributions. (b) The lower bound and the upper bound corresponding to (5) and (6), for given τ1, τ2, A: the blue point represents the 'critical' point where decreasing the ensemble size may lead to overestimation (the lower bound is positive), and the red point denotes the 'critical' point where increasing the ensemble size may lead to underestimation (the upper bound is negative). (c) Due to the time-varying nature of the approximation errors, the blue curve and the red curve depict the 'critical' points for the lower bound and the upper bound, respectively.

The proof of Theorem 1 is relegated to Appendix A.1. Theorem 1 reveals that the estimation bias depends on the ensemble size as well as the approximation error distributions.
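The bounds in Theorem 1 are straightforward to evaluate numerically. The sketch below computes f_{AK} via the Gamma-function identity above and scans the bounds (5)-(6), exactly as stated, over the ensemble size M; the choices of K and A are our own, and the printed numbers are illustrative only. The sign changes of the two bounds correspond to the 'critical' points discussed next.

```python
import numpy as np
from scipy.special import gamma as Gamma

def f(A, K):
    """f_{AK} = (1/K) * B(1/K, A+1) = Gamma(A+1) * Gamma(1 + 1/K) / Gamma(A + 1/K + 1)."""
    return Gamma(A + 1) * Gamma(1 + 1.0 / K) / Gamma(A + 1.0 / K + 1)

def theorem1_bounds(M, K, A, tau1, tau2, gam=0.99):
    """Lower and upper bounds (5)-(6) on E[Z_M], using the expressions as stated above."""
    beta_K = (0.5 - tau2 / (2.0 * tau1)) ** K
    lower = gam * (tau1 * (1 - f(A, K) - 2 * f(A, M)) + tau2 * (1 - f(A, M)))
    upper = gam * (tau1 + tau2 * (1 - 2 * f(A, M - K) - (1 - beta_K) ** A))
    return lower, upper

# Scan M for the tau1 = 0.5, tau2 = 0.4 example; K = 1 and A = 2 are our own choices.
for M in range(2, 12):
    lo, up = theorem1_bounds(M, K=1, A=2, tau1=0.5, tau2=0.4)
    print(M, round(lo, 3), round(up, 3))
```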
To get a more concrete sense of Theorem 1, we consider an example where τ1 = 0.5 and τ2 = 0.4, as depicted in Figure 4(b), and characterize the relationship between the estimation bias and the ensemble size M. Notably, the estimation bias turns negative when the ensemble size M > Mu = 9 (red point: the value of M where the upper bound is 0) and becomes positive when M < Ml = 4 (blue point: the value of M where the lower bound is 0). In Figure 4(c), we fix τ2 = 0.4 and show how these two critical points (Mu and Ml) change along with τ1. Here the red shaded area indicates underestimation bias when M > Mu, and the blue shaded area indicates overestimation bias when M < Ml. Clearly, in order to avoid the positive bias (blue shaded area), it is desirable to increase the ensemble size when the approximation error is large, e.g., τ1 > 0.6. On the other hand, decreasing the ensemble size is preferable to avoid underestimation (red shaded area) when the approximation error is small, e.g., τ1 < 0.6.

The general case with heterogeneous distributions for Q-function approximation errors. Next, we consider a general case, in which the approximation errors for different approximators {Q^i} are independently but non-identically distributed. Specifically, we assume that the approximation error e^i(s, a) of Q^i(s, a), i = 1, 2, · · · , M, follows the uniform distribution U(−τi, τi), where τi > 0. We use a multitude of tools to devise the upper bound and lower bound on the estimation bias E[Z_M]. As expected, this general case is technically more challenging and the bounds are not as sharp as in the special case with two distributions.

Theorem 2. For the general case with heterogeneous error distributions, the estimation bias E[Z_M] satisfies

E[Z_M] ≥ γ (τmin − τmax (f_{A(M−1)} + 2 f_{AM}));   (7)
E[Z_M] ≤ γ (2 τmin − τmax (f_{AM} − 2 g_{AM})),   (8)

where τmin = min_i τi, τmax = max_i τi, and g_{AM} = (1/M) I_{0.5}(1/M, A + 1) with I_{0.5}(·, ·) being the regularized incomplete Beta function.

Observe from Theorem 2 that the lower bound in (7) is positive when τmin(1 − 2 f_{AM}) > τmax f_{A(M−1)}, indicating the existence of the overestimation issue. On the contrary, the upper bound in (8) is negative when 2 τmin < τmax (1 + f_{AM} − 2 g_{AM}), pointing to the underestimation issue. In general, when τmin is large enough, decreasing the ensemble size M is likely to cause overestimation, e.g., E[Z_M] ≥ 0 when M < 2. On the other hand, when τmax is small enough, increasing the ensemble size M is likely to cause underestimation, e.g., E[Z_M] ≤ 0 when M is sufficiently large.

Determination of parameter c. As illustrated in Figure 4(c), for a given approximation error characterization, a threshold c can be chosen such that increasing the ensemble size helps to correct the overestimation bias when τmax > c, and decreasing the ensemble size is more conducive to mitigating the underestimation bias when τmax < c. Specifically, parameter c is determined in two steps. Step 1: Estimate the approximation error distribution parameters τmin and τmax by running an ensemble-based algorithm (e.g., Algorithm 1) for a few epochs with a fixed ensemble size. In particular, a testing trajectory is generated from a random initial state using the current policy to compute the (discounted) MC return Qπ and the estimated Q-function values Qi, i = 1, 2, · · · , N. We next fit a uniform distribution model U(−τi, τi) to the approximation error (Qi − Qπ) for each Q-function approximator Qi.
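A small sketch of Step 1, together with an evaluation of the Theorem 2 bounds it feeds into, is given below. Fitting U(−τi, τi) by the maximum absolute error and the synthetic inputs are our own modeling choices; the bound expressions (7)-(8) are used exactly as stated above, and the function and variable names are hypothetical.

```python
import numpy as np
from scipy.special import betainc, gamma as Gamma

def fit_tau_range(q_estimates, mc_returns):
    """Step 1: fit U(-tau_i, tau_i) to each approximator's errors; return (tau_min, tau_max).

    For a uniform model U(-tau, tau), the maximum absolute error is the natural estimate of tau.
    """
    taus = np.max(np.abs(q_estimates - mc_returns[None, :]), axis=1)
    return float(taus.min()), float(taus.max())

def theorem2_bounds(M, A, tau_min, tau_max, gam=0.99):
    """Bounds (7)-(8) on E[Z_M], with f_{AM} = B(1/M, A+1)/M and g_{AM} = I_{0.5}(1/M, A+1)/M."""
    f = lambda m: Gamma(A + 1) * Gamma(1 + 1.0 / m) / Gamma(A + 1.0 / m + 1)
    g = betainc(1.0 / M, A + 1, 0.5) / M
    lower = gam * (tau_min - tau_max * (f(M - 1) + 2 * f(M)))
    upper = gam * (2 * tau_min - tau_max * (f(M) - 2 * g))
    return lower, upper

# Toy usage with synthetic errors (numbers are illustrative only).
rng = np.random.default_rng(2)
tau_lo, tau_hi = fit_tau_range(rng.normal(size=(10, 500)), rng.normal(size=500))
print(theorem2_bounds(M=5, A=2, tau_min=tau_lo, tau_max=tau_hi))
```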
1. What is the main contribution of the paper, and how does it address the problem of over/under estimation bias in ensemble learning? 2. What are the strengths and weaknesses of the proposed adaptive ensemble method? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. What are the limitations of the paper, particularly regarding its generative assumptions and experimental results? 5. Are there any questions or concerns raised by the reviewer that the authors should address in future work?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an adaptive ensemble size to prevent the over/under estimation bias that a too small/large ensemble creates due to time-varying approximation errors. They derive bounds on the estimation bias for an argmin ensemble method and propose an adaptive ensemble method to mitigate this over/under estimation bias problem.

Review Overall: The paper proposes an interesting problem and an interesting solution. However, the paper spends a lot of time using simple examples to show intuition where it is not clear whether these examples are "degenerate" examples (Figures 2 and 3 are unnecessary and so is Theorem 1 which is a simple case of Theorem 2). Furthermore they need to spend more time justifying their more unusual generative assumptions (Uniform noise and using the argmin ensemble estimator). It is not clear whether this bias changing "signs" above a certain ensemble size problem is an artifact of the unusual "min" estimator.

Major Comments: Line 122: Is the overestimation bias being positive proven in [16] or [6] or somewhere else, if so please state where and how. Otherwise, this seems like an extremely aggressive assumption. Or if by definition a bias being positive implies overestimation, then why can the bias not be negative? Is this not underestimation? Why is it assumed that the approximation error has Uniform noise, not Gaussian noise? Furthermore the variance of the distribution of the noise for each of the ensemble members depends heavily on \tau. For example if \tau is large, then the N \tau_i have higher variance, and this in turn makes the distributions of each error_i be more different from each other. If too large of an ensemble size causes underestimation bias, and this bias is negative according to Line 189, why does averaging not reduce bias overall? Furthermore, why is Z_M being studied for only the argmin ensemble method and not the averaging ensemble method in Line 188/189. If only the argmin estimator exhibits both positive and negative biases, wouldn't averaging an "ensemble" of those estimators create an unbiased estimator? What is K and what is M in the example described after Theorem 1? In the experiments in Figure 5, why is the bias almost always positive even for the "optimized" methods? Shouldn't the bias be as likely to be positive as negative if the methods have been "optimized" and centered? Also what is the very large dip (between 1 and 1.5) in the AdaEQ model for Ant?

Minor Comments: Line 106 missing "the" in "By definition, (the) Q-function is the expected return when" and this phrase is strangely worded "and following with the policy" Line 230: Typo "thew"
NIPS
Title Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback Abstract The ensemble method is a promising way to mitigate the overestimation issue in Q-learning, where multiple function approximators are used to estimate the action values. It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the ‘right’ ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process. To tackle this challenge, we first derive an upper bound and a lower bound on the estimation bias, based on which the ensemble size is adapted to drive the bias to be nearly zero, thereby coping with the impact of the time-varying approximation errors accordingly. Motivated by the theoretic findings, we advocate that the ensemble method can be combined with Model Identification Adaptive Control (MIAC) for effective ensemble size adaptation. Specifically, we devise Adaptive Ensemble Q-learning (AdaEQ), a generalized ensemble method with two key steps: (a) approximation error characterization which serves as the feedback for flexibly controlling the ensemble size, and (b) ensemble size adaptation tailored towards minimizing the estimation bias. Extensive experiments are carried out to show that AdaEQ can improve the learning performance than the existing methods for the MuJoCo benchmark. 1 Introduction Thanks to recent advances in function approximation methods using deep neural networks [20], Q-learning [35] has been widely used to solve reinforcement learning (RL) problems in a variety of applications, e.g., robotic control [23, 13], path planning [15, 24] and production scheduling [34, 21]. Despite the great success, it is well recognized that Q-learning may suffer from the notorious overestimation bias [29, 33, 32, 10, 37], which would significantly impede the learning efficiency. Recent work [9, 11] indicates that this problem also persists in the actor-critic setting. To address this issue, the ensemble method [16, 1, 26, 7] has emerged as a promising solution in which multiple Q-function approximators are used to get better estimation of the action values. Needless to say, the ensemble size, i.e., the number of Q-function approximators used in the target, has intrinsic impact on Q-learning. Notably, it is shown in [6, 17] that while a large ensemble size could completely remove the overestimation bias, it may go to the other extreme and result in underestimation bias and unstable training, which is clearly not desirable. Therefore, instead of simply increasing the ensemble 35th Conference on Neural Information Processing Systems (NeurIPS 2021). size to mitigate the overestimation issue, a fundamental question to ask is:“ Is it possible to determine the right ensemble size on the fly so as to minimize the estimation bias?” Some existing ensemble methods [2, 19, 17] adopt a trial-and-error strategy to search for the ensemble size, which would be time-consuming and require a lot of human engineering for different RL tasks. The approximation error of the Q-function during the learning process plays a nontrivial role in the selection of the ensemble size, since it directly impacts the Q-target estimation accuracy. This however remains not well understood. 
In particular, the fact that the approximation error is time-varying, due to the iterative nature of Q-learning [36, 5], gives rise to the question that whether a fixed ensemble size should be used in the learning process. To answer this question, we show in Section 2.2 that using a fixed ensemble size is likely to lead to either overestimation or underestimation bias, and the bias may shift between overestimation and underestimation because of the time-varying approximation error, calling for an adaptive ensemble size so as to drive the bias close to zero based on the underlying learning dynamics. Thus motivated, in this work we study effective ensemble size adaptation to minimize the estimation bias that hinges heavily on the time-varying approximation errors during the learning process. To this end, we first characterize the relationship among the ensemble size, the function approximation errors, and the estimation bias, by deriving an upper bound and a lower bound on the estimation bias. Our findings reveal that the ensemble size should be selected adaptively in a way to cope with the impact of the time-varying approximation errors. Building upon the theoretic results, we cast the estimation bias minimization as an adaptive control problem where the approximation error during the learning process is treated as the control object, and the ensemble size is adapted based on the feedback of the control output, i.e., the value of the approximation error from the last iteration. The key idea in this approach is inspired from the classic Model Identification Adaptive Control (MIAC) framework [3, 25], where at each step the current system identification of the control object is fed back to adjust the controller, and consequently a new control signal is devised following the updated control law. One main contribution of this work lies in the development of AdaEQ, a generalized ensemble method for the ensemble size adaptation, aiming to minimize the estimation bias during the learning process. Specifically, the approximation error in each iteration is quantified by comparing the difference between the Q-estimates and the Monte Carlo return using the current learned policy over a testing trajectory [29, 17]. Inspired by MIAC, the approximation error serves as the feedback to adapt the ensemble size. Besides, we introduce a ‘tolerance’ parameter in the adaptation mechanism to balance the control tendency towards positive or negative bias during the learning process. In this way, AdaEQ can encompass other existing ensemble methods as special cases, including Maxmin [17], by properly setting this hyperparameter. A salient feature of the feedback-adaptation mechanism is that it can be used effectively in conjunction with both standard Q-learning [22] and actor-critic methods [28, 11]. Experimental results on the continuous-control MuJoCo benchmark [30] show that AdaEQ is robust to the initial ensemble size in different environments, and achieves higher average return, thanks to keeping the estimation bias close to zero, when compared to the state-of-the-art ensemble methods such as REDQ [6] and Average-DQN [2]. Related Work. Bias-corrected Q-learning [18] introduces the bias correction term to reduce the overestimation bias. Double Q-learning is proposed in [12, 33] to address the overestimation issue in vanilla Q-learning, by leveraging two independent Q-function approximators to estimate the maximum Q-function value in the target. 
S-DQN and S-DDQN use the softmax operator instead of the max operator to further reduce the overestimation bias [27]. Self-correcting Q-learning aims to balance the underestimation in double Q-learning and overestimation in classic Q learning by introducing a new self-correcting estimator [38]. Weighted Q-learning proposes a new estimator based on the weighted average of the sample means, and conducts the empirical analysis in the discrete action space [8]. Weighted Double Q-learning [37] uses the Q-approximator together with the double Q-approximator to balance the overestimation and underestimation bias. Nevertheless, acquiring independent approximators is often intractable for large-scale tasks. To resolve this issue, the Twin-Delayed Deep Deterministic policy gradient algorithm (TD3) [9] and Soft Actor-Critic (SAC) [11] have been devised to take the minimum over two approximators in the target network. Along a different avenue, the ensemble-based methods generalize double Q-learning to correct the overestimation bias by increasing the number of Q-function approximators. Particularly, AverageDQN [2] takes the average of multiple approximators in the target to reduce the overestimation error, and Random Ensemble Mixture (REM) [1] estimates the target value using the random convex combination of the approximators. It is worth noting that both Average-DQN and REM cannot completely eliminate the overestimation bias. Most recently, Maxmin Q-learning [17] defines a proxy Q-function by choosing the minimum Q-value for each action among all approximators. Similar to Maxmin, Random Ensembled Q-learning (REDQ) [6] formulates the proxy Q-function by choosing only a subset of the ensemble. Nevertheless, both Maxmin and REDQ use a fixed ensemble size. In this study, we introduce an adaptation mechanism for the ensemble size to drive the estimation bias to be close to zero, thereby mitigating the possible overestimation and underestimation issues. 2 Impact of Ensemble Size on Estimation Bias 2.1 Ensemble Q-learning As is standard, we consider a Markov decision process (MDP) defined by the tuple 〈S,A, P, r, γ〉, where S and A denote the state space and the action space, respectively. P (s′|s, a) : S ×A× S → [0, 1] denotes the probability transition function from current state s to the next state s′ by taking action a ∈ A, and r(s, a) : S ×A → R is the corresponding reward. γ ∈ (0, 1] is the discount factor. At each step t, the agent observes the state st, takes an action at following a policy π : S → A, receives the reward rt, and evolves to a new state st+1. The objective is to find an optimal policy π∗ to maximize the discounted return R = ∑∞ t=0 γ trt. By definition, Q-function is the expected return when choosing action a in state s and following with the policy π: Qπ = E[ ∑∞ t=0 γ trt(st, at)|s0 = s, a0 = a]. Q-learning is an off-policy value-based method that aims at learning the optimal Q-function Q∗ : S ×A → R, where the optimal Q-function is a fixed point of the Bellman optimality equation [4]: T Q∗(s, a) = r(s, a) + γEs′∼P (s′|s,a) [maxa′∈AQ∗(s′, a′)] . (1) Given a transition sample (s, a, r, s′), the Bellman operator can be employed to update the Q-function as follows: Q(s, a)← (1− α)Q(s, a) + αy, y := r + γmaxa′∈AQ(s′, a′). (2) where α is the step size and y is the target. Under some conditions, Q-learning can converge to the optimal fixed-point solution asymptotically [31]. 
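To make the Bellman target in (2) concrete, here is a minimal tabular sketch of the Q-learning update. The toy environment, the state/action indexing, and the step-size and discount values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step following Eq. (2):
    y = r + gamma * max_a' Q(s', a'); Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * y.
    Q is a (num_states, num_actions) array; s, a, s_next are integer indices."""
    y = r + gamma * np.max(Q[s_next])          # target from the Bellman operator
    Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * y
    return Q

# Toy usage on a 5-state, 2-action chain with made-up transitions and rewards.
rng = np.random.default_rng(0)
Q = np.zeros((5, 2))
for _ in range(1000):
    s = int(rng.integers(5))
    a = int(rng.integers(2))
    r = float(s == 4)                          # reward 1 only in the last state
    s_next = min(s + a, 4)                     # action 1 moves right, action 0 stays
    Q = q_learning_update(Q, s, a, r, s_next)
print(Q)
```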
In deep Q-learning, the Q-function is approximated by a neural network, and it has been shown [33] that the approximation error, amplified by the max operator in the target, results in the overestimation phenomenon. One promising approach to address this issue is the ensemble Q-learning method, which is the main subject of this study. The Ensemble Method. Specifically, the ensemble method maintains N separate approximators Q^1, Q^2, · · · , Q^N of the Q-function, based on which a subset of these approximators is used to devise a proxy Q-function. For example, in Average-DQN [2], the proxy Q-function is obtained by computing the average value over all N approximators to reduce the overestimation bias: Q_ave(·) = (1/N) ∑_{i=1}^{N} Q^i(·). However, the average operation cannot completely eliminate the overestimation bias, since the average of the overestimation bias is still positive. To tackle this challenge, Maxmin [17] and REDQ [6] take the ‘min’ operation over a subset M (of size M) of the ensemble: Q_proxy(·) = min_{i∈M} Q^i(·). (3) The target value in ensemble-based Q-learning is then computed as y = r + γ max_{a′∈A} Q_proxy(s′, a′). It is worth noting that in the existing studies, the in-target ensemble size M, pre-determined for a given environment, remains fixed in the learning process. 2.2 An Illustrative Example It is known that the determination of the optimal ensemble size is highly nontrivial, and a poor choice of the ensemble size would degrade the performance of ensemble Q-learning significantly [17]. As mentioned earlier, it is unclear a priori if a fixed ensemble size should be used in the learning process. Figure 2: Illustration of estimation bias in the ensemble method. (a) Function approximation: each approximator is fitted to the noisy values (green dots) at the sampled states independently. (b) Five Q-function approximators are obtained for both actions (green lines and blue lines). (c) Ensemble via the ‘min’ operator: the min operator is applied over M (M = 3) randomly selected approximators to obtain a proxy approximator for each action. (d) Estimation error: the estimation error is obtained by comparing the underlying true value (purple line in (a)) and the target value using the proxy approximator. Figure 3: Illustration of overestimation and underestimation phenomena for different ensemble sizes: (a) estimation bias vs. τ; (b) estimation bias vs. number of actions. In what follows, we use an example to illustrate the potential pitfalls in ensemble methods by examining the sensitivity of the estimation bias to the ensemble size [6, 17]. Along the same line as in [33], we consider an example with a real-valued continuous state space. In this example, there are two discrete actions available at each state and the optimal action values depend only on the state, i.e., in each state both actions result in the same optimal value Q∗(s, ·), which is assumed to be Q∗(s, ·) = sin(s). Figure 2 demonstrates how the ensemble method is carried out in four stages: (I) For each Q-function approximator Q^i, i = 1, 2, · · · , 5, we first generate 10 noisy action-value samples independently (green dots in Figure 2(a)). Let e^i(s, a) denote the approximation error of Q^i: Q^i(s, a) = Q∗(s, a) + e^i(s, a), with e^i(s, a) ∼ U(−τ_i, τ_i), (4) where τ_i ∼ U(0, τ) models the approximation error distribution for the i-th approximator. 
Note that the assumption on the uniform error distribution is commonly used to indicate that both positive and negative approximation error are possible in Q-function approximators [29][17][6]. (II) Next, Figure 2(b) illustrates the ensemble (N = 5) of approximators for two actions, where each approximator is a 6-degree polynomial that fits the noisy values at sampled states. (III) Following the same ensemble approach in [6][17], we randomly choose M approximators from the ensemble and take the minimum over them to obtain a proxy approximator for each action, resulting in the dashed lines in Figure 2(c). (IV) Finally, the maximum action value of the proxy approximator is used as the target to update the current approximators. To evaluate the target value estimation error, Figure 2(d) depicts the difference between the obtained target value and the underlying true value when using different ensemble size M . As in [33], we utilize the average estimation error (i.e., estimation bias) to quantify the performance of current approximators. For example, when the ensemble size M = 2, the red line is above zero for most states, implying the overestimation tendency in the target. Clearly, Figure 2(d) indicates that the estimation bias is highly dependent on the ensemble size, and even a change of M can lead the shift from overestimation to underestimation. Since the Q-function approximation error of each approximator changes over time in the training process [5] (examples for this phenomenon can be found in Appendix B.3), we next analyze the impact of the ensemble size on the estimation bias under different approximation error distributions. As shown in Figure 3(a), with a fixed ensemble size M , the estimation bias may shift between positive and negative and be ‘dramatically’ large for some error distributions. In light of this observation, departing from using a fixed size, we advocate to adapt the in-target ensemble size, e.g., set M = 4 when the noise parameter τ > 1.5 and M = 3 otherwise. The estimation bias resulted by this adaptation mechanism is much closer to zero. Besides, Figure 3(b) characterizes the estimation bias under different action spaces, which is also important considering that different tasks normally have different action spaces and the number of available actions may vary in different states even for the same task. The adaptive ensemble approach is clearly more robust in our setting. In a nutshell, both Figure 3(a) and 3(b) suggest that a fixed ensemble size would not work well to minimize the estimation bias during learning for different tasks. This phenomenon has also been observed in the empirical results [17]. In stark contrast, adaptively changing the ensemble size based on the approximation error indeed can help to reduce the estimation bias in different settings. 3 Adaptive Ensemble Q-learning (AdaEQ) Motivated by the illustrative example above, we next devise a generalized ensemble method with ensemble size adaptation to drive the estimation bias to be close to zero, by taking into consideration the time-varying feature of the approximation error during the learning process. Formally, we consider an ensemble of N Q-function approximators, i.e., {Qi}Ni=1, with each approximator initialized independently and randomly. We use the minimum of a subsetM of the N approximators in the Q-learning target as in (3), where the size of subset |M| =M ≤ N . 
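Before analyzing the bias, the following sketch shows how the min-over-subset target of (3) behaves numerically under a uniform error model in the spirit of (4) and the Section 2.2 example. It skips the polynomial fitting stage and simply assumes the true next-state values are zero, so the printed numbers are Monte Carlo estimates of the estimation bias for each in-target ensemble size; the values of N, τ, the number of actions, and the number of trials are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def proxy_target(q_ensemble, subset_idx, r, gamma=0.99):
    """Ensemble target as in Eq. (3): min over a subset of approximators,
    then max over actions. q_ensemble has shape (N, num_actions) and holds
    the approximators' values Q^i(s', .) at the next state."""
    q_proxy = q_ensemble[subset_idx].min(axis=0)   # min over i in the subset
    return r + gamma * q_proxy.max()               # r + gamma * max_a' Q_proxy(s', a')

def estimate_bias(N=10, M=3, num_actions=2, tau=1.0, trials=20_000, gamma=0.99):
    """Monte Carlo estimate of the bias when the true next-state values are all
    zero and errors follow Eq. (4): e^i ~ U(-tau_i, tau_i) with tau_i ~ U(0, tau)."""
    biases = np.empty(trials)
    for t in range(trials):
        tau_i = rng.uniform(0.0, tau, size=N)
        errors = rng.uniform(-1.0, 1.0, size=(N, num_actions)) * tau_i[:, None]
        subset = rng.choice(N, size=M, replace=False)
        y_ensemble = proxy_target(errors, subset, r=0.0, gamma=gamma)
        y_true = 0.0 + gamma * 0.0                 # true action values are zero here
        biases[t] = y_ensemble - y_true
    return biases.mean()

for M in (2, 3, 5, 8):
    print(M, round(estimate_bias(M=M), 4))
```

Running the sketch typically shows the bias drifting from positive (small M) towards negative (large M), which is the shift the adaptive mechanism is designed to avoid.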
3.1 Lower Bound and Upper Bound on Estimation Bias We first answer the following key question: “How does the approximation error, together with the ensemble size, impact the estimation bias?” To this end, based on [29], we characterize the intrinsic relationship among the ensemble size M, the Q-function approximation error and the estimation bias, and derive an upper bound and a lower bound on the bias in the tabular case. Without loss of generality, we assume that for each state s, there are A available actions. Let e^i(s, a) := Q^i(s, a) − Q^π(s, a) be the approximation error for the i-th Q-function approximator, where Q^π(s, a) is the ground truth of the Q-value for the current policy π. By using (3) to compute the target Q-value, we define the estimation error in the Bellman equation for transition (s, a, r, s′) as Z_M: Z_M := r + γ max_{a′∈A} min_{i∈M} Q^i(s′, a′) − (r + γ max_{a′∈A} Q^π(s′, a′)). Here a positive E[Z_M] implies overestimation bias while a negative E[Z_M] implies underestimation bias. Note that we use the subscript M to emphasize that the estimation bias is intimately related to M. The case with two distributions for Q-function approximation errors. For ease of exposition, we first consider the case when the approximation errors follow one of two uniform distributions, as illustrated in Figure 4(a). Specifically, assume that for i ∈ K ⊂ M with |K| = K, e^i(s, a) ∼ U(−τ_1, τ_1), and for i ∈ M \ K, e^i(s, a) ∼ U(−τ_2, τ_2). Without loss of generality, we assume that τ_1 > τ_2 > 0. It is worth noting that in [29, 17, 6], the approximation error for all approximators is assumed to follow the same uniform distribution, i.e., τ_1 = τ_2, which is clearly more restrictive than the case here with two error distributions. For instance, when only one approximator is chosen to be updated at each step [17], the approximation error distribution of this approximator would change over time and hence differ from the others. We have the following results on the upper bound and lower bound of the estimation bias E[Z_M]. Theorem 1. For the case with two distributions for Q-function approximation errors, the estimation bias E[Z_M] satisfies E[Z_M] ≥ γ ( τ_1 (1 − f_{A,K} − 2 f_{A,M}) + τ_2 (1 − f_{A,M}) ); (5) E[Z_M] ≤ γ ( τ_1 + τ_2 (1 − 2 f_{A,(M−K)} − (1 − β_K)^A) ), (6) where β_K = (1/2 − τ_2/(2τ_1))^K and f_{A,K} = (1/K) B(1/K, A+1) = Γ(A+1) Γ(1+1/K) / Γ(A+1/K+1) = [A(A−1)···1] / [(A+1/K)(A+1/K−1)···(1+1/K)], with B(·, ·) being the Beta function. Figure 4: Illustration of the upper and lower bounds on the estimation bias in Theorem 1. (a) Q-function approximation error distributions: the case where the approximation errors of the Q-approximators can be categorized into two uniform distributions. (b) The lower bound and the upper bound corresponding to (5) and (6), for given τ_1, τ_2, A: the blue point represents the ‘critical’ point where decreasing the ensemble size may lead to overestimation (the lower bound is positive), and the red point denotes the ‘critical’ point where increasing the ensemble size may lead to underestimation (the upper bound is negative). (c) Impact of the approximation error on the estimation bias (overestimation vs. underestimation): due to the time-varying nature of the approximation errors, the blue curve and the red curve depict the ‘critical’ points for the lower bound and the upper bound, respectively. The proof of Theorem 1 is relegated to Appendix A.1. Theorem 1 reveals that the estimation bias depends on the ensemble size as well as the approximation error distributions. 
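A small numerical sketch of the Theorem 1 bounds can help build intuition for the worked example that follows. The formulas below transcribe (5) and (6) exactly as stated above, using the Beta-function identity for f_{A,K}; the particular choices of A, K, γ, τ_1, and τ_2 are illustrative assumptions, and this is not the authors' code.

```python
from math import gamma as Gamma

def beta_fn(a, b):
    # Beta function B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    return Gamma(a) * Gamma(b) / Gamma(a + b)

def f(A, K):
    # f_{A,K} = (1/K) * B(1/K, A + 1), as defined in Theorem 1
    return beta_fn(1.0 / K, A + 1) / K

def beta_K(K, tau1, tau2):
    # beta_K = (1/2 - tau2 / (2 * tau1)) ** K
    return (0.5 - tau2 / (2.0 * tau1)) ** K

def theorem1_bounds(A, M, K, tau1, tau2, gamma_disc=0.99):
    """Lower bound (5) and upper bound (6) on E[Z_M] for the two-distribution case."""
    lower = gamma_disc * (tau1 * (1 - f(A, K) - 2 * f(A, M)) + tau2 * (1 - f(A, M)))
    upper = gamma_disc * (tau1 + tau2 * (1 - 2 * f(A, M - K)
                                         - (1 - beta_K(K, tau1, tau2)) ** A))
    return lower, upper

# Illustrative sweep over the in-target ensemble size M (A, K, and the taus are assumptions).
for M in range(3, 12):
    lo, up = theorem1_bounds(A=2, M=M, K=2, tau1=0.5, tau2=0.4)
    print(M, round(lo, 3), round(up, 3))
```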
To get a more concrete sense of Theorem 1, we consider an example where τ_1 = 0.5 and τ_2 = 0.4, as depicted in Figure 4(b), and characterize the relationship between the estimation bias and the ensemble size M. Notably, the estimation bias turns negative when the ensemble size M > M_u = 9 (red point: the value of M where the upper bound is 0) and becomes positive when M < M_l = 4 (blue point: the value of M where the lower bound is 0). In Figure 4(c), we fix τ_2 = 0.4 and show how those two critical points (M_u and M_l) change along with τ_1. Here the red shaded area indicates underestimation bias when M > M_u, and the blue shaded area indicates overestimation bias when M < M_l. Clearly, in order to avoid the positive bias (blue shaded area), it is desirable to increase the ensemble size when the approximation error is large, e.g., τ_1 > 0.6. On the other hand, decreasing the ensemble size is preferable to avoid underestimation (red shaded area) when the approximation error is small, e.g., τ_1 < 0.6. The general case with heterogeneous distributions for Q-function approximation errors. Next, we consider a general case, in which the approximation errors for different approximators {Q^i} are independently but non-identically distributed. Specifically, we assume that the approximation error e^i(s, a) for Q^i(s, a), i = 1, 2, · · · , M, follows the uniform distribution U(−τ_i, τ_i), where τ_i > 0. We use a multitude of tools to devise the upper bound and lower bound on the estimation bias E[Z_M]. As expected, this general case is technically more challenging and the bounds are not as sharp as in the special case with two distributions. Theorem 2. For the general case with heterogeneous error distributions, the estimation bias E[Z_M] satisfies E[Z_M] ≥ γ ( τ_min − τ_max (f_{A,(M−1)} + 2 f_{A,M}) ); (7) E[Z_M] ≤ γ ( 2 τ_min − τ_max (f_{A,M} − 2 g_{A,M}) ), (8) where τ_min = min_i τ_i, τ_max = max_i τ_i, and g_{A,M} = (1/M) I_{0.5}(1/M, A+1) with I_{0.5}(·, ·) being the regularized incomplete Beta function. Observe from Theorem 2 that the lower bound in (7) is positive when τ_min(1 − 2 f_{A,M}) > τ_max f_{A,(M−1)}, indicating the existence of the overestimation issue. On the contrary, the upper bound in (8) is negative when 2 τ_min < τ_max (1 + f_{A,M} − 2 g_{A,M}), pointing to the underestimation issue. In general, when τ_min is large enough, decreasing the ensemble size M is likely to cause overestimation, e.g., E[Z_M] ≥ 0 when M < 2. On the other hand, when τ_max is small enough, increasing the ensemble size M is likely to cause underestimation, e.g., E[Z_M] ≤ 0 when M is sufficiently large. 
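Analogously, the Theorem 2 bounds can be evaluated to locate the two ‘critical’ ensemble sizes discussed around Figure 4(b). The sketch below assumes the bounds exactly as stated in (7) and (8), uses scipy's regularized incomplete Beta function for g_{A,M}, and treats A, N, τ_min, τ_max, and γ as illustrative inputs rather than values from the paper.

```python
from math import gamma as Gamma
from scipy.special import betainc   # regularized incomplete beta I_x(a, b)

def f(A, M):
    # f_{A,M} = (1/M) * B(1/M, A + 1), written via the Gamma function
    return Gamma(1.0 / M) * Gamma(A + 1) / Gamma(A + 1.0 / M + 1) / M

def g(A, M):
    # g_{A,M} = (1/M) * I_{0.5}(1/M, A + 1)
    return betainc(1.0 / M, A + 1, 0.5) / M

def theorem2_bounds(A, M, tau_min, tau_max, gamma_disc=0.99):
    """Lower bound (7) and upper bound (8) on E[Z_M] for heterogeneous errors."""
    lower = gamma_disc * (tau_min - tau_max * (f(A, M - 1) + 2 * f(A, M)))
    upper = gamma_disc * (2 * tau_min - tau_max * (f(A, M) - 2 * g(A, M)))
    return lower, upper

def critical_sizes(A, tau_min, tau_max, N=10):
    """Smallest M whose upper bound is negative and largest M whose lower bound
    is positive, mirroring the red and blue 'critical' points of Figure 4(b)."""
    M_upper = [M for M in range(2, N + 1) if theorem2_bounds(A, M, tau_min, tau_max)[1] < 0]
    M_lower = [M for M in range(2, N + 1) if theorem2_bounds(A, M, tau_min, tau_max)[0] > 0]
    return (min(M_upper) if M_upper else None, max(M_lower) if M_lower else None)

print(critical_sizes(A=2, tau_min=0.4, tau_max=0.5))   # illustrative parameters
```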
Determination of parameter c. As illustrated in Figure 4(c), for a given approximation error characterization, a threshold c can be chosen such that increasing the ensemble size helps to correct the overestimation bias when τ_max > c, and decreasing the ensemble size is more conducive to mitigating the underestimation bias when τ_max < c. Specifically, parameter c is determined in two steps. Step 1: Estimate the approximation error distribution parameters τ_min and τ_max by running an ensemble-based algorithm (e.g., Algorithm 1) for a few epochs with a fixed ensemble size. In particular, a testing trajectory is generated from a random initial state using the current policy to compute the (discounted) MC return Q^π and the estimated Q-function values Q^i, i = 1, 2, · · · , N. We next fit a uniform distribution model U(−τ_i, τ_i) to the approximation error (Q^i − Q^π) for each Q-function approximator Q^i. Then, τ_min and τ_max can be obtained by choosing the minimum and maximum values among τ_i, i = 1, 2, · · · , N. Step 2: Obtain the upper bound and the lower bound in Theorem 2 using {τ_min, τ_max, A, γ}. We investigate the relationship between the ensemble size M and the estimation bias by studying the bounds and identifying the ‘critical’ points as illustrated in Figure 4(b). Observe that a ‘proper’ ensemble size should be chosen between the ‘critical’ points, so as to reduce the overestimation and underestimation bias as much as possible. Since the approximation error is time-varying during the learning process, these two ‘critical’ points vary along with {τ_max} and {τ_min} (as shown in Figure 4(c)). Intuitively, it is desirable to drive the system to avoid both the red region (underestimation) and the blue region (overestimation). It can be clearly observed that there is a wide range of choices for parameter c (e.g., [0.5, 0.7] in Figure 4(c)) for the algorithm to stay in the white region, indicating that even though the pre-determined c above is not optimized, it can still serve the purpose well. The proof of Theorem 2 and a numerical illustration can be found in Appendix A.3. Summarizing, both Theorem 1 and Theorem 2 indicate that the approximation error characterization plays a critical role in controlling the estimation bias. In fact, both the lower bound and the upper bound in Theorem 2 depend on τ_min and τ_max, which are time-varying due to the iterative nature of the learning process, indicating that it is sensible to use an adaptive ensemble size to drive the estimation bias to be close to zero, as much as possible. 3.2 Practical Implementation Based on the theoretical findings above, we next propose AdaEQ, which adapts the ensemble size based on the approximation error feedback on the fly, so as to drive the estimation bias close to zero. In particular, as summarized in Algorithm 1, AdaEQ introduces two important steps at each iteration t, i.e., approximation error characterization (line 3) and ensemble size adaptation (line 4), which can be combined with the framework of either Q-learning or actor-critic methods. Characterization of the time-varying approximation error. As outlined in Algorithm 1, the first key step is to quantify the time-varying approximation error at each iteration t (for ease of exposition, we omit the subscript t when it is clear from the context). Along the same line as in [9, 33, 6], we run a testing trajectory of length H, T = (s_0, a_0, s_1, a_1, · · · , s_H, a_H), from a random initial state using the current policy π, and compute the discounted Monte Carlo return Q^π(s, a) and the estimated Q-function values Q^i(s, a), i = 1, · · · , N, for each visited state-action pair (s, a). The empirical standard deviation of Q^i(s, a) − Q^π(s, a) can then be obtained to quantify the approximation error of each approximator Q^i. Then, we take the average of the empirical standard deviations over all approximators to characterize the approximation error at the current iteration t, i.e., τ̃_t = (1/N) ∑_{i=1}^{N} std( Q^i(s, a) − Q^π(s, a) ), (s, a) ∈ T. (9) 
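The following is a minimal sketch of the error characterization in (9), assuming the test trajectory has already been rolled out and the per-step Q-estimates of all N approximators have been recorded; the synthetic data at the end is only for illustration.

```python
import numpy as np

def mc_returns(rewards, gamma=0.99):
    """Discounted Monte Carlo return Q^pi(s_t, a_t) for every step of one test
    trajectory, computed backwards from the collected rewards."""
    G, out = 0.0, np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

def characterize_error(q_values, rewards, gamma=0.99):
    """Eq. (9): tau_t = (1/N) * sum_i std( Q^i(s, a) - Q^pi(s, a) ) over the
    state-action pairs of the test trajectory. q_values has shape (N, H) with
    Q^i(s_t, a_t) for each approximator i and step t; rewards has length H."""
    q_pi = mc_returns(rewards, gamma)                 # ground-truth estimate per step
    diffs = q_values - q_pi[None, :]                  # approximation errors, shape (N, H)
    return float(np.mean(np.std(diffs, axis=1)))      # average std over approximators

# Illustrative check with synthetic data: N = 10 approximators, H = 500 steps.
rng = np.random.default_rng(2)
rewards = rng.uniform(0.0, 1.0, size=500)
true_q = mc_returns(rewards)
q_values = true_q[None, :] + rng.uniform(-0.5, 0.5, size=(10, 500))
print(characterize_error(q_values, rewards))
```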
Error-feedback based ensemble size adaptation. Based on the theoretical results and Figure 4(c), we update the ensemble size M at each iteration t based on the approximation error (9), using the following piecewise function: M_t = rand(M_{t−1} + 1, N) if τ̃_{t−1} > c and M_{t−1} + 1 ≤ N; M_t = rand(2, M_{t−1} − 1) if τ̃_{t−1} < c and M_{t−1} − 1 ≥ 2; M_t = M_{t−1} otherwise, (10) where rand(·, ·) is a uniform random function and c is a pre-determined parameter to capture the ‘tolerance’ of the estimation bias during the adaptation process. Recall that parameter c can be determined by using the upper bound and the lower bound in Theorem 2 (Theorem 1). In particular, a larger c implies that more tolerance of the underestimation bias is allowed when adapting the ensemble size M_t. A smaller c, on the other hand, admits more tolerance of the overestimation. In this way, AdaEQ can be viewed as a generalization of Maxmin and REDQ with ensemble size adaptation. In particular, when c = 0 and M_{t+1} ≤ N, the adaptation mechanism would increase the ensemble size until it is equal to N. Consequently, AdaEQ degenerates to Maxmin [17] where M = N, leading to possible underestimation bias. Meanwhile, when c is set sufficiently large, the ensemble size M would decrease until reaching the minimal value 2 during the learning process, where the estimation bias would be positive according to Theorem 2. In this case, AdaEQ degenerates to REDQ [6] with ensemble size M = 2. We show the convergence analysis of AdaEQ in Appendix A.5.
Algorithm 1 Adaptive Ensemble Q-learning (AdaEQ)
1: Empty replay buffer D, step size α, number of approximators N, initial in-target ensemble size M_0 ≤ N, initial state s. Initialize the N approximators with different training samples.
2: for iteration t = 1, 2, 3, · · · do
3:   Identify the approximation error parameter τ̃_t using (9)
4:   Update the ensemble size M_t according to (10)
5:   Sample a set M of M_t different indices from {1, 2, · · · , N}
6:   Obtain the proxy approximator Q_proxy(s, a) ← min_{i∈M} Q^i(s, a), ∀a ∈ A
7:   Choose action a from the current state s using the policy derived from Q_proxy (e.g., ε-greedy)
8:   Take action a, observe r and the next state s′
9:   Update the replay buffer D ← D ∪ {s, a, r, s′}
10:  for i = 1, 2, · · · , N do
11:    Sample a random mini-batch B from D
12:    Compute the target: y(s, a, r, s′) ← r + γ max_{a′∈A} Q_proxy(s′, a′), (s, a, r, s′) ∈ B
13:    Update the Q-function Q^i: Q^i(s, a) ← (1 − α) Q^i(s, a) + α y(s, a, r, s′), (s, a, r, s′) ∈ B
14:  end for
15:  s ← s′
16: end for
Remark. We use random sampling in Eqn. (10) for two reasons. First, the characterization of the approximation error in Eqn. (9) is noisy in nature. In particular, Monte Carlo returns with a finite-length testing trajectory may introduce empirical errors when estimating the underlying ground-truth value of Q^π. This noisy estimation is often the case when the policy is not deterministic, or the environment is not deterministic. Thus, we use random sampling to ‘capture’ the impact of this noisy estimation. Second, in general it is infeasible to characterize the exact relationship between the estimation bias Z_M and the ensemble size M. Without any prior information about the approximation error beyond the bounds obtained in Theorem 1 and Theorem 2, the random sampling can be viewed as the ‘exploration’ in AdaEQ. 
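The adaptation rule (10) itself is only a few lines of code. The sketch below transcribes it directly, treating the bounds of rand(·, ·) as inclusive and using illustrative values for N and the tolerance c.

```python
import numpy as np

rng = np.random.default_rng(3)

def adapt_ensemble_size(M_prev, tau_err, c=0.3, N=10):
    """Eq. (10): grow the in-target ensemble when the measured approximation
    error exceeds the tolerance c, shrink it when the error is below c,
    and keep it unchanged otherwise. Bounds of rand(., .) are inclusive."""
    if tau_err > c and M_prev + 1 <= N:
        return int(rng.integers(M_prev + 1, N + 1))   # rand(M_prev + 1, N)
    if tau_err < c and M_prev - 1 >= 2:
        return int(rng.integers(2, M_prev))           # rand(2, M_prev - 1)
    return M_prev

# Illustrative trace: the size drifts up when the error is large, down when small.
M = 4
for tau_err in (0.8, 0.6, 0.1, 0.05, 0.9):
    M = adapt_ensemble_size(M, tau_err)
    print(tau_err, M)
```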
4 Experimental Results In this section, we evaluate the effectiveness of AdaEQ by answering the following questions: 1) Can AdaEQ minimize the estimation bias and further improve the performance in comparison to existing ensemble methods? 2) How does AdaEQ perform given different initial ensemble sizes? 3) How does the ‘tolerance’ parameter c affect the performance? To make a fair comparison, we follow the setup of [6] and use the same code base to compare the performance of AdaEQ with REDQ [6] and Average-DQN (AVG) [2] on three MuJoCo continuous control tasks: Hopper, Ant and Walker2d. The same hyperparameters are used for all the algorithms. Specifically, we consider N = 10 Q-function approximators in total. The ensemble size is M = N = 10 for AVG, while the initial M for AdaEQ is set to 4. The ensemble size for REDQ is set to M = 2, which is the fine-tuned result from [6]. For all the experiments, we set the ‘tolerance’ parameter c in (10) to 0.3 and the length of the testing trajectories to H = 500. The ensemble size is updated according to (10) every 10 epochs in AdaEQ. The discount factor is 0.99. Implementation details and hyperparameter settings are fully described in Appendix B.1. Evaluation of estimation bias. To investigate the impact of the adaptation mechanism in AdaEQ, we begin by examining how the estimation bias changes in the training process. After each epoch, we run an evaluation episode of length H = 500, starting from an initial state sampled from the replay buffer. We calculate the estimation error based on the difference between the Monte Carlo return value and the Q-estimates as in [33, 6, 14]. For each experiment, the shaded area represents a standard deviation of the average evaluation over 3 training seeds. As shown in the first row of Figure 5, AdaEQ can reduce the estimation bias to nearly zero in all three benchmark environments, in contrast to REDQ and AVG. The AVG approach tends to result in positive bias in all three environments during the learning procedure, which is consistent with the results obtained in [6]. Figure 6: Impact of the initial ensemble size M on the performance of AdaEQ on the Hopper-v2 and Ant tasks: (a) average returns and (b) estimation bias on Hopper-v2 for initial ensemble sizes M = 2, 3, 5; (c) average returns and (d) estimation bias on Ant for initial ensemble sizes M = 3, 5, 7. The solid lines are the mean values and the shaded areas are the standard deviations across the three ensemble size settings. Notably, it can be clearly observed from the Hopper and Walker2d tasks that the estimation bias for AdaEQ is driven to be close to zero, thanks to the dynamic ensemble size adjustment during the learning process. Meanwhile, in the Ant task, even though the fine-tuned REDQ can mitigate the overestimation bias, it tends to have underestimation bias, whereas AdaEQ is able to keep the bias closer to zero (gray dashed line) even under a ‘non-optimal’ choice of the initial ensemble size. Performance on MuJoCo benchmark. We evaluate the policy return after each epoch by calculating the undiscounted sum of rewards when running the current learned policy [6, 14]. The second row of Figure 5 demonstrates the average return during the learning process for AdaEQ, AVG and REDQ, respectively. In particular, we use the fine-tuned ensemble size for REDQ [6]. As observed in Figure 5, AdaEQ can efficiently learn a better policy and achieve a higher average return in all three challenging MuJoCo tasks, without searching for the optimal parameters beforehand for each of them. Meanwhile, AdaEQ only incurs slightly more computation time than REDQ in most MuJoCo tasks. 
Due to space limitations, we have relegated the wall-clock training time comparison to Table 2 in Appendix B.2. Robustness to the initial ensemble size. Next, we investigate the performance of AdaEQ under different settings of the initial ensemble size in the Hopper-v2 and Ant environments, i.e., M = (2, 3, 5) for Hopper-v2 and M = (3, 5, 7) for Ant. As shown in Figure 6, AdaEQ consistently outperforms the others in terms of the average performance over the different setups, which implies the benefit of adjusting the in-target ensemble size based on the error feedback. It can be seen from the shaded areas that the performance of AVG and REDQ may vary significantly when the ensemble size changes. Figure 7: Impact of parameter c on the performance of AdaEQ on the Hopper-v2 and Ant tasks: (a) average returns and (b) estimation bias on Hopper-v2, and (c) average returns and (d) estimation bias on Ant, for different values of parameter c. The initial ensemble size is set to M = 4. The mean value and the standard deviation are evaluated across three training seeds. Robustness to parameter c in a wide range. As illustrated in Figure 7, we conduct an ablation study by setting c = 0.001, 0.3, 0.5, 0.7, 1.5 on the Hopper-v2 and Ant tasks. Clearly, AdaEQ works better for c ∈ [0.3, 0.7]. The experimental results corroborate our analysis in Section 3.1 that our algorithm is not sensitive to parameter c over a wide range. As mentioned in Section 3.2, when parameter c is close to zero, AdaEQ degenerates to Maxmin, which is known to suffer from underestimation bias when the ensemble size is large [6]. Further, as illustrated in Figure 7(b), when c is large, e.g., c = 1.5, the ensemble size gradually decreases to the minimum and hence is not able to curb the overestimation tendency during the learning process. 
It is of great interest to develop a systematic approach to optimize this tolerance parameter. Acknowledgments and Disclosure of Funding We thank the anonymous reviewers for their constructive comments. This work was supported in part by NSF grants CNS-2130125, CCSS-2121222 and CNS-2003081.
1. What is the primary contribution of the paper regarding ensemble Q-learning? 2. How does the proposed approach adapt the ensemble size based on estimation error? 3. What are the strengths and weaknesses of the paper's theoretical and empirical analysis? 4. Do you have any questions or concerns about the algorithm's correctness, particularly regarding the definition of the set range \mathcal{M} and the use of different distributions for τ? 5. How does the reviewer assess the experimental evaluation and comparison with other methods, such as TD3 and SAC? 6. Are there any practical advantages or limitations of the proposed adaptive ensemble size Q-learning method compared to existing methods?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an ensemble Q-learning method that can adaptively adjust the ensemble size based on the estimation error. The authors find that the ensemble min(Q_1, Q_2, ..., Q_M) may suffer from overestimation or underestimation for different M, so they propose to adapt M based on the approximation error, which can minimize the estimation bias correspondingly. The paper provides both theoretical and empirical analysis, showing better performance than existing ensemble q-learning methods on a few MuJoCo benchmark tasks. Review The major contribution of this paper is to establish the correlation between the ensemble size and the estimation bias under the ensemble q-learning framework (min operator). Overall clarity is good. The idea is clear and easy to follow, and the authors motivate why they propose this adaptive ensemble size q-learning. Originality/Significance: I did not see exactly the same work before (to address the ensemble size). This paper tries to solve the ensemble size problem in ensemble q-learning, but it creates a new problem here, for example how to adjust the new hyperparameter c in Eq. 10. And the experiments show it impacts performance significantly in Fig. 6(d). If we need to try different c and also need to set different M to get a good result, I do not see any practical advantage compared to TD3, SAC, etc. Regarding correctness of the algorithm: my main concern is that when τ1 > τ2 > 0, \mathcal{K} \in [-τ1, τ1], but how do you get M\K \in [-τ2, τ2] in lines 194-196? The problem is how you define the set range \mathcal{M}. Another question: your result in Eqs. 5 and 6 comes from a uniform distribution assumption, but you decide τ based on a Gaussian in Eq. 9. Also, randomly sampling in the following equation to get M does not make much sense to me. Regarding experimental evaluation: The experimental results show this method is sensitive to the new hyperparameter c and it needs to be carefully tuned to get a good result compared to REDQ. As the authors propose a new ensemble q-learning method by adjusting the size M, it is important to compare with TD3 and SAC on a wide range of tasks in MuJoCo environments on which most current continuous control algorithms have been evaluated and compared. Overall, the authors need to address the concerns above before acceptance.
NIPS
Title Towards Lower Bounds on the Depth of ReLU Neural Networks Abstract We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes. 1 Introduction A core problem in machine learning or statistical pattern recognition is the estimation of an unknown data distribution with access to i.i.d. samples from the distribution. It is well-known that there is a tension between how much prior information one has about the data distribution and how many samples one needs to solve the problem with high confidence (or equivalently, how much variance one has in one’s estimate). This is referred to as the bias-variance trade-off or the bias-complexity trade-off. Neural networks provide a way to turn this bias/complexity knob in a controlled manner that has been studied for decades going back to the idea of a perceptron by Rosenblatt [1958]. This is done by modifying the architecture of a neural network class of functions, which has two parameters: depth and size. As one increases these parameters, the class of functions becomes more expressive. In terms of the bias-variance trade-off, the “bias” decreases as the class of functions becomes more expressive, but the “variance” or “complexity” increases. So-called universal approximation theorems [Cybenko, 1989, Hornik, 1991, Anthony and Bartlett, 1999] show that even with a single hidden layer, i.e., when the depth of the architecture is the smallest possible value, one can essentially reduce the “bias” as much as one desires, by increasing the size of the network or the number of neurons used in the neural network. Nevertheless, it can be advantageous both theoretically and empirically to increase the depth because a substantial reduction in the size can be achieved; Telgarsky [2016a], Eldan and Shamir [2016], Arora et al. [2018] is a small sample of recent work in this direction. To get a better quantitative handle on these trade-offs, 35th Conference on Neural Information Processing Systems (NeurIPS 2021). it is important to understand what classes of functions are exactly representable by neural networks with a certain architecture. The precise mathematical statements of universal approximation theorems show that single layer networks can approximate arbitrarily well any continuous function (under some additional mild hypotheses). While this suggests that single layer networks are good enough from a learning perspective, from a mathematical perspective, one can ask the question if the class of functions represented by a single layer is a strict subset of the class of function represented by two or more hidden layers. 
On the question of size, one can ask for precise bounds on the size of the network of a given depth to represent a certain class of functions. We believe that a better understanding of the function classes exactly represented by different architectures will have implications not just for mathematical foundations, but also algorithmic and statistical learning aspects of neural networks. The task of searching for the “best” function in that class can only benefit from a better understanding of the nature of functions in that class. A motivating question behind the results in this paper is to understand the hierarchy of function classes exactly represented by neural networks of increasing depth. We now introduce more precise notation and terminology to set the stage for our investigations. Notation. We write [n] := {1, 2, . . . , n} for the set of natural numbers up to n (without zero) and [n]0 := [n] ∪ {0} for the same set including zero. For any n ∈ N, let σ : Rn → Rn be the component-wise rectifier function σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any number of hidden layers k ∈ N, a (k + 1)-layer feedforward neural network with rectified linear units (ReLU NN or simply NN) is given by k affine transformations T (`) : Rn`−1 → Rn` , x 7→ A(`)x+ b(`), for ` ∈ [k], and a linear transformation T (k+1) : Rnk → Rnk+1 , x 7→ A(k+1)x. It is said to compute or represent the function f : Rn0 → Rnk+1 given by f = T (k+1) ◦ σ ◦ T (k) ◦ σ ◦ · · · ◦ T (2) ◦ σ ◦ T (1). The matrices A(`) ∈ Rn`×n`−1 are called the weights and the vectors b(`) ∈ Rn` are the biases of the `-th layer. The number n` ∈ N is called the width of the `-th layer. The maximum width of all hidden layers max`∈[k] n` is called the width of the NN. Further, we say that the NN has depth k + 1 and size ∑k `=1 n`. Often, NNs are represented as layered, directed, acyclic graphs where each dimension of each layer (including input layer ` = 0 and output layer ` = k + 1) is one vertex, weights are arc labels, and biases are node labels. Then, the vertices are called neurons. For a given input x = x(0) ∈ Rn0 , let y(`) := T (`)(x(`−1)) ∈ Rn` be the activation vector and x(`) := σ(y`) ∈ Rn` the output vector of the `-th layer. Further, let y := y(k+1) = f(x) be the output of the NN. We also say that the i-th component of each of these vectors is the activation or the output of the i-th neuron in the `-th layer. For k ∈ N, we define ReLUn(k) := {f : Rn → R | f can be represented by a (k + 1)-layer NN}, CPWLn := {f : Rn → R | f is continuous and piecewise linear}. By definition, a continuous function f : Rn → R is piecewise linear in case there is a finite set of polyhedra whose union is Rn, and f is affine linear over each such polyhedron. In order to analyze ReLUn(k), we use another function class defined as follows. We call a function g a p-term max function if it can be expressed as maximum of p affine terms, that is, g(x) = max{`1(x), . . . , `p(x)} where `i : Rn → R is affinely linear for i ∈ [p]. Based on that, we define MAXn(p) := {f : Rn → R | f is a linear combination of p-term max functions}. If the input dimension n is not important for the context, we sometimes drop the index and use ReLU(k) := ⋃ n∈N ReLUn(k) and MAX(p) := ⋃ n∈N MAXn(p) instead. Since we deal with polyhedra a lot in this paper, we will use the standard notations convA and coneA for the convex and conic hulls of a set A ⊆ Rn. For an in-depth treatment of polyhedra and (mixed-integer) optimization, we refer to Schrijver [1986]. 
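As a concrete instance of this notation, the following sketch simulates the folklore construction (used, e.g., by Arora et al. [2018]) that computes the maximum of m numbers with ⌈log2 m⌉ hidden ReLU layers: each level applies the one-hidden-layer identity max{a, b} = max{0, b − a} + max{0, a} − max{0, −a} to pairs of values, and the affine maps between levels are folded into the adjacent layers. The numpy code is an illustrative simulation of that construction, not code from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max_two(a, b):
    """One hidden ReLU layer: max{a, b} = relu(b - a) + relu(a) - relu(-a)."""
    return relu(b - a) + relu(a) - relu(-a)

def relu_net_max(values):
    """Simulates a ReLU network computing max(values) with ceil(log2(len(values)))
    hidden layers: each while-iteration is one hidden layer acting on all pairs,
    and the pairing/linear combinations are affine maps between ReLU layers."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [max_two(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            # odd element is passed through; in an actual network the identity
            # is realized within the same layer as relu(v) - relu(-v)
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

# Check the construction against np.max, e.g. for f(x) = max{0, x1, ..., x4}.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=4)
    print(relu_net_max([0.0] + list(x)), np.max([0.0] + list(x)))
```

For m = 5 values the simulation uses three levels (5 → 3 → 2 → 1), matching the ⌈log2(n + 1)⌉ hidden layers of the construction for n = 4 inputs.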
1.1 Our Contribution It is not hard to see that any function expressed by a ReLU network is a continuous and piecewise linear (CPWL) function, because one is composing continuous piecewise linear functions together. Based on a result by Wang and Sun [2005], Arora et al. [2018] show that every CPWL function defined on Rn can be represented by a ReLU neural network with dlog2(n+ 1)e hidden layers. We wish to understand whether one can do better. We believe it is not possible to do better and we pose the following conjecture to better understand the importance of depth in neural networks. Conjecture 1.1. For any n ∈ N, let k∗ := dlog2(n+ 1)e. Then it holds that ReLUn(0) ( ReLUn(1) ( · · · ( ReLUn(k∗ − 1) ( ReLUn(k∗) = CPWLn . (1) Conjecture 1.1 claims that any additional layer up to k∗ hidden layers strictly increases the set of representable functions. This would imply that the construction by Arora et al. [2018, Theorem 2.1] is actually depth-minimal. Observe that in order to prove Conjecture 1.1, it suffices to find a single function f ∈ ReLUn(k∗) \ ReLUn(k∗ − 1) with n = 2k ∗−1 for all k∗ ∈ N. This also implies all remaining strict inclusions ReLUn(i− 1) ( ReLUn(i) for i < k∗ since ReLUn(i− 1) = ReLUn(i) directly implies that ReLUn(i− 1) = ReLUn(i′) for all i′ ≥ i− 1. In fact, there is a canonical candidate for such a function, allowing us to reformulate the conjecture as follows. Conjecture 1.2. For any k ∈ N, n = 2k, the function fn(x) = max{0, x1, . . . , xn} cannot be represented with k hidden layers. Proposition 1.3. Conjecture 1.1 and Conjecture 1.2 are equivalent. Proof (Sketch). We argued above that Conjecture 1.2 implies Conjecture 1.1. For the other direction, one can argue that, if the specific (n+1)-term max function fn can be represented by k hidden layers, then every other (n+ 1)-term max function as well. The claim then follows via a result by [Wang and Sun, 2005] stating that any f ∈ CPWLn can be written as linear combination of (n+ 1)-term max functions. We provide a detailed argument in Appendix A. It is known that Conjecture 1.2 holds for k = 1 [Mukherjee and Basu, 2017]. However, the conjecture remains open for k ≥ 2. In this paper, we present the following results as partial progress towards resolving this conjecture. In Section 2, we resolve Conjecture 1.2 for k = 2, under a natural assumption on the breakpoints of the function represented by any intermediate neuron. We achieve this result by leveraging techniques from mixed-integer programming to analyze the set of functions computable by certain NNs. It is not hard to see that MAX(2k) ⊆ ReLU(k) for all k ∈ N [Arora et al., 2018], that is, any 2k-term max function (and linear combinations thereof) can be expressed with k hidden layers. One might ask whether the converse is true as well, that is, whether the classes MAX(2k) and ReLU(k) are actually equal. This would not only provide a neat characterization of ReLU(k), but also prove Conjecture 1.2 without any additional assumption since one can show that max{0, x1, . . . , x2k} is not contained in MAX(2k). In fact, this is true for k = 1, that is, a function is computable with one hidden layer if and only if it is a linear combination of 2-term max functions. However, in Section 3, we show that for k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In this section, the key technical ingredient is the theory of polyhedral complexes associated with CPWL functions. This way, we provide important insights concerning the richness of the class ReLU(k). 
So far, we have focused on understanding the smallest depth needed to express CPWL functions using neural networks with ReLU activations. In Section 4, we complement these results by upper bounds on the sizes of the networks needed for expressing arbitrary CPWL functions. In particular, Theorem 4.4 shows that any continuous piecewise linear function with p linear/affine pieces on Rn can be expressed by a network with depth at most O(log n) and width at most pO(n2). We arrive at this result by introducing a novel application of recently established interactions between neural networks and tropical geometry. 1.2 Related Work Depth versus size. Soon after the original universal approximation theorems [Cybenko, 1989, Hornik, 1991], concrete bounds were obtained on the number of neurons needed in the hidden layer to achieve a certain level of accuracy. The literature on this is vast and we refer to a small representative sample here [Barron, 1993, 1994, Mhaskar, 1993, Pinkus, 1999, Mhaskar, 1996, Mhaskar and Micchelli, 1995]. More recently, work has focused on how deeper networks can have exponentially or super exponentially smaller size compared to shallower networks [Telgarsky, 2016a, Eldan and Shamir, 2016, Arora et al., 2018, Vardi et al., 2021]. See also Gribonval et al. [2021] for another perspective on the relationship between expressivity and architecture, and the references therein. We reiterate that the list of references above is far from complete. Mixed-integer optimization and machine learning. Over the past decade, a growing body of work has emerged that explores the interplay between mixed-integer optimization and machine learning. On the one hand, researchers have attempted to improve mixed-integer optimization algorithms by exploiting novel techniques from machine learning [Bonami et al., 2018, Gasse et al., 2019, He et al., 2014, Khalil et al., 2016, 2017, Kruber et al., 2017, Lodi and Zarpellon, 2017, Alvarez et al., 2017]; see also Bengio et al. [2020] for a recent survey. On the flip side, mixed-integer optimization techniques have been used to analyze function classes represented by neural networks [Serra et al., 2018, Anderson et al., 2020, Fischetti and Jo, 2017, Serra and Ramalingam, 2020, Serra et al., 2020]. In Section 2 below, we show another new use of mixed-integer optimization tools for understanding function classes represented by neural networks. Design of training algorithms. We believe that a better understanding of the function classes represented exactly by a neural architecture also has benefits in terms of understanding the complexity of the training problem. For instance, in a paper by Arora et al. [2018], an understanding of single layer ReLU networks enables the design of a globally optimal algorithm for solving the empirical risk minimization (ERM) problem, that runs in polynomial time in the number of data points in fixed dimension. See also Goel et al. [2017, 2018], Goel and Klivans [2019], Dey et al. [2020], Boob et al. [2020], Goel et al. [2021], Froese et al. [2021] for a similar lines of work. Neural Networks and Tropical Geometry. A recent stream of research involves the interplay between neural networks and tropical geometry. The piecewise linear functions computed by neural networks can be seen as (tropical quotients of) tropical polynomials. Linear regions of these functions correspond to vertices of so-called Newton polytopes associated with these tropical polynomials. 
Applications of this correspondence include bounding the number of linear regions of a neural network [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Montúfar et al., 2021] and understanding decision boundaries [Alfarra et al., 2020]. In Section 4 we present a novel application of tropical concepts to understand neural networks. We refer to Maragos et al. [2021] for a recent survey of connections between machine learning and tropical geometry, as well as to the textbooks by Maclagan and Sturmfels [2015] and Joswig [2022] for in-depth introductions to tropical geometry and tropical combinatorics. 2 Conditional Lower Bounds on Depth via Mixed-Integer Programming In this section, we provide a computer-aided proof that, under a natural, yet unproven assumption, the function f(x) := max{0, x1, x2, x3, x4} cannot be represented by a 3-layer NN. It is worth to note that, to the best of our knowledge, no CPWL function is known for which the non-existence of a 3-layer NN can be proven without additional assumption. For easier notation, we write x0 := 0. We first prove that we may restrict ourselves to NNs without biases. This holds true independent of our assumption, which we introduce afterwards. Definition 2.1. A function g : Rn → Rm is called positively homogeneous if g(λx) = λg(x) for all λ ≥ 0. Definition 2.2. For an NN given by affine transformations T (`)(x) = A(`)x + b(`), we define the corresponding homogenized NN to be the NN given by T̃ (`)(x) = A(`)x with all biases set to zero. Proposition 2.3. If an NN computes a positively homogeneous function, then the corresponding homogenized NN computes the same function. Proof. Let g : Rn0 → Rnk+1 be the function computed by the original NN and g̃ the one computed by the homogenized NN. Further, for any 0 ≤ ` ≤ k, let g(`) = T (`+1) ◦σ ◦T (`) ◦ · · ·◦T (2) ◦σ ◦T (1) be the function computed by the sub-NN consisting of the first (` + 1)-layers and let g̃(`) be the function computed by the corresponding homogenized sub-NN. We first show by induction on ` that the norm of ‖g(`)(x)− g̃(`)(x)‖ is bounded by a global constant that only depends on the parameters of the NN but not on x. For ` = 0, we obviously have ‖g(0)(x) − g̃(0)(x)‖ = ‖b(1)‖ =: C0, settling the induction base. For the induction step, let ` ≥ 1 and assume that ‖g(`−1)(x) − g̃(`−1)(x)‖ ≤ C`−1, where C`−1 only depends on the parameters of the NN. Since component-wise ReLU activation has Lipschitz constant 1, this implies ‖(σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x)‖ ≤ C`−1. Using any matrix norm that is compatible with the Euclidean vector norm, we obtain: ‖g(`)(x)− g̃(`)(x)‖ = ‖b(`+1) +A(`+1)((σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x))‖ ≤ ‖b(`+1)‖+ ‖A(`+1)‖ · C`−1 =: C` Since the right-hand side only depends on NN parameters, the induction is completed. Finally, we show that g = g̃. For the sake of contradiction, suppose that there is an x ∈ Rn0 with ‖g(x) − g̃(x)‖ = δ > 0. Let x′ := Ck+1δ x; then, by positive homogeneity, it follows that ‖g(x′)− g̃(x′)‖ = Ck + 1 > Ck, contradicting the property shown above. Thus, we have g = g̃. Since f = max{0, x1, x2, x3, x4} is positively homogeneous, Proposition 2.3 implies that, if there is a 3-layer NN computing f , then there also is one that has no biases. Therefore, in the remainder of this section, we only consider NNs without biases and assume implicitly that all considered CPWL functions are positively homogeneous. In particular, any piece of such a CPWL function is linear and not only affine linear. 
Observe that, for function f , the only points of non-differentiability (a.k.a. breakpoints) are at places where at least two of the five numbers x0 = 0, x1, x2, x3, and x4 are equal. Hence, if some neuron of an NN computing f introduces breakpoints at other places, these breakpoints must be canceled out by other neurons. Therefore, it is a natural assumption that such breakpoints are not introduced at all in the first place. To make this assumption formal, let Hij = {x ∈ R4 | xi = xj}, for 0 ≤ i < j ≤ 4, be ten hyperplanes in R4 and H = ⋃ 0≤i<j≤4Hij be the corresponding hyperplane arrangement. The regions or cells of H are defined to be the closures of the connected components of R4 \H . It is easy to see that these regions are in one-to-one correspondence to the 5! = 120 possible orderings of the five numbers x0 = 0, x1, x2, x3, and x4. More precisely, for a permutation π of the five indices [4]0 = {0, 1, 2, 3, 4}, the corresponding region is the polyhedron Cπ := {x ∈ R4 | xπ(0) ≤ xπ(1) ≤ xπ(2) ≤ xπ(3) ≤ xπ(4)}. We say that a (positively homogeneous) CPWL function g is H-conforming, if it is linear within any of these regions of H , that is, if it only has breakpoints where the relative ordering of the five values x0 = 0, x1, x2, x3, x4 changes. Moreover, an NN is said to be H-conforming if the output of each neuron contained in the NN is H-conforming. Equivalently, this is the case if and only if all intermediate functions σ ◦ T (`) ◦ σ ◦ T (`−1) ◦ · · · ◦ σ ◦ T (1), ` ∈ [k], are H-conforming. Now our assumption can be formally phrased as follows. Assumption 2.4. If there exists a 3-layer NN computing f(x) = max{0, x1, x2, x3, x4}, then there also exists one that is H-conforming. We use mixed-integer programming to prove the following theorem. Theorem 2.5. Under Assumption 2.4, there does not exist a 3-layer NN that computes the function f(x) = max{0, x1, x2, x3, x4}. Proof (Outline). We first study some geometric properties of the hyperplane arrangement H . This will show that each of the 120 cells of H is a simplicial polyhedral cone spanned by 4 extreme rays. In total, there are 30 such rays (because rays are used multiple times to span different cones). This implies that the set of H-conforming functions of type R4 → R is a 30-dimensional vector space and each function is uniquely determined by its values on the 30 rays. We then use linear algebra to show that the space of functions generated by H-conforming two-layer NNs is a 14-dimensional subspace. Moreover, with two hidden layers, at least 29 of the 30 dimensions can be generated and f is not contained in this 29-dimensional subspace. So, it remains the question whether the 14 dimensions producible with the first hidden layer can be combined in such a way that after applying a ReLU activation in the second hidden layer, we do not end up within the 29-dimensional subspace. We model this question as a mixed-interger program (MIP). Solving the MIP yields that we always end up within the 29-dimensional subspace, implying that f cannot be represented by a 3-layer NN. This provides a computational proof of Theorem 2.5. All details can be found in Appendix B. 3 Going Beyond 2k-Term Max Functions with k Layers In this section we prove the following result: Theorem 3.1. For any k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In order to prove this theorem, we provide a specific function that is in ReLU(k) \MAX(2k) for any number of hidden layers k ≥ 2. 
The challenging part is to show that the function is in fact not contained in MAX(2k). Proposition 3.2. For any n ≥ 3, the function f : Rn → R, f(x) = max{0, x1, x2, . . . , xn−3, max{xn−2, xn−1}+ max{0, xn}} (2) cannot be written as a linear combination of n-term max functions. The above proposition means that it is not possible to write f(x) in the form f(x) = p∑ i=1 λi max{`i1(x), . . . , `in(x)} where p ∈ N, λ1, . . . , λp ∈ R, and `ij : Rn → R is an affine linear function for every i ∈ [p] and j ∈ [n]. (Note that max functions with less than n terms are also allowed, as some functions `ij may coincide.) Before we prove Proposition 3.2, we show that it implies Theorem 3.1. Proof of Theorem 3.1. For k ≥ 2, let n := 2k. By Proposition 3.2, function f defined in (2) is not contained in MAX(2k). It remains to show that it can be represented using a ReLU NN with k hidden layers. To see this, first observe that any of the n/2 = 2k−1 terms max{0, x1}, max{x2i, x2i+1} for i ∈ [n/2 − 2], and max{xn−2, xn−1} + max{0, xn} can be expressed by a one-hidden-layer NN since all these are (linear combinations of) 2-term max functions. Since f is the maximum of these 2k−1 terms, and since the maximum of 2k−1 numbers can be computed with k − 1 hidden layers [Arora et al., 2018], this implies that f is in ReLU(k). In order to prove Proposition 3.2, we need the concept of polyhedral complexes. A polyhedral complex P is a finite set of polyhedra such that each face of a polyhedron in P is also in P , and for two polyhedra P,Q ∈ P , their intersection P ∩Q is a common face of P and Q (possibly the empty face). Given a polyhedral complex P in Rn and an integer m ∈ [n], we let Pm denote the collection of all m-dimensional polyhedra in P . For a convex CPWL function f , we define its underlying polyhedral complex as follows: it is the unique polyhedral complex covering Rn (i.e., each point in Rn belongs to some polyhedron in P) whose n-dimensional polyhedra coincide with the domains of the (maximal) affine pieces of f . In particular, f is affinely linear within each P ∈ P , but not within any strict superset of a polyhedron in Pn. Exploiting properties of polyhedral complexes associated with CPWL functions, we prove the following proposition in Appendix C. Proposition 3.3. Let f0 : Rn → R be a convex CPWL function and let P0 be the underlying polyhedral complex. If there exists a hyperplane H ⊆ Rn such that the set T := ⋃{ F ∈ Pn−10 ∣∣ F ⊆ H} is nonempty and contains no line, then f0 cannot be expressed as a linear combination of n-term maxima of affine linear functions. This allows us to prove Proposition 3.2. Proof of Proposition 3.2. Observe that f (defined in (2)) has the alternate representation f(x) = max{0, x1, x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn} as a maximum of n+ 2 terms. Let P be its underlying polyhedral complex. Let the hyperplane H be defined by x1 = 0. Observe that any facet in Pn−1 is a polyhedron defined by two of the n + 2 terms that are equal and at least as large as each of the remaining n terms. Hence, the only facet that could possibly be contained in H is F := {x ∈ Rn | x1 = 0 ≥ x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn}. Note that F is indeed an (n− 1)-dimensional facet in Pn−1, because, for example, the full neighborhood of (0,−1, . . . ,−1) ∈ Rn intersected with H is contained in F . Finally, we need to show that F is pointed, that is, it contains no line. 
A well-known fact from polyhedral theory says if there is any line in F with direction d ∈ Rn \ {0}, then d must satisfy the defining inequalities with equality. However, only the zero vector does this. Hence, F cannot contain a line. Therefore, when applying Proposition 3.3 to f with underlying polyhedral complex P and hyperplane H , we have T = F , which is nonempty and contains no line. Hence, f cannot be written as linear combination of n-term maxima. 4 A Width Bound for NNs with Small Depth While the arguments in Arora et al. [2018] show that CPWLn = ReLUn(dlog2(n + 1)e), they do not provide any bound on the width of the NN required to represent any particular continuous piecewise linear function. The purpose of this section is to prove that for fixed dimension n, the required width for exact, depth-minimal representation of a CPWL function can be polynomially bounded in the number p of affine pieces; in particular pO(n 2). This is closely related to works that bound the number of linear pieces of an NN as a function of the size [Montúfar et al., 2014, Pascanu et al., 2014, Raghu et al., 2017, Montúfar et al., 2021]. It can also be seen as a counterpart, in the context of exact representations, to quantitative universal approximation theorems that bound the number of neurons required to achieve a certain approximation guarantee; see, e.g., Barron [1993, 1994], Mhaskar [1993], Pinkus [1999], Mhaskar [1996], Mhaskar and Micchelli [1995]. 4.1 The Convex Case We first derive our result for the case of convex CPWL functions and then use this to also prove the general nonconvex case. Our width bound is a consequence of the following theorem about convex CPWL functions, for which we are going to provide a geometric proof later. Theorem 4.1. Let f(x) = max{aTi x + bi | i ∈ [p]} be a convex CPWL function defined on Rn. Then f can be written as f(x) = ∑ S⊆[p], |S|≤n+1 cS max{aTi x+ bi | i ∈ S} with coefficients cS ∈ Z, for S ⊆ [p], |S| ≤ n+ 1. For the convex case, this yields a stronger version of the theorem by Wang and Sun [2005] stating that any (not necessarily convex) CPWL function can be written as a linear combination of (n+ 1)-term maxima. Theorem 4.1 is stronger in the sense that it guarantees that all pieces of the (n+ 1)-term maxima must be pieces of the original function, making it possible to bound the total number of these (n+ 1)-term maxima and, therefore, the size of an NN representing f . Theorem 4.2. Let f : Rn → R be a convex CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth dlog2(n+ 1)e+ 1 and width O(pn+1). Proof. Since the number of possible subsets S ⊆ [p] with |S| ≤ n + 1 is bounded by pn+1, the theorem follows by Theorem 4.1 and the construction from Arora et al. [2018, Theorem 2.1]. Before we present the proof of Theorem 4.1, we show how we can generalize its consequences to the nonconvex case. 4.2 The General (Nonconvex) Case It is a well-known fact that every CPWL function can be expressed as a difference of two convex CPWL functions, see, e.g., Wang [2004, Theorem 1]. This allows us to derive the general case from the convex case. What we need, however, is to bound the number of affine pieces of the two convex CPWL functions in terms of the number of pieces of the original function. Therefore, we consider a specific decomposition for which such bounds can easily be achieved. Proposition 4.3. Let f : Rn → R be a CPWL function with p affine pieces. 
Then, f can be written as f = g − h where both g and h are convex CPWL functions with at most p^{2n+1} pieces. Proof. Suppose the p affine pieces of f are given by x ↦ a_i^T x + b_i, i ∈ [p]. Define the function h(x) := ∑_{1≤i<j≤p} max{a_i^T x + b_i, a_j^T x + b_j} and let g := f + h. Then, obviously, f = g − h. It remains to show that both g and h are convex CPWL functions with at most p^{2n+1} pieces. The convexity of h is clear by definition. Consider the (p choose 2) = p(p−1)/2 < p^2 hyperplanes given by a_i^T x + b_i = a_j^T x + b_j, 1 ≤ i < j ≤ p. They divide Rn into at most (p^2 choose n) + (p^2 choose n−1) + · · · + (p^2 choose 0) ≤ p^{2n} regions (compare Edelsbrunner [1987, Theorem 1.3]), in each of which h is affine. In particular, h has at most p^{2n} ≤ p^{2n+1} pieces. Next, we show that g = f + h is convex. Intuitively, this holds because each possible breaking hyperplane of f is made convex by adding h. To make this formal, note that by the definition of convexity, it suffices to show that g is convex along each affine line. For this purpose, consider an arbitrary line x(t) = ta + b, t ∈ R, given by a ∈ Rn and b ∈ Rn. Let f̃(t) := f(x(t)), g̃(t) := g(x(t)), and h̃(t) := h(x(t)). We need to show that g̃ : R → R is a convex function. Observe that f̃, g̃, and h̃ are clearly one-dimensional CPWL functions with the property g̃ = f̃ + h̃. Hence, it suffices to show that g̃ is convex locally around each of its breakpoints. Let t ∈ R be an arbitrary breakpoint of g̃. If f̃ is already convex locally around t, then the same holds for g̃ as well since h̃ inherits convexity from h. Now suppose that t is a nonconvex breakpoint of f̃. Then there exist two distinct pieces of f, indexed by i, j ∈ [p] with i ≠ j, such that f̃(t′) = min{a_i^T x(t′) + b_i, a_j^T x(t′) + b_j} for all t′ sufficiently close to t. By construction, h̃(t′) contains the summand max{a_i^T x(t′) + b_i, a_j^T x(t′) + b_j}. Thus, adding this summand to f̃ linearizes the nonconvex breakpoint of f̃, while adding all the other summands preserves convexity. In total, g̃ is convex locally around t, which finishes the proof that g is a convex function. Finally, observe that pieces of g = f + h are always intersections of pieces of f and h, for which we have only p · p^{2n} = p^{2n+1} possibilities. Having this, we may conclude the following. Theorem 4.4. Let f : Rn → R be a CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth ⌈log2(n + 1)⌉ + 1 and width O(p^{2n^2+3n+1}). Proof. Consider the decomposition f = g − h from Proposition 4.3. Using Theorem 4.2, we obtain that both g and h can be represented with the required depth ⌈log2(n + 1)⌉ + 1 and with width O((p^{2n+1})^{n+1}) = O(p^{2n^2+3n+1}). Thus, the same holds for f. 4.3 Extended Newton Polyhedra of Convex CPWL Functions For our proof of Theorem 4.1, we use a correspondence of convex CPWL functions with certain polyhedra, which are known as (extended) Newton polyhedra in tropical geometry [Maclagan and Sturmfels, 2015]. These relations between tropical geometry and neural networks have previously been applied to investigate expressivity of NNs; compare our references in the introduction. In order to formalize this correspondence, let CCPWLn ⊆ CPWLn be the set of convex CPWL functions of type Rn → R. For f(x) = max{a_i^T x + b_i | i ∈ [p]} in CCPWLn, we define its so-called extended Newton polyhedron to be N(f) := conv({(a_i, b_i) ∈ Rn × R | i ∈ [p]}) + cone({−e_{n+1}}) ⊆ Rn+1. We denote the set of all possible extended Newton polyhedra in Rn+1 as Newtn.
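Before formalizing the correspondence, a small numerical illustration may help (a minimal Python/numpy sketch; the coefficient lists and helper names are arbitrary choices of ours). For convex CPWL functions given by coefficient sets as above, taking the pointwise maximum corresponds to taking the union of the coefficient sets, and adding the functions corresponds to taking all pairwise sums of coefficients, i.e., the generators of a Minkowski sum of the polyhedra N(f1) and N(f2); this anticipates Proposition 4.5 below.

```python
import numpy as np
from itertools import product

def cpwl(coeffs):
    """Convex CPWL function x -> max_i (a_i . x + b_i); coeffs is a list of (a, b)."""
    def f(x):
        return max(np.dot(a, x) + b for a, b in coeffs)
    return f

rng = np.random.default_rng(1)
n = 3
C1 = [(rng.standard_normal(n), rng.standard_normal()) for _ in range(4)]
C2 = [(rng.standard_normal(n), rng.standard_normal()) for _ in range(3)]
f1, f2 = cpwl(C1), cpwl(C2)

union_coeffs = C1 + C2                              # generators of conv(N(f1) ∪ N(f2))
minkowski_coeffs = [(a1 + a2, b1 + b2)              # generators of N(f1) + N(f2)
                    for (a1, b1), (a2, b2) in product(C1, C2)]

for _ in range(1000):
    x = rng.standard_normal(n)
    assert np.isclose(max(f1(x), f2(x)), cpwl(union_coeffs)(x))
    assert np.isclose(f1(x) + f2(x), cpwl(minkowski_coeffs)(x))
print("max ~ union of coefficient sets, sum ~ pairwise sums of coefficients")
```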
That is, Newtn is the set of (unbounded) polyhedra in Rn+1 that emerge from a polytope by adding the negative of the (n+1)-st unit vector −e_{n+1} as an extreme ray. Hence, a set P ⊆ Rn+1 is an element of Newtn if and only if P can be written as P = conv({(a_i, b_i) ∈ Rn × R | i ∈ [p]}) + cone({−e_{n+1}}). Conversely, for a polyhedron P ∈ Newtn of this form, let F(P) ∈ CCPWLn be the function defined by F(P)(x) = max{a_i^T x + b_i | i ∈ [p]}. There is an intuitive way of thinking about the extended Newton polyhedron P of a convex CPWL function f: it consists of all hyperplane coefficients (a, b) ∈ Rn × R such that a^T x + b ≤ f(x) for all x ∈ Rn. This also explains why we add the extreme ray −e_{n+1}: decreasing b obviously maintains the property of a^T x + b being a lower bound on the function f. (In convex-geometric terms, N(f) is the negative of the epigraph of the convex conjugate of f.) We need the notion of the Minkowski sum of two polyhedra P and Q: it is given as the set P + Q = {p + q | p ∈ P, q ∈ Q}. In fact, there is a one-to-one correspondence between elements of CCPWLn and Newtn, which is nicely compatible with some (functional and polyhedral) operations. This correspondence has been studied before in tropical geometry [Maclagan and Sturmfels, 2015, Joswig, 2022], convex geometry [Hiriart-Urruty and Lemaréchal, 1993], as well as the neural network literature [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Alfarra et al., 2020, Montúfar et al., 2021]. We summarize the key findings of this correspondence relevant to our work in the following proposition: Proposition 4.5. Let n ∈ N and f1, f2 ∈ CCPWLn. Then it holds that (i) the functions N : CCPWLn → Newtn and F : Newtn → CCPWLn are well-defined, that is, their output is independent of the representation of the input by pieces or vertices, respectively, (ii) N and F are bijections and inverse to each other, (iii) N(max{f1, f2}) = conv(N(f1), N(f2)) := conv(N(f1) ∪ N(f2)), (iv) N(f1 + f2) = N(f1) + N(f2), where the + on the right-hand side is Minkowski addition. An algebraic way of phrasing this proposition is as follows: N and F are isomorphisms between the semirings (CCPWLn, max, +) and (Newtn, conv, +). 4.4 Outline of the Proof of Theorem 4.1 We prove Theorem 4.1 in full detail in Appendix D. The rough idea is as follows. Suppose we have a p-term max function f with p ≥ n + 2. By Proposition 4.5, f corresponds to a polyhedron P ∈ Newtn with at least n + 2 vertices. Applying a classical result from discrete geometry known as Radon's theorem allows us to carefully decompose P into a "signed" Minkowski sum of polyhedra in Newtn whose vertices are subsets of at most p − 1 out of the p vertices of P. (Some polyhedra may occur with "negative" coefficients in that sum, meaning that they are actually added to P instead of the other polyhedra; the corresponding CPWL functions will then have negative coefficients in the linear combination representing f.) Translating this back into the world of CPWL functions by Proposition 4.5 yields that f can be written as a linear combination of p′-term maxima with p′ < p, where each of them involves a subset of the p affine terms of f. We can then obtain Theorem 4.1 by iterating until every occurring maximum expression involves at most n + 1 terms. 4.5 Potential Approaches to Show Lower Bounds on the Width In light of the upper width bounds shown in this section, a natural question to ask is whether also meaningful lower bounds can be achieved.
This would mean constructing a family of CPWL functions with p pieces defined on Rn (with different values of p and n), for which we can prove that a large width is required to represent these functions with NNs of depth ⌈log2(n + 1)⌉ + 1. A trivial and not very satisfying answer follows, e.g., from Raghu et al. [2017] or Serra et al. [2018]: for fixed input dimension n, they show that a function computed by an NN with k hidden layers and width w has at most O(w^{kn}) pieces. For our setting, this means that an NN with logarithmic depth needs a width of at least Ω(p^{1/(n log n)}) to represent a function with p pieces. This is, of course, very far away from our upper bounds. Similar upper bounds on the number of pieces have been proven by many other authors and are often used to show depth-width tradeoffs [Montúfar et al., 2014, 2021, Pascanu et al., 2014, Telgarsky, 2016b, Arora et al., 2018]. However, there is a good reason why all these results only give rise to very trivial lower bounds for our setting: the focus is always on functions with very many pieces, which then, consequently, need many neurons to be represented (with small depth). However, since the lower bounds we strive for depend on the number of pieces, we would need to construct a family of functions with comparably few pieces that still need very many neurons to be represented. In general, it seems to be a tough task to argue why such functions should exist. A different approach could leverage methods from complexity theory, in particular from circuit complexity. Neural networks are basically arithmetic circuits with very special operations allowed. In fact, they can be seen as a tropical variant of arithmetic circuits. Showing circuit lower bounds is a notoriously difficult task in complexity theory, but maybe some conditional result (based on common conjectures similar to P ≠ NP) could be established. We think that the question whether our bounds are tight, or whether at least some non-trivial lower bounds on the width for NNs with logarithmic depth can be shown, is an exciting one for further research. 5 Discussion of Future Research Directions The most obvious and, at the same time, most exciting open research question is to prove or disprove Conjecture 1.1, or equivalently Conjecture 1.2. The first step could be to prove Assumption 2.4. The assumption is intuitive because every breakpoint introduced at other places needs to be canceled out later. Therefore, it is natural to assume that these breakpoints do not have to be introduced in the first place. However, this intuition does not seem to be enough for a formal proof because it could occur that additional breakpoints in intermediate steps, which are canceled out later, also influence the behavior of the function at other places where we allow breakpoints in the end. Another step towards resolving our conjecture may be to find an alternative proof of Theorem 2.5 not using Assumption 2.4. This might also be beneficial for generalizing our techniques to more hidden layers, since, while theoretically possible, a direct generalization is infeasible due to computational limitations. In light of our results from Section 3, it would be desirable to provide a complete characterization of the functions contained in ReLU(k). Another potential research goal is improving our upper bounds on the width from Section 4 and/or proving matching lower bounds as discussed in Section 4.5. Some more interesting research directions are the following: 1.
Establishing or strengthening our results for special classes of NNs like recurrent neural networks (RNNs) or convolutional neural networks (CNNs), 2. Using exact representation results to show more drastic depth-width tradeoffs compared to existing results in the literature, 3. Understanding how the class ReLU(k) changes when a polynomial upper bound is imposed on the width of the NN; see related work by Vardi et al. [2021]. Acknowledgments and Disclosure of Funding Christoph Hertrich gratefully acknowledges funding by DFG-GRK 2434 “Facets of Complexity”. Amitabh Basu gratefully acknowledges support from AFOSR Grant FA95502010341 and NSF Grant CCF2006587. Martin Skutella gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy — The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
1. What are the main contributions and findings of the paper regarding the representation capabilities of neural networks?
2. How does the paper approach the study of exact representation of continuous functions via neural networks, and what are the key conjectures proposed?
3. What are the strengths and weaknesses of the paper, particularly in terms of the results presented and the techniques used?
4. Are there any potential extensions or future directions for the research, such as applying the theory to recurrent neural networks or exploring depth/width tradeoffs?
5. How might the paper's findings contribute to a better understanding of the capabilities and limitations of neural networks in representing complex functions?
Summary Of The Paper Review
Summary Of The Paper EDIT: After reading authors' responses, I decided to keep my score as is. The paper studies the problem of exact representation of continuous functions via neural networks with the ReLU activation. The literature on the approximation power of neural nets is large; however, the paper here delves into the less studied question of whether the class of exactly representable functions strictly increases when adding more layers (with no restrictions on size). The authors want to understand the function classes exactly represented by different architectures, and a step towards this direction is to analyze the class of functions captured by a depth-d neural net (without width constraints) and how this class of functions changes as we get to depth-(d+1) neural nets. It is obvious that ReLU nets will output continuous piecewise linear functions (CPWL for short), and a non-trivial fact from previous works is that log(n+1) hidden layers suffice to represent any CPWL function in n dimensions via a ReLU net. Let d be a parameter for the depth and ReLU(d) be the set of all functions representable exactly via ReLU nets of depth at most d. The paper tries to understand ReLU(d) as d goes from 0 to log(n+1). The authors put forth two equivalent conjectures about the relations between the classes of functions ReLU(d) for different d. Conjecture 1.1 states that every additional layer will indeed be substantial in terms of the representational capabilities of ReLU nets, up to d ≤ log(n+1) of course, at which point all CPWL functions are representable. The authors reformulate this with Conjecture 1.2, which is a simple statement about max functions. The authors then show a special case of Conj. 1.2 that corresponds to showing that the max function on 5 variables cannot be represented with 2 or 3 hidden layers, with the caveat that they need a certain assumption on the breakpoints of the function represented by any intermediate neuron. Along the same lines, the authors show that the class ReLU(k) contains more functions than just taking the max on 2^k + 1 variables. To achieve this they use the theory of polyhedral complexes associated with CPWL functions. Finally, the authors find upper bounds on the sizes of the networks needed for expressing arbitrary CPWL functions with p linear pieces, as given in Theorem 4.4, which basically involves depth O(log n) nets with width growing as p^{n^2}. Review I like the overall theme of understanding the exact representation capabilities of neural networks, escaping the traditional approximation theory viewpoint. As most of the traditional approximation theory results rely on large widths to approximately construct step functions to approximate a given function, all these techniques cannot work here and new ideas are necessary. The main ideas in the paper involve how to relate max functions with the output of ReLU nets. Both conjectures provided in the introduction are very plausible (and equivalent, of course, as the authors show), and the authors take some small first steps towards understanding them. One of the main results is that "there does not exist a 3-layer NN" to compute the max on 5 variables. One quick observation here, however, is that following the notation in Conj. 1.2, I think there is a shift by +1 that is not correct (is k=2 or k=3 for the statement?). One annoying thing with this result is that it involves a somewhat technical condition on the breakpoints of intermediate neurons which I can't see how to prove.
Furthermore, I believe the result is useful but not very surprising. It basically means that computing the maximum between inputs somehow requires a lot of depth in some sense. The next result is about ReLU(k) being a superset of the class of max functions on 2 k variables. To establish this they use a specific construction involving max functions and compositions between max functions that can be written as a ReLU net but not as a max function on several terms. Although the proof is somewhat complicated, still the result seems not as surprising. The perhaps more interesting part has to do with Section 4 where the authors derive upper bounds for the size of the net to be able to represent CPWL functions. Overall, the motivation of the work is interesting from a theoretical point of view, however the results presented are quite weak. Section 2 relied on a technical assumption and only proves a very special case of the (plausible) conjecture 1.1, Section 3 is a comparison between very special class of functions. Section 4 has some interesting techniques. I like that the authors have identified the relation to max functions and have built several ways to analyze the exact representation capabilities of NN, however I believe more and stronger results are necessary to make this a solid contribution. Questions/Future directions to the authors: Could most of these results also be stated for recurrent neural networks? As far as I know several important approximation theory results (e.g., Telgarsky's "Benefits of depth in neural networks") can also be viewed for RNNs instead of feedforward nets, and having the analogous theory for RNNs would be interesting, and can strengthen the overall message of your paper. To get the separations for the different depth levels and also to separate ReLU(k) from MAX(2^k), the authors identify the max function as a "source of complexity" in some sense. In particular Proposition 3.2 relies on max of max functions. Why not taking this to the extreme? Specifically, is there another way of using repeated compositions of max functions with themselves in order to get a "sufficiently complicated" function? Notice that works that have exploited this repeated compositions trick include the seminal paper by Telgarsky and also follow-ups that extended Telgarsky's results to much broader family of functions using characterizations from dynamical systems (e.g., "Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem"). The "source of complexity" in that case came from oscillations in the input, and not max alternations between the inputs as in your case. Another interesting direction would be to get also tradeoffs for the depth/width or other aspects of the architecture needed to exactly represent certain functions. This again will be a counterpoint to many existing approximation theory results that derive depth/width tradeoffs instead of just approximability results.
NIPS
Title Towards Lower Bounds on the Depth of ReLU Neural Networks Abstract We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes. 1 Introduction A core problem in machine learning or statistical pattern recognition is the estimation of an unknown data distribution with access to i.i.d. samples from the distribution. It is well-known that there is a tension between how much prior information one has about the data distribution and how many samples one needs to solve the problem with high confidence (or equivalently, how much variance one has in one's estimate). This is referred to as the bias-variance trade-off or the bias-complexity trade-off. Neural networks provide a way to turn this bias/complexity knob in a controlled manner that has been studied for decades going back to the idea of a perceptron by Rosenblatt [1958]. This is done by modifying the architecture of a neural network class of functions, which has two parameters: depth and size. As one increases these parameters, the class of functions becomes more expressive. In terms of the bias-variance trade-off, the "bias" decreases as the class of functions becomes more expressive, but the "variance" or "complexity" increases. So-called universal approximation theorems [Cybenko, 1989, Hornik, 1991, Anthony and Bartlett, 1999] show that even with a single hidden layer, i.e., when the depth of the architecture is the smallest possible value, one can essentially reduce the "bias" as much as one desires, by increasing the size of the network or the number of neurons used in the neural network. Nevertheless, it can be advantageous both theoretically and empirically to increase the depth because a substantial reduction in the size can be achieved; Telgarsky [2016a], Eldan and Shamir [2016], Arora et al. [2018] is a small sample of recent work in this direction. To get a better quantitative handle on these trade-offs, it is important to understand what classes of functions are exactly representable by neural networks with a certain architecture. The precise mathematical statements of universal approximation theorems show that single layer networks can approximate arbitrarily well any continuous function (under some additional mild hypotheses). While this suggests that single layer networks are good enough from a learning perspective, from a mathematical perspective, one can ask the question if the class of functions represented by a single layer is a strict subset of the class of functions represented by two or more hidden layers.
On the question of size, one can ask for precise bounds on the size of the network of a given depth to represent a certain class of functions. We believe that a better understanding of the function classes exactly represented by different architectures will have implications not just for mathematical foundations, but also algorithmic and statistical learning aspects of neural networks. The task of searching for the “best” function in that class can only benefit from a better understanding of the nature of functions in that class. A motivating question behind the results in this paper is to understand the hierarchy of function classes exactly represented by neural networks of increasing depth. We now introduce more precise notation and terminology to set the stage for our investigations. Notation. We write [n] := {1, 2, . . . , n} for the set of natural numbers up to n (without zero) and [n]0 := [n] ∪ {0} for the same set including zero. For any n ∈ N, let σ : Rn → Rn be the component-wise rectifier function σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any number of hidden layers k ∈ N, a (k + 1)-layer feedforward neural network with rectified linear units (ReLU NN or simply NN) is given by k affine transformations T (`) : Rn`−1 → Rn` , x 7→ A(`)x+ b(`), for ` ∈ [k], and a linear transformation T (k+1) : Rnk → Rnk+1 , x 7→ A(k+1)x. It is said to compute or represent the function f : Rn0 → Rnk+1 given by f = T (k+1) ◦ σ ◦ T (k) ◦ σ ◦ · · · ◦ T (2) ◦ σ ◦ T (1). The matrices A(`) ∈ Rn`×n`−1 are called the weights and the vectors b(`) ∈ Rn` are the biases of the `-th layer. The number n` ∈ N is called the width of the `-th layer. The maximum width of all hidden layers max`∈[k] n` is called the width of the NN. Further, we say that the NN has depth k + 1 and size ∑k `=1 n`. Often, NNs are represented as layered, directed, acyclic graphs where each dimension of each layer (including input layer ` = 0 and output layer ` = k + 1) is one vertex, weights are arc labels, and biases are node labels. Then, the vertices are called neurons. For a given input x = x(0) ∈ Rn0 , let y(`) := T (`)(x(`−1)) ∈ Rn` be the activation vector and x(`) := σ(y`) ∈ Rn` the output vector of the `-th layer. Further, let y := y(k+1) = f(x) be the output of the NN. We also say that the i-th component of each of these vectors is the activation or the output of the i-th neuron in the `-th layer. For k ∈ N, we define ReLUn(k) := {f : Rn → R | f can be represented by a (k + 1)-layer NN}, CPWLn := {f : Rn → R | f is continuous and piecewise linear}. By definition, a continuous function f : Rn → R is piecewise linear in case there is a finite set of polyhedra whose union is Rn, and f is affine linear over each such polyhedron. In order to analyze ReLUn(k), we use another function class defined as follows. We call a function g a p-term max function if it can be expressed as maximum of p affine terms, that is, g(x) = max{`1(x), . . . , `p(x)} where `i : Rn → R is affinely linear for i ∈ [p]. Based on that, we define MAXn(p) := {f : Rn → R | f is a linear combination of p-term max functions}. If the input dimension n is not important for the context, we sometimes drop the index and use ReLU(k) := ⋃ n∈N ReLUn(k) and MAX(p) := ⋃ n∈N MAXn(p) instead. Since we deal with polyhedra a lot in this paper, we will use the standard notations convA and coneA for the convex and conic hulls of a set A ⊆ Rn. For an in-depth treatment of polyhedra and (mixed-integer) optimization, we refer to Schrijver [1986]. 
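The definitions above are easy to mirror in code. The following is a minimal, illustrative numpy sketch (the class name, the concrete dimensions, and the random weights are arbitrary choices of ours, not part of the formal development) of a (k+1)-layer ReLU NN f = T^(k+1) ∘ σ ∘ T^(k) ∘ · · · ∘ σ ∘ T^(1), together with the associated depth, width, and size:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class ReluNN:
    """(k+1)-layer ReLU NN: k affine maps with ReLU in between, then a linear output map."""
    def __init__(self, hidden_layers, output_matrix):
        self.hidden_layers = hidden_layers      # list of (A, b) pairs, one per hidden layer
        self.output_matrix = output_matrix      # A^(k+1); the output layer has no bias

    def __call__(self, x):
        for A, b in self.hidden_layers:         # x^(l) = sigma(A^(l) x^(l-1) + b^(l))
            x = relu(A @ x + b)
        return self.output_matrix @ x           # y = A^(k+1) x^(k)

    @property
    def depth(self):
        return len(self.hidden_layers) + 1      # k hidden layers -> depth k + 1

    @property
    def width(self):
        return max(A.shape[0] for A, _ in self.hidden_layers)

    @property
    def size(self):
        return sum(A.shape[0] for A, _ in self.hidden_layers)

rng = np.random.default_rng(0)
dims = [4, 6, 5, 1]                             # n_0, n_1, n_2, n_3: two hidden layers
layers = [(rng.standard_normal((dims[i + 1], dims[i])), rng.standard_normal(dims[i + 1]))
          for i in range(len(dims) - 2)]
net = ReluNN(layers, rng.standard_normal((dims[-1], dims[-2])))
print(net.depth, net.width, net.size, net(rng.standard_normal(dims[0])))
```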
1.1 Our Contribution It is not hard to see that any function expressed by a ReLU network is a continuous and piecewise linear (CPWL) function, because one is composing continuous piecewise linear functions together. Based on a result by Wang and Sun [2005], Arora et al. [2018] show that every CPWL function defined on Rn can be represented by a ReLU neural network with dlog2(n+ 1)e hidden layers. We wish to understand whether one can do better. We believe it is not possible to do better and we pose the following conjecture to better understand the importance of depth in neural networks. Conjecture 1.1. For any n ∈ N, let k∗ := dlog2(n+ 1)e. Then it holds that ReLUn(0) ( ReLUn(1) ( · · · ( ReLUn(k∗ − 1) ( ReLUn(k∗) = CPWLn . (1) Conjecture 1.1 claims that any additional layer up to k∗ hidden layers strictly increases the set of representable functions. This would imply that the construction by Arora et al. [2018, Theorem 2.1] is actually depth-minimal. Observe that in order to prove Conjecture 1.1, it suffices to find a single function f ∈ ReLUn(k∗) \ ReLUn(k∗ − 1) with n = 2k ∗−1 for all k∗ ∈ N. This also implies all remaining strict inclusions ReLUn(i− 1) ( ReLUn(i) for i < k∗ since ReLUn(i− 1) = ReLUn(i) directly implies that ReLUn(i− 1) = ReLUn(i′) for all i′ ≥ i− 1. In fact, there is a canonical candidate for such a function, allowing us to reformulate the conjecture as follows. Conjecture 1.2. For any k ∈ N, n = 2k, the function fn(x) = max{0, x1, . . . , xn} cannot be represented with k hidden layers. Proposition 1.3. Conjecture 1.1 and Conjecture 1.2 are equivalent. Proof (Sketch). We argued above that Conjecture 1.2 implies Conjecture 1.1. For the other direction, one can argue that, if the specific (n+1)-term max function fn can be represented by k hidden layers, then every other (n+ 1)-term max function as well. The claim then follows via a result by [Wang and Sun, 2005] stating that any f ∈ CPWLn can be written as linear combination of (n+ 1)-term max functions. We provide a detailed argument in Appendix A. It is known that Conjecture 1.2 holds for k = 1 [Mukherjee and Basu, 2017]. However, the conjecture remains open for k ≥ 2. In this paper, we present the following results as partial progress towards resolving this conjecture. In Section 2, we resolve Conjecture 1.2 for k = 2, under a natural assumption on the breakpoints of the function represented by any intermediate neuron. We achieve this result by leveraging techniques from mixed-integer programming to analyze the set of functions computable by certain NNs. It is not hard to see that MAX(2k) ⊆ ReLU(k) for all k ∈ N [Arora et al., 2018], that is, any 2k-term max function (and linear combinations thereof) can be expressed with k hidden layers. One might ask whether the converse is true as well, that is, whether the classes MAX(2k) and ReLU(k) are actually equal. This would not only provide a neat characterization of ReLU(k), but also prove Conjecture 1.2 without any additional assumption since one can show that max{0, x1, . . . , x2k} is not contained in MAX(2k). In fact, this is true for k = 1, that is, a function is computable with one hidden layer if and only if it is a linear combination of 2-term max functions. However, in Section 3, we show that for k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In this section, the key technical ingredient is the theory of polyhedral complexes associated with CPWL functions. This way, we provide important insights concerning the richness of the class ReLU(k). 
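To make the k = 1 case mentioned above concrete, here is a small numerical check (illustrative Python; the example function g and its coefficients are arbitrary choices of ours) that a linear combination of 2-term max functions can be evaluated with a single layer of ReLUs, using max{a, b} = relu(a − b) + relu(b) − relu(−b):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_term_max_via_relu(a, b):
    # max{a, b} = relu(a - b) + b, and b = relu(b) - relu(-b),
    # so three ReLU neurons (one hidden layer) suffice per 2-term max.
    return relu(a - b) + relu(b) - relu(-b)

rng = np.random.default_rng(2)
for _ in range(1000):
    x1, x2 = rng.standard_normal(2)
    # g(x) = 3 * max{x1, 2*x2 + 1} - 2 * max{0, x1 - x2}: a linear combination of 2-term maxes
    direct = 3 * max(x1, 2 * x2 + 1) - 2 * max(0.0, x1 - x2)
    one_hidden_layer = (3 * two_term_max_via_relu(x1, 2 * x2 + 1)
                        - 2 * two_term_max_via_relu(x1 - x2, 0.0))
    assert np.isclose(direct, one_hidden_layer)
print("linear combination of 2-term maxes reproduced with one hidden layer of ReLUs")
```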
So far, we have focused on understanding the smallest depth needed to express CPWL functions using neural networks with ReLU activations. In Section 4, we complement these results by upper bounds on the sizes of the networks needed for expressing arbitrary CPWL functions. In particular, Theorem 4.4 shows that any continuous piecewise linear function with p linear/affine pieces on Rn can be expressed by a network with depth at most O(log n) and width at most pO(n2). We arrive at this result by introducing a novel application of recently established interactions between neural networks and tropical geometry. 1.2 Related Work Depth versus size. Soon after the original universal approximation theorems [Cybenko, 1989, Hornik, 1991], concrete bounds were obtained on the number of neurons needed in the hidden layer to achieve a certain level of accuracy. The literature on this is vast and we refer to a small representative sample here [Barron, 1993, 1994, Mhaskar, 1993, Pinkus, 1999, Mhaskar, 1996, Mhaskar and Micchelli, 1995]. More recently, work has focused on how deeper networks can have exponentially or super exponentially smaller size compared to shallower networks [Telgarsky, 2016a, Eldan and Shamir, 2016, Arora et al., 2018, Vardi et al., 2021]. See also Gribonval et al. [2021] for another perspective on the relationship between expressivity and architecture, and the references therein. We reiterate that the list of references above is far from complete. Mixed-integer optimization and machine learning. Over the past decade, a growing body of work has emerged that explores the interplay between mixed-integer optimization and machine learning. On the one hand, researchers have attempted to improve mixed-integer optimization algorithms by exploiting novel techniques from machine learning [Bonami et al., 2018, Gasse et al., 2019, He et al., 2014, Khalil et al., 2016, 2017, Kruber et al., 2017, Lodi and Zarpellon, 2017, Alvarez et al., 2017]; see also Bengio et al. [2020] for a recent survey. On the flip side, mixed-integer optimization techniques have been used to analyze function classes represented by neural networks [Serra et al., 2018, Anderson et al., 2020, Fischetti and Jo, 2017, Serra and Ramalingam, 2020, Serra et al., 2020]. In Section 2 below, we show another new use of mixed-integer optimization tools for understanding function classes represented by neural networks. Design of training algorithms. We believe that a better understanding of the function classes represented exactly by a neural architecture also has benefits in terms of understanding the complexity of the training problem. For instance, in a paper by Arora et al. [2018], an understanding of single layer ReLU networks enables the design of a globally optimal algorithm for solving the empirical risk minimization (ERM) problem, that runs in polynomial time in the number of data points in fixed dimension. See also Goel et al. [2017, 2018], Goel and Klivans [2019], Dey et al. [2020], Boob et al. [2020], Goel et al. [2021], Froese et al. [2021] for a similar lines of work. Neural Networks and Tropical Geometry. A recent stream of research involves the interplay between neural networks and tropical geometry. The piecewise linear functions computed by neural networks can be seen as (tropical quotients of) tropical polynomials. Linear regions of these functions correspond to vertices of so-called Newton polytopes associated with these tropical polynomials. 
Applications of this correspondence include bounding the number of linear regions of a neural network [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Montúfar et al., 2021] and understanding decision boundaries [Alfarra et al., 2020]. In Section 4 we present a novel application of tropical concepts to understand neural networks. We refer to Maragos et al. [2021] for a recent survey of connections between machine learning and tropical geometry, as well as to the textbooks by Maclagan and Sturmfels [2015] and Joswig [2022] for in-depth introductions to tropical geometry and tropical combinatorics. 2 Conditional Lower Bounds on Depth via Mixed-Integer Programming In this section, we provide a computer-aided proof that, under a natural, yet unproven assumption, the function f(x) := max{0, x1, x2, x3, x4} cannot be represented by a 3-layer NN. It is worth to note that, to the best of our knowledge, no CPWL function is known for which the non-existence of a 3-layer NN can be proven without additional assumption. For easier notation, we write x0 := 0. We first prove that we may restrict ourselves to NNs without biases. This holds true independent of our assumption, which we introduce afterwards. Definition 2.1. A function g : Rn → Rm is called positively homogeneous if g(λx) = λg(x) for all λ ≥ 0. Definition 2.2. For an NN given by affine transformations T (`)(x) = A(`)x + b(`), we define the corresponding homogenized NN to be the NN given by T̃ (`)(x) = A(`)x with all biases set to zero. Proposition 2.3. If an NN computes a positively homogeneous function, then the corresponding homogenized NN computes the same function. Proof. Let g : Rn0 → Rnk+1 be the function computed by the original NN and g̃ the one computed by the homogenized NN. Further, for any 0 ≤ ` ≤ k, let g(`) = T (`+1) ◦σ ◦T (`) ◦ · · ·◦T (2) ◦σ ◦T (1) be the function computed by the sub-NN consisting of the first (` + 1)-layers and let g̃(`) be the function computed by the corresponding homogenized sub-NN. We first show by induction on ` that the norm of ‖g(`)(x)− g̃(`)(x)‖ is bounded by a global constant that only depends on the parameters of the NN but not on x. For ` = 0, we obviously have ‖g(0)(x) − g̃(0)(x)‖ = ‖b(1)‖ =: C0, settling the induction base. For the induction step, let ` ≥ 1 and assume that ‖g(`−1)(x) − g̃(`−1)(x)‖ ≤ C`−1, where C`−1 only depends on the parameters of the NN. Since component-wise ReLU activation has Lipschitz constant 1, this implies ‖(σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x)‖ ≤ C`−1. Using any matrix norm that is compatible with the Euclidean vector norm, we obtain: ‖g(`)(x)− g̃(`)(x)‖ = ‖b(`+1) +A(`+1)((σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x))‖ ≤ ‖b(`+1)‖+ ‖A(`+1)‖ · C`−1 =: C` Since the right-hand side only depends on NN parameters, the induction is completed. Finally, we show that g = g̃. For the sake of contradiction, suppose that there is an x ∈ Rn0 with ‖g(x) − g̃(x)‖ = δ > 0. Let x′ := Ck+1δ x; then, by positive homogeneity, it follows that ‖g(x′)− g̃(x′)‖ = Ck + 1 > Ck, contradicting the property shown above. Thus, we have g = g̃. Since f = max{0, x1, x2, x3, x4} is positively homogeneous, Proposition 2.3 implies that, if there is a 3-layer NN computing f , then there also is one that has no biases. Therefore, in the remainder of this section, we only consider NNs without biases and assume implicitly that all considered CPWL functions are positively homogeneous. In particular, any piece of such a CPWL function is linear and not only affine linear. 
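As a small sanity check in the other direction (a minimal numpy sketch with arbitrary random weights, not taken from the paper): a ReLU NN with all biases set to zero, as in Definition 2.2, computes a positively homogeneous function, i.e., g(λx) = λ g(x) for all λ ≥ 0. This is the elementary converse of the reduction just described and explains why restricting to bias-free NNs loses nothing when the target function is positively homogeneous.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def homogenized_nn(weight_matrices, x):
    """Forward pass of a ReLU NN with all biases set to zero (Definition 2.2)."""
    for A in weight_matrices[:-1]:
        x = relu(A @ x)
    return weight_matrices[-1] @ x

rng = np.random.default_rng(3)
weights = [rng.standard_normal((7, 4)), rng.standard_normal((5, 7)), rng.standard_normal((1, 5))]

for _ in range(100):
    x = rng.standard_normal(4)
    lam = rng.uniform(0.0, 5.0)
    # positive homogeneity: scaling the input by lambda >= 0 scales the output by lambda
    assert np.allclose(homogenized_nn(weights, lam * x), lam * homogenized_nn(weights, x))
print("bias-free ReLU NN is positively homogeneous, as used in Section 2")
```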
Observe that, for function f , the only points of non-differentiability (a.k.a. breakpoints) are at places where at least two of the five numbers x0 = 0, x1, x2, x3, and x4 are equal. Hence, if some neuron of an NN computing f introduces breakpoints at other places, these breakpoints must be canceled out by other neurons. Therefore, it is a natural assumption that such breakpoints are not introduced at all in the first place. To make this assumption formal, let Hij = {x ∈ R4 | xi = xj}, for 0 ≤ i < j ≤ 4, be ten hyperplanes in R4 and H = ⋃ 0≤i<j≤4Hij be the corresponding hyperplane arrangement. The regions or cells of H are defined to be the closures of the connected components of R4 \H . It is easy to see that these regions are in one-to-one correspondence to the 5! = 120 possible orderings of the five numbers x0 = 0, x1, x2, x3, and x4. More precisely, for a permutation π of the five indices [4]0 = {0, 1, 2, 3, 4}, the corresponding region is the polyhedron Cπ := {x ∈ R4 | xπ(0) ≤ xπ(1) ≤ xπ(2) ≤ xπ(3) ≤ xπ(4)}. We say that a (positively homogeneous) CPWL function g is H-conforming, if it is linear within any of these regions of H , that is, if it only has breakpoints where the relative ordering of the five values x0 = 0, x1, x2, x3, x4 changes. Moreover, an NN is said to be H-conforming if the output of each neuron contained in the NN is H-conforming. Equivalently, this is the case if and only if all intermediate functions σ ◦ T (`) ◦ σ ◦ T (`−1) ◦ · · · ◦ σ ◦ T (1), ` ∈ [k], are H-conforming. Now our assumption can be formally phrased as follows. Assumption 2.4. If there exists a 3-layer NN computing f(x) = max{0, x1, x2, x3, x4}, then there also exists one that is H-conforming. We use mixed-integer programming to prove the following theorem. Theorem 2.5. Under Assumption 2.4, there does not exist a 3-layer NN that computes the function f(x) = max{0, x1, x2, x3, x4}. Proof (Outline). We first study some geometric properties of the hyperplane arrangement H . This will show that each of the 120 cells of H is a simplicial polyhedral cone spanned by 4 extreme rays. In total, there are 30 such rays (because rays are used multiple times to span different cones). This implies that the set of H-conforming functions of type R4 → R is a 30-dimensional vector space and each function is uniquely determined by its values on the 30 rays. We then use linear algebra to show that the space of functions generated by H-conforming two-layer NNs is a 14-dimensional subspace. Moreover, with two hidden layers, at least 29 of the 30 dimensions can be generated and f is not contained in this 29-dimensional subspace. So, it remains the question whether the 14 dimensions producible with the first hidden layer can be combined in such a way that after applying a ReLU activation in the second hidden layer, we do not end up within the 29-dimensional subspace. We model this question as a mixed-interger program (MIP). Solving the MIP yields that we always end up within the 29-dimensional subspace, implying that f cannot be represented by a 3-layer NN. This provides a computational proof of Theorem 2.5. All details can be found in Appendix B. 3 Going Beyond 2k-Term Max Functions with k Layers In this section we prove the following result: Theorem 3.1. For any k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In order to prove this theorem, we provide a specific function that is in ReLU(k) \MAX(2k) for any number of hidden layers k ≥ 2. 
The challenging part is to show that the function is in fact not contained in MAX(2k). Proposition 3.2. For any n ≥ 3, the function f : Rn → R, f(x) = max{0, x1, x2, . . . , xn−3, max{xn−2, xn−1}+ max{0, xn}} (2) cannot be written as a linear combination of n-term max functions. The above proposition means that it is not possible to write f(x) in the form f(x) = p∑ i=1 λi max{`i1(x), . . . , `in(x)} where p ∈ N, λ1, . . . , λp ∈ R, and `ij : Rn → R is an affine linear function for every i ∈ [p] and j ∈ [n]. (Note that max functions with less than n terms are also allowed, as some functions `ij may coincide.) Before we prove Proposition 3.2, we show that it implies Theorem 3.1. Proof of Theorem 3.1. For k ≥ 2, let n := 2k. By Proposition 3.2, function f defined in (2) is not contained in MAX(2k). It remains to show that it can be represented using a ReLU NN with k hidden layers. To see this, first observe that any of the n/2 = 2k−1 terms max{0, x1}, max{x2i, x2i+1} for i ∈ [n/2 − 2], and max{xn−2, xn−1} + max{0, xn} can be expressed by a one-hidden-layer NN since all these are (linear combinations of) 2-term max functions. Since f is the maximum of these 2k−1 terms, and since the maximum of 2k−1 numbers can be computed with k − 1 hidden layers [Arora et al., 2018], this implies that f is in ReLU(k). In order to prove Proposition 3.2, we need the concept of polyhedral complexes. A polyhedral complex P is a finite set of polyhedra such that each face of a polyhedron in P is also in P , and for two polyhedra P,Q ∈ P , their intersection P ∩Q is a common face of P and Q (possibly the empty face). Given a polyhedral complex P in Rn and an integer m ∈ [n], we let Pm denote the collection of all m-dimensional polyhedra in P . For a convex CPWL function f , we define its underlying polyhedral complex as follows: it is the unique polyhedral complex covering Rn (i.e., each point in Rn belongs to some polyhedron in P) whose n-dimensional polyhedra coincide with the domains of the (maximal) affine pieces of f . In particular, f is affinely linear within each P ∈ P , but not within any strict superset of a polyhedron in Pn. Exploiting properties of polyhedral complexes associated with CPWL functions, we prove the following proposition in Appendix C. Proposition 3.3. Let f0 : Rn → R be a convex CPWL function and let P0 be the underlying polyhedral complex. If there exists a hyperplane H ⊆ Rn such that the set T := ⋃{ F ∈ Pn−10 ∣∣ F ⊆ H} is nonempty and contains no line, then f0 cannot be expressed as a linear combination of n-term maxima of affine linear functions. This allows us to prove Proposition 3.2. Proof of Proposition 3.2. Observe that f (defined in (2)) has the alternate representation f(x) = max{0, x1, x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn} as a maximum of n+ 2 terms. Let P be its underlying polyhedral complex. Let the hyperplane H be defined by x1 = 0. Observe that any facet in Pn−1 is a polyhedron defined by two of the n + 2 terms that are equal and at least as large as each of the remaining n terms. Hence, the only facet that could possibly be contained in H is F := {x ∈ Rn | x1 = 0 ≥ x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn}. Note that F is indeed an (n− 1)-dimensional facet in Pn−1, because, for example, the full neighborhood of (0,−1, . . . ,−1) ∈ Rn intersected with H is contained in F . Finally, we need to show that F is pointed, that is, it contains no line. 
A well-known fact from polyhedral theory says if there is any line in F with direction d ∈ Rn \ {0}, then d must satisfy the defining inequalities with equality. However, only the zero vector does this. Hence, F cannot contain a line. Therefore, when applying Proposition 3.3 to f with underlying polyhedral complex P and hyperplane H , we have T = F , which is nonempty and contains no line. Hence, f cannot be written as linear combination of n-term maxima. 4 A Width Bound for NNs with Small Depth While the arguments in Arora et al. [2018] show that CPWLn = ReLUn(dlog2(n + 1)e), they do not provide any bound on the width of the NN required to represent any particular continuous piecewise linear function. The purpose of this section is to prove that for fixed dimension n, the required width for exact, depth-minimal representation of a CPWL function can be polynomially bounded in the number p of affine pieces; in particular pO(n 2). This is closely related to works that bound the number of linear pieces of an NN as a function of the size [Montúfar et al., 2014, Pascanu et al., 2014, Raghu et al., 2017, Montúfar et al., 2021]. It can also be seen as a counterpart, in the context of exact representations, to quantitative universal approximation theorems that bound the number of neurons required to achieve a certain approximation guarantee; see, e.g., Barron [1993, 1994], Mhaskar [1993], Pinkus [1999], Mhaskar [1996], Mhaskar and Micchelli [1995]. 4.1 The Convex Case We first derive our result for the case of convex CPWL functions and then use this to also prove the general nonconvex case. Our width bound is a consequence of the following theorem about convex CPWL functions, for which we are going to provide a geometric proof later. Theorem 4.1. Let f(x) = max{aTi x + bi | i ∈ [p]} be a convex CPWL function defined on Rn. Then f can be written as f(x) = ∑ S⊆[p], |S|≤n+1 cS max{aTi x+ bi | i ∈ S} with coefficients cS ∈ Z, for S ⊆ [p], |S| ≤ n+ 1. For the convex case, this yields a stronger version of the theorem by Wang and Sun [2005] stating that any (not necessarily convex) CPWL function can be written as a linear combination of (n+ 1)-term maxima. Theorem 4.1 is stronger in the sense that it guarantees that all pieces of the (n+ 1)-term maxima must be pieces of the original function, making it possible to bound the total number of these (n+ 1)-term maxima and, therefore, the size of an NN representing f . Theorem 4.2. Let f : Rn → R be a convex CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth dlog2(n+ 1)e+ 1 and width O(pn+1). Proof. Since the number of possible subsets S ⊆ [p] with |S| ≤ n + 1 is bounded by pn+1, the theorem follows by Theorem 4.1 and the construction from Arora et al. [2018, Theorem 2.1]. Before we present the proof of Theorem 4.1, we show how we can generalize its consequences to the nonconvex case. 4.2 The General (Nonconvex) Case It is a well-known fact that every CPWL function can be expressed as a difference of two convex CPWL functions, see, e.g., Wang [2004, Theorem 1]. This allows us to derive the general case from the convex case. What we need, however, is to bound the number of affine pieces of the two convex CPWL functions in terms of the number of pieces of the original function. Therefore, we consider a specific decomposition for which such bounds can easily be achieved. Proposition 4.3. Let f : Rn → R be a CPWL function with p affine pieces. 
Then, f can be written as f = g − h where both g and h are convex CPWL functions with at most p^{2n+1} pieces. Proof. Suppose the p affine pieces of f are given by x ↦ a_i^T x + b_i, i ∈ [p]. Define the function h(x) := ∑_{1≤i<j≤p} max{a_i^T x + b_i, a_j^T x + b_j} and let g := f + h. Then, obviously, f = g − h. It remains to show that both g and h are convex CPWL functions with at most p^{2n+1} pieces. The convexity of h is clear by definition. Consider the (p choose 2) = p(p−1)/2 < p^2 hyperplanes given by a_i^T x + b_i = a_j^T x + b_j, 1 ≤ i < j ≤ p. They divide Rn into at most (p^2 choose n) + (p^2 choose n−1) + · · · + (p^2 choose 0) ≤ p^{2n} regions (compare Edelsbrunner [1987, Theorem 1.3]), in each of which h is affine. In particular, h has at most p^{2n} ≤ p^{2n+1} pieces. Next, we show that g = f + h is convex. Intuitively, this holds because each possible breaking hyperplane of f is made convex by adding h. To make this formal, note that by the definition of convexity, it suffices to show that g is convex along each affine line. For this purpose, consider an arbitrary line x(t) = ta + b, t ∈ R, given by a ∈ Rn and b ∈ Rn. Let f̃(t) := f(x(t)), g̃(t) := g(x(t)), and h̃(t) := h(x(t)). We need to show that g̃ : R → R is a convex function. Observe that f̃, g̃, and h̃ are clearly one-dimensional CPWL functions with the property g̃ = f̃ + h̃. Hence, it suffices to show that g̃ is convex locally around each of its breakpoints. Let t ∈ R be an arbitrary breakpoint of g̃. If f̃ is already convex locally around t, then the same holds for g̃ as well since h̃ inherits convexity from h. Now suppose that t is a nonconvex breakpoint of f̃. Then there exist two distinct pieces of f, indexed by i, j ∈ [p] with i ≠ j, such that f̃(t′) = min{a_i^T x(t′) + b_i, a_j^T x(t′) + b_j} for all t′ sufficiently close to t. By construction, h̃(t′) contains the summand max{a_i^T x(t′) + b_i, a_j^T x(t′) + b_j}. Thus, adding this summand to f̃ linearizes the nonconvex breakpoint of f̃, while adding all the other summands preserves convexity. In total, g̃ is convex locally around t, which finishes the proof that g is a convex function. Finally, observe that pieces of g = f + h are always intersections of pieces of f and h, for which we have only p · p^{2n} = p^{2n+1} possibilities. Having this, we may conclude the following. Theorem 4.4. Let f : Rn → R be a CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth ⌈log2(n + 1)⌉ + 1 and width O(p^{2n^2+3n+1}). Proof. Consider the decomposition f = g − h from Proposition 4.3. Using Theorem 4.2, we obtain that both g and h can be represented with the required depth ⌈log2(n + 1)⌉ + 1 and with width O((p^{2n+1})^{n+1}) = O(p^{2n^2+3n+1}). Thus, the same holds for f. 4.3 Extended Newton Polyhedra of Convex CPWL Functions For our proof of Theorem 4.1, we use a correspondence of convex CPWL functions with certain polyhedra, which are known as (extended) Newton polyhedra in tropical geometry [Maclagan and Sturmfels, 2015]. These relations between tropical geometry and neural networks have previously been applied to investigate expressivity of NNs; compare our references in the introduction. In order to formalize this correspondence, let CCPWLn ⊆ CPWLn be the set of convex CPWL functions of type Rn → R. For f(x) = max{a_i^T x + b_i | i ∈ [p]} in CCPWLn, we define its so-called extended Newton polyhedron to be N(f) := conv({(a_i, b_i) ∈ Rn × R | i ∈ [p]}) + cone({−e_{n+1}}) ⊆ Rn+1. We denote the set of all possible extended Newton polyhedra in Rn+1 as Newtn.
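As a brief aside, the decomposition of Proposition 4.3 is easy to check numerically. The following is a minimal, illustrative Python sketch (random data; the function f below, a minimum of a few affine pieces, is an arbitrary choice of ours): adding the sum of pairwise maxima of the pieces convexifies f along every segment we test.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, p = 2, 4
pieces = [(rng.standard_normal(n), rng.standard_normal()) for _ in range(p)]  # affine pieces l_i

def f(x):   # a (generally nonconvex) CPWL function whose affine pieces are among the l_i
    return min(np.dot(a, x) + b for a, b in pieces)

def h(x):   # sum of pairwise maxima of the pieces, as in the proof of Proposition 4.3
    return sum(max(np.dot(ai, x) + bi, np.dot(aj, x) + bj)
               for (ai, bi), (aj, bj) in combinations(pieces, 2))

def g(x):   # g = f + h should be convex
    return f(x) + h(x)

# check convexity of g along many random segments
for _ in range(2000):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    lam = rng.uniform()
    assert g(lam * x + (1 - lam) * y) <= lam * g(x) + (1 - lam) * g(y) + 1e-9
print("g = f + h passed a numerical convexity check, as in Proposition 4.3")
```

We now return to the extended Newton polyhedra introduced above.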
That is, Newtn is the set of (unbounded) polyhedra in Rn+1 that emerge from a polytope by adding the negative of the (n+1)-st unit vector −e_{n+1} as an extreme ray. Hence, a set P ⊆ Rn+1 is an element of Newtn if and only if P can be written as P = conv({(a_i, b_i) ∈ Rn × R | i ∈ [p]}) + cone({−e_{n+1}}). Conversely, for a polyhedron P ∈ Newtn of this form, let F(P) ∈ CCPWLn be the function defined by F(P)(x) = max{a_i^T x + b_i | i ∈ [p]}. There is an intuitive way of thinking about the extended Newton polyhedron P of a convex CPWL function f: it consists of all hyperplane coefficients (a, b) ∈ Rn × R such that a^T x + b ≤ f(x) for all x ∈ Rn. This also explains why we add the extreme ray −e_{n+1}: decreasing b obviously maintains the property of a^T x + b being a lower bound on the function f. (In convex-geometric terms, N(f) is the negative of the epigraph of the convex conjugate of f.) We need the notion of the Minkowski sum of two polyhedra P and Q: it is given as the set P + Q = {p + q | p ∈ P, q ∈ Q}. In fact, there is a one-to-one correspondence between elements of CCPWLn and Newtn, which is nicely compatible with some (functional and polyhedral) operations. This correspondence has been studied before in tropical geometry [Maclagan and Sturmfels, 2015, Joswig, 2022], convex geometry [Hiriart-Urruty and Lemaréchal, 1993], as well as the neural network literature [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Alfarra et al., 2020, Montúfar et al., 2021]. We summarize the key findings of this correspondence relevant to our work in the following proposition: Proposition 4.5. Let n ∈ N and f1, f2 ∈ CCPWLn. Then it holds that (i) the functions N : CCPWLn → Newtn and F : Newtn → CCPWLn are well-defined, that is, their output is independent of the representation of the input by pieces or vertices, respectively, (ii) N and F are bijections and inverse to each other, (iii) N(max{f1, f2}) = conv(N(f1), N(f2)) := conv(N(f1) ∪ N(f2)), (iv) N(f1 + f2) = N(f1) + N(f2), where the + on the right-hand side is Minkowski addition. An algebraic way of phrasing this proposition is as follows: N and F are isomorphisms between the semirings (CCPWLn, max, +) and (Newtn, conv, +). 4.4 Outline of the Proof of Theorem 4.1 We prove Theorem 4.1 in full detail in Appendix D. The rough idea is as follows. Suppose we have a p-term max function f with p ≥ n + 2. By Proposition 4.5, f corresponds to a polyhedron P ∈ Newtn with at least n + 2 vertices. Applying a classical result from discrete geometry known as Radon's theorem allows us to carefully decompose P into a "signed" Minkowski sum of polyhedra in Newtn whose vertices are subsets of at most p − 1 out of the p vertices of P. (Some polyhedra may occur with "negative" coefficients in that sum, meaning that they are actually added to P instead of the other polyhedra; the corresponding CPWL functions will then have negative coefficients in the linear combination representing f.) Translating this back into the world of CPWL functions by Proposition 4.5 yields that f can be written as a linear combination of p′-term maxima with p′ < p, where each of them involves a subset of the p affine terms of f. We can then obtain Theorem 4.1 by iterating until every occurring maximum expression involves at most n + 1 terms. 4.5 Potential Approaches to Show Lower Bounds on the Width In light of the upper width bounds shown in this section, a natural question to ask is whether also meaningful lower bounds can be achieved.
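Before turning to that question, a toy experiment may help to see the kind of piece-counting bound invoked in the next paragraph (a minimal, illustrative Python sketch with generic random weights; the concrete numbers are ours): in one dimension, a one-hidden-layer ReLU NN with m hidden neurons has at most m + 1 affine pieces, because each neuron contributes at most one breakpoint.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 8                                               # number of hidden neurons
w, b, c = rng.standard_normal(m), rng.standard_normal(m), rng.standard_normal(m)

def f(xs):
    """One-hidden-layer ReLU NN on R: f(x) = sum_i c_i * max(0, w_i * x + b_i)."""
    return c @ np.maximum(np.outer(w, xs) + b[:, None], 0.0)

# Each hidden neuron contributes at most one breakpoint, at w_i * x + b_i = 0,
# so f has at most m + 1 affine pieces.
breakpoints = np.sort(np.unique(-b / w))
print("at most", len(breakpoints) + 1, "affine pieces; the generic bound is m + 1 =", m + 1)

# Sanity check: strictly between consecutive breakpoints, sampled points are collinear.
cuts = np.concatenate(([breakpoints[0] - 1.0], breakpoints, [breakpoints[-1] + 1.0]))
for lo, hi in zip(cuts[:-1], cuts[1:]):
    xs = np.linspace(lo, hi, 5)[1:-1]               # three points strictly inside the interval
    slopes = np.diff(f(xs)) / np.diff(xs)
    assert np.allclose(slopes, slopes[0])
```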
This would mean constructing a family of CPWL functions with p pieces defined on Rn (with different values of p and n), for which we can prove that a large width is required to represent these functions with NNs of depth dlog2(n+ 1)e+ 1. A trivial and not very satisfying answer follows, e.g., from Raghu et al. [2017] or Serra et al. [2018]: for fixed input dimension n, they show that a function computed by an NN with k hidden layers and width w has at most O(wkn) pieces. For our setting, this means that an NN with logarithmic depth needs a width of at least O(p1/(n logn)) to represent a function with p pieces. This is, of course, very far away from our upper bounds. Similar upper bounds on the number of pieces have been proven by many other authors and are often used to show depth-width tradeoffs [Montúfar et al., 2014, 2021, Pascanu et al., 2014, Telgarsky, 2016b, Arora et al., 2018]. However, there is a good reason why all these results only give rise to very trivial lower bounds for our setting: the focus is always on functions with very many pieces, which then, consequently, need many neurons to be represented (with small depth). However, since the lower bounds we strive for depend on the number of pieces, we would need to construct a family of functions with comparably few pieces that still need very many neurons to be represented. In general, it seems to be a tough task to argue why such functions should exist. A different approach could leverage methods from complexity theory, in particular from circuit complexity. Neural networks are basically arithmetic circuits with very special operations allowed. In fact, they can be seen as a tropical variant of arithmetic circuits. Showing circuit lower bounds is a notoriously difficult task in complexity theory, but maybe some conditional result (based on common conjectures similar to P 6= NP) could be established. We think that the question whether our bounds are tight, or whether at least some non-trivial lower bounds on the width for NNs with logarithmic depth can be shown, is an exciting question for further research. 5 Discussion of Future Research Directions The most obvious and, at the same time, most exciting open research question is to prove or disprove Conjecture 1.1, or equivalently Conjecture 1.2. The first step could be to prove Assumption 2.4. The assumption is intuitive because every breakpoint introduced at other places needs to be canceled out later. Therefore, it is natural to assume that these breakpoints do not have to be introduced in the first place. However, this intuition does not seem to be enough for a formal proof because it could occur that additional breakpoints in intermediate steps, which are canceled out later, also influence the behavior of the function at other places where we allow breakpoints in the end. Another step towards resolving our conjecture may be to find an alternative proof of Theorem 2.5 not using Assumption 2.4. This might also be beneficial for generalizing our techniques to more hidden layers, since, while theoretically possible, a direct generalization is infeasible due to computational limitations. In light of our results from Section 3, it would be desirable to provide a complete characterization of the functions contained in ReLU(k). Another potential research goal is improving our upper bounds on the width from Section 4 and/or proving matching lower bounds as discussed in Section 4.5. Some more interesting research directions are the following: 1. 
Establishing or strengthening our results for special classes of NNs like recurrent neural networks (RNNs) or convolutional neural networks (CNNs), 2. Using exact representation results to show more drastic depth-width tradeoffs compared to existing results in the literature, 3. Understanding how the class ReLU(k) changes when a polynomial upper bound is imposed on the width of the NN; see related work by Vardi et al. [2021]. Acknowledgments and Disclosure of Funding Christoph Hertrich gratefully acknowledges funding by DFG-GRK 2434 “Facets of Complexity”. Amitabh Basu gratefully acknowledges support from AFOSR Grant FA95502010341 and NSF Grant CCF2006587. Martin Skutella gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy — The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
1. What is the focus of the paper regarding ReLU networks?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the significance and applicability of the obtained results?
4. Are there any concerns or suggestions regarding the paper's content and its presentation?
Summary Of The Paper Review
Summary Of The Paper
This paper considers the problem of characterizing exact representations for ReLU networks of a given depth, but any width. It was previously shown that the set of functions represented by ReLU networks of depth logarithmic in the dimension is exactly the set of all continuous piecewise linear functions. This work provides some results which suggest that this result is tight, that is, that a depth logarithmic in the dimension is necessary to exactly represent all piecewise linear functions. Proposition 1.3 simplifies this conjecture by giving a simple equivalent condition in terms of representing max(0,x_1,\dots,x_n), and the paper proves the conjecture up to dimension 4 under Assumption 2.4, which is unproven. In Section 3 it is shown that the set of functions representable by a depth-k network is a strict superset of MAX(2^k), i.e., the set of all functions which can be written as a linear combination of maxima of 2^k affine functions. This provides further evidence in support of the conjecture. Section 4 then extends the results of Arora et al. 2018 to provide an upper bound on the width required for a ReLU network to exactly represent a piecewise linear function with p pieces. This bound is p^{O(n^2)}.
Review
Disclaimer: I am not an expert on tropical geometry and related topics. I would first note that this paper is about exact representations. The closure of ReLU_n(k) with respect to uniform convergence over compacts contains the space of all continuous functions by the universal approximation property. This work is about not taking the closure and considering the exact representations. I acknowledge that this is indeed a hard problem. The scope of the work seems to be very limited. It is not clear why having exact representations is ever useful in practical machine learning. The authors do mention the learning algorithm by Arora et al. 2018, which I believe they should expand on. The results seem a bit weak. The main result in Section 2 proves the conjecture only up to dimension 4, conditioned on an unproven assumption. The bounds in Section 4 are super-exponential in dimension, which seems very intractable from a computational perspective. It would be great to see some discussion about lower bounds for the width. Practically speaking, if indeed this is tight, what is the use of exact representations at all? The paper is very well written but I am skeptical about the relevance of this work in a venue like NeurIPS.
NIPS
Title Towards Lower Bounds on the Depth of ReLU Neural Networks Abstract We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes. 1 Introduction A core problem in machine learning or statistical pattern recognition is the estimation of an unknown data distribution with access to i.i.d. samples from the distribution. It is well-known that there is a tension between how much prior information one has about the data distribution and how many samples one needs to solve the problem with high confidence (or equivalently, how much variance one has in one’s estimate). This is referred to as the bias-variance trade-off or the bias-complexity trade-off. Neural networks provide a way to turn this bias/complexity knob in a controlled manner that has been studied for decades going back to the idea of a perceptron by Rosenblatt [1958]. This is done by modifying the architecture of a neural network class of functions, which has two parameters: depth and size. As one increases these parameters, the class of functions becomes more expressive. In terms of the bias-variance trade-off, the “bias” decreases as the class of functions becomes more expressive, but the “variance” or “complexity” increases. So-called universal approximation theorems [Cybenko, 1989, Hornik, 1991, Anthony and Bartlett, 1999] show that even with a single hidden layer, i.e., when the depth of the architecture is the smallest possible value, one can essentially reduce the “bias” as much as one desires, by increasing the size of the network or the number of neurons used in the neural network. Nevertheless, it can be advantageous both theoretically and empirically to increase the depth because a substantial reduction in the size can be achieved; Telgarsky [2016a], Eldan and Shamir [2016], Arora et al. [2018] is a small sample of recent work in this direction. To get a better quantitative handle on these trade-offs, 35th Conference on Neural Information Processing Systems (NeurIPS 2021). it is important to understand what classes of functions are exactly representable by neural networks with a certain architecture. The precise mathematical statements of universal approximation theorems show that single layer networks can approximate arbitrarily well any continuous function (under some additional mild hypotheses). While this suggests that single layer networks are good enough from a learning perspective, from a mathematical perspective, one can ask the question if the class of functions represented by a single layer is a strict subset of the class of function represented by two or more hidden layers. 
On the question of size, one can ask for precise bounds on the size of the network of a given depth to represent a certain class of functions. We believe that a better understanding of the function classes exactly represented by different architectures will have implications not just for mathematical foundations, but also algorithmic and statistical learning aspects of neural networks. The task of searching for the “best” function in that class can only benefit from a better understanding of the nature of functions in that class. A motivating question behind the results in this paper is to understand the hierarchy of function classes exactly represented by neural networks of increasing depth. We now introduce more precise notation and terminology to set the stage for our investigations. Notation. We write [n] := {1, 2, . . . , n} for the set of natural numbers up to n (without zero) and [n]0 := [n] ∪ {0} for the same set including zero. For any n ∈ N, let σ : Rn → Rn be the component-wise rectifier function σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any number of hidden layers k ∈ N, a (k + 1)-layer feedforward neural network with rectified linear units (ReLU NN or simply NN) is given by k affine transformations T (`) : Rn`−1 → Rn` , x 7→ A(`)x+ b(`), for ` ∈ [k], and a linear transformation T (k+1) : Rnk → Rnk+1 , x 7→ A(k+1)x. It is said to compute or represent the function f : Rn0 → Rnk+1 given by f = T (k+1) ◦ σ ◦ T (k) ◦ σ ◦ · · · ◦ T (2) ◦ σ ◦ T (1). The matrices A(`) ∈ Rn`×n`−1 are called the weights and the vectors b(`) ∈ Rn` are the biases of the `-th layer. The number n` ∈ N is called the width of the `-th layer. The maximum width of all hidden layers max`∈[k] n` is called the width of the NN. Further, we say that the NN has depth k + 1 and size ∑k `=1 n`. Often, NNs are represented as layered, directed, acyclic graphs where each dimension of each layer (including input layer ` = 0 and output layer ` = k + 1) is one vertex, weights are arc labels, and biases are node labels. Then, the vertices are called neurons. For a given input x = x(0) ∈ Rn0 , let y(`) := T (`)(x(`−1)) ∈ Rn` be the activation vector and x(`) := σ(y`) ∈ Rn` the output vector of the `-th layer. Further, let y := y(k+1) = f(x) be the output of the NN. We also say that the i-th component of each of these vectors is the activation or the output of the i-th neuron in the `-th layer. For k ∈ N, we define ReLUn(k) := {f : Rn → R | f can be represented by a (k + 1)-layer NN}, CPWLn := {f : Rn → R | f is continuous and piecewise linear}. By definition, a continuous function f : Rn → R is piecewise linear in case there is a finite set of polyhedra whose union is Rn, and f is affine linear over each such polyhedron. In order to analyze ReLUn(k), we use another function class defined as follows. We call a function g a p-term max function if it can be expressed as maximum of p affine terms, that is, g(x) = max{`1(x), . . . , `p(x)} where `i : Rn → R is affinely linear for i ∈ [p]. Based on that, we define MAXn(p) := {f : Rn → R | f is a linear combination of p-term max functions}. If the input dimension n is not important for the context, we sometimes drop the index and use ReLU(k) := ⋃ n∈N ReLUn(k) and MAX(p) := ⋃ n∈N MAXn(p) instead. Since we deal with polyhedra a lot in this paper, we will use the standard notations convA and coneA for the convex and conic hulls of a set A ⊆ Rn. For an in-depth treatment of polyhedra and (mixed-integer) optimization, we refer to Schrijver [1986]. 
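To make the notation above concrete, the following small NumPy sketch (an illustrative addition, not part of the paper; all names are ad hoc) writes down a bias-free NN with one hidden layer of width 3 that represents the 2-term max function max{x1, x2} exactly, using the elementary identity max{x1, x2} = max{0, x1 − x2} + max{0, x2} − max{0, −x2}. In the paper's terms, this exhibits a member of MAX_2(2) inside ReLU_2(1).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# One hidden layer (depth 2, width 3, no biases) computing f(x) = max{x1, x2}
# via max{x1, x2} = max(0, x1 - x2) + max(0, x2) - max(0, -x2).
A1 = np.array([[1.0, -1.0], [0.0, 1.0], [0.0, -1.0]])   # weights of the hidden layer
A2 = np.array([[1.0, 1.0, -1.0]])                       # linear output layer

def net(x):
    return (A2 @ relu(A1 @ x)).item()

rng = np.random.default_rng(1)
for x in rng.normal(size=(1000, 2)):
    assert np.isclose(net(x), max(x[0], x[1]))
print("depth-2 ReLU network represents the 2-term max exactly")
```

The same three-neuron gadget can be reused whenever the maximum of two already-computed quantities is needed in a deeper network.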
1.1 Our Contribution It is not hard to see that any function expressed by a ReLU network is a continuous and piecewise linear (CPWL) function, because one is composing continuous piecewise linear functions together. Based on a result by Wang and Sun [2005], Arora et al. [2018] show that every CPWL function defined on Rn can be represented by a ReLU neural network with dlog2(n+ 1)e hidden layers. We wish to understand whether one can do better. We believe it is not possible to do better and we pose the following conjecture to better understand the importance of depth in neural networks. Conjecture 1.1. For any n ∈ N, let k∗ := dlog2(n+ 1)e. Then it holds that ReLUn(0) ( ReLUn(1) ( · · · ( ReLUn(k∗ − 1) ( ReLUn(k∗) = CPWLn . (1) Conjecture 1.1 claims that any additional layer up to k∗ hidden layers strictly increases the set of representable functions. This would imply that the construction by Arora et al. [2018, Theorem 2.1] is actually depth-minimal. Observe that in order to prove Conjecture 1.1, it suffices to find a single function f ∈ ReLUn(k∗) \ ReLUn(k∗ − 1) with n = 2k ∗−1 for all k∗ ∈ N. This also implies all remaining strict inclusions ReLUn(i− 1) ( ReLUn(i) for i < k∗ since ReLUn(i− 1) = ReLUn(i) directly implies that ReLUn(i− 1) = ReLUn(i′) for all i′ ≥ i− 1. In fact, there is a canonical candidate for such a function, allowing us to reformulate the conjecture as follows. Conjecture 1.2. For any k ∈ N, n = 2k, the function fn(x) = max{0, x1, . . . , xn} cannot be represented with k hidden layers. Proposition 1.3. Conjecture 1.1 and Conjecture 1.2 are equivalent. Proof (Sketch). We argued above that Conjecture 1.2 implies Conjecture 1.1. For the other direction, one can argue that, if the specific (n+1)-term max function fn can be represented by k hidden layers, then every other (n+ 1)-term max function as well. The claim then follows via a result by [Wang and Sun, 2005] stating that any f ∈ CPWLn can be written as linear combination of (n+ 1)-term max functions. We provide a detailed argument in Appendix A. It is known that Conjecture 1.2 holds for k = 1 [Mukherjee and Basu, 2017]. However, the conjecture remains open for k ≥ 2. In this paper, we present the following results as partial progress towards resolving this conjecture. In Section 2, we resolve Conjecture 1.2 for k = 2, under a natural assumption on the breakpoints of the function represented by any intermediate neuron. We achieve this result by leveraging techniques from mixed-integer programming to analyze the set of functions computable by certain NNs. It is not hard to see that MAX(2k) ⊆ ReLU(k) for all k ∈ N [Arora et al., 2018], that is, any 2k-term max function (and linear combinations thereof) can be expressed with k hidden layers. One might ask whether the converse is true as well, that is, whether the classes MAX(2k) and ReLU(k) are actually equal. This would not only provide a neat characterization of ReLU(k), but also prove Conjecture 1.2 without any additional assumption since one can show that max{0, x1, . . . , x2k} is not contained in MAX(2k). In fact, this is true for k = 1, that is, a function is computable with one hidden layer if and only if it is a linear combination of 2-term max functions. However, in Section 3, we show that for k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In this section, the key technical ingredient is the theory of polyhedral complexes associated with CPWL functions. This way, we provide important insights concerning the richness of the class ReLU(k). 
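As a concrete check of the containment MAX(2^k) ⊆ ReLU(k) discussed above, here is a small sketch for k = 2 (an illustrative choice of weights, not the paper's construction; all names are ad hoc): the first hidden layer encodes the pairwise maxima max{x1, x2} and max{x3, x4}, and the second hidden layer takes their maximum.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hidden layer 1: 6 neurons encoding the two pairwise maxima via
#   max{x1, x2} = relu(x1 - x2) + relu(x2) - relu(-x2), and similarly for x3, x4.
A1 = np.array([
    [1, -1, 0,  0],
    [0,  1, 0,  0],
    [0, -1, 0,  0],
    [0,  0, 1, -1],
    [0,  0, 0,  1],
    [0,  0, 0, -1],
], dtype=float)
# m1 = max{x1, x2} and m2 = max{x3, x4} as linear functions of the layer-1 outputs:
M = np.array([[1, 1, -1, 0, 0,  0],
              [0, 0,  0, 1, 1, -1]], dtype=float)
# Hidden layer 2: 3 neurons encoding max{m1, m2} with the same gadget.
G = np.array([[1, -1], [0, 1], [0, -1]], dtype=float)
A2 = G @ M                                  # weights of the second hidden layer
A3 = np.array([[1, 1, -1]], dtype=float)    # linear output layer

def net(x):
    return (A3 @ relu(A2 @ relu(A1 @ x))).item()

rng = np.random.default_rng(2)
for x in rng.normal(size=(1000, 4)):
    assert np.isclose(net(x), x.max())
print("two hidden layers represent the 4-term max exactly (a member of MAX(4) in ReLU(2))")
```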
So far, we have focused on understanding the smallest depth needed to express CPWL functions using neural networks with ReLU activations. In Section 4, we complement these results by upper bounds on the sizes of the networks needed for expressing arbitrary CPWL functions. In particular, Theorem 4.4 shows that any continuous piecewise linear function with p linear/affine pieces on Rn can be expressed by a network with depth at most O(log n) and width at most pO(n2). We arrive at this result by introducing a novel application of recently established interactions between neural networks and tropical geometry. 1.2 Related Work Depth versus size. Soon after the original universal approximation theorems [Cybenko, 1989, Hornik, 1991], concrete bounds were obtained on the number of neurons needed in the hidden layer to achieve a certain level of accuracy. The literature on this is vast and we refer to a small representative sample here [Barron, 1993, 1994, Mhaskar, 1993, Pinkus, 1999, Mhaskar, 1996, Mhaskar and Micchelli, 1995]. More recently, work has focused on how deeper networks can have exponentially or super exponentially smaller size compared to shallower networks [Telgarsky, 2016a, Eldan and Shamir, 2016, Arora et al., 2018, Vardi et al., 2021]. See also Gribonval et al. [2021] for another perspective on the relationship between expressivity and architecture, and the references therein. We reiterate that the list of references above is far from complete. Mixed-integer optimization and machine learning. Over the past decade, a growing body of work has emerged that explores the interplay between mixed-integer optimization and machine learning. On the one hand, researchers have attempted to improve mixed-integer optimization algorithms by exploiting novel techniques from machine learning [Bonami et al., 2018, Gasse et al., 2019, He et al., 2014, Khalil et al., 2016, 2017, Kruber et al., 2017, Lodi and Zarpellon, 2017, Alvarez et al., 2017]; see also Bengio et al. [2020] for a recent survey. On the flip side, mixed-integer optimization techniques have been used to analyze function classes represented by neural networks [Serra et al., 2018, Anderson et al., 2020, Fischetti and Jo, 2017, Serra and Ramalingam, 2020, Serra et al., 2020]. In Section 2 below, we show another new use of mixed-integer optimization tools for understanding function classes represented by neural networks. Design of training algorithms. We believe that a better understanding of the function classes represented exactly by a neural architecture also has benefits in terms of understanding the complexity of the training problem. For instance, in a paper by Arora et al. [2018], an understanding of single layer ReLU networks enables the design of a globally optimal algorithm for solving the empirical risk minimization (ERM) problem, that runs in polynomial time in the number of data points in fixed dimension. See also Goel et al. [2017, 2018], Goel and Klivans [2019], Dey et al. [2020], Boob et al. [2020], Goel et al. [2021], Froese et al. [2021] for a similar lines of work. Neural Networks and Tropical Geometry. A recent stream of research involves the interplay between neural networks and tropical geometry. The piecewise linear functions computed by neural networks can be seen as (tropical quotients of) tropical polynomials. Linear regions of these functions correspond to vertices of so-called Newton polytopes associated with these tropical polynomials. 
Applications of this correspondence include bounding the number of linear regions of a neural network [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Montúfar et al., 2021] and understanding decision boundaries [Alfarra et al., 2020]. In Section 4 we present a novel application of tropical concepts to understand neural networks. We refer to Maragos et al. [2021] for a recent survey of connections between machine learning and tropical geometry, as well as to the textbooks by Maclagan and Sturmfels [2015] and Joswig [2022] for in-depth introductions to tropical geometry and tropical combinatorics. 2 Conditional Lower Bounds on Depth via Mixed-Integer Programming In this section, we provide a computer-aided proof that, under a natural, yet unproven assumption, the function f(x) := max{0, x1, x2, x3, x4} cannot be represented by a 3-layer NN. It is worth to note that, to the best of our knowledge, no CPWL function is known for which the non-existence of a 3-layer NN can be proven without additional assumption. For easier notation, we write x0 := 0. We first prove that we may restrict ourselves to NNs without biases. This holds true independent of our assumption, which we introduce afterwards. Definition 2.1. A function g : Rn → Rm is called positively homogeneous if g(λx) = λg(x) for all λ ≥ 0. Definition 2.2. For an NN given by affine transformations T (`)(x) = A(`)x + b(`), we define the corresponding homogenized NN to be the NN given by T̃ (`)(x) = A(`)x with all biases set to zero. Proposition 2.3. If an NN computes a positively homogeneous function, then the corresponding homogenized NN computes the same function. Proof. Let g : Rn0 → Rnk+1 be the function computed by the original NN and g̃ the one computed by the homogenized NN. Further, for any 0 ≤ ` ≤ k, let g(`) = T (`+1) ◦σ ◦T (`) ◦ · · ·◦T (2) ◦σ ◦T (1) be the function computed by the sub-NN consisting of the first (` + 1)-layers and let g̃(`) be the function computed by the corresponding homogenized sub-NN. We first show by induction on ` that the norm of ‖g(`)(x)− g̃(`)(x)‖ is bounded by a global constant that only depends on the parameters of the NN but not on x. For ` = 0, we obviously have ‖g(0)(x) − g̃(0)(x)‖ = ‖b(1)‖ =: C0, settling the induction base. For the induction step, let ` ≥ 1 and assume that ‖g(`−1)(x) − g̃(`−1)(x)‖ ≤ C`−1, where C`−1 only depends on the parameters of the NN. Since component-wise ReLU activation has Lipschitz constant 1, this implies ‖(σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x)‖ ≤ C`−1. Using any matrix norm that is compatible with the Euclidean vector norm, we obtain: ‖g(`)(x)− g̃(`)(x)‖ = ‖b(`+1) +A(`+1)((σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x))‖ ≤ ‖b(`+1)‖+ ‖A(`+1)‖ · C`−1 =: C` Since the right-hand side only depends on NN parameters, the induction is completed. Finally, we show that g = g̃. For the sake of contradiction, suppose that there is an x ∈ Rn0 with ‖g(x) − g̃(x)‖ = δ > 0. Let x′ := Ck+1δ x; then, by positive homogeneity, it follows that ‖g(x′)− g̃(x′)‖ = Ck + 1 > Ck, contradicting the property shown above. Thus, we have g = g̃. Since f = max{0, x1, x2, x3, x4} is positively homogeneous, Proposition 2.3 implies that, if there is a 3-layer NN computing f , then there also is one that has no biases. Therefore, in the remainder of this section, we only consider NNs without biases and assume implicitly that all considered CPWL functions are positively homogeneous. In particular, any piece of such a CPWL function is linear and not only affine linear. 
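The crucial step in the proof of Proposition 2.3 is that the gap between the original network and its homogenized version is bounded by a constant that does not depend on the input. The following sketch (a numerical illustration on a randomly generated network; all names are ad hoc, and the specific architecture is an arbitrary choice) shows this behavior: scaling the input up makes the outputs grow, while the gap stays of constant order.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(3)
dims = [4, 8, 8, 1]                 # input, two hidden layers, output
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
bs = [rng.normal(size=dims[i + 1]) for i in range(len(dims) - 1)]

def forward(x, use_bias):
    # The homogenized network uses the same weights with all biases set to zero.
    for i, (W, b) in enumerate(zip(Ws, bs)):
        x = W @ x + (b if use_bias else 0.0)
        if i < len(Ws) - 1:         # ReLU only on the hidden layers
            x = relu(x)
    return x

for scale in [1, 10, 100, 1000]:
    x = scale * rng.normal(size=4)
    g = forward(x, True).item()
    g_hom = forward(x, False).item()
    print(f"scale {scale:5d}: |g(x)| = {abs(g):12.1f}, |g(x) - g_hom(x)| = {abs(g - g_hom):8.3f}")
```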
Observe that, for function f , the only points of non-differentiability (a.k.a. breakpoints) are at places where at least two of the five numbers x0 = 0, x1, x2, x3, and x4 are equal. Hence, if some neuron of an NN computing f introduces breakpoints at other places, these breakpoints must be canceled out by other neurons. Therefore, it is a natural assumption that such breakpoints are not introduced at all in the first place. To make this assumption formal, let Hij = {x ∈ R4 | xi = xj}, for 0 ≤ i < j ≤ 4, be ten hyperplanes in R4 and H = ⋃ 0≤i<j≤4Hij be the corresponding hyperplane arrangement. The regions or cells of H are defined to be the closures of the connected components of R4 \H . It is easy to see that these regions are in one-to-one correspondence to the 5! = 120 possible orderings of the five numbers x0 = 0, x1, x2, x3, and x4. More precisely, for a permutation π of the five indices [4]0 = {0, 1, 2, 3, 4}, the corresponding region is the polyhedron Cπ := {x ∈ R4 | xπ(0) ≤ xπ(1) ≤ xπ(2) ≤ xπ(3) ≤ xπ(4)}. We say that a (positively homogeneous) CPWL function g is H-conforming, if it is linear within any of these regions of H , that is, if it only has breakpoints where the relative ordering of the five values x0 = 0, x1, x2, x3, x4 changes. Moreover, an NN is said to be H-conforming if the output of each neuron contained in the NN is H-conforming. Equivalently, this is the case if and only if all intermediate functions σ ◦ T (`) ◦ σ ◦ T (`−1) ◦ · · · ◦ σ ◦ T (1), ` ∈ [k], are H-conforming. Now our assumption can be formally phrased as follows. Assumption 2.4. If there exists a 3-layer NN computing f(x) = max{0, x1, x2, x3, x4}, then there also exists one that is H-conforming. We use mixed-integer programming to prove the following theorem. Theorem 2.5. Under Assumption 2.4, there does not exist a 3-layer NN that computes the function f(x) = max{0, x1, x2, x3, x4}. Proof (Outline). We first study some geometric properties of the hyperplane arrangement H . This will show that each of the 120 cells of H is a simplicial polyhedral cone spanned by 4 extreme rays. In total, there are 30 such rays (because rays are used multiple times to span different cones). This implies that the set of H-conforming functions of type R4 → R is a 30-dimensional vector space and each function is uniquely determined by its values on the 30 rays. We then use linear algebra to show that the space of functions generated by H-conforming two-layer NNs is a 14-dimensional subspace. Moreover, with two hidden layers, at least 29 of the 30 dimensions can be generated and f is not contained in this 29-dimensional subspace. So, it remains the question whether the 14 dimensions producible with the first hidden layer can be combined in such a way that after applying a ReLU activation in the second hidden layer, we do not end up within the 29-dimensional subspace. We model this question as a mixed-interger program (MIP). Solving the MIP yields that we always end up within the 29-dimensional subspace, implying that f cannot be represented by a 3-layer NN. This provides a computational proof of Theorem 2.5. All details can be found in Appendix B. 3 Going Beyond 2k-Term Max Functions with k Layers In this section we prove the following result: Theorem 3.1. For any k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In order to prove this theorem, we provide a specific function that is in ReLU(k) \MAX(2k) for any number of hidden layers k ≥ 2. 
The challenging part is to show that the function is in fact not contained in MAX(2k). Proposition 3.2. For any n ≥ 3, the function f : Rn → R, f(x) = max{0, x1, x2, . . . , xn−3, max{xn−2, xn−1}+ max{0, xn}} (2) cannot be written as a linear combination of n-term max functions. The above proposition means that it is not possible to write f(x) in the form f(x) = p∑ i=1 λi max{`i1(x), . . . , `in(x)} where p ∈ N, λ1, . . . , λp ∈ R, and `ij : Rn → R is an affine linear function for every i ∈ [p] and j ∈ [n]. (Note that max functions with less than n terms are also allowed, as some functions `ij may coincide.) Before we prove Proposition 3.2, we show that it implies Theorem 3.1. Proof of Theorem 3.1. For k ≥ 2, let n := 2k. By Proposition 3.2, function f defined in (2) is not contained in MAX(2k). It remains to show that it can be represented using a ReLU NN with k hidden layers. To see this, first observe that any of the n/2 = 2k−1 terms max{0, x1}, max{x2i, x2i+1} for i ∈ [n/2 − 2], and max{xn−2, xn−1} + max{0, xn} can be expressed by a one-hidden-layer NN since all these are (linear combinations of) 2-term max functions. Since f is the maximum of these 2k−1 terms, and since the maximum of 2k−1 numbers can be computed with k − 1 hidden layers [Arora et al., 2018], this implies that f is in ReLU(k). In order to prove Proposition 3.2, we need the concept of polyhedral complexes. A polyhedral complex P is a finite set of polyhedra such that each face of a polyhedron in P is also in P , and for two polyhedra P,Q ∈ P , their intersection P ∩Q is a common face of P and Q (possibly the empty face). Given a polyhedral complex P in Rn and an integer m ∈ [n], we let Pm denote the collection of all m-dimensional polyhedra in P . For a convex CPWL function f , we define its underlying polyhedral complex as follows: it is the unique polyhedral complex covering Rn (i.e., each point in Rn belongs to some polyhedron in P) whose n-dimensional polyhedra coincide with the domains of the (maximal) affine pieces of f . In particular, f is affinely linear within each P ∈ P , but not within any strict superset of a polyhedron in Pn. Exploiting properties of polyhedral complexes associated with CPWL functions, we prove the following proposition in Appendix C. Proposition 3.3. Let f0 : Rn → R be a convex CPWL function and let P0 be the underlying polyhedral complex. If there exists a hyperplane H ⊆ Rn such that the set T := ⋃{ F ∈ Pn−10 ∣∣ F ⊆ H} is nonempty and contains no line, then f0 cannot be expressed as a linear combination of n-term maxima of affine linear functions. This allows us to prove Proposition 3.2. Proof of Proposition 3.2. Observe that f (defined in (2)) has the alternate representation f(x) = max{0, x1, x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn} as a maximum of n+ 2 terms. Let P be its underlying polyhedral complex. Let the hyperplane H be defined by x1 = 0. Observe that any facet in Pn−1 is a polyhedron defined by two of the n + 2 terms that are equal and at least as large as each of the remaining n terms. Hence, the only facet that could possibly be contained in H is F := {x ∈ Rn | x1 = 0 ≥ x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn}. Note that F is indeed an (n− 1)-dimensional facet in Pn−1, because, for example, the full neighborhood of (0,−1, . . . ,−1) ∈ Rn intersected with H is contained in F . Finally, we need to show that F is pointed, that is, it contains no line. 
A well-known fact from polyhedral theory says if there is any line in F with direction d ∈ Rn \ {0}, then d must satisfy the defining inequalities with equality. However, only the zero vector does this. Hence, F cannot contain a line. Therefore, when applying Proposition 3.3 to f with underlying polyhedral complex P and hyperplane H , we have T = F , which is nonempty and contains no line. Hence, f cannot be written as linear combination of n-term maxima. 4 A Width Bound for NNs with Small Depth While the arguments in Arora et al. [2018] show that CPWLn = ReLUn(dlog2(n + 1)e), they do not provide any bound on the width of the NN required to represent any particular continuous piecewise linear function. The purpose of this section is to prove that for fixed dimension n, the required width for exact, depth-minimal representation of a CPWL function can be polynomially bounded in the number p of affine pieces; in particular pO(n 2). This is closely related to works that bound the number of linear pieces of an NN as a function of the size [Montúfar et al., 2014, Pascanu et al., 2014, Raghu et al., 2017, Montúfar et al., 2021]. It can also be seen as a counterpart, in the context of exact representations, to quantitative universal approximation theorems that bound the number of neurons required to achieve a certain approximation guarantee; see, e.g., Barron [1993, 1994], Mhaskar [1993], Pinkus [1999], Mhaskar [1996], Mhaskar and Micchelli [1995]. 4.1 The Convex Case We first derive our result for the case of convex CPWL functions and then use this to also prove the general nonconvex case. Our width bound is a consequence of the following theorem about convex CPWL functions, for which we are going to provide a geometric proof later. Theorem 4.1. Let f(x) = max{aTi x + bi | i ∈ [p]} be a convex CPWL function defined on Rn. Then f can be written as f(x) = ∑ S⊆[p], |S|≤n+1 cS max{aTi x+ bi | i ∈ S} with coefficients cS ∈ Z, for S ⊆ [p], |S| ≤ n+ 1. For the convex case, this yields a stronger version of the theorem by Wang and Sun [2005] stating that any (not necessarily convex) CPWL function can be written as a linear combination of (n+ 1)-term maxima. Theorem 4.1 is stronger in the sense that it guarantees that all pieces of the (n+ 1)-term maxima must be pieces of the original function, making it possible to bound the total number of these (n+ 1)-term maxima and, therefore, the size of an NN representing f . Theorem 4.2. Let f : Rn → R be a convex CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth dlog2(n+ 1)e+ 1 and width O(pn+1). Proof. Since the number of possible subsets S ⊆ [p] with |S| ≤ n + 1 is bounded by pn+1, the theorem follows by Theorem 4.1 and the construction from Arora et al. [2018, Theorem 2.1]. Before we present the proof of Theorem 4.1, we show how we can generalize its consequences to the nonconvex case. 4.2 The General (Nonconvex) Case It is a well-known fact that every CPWL function can be expressed as a difference of two convex CPWL functions, see, e.g., Wang [2004, Theorem 1]. This allows us to derive the general case from the convex case. What we need, however, is to bound the number of affine pieces of the two convex CPWL functions in terms of the number of pieces of the original function. Therefore, we consider a specific decomposition for which such bounds can easily be achieved. Proposition 4.3. Let f : Rn → R be a CPWL function with p affine pieces. 
Then, f can be written as f = g − h where both g and h are convex CPWL functions with at most p2n+1 pieces. Proof. Suppose the p affine pieces of f are given by x 7→ aTi x + bi, i ∈ [p]. Define the function h(x) := ∑ 1≤i<j≤p max{aTi x+ bi, aTj x+ bj} and let g := f + h. Then, obviously, f = g − h. It remains to show that both g and h are convex CPWL functions with at most p2n+1 pieces. The convexity of h is clear by definition. Consider the ( p 2 ) = p(p−1)2 < p 2 hyperplanes given by aTi x+ bi = a T j x+ bj , 1 ≤ i < j ≤ p. They divide Rn into at most ( p2 n ) + ( p2 n−1 ) + · · ·+ ( p2 0 ) ≤ p2n regions (compare Edelsbrunner [1987, Theorem 1.3]) in each of which h is affine. In particular, h has at most p2n ≤ p2n+1 pieces. Next, we show that g = f + h is convex. Intuitively, this holds because each possible breaking hyperplane of f is made convex by adding h. To make this formal, note that by the definition of convexity, it suffices to show that g is convex along each affine line. For this purpose, consider an arbitrary line x(t) = ta+b, t ∈ R, given by a ∈ Rn and b ∈ R. Let f̃(t) := f(x(t)), g̃(t) := g(x(t)), and h̃(t) := h(x(t)). We need to show that g̃ : R→ R is a convex function. Observe that f̃ , g̃, and h̃ are clearly one-dimensional CPWL functions with the property g̃ = f̃ + h̃. Hence, it suffices to show that g̃ is convex locally around each of its breakpoints. Let t ∈ R be an arbitrary breakpoint of g̃. If f̃ is already convex locally around t, then the same holds for g̃ as well since h̃ inherits convexity from h. Now suppose that t is a nonconvex breakpoint of f̃ . Then there exist two distinct pieces of f , indexed by i, j ∈ [p] with i 6= j, such that f̃(t′) = min{aTi x(t′) + bi, aTj x(t′) + bj} for all t′ sufficiently close to t. By construction, h̃(t′) contains the summand max{aTi x(t′)+bi, aTj x(t′)+bj}. Thus, adding this summand to f̃ linearizes the nonconvex breakpoint of f̃ , while adding all the other summands preserves convexity. In total, g̃ is convex locally around t, which finishes the proof that g is a convex function. Finally, observe that pieces of g = f + h are always intersections of pieces of f and h, for which we have only p · p2n = p2n+1 possibilities. Having this, we may conclude the following. Theorem 4.4. Let f : Rn → R be a CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth dlog2(n+ 1)e+ 1 and width O(p2n 2+3n+1). Proof. Consider the decomposition f = g − h from Proposition 4.3. Using Theorem 4.2, we obtain that both g and h can be represented with the required depth dlog2(n + 1)e + 1 and with width O((p2n+1)n+1) = O(p2n2+3n+1). Thus, the same holds for f . 4.3 Extended Newton Polyhedra of Convex CPWL Functions For our proof of Theorem 4.1, we use a correspondence of convex CPWL functions with certain polyhedra, which are known as (extended) Newton polyhedra in tropical geometry [Maclagan and Sturmfels, 2015]. These relations between tropical geometry and neural networks have previously been applied to investigate expressivity of NNs; compare our references in the introduction. In order to formalize this correspondence, let CCPWLn ⊆ CPWLn be the set of convex CPWL functions of type Rn → R. For f(x) = max{aTi x + bi | i ∈ [p]} in CCPWLn, we define its so-called extended Newton polyhedron to be N (f) := conv({(ai, bi) ∈ Rn × R | i ∈ [p]}) + cone({−en+1}) ⊆ Rn+1. We denote the set of all possible extended Newton polyhedra in Rn+1 as Newtn. 
That is, Newtn is the set of (unbounded) polyhedra in Rn+1 that emerge from a polytope by adding the negative of the (n+ 1)-st unit vector −en+1 as an extreme ray. Hence, a set P ⊆ Rn+1 is an element of Newtn if and only if P can be written as P = conv({(ai, bi) ∈ Rn × R | i ∈ [p]}) + cone({−en+1}). Conversely, for a polyhedron P ∈ Newtn of this form, let F(P ) ∈ CCPWLn be the function defined by F(P )(x) = max{aTi x+ bi | i ∈ [p]}. There is an intuitive way of thinking about the extended Newton polyhedron P of a convex CPWL function f : it consists of all hyperplane coefficients (a, b) ∈ Rn × R such that aTx+ b ≤ f(x) for all x ∈ Rn. This also explains why we add the extreme ray −en+1: decreasing b obviously maintains the property of aTx+ b being a lower bound on the function f . We need the notion of the Minkowski sum of two polyhedra P and Q: it is given as the set P +Q = {p+ q | p ∈ P, q ∈ Q}. In fact, there is a one-to-one correspondence between elements of CCPWLn and Newtn, which is nicely compatible with some (functional and polyhedral) operations. This correspondence has been studied before in tropical geometry [Maclagan and Sturmfels, 2015, Joswig, 2022], convex geometry1 [Hiriart-Urruty and Lemaréchal, 1993], as well as neural network literature [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Alfarra et al., 2020, Montúfar et al., 2021]. We summarize the key findings of this correspondence relevant to our work in the following proposition: Proposition 4.5. Let n ∈ N and f1, f2 ∈ CCPWLn. Then it holds that (i) the functions N : CCPWLn → Newtn and F : Newtn → CCPWLn are well-defined, that is, their output is independent from the representation of the input by pieces or vertices, respectively, (ii) N and F are bijections and inverse to each other, (iii) N (max{f1, f2}) = conv(N (f1),N (f2)) := conv(N (f1) ∪N (f2)), (iv) N (f1 + f2) = N (f1) +N (f2), where the + on the right-hand side is Minkowski addition. An algebraic way of phrasing this proposition is as follows: N and F are isomorphisms between the semirings (CCPWLn,max,+) and (Newtn, conv,+). 4.4 Outline of the Proof of Theorem 4.1 We prove Theorem 4.1 in full detail in Appendix D. The rough idea is as follows. Suppose we have a p-term max function f with p ≥ n + 2. By Proposition 4.5, f corresponds to a polyhedron P ∈ Newtn with at least n + 2 vertices. Applying a classical result from discrete geometry known as Radon’s theorem allows us to carefully decompose P into a “signed”2 Minkowski sum of polyhedra in Newtn whose vertices are subsets of at most p− 1 out of the p vertices of P . Translating this back into the world of CPWL functions by Proposition 4.5 yields that f can be written as linear combination of p′-term maxima with p′ < p, where each of them involves a subset of 1N (f) is the negative of the epigraph of the convex conjugate of f . 2Some polyhedra may occur with “negative” coefficents in that sum, meaning that they are actually added to P instead of the other polyhedra. The corresponding CPWL functions will then have negative coefficients in the linear combination representing f . the p affine terms of f . We can then obtain Theorem 4.1 by iterating until every occurring maximum expression involves at most n+ 1 terms. 4.5 Potential Approaches to Show Lower Bounds on the Width In light of the upper width bounds shown in this section, a natural question to ask is whether also meaningful lower bounds can be achieved. 
This would mean constructing a family of CPWL functions with p pieces defined on Rn (with different values of p and n), for which we can prove that a large width is required to represent these functions with NNs of depth dlog2(n+ 1)e+ 1. A trivial and not very satisfying answer follows, e.g., from Raghu et al. [2017] or Serra et al. [2018]: for fixed input dimension n, they show that a function computed by an NN with k hidden layers and width w has at most O(wkn) pieces. For our setting, this means that an NN with logarithmic depth needs a width of at least O(p1/(n logn)) to represent a function with p pieces. This is, of course, very far away from our upper bounds. Similar upper bounds on the number of pieces have been proven by many other authors and are often used to show depth-width tradeoffs [Montúfar et al., 2014, 2021, Pascanu et al., 2014, Telgarsky, 2016b, Arora et al., 2018]. However, there is a good reason why all these results only give rise to very trivial lower bounds for our setting: the focus is always on functions with very many pieces, which then, consequently, need many neurons to be represented (with small depth). However, since the lower bounds we strive for depend on the number of pieces, we would need to construct a family of functions with comparably few pieces that still need very many neurons to be represented. In general, it seems to be a tough task to argue why such functions should exist. A different approach could leverage methods from complexity theory, in particular from circuit complexity. Neural networks are basically arithmetic circuits with very special operations allowed. In fact, they can be seen as a tropical variant of arithmetic circuits. Showing circuit lower bounds is a notoriously difficult task in complexity theory, but maybe some conditional result (based on common conjectures similar to P 6= NP) could be established. We think that the question whether our bounds are tight, or whether at least some non-trivial lower bounds on the width for NNs with logarithmic depth can be shown, is an exciting question for further research. 5 Discussion of Future Research Directions The most obvious and, at the same time, most exciting open research question is to prove or disprove Conjecture 1.1, or equivalently Conjecture 1.2. The first step could be to prove Assumption 2.4. The assumption is intuitive because every breakpoint introduced at other places needs to be canceled out later. Therefore, it is natural to assume that these breakpoints do not have to be introduced in the first place. However, this intuition does not seem to be enough for a formal proof because it could occur that additional breakpoints in intermediate steps, which are canceled out later, also influence the behavior of the function at other places where we allow breakpoints in the end. Another step towards resolving our conjecture may be to find an alternative proof of Theorem 2.5 not using Assumption 2.4. This might also be beneficial for generalizing our techniques to more hidden layers, since, while theoretically possible, a direct generalization is infeasible due to computational limitations. In light of our results from Section 3, it would be desirable to provide a complete characterization of the functions contained in ReLU(k). Another potential research goal is improving our upper bounds on the width from Section 4 and/or proving matching lower bounds as discussed in Section 4.5. Some more interesting research directions are the following: 1. 
Establishing or strengthening our results for special classes of NNs like recurrent neural networks (RNNs) or convolutional neural networks (CNNs), 2. Using exact representation results to show more drastic depth-width tradeoffs compared to existing results in the literature, 3. Understanding how the class ReLU(k) changes when a polynomial upper bound is imposed on the width of the NN; see related work by Vardi et al. [2021]. Acknowledgments and Disclosure of Funding Christoph Hertrich gratefully acknowledges funding by DFG-GRK 2434 “Facets of Complexity”. Amitabh Basu gratefully acknowledges support from AFOSR Grant FA95502010341 and NSF Grant CCF2006587. Martin Skutella gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy — The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
1. What is the main contribution of the paper regarding the description of ReLU network functions?
2. What are the strengths and weaknesses of the paper's approach to understanding the set of functions that can precisely be described by a network of certain depth?
3. How does the paper's candidate function help in reducing the conjecture on the lower limit in depth?
4. What are the limitations of the paper's results, particularly in terms of the unproven assumption and the incomplete nature of the presented bound?
5. How do the introduced H-conforming functions contribute to the study of ReLU networks?
6. Can you provide additional details or examples to help explain the proof ideas and techniques used in the paper?
Summary Of The Paper Review
Summary Of The Paper
It is known from the universal approximation theorem that ReLU networks with one hidden layer can approximate any continuous function on compact sets arbitrarily well. Instead of approximations, this paper investigates the set of functions that can precisely be described by a network of a certain depth. For a given input dimension D, a ReLU network with L = ceil[ log2(D+1) ] (or more) hidden layers (and arbitrary width) can describe the entire set of continuous piecewise linear functions. Here, new insight is provided for networks having between 1 and L hidden layers. In particular, under an unproven assumption, the paper shows that L is a strict lower limit for the depth and that adding layers to a network strictly increases the set of describable functions. This is achieved by studying a natural candidate function conjectured to require a larger number of hidden layers. The candidate function suggests a conjecture on a nice description of ReLU network functions of finite depth, which is shown to not hold true. Finally, the paper derives a bound on the width and depth of networks that can describe any piecewise linear function with a fixed number of linear pieces.
Review
The paper is very well-written and the arguments are carefully put together. All proofs are described in detail. This theoretical paper is mostly interesting from a mathematical viewpoint. With universal approximation at hand, one could argue that a better understanding of the precise description of the set of ReLU functions does not add much from a practical point of view. However, as the authors point out, a better understanding could also be useful for algorithmic advances. I found the story of the paper intriguing. The conjecture on the lower limit on the depth that is necessary to describe all continuous piecewise linear functions on a D-dimensional input space is reduced to proving that the candidate function max{0, x1, x2, ..., xD} cannot be described with fewer than ceil[ log2(D+1) ] hidden layers. This statement seems at first like a fairly easy statement to prove or disprove, but appears to be actually quite difficult. Starting from the candidate function, there is natural hope to conjecture that a ReLU network with L hidden layers can only describe linear combinations of maxima of 2^L terms and no more, but this is shown to be false, which adds to the story that the description of ReLU network functions of a certain depth is complicated. As a result, the paper does not present a complete story by fully characterizing the functions implemented by ReLU networks of fixed depth. The following describes the progress made in more detail and discusses its limitations: (i) The conjecture on the tightness of the known lower bound on the necessary depth to describe all piecewise linear functions on a D-dimensional input space holds true for networks with two hidden layers, but only under an unproven assumption. That is, the result is limited to three-layer networks and is even incomplete in that case. (ii) There are maxima of D+1 terms that can be described by a network of depth less than L = ceil[ log2(D+1) ]. This result is interesting, but it only disproves a conjecture that was first mentioned in this same paper and which did not previously appear important. (iii) A bound on the order of depth and width necessary to exactly describe any continuous piecewise linear function with p linear pieces, saying that the required width is polynomial in p with the exponent determined by the input dimension.
This is only a small improvement over previously known results. The main weakness of the paper is that it fails to describe the complications that need to be overcome to extend these limited results. For example, what are the obstacles to proving the necessary (unproven) assumption for the result in (i)? If it is a natural assumption that should be believed to (probably) hold true for ReLU networks, then why is it difficult to show that it does indeed hold? A discussion of the difficulty of proving this critical assumption would make it easier to appreciate the partial result. Similarly, what are the obstacles to extending the proof to networks with more than two hidden layers? A strength of the paper is that the proofs of the partial results use nontrivial methods and require a good understanding of the geometric setting of linear regions. The proof of statement (i) from above is based on mixed-integer programming (MIP), with a nontrivial setup of the MIP problem. This part of the paper also introduces so-called H-conforming functions, which form a natural subset of continuous piecewise linear functions for the study of ReLU networks, and this viewpoint could be interesting outside the study in question. The proofs of (ii) and (iii) use theoretical insight into continuous piecewise linear functions and their associated polyhedra of linear pieces. These proof ideas and techniques provide some insight on their own. Taken together, the paper presents an intriguing, theoretically interesting question, but only resolves it partially, without explaining well why the partial progress is substantial. Since the ideas involved in setting up the statements and proofs are quite interesting by themselves, I tend to support the acceptance of the paper. Two small suggestions for potential minor improvements in the presentation: Line 66: Better: "By definition, a continuous function is piecewise linear in case ...". Line 96: The sketch of the proof only states half the arguments necessary for the proof. It would just cost a single line or maybe two to state the other half: if the specific (n+1)-term maximum function can be written with k layers, then so can all (n+1)-term maximum functions.
NIPS
Title Towards Lower Bounds on the Depth of ReLU Neural Networks Abstract We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes. 1 Introduction A core problem in machine learning or statistical pattern recognition is the estimation of an unknown data distribution with access to i.i.d. samples from the distribution. It is well-known that there is a tension between how much prior information one has about the data distribution and how many samples one needs to solve the problem with high confidence (or equivalently, how much variance one has in one’s estimate). This is referred to as the bias-variance trade-off or the bias-complexity trade-off. Neural networks provide a way to turn this bias/complexity knob in a controlled manner that has been studied for decades going back to the idea of a perceptron by Rosenblatt [1958]. This is done by modifying the architecture of a neural network class of functions, which has two parameters: depth and size. As one increases these parameters, the class of functions becomes more expressive. In terms of the bias-variance trade-off, the “bias” decreases as the class of functions becomes more expressive, but the “variance” or “complexity” increases. So-called universal approximation theorems [Cybenko, 1989, Hornik, 1991, Anthony and Bartlett, 1999] show that even with a single hidden layer, i.e., when the depth of the architecture is the smallest possible value, one can essentially reduce the “bias” as much as one desires, by increasing the size of the network or the number of neurons used in the neural network. Nevertheless, it can be advantageous both theoretically and empirically to increase the depth because a substantial reduction in the size can be achieved; Telgarsky [2016a], Eldan and Shamir [2016], Arora et al. [2018] is a small sample of recent work in this direction. To get a better quantitative handle on these trade-offs, 35th Conference on Neural Information Processing Systems (NeurIPS 2021). it is important to understand what classes of functions are exactly representable by neural networks with a certain architecture. The precise mathematical statements of universal approximation theorems show that single layer networks can approximate arbitrarily well any continuous function (under some additional mild hypotheses). While this suggests that single layer networks are good enough from a learning perspective, from a mathematical perspective, one can ask the question if the class of functions represented by a single layer is a strict subset of the class of function represented by two or more hidden layers. 
On the question of size, one can ask for precise bounds on the size of the network of a given depth to represent a certain class of functions. We believe that a better understanding of the function classes exactly represented by different architectures will have implications not just for mathematical foundations, but also algorithmic and statistical learning aspects of neural networks. The task of searching for the “best” function in that class can only benefit from a better understanding of the nature of functions in that class. A motivating question behind the results in this paper is to understand the hierarchy of function classes exactly represented by neural networks of increasing depth. We now introduce more precise notation and terminology to set the stage for our investigations. Notation. We write [n] := {1, 2, . . . , n} for the set of natural numbers up to n (without zero) and [n]0 := [n] ∪ {0} for the same set including zero. For any n ∈ N, let σ : Rn → Rn be the component-wise rectifier function σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any number of hidden layers k ∈ N, a (k + 1)-layer feedforward neural network with rectified linear units (ReLU NN or simply NN) is given by k affine transformations T (`) : Rn`−1 → Rn` , x 7→ A(`)x+ b(`), for ` ∈ [k], and a linear transformation T (k+1) : Rnk → Rnk+1 , x 7→ A(k+1)x. It is said to compute or represent the function f : Rn0 → Rnk+1 given by f = T (k+1) ◦ σ ◦ T (k) ◦ σ ◦ · · · ◦ T (2) ◦ σ ◦ T (1). The matrices A(`) ∈ Rn`×n`−1 are called the weights and the vectors b(`) ∈ Rn` are the biases of the `-th layer. The number n` ∈ N is called the width of the `-th layer. The maximum width of all hidden layers max`∈[k] n` is called the width of the NN. Further, we say that the NN has depth k + 1 and size ∑k `=1 n`. Often, NNs are represented as layered, directed, acyclic graphs where each dimension of each layer (including input layer ` = 0 and output layer ` = k + 1) is one vertex, weights are arc labels, and biases are node labels. Then, the vertices are called neurons. For a given input x = x(0) ∈ Rn0 , let y(`) := T (`)(x(`−1)) ∈ Rn` be the activation vector and x(`) := σ(y`) ∈ Rn` the output vector of the `-th layer. Further, let y := y(k+1) = f(x) be the output of the NN. We also say that the i-th component of each of these vectors is the activation or the output of the i-th neuron in the `-th layer. For k ∈ N, we define ReLUn(k) := {f : Rn → R | f can be represented by a (k + 1)-layer NN}, CPWLn := {f : Rn → R | f is continuous and piecewise linear}. By definition, a continuous function f : Rn → R is piecewise linear in case there is a finite set of polyhedra whose union is Rn, and f is affine linear over each such polyhedron. In order to analyze ReLUn(k), we use another function class defined as follows. We call a function g a p-term max function if it can be expressed as maximum of p affine terms, that is, g(x) = max{`1(x), . . . , `p(x)} where `i : Rn → R is affinely linear for i ∈ [p]. Based on that, we define MAXn(p) := {f : Rn → R | f is a linear combination of p-term max functions}. If the input dimension n is not important for the context, we sometimes drop the index and use ReLU(k) := ⋃ n∈N ReLUn(k) and MAX(p) := ⋃ n∈N MAXn(p) instead. Since we deal with polyhedra a lot in this paper, we will use the standard notations convA and coneA for the convex and conic hulls of a set A ⊆ Rn. For an in-depth treatment of polyhedra and (mixed-integer) optimization, we refer to Schrijver [1986]. 
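As a small worked example for these definitions (an editorial illustration, not taken from the paper), the tent function below is a linear combination of three 2-term max functions of the form max{0, ℓ(x)}, so it lies in MAX_1(2); reading the three ReLU terms as the neurons of a single hidden layer also places it in ReLU_1(1).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# The tent function t(x) = max{0, x} - 2*max{0, x-1} + max{0, x-2}
# equals x on [0,1], 2-x on [1,2], and 0 elsewhere.  Each summand is a
# 2-term max, so t lies in MAX_1(2); the three summands read as three
# hidden ReLU neurons also exhibit t as a member of ReLU_1(1).
def tent(x):
    return relu(x) - 2.0 * relu(x - 1.0) + relu(x - 2.0)

xs = np.linspace(-1.0, 3.0, 401)
reference = np.maximum(0.0, 1.0 - np.abs(xs - 1.0))   # closed form of the tent
assert np.allclose(tent(xs), reference)
print("tent function written as a linear combination of 2-term maxima: OK")
```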
1.1 Our Contribution It is not hard to see that any function expressed by a ReLU network is a continuous and piecewise linear (CPWL) function, because one is composing continuous piecewise linear functions together. Based on a result by Wang and Sun [2005], Arora et al. [2018] show that every CPWL function defined on Rn can be represented by a ReLU neural network with dlog2(n+ 1)e hidden layers. We wish to understand whether one can do better. We believe it is not possible to do better and we pose the following conjecture to better understand the importance of depth in neural networks. Conjecture 1.1. For any n ∈ N, let k∗ := dlog2(n+ 1)e. Then it holds that ReLUn(0) ( ReLUn(1) ( · · · ( ReLUn(k∗ − 1) ( ReLUn(k∗) = CPWLn . (1) Conjecture 1.1 claims that any additional layer up to k∗ hidden layers strictly increases the set of representable functions. This would imply that the construction by Arora et al. [2018, Theorem 2.1] is actually depth-minimal. Observe that in order to prove Conjecture 1.1, it suffices to find a single function f ∈ ReLUn(k∗) \ ReLUn(k∗ − 1) with n = 2k ∗−1 for all k∗ ∈ N. This also implies all remaining strict inclusions ReLUn(i− 1) ( ReLUn(i) for i < k∗ since ReLUn(i− 1) = ReLUn(i) directly implies that ReLUn(i− 1) = ReLUn(i′) for all i′ ≥ i− 1. In fact, there is a canonical candidate for such a function, allowing us to reformulate the conjecture as follows. Conjecture 1.2. For any k ∈ N, n = 2k, the function fn(x) = max{0, x1, . . . , xn} cannot be represented with k hidden layers. Proposition 1.3. Conjecture 1.1 and Conjecture 1.2 are equivalent. Proof (Sketch). We argued above that Conjecture 1.2 implies Conjecture 1.1. For the other direction, one can argue that, if the specific (n+1)-term max function fn can be represented by k hidden layers, then every other (n+ 1)-term max function as well. The claim then follows via a result by [Wang and Sun, 2005] stating that any f ∈ CPWLn can be written as linear combination of (n+ 1)-term max functions. We provide a detailed argument in Appendix A. It is known that Conjecture 1.2 holds for k = 1 [Mukherjee and Basu, 2017]. However, the conjecture remains open for k ≥ 2. In this paper, we present the following results as partial progress towards resolving this conjecture. In Section 2, we resolve Conjecture 1.2 for k = 2, under a natural assumption on the breakpoints of the function represented by any intermediate neuron. We achieve this result by leveraging techniques from mixed-integer programming to analyze the set of functions computable by certain NNs. It is not hard to see that MAX(2k) ⊆ ReLU(k) for all k ∈ N [Arora et al., 2018], that is, any 2k-term max function (and linear combinations thereof) can be expressed with k hidden layers. One might ask whether the converse is true as well, that is, whether the classes MAX(2k) and ReLU(k) are actually equal. This would not only provide a neat characterization of ReLU(k), but also prove Conjecture 1.2 without any additional assumption since one can show that max{0, x1, . . . , x2k} is not contained in MAX(2k). In fact, this is true for k = 1, that is, a function is computable with one hidden layer if and only if it is a linear combination of 2-term max functions. However, in Section 3, we show that for k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In this section, the key technical ingredient is the theory of polyhedral complexes associated with CPWL functions. This way, we provide important insights concerning the richness of the class ReLU(k). 
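For intuition on where the bound k∗ = dlog2(n + 1)e in Conjecture 1.1 comes from, recall that the construction of Arora et al. [2018] computes fn by a balanced tournament of pairwise maxima, each round of which can be realized with one hidden layer using the 2-term max gadget. A small sketch (ours; it tracks values and round counts rather than explicit weight matrices):

```python
import math

def max_by_rounds(values):
    """Repeatedly take pairwise maxima; the number of rounds equals the number of
    hidden layers in the tournament construction of Arora et al. [2018]."""
    vals, rounds = list(values), 0
    while len(vals) > 1:
        # An unpaired value is carried forward (in an NN, e.g. via x = relu(x) - relu(-x)).
        vals = [max(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

# f_4(x) = max{0, x1, x2, x3, x4} has n + 1 = 5 terms, so ceil(log2(5)) = 3 rounds suffice.
value, rounds = max_by_rounds([0.0, 0.3, -1.2, 2.5, 0.7])
assert value == 2.5 and rounds == math.ceil(math.log2(5))
```

Conjecture 1.1 asserts that, conversely, no smaller number of hidden layers suffices.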
So far, we have focused on understanding the smallest depth needed to express CPWL functions using neural networks with ReLU activations. In Section 4, we complement these results by upper bounds on the sizes of the networks needed for expressing arbitrary CPWL functions. In particular, Theorem 4.4 shows that any continuous piecewise linear function with p linear/affine pieces on Rn can be expressed by a network with depth at most O(log n) and width at most pO(n2). We arrive at this result by introducing a novel application of recently established interactions between neural networks and tropical geometry. 1.2 Related Work Depth versus size. Soon after the original universal approximation theorems [Cybenko, 1989, Hornik, 1991], concrete bounds were obtained on the number of neurons needed in the hidden layer to achieve a certain level of accuracy. The literature on this is vast and we refer to a small representative sample here [Barron, 1993, 1994, Mhaskar, 1993, Pinkus, 1999, Mhaskar, 1996, Mhaskar and Micchelli, 1995]. More recently, work has focused on how deeper networks can have exponentially or super exponentially smaller size compared to shallower networks [Telgarsky, 2016a, Eldan and Shamir, 2016, Arora et al., 2018, Vardi et al., 2021]. See also Gribonval et al. [2021] for another perspective on the relationship between expressivity and architecture, and the references therein. We reiterate that the list of references above is far from complete. Mixed-integer optimization and machine learning. Over the past decade, a growing body of work has emerged that explores the interplay between mixed-integer optimization and machine learning. On the one hand, researchers have attempted to improve mixed-integer optimization algorithms by exploiting novel techniques from machine learning [Bonami et al., 2018, Gasse et al., 2019, He et al., 2014, Khalil et al., 2016, 2017, Kruber et al., 2017, Lodi and Zarpellon, 2017, Alvarez et al., 2017]; see also Bengio et al. [2020] for a recent survey. On the flip side, mixed-integer optimization techniques have been used to analyze function classes represented by neural networks [Serra et al., 2018, Anderson et al., 2020, Fischetti and Jo, 2017, Serra and Ramalingam, 2020, Serra et al., 2020]. In Section 2 below, we show another new use of mixed-integer optimization tools for understanding function classes represented by neural networks. Design of training algorithms. We believe that a better understanding of the function classes represented exactly by a neural architecture also has benefits in terms of understanding the complexity of the training problem. For instance, in a paper by Arora et al. [2018], an understanding of single layer ReLU networks enables the design of a globally optimal algorithm for solving the empirical risk minimization (ERM) problem, that runs in polynomial time in the number of data points in fixed dimension. See also Goel et al. [2017, 2018], Goel and Klivans [2019], Dey et al. [2020], Boob et al. [2020], Goel et al. [2021], Froese et al. [2021] for a similar lines of work. Neural Networks and Tropical Geometry. A recent stream of research involves the interplay between neural networks and tropical geometry. The piecewise linear functions computed by neural networks can be seen as (tropical quotients of) tropical polynomials. Linear regions of these functions correspond to vertices of so-called Newton polytopes associated with these tropical polynomials. 
Applications of this correspondence include bounding the number of linear regions of a neural network [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Montúfar et al., 2021] and understanding decision boundaries [Alfarra et al., 2020]. In Section 4 we present a novel application of tropical concepts to understand neural networks. We refer to Maragos et al. [2021] for a recent survey of connections between machine learning and tropical geometry, as well as to the textbooks by Maclagan and Sturmfels [2015] and Joswig [2022] for in-depth introductions to tropical geometry and tropical combinatorics.

2 Conditional Lower Bounds on Depth via Mixed-Integer Programming

In this section, we provide a computer-aided proof that, under a natural, yet unproven assumption, the function f(x) := max{0, x1, x2, x3, x4} cannot be represented by a 3-layer NN. It is worth noting that, to the best of our knowledge, no CPWL function is known for which the non-existence of a 3-layer NN can be proven without additional assumption. For easier notation, we write x0 := 0. We first prove that we may restrict ourselves to NNs without biases. This holds true independently of our assumption, which we introduce afterwards.

Definition 2.1. A function g : Rn → Rm is called positively homogeneous if g(λx) = λg(x) for all λ ≥ 0.

Definition 2.2. For an NN given by affine transformations T (`)(x) = A(`)x + b(`), we define the corresponding homogenized NN to be the NN given by T̃ (`)(x) = A(`)x with all biases set to zero.

Proposition 2.3. If an NN computes a positively homogeneous function, then the corresponding homogenized NN computes the same function.

Proof. Let g : Rn0 → Rnk+1 be the function computed by the original NN and g̃ the one computed by the homogenized NN. Further, for any 0 ≤ ` ≤ k, let g(`) = T (`+1) ◦ σ ◦ T (`) ◦ · · · ◦ T (2) ◦ σ ◦ T (1) be the function computed by the sub-NN consisting of the first ` + 1 layers and let g̃(`) be the function computed by the corresponding homogenized sub-NN. We first show by induction on ` that the norm ‖g(`)(x) − g̃(`)(x)‖ is bounded by a global constant that only depends on the parameters of the NN but not on x. For ` = 0, we obviously have ‖g(0)(x) − g̃(0)(x)‖ = ‖b(1)‖ =: C0, settling the induction base. For the induction step, let ` ≥ 1 and assume that ‖g(`−1)(x) − g̃(`−1)(x)‖ ≤ C`−1, where C`−1 only depends on the parameters of the NN. Since the component-wise ReLU activation has Lipschitz constant 1, this implies ‖(σ ◦ g(`−1))(x) − (σ ◦ g̃(`−1))(x)‖ ≤ C`−1. Using any matrix norm that is compatible with the Euclidean vector norm, we obtain ‖g(`)(x) − g̃(`)(x)‖ = ‖b(`+1) + A(`+1)((σ ◦ g(`−1))(x) − (σ ◦ g̃(`−1))(x))‖ ≤ ‖b(`+1)‖ + ‖A(`+1)‖ · C`−1 =: C`. Since the right-hand side only depends on NN parameters, the induction is completed. Finally, we show that g = g̃. For the sake of contradiction, suppose that there is an x ∈ Rn0 with ‖g(x) − g̃(x)‖ = δ > 0. Let x′ := ((Ck + 1)/δ) · x; then, by positive homogeneity, it follows that ‖g(x′) − g̃(x′)‖ = Ck + 1 > Ck, contradicting the property shown above. Thus, we have g = g̃.

Since f = max{0, x1, x2, x3, x4} is positively homogeneous, Proposition 2.3 implies that, if there is a 3-layer NN computing f, then there also is one that has no biases. Therefore, in the remainder of this section, we only consider NNs without biases and assume implicitly that all considered CPWL functions are positively homogeneous. In particular, any piece of such a CPWL function is linear and not only affine linear.
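Proposition 2.3 can be checked numerically on small examples. The sketch below (example NN chosen by us, not from the paper) uses an NN with genuinely nonzero biases that nevertheless computes the positively homogeneous function f(x) = x, and confirms that setting all biases to zero does not change the computed function.

```python
import numpy as np

# f(x) = relu(x + 1) - relu(-x - 1) - relu(0*x + 1) = x for all x, although the NN has biases.
A1, b1 = np.array([[1.0], [-1.0], [0.0]]), np.array([1.0, -1.0, 1.0])
A2 = np.array([[1.0, -1.0, -1.0]])

def forward(x, homogenized=False):
    hidden = np.maximum(A1 @ x + (0.0 if homogenized else b1), 0.0)
    return (A2 @ hidden)[0]

for x in np.random.randn(100, 1):
    assert np.isclose(forward(x), x[0])                    # original NN computes x
    assert np.isclose(forward(x, homogenized=True), x[0])  # homogenized NN computes x as well
```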
Observe that, for the function f, the only points of non-differentiability (a.k.a. breakpoints) are at places where at least two of the five numbers x0 = 0, x1, x2, x3, and x4 are equal. Hence, if some neuron of an NN computing f introduces breakpoints at other places, these breakpoints must be canceled out by other neurons. Therefore, it is a natural assumption that such breakpoints are not introduced at all in the first place. To make this assumption formal, let Hij = {x ∈ R4 | xi = xj}, for 0 ≤ i < j ≤ 4, be ten hyperplanes in R4 and H = ⋃ 0≤i<j≤4 Hij be the corresponding hyperplane arrangement. The regions or cells of H are defined to be the closures of the connected components of R4 \ H. It is easy to see that these regions are in one-to-one correspondence with the 5! = 120 possible orderings of the five numbers x0 = 0, x1, x2, x3, and x4. More precisely, for a permutation π of the five indices [4]0 = {0, 1, 2, 3, 4}, the corresponding region is the polyhedron Cπ := {x ∈ R4 | xπ(0) ≤ xπ(1) ≤ xπ(2) ≤ xπ(3) ≤ xπ(4)}. We say that a (positively homogeneous) CPWL function g is H-conforming if it is linear within any of these regions of H, that is, if it only has breakpoints where the relative ordering of the five values x0 = 0, x1, x2, x3, x4 changes. Moreover, an NN is said to be H-conforming if the output of each neuron contained in the NN is H-conforming. Equivalently, this is the case if and only if all intermediate functions σ ◦ T (`) ◦ σ ◦ T (`−1) ◦ · · · ◦ σ ◦ T (1), ` ∈ [k], are H-conforming. Now our assumption can be formally phrased as follows.

Assumption 2.4. If there exists a 3-layer NN computing f(x) = max{0, x1, x2, x3, x4}, then there also exists one that is H-conforming.

We use mixed-integer programming to prove the following theorem.

Theorem 2.5. Under Assumption 2.4, there does not exist a 3-layer NN that computes the function f(x) = max{0, x1, x2, x3, x4}.

Proof (Outline). We first study some geometric properties of the hyperplane arrangement H. This will show that each of the 120 cells of H is a simplicial polyhedral cone spanned by 4 extreme rays. In total, there are 30 such rays (because rays are used multiple times to span different cones). This implies that the set of H-conforming functions of type R4 → R is a 30-dimensional vector space and each function is uniquely determined by its values on the 30 rays. We then use linear algebra to show that the space of functions generated by H-conforming two-layer NNs is a 14-dimensional subspace. Moreover, with two hidden layers, at least 29 of the 30 dimensions can be generated and f is not contained in this 29-dimensional subspace. So, the question remains whether the 14 dimensions producible with the first hidden layer can be combined in such a way that after applying a ReLU activation in the second hidden layer, we do not end up within the 29-dimensional subspace. We model this question as a mixed-integer program (MIP). Solving the MIP yields that we always end up within the 29-dimensional subspace, implying that f cannot be represented by a 3-layer NN. This provides a computational proof of Theorem 2.5. All details can be found in Appendix B.

3 Going Beyond 2k-Term Max Functions with k Layers

In this section we prove the following result:

Theorem 3.1. For any k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k).

In order to prove this theorem, we provide a specific function that is in ReLU(k) \ MAX(2k) for any number of hidden layers k ≥ 2.
The challenging part is to show that the function is in fact not contained in MAX(2k). Proposition 3.2. For any n ≥ 3, the function f : Rn → R, f(x) = max{0, x1, x2, . . . , xn−3, max{xn−2, xn−1}+ max{0, xn}} (2) cannot be written as a linear combination of n-term max functions. The above proposition means that it is not possible to write f(x) in the form f(x) = p∑ i=1 λi max{`i1(x), . . . , `in(x)} where p ∈ N, λ1, . . . , λp ∈ R, and `ij : Rn → R is an affine linear function for every i ∈ [p] and j ∈ [n]. (Note that max functions with less than n terms are also allowed, as some functions `ij may coincide.) Before we prove Proposition 3.2, we show that it implies Theorem 3.1. Proof of Theorem 3.1. For k ≥ 2, let n := 2k. By Proposition 3.2, function f defined in (2) is not contained in MAX(2k). It remains to show that it can be represented using a ReLU NN with k hidden layers. To see this, first observe that any of the n/2 = 2k−1 terms max{0, x1}, max{x2i, x2i+1} for i ∈ [n/2 − 2], and max{xn−2, xn−1} + max{0, xn} can be expressed by a one-hidden-layer NN since all these are (linear combinations of) 2-term max functions. Since f is the maximum of these 2k−1 terms, and since the maximum of 2k−1 numbers can be computed with k − 1 hidden layers [Arora et al., 2018], this implies that f is in ReLU(k). In order to prove Proposition 3.2, we need the concept of polyhedral complexes. A polyhedral complex P is a finite set of polyhedra such that each face of a polyhedron in P is also in P , and for two polyhedra P,Q ∈ P , their intersection P ∩Q is a common face of P and Q (possibly the empty face). Given a polyhedral complex P in Rn and an integer m ∈ [n], we let Pm denote the collection of all m-dimensional polyhedra in P . For a convex CPWL function f , we define its underlying polyhedral complex as follows: it is the unique polyhedral complex covering Rn (i.e., each point in Rn belongs to some polyhedron in P) whose n-dimensional polyhedra coincide with the domains of the (maximal) affine pieces of f . In particular, f is affinely linear within each P ∈ P , but not within any strict superset of a polyhedron in Pn. Exploiting properties of polyhedral complexes associated with CPWL functions, we prove the following proposition in Appendix C. Proposition 3.3. Let f0 : Rn → R be a convex CPWL function and let P0 be the underlying polyhedral complex. If there exists a hyperplane H ⊆ Rn such that the set T := ⋃{ F ∈ Pn−10 ∣∣ F ⊆ H} is nonempty and contains no line, then f0 cannot be expressed as a linear combination of n-term maxima of affine linear functions. This allows us to prove Proposition 3.2. Proof of Proposition 3.2. Observe that f (defined in (2)) has the alternate representation f(x) = max{0, x1, x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn} as a maximum of n+ 2 terms. Let P be its underlying polyhedral complex. Let the hyperplane H be defined by x1 = 0. Observe that any facet in Pn−1 is a polyhedron defined by two of the n + 2 terms that are equal and at least as large as each of the remaining n terms. Hence, the only facet that could possibly be contained in H is F := {x ∈ Rn | x1 = 0 ≥ x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn}. Note that F is indeed an (n− 1)-dimensional facet in Pn−1, because, for example, the full neighborhood of (0,−1, . . . ,−1) ∈ Rn intersected with H is contained in F . Finally, we need to show that F is pointed, that is, it contains no line. 
A well-known fact from polyhedral theory says if there is any line in F with direction d ∈ Rn \ {0}, then d must satisfy the defining inequalities with equality. However, only the zero vector does this. Hence, F cannot contain a line. Therefore, when applying Proposition 3.3 to f with underlying polyhedral complex P and hyperplane H , we have T = F , which is nonempty and contains no line. Hence, f cannot be written as linear combination of n-term maxima. 4 A Width Bound for NNs with Small Depth While the arguments in Arora et al. [2018] show that CPWLn = ReLUn(dlog2(n + 1)e), they do not provide any bound on the width of the NN required to represent any particular continuous piecewise linear function. The purpose of this section is to prove that for fixed dimension n, the required width for exact, depth-minimal representation of a CPWL function can be polynomially bounded in the number p of affine pieces; in particular pO(n 2). This is closely related to works that bound the number of linear pieces of an NN as a function of the size [Montúfar et al., 2014, Pascanu et al., 2014, Raghu et al., 2017, Montúfar et al., 2021]. It can also be seen as a counterpart, in the context of exact representations, to quantitative universal approximation theorems that bound the number of neurons required to achieve a certain approximation guarantee; see, e.g., Barron [1993, 1994], Mhaskar [1993], Pinkus [1999], Mhaskar [1996], Mhaskar and Micchelli [1995]. 4.1 The Convex Case We first derive our result for the case of convex CPWL functions and then use this to also prove the general nonconvex case. Our width bound is a consequence of the following theorem about convex CPWL functions, for which we are going to provide a geometric proof later. Theorem 4.1. Let f(x) = max{aTi x + bi | i ∈ [p]} be a convex CPWL function defined on Rn. Then f can be written as f(x) = ∑ S⊆[p], |S|≤n+1 cS max{aTi x+ bi | i ∈ S} with coefficients cS ∈ Z, for S ⊆ [p], |S| ≤ n+ 1. For the convex case, this yields a stronger version of the theorem by Wang and Sun [2005] stating that any (not necessarily convex) CPWL function can be written as a linear combination of (n+ 1)-term maxima. Theorem 4.1 is stronger in the sense that it guarantees that all pieces of the (n+ 1)-term maxima must be pieces of the original function, making it possible to bound the total number of these (n+ 1)-term maxima and, therefore, the size of an NN representing f . Theorem 4.2. Let f : Rn → R be a convex CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth dlog2(n+ 1)e+ 1 and width O(pn+1). Proof. Since the number of possible subsets S ⊆ [p] with |S| ≤ n + 1 is bounded by pn+1, the theorem follows by Theorem 4.1 and the construction from Arora et al. [2018, Theorem 2.1]. Before we present the proof of Theorem 4.1, we show how we can generalize its consequences to the nonconvex case. 4.2 The General (Nonconvex) Case It is a well-known fact that every CPWL function can be expressed as a difference of two convex CPWL functions, see, e.g., Wang [2004, Theorem 1]. This allows us to derive the general case from the convex case. What we need, however, is to bound the number of affine pieces of the two convex CPWL functions in terms of the number of pieces of the original function. Therefore, we consider a specific decomposition for which such bounds can easily be achieved. Proposition 4.3. Let f : Rn → R be a CPWL function with p affine pieces. 
Then, f can be written as f = g − h where both g and h are convex CPWL functions with at most p^{2n+1} pieces.

Proof. Suppose the p affine pieces of f are given by x ↦ aTi x + bi, i ∈ [p]. Define the function h(x) := ∑_{1≤i<j≤p} max{aTi x + bi, aTj x + bj} and let g := f + h. Then, obviously, f = g − h. It remains to show that both g and h are convex CPWL functions with at most p^{2n+1} pieces. The convexity of h is clear by definition. Consider the \binom{p}{2} = p(p − 1)/2 < p^2 hyperplanes given by aTi x + bi = aTj x + bj, 1 ≤ i < j ≤ p. They divide Rn into at most \binom{p^2}{n} + \binom{p^2}{n−1} + · · · + \binom{p^2}{0} ≤ p^{2n} regions (compare Edelsbrunner [1987, Theorem 1.3]) in each of which h is affine. In particular, h has at most p^{2n} ≤ p^{2n+1} pieces. Next, we show that g = f + h is convex. Intuitively, this holds because each possible breaking hyperplane of f is made convex by adding h. To make this formal, note that by the definition of convexity, it suffices to show that g is convex along each affine line. For this purpose, consider an arbitrary line x(t) = ta + b, t ∈ R, given by a, b ∈ Rn. Let f̃(t) := f(x(t)), g̃(t) := g(x(t)), and h̃(t) := h(x(t)). We need to show that g̃ : R → R is a convex function. Observe that f̃, g̃, and h̃ are clearly one-dimensional CPWL functions with the property g̃ = f̃ + h̃. Hence, it suffices to show that g̃ is convex locally around each of its breakpoints. Let t ∈ R be an arbitrary breakpoint of g̃. If f̃ is already convex locally around t, then the same holds for g̃ as well since h̃ inherits convexity from h. Now suppose that t is a nonconvex breakpoint of f̃. Then there exist two distinct pieces of f, indexed by i, j ∈ [p] with i ≠ j, such that f̃(t′) = min{aTi x(t′) + bi, aTj x(t′) + bj} for all t′ sufficiently close to t. By construction, h̃(t′) contains the summand max{aTi x(t′) + bi, aTj x(t′) + bj}. Thus, adding this summand to f̃ linearizes the nonconvex breakpoint of f̃, while adding all the other summands preserves convexity. In total, g̃ is convex locally around t, which finishes the proof that g is a convex function. Finally, observe that pieces of g = f + h are always intersections of pieces of f and h, for which we have only p · p^{2n} = p^{2n+1} possibilities.

Having this, we may conclude the following.

Theorem 4.4. Let f : Rn → R be a CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth dlog2(n + 1)e + 1 and width O(p^{2n^2+3n+1}).

Proof. Consider the decomposition f = g − h from Proposition 4.3. Using Theorem 4.2, we obtain that both g and h can be represented with the required depth dlog2(n + 1)e + 1 and with width O((p^{2n+1})^{n+1}) = O(p^{2n^2+3n+1}). Thus, the same holds for f.

4.3 Extended Newton Polyhedra of Convex CPWL Functions

For our proof of Theorem 4.1, we use a correspondence of convex CPWL functions with certain polyhedra, which are known as (extended) Newton polyhedra in tropical geometry [Maclagan and Sturmfels, 2015]. These relations between tropical geometry and neural networks have previously been applied to investigate expressivity of NNs; compare our references in the introduction. In order to formalize this correspondence, let CCPWLn ⊆ CPWLn be the set of convex CPWL functions of type Rn → R. For f(x) = max{aTi x + bi | i ∈ [p]} in CCPWLn, we define its so-called extended Newton polyhedron to be N (f) := conv({(ai, bi) ∈ Rn × R | i ∈ [p]}) + cone({−en+1}) ⊆ Rn+1. We denote the set of all possible extended Newton polyhedra in Rn+1 as Newtn.
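Before developing this correspondence further, we note that the decomposition used in the proof of Proposition 4.3 admits a quick numerical sanity check. The sketch below (with an example function chosen by us, not from the paper) forms h as the sum of pairwise maxima of the pieces of a one-dimensional nonconvex CPWL function f with p = 3 pieces and verifies that g := f + h is convex, so that f = g − h is a difference of convex CPWL functions as claimed.

```python
import numpy as np

# f(x) = max{0, x} - 2*max{0, x - 1} has pieces 0, x, and -x + 2 (nonconvex kink at x = 1).
pieces = [(0.0, 0.0), (1.0, 0.0), (-1.0, 2.0)]      # (slope a_i, offset b_i) of each piece

def f(x):
    return max(0.0, x) - 2.0 * max(0.0, x - 1.0)

def h(x):                                           # sum of pairwise maxima of the pieces
    return sum(max(a1 * x + b1, a2 * x + b2)
               for i, (a1, b1) in enumerate(pieces)
               for (a2, b2) in pieces[i + 1:])

xs = np.linspace(-3.0, 4.0, 2001)
gs = np.array([f(x) + h(x) for x in xs])            # samples of g := f + h
assert np.all(np.diff(gs, n=2) >= -1e-9)            # nonnegative second differences: g is convex
```

We now return to the correspondence between convex CPWL functions and their extended Newton polyhedra.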
Explicitly, Newtn is the set of (unbounded) polyhedra in Rn+1 that emerge from a polytope by adding the negative of the (n + 1)-st unit vector −en+1 as an extreme ray. Hence, a set P ⊆ Rn+1 is an element of Newtn if and only if P can be written as P = conv({(ai, bi) ∈ Rn × R | i ∈ [p]}) + cone({−en+1}). Conversely, for a polyhedron P ∈ Newtn of this form, let F(P ) ∈ CCPWLn be the function defined by F(P )(x) = max{aTi x + bi | i ∈ [p]}. There is an intuitive way of thinking about the extended Newton polyhedron P of a convex CPWL function f: it consists of all hyperplane coefficients (a, b) ∈ Rn × R such that aTx + b ≤ f(x) for all x ∈ Rn; in fact, N (f) is the negative of the epigraph of the convex conjugate of f. This also explains why we add the extreme ray −en+1: decreasing b obviously maintains the property of aTx + b being a lower bound on the function f. We need the notion of the Minkowski sum of two polyhedra P and Q: it is given as the set P + Q = {p + q | p ∈ P, q ∈ Q}. In fact, there is a one-to-one correspondence between elements of CCPWLn and Newtn, which is nicely compatible with some (functional and polyhedral) operations. This correspondence has been studied before in tropical geometry [Maclagan and Sturmfels, 2015, Joswig, 2022] and convex geometry [Hiriart-Urruty and Lemaréchal, 1993], as well as in the neural network literature [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Alfarra et al., 2020, Montúfar et al., 2021]. We summarize the key findings of this correspondence relevant to our work in the following proposition:

Proposition 4.5. Let n ∈ N and f1, f2 ∈ CCPWLn. Then it holds that
(i) the functions N : CCPWLn → Newtn and F : Newtn → CCPWLn are well-defined, that is, their output is independent of the representation of the input by pieces or vertices, respectively,
(ii) N and F are bijections and inverse to each other,
(iii) N (max{f1, f2}) = conv(N (f1), N (f2)) := conv(N (f1) ∪ N (f2)),
(iv) N (f1 + f2) = N (f1) + N (f2), where the + on the right-hand side is Minkowski addition.

An algebraic way of phrasing this proposition is as follows: N and F are isomorphisms between the semirings (CCPWLn, max, +) and (Newtn, conv, +).

4.4 Outline of the Proof of Theorem 4.1

We prove Theorem 4.1 in full detail in Appendix D. The rough idea is as follows. Suppose we have a p-term max function f with p ≥ n + 2. By Proposition 4.5, f corresponds to a polyhedron P ∈ Newtn with at least n + 2 vertices. Applying a classical result from discrete geometry known as Radon's theorem allows us to carefully decompose P into a "signed" Minkowski sum of polyhedra in Newtn whose vertices are subsets of at most p − 1 out of the p vertices of P. (Some polyhedra may occur with "negative" coefficients in that sum, meaning that they are actually added to P instead of the other polyhedra; the corresponding CPWL functions will then have negative coefficients in the linear combination representing f.) Translating this back into the world of CPWL functions by Proposition 4.5 yields that f can be written as a linear combination of p′-term maxima with p′ < p, where each of them involves a subset of the p affine terms of f. We can then obtain Theorem 4.1 by iterating until every occurring maximum expression involves at most n + 1 terms.

4.5 Potential Approaches to Show Lower Bounds on the Width

In light of the upper width bounds shown in this section, a natural question to ask is whether meaningful lower bounds can also be achieved.
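Before turning to that question, we note that parts (iii) and (iv) of Proposition 4.5 can be checked numerically by representing a convex CPWL function through the generators (ai, bi) of its extended Newton polyhedron. A minimal one-dimensional sketch (examples ours, not from the paper):

```python
import numpy as np

f1 = [(0.0, 0.0), (1.0, 0.0)]        # f1(x) = max{0, x}:  generators of N(f1)
f2 = [(-1.0, 1.0), (2.0, -1.0)]      # f2(x) = max{-x + 1, 2x - 1}:  generators of N(f2)

def evaluate(gens, x):               # the convex CPWL function generated by a coefficient list
    return max(a * x + b for a, b in gens)

max_gens = f1 + f2                   # (iii): conv(N(f1) U N(f2)) is generated by the union
sum_gens = [(a1 + a2, b1 + b2)       # (iv):  N(f1) + N(f2) is generated by pairwise sums
            for a1, b1 in f1 for a2, b2 in f2]

for x in np.linspace(-3.0, 3.0, 61):
    assert np.isclose(evaluate(max_gens, x), max(evaluate(f1, x), evaluate(f2, x)))
    assert np.isclose(evaluate(sum_gens, x), evaluate(f1, x) + evaluate(f2, x))
```

Returning to the question of whether meaningful lower bounds on the width can be achieved: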
This would mean constructing a family of CPWL functions with p pieces defined on Rn (with different values of p and n), for which we can prove that a large width is required to represent these functions with NNs of depth dlog2(n+ 1)e+ 1. A trivial and not very satisfying answer follows, e.g., from Raghu et al. [2017] or Serra et al. [2018]: for fixed input dimension n, they show that a function computed by an NN with k hidden layers and width w has at most O(wkn) pieces. For our setting, this means that an NN with logarithmic depth needs a width of at least O(p1/(n logn)) to represent a function with p pieces. This is, of course, very far away from our upper bounds. Similar upper bounds on the number of pieces have been proven by many other authors and are often used to show depth-width tradeoffs [Montúfar et al., 2014, 2021, Pascanu et al., 2014, Telgarsky, 2016b, Arora et al., 2018]. However, there is a good reason why all these results only give rise to very trivial lower bounds for our setting: the focus is always on functions with very many pieces, which then, consequently, need many neurons to be represented (with small depth). However, since the lower bounds we strive for depend on the number of pieces, we would need to construct a family of functions with comparably few pieces that still need very many neurons to be represented. In general, it seems to be a tough task to argue why such functions should exist. A different approach could leverage methods from complexity theory, in particular from circuit complexity. Neural networks are basically arithmetic circuits with very special operations allowed. In fact, they can be seen as a tropical variant of arithmetic circuits. Showing circuit lower bounds is a notoriously difficult task in complexity theory, but maybe some conditional result (based on common conjectures similar to P 6= NP) could be established. We think that the question whether our bounds are tight, or whether at least some non-trivial lower bounds on the width for NNs with logarithmic depth can be shown, is an exciting question for further research. 5 Discussion of Future Research Directions The most obvious and, at the same time, most exciting open research question is to prove or disprove Conjecture 1.1, or equivalently Conjecture 1.2. The first step could be to prove Assumption 2.4. The assumption is intuitive because every breakpoint introduced at other places needs to be canceled out later. Therefore, it is natural to assume that these breakpoints do not have to be introduced in the first place. However, this intuition does not seem to be enough for a formal proof because it could occur that additional breakpoints in intermediate steps, which are canceled out later, also influence the behavior of the function at other places where we allow breakpoints in the end. Another step towards resolving our conjecture may be to find an alternative proof of Theorem 2.5 not using Assumption 2.4. This might also be beneficial for generalizing our techniques to more hidden layers, since, while theoretically possible, a direct generalization is infeasible due to computational limitations. In light of our results from Section 3, it would be desirable to provide a complete characterization of the functions contained in ReLU(k). Another potential research goal is improving our upper bounds on the width from Section 4 and/or proving matching lower bounds as discussed in Section 4.5. Some more interesting research directions are the following: 1. 
Establishing or strengthening our results for special classes of NNs like recurrent neural networks (RNNs) or convolutional neural networks (CNNs), 2. Using exact representation results to show more drastic depth-width tradeoffs compared to existing results in the literature, 3. Understanding how the class ReLU(k) changes when a polynomial upper bound is imposed on the width of the NN; see related work by Vardi et al. [2021]. Acknowledgments and Disclosure of Funding Christoph Hertrich gratefully acknowledges funding by DFG-GRK 2434 “Facets of Complexity”. Amitabh Basu gratefully acknowledges support from AFOSR Grant FA95502010341 and NSF Grant CCF2006587. Martin Skutella gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy — The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
1. What is the focus of the paper regarding deep neural networks? 2. What are the three main results of the paper, and how do they contribute to understanding deep neural networks theoretically? 3. What is the concern regarding the proof of Conjecture 1.2 for the k = 2 case? 4. How does the reviewer assess the originality, quality, clarity, and significance of the paper?
Summary Of The Paper Review
Summary Of The Paper
The authors study the class of functions that can be represented by a fully connected neural network with ReLU activations. First, they conjecture that for any k ∈ N and n = 2^k, the function fn(x) = max{0, x1, . . . , xn} cannot be represented by a fully connected network with k hidden layers, and they prove this conjecture for k = 2 under a mild assumption. Second, they prove that the class of functions that can be represented by a (k + 1)-layer NN is strictly larger than the class of functions that are linear combinations of 2^k-term max functions. Finally, they provide a bound on the width of the NN required to represent continuous piecewise linear functions.
Review
Originality: The related works are adequately cited. The novelty of this paper is high. The three main results in this paper, as mentioned in the summary above, will certainly help us gain a better understanding of deep neural networks from a theoretical perspective. I have checked the technical parts and find that the proofs are solid. I think this is a significant contribution to the deep learning community. My only concern is that the proof of Conjecture 1.2 for the k = 2 case relies on Assumption 2.4, which is not quite elegant. It would be interesting to prove Conjecture 1.2 for the k = 2 case without any further assumptions.
Quality: This paper is technically sound.
Clarity: This paper is clearly written and well organized. I find it easy to follow.
Significance: I think the results in this paper are significant, as explained above.
NIPS
Title Towards Lower Bounds on the Depth of ReLU Neural Networks Abstract We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes. 1 Introduction A core problem in machine learning or statistical pattern recognition is the estimation of an unknown data distribution with access to i.i.d. samples from the distribution. It is well-known that there is a tension between how much prior information one has about the data distribution and how many samples one needs to solve the problem with high confidence (or equivalently, how much variance one has in one’s estimate). This is referred to as the bias-variance trade-off or the bias-complexity trade-off. Neural networks provide a way to turn this bias/complexity knob in a controlled manner that has been studied for decades going back to the idea of a perceptron by Rosenblatt [1958]. This is done by modifying the architecture of a neural network class of functions, which has two parameters: depth and size. As one increases these parameters, the class of functions becomes more expressive. In terms of the bias-variance trade-off, the “bias” decreases as the class of functions becomes more expressive, but the “variance” or “complexity” increases. So-called universal approximation theorems [Cybenko, 1989, Hornik, 1991, Anthony and Bartlett, 1999] show that even with a single hidden layer, i.e., when the depth of the architecture is the smallest possible value, one can essentially reduce the “bias” as much as one desires, by increasing the size of the network or the number of neurons used in the neural network. Nevertheless, it can be advantageous both theoretically and empirically to increase the depth because a substantial reduction in the size can be achieved; Telgarsky [2016a], Eldan and Shamir [2016], Arora et al. [2018] is a small sample of recent work in this direction. To get a better quantitative handle on these trade-offs, 35th Conference on Neural Information Processing Systems (NeurIPS 2021). it is important to understand what classes of functions are exactly representable by neural networks with a certain architecture. The precise mathematical statements of universal approximation theorems show that single layer networks can approximate arbitrarily well any continuous function (under some additional mild hypotheses). While this suggests that single layer networks are good enough from a learning perspective, from a mathematical perspective, one can ask the question if the class of functions represented by a single layer is a strict subset of the class of function represented by two or more hidden layers. 
On the question of size, one can ask for precise bounds on the size of the network of a given depth to represent a certain class of functions. We believe that a better understanding of the function classes exactly represented by different architectures will have implications not just for mathematical foundations, but also algorithmic and statistical learning aspects of neural networks. The task of searching for the “best” function in that class can only benefit from a better understanding of the nature of functions in that class. A motivating question behind the results in this paper is to understand the hierarchy of function classes exactly represented by neural networks of increasing depth. We now introduce more precise notation and terminology to set the stage for our investigations. Notation. We write [n] := {1, 2, . . . , n} for the set of natural numbers up to n (without zero) and [n]0 := [n] ∪ {0} for the same set including zero. For any n ∈ N, let σ : Rn → Rn be the component-wise rectifier function σ(x) = (max{0, x1},max{0, x2}, . . . ,max{0, xn}). For any number of hidden layers k ∈ N, a (k + 1)-layer feedforward neural network with rectified linear units (ReLU NN or simply NN) is given by k affine transformations T (`) : Rn`−1 → Rn` , x 7→ A(`)x+ b(`), for ` ∈ [k], and a linear transformation T (k+1) : Rnk → Rnk+1 , x 7→ A(k+1)x. It is said to compute or represent the function f : Rn0 → Rnk+1 given by f = T (k+1) ◦ σ ◦ T (k) ◦ σ ◦ · · · ◦ T (2) ◦ σ ◦ T (1). The matrices A(`) ∈ Rn`×n`−1 are called the weights and the vectors b(`) ∈ Rn` are the biases of the `-th layer. The number n` ∈ N is called the width of the `-th layer. The maximum width of all hidden layers max`∈[k] n` is called the width of the NN. Further, we say that the NN has depth k + 1 and size ∑k `=1 n`. Often, NNs are represented as layered, directed, acyclic graphs where each dimension of each layer (including input layer ` = 0 and output layer ` = k + 1) is one vertex, weights are arc labels, and biases are node labels. Then, the vertices are called neurons. For a given input x = x(0) ∈ Rn0 , let y(`) := T (`)(x(`−1)) ∈ Rn` be the activation vector and x(`) := σ(y`) ∈ Rn` the output vector of the `-th layer. Further, let y := y(k+1) = f(x) be the output of the NN. We also say that the i-th component of each of these vectors is the activation or the output of the i-th neuron in the `-th layer. For k ∈ N, we define ReLUn(k) := {f : Rn → R | f can be represented by a (k + 1)-layer NN}, CPWLn := {f : Rn → R | f is continuous and piecewise linear}. By definition, a continuous function f : Rn → R is piecewise linear in case there is a finite set of polyhedra whose union is Rn, and f is affine linear over each such polyhedron. In order to analyze ReLUn(k), we use another function class defined as follows. We call a function g a p-term max function if it can be expressed as maximum of p affine terms, that is, g(x) = max{`1(x), . . . , `p(x)} where `i : Rn → R is affinely linear for i ∈ [p]. Based on that, we define MAXn(p) := {f : Rn → R | f is a linear combination of p-term max functions}. If the input dimension n is not important for the context, we sometimes drop the index and use ReLU(k) := ⋃ n∈N ReLUn(k) and MAX(p) := ⋃ n∈N MAXn(p) instead. Since we deal with polyhedra a lot in this paper, we will use the standard notations convA and coneA for the convex and conic hulls of a set A ⊆ Rn. For an in-depth treatment of polyhedra and (mixed-integer) optimization, we refer to Schrijver [1986]. 
1.1 Our Contribution It is not hard to see that any function expressed by a ReLU network is a continuous and piecewise linear (CPWL) function, because one is composing continuous piecewise linear functions together. Based on a result by Wang and Sun [2005], Arora et al. [2018] show that every CPWL function defined on Rn can be represented by a ReLU neural network with dlog2(n+ 1)e hidden layers. We wish to understand whether one can do better. We believe it is not possible to do better and we pose the following conjecture to better understand the importance of depth in neural networks. Conjecture 1.1. For any n ∈ N, let k∗ := dlog2(n+ 1)e. Then it holds that ReLUn(0) ( ReLUn(1) ( · · · ( ReLUn(k∗ − 1) ( ReLUn(k∗) = CPWLn . (1) Conjecture 1.1 claims that any additional layer up to k∗ hidden layers strictly increases the set of representable functions. This would imply that the construction by Arora et al. [2018, Theorem 2.1] is actually depth-minimal. Observe that in order to prove Conjecture 1.1, it suffices to find a single function f ∈ ReLUn(k∗) \ ReLUn(k∗ − 1) with n = 2k ∗−1 for all k∗ ∈ N. This also implies all remaining strict inclusions ReLUn(i− 1) ( ReLUn(i) for i < k∗ since ReLUn(i− 1) = ReLUn(i) directly implies that ReLUn(i− 1) = ReLUn(i′) for all i′ ≥ i− 1. In fact, there is a canonical candidate for such a function, allowing us to reformulate the conjecture as follows. Conjecture 1.2. For any k ∈ N, n = 2k, the function fn(x) = max{0, x1, . . . , xn} cannot be represented with k hidden layers. Proposition 1.3. Conjecture 1.1 and Conjecture 1.2 are equivalent. Proof (Sketch). We argued above that Conjecture 1.2 implies Conjecture 1.1. For the other direction, one can argue that, if the specific (n+1)-term max function fn can be represented by k hidden layers, then every other (n+ 1)-term max function as well. The claim then follows via a result by [Wang and Sun, 2005] stating that any f ∈ CPWLn can be written as linear combination of (n+ 1)-term max functions. We provide a detailed argument in Appendix A. It is known that Conjecture 1.2 holds for k = 1 [Mukherjee and Basu, 2017]. However, the conjecture remains open for k ≥ 2. In this paper, we present the following results as partial progress towards resolving this conjecture. In Section 2, we resolve Conjecture 1.2 for k = 2, under a natural assumption on the breakpoints of the function represented by any intermediate neuron. We achieve this result by leveraging techniques from mixed-integer programming to analyze the set of functions computable by certain NNs. It is not hard to see that MAX(2k) ⊆ ReLU(k) for all k ∈ N [Arora et al., 2018], that is, any 2k-term max function (and linear combinations thereof) can be expressed with k hidden layers. One might ask whether the converse is true as well, that is, whether the classes MAX(2k) and ReLU(k) are actually equal. This would not only provide a neat characterization of ReLU(k), but also prove Conjecture 1.2 without any additional assumption since one can show that max{0, x1, . . . , x2k} is not contained in MAX(2k). In fact, this is true for k = 1, that is, a function is computable with one hidden layer if and only if it is a linear combination of 2-term max functions. However, in Section 3, we show that for k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In this section, the key technical ingredient is the theory of polyhedral complexes associated with CPWL functions. This way, we provide important insights concerning the richness of the class ReLU(k). 
So far, we have focused on understanding the smallest depth needed to express CPWL functions using neural networks with ReLU activations. In Section 4, we complement these results by upper bounds on the sizes of the networks needed for expressing arbitrary CPWL functions. In particular, Theorem 4.4 shows that any continuous piecewise linear function with p linear/affine pieces on Rn can be expressed by a network with depth at most O(log n) and width at most pO(n2). We arrive at this result by introducing a novel application of recently established interactions between neural networks and tropical geometry. 1.2 Related Work Depth versus size. Soon after the original universal approximation theorems [Cybenko, 1989, Hornik, 1991], concrete bounds were obtained on the number of neurons needed in the hidden layer to achieve a certain level of accuracy. The literature on this is vast and we refer to a small representative sample here [Barron, 1993, 1994, Mhaskar, 1993, Pinkus, 1999, Mhaskar, 1996, Mhaskar and Micchelli, 1995]. More recently, work has focused on how deeper networks can have exponentially or super exponentially smaller size compared to shallower networks [Telgarsky, 2016a, Eldan and Shamir, 2016, Arora et al., 2018, Vardi et al., 2021]. See also Gribonval et al. [2021] for another perspective on the relationship between expressivity and architecture, and the references therein. We reiterate that the list of references above is far from complete. Mixed-integer optimization and machine learning. Over the past decade, a growing body of work has emerged that explores the interplay between mixed-integer optimization and machine learning. On the one hand, researchers have attempted to improve mixed-integer optimization algorithms by exploiting novel techniques from machine learning [Bonami et al., 2018, Gasse et al., 2019, He et al., 2014, Khalil et al., 2016, 2017, Kruber et al., 2017, Lodi and Zarpellon, 2017, Alvarez et al., 2017]; see also Bengio et al. [2020] for a recent survey. On the flip side, mixed-integer optimization techniques have been used to analyze function classes represented by neural networks [Serra et al., 2018, Anderson et al., 2020, Fischetti and Jo, 2017, Serra and Ramalingam, 2020, Serra et al., 2020]. In Section 2 below, we show another new use of mixed-integer optimization tools for understanding function classes represented by neural networks. Design of training algorithms. We believe that a better understanding of the function classes represented exactly by a neural architecture also has benefits in terms of understanding the complexity of the training problem. For instance, in a paper by Arora et al. [2018], an understanding of single layer ReLU networks enables the design of a globally optimal algorithm for solving the empirical risk minimization (ERM) problem, that runs in polynomial time in the number of data points in fixed dimension. See also Goel et al. [2017, 2018], Goel and Klivans [2019], Dey et al. [2020], Boob et al. [2020], Goel et al. [2021], Froese et al. [2021] for a similar lines of work. Neural Networks and Tropical Geometry. A recent stream of research involves the interplay between neural networks and tropical geometry. The piecewise linear functions computed by neural networks can be seen as (tropical quotients of) tropical polynomials. Linear regions of these functions correspond to vertices of so-called Newton polytopes associated with these tropical polynomials. 
Applications of this correspondence include bounding the number of linear regions of a neural network [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Montúfar et al., 2021] and understanding decision boundaries [Alfarra et al., 2020]. In Section 4 we present a novel application of tropical concepts to understand neural networks. We refer to Maragos et al. [2021] for a recent survey of connections between machine learning and tropical geometry, as well as to the textbooks by Maclagan and Sturmfels [2015] and Joswig [2022] for in-depth introductions to tropical geometry and tropical combinatorics. 2 Conditional Lower Bounds on Depth via Mixed-Integer Programming In this section, we provide a computer-aided proof that, under a natural, yet unproven assumption, the function f(x) := max{0, x1, x2, x3, x4} cannot be represented by a 3-layer NN. It is worth to note that, to the best of our knowledge, no CPWL function is known for which the non-existence of a 3-layer NN can be proven without additional assumption. For easier notation, we write x0 := 0. We first prove that we may restrict ourselves to NNs without biases. This holds true independent of our assumption, which we introduce afterwards. Definition 2.1. A function g : Rn → Rm is called positively homogeneous if g(λx) = λg(x) for all λ ≥ 0. Definition 2.2. For an NN given by affine transformations T (`)(x) = A(`)x + b(`), we define the corresponding homogenized NN to be the NN given by T̃ (`)(x) = A(`)x with all biases set to zero. Proposition 2.3. If an NN computes a positively homogeneous function, then the corresponding homogenized NN computes the same function. Proof. Let g : Rn0 → Rnk+1 be the function computed by the original NN and g̃ the one computed by the homogenized NN. Further, for any 0 ≤ ` ≤ k, let g(`) = T (`+1) ◦σ ◦T (`) ◦ · · ·◦T (2) ◦σ ◦T (1) be the function computed by the sub-NN consisting of the first (` + 1)-layers and let g̃(`) be the function computed by the corresponding homogenized sub-NN. We first show by induction on ` that the norm of ‖g(`)(x)− g̃(`)(x)‖ is bounded by a global constant that only depends on the parameters of the NN but not on x. For ` = 0, we obviously have ‖g(0)(x) − g̃(0)(x)‖ = ‖b(1)‖ =: C0, settling the induction base. For the induction step, let ` ≥ 1 and assume that ‖g(`−1)(x) − g̃(`−1)(x)‖ ≤ C`−1, where C`−1 only depends on the parameters of the NN. Since component-wise ReLU activation has Lipschitz constant 1, this implies ‖(σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x)‖ ≤ C`−1. Using any matrix norm that is compatible with the Euclidean vector norm, we obtain: ‖g(`)(x)− g̃(`)(x)‖ = ‖b(`+1) +A(`+1)((σ ◦ g(`−1))(x)− (σ ◦ g̃(`−1))(x))‖ ≤ ‖b(`+1)‖+ ‖A(`+1)‖ · C`−1 =: C` Since the right-hand side only depends on NN parameters, the induction is completed. Finally, we show that g = g̃. For the sake of contradiction, suppose that there is an x ∈ Rn0 with ‖g(x) − g̃(x)‖ = δ > 0. Let x′ := Ck+1δ x; then, by positive homogeneity, it follows that ‖g(x′)− g̃(x′)‖ = Ck + 1 > Ck, contradicting the property shown above. Thus, we have g = g̃. Since f = max{0, x1, x2, x3, x4} is positively homogeneous, Proposition 2.3 implies that, if there is a 3-layer NN computing f , then there also is one that has no biases. Therefore, in the remainder of this section, we only consider NNs without biases and assume implicitly that all considered CPWL functions are positively homogeneous. In particular, any piece of such a CPWL function is linear and not only affine linear. 
Observe that, for function f , the only points of non-differentiability (a.k.a. breakpoints) are at places where at least two of the five numbers x0 = 0, x1, x2, x3, and x4 are equal. Hence, if some neuron of an NN computing f introduces breakpoints at other places, these breakpoints must be canceled out by other neurons. Therefore, it is a natural assumption that such breakpoints are not introduced at all in the first place. To make this assumption formal, let Hij = {x ∈ R4 | xi = xj}, for 0 ≤ i < j ≤ 4, be ten hyperplanes in R4 and H = ⋃ 0≤i<j≤4Hij be the corresponding hyperplane arrangement. The regions or cells of H are defined to be the closures of the connected components of R4 \H . It is easy to see that these regions are in one-to-one correspondence to the 5! = 120 possible orderings of the five numbers x0 = 0, x1, x2, x3, and x4. More precisely, for a permutation π of the five indices [4]0 = {0, 1, 2, 3, 4}, the corresponding region is the polyhedron Cπ := {x ∈ R4 | xπ(0) ≤ xπ(1) ≤ xπ(2) ≤ xπ(3) ≤ xπ(4)}. We say that a (positively homogeneous) CPWL function g is H-conforming, if it is linear within any of these regions of H , that is, if it only has breakpoints where the relative ordering of the five values x0 = 0, x1, x2, x3, x4 changes. Moreover, an NN is said to be H-conforming if the output of each neuron contained in the NN is H-conforming. Equivalently, this is the case if and only if all intermediate functions σ ◦ T (`) ◦ σ ◦ T (`−1) ◦ · · · ◦ σ ◦ T (1), ` ∈ [k], are H-conforming. Now our assumption can be formally phrased as follows. Assumption 2.4. If there exists a 3-layer NN computing f(x) = max{0, x1, x2, x3, x4}, then there also exists one that is H-conforming. We use mixed-integer programming to prove the following theorem. Theorem 2.5. Under Assumption 2.4, there does not exist a 3-layer NN that computes the function f(x) = max{0, x1, x2, x3, x4}. Proof (Outline). We first study some geometric properties of the hyperplane arrangement H . This will show that each of the 120 cells of H is a simplicial polyhedral cone spanned by 4 extreme rays. In total, there are 30 such rays (because rays are used multiple times to span different cones). This implies that the set of H-conforming functions of type R4 → R is a 30-dimensional vector space and each function is uniquely determined by its values on the 30 rays. We then use linear algebra to show that the space of functions generated by H-conforming two-layer NNs is a 14-dimensional subspace. Moreover, with two hidden layers, at least 29 of the 30 dimensions can be generated and f is not contained in this 29-dimensional subspace. So, it remains the question whether the 14 dimensions producible with the first hidden layer can be combined in such a way that after applying a ReLU activation in the second hidden layer, we do not end up within the 29-dimensional subspace. We model this question as a mixed-interger program (MIP). Solving the MIP yields that we always end up within the 29-dimensional subspace, implying that f cannot be represented by a 3-layer NN. This provides a computational proof of Theorem 2.5. All details can be found in Appendix B. 3 Going Beyond 2k-Term Max Functions with k Layers In this section we prove the following result: Theorem 3.1. For any k ≥ 2, the class ReLU(k) is a strict superset of MAX(2k). In order to prove this theorem, we provide a specific function that is in ReLU(k) \MAX(2k) for any number of hidden layers k ≥ 2. 
The challenging part is to show that the function is in fact not contained in MAX(2k). Proposition 3.2. For any n ≥ 3, the function f : Rn → R, f(x) = max{0, x1, x2, . . . , xn−3, max{xn−2, xn−1}+ max{0, xn}} (2) cannot be written as a linear combination of n-term max functions. The above proposition means that it is not possible to write f(x) in the form f(x) = p∑ i=1 λi max{`i1(x), . . . , `in(x)} where p ∈ N, λ1, . . . , λp ∈ R, and `ij : Rn → R is an affine linear function for every i ∈ [p] and j ∈ [n]. (Note that max functions with less than n terms are also allowed, as some functions `ij may coincide.) Before we prove Proposition 3.2, we show that it implies Theorem 3.1. Proof of Theorem 3.1. For k ≥ 2, let n := 2k. By Proposition 3.2, function f defined in (2) is not contained in MAX(2k). It remains to show that it can be represented using a ReLU NN with k hidden layers. To see this, first observe that any of the n/2 = 2k−1 terms max{0, x1}, max{x2i, x2i+1} for i ∈ [n/2 − 2], and max{xn−2, xn−1} + max{0, xn} can be expressed by a one-hidden-layer NN since all these are (linear combinations of) 2-term max functions. Since f is the maximum of these 2k−1 terms, and since the maximum of 2k−1 numbers can be computed with k − 1 hidden layers [Arora et al., 2018], this implies that f is in ReLU(k). In order to prove Proposition 3.2, we need the concept of polyhedral complexes. A polyhedral complex P is a finite set of polyhedra such that each face of a polyhedron in P is also in P , and for two polyhedra P,Q ∈ P , their intersection P ∩Q is a common face of P and Q (possibly the empty face). Given a polyhedral complex P in Rn and an integer m ∈ [n], we let Pm denote the collection of all m-dimensional polyhedra in P . For a convex CPWL function f , we define its underlying polyhedral complex as follows: it is the unique polyhedral complex covering Rn (i.e., each point in Rn belongs to some polyhedron in P) whose n-dimensional polyhedra coincide with the domains of the (maximal) affine pieces of f . In particular, f is affinely linear within each P ∈ P , but not within any strict superset of a polyhedron in Pn. Exploiting properties of polyhedral complexes associated with CPWL functions, we prove the following proposition in Appendix C. Proposition 3.3. Let f0 : Rn → R be a convex CPWL function and let P0 be the underlying polyhedral complex. If there exists a hyperplane H ⊆ Rn such that the set T := ⋃{ F ∈ Pn−10 ∣∣ F ⊆ H} is nonempty and contains no line, then f0 cannot be expressed as a linear combination of n-term maxima of affine linear functions. This allows us to prove Proposition 3.2. Proof of Proposition 3.2. Observe that f (defined in (2)) has the alternate representation f(x) = max{0, x1, x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn} as a maximum of n+ 2 terms. Let P be its underlying polyhedral complex. Let the hyperplane H be defined by x1 = 0. Observe that any facet in Pn−1 is a polyhedron defined by two of the n + 2 terms that are equal and at least as large as each of the remaining n terms. Hence, the only facet that could possibly be contained in H is F := {x ∈ Rn | x1 = 0 ≥ x2, . . . , xn−3, xn−2, xn−1, xn−2 + xn, xn−1 + xn}. Note that F is indeed an (n− 1)-dimensional facet in Pn−1, because, for example, the full neighborhood of (0,−1, . . . ,−1) ∈ Rn intersected with H is contained in F . Finally, we need to show that F is pointed, that is, it contains no line. 
A well-known fact from polyhedral theory says that if there is any line in F with direction d ∈ Rn \ {0}, then d must satisfy the defining inequalities with equality. However, only the zero vector does this. Hence, F cannot contain a line. Therefore, when applying Proposition 3.3 to f with underlying polyhedral complex P and hyperplane H, we have T = F, which is nonempty and contains no line. Hence, f cannot be written as a linear combination of n-term maxima. 4 A Width Bound for NNs with Small Depth While the arguments in Arora et al. [2018] show that CPWLn = ReLUn(⌈log2(n + 1)⌉), they do not provide any bound on the width of the NN required to represent any particular continuous piecewise linear function. The purpose of this section is to prove that for fixed dimension n, the required width for exact, depth-minimal representation of a CPWL function can be polynomially bounded in the number p of affine pieces; in particular, it is at most p^{O(n^2)}. This is closely related to works that bound the number of linear pieces of an NN as a function of the size [Montúfar et al., 2014, Pascanu et al., 2014, Raghu et al., 2017, Montúfar et al., 2021]. It can also be seen as a counterpart, in the context of exact representations, to quantitative universal approximation theorems that bound the number of neurons required to achieve a certain approximation guarantee; see, e.g., Barron [1993, 1994], Mhaskar [1993], Pinkus [1999], Mhaskar [1996], Mhaskar and Micchelli [1995]. 4.1 The Convex Case We first derive our result for the case of convex CPWL functions and then use this to also prove the general nonconvex case. Our width bound is a consequence of the following theorem about convex CPWL functions, for which we are going to provide a geometric proof later. Theorem 4.1. Let f(x) = max{a_i^T x + b_i | i ∈ [p]} be a convex CPWL function defined on Rn. Then f can be written as f(x) = ∑_{S⊆[p], |S|≤n+1} c_S max{a_i^T x + b_i | i ∈ S} with coefficients c_S ∈ Z, for S ⊆ [p], |S| ≤ n + 1. For the convex case, this yields a stronger version of the theorem by Wang and Sun [2005] stating that any (not necessarily convex) CPWL function can be written as a linear combination of (n + 1)-term maxima. Theorem 4.1 is stronger in the sense that it guarantees that all pieces of the (n + 1)-term maxima must be pieces of the original function, making it possible to bound the total number of these (n + 1)-term maxima and, therefore, the size of an NN representing f. Theorem 4.2. Let f : Rn → R be a convex CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth ⌈log2(n + 1)⌉ + 1 and width O(p^{n+1}). Proof. Since the number of possible subsets S ⊆ [p] with |S| ≤ n + 1 is bounded by p^{n+1}, the theorem follows by Theorem 4.1 and the construction from Arora et al. [2018, Theorem 2.1]. Before we present the proof of Theorem 4.1, we show how we can generalize its consequences to the nonconvex case. 4.2 The General (Nonconvex) Case It is a well-known fact that every CPWL function can be expressed as a difference of two convex CPWL functions; see, e.g., Wang [2004, Theorem 1]. This allows us to derive the general case from the convex case. What we need, however, is to bound the number of affine pieces of the two convex CPWL functions in terms of the number of pieces of the original function. Therefore, we consider a specific decomposition for which such bounds can easily be achieved. Proposition 4.3. Let f : Rn → R be a CPWL function with p affine pieces.
Then, f can be written as f = g − h, where both g and h are convex CPWL functions with at most p^{2n+1} pieces. Proof. Suppose the p affine pieces of f are given by x ↦ a_i^T x + b_i, i ∈ [p]. Define the function h(x) := ∑_{1≤i<j≤p} max{a_i^T x + b_i, a_j^T x + b_j} and let g := f + h. Then, obviously, f = g − h. It remains to show that both g and h are convex CPWL functions with at most p^{2n+1} pieces. The convexity of h is clear by definition. Consider the \binom{p}{2} = p(p−1)/2 < p^2 hyperplanes given by a_i^T x + b_i = a_j^T x + b_j, 1 ≤ i < j ≤ p. They divide Rn into at most \binom{p^2}{n} + \binom{p^2}{n−1} + · · · + \binom{p^2}{0} ≤ p^{2n} regions (compare Edelsbrunner [1987, Theorem 1.3]), in each of which h is affine. In particular, h has at most p^{2n} ≤ p^{2n+1} pieces. Next, we show that g = f + h is convex. Intuitively, this holds because each possible breaking hyperplane of f is made convex by adding h. To make this formal, note that by the definition of convexity, it suffices to show that g is convex along each affine line. For this purpose, consider an arbitrary line x(t) = ta + b, t ∈ R, given by a ∈ Rn and b ∈ Rn. Let f̃(t) := f(x(t)), g̃(t) := g(x(t)), and h̃(t) := h(x(t)). We need to show that g̃ : R → R is a convex function. Observe that f̃, g̃, and h̃ are clearly one-dimensional CPWL functions with the property g̃ = f̃ + h̃. Hence, it suffices to show that g̃ is convex locally around each of its breakpoints. Let t ∈ R be an arbitrary breakpoint of g̃. If f̃ is already convex locally around t, then the same holds for g̃ as well, since h̃ inherits convexity from h. Now suppose that t is a nonconvex breakpoint of f̃. Then there exist two distinct pieces of f, indexed by i, j ∈ [p] with i ≠ j, such that f̃(t′) = min{a_i^T x(t′) + b_i, a_j^T x(t′) + b_j} for all t′ sufficiently close to t. By construction, h̃(t′) contains the summand max{a_i^T x(t′) + b_i, a_j^T x(t′) + b_j}. Thus, adding this summand to f̃ linearizes the nonconvex breakpoint of f̃, while adding all the other summands preserves convexity. In total, g̃ is convex locally around t, which finishes the proof that g is a convex function. Finally, observe that pieces of g = f + h are always intersections of pieces of f and h, for which we have only p · p^{2n} = p^{2n+1} possibilities. Having this, we may conclude the following. Theorem 4.4. Let f : Rn → R be a CPWL function with p affine pieces. Then f can be represented by a ReLU NN with depth ⌈log2(n + 1)⌉ + 1 and width O(p^{2n^2+3n+1}). Proof. Consider the decomposition f = g − h from Proposition 4.3. Using Theorem 4.2, we obtain that both g and h can be represented with the required depth ⌈log2(n + 1)⌉ + 1 and with width O((p^{2n+1})^{n+1}) = O(p^{2n^2+3n+1}). Thus, the same holds for f. 4.3 Extended Newton Polyhedra of Convex CPWL Functions For our proof of Theorem 4.1, we use a correspondence of convex CPWL functions with certain polyhedra, which are known as (extended) Newton polyhedra in tropical geometry [Maclagan and Sturmfels, 2015]. These relations between tropical geometry and neural networks have previously been applied to investigate expressivity of NNs; compare our references in the introduction. In order to formalize this correspondence, let CCPWLn ⊆ CPWLn be the set of convex CPWL functions of type Rn → R. For f(x) = max{a_i^T x + b_i | i ∈ [p]} in CCPWLn, we define its so-called extended Newton polyhedron to be N(f) := conv({(a_i, b_i) ∈ Rn × R | i ∈ [p]}) + cone({−e_{n+1}}) ⊆ Rn+1. We denote the set of all possible extended Newton polyhedra in Rn+1 as Newtn.
That is, Newtn is the set of (unbounded) polyhedra in Rn+1 that emerge from a polytope by adding the negative of the (n + 1)-st unit vector, −e_{n+1}, as an extreme ray. Hence, a set P ⊆ Rn+1 is an element of Newtn if and only if P can be written as P = conv({(a_i, b_i) ∈ Rn × R | i ∈ [p]}) + cone({−e_{n+1}}). Conversely, for a polyhedron P ∈ Newtn of this form, let F(P) ∈ CCPWLn be the function defined by F(P)(x) = max{a_i^T x + b_i | i ∈ [p]}. There is an intuitive way of thinking about the extended Newton polyhedron P of a convex CPWL function f: it consists of all hyperplane coefficients (a, b) ∈ Rn × R such that a^T x + b ≤ f(x) for all x ∈ Rn. This also explains why we add the extreme ray −e_{n+1}: decreasing b obviously maintains the property of a^T x + b being a lower bound on the function f. We need the notion of the Minkowski sum of two polyhedra P and Q: it is given as the set P + Q = {p + q | p ∈ P, q ∈ Q}. In fact, there is a one-to-one correspondence between elements of CCPWLn and Newtn, which is nicely compatible with some (functional and polyhedral) operations. This correspondence has been studied before in tropical geometry [Maclagan and Sturmfels, 2015, Joswig, 2022], convex geometry [Hiriart-Urruty and Lemaréchal, 1993] (indeed, N(f) is the negative of the epigraph of the convex conjugate of f), as well as the neural network literature [Zhang et al., 2018, Charisopoulos and Maragos, 2018, Alfarra et al., 2020, Montúfar et al., 2021]. We summarize the key findings of this correspondence relevant to our work in the following proposition: Proposition 4.5. Let n ∈ N and f1, f2 ∈ CCPWLn. Then it holds that (i) the functions N : CCPWLn → Newtn and F : Newtn → CCPWLn are well-defined, that is, their output is independent of the representation of the input by pieces or vertices, respectively, (ii) N and F are bijections and inverse to each other, (iii) N(max{f1, f2}) = conv(N(f1), N(f2)) := conv(N(f1) ∪ N(f2)), (iv) N(f1 + f2) = N(f1) + N(f2), where the + on the right-hand side is Minkowski addition. An algebraic way of phrasing this proposition is as follows: N and F are isomorphisms between the semirings (CCPWLn, max, +) and (Newtn, conv, +). 4.4 Outline of the Proof of Theorem 4.1 We prove Theorem 4.1 in full detail in Appendix D. The rough idea is as follows. Suppose we have a p-term max function f with p ≥ n + 2. By Proposition 4.5, f corresponds to a polyhedron P ∈ Newtn with at least n + 2 vertices. Applying a classical result from discrete geometry known as Radon's theorem allows us to carefully decompose P into a "signed" Minkowski sum of polyhedra in Newtn whose vertices are subsets of at most p − 1 out of the p vertices of P. (Here, "signed" means that some polyhedra may occur with "negative" coefficients in that sum, meaning that they are actually added to P instead of the other polyhedra; the corresponding CPWL functions will then have negative coefficients in the linear combination representing f.) Translating this back into the world of CPWL functions by Proposition 4.5 yields that f can be written as a linear combination of p′-term maxima with p′ < p, where each of them involves a subset of the p affine terms of f. We can then obtain Theorem 4.1 by iterating until every occurring maximum expression involves at most n + 1 terms. 4.5 Potential Approaches to Show Lower Bounds on the Width In light of the upper width bounds shown in this section, a natural question to ask is whether meaningful lower bounds can also be achieved.
This would mean constructing a family of CPWL functions with p pieces defined on Rn (with different values of p and n), for which we can prove that a large width is required to represent these functions with NNs of depth ⌈log2(n + 1)⌉ + 1. A trivial and not very satisfying answer follows, e.g., from Raghu et al. [2017] or Serra et al. [2018]: for fixed input dimension n, they show that a function computed by an NN with k hidden layers and width w has at most O(w^{kn}) pieces. For our setting, this means that an NN with logarithmic depth needs a width of at least Ω(p^{1/(n log n)}) to represent a function with p pieces. This is, of course, very far away from our upper bounds. Similar upper bounds on the number of pieces have been proven by many other authors and are often used to show depth-width tradeoffs [Montúfar et al., 2014, 2021, Pascanu et al., 2014, Telgarsky, 2016b, Arora et al., 2018]. However, there is a good reason why all these results only give rise to very trivial lower bounds for our setting: the focus is always on functions with very many pieces, which then, consequently, need many neurons to be represented (with small depth). However, since the lower bounds we strive for depend on the number of pieces, we would need to construct a family of functions with comparably few pieces that still need very many neurons to be represented. In general, it seems to be a tough task to argue why such functions should exist. A different approach could leverage methods from complexity theory, in particular from circuit complexity. Neural networks are basically arithmetic circuits with very special operations allowed. In fact, they can be seen as a tropical variant of arithmetic circuits. Showing circuit lower bounds is a notoriously difficult task in complexity theory, but maybe some conditional result (based on common conjectures similar to P ≠ NP) could be established. We think that the question whether our bounds are tight, or whether at least some non-trivial lower bounds on the width for NNs with logarithmic depth can be shown, is an exciting question for further research. 5 Discussion of Future Research Directions The most obvious and, at the same time, most exciting open research question is to prove or disprove Conjecture 1.1, or equivalently Conjecture 1.2. The first step could be to prove Assumption 2.4. The assumption is intuitive because every breakpoint introduced at other places needs to be canceled out later. Therefore, it is natural to assume that these breakpoints do not have to be introduced in the first place. However, this intuition does not seem to be enough for a formal proof because it could occur that additional breakpoints in intermediate steps, which are canceled out later, also influence the behavior of the function at other places where we allow breakpoints in the end. Another step towards resolving our conjecture may be to find an alternative proof of Theorem 2.5 not using Assumption 2.4. This might also be beneficial for generalizing our techniques to more hidden layers, since, while theoretically possible, a direct generalization is infeasible due to computational limitations. In light of our results from Section 3, it would be desirable to provide a complete characterization of the functions contained in ReLU(k). Another potential research goal is improving our upper bounds on the width from Section 4 and/or proving matching lower bounds as discussed in Section 4.5. Some more interesting research directions are the following: 1.
Establishing or strengthening our results for special classes of NNs like recurrent neural networks (RNNs) or convolutional neural networks (CNNs), 2. Using exact representation results to show more drastic depth-width tradeoffs compared to existing results in the literature, 3. Understanding how the class ReLU(k) changes when a polynomial upper bound is imposed on the width of the NN; see related work by Vardi et al. [2021]. Acknowledgments and Disclosure of Funding Christoph Hertrich gratefully acknowledges funding by DFG-GRK 2434 “Facets of Complexity”. Amitabh Basu gratefully acknowledges support from AFOSR Grant FA95502010341 and NSF Grant CCF2006587. Martin Skutella gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy — The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
1. What is the focus of the paper regarding real function representation? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the significance of the main conjecture and partial results? 4. What are the concerns regarding the requirement for equality in all points? 5. Are there any suggestions for future research directions or improvements?
Summary Of The Paper Review
Summary Of The Paper This paper studies the role of depth in exactly representing real functions by ReLU networks. Unlike the common ML setting, the focus is on neural nets that are equal at every point to the target function. Some partial results are given as well as a conjecture that networks of depth k+1 have strictly more expressivity than networks of depth k. Review Pros: It is an interesting question to study expressivity in the interpolation setting where pointwise equality is required. The main conjecture (that depth k+1 is strictly more expressive than depth k) is of interest and the partial results serve as a good intro to the kind of polyhedral theory that is likely to be involved in these problems. The paper is generally well written and conveys the questions well (although some improvements are possible: see below). I agree with the authors that finding connections from well-developed fields such as polyhedral combinatorics and integer programming to neural networks could yield interesting insights. Cons: The requirement for equality in all points is very strong and less useful in machine learning. One issue is that it is easy to come up with simple functions that cannot be represented in this model by a finite network. In general, it is easy to obtain strong lower bounds by considering functions with a very large number of linear regions and using the ideas of Montufar et al (2014). See also "Size and Depth Separation in Approximating Benign Functions with Neural Networks" by Vardi et al (2021). Therefore lower bounds in this model may be less informative with respect to the power or limitations of neural networks. It would be good if the authors could further justify their representational assumption from a machine learning perspective. Perhaps it is related to memorization and interpolation in neural networks. The authors do not impose any bounds on the width or size of the networks. It might be of interest to consider what happens if one restricts the size of the ReLU networks to be polynomial: arguably networks used in practice have strong limitations on their size. It could be that such a restriction makes the questions raised in this paper more difficult: see Vardi et al. With respect to the width bound in section 4 it would be nice to also have some kind of lower bound, or at least an estimate of what the right answer should be. I've found the first two paragraphs not very informative, reiterating common knowledge that does not seem to be very related to the body of the paper. Perhaps the authors could stress their novel contribution more in the first two paragraphs. Note after rebuttal: I think the paper studies an interesting question and introduces new techniques that might yield interesting insights. I thereby change my review to accept.
NIPS
Title SNAKE: Shape-aware Neural 3D Keypoint Field Abstract Detecting 3D keypoints from point clouds is important for shape reconstruction, while this work investigates the dual question: can shape reconstruction benefit 3D keypoint detection? Existing methods either seek salient features according to statistics of different orders or learn to predict keypoints that are invariant to transformation. Nevertheless, the idea of incorporating shape reconstruction into 3D keypoint detection is under-explored. We argue that this is restricted by former problem formulations. To this end, a novel unsupervised paradigm named SNAKE is proposed, which is short for shape-aware neural 3D keypoint field. Similar to recent coordinate-based radiance or distance fields, our network takes 3D coordinates as inputs and predicts implicit shape indicators and keypoint saliency simultaneously, thus naturally entangling 3D keypoint detection and shape reconstruction. We achieve superior performance on various public benchmarks, including standalone object datasets ModelNet40, KeypointNet, SMPL meshes and scene-level datasets 3DMatch and Redwood. Intrinsic shape awareness brings several advantages as follows. (1) SNAKE generates 3D keypoints consistent with human semantic annotation, even without such supervision. (2) SNAKE outperforms counterparts in terms of repeatability, especially when the input point clouds are down-sampled. (3) The generated keypoints allow accurate geometric registration, notably in a zero-shot setting. Codes are available at https://github.com/zhongcl-thu/SNAKE. ∗Corresponding author: Fuchun Sun. 1 Introduction 2D sparse keypoints play a vital role in reconstruction [32], recognition [22] and pose estimation [43], with scale invariant feature transform (SIFT) [19] being arguably the most important pre-Deep Learning (DL) computer vision algorithm. Although dense alignment using photometric or featuremetric losses is also successful in various domains [2, 36, 8], sparse keypoints are usually preferred due to compactness in storage/computation and robustness to illumination/rotation. Just like their 2D counterparts, 3D keypoints have also drawn a lot of attention from the community in both pre-DL [13, 35] and DL [15, 1, 38] literature, with various applications in reconstruction [45, 41] and recognition [26, 34]. However, detecting 3D keypoints from raw point cloud data is very challenging due to sampling sparsity. No matter how we obtain raw point clouds (e.g., through RGB-D cameras [40], stereo [4], or LIDAR [10]), they are only a discrete representation of the underlying 3D shape. This fact drives us to explore the question of whether jointly reconstructing underlying 3D shapes helps 3D keypoint detection. To our knowledge, former methods have seldom visited this idea. Traditional 3D keypoint detection methods are built upon some forms of first-order (e.g., density in intrinsic shape signature [44]) or second-order (e.g., curvature in mesh saliency [14]) statistics, including sophisticated reformulations like heat diffusion [33]. Modern learning-based methods rely upon the idea of consistency under geometric transformations, which can be imposed on either coordinates like USIP [15] or saliency values like D3Feat [1].
The most related method that studies joint reconstruction and 3D keypoint detection is a recent one named UKPGAN [38], yet it reconstructs input point cloud coordinates using an auxiliary decoder instead of the underlying shape manifold. Why is this promising idea under-explored in the literature? We argue the reason is that former problem formulations are not naturally applicable for reconstructing the underlying shape surface. Existing paradigms are conceptually illustrated in Fig. 1. USIP-like methods directly output keypoint coordinates while UKPGAN-like methods generate saliency values for input point clouds. In both cases, the representations are based upon discrete point clouds. By contrast, we reformulate the problem using coordinate-based networks, as inspired by the recent success of neural radiance fields [21, 17, 29] and neural distance fields [23, 31]. As shown in Fig. 1-c, our model predicts a keypoint saliency value for each continuous input query point coordinate q(x, y, z). A direct advantage of this new paradigm is the possibility of tightly entangling shape reconstruction and 3D keypoint detection. As shown in Fig. 1-c, besides the keypoint saliency decoder, we attach a parallel shape indicator decoder that predicts whether the query point q is occupied. The input to decoders is feature embedding generated by trilinearly sampling representations conditioned on input point clouds P . Imagine a feature embedding at the wing tip of an airplane, if it can be used to reconstruct the sharp curvature of the wing tip, it can be naturally detected as a keypoint with high repeatability. As such, our method is named as shape-aware neural 3D keypoint field, or SNAKE. Shape awareness, as the core feature of our new formulation, brings several advantages. (1) High repeatability. Repeatability is the most important metric for keypoint detection, i.e., an algorithm should detect the same keypoint locations in two-view point clouds. If the feature embedding can successfully reconstruct the same chair junction from two-view point clouds, they are expected to generate similar saliency scores. (2) Robustness to down-sampling. When input point clouds are sparse, UKPGAN-like frameworks can only achieve reconstruction up to the density of inputs. In contrast, our SNAKE formulation can naturally reconstruct the underlying surface up to any resolution because it exploits coordinate-based networks. (3) Semantic consistency. SNAKE reconstructs the shape across instances of the same category, thus naturally encouraging semantic consistency although no semantic annotation is used. For example, intermediate representations need to be similar for successfully reconstructing different human bodies because human shapes are intrinsically similar. To summarize, this study has the following two contributions: • We propose a new network for joint surface reconstruction and 3D keypoint detection based upon implicit neural representations. During training, we develop several self-supervised losses that exploit the mutual relationship between two decoders. During testing, we design a gradient-based optimization strategy for maximizing the saliency of keypoints. • Via extensive quantitative and qualitative evaluations on standalone object datasets ModelNet40, KeypointNet, SMPL meshes, and scene-level datasets 3DMatch and Redwood, we demonstrate that our shape-aware formulation achieves state-of-the-art performance under three settings: (1) semantic consistency; (2) repeatability; (3) geometric registration. 
2 Related Work 3D Keypoint Detector As discussed in the introduction, 3D keypoint detection methods can be mainly categorized into hand-crafted and learning-based. Popular hand-crafted approaches [44, 30, 28] employ local geometric statistics to generate keypoints. These methods usually fail to detect consistent keypoints due to the lack of global context, especially under real-world disturbances, such as density variations and noise. USIP [15] is a pioneering learning-based 3D keypoint detector that outperforms traditional methods by a large margin. However, the detected keypoints are not semantically salient, and the number of keypoints is fixed. Fernandez et al. [9] exploit the symmetry prior to generate semantically consistent keypoints. But this method is category-specific, limiting the generalization to unseen categories and scenes. Recently, UKPGAN [38] makes use of reconstruction to find semantics-aware 3D keypoints. Yet, it recovers explicit coordinates instead of implicit shape indicators. As shown in Fig. 1, different from these explicit keypoint detection methods, we propose a new detection framework using implicit neural fields, which naturally incorporates shape reconstruction. Implicit Neural Representation Our method exploits implicit neural representations to parameterize a continuous 3D keypoint field, which is inspired by recent studies of neural radiance fields [17, 21, 29] and neural distance fields [23, 31, 16, 42]. Unlike explicit 3D representations such as point clouds, voxels, or meshes, implicit neural functions can decode shapes continuously and learn complex shape topologies. To obtain fine geometry, ConvONet [24] proposes to use volumetric embeddings to get local instead of global features [20] of the input. Recently, similar local geometry preserving networks have shown great success for grasp pose generation [12] and articulated model estimation [11]. They utilize the synergies between their main tasks and 3D reconstruction using shared local representations and implicit functions. Unlike [11, 12], which learn geometry as an auxiliary task, our novel losses tightly couple surface occupancy and keypoint saliency estimates. 3 Method This section presents SNAKE, a shape-aware implicit network for 3D keypoint detection. SNAKE conditions two implicit decoders (for shape and keypoint saliency) on shared volumetric feature embeddings, as shown in Fig. 2-framework. To encourage repeatable, uniformly scattered, and sparse keypoints, we employ several self-supervised loss functions which entangle the predicted surface occupancy and keypoint saliency, as depicted in the middle panel of Fig. 2. During inference, query points with high saliency are further refined by gradient-based optimization since the implicit keypoint field is continuous and differentiable, which is displayed in Fig. 2-inference. 3.1 Network Architecture Point Cloud Encoder As fine geometry is essential to local keypoint detection, we adopt ConvONets [24], which can obtain local details and scale to large scenes, as the point cloud encoder, denoted fθen, for SNAKE. Given an input point cloud P ∈ RN×3, our encoder first processes it with PointNet++ [25] (or alternatives like [46]) to get a feature embedding Z ∈ RN×C1, where N and C1 are respectively the number of points and the dimension of the features. Then, these features are projected and aggregated into a structured volume Z′ ∈ RC1×H×W×D, where H, W and D are the number of voxels in three orthogonal axes.
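As an illustration of this projection-and-aggregation step, the following minimal sketch scatter-averages per-point features into a voxel grid. It is our own reconstruction rather than the authors' implementation: the normalization of coordinates to the unit cube, the choice of mean pooling, and the helper name points_to_volume are assumptions.

```python
import torch

def points_to_volume(xyz, feats, H=64, W=64, D=64):
    """Scatter per-point features into a dense voxel grid by averaging.

    xyz:   (N, 3) coordinates, assumed normalized to [0, 1] (assumption).
    feats: (N, C1) per-point features from the point encoder.
    Returns a (C1, H, W, D) volume; empty voxels stay zero.
    """
    N, C1 = feats.shape
    res = torch.tensor([H, W, D], dtype=torch.float32)
    idx = (xyz.clamp(0, 1 - 1e-6) * res).long()              # (N, 3) voxel indices
    flat = idx[:, 0] * W * D + idx[:, 1] * D + idx[:, 2]     # (N,) flattened index
    vol = torch.zeros(H * W * D, C1)
    cnt = torch.zeros(H * W * D, 1)
    vol.index_add_(0, flat, feats)                           # sum features per voxel
    cnt.index_add_(0, flat, torch.ones(N, 1))
    vol = vol / cnt.clamp(min=1)                             # mean over points per voxel
    return vol.view(H, W, D, C1).permute(3, 0, 1, 2)         # (C1, H, W, D)
```

The returned tensor plays the role of Z′ above; any other permutation-invariant pooling (e.g., max) could be substituted for the mean.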
The volumetric embeddings serve as input to the 3D UNet [6] to further integrate local and global information, resulting in the output G ∈ RC2×H×W×D, where C2 is the output feature dimension. More details can be found in the Appendix. Shape Implicit Decoder As shown in the top panel of Fig. 2, each point q ∈ R3 from a query set Q is encoded into a Ce-dimensional vector qe via a multi-layer perceptron that is denoted the positional encoder fθpos, i.e. qe = fθpos(q). Then, the local feature Gq is retrieved from the feature volume G according to the coordinate of q via trilinear interpolation. The generated qe and Gq are concatenated and mapped to the surface occupancy probability Probo(q|P) ∈ [0, 1] by the occupancy decoder fθo, as given in Eq. (1). If q is on the input surface, Probo(q|P) should be 1, and otherwise 0. In our formulation, the points inside the surface are also considered unoccupied. fθo(qe, Gq) → Probo(q|P) (1) Keypoint Implicit Decoder Most of the process here is the same as in the shape implicit decoder, except for the last mapping function. The goal of the keypoint implicit decoder fθs is to estimate the saliency of the query point q conditioned on input points P, which is denoted as Probs(q|P) ∈ [0, 1] and formulated by: fθs(qe, Gq) → Probs(q|P). (2) Here, the saliency of the query point q is the likelihood that it is a keypoint. 3.2 Implicit Field Training The implicit field is jointly optimized for surface occupancy and saliency estimation by several self-supervised losses. In contrast to prior works [12, 11] with a similar architecture that learn multiple tasks separately, we leverage the geometry knowledge from the shape field to enhance the performance of the keypoint field, as shown in the green arrows of Fig. 2. Specifically, the total loss is given by: L = Lo + Lr + Lm + Ls, (3) where Lo encourages the model to learn the shape from the sparse input, while Lr, Lm and Ls respectively help the predicted keypoints to be repeatable, located on the underlying surface, and sparse. Surface Occupancy Loss The binary cross-entropy loss lBCE between the predicted surface occupancy Probo(q|P) and the ground-truth label Prob_o^{gt} is used for shape recovery. The queries Q are randomly sampled from the whole volume of size H × W × D. The average over all queries is as follows: Lo = (1/|Q|) ∑_{q∈Q} lBCE(Probo(q|P), Prob_o^{gt}(q|P)), (4) where |Q| is the number of queries in Q. Repeatability Loss Detecting keypoints with high repeatability is essential for downstream tasks like registration between two-view point clouds. This indicates that keypoint positions should be covariant to rigid transformations of the input. To achieve a similar goal, 2D keypoint detection methods [27, 7, 43] enforce the similarity of corresponding local salient patches from multiple views. Inspired by them, we enforce the similarity of local overlapped saliency fields from two-view point clouds. Since the implicit field is continuous, we uniformly sample some values from a local field to represent the local saliency distribution. Specifically, as shown in the top and the middle part of Fig. 2, we build several local 3D Cartesian grids {Qi}_{i=1}^{n} with a resolution of Hl × Wl × Dl and a size of 1/U. We empirically set the resolution of Qi to be almost the same as the feature volume G. As non-occupied regions are uninformative, the center of Qi is randomly sampled from the input. Then, we perform a random rigid transformation T on P and Qi to generate TP and TQi.
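The two-view construction just described, i.e., sampling local query grids around input points and moving both the cloud and the grids with one shared random rigid transform, can be sketched as follows. This is an illustrative sketch under stated assumptions: the rotation-sampling scheme (QR of a Gaussian matrix), the translation scale, the grid parameters, and the helper name are ours and are not prescribed by the paper.

```python
import torch

def make_two_view_grids(P, n=8, res=5, size=0.25):
    """Sample n local query grids centered on input points and build a second view.

    P: (N, 3) input point cloud. Grid resolution `res` and edge length `size`
    stand in for Hl x Wl x Dl and 1/U. Returns the grids together with the
    rigidly transformed copies TP and Tgrids (same transform T for both).
    """
    # local grids centered at randomly chosen input points
    centers = P[torch.randint(P.shape[0], (n,))]                       # (n, 3)
    lin = torch.linspace(-size / 2, size / 2, res)
    offs = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), -1).reshape(-1, 3)
    grids = centers[:, None, :] + offs[None, :, :]                     # (n, res^3, 3)

    # one random rigid transform shared by the cloud and the grids
    Q, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(Q) < 0:
        Q = -Q                                                         # proper rotation
    t = 0.1 * torch.randn(3)                                           # assumed scale
    TP, Tgrids = P @ Q.T + t, grids @ Q.T + t
    return grids, TP, Tgrids
```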
Similar to [27], the cosine similarity, denoted as cosim, is exploited for the corresponding saliency grids of Qi and TQi: Lr = 1 − (1/n) ∑_{i=1}^{n} cosim(Probs(Qi|P), Probs(TQi|TP)). (5) Surface Constraint Loss As discussed in [15], 3D keypoints are encouraged to be close to the input. They propose a loss to constrain the distance between the keypoint and its nearest neighbor from the input. Yet, the generated keypoints are inconsistent when given the same input but with a different density. Thanks to the shape decoder, SNAKE can reconstruct the underlying surface of the input, which is robust to resolution changes. Hence, we use the surface occupancy probability to represent the inverse distance between the query and the input. As can be seen in Fig. 2-(surface constraint), we enforce the saliency of queries that are far from the input P to be close to 0, which is defined as Lm = (1/|Q|) ∑_{q∈Q} (1 − Probo(q|P)) · Probs(q|P). (6) Sparsity Loss Similar to 2D keypoint detection methods [27], we design a sparsity loss to avoid the trivial solution (Probs(Q|P) = 0) in Eqs. (5) and (6). As can be seen in Fig. 2, the goal is to maximize the local peakiness of the local saliency grids. As the saliency values of non-occupied points are enforced to 0 by Lm, we only impose the sparsity loss on the points with high surface occupancy probability. Hence, we derive the sparsity loss with the help of the decoded geometry by Ls = 1 − (1/n) ∑_{i=1}^{n} (max Probs(Q_i^1|P) − mean Probs(Q_i^1|P)), (7) where Q_i^1 = {q | q ∈ Qi, Probo(q|P) > 1 − thro}, thro ∈ (0, 0.5] is a constant, and n is the number of grids. It is noted that the spatial frequency of local peakiness depends on the grid size 1/U; see Section 4.4. Since the network is not only required to find sparse keypoints, but is also expected to recover the object shape, it generates high saliency at the critical parts of the input, like joint points of a desk and corners of a house, as shown in Fig. 2-result. 3.3 Explicit Keypoint Extraction A query point q whose saliency is above a predefined threshold thrs ∈ (0, 1) is selected as a keypoint at the inference stage. Although SNAKE can obtain the saliency of any query point, a higher-resolution query set results in a higher computational cost. Hence, as shown in Fig. 2-inference, we build a relatively low-resolution query set Qinfer which is evenly distributed in the input space and further refine the coordinates of Qinfer by gradient-based optimization on this energy function: E(Qinfer, P) = (1/|Qinfer|) ∑_{q∈Qinfer} (1 − Probs(q|P)). (8) Specifically, details of the explicit keypoint extraction algorithm are summarized in Alg. 1.
Algorithm 1 Optimization for Explicit Keypoint Extraction
Require: P, Qinfer, fθen, fθpos, fθo, fθs. Hyper-parameters: λ, J, thro, thrs.
Get initial Probo(Qinfer|P) according to Eq. (1).
Filter to get the new query set Qinfer′ = {q | q ∈ Qinfer, Probo(q|P) > 1 − thro}.
for j = 1 to J do
  Evaluate the energy function E(Qinfer′, P).
  Update coordinates with gradient descent: Qinfer′ = Qinfer′ − λ∇_{Qinfer′} E(Qinfer′, P).
end for
Sample final keypoints Qk = {q | q ∈ Qinfer′, Probs(q|P) > thrs}.
4 Experiment In this section, we evaluate SNAKE under three settings. First, we compare keypoint semantic consistency across different instances of the same category, using both rigid and deformable objects. Next, keypoint repeatability of the same instance under disturbances such as SE(3) transformation, noise, and downsampling is evaluated.
Finally, we inspect the point cloud registration task on the 3DMatch benchmark, notably in a zero-shot generalization setting. Besides, an ablation study is done to verify the effect of each design choice in SNAKE. The implementation details and hyper-parameters for SNAKE in three settings can be found in the Appendix. 4.1 Semantic Consistency Datasets The KeypointNet [39] dataset and meshes generated with the SMPL model [18] are utilized. KeypointNet has numerous human-annotated 3D keypoints for 16 object categories from ShapeNet [3]. The training set covers all categories that contain 5500 instances. Following [38], we evaluate 630 unseen instances from airplanes, chairs, and tables. SMPL is a skinned vertex-based deformable model that accurately captures body shape variations in natural human poses. We use the same strategy in [38] to generate both training and testing data. Metric Mean Intersection over Union (mIoU) is adopted to show whether the keypoints across intra-class instances have the same semantics or not. For KeypointNet, a predicted keypoint is considered the same as a human-annotated semantic point if the geodesic distance between them is under some threshold. Due to the lack of human-labeled keypoints on SMPL, we compare the keypoint consistency in a pair of human models. A keypoint in the first model is regarded semantically consistent if the distance between its corresponding point and the nearest keypoint in the second model is below some threshold. Evaluation and Results We compare SNAKE with random detection, hand-crafted detectors: ISS [44], Harris-3D [30] and SIFT-3D [28], and DL-based unsupervised detectors: USIP [15] and UKPGAN [38]. As USIP has not performed semantic consistency evaluations, we train the model with the code they provided. We follow the same protocols in [38] to filter the keypoints via NMS with a Euclidean radius of 0.1. Quantitative results are provided in Fig. 5-(a,e). SNAKE obtains higher mIoU than other methods under most thresholds on KeypointNet and SMPL. Qualitative results in Fig. 3 show our keypoints make good alignment with human annotations. Fig. 4 provides qualitative comparisons of semantically consistent keypoints on rigid and deformable objects. Owing to entangling shape reconstruction and keypoint detection, SNAKE can extract aligned representation for intra-class instances. Thus, our keypoints better outline the object shapes and are more semantically consistent under large shape variations. As shown in the saliency field projected slices, we can get symmetrical keypoints, although without any explicit constraint like the one used in [38]. Here, a projected slice is obtained by taking the maximum value of a given field along the projection direction. 4.2 Repeatability Datasets ModelNet40 [37] is a synthetic object-level dataset that contains 12,311 pre-aligned shapes from 40 categories, such as plane, guitar, and table. We adopt the official dataset split strategy. 3DMatch [41] and Redwood [5] are RGB-D reconstruction datasets for indoor scenes. Following [15], we train the model on 3DMatch and test it on Redwood to show the generalization performance. The training set contains around 19k samples and the test set consists of 207 point clouds. Metric We adopt the relative repeatability proposed in USIP [15] as the evaluation metric. Given two point clouds captured from different viewpoints, a keypoint in the first point cloud is repeatable if its distance to the nearest keypoint in the other point cloud is below a threshold ϵ. 
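For concreteness, this criterion can be evaluated as in the sketch below (our own illustration, not the official evaluation code; the ground-truth pose aligning the two views is assumed to be available from the benchmark). The relative measure defined next simply normalizes the resulting count by the number of detected keypoints.

```python
import torch

def count_repeatable(kp1, kp2, T21, eps):
    """Count keypoints of view 1 that reappear in view 2.

    kp1: (K1, 3) keypoints detected in the first point cloud.
    kp2: (K2, 3) keypoints detected in the second point cloud.
    T21: (4, 4) ground-truth pose mapping view-2 coordinates into view-1's frame
         (assumed to be provided by the benchmark).
    eps: distance threshold.
    """
    kp2_in_1 = kp2 @ T21[:3, :3].T + T21[:3, 3]          # align the second set
    dists = torch.cdist(kp1, kp2_in_1)                   # (K1, K2) pairwise distances
    repeatable = (dists.min(dim=1).values < eps).sum().item()
    return repeatable, repeatable / kp1.shape[0]         # count and normalized value
```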
Relative repeatability means the number of repeatable points divided by the total number of detected keypoints. Evaluation and Results Random detection, traditional methods and USIP are chosen as our baselines. Since UKPGAN does not provide pre-trained models on these two datasets, we do not report its results in Fig. 5 but make an additional comparison on KeypointNet, which is illustrated in the next paragraph. We use NMS to select the local peaky keypoints with a small radius (0.01 normalized distance on ModelNet40 and 0.04 meters on Redwood) for ours and baselines. We generate 64 keypoints in each sample and show the performance under different distance thresholds ϵ, downsample rates, and Gaussian noise scales. We set a fixed ϵ of 0.04 normalized distance and 0.2 meters on the ModelNet40 and Redwood dataset when testing under the last two cases. As shown in Fig. 5- (b,f), SNAKE outperforms state-of-the-art at most distance thresholds. We do not surpass USIP on Redwood in the lower thresholds. Note that it is challenging to get higher repeatability on Redwood because the paired inputs have very small overlapping regions. Fig. 5-(c,d,g,h) show the repeatability robustness to different downsample rates (d.r.) and Gaussian noise N(0, σ) levels. SNAKE gets the highest repeatability in most cases because the shape-aware strategy helps the model reason about the underlying shapes of the objects/scenes, which makes keypoints robust to the input variations. Fig. 6 provides visualization of object-level and scene-level keypoints of the original and disturbed inputs. SNAKE can generate more consistent keypoints than other methods under drastic input changes. We have tried to train UKPGAN (official implementation) on ModelNet40 and 3DMatch datasets from scratch but observed divergence under default hyper-parameters. As such, we provide a new experiment to compare their repeatability on the KeypointNet dataset, on which UKPGAN provided a pre-trained model. We randomly perform SE(3) transformation on the test point clouds to generate the second view point clouds. Then, we select top-32 salient keypoints with NMS (radius=0.03) in each sample and report the keypoint repeatability under different distance thresholds ϵ, downsample rates, and Gaussian noise scales. The results are summarized in Table 1, 2, which show that SNAKE achieves significant gains over UKPGAN in most cases. More discussions can be found in the Appendix. 4.3 Zero-shot Point Cloud Registration Datasets We follow the same protocols in [38] to train the model on KeypointNet and then directly test it on 3DMatch [41] dataset, evaluating how well two-view point clouds can be registered. The test set consists of 8 scenes which include some partially overlapped point cloud fragments and the ground truth SE(3) transformation matrices. Metric To evaluate geometric registration, we need both keypoint detectors and descriptors. Thus, we combine an off-the-shelf and state-of-the-art descriptor D3Feat [1] with our and other keypoint detectors. Following [38], we compute three metrics: Feature Matching Recall, Inlier Ratio, and Registration Recall for a pair of point clouds. Evaluation and Results As baselines, we choose random detection, ISS, SIFT-3D, UKPGAN, and D3Feat. Note that D3Feat is a task-specific learning-based detector trained on the 3DMatch dataset, thus not included in this zero-shot comparison. Ours and UKPGAN are trained on the synthetic object dataset KeypointNet only. 
The results are reported under different numbers of keypoints (i.e., 2500, 1000, 500, 250, 100). NMS with a radius of 0.05 m is used for D3Feat, UKPGAN, and ours. As shown in Table 3, SNAKE outperforms other methods consistently under all three metrics. For registration recall and inlier ratio, we achieve significant gains over UKPGAN and other traditional keypoint methods. Notably, when the number of keypoints is high, SNAKE even outperforms D3Feat, which has seen the target domain. Local shape primitives like planes, corners, or curves may be shared between objects and scenes, so our shape-aware formulation allows a superior generalization from objects to scenes. 4.4 Ablation Study Loss Function Table 4 reports the performance w.r.t. designs of loss functions. (Row 1) If the surface occupancy decoder is removed, the surface constraint cannot be performed according to Eq. (6), so they are removed simultaneously. Although the model could detect significantly repeatable keypoints on ModelNet40 [37], it fails to give semantically consistent keypoints on KeypointNet [39]. Fig. 7-a shows that SNAKE is unable to output symmetric and meaningful keypoints without the shape-aware technique. This indicates that repeatability cannot be the only criterion for keypoint detection if an implicit formulation is adopted. (Rows 2-4) Each loss function for training the keypoint field is vital for keypoint detection. Note that the model gives a trivial solution (0) for the saliency field and cannot extract distinctive points when removing the sparsity loss. Grid Size and Volumetric Resolution The grid size 1/U controls the number of keypoints because Ls enforces the model to predict a single local maximum per grid of size (1/U)^3. Fig. 7-b shows different saliency field slices obtained from the same input with various 1/U. When U is small, SNAKE outputs fewer salient responses, and more for larger values of U. We also give the relative repeatability results on ModelNet40 under distance threshold ϵ = 0.04 in Table 5, indicating that U = 6 gives the best results. From Table 6, we can see that higher resolution improves performance. However, the performance drops when it reaches a resolution of 80. A potential reason is as follows: the number of queries in a single grid increases when the resolution becomes higher, as mentioned in Section 3.2. In this case, finer details make the input to the cosine similarity too long and contain spurious values.
Table 5: Impact of different local grid sizes used in Lo and Ls on ModelNet40.
U: 4, 6, 8, 10
rr. (%) (ϵ=0.04): 0.79, 0.85, 0.79, 0.77
Table 6: Impact of different global volumetric resolutions on ModelNet40.
H (= W = D): 32, 48, 64, 80
rr. (%) (ϵ=0.04): 0.62, 0.79, 0.85, 0.78
Optimization Step and Learning Rate Fig. 7-c shows the importance of optimization (see Alg. 1) for refining keypoint coordinates on the ModelNet40 dataset. It is noted that too many optimization steps will not bring more gains but increase the computational overhead. In this paper, we set the number of update steps to 10. The learning rate for optimization is also key to the final result. When the learning rate is set to 0.1, 0.01, 0.001 and 0.0001, the relative repeatability (%) values on the ModelNet40 dataset with the same experimental settings as Table 6 are 0.002, 0.622, 0.854 and 0.826, respectively. In addition, a comparison of the computation cost of the baselines and ours can be found in the Appendix. 5 Conclusion and Discussion We propose SNAKE, a method for 3D keypoint detection based on implicit neural representations.
Extensive evaluations show our keypoints are semantically consistent, repeatable, robust to downsample, and generalizable to unseen scenarios. Limitations. The optimization for keypoint extraction during inference requires considerable computational cost and time, which may not be applicable for use in scenarios that require real-time keypoint detection. Negative Social Impact. The industry may use the method for pose estimation in autonomous robots. Since our method is not perfect, it may lead to wrong decision making and potential human injury. 6 Acknowledgments This research is jointly supported by following projects: the Scientific Innovation 2030 Major Project for New Generation of AI under Grant No.2020AAA0107300, Ministry of Science and Technology of the People’s Republic of China; the Key Field R&D Program of Guangdong Province (No.2021B0101410002); Sino-German Collaborative Research Project Crossmodal Learning (NSFC 61621136008/DFG SFB/TRR169); the National Natural Science Foundation of China (No.62006137); Beijing Outstanding Young Scientist Program (No.BJJWZYJH012019100020098). We would like to thank Pengfei Li for discussions about implicit field learning. We would also like to thank the anonymous reviewers for their insightful comments.
1. What is the focus and contribution of the paper on 3D keypoint detection? 2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness? 3. What are the weaknesses of the paper regarding its comparisons with other works and experimental protocols? 4. Do you have any concerns about the suitability of the D3Feat descriptor for evaluating the proposed method? 5. What are the limitations of relying on a specific descriptor for registration experiments?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper presents a novel unsupervised method SNAKE to detect 3D keypoints from point clouds based on implicit neural representations. The key idea is to combine shape reconstruction and keypoint detection during training. Experiments show that jointly learning 3D shapes and key points improves semantical consistency, better repeatability under disturbance, and accurate geometric registration under zero-shot settings. Strengths And Weaknesses Though the idea of combining reconstruction and saliency prediction is not new (UKPGAN also reconstructs shape), this paper takes advantage of implicit representation and shows advantages over the GAN-based method. The proposed method is simple but effective. The proposed four loss functions are intuitive and ablations show that they are important. The experiments are mostly thorough and show good qualitative and quantitative results compared to competitive baselines. The paper is well written and the figures are easy to understand. Missing comparison with UKPGAN in section 4.2. UKPGAN is a competitive baseline and its code is publicly available. Post-rebuttal: My final rating is weak accept. Thanks to the authors and reviewers for their effort. The rebuttal mostly answers my questions. I think the paper is novel and has enough difference from UKPGAN -- the method does not have a GAN and the experiment results are better. However, I do think D3Feat descriptor may not be the best way to evaluate the proposed method in experiments since it is designed for D3Feat detector and may not be discriminative enough for different feature detectors. Since the prior works are using this protocol, I won't criticize too much for following it. Questions L224 states that UKPGAN is not involved in the experiments due to the absence of pretrained model. Since their code is publicly available, I wonder if training from scratch is possible? If not, I wonder if it is possible to compare alternative datasets? The registration experiments (sec 4.3) rely on D3Feat descriptors for the detected keypoints. I am aware that this descriptor is commonly used in the UKPGAN’s experiment. I am interested in understanding the bottleneck of the problem. Since D3Feat detector + D3Feat descriptor still serves as an upper bound in this experiment, I wonder whether it is possible that certain detector requires specifically designed descriptors in order to work perfectly in the registration problem. D3Feat descriptor may not provide the best features for SNAKE or UKPGAN. Limitations Yes.
NIPS
Title SNAKE: Shape-aware Neural 3D Keypoint Field Abstract Detecting 3D keypoints from point clouds is important for shape reconstruction, while this work investigates the dual question: can shape reconstruction benefit 3D keypoint detection? Existing methods either seek salient features according to statistics of different orders or learn to predict keypoints that are invariant to transformation. Nevertheless, the idea of incorporating shape reconstruction into 3D keypoint detection is under-explored. We argue that this is restricted by former problem formulations. To this end, a novel unsupervised paradigm named SNAKE is proposed, which is short for shape-aware neural 3D keypoint field. Similar to recent coordinate-based radiance or distance field, our network takes 3D coordinates as inputs and predicts implicit shape indicators and keypoint saliency simultaneously, thus naturally entangling 3D keypoint detection and shape reconstruction. We achieve superior performance on various public benchmarks, including standalone object datasets ModelNet40, KeypointNet, SMPL meshes and scene-level datasets 3DMatch and Redwood. Intrinsic shape awareness brings several advantages as follows. (1) SNAKE generates 3D keypoints consistent with human semantic annotation, even without such supervision. (2) SNAKE outperforms counterparts in terms of repeatability, especially when the input point clouds are down-sampled. (3) the generated keypoints allow accurate geometric registration, notably in a zero-shot setting. Codes are available at https://github.com/zhongcl-thu/SNAKE. 1 Introduction 2D sparse keypoints play a vital role in reconstruction [32], recognition [22] and pose estimation [43], with scale invariant feature transform (SIFT) [19] being arguably the most important preDeep Learning (DL) computer vision algorithm. Altough dense alignment using photometric or featuremetric losses is also successful in various domains [2, 36, 8], sparse keypoints are usually preferred due to compactness in storage/computation and robustness to illumination/rotation. Just like their 2D counterparts, 3D keypoints have also drawn a lot of attention from the community in both pre-DL [13, 35] and DL [15, 1, 38] literature, with various applications in reconstruction [45, 41] and recognition[26, 34]. However, detecting 3D keypoints from raw point cloud data is very challenging due to sampling sparsity. No matter how we obtain raw point clouds (e.g., through RGB-D cameras [40], stereo [4], or LIDAR [10]), they are only a discrete representation of the underlying 3D shape. This fact drives us to explore the question of whether jointly reconstructing underlying 3D shapes helps 3D ∗Corresponding author: Fuchun Sun. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). keypoint detection. To our knowledge, former methods have seldom visited this idea. Traditional 3D keypoint detection methods are built upon some forms of first-order (e.g., density in intrinsic shape signature [44]) or second-order (e.g., curvature in mesh saliency [14]) statistics, including sophisticated reformulation like heat diffusion [33]. Modern learning-based methods rely upon the idea of consistency under geometric transformations, which can be imposed on either coordinate like USIP [15] or saliency value like D3Feat [1]. 
The most related method that studies joint reconstruction and 3D keypoint detection is a recent one named UKPGAN [38], yet it reconstructs input point cloud coordinates using an auxiliary decoder instead of the underlying shape manifold. Why is this promising idea under-explored in the literature? We argue the reason is that former problem formulations are not naturally applicable for reconstructing the underlying shape surface. Existing paradigms are conceptually illustrated in Fig. 1. USIP-like methods directly output keypoint coordinates while UKPGAN-like methods generate saliency values for input point clouds. In both cases, the representations are based upon discrete point clouds. By contrast, we reformulate the problem using coordinate-based networks, as inspired by the recent success of neural radiance fields [21, 17, 29] and neural distance fields [23, 31]. As shown in Fig. 1-c, our model predicts a keypoint saliency value for each continuous input query point coordinate q(x, y, z). A direct advantage of this new paradigm is the possibility of tightly entangling shape reconstruction and 3D keypoint detection. As shown in Fig. 1-c, besides the keypoint saliency decoder, we attach a parallel shape indicator decoder that predicts whether the query point q is occupied. The input to decoders is feature embedding generated by trilinearly sampling representations conditioned on input point clouds P . Imagine a feature embedding at the wing tip of an airplane, if it can be used to reconstruct the sharp curvature of the wing tip, it can be naturally detected as a keypoint with high repeatability. As such, our method is named as shape-aware neural 3D keypoint field, or SNAKE. Shape awareness, as the core feature of our new formulation, brings several advantages. (1) High repeatability. Repeatability is the most important metric for keypoint detection, i.e., an algorithm should detect the same keypoint locations in two-view point clouds. If the feature embedding can successfully reconstruct the same chair junction from two-view point clouds, they are expected to generate similar saliency scores. (2) Robustness to down-sampling. When input point clouds are sparse, UKPGAN-like frameworks can only achieve reconstruction up to the density of inputs. In contrast, our SNAKE formulation can naturally reconstruct the underlying surface up to any resolution because it exploits coordinate-based networks. (3) Semantic consistency. SNAKE reconstructs the shape across instances of the same category, thus naturally encouraging semantic consistency although no semantic annotation is used. For example, intermediate representations need to be similar for successfully reconstructing different human bodies because human shapes are intrinsically similar. To summarize, this study has the following two contributions: • We propose a new network for joint surface reconstruction and 3D keypoint detection based upon implicit neural representations. During training, we develop several self-supervised losses that exploit the mutual relationship between two decoders. During testing, we design a gradient-based optimization strategy for maximizing the saliency of keypoints. • Via extensive quantitative and qualitative evaluations on standalone object datasets ModelNet40, KeypointNet, SMPL meshes, and scene-level datasets 3DMatch and Redwood, we demonstrate that our shape-aware formulation achieves state-of-the-art performance under three settings: (1) semantic consistency; (2) repeatability; (3) geometric registration. 
2 Related Work 3D Keypoint Detector As discussed in the introduction, 3D keypoint detection methods can be mainly categorized into hand-crafted and learning-based. Popular hand-crafted approaches [44, 30, 28] employ local geometric statistics to generate keypoints. These methods usually fail to detect consistent keypoints due to the lack of global context, especially under real-world disturbances such as density variations and noise. USIP [15] is a pioneering learning-based 3D keypoint detector that outperforms traditional methods by a large margin. However, the detected keypoints are not semantically salient, and the number of keypoints is fixed. Fernandez et al. [9] exploit the symmetry prior to generate semantically consistent keypoints. But this method is category-specific, limiting the generalization to unseen categories and scenes. Recently, UKPGAN [38] makes use of reconstruction to find semantics-aware 3D keypoints. Yet, it recovers explicit coordinates instead of implicit shape indicators. As shown in Fig. 1, different from these explicit keypoint detection methods, we propose a new detection framework using implicit neural fields, which naturally incorporates shape reconstruction. Implicit Neural Representation Our method exploits implicit neural representations to parameterize a continuous 3D keypoint field, which is inspired by recent studies of neural radiance fields [17, 21, 29] and neural distance fields [23, 31, 16, 42]. Unlike explicit 3D representations such as point clouds, voxels, or meshes, implicit neural functions can decode shapes continuously and learn complex shape topologies. To obtain fine geometry, ConvONet [24] proposes to use volumetric embeddings to get local instead of global features [20] of the input. Recently, similar local geometry preserving networks have shown great success for grasp pose generation [12] and articulated model estimation [11]. They utilize the synergies between their main tasks and 3D reconstruction using shared local representations and implicit functions. Unlike [11, 12], which learn geometry as an auxiliary task, our novel losses tightly couple surface occupancy and keypoint saliency estimates. 3 Method This section presents SNAKE, a shape-aware implicit network for 3D keypoint detection. SNAKE conditions two implicit decoders (for shape and keypoint saliency) on shared volumetric feature embeddings, as shown in Fig. 2-framework. To encourage repeatable, uniformly scattered, and sparse keypoints, we employ several self-supervised loss functions which entangle the predicted surface occupancy and keypoint saliency, as depicted in the middle panel of Fig. 2. During inference, query points with high saliency are further refined by gradient-based optimization since the implicit keypoint field is continuous and differentiable, as displayed in Fig. 2-inference. 3.1 Network Architecture Point Cloud Encoder As fine geometry is essential to local keypoint detection, we adopt ConvONets [24], which can obtain local details and scale to large scenes, as the point cloud encoder, denoted $f_{\theta_{en}}$, for SNAKE. Given an input point cloud $P \in \mathbb{R}^{N \times 3}$, our encoder first processes it with PointNet++ [25] (or alternatives like [46]) to get a feature embedding $Z \in \mathbb{R}^{N \times C_1}$, where $N$ and $C_1$ are respectively the number of points and the dimension of the features. Then, these features are projected and aggregated into a structured volume $Z' \in \mathbb{R}^{C_1 \times H \times W \times D}$, where $H$, $W$ and $D$ are the number of voxels along three orthogonal axes.
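To make this point-to-volume step concrete, a minimal sketch of one possible aggregation scheme is given below. It is not the authors' code: the function name, the assumption that coordinates are pre-normalized to the unit cube, and the choice of mean pooling per voxel are all our own; ConvONet-style encoders commonly use such a scatter-and-pool step before the 3D UNet described next.

```python
# Hypothetical sketch of projecting per-point features into a voxel volume Z'
# (names and the mean-pooling choice are assumptions, not the official SNAKE code).
import torch

def points_to_volume(P: torch.Tensor, Z: torch.Tensor, H: int, W: int, D: int) -> torch.Tensor:
    """P: (N, 3) coordinates normalized to [0, 1]; Z: (N, C1) point features.
    Returns a (C1, H, W, D) volume with mean-pooled features per voxel."""
    N, C1 = Z.shape
    idx = (P * P.new_tensor([H, W, D])).long()            # voxel index of each point
    idx[:, 0] = idx[:, 0].clamp(0, H - 1)
    idx[:, 1] = idx[:, 1].clamp(0, W - 1)
    idx[:, 2] = idx[:, 2].clamp(0, D - 1)
    flat = (idx[:, 0] * W + idx[:, 1]) * D + idx[:, 2]    # flattened voxel id, shape (N,)

    vol = Z.new_zeros(H * W * D, C1)
    cnt = Z.new_zeros(H * W * D, 1)
    vol.index_add_(0, flat, Z)                            # sum features falling in each voxel
    cnt.index_add_(0, flat, torch.ones_like(Z[:, :1]))    # count points per voxel
    vol = vol / cnt.clamp(min=1.0)                        # mean pooling; empty voxels stay zero
    return vol.t().reshape(C1, H, W, D)
```

In practice the projection could also use max pooling or a learned aggregation; mean pooling is shown here only for brevity.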
The volumetric embeddings serve as input to a 3D UNet [6] to further integrate local and global information, resulting in the output $G \in \mathbb{R}^{C_2 \times H \times W \times D}$, where $C_2$ is the output feature dimension. More details can be found in the Appendix. Shape Implicit Decoder As shown in the top panel of Fig. 2, each point $q \in \mathbb{R}^3$ from a query set $Q$ is encoded into a $C_e$-dimensional vector $q_e$ via a multi-layer perceptron denoted the positional encoder $f_{\theta_{pos}}$, i.e., $q_e = f_{\theta_{pos}}(q)$. Then, the local feature $G_q$ is retrieved from the feature volume $G$ according to the coordinate of $q$ via trilinear interpolation. The generated $q_e$ and $G_q$ are concatenated and mapped to the surface occupancy probability $Prob_o(q|P) \in [0, 1]$ by the occupancy decoder $f_{\theta_o}$, as given in Eq. (1). If $q$ is on the input surface, $Prob_o(q|P)$ should be 1, and 0 otherwise. In our formulation, the points inside the surface are also considered unoccupied. $f_{\theta_o}(q_e, G_q) \rightarrow Prob_o(q|P)$ (1) Keypoint Implicit Decoder Most of the process here is the same as in the shape implicit decoder, except for the last mapping function. The goal of the keypoint implicit decoder $f_{\theta_s}$ is to estimate the saliency of the query point $q$ conditioned on the input points $P$, which is denoted as $Prob_s(q|P) \in [0, 1]$ and formulated by: $f_{\theta_s}(q_e, G_q) \rightarrow Prob_s(q|P)$. (2) Here, the saliency of the query point $q$ is the likelihood that it is a keypoint. 3.2 Implicit Field Training The implicit field is jointly optimized for surface occupancy and saliency estimation by several self-supervised losses. In contrast to former works [12, 11] with a similar architecture that learn multiple tasks separately, we leverage the geometry knowledge from the shape field to enhance the performance of the keypoint field, as shown by the green arrows of Fig. 2. Specifically, the total loss is given by: $L = L_o + L_r + L_m + L_s$, (3) where $L_o$ encourages the model to learn the shape from the sparse input, and $L_r$, $L_m$ and $L_s$ respectively help the predicted keypoints to be repeatable, located on the underlying surface, and sparse. Surface Occupancy Loss The binary cross-entropy loss $l_{BCE}$ between the predicted surface occupancy $Prob_o(q|P)$ and the ground-truth label $Prob_o^{gt}$ is used for shape recovery. The queries $Q$ are randomly sampled from the whole volume of size $H \times W \times D$. The average over all queries is as follows: $L_o = \frac{1}{|Q|} \sum_{q \in Q} l_{BCE}\big(Prob_o(q|P), Prob_o^{gt}(q|P)\big)$, (4) where $|Q|$ is the number of queries in $Q$. Repeatability Loss Detecting keypoints with high repeatability is essential for downstream tasks like registration between two-view point clouds. That is, the positions of keypoints should be covariant to rigid transformations of the input. To achieve a similar goal, 2D keypoint detection methods [27, 7, 43] enforce the similarity of corresponding local salient patches from multiple views. Inspired by them, we enforce the similarity of local overlapping saliency fields from two-view point clouds. Since the implicit field is continuous, we uniformly sample some values from a local field to represent the local saliency distribution. Specifically, as shown in the top and middle parts of Fig. 2, we build several local 3D Cartesian grids $\{Q_i\}_{i=1}^{n}$ with a resolution of $H_l \times W_l \times D_l$ and a size of $1/U$. We empirically set the resolution of $Q_i$ to be almost the same as that of the feature volume $G$. As non-occupied regions are uninformative, the center of $Q_i$ is randomly sampled from the input. Then, we perform a random rigid transformation $T$ on $P$ and $Q_i$ to generate $TP$ and $TQ_i$.
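Before formalizing the similarity term over these transformed grids in Eq. (5) below, it may help to see the shared per-query pathway of Eqs. (1)-(2), i.e., trilinear sampling of the feature volume G at a query q, positional encoding, and the two MLP heads, spelled out. The following is a minimal, hypothetical PyTorch-style sketch, not the released implementation: the module names, layer widths, and the use of grid_sample for trilinear interpolation are our own assumptions.

```python
# Hypothetical sketch of the shared query pathway of Eqs. (1)-(2)
# (module names, layer sizes and the grid_sample lookup are assumptions, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryDecoder(nn.Module):
    def __init__(self, c2: int = 32, ce: int = 32, hidden: int = 128):
        super().__init__()
        # f_theta_pos: positional encoder mapping (x, y, z) to a Ce-dim vector q_e
        self.pos_enc = nn.Sequential(nn.Linear(3, ce), nn.ReLU(), nn.Linear(ce, ce))
        # f_theta_o and f_theta_s: occupancy and saliency heads on [q_e, G_q]
        self.occ_head = nn.Sequential(nn.Linear(ce + c2, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.sal_head = nn.Sequential(nn.Linear(ce + c2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, G: torch.Tensor, q: torch.Tensor):
        """G: (1, C2, D, H, W) feature volume; q: (M, 3) queries scaled to [-1, 1]
        in grid_sample's (x, y, z) ordering."""
        grid = q.view(1, 1, 1, -1, 3)
        Gq = F.grid_sample(G, grid, mode='bilinear', align_corners=True)  # trilinear lookup, (1, C2, 1, 1, M)
        Gq = Gq.view(G.shape[1], -1).t()                                  # (M, C2)
        qe = self.pos_enc(q)                                              # (M, Ce)
        feat = torch.cat([qe, Gq], dim=-1)
        prob_o = torch.sigmoid(self.occ_head(feat)).squeeze(-1)          # Eq. (1): Prob_o(q | P)
        prob_s = torch.sigmoid(self.sal_head(feat)).squeeze(-1)          # Eq. (2): Prob_s(q | P)
        return prob_o, prob_s
```

During training, the saliency values entering the repeatability term below are obtained exactly through such a pathway, once for the grids Q_i conditioned on P and once for the transformed grids TQ_i conditioned on TP.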
Similar to [27], the cosine similarity, denoted as $\mathrm{cosim}$, is exploited for the corresponding saliency grids of $Q_i$ and $TQ_i$: $L_r = 1 - \frac{1}{n}\sum_{i=1}^{n} \mathrm{cosim}\big(Prob_s(Q_i|P), Prob_s(TQ_i|TP)\big)$. (5) Surface Constraint Loss As discussed in [15], 3D keypoints are encouraged to be close to the input. They propose a loss to constrain the distance between a keypoint and its nearest neighbor from the input. Yet, the generated keypoints are inconsistent when given the same input but with a different density. Thanks to the shape decoder, SNAKE can reconstruct the underlying surface of the input, which is robust to resolution changes. Hence, we use the surface occupancy probability to represent the inverse distance between the query and the input. As can be seen in Fig. 2-(surface constraint), we enforce the saliency of queries that are far from the input $P$ to be close to 0, which is defined as $L_m = \frac{1}{|Q|}\sum_{q \in Q}\big(1 - Prob_o(q|P)\big) \cdot Prob_s(q|P)$. (6) Sparsity Loss Similar to 2D keypoint detection methods [27], we design a sparsity loss to avoid the trivial solution ($Prob_s(Q|P) = 0$) in Eqs. (5) and (6). As can be seen in Fig. 2, the goal is to maximize the local peakiness of the local saliency grids. As the saliency values of non-occupied points are enforced to 0 by $L_m$, we only impose the sparsity loss on the points with high surface occupancy probability. Hence, we derive the sparsity loss with the help of the decoded geometry as $L_s = 1 - \frac{1}{n}\sum_{i=1}^{n}\big(\max Prob_s(Q_i^1|P) - \mathrm{mean}\, Prob_s(Q_i^1|P)\big)$, (7) where $Q_i^1 = \{q \mid q \in Q_i,\ Prob_o(q|P) > 1 - thr_o\}$, $thr_o \in (0, 0.5]$ is a constant, and $n$ is the number of grids. It is noted that the spatial frequency of local peakiness depends on the grid size $1/U$; see Section 4.4. Since the network is not only required to find sparse keypoints but also expected to recover the object shape, it generates high saliency at the critical parts of the input, like the joint points of a desk and the corners of a house, as shown in Fig. 2-result. 3.3 Explicit Keypoint Extraction Query points $q$ whose saliency is above a predefined threshold $thr_s \in (0, 1)$ are selected as keypoints at the inference stage. Although SNAKE can obtain the saliency of any query point, a higher-resolution query set results in a higher computational cost. Hence, as shown in Fig. 2-inference, we build a relatively low-resolution query set $Q_{infer}$ that is evenly distributed in the input space and further refine the coordinates of $Q_{infer}$ by gradient-based optimization of the following energy function: $E(Q_{infer}, P) = \frac{1}{|Q_{infer}|}\sum_{q \in Q_{infer}} \big(1 - Prob_s(q|P)\big)$. (8) Details of the explicit keypoint extraction procedure are summarized in Alg. 1.
Algorithm 1 Optimization for Explicit Keypoint Extraction
Require: $P$, $Q_{infer}$, $f_{\theta_{en}}$, $f_{\theta_{pos}}$, $f_{\theta_o}$, $f_{\theta_s}$. Hyper-parameters: $\lambda$, $J$, $thr_o$, $thr_s$.
1: Get the initial $Prob_o(Q_{infer}|P)$ according to Eq. (1).
2: Filter to get the new query set $Q_{infer'} = \{q \mid q \in Q_{infer},\ Prob_o(q|P) > 1 - thr_o\}$.
3: for $j = 1$ to $J$ do
4: Evaluate the energy function $E(Q_{infer'}, P)$.
5: Update coordinates with gradient descent: $Q_{infer'} = Q_{infer'} - \lambda \nabla_{Q_{infer'}} E(Q_{infer'}, P)$.
6: end for
7: Sample the final keypoints $Q_k = \{q \mid q \in Q_{infer'},\ Prob_s(q|P) > thr_s\}$.
4 Experiment In this section, we evaluate SNAKE under three settings. First, we compare keypoint semantic consistency across different instances of the same category, using both rigid and deformable objects. Next, keypoint repeatability of the same instance under disturbances such as SE(3) transformations, noise and downsampling is evaluated.
Finally, we inspect the point cloud registration task on the 3DMatch benchmark, notably in a zero-shot generalization setting. In addition, an ablation study is conducted to verify the effect of each design choice in SNAKE. The implementation details and hyper-parameters for SNAKE in the three settings can be found in the Appendix. 4.1 Semantic Consistency Datasets The KeypointNet [39] dataset and meshes generated with the SMPL model [18] are utilized. KeypointNet has numerous human-annotated 3D keypoints for 16 object categories from ShapeNet [3]. The training set covers all categories and contains 5500 instances. Following [38], we evaluate 630 unseen instances from airplanes, chairs, and tables. SMPL is a skinned vertex-based deformable model that accurately captures body shape variations in natural human poses. We use the same strategy as in [38] to generate both training and testing data. Metric Mean Intersection over Union (mIoU) is adopted to show whether the keypoints across intra-class instances have the same semantics or not. For KeypointNet, a predicted keypoint is considered the same as a human-annotated semantic point if the geodesic distance between them is under some threshold. Due to the lack of human-labeled keypoints on SMPL, we compare the keypoint consistency in a pair of human models. A keypoint in the first model is regarded as semantically consistent if the distance between its corresponding point and the nearest keypoint in the second model is below some threshold. Evaluation and Results We compare SNAKE with random detection, hand-crafted detectors (ISS [44], Harris-3D [30] and SIFT-3D [28]), and DL-based unsupervised detectors (USIP [15] and UKPGAN [38]). As USIP did not report semantic consistency evaluations, we train the model with the code its authors provided. We follow the same protocols as in [38] to filter the keypoints via NMS with a Euclidean radius of 0.1. Quantitative results are provided in Fig. 5-(a,e). SNAKE obtains higher mIoU than other methods under most thresholds on KeypointNet and SMPL. Qualitative results in Fig. 3 show that our keypoints align well with human annotations. Fig. 4 provides qualitative comparisons of semantically consistent keypoints on rigid and deformable objects. Owing to entangling shape reconstruction and keypoint detection, SNAKE can extract aligned representations for intra-class instances. Thus, our keypoints better outline the object shapes and are more semantically consistent under large shape variations. As shown in the saliency field projected slices, we obtain symmetrical keypoints even without any explicit constraint like the one used in [38]. Here, a projected slice is obtained by taking the maximum value of a given field along the projection direction. 4.2 Repeatability Datasets ModelNet40 [37] is a synthetic object-level dataset that contains 12,311 pre-aligned shapes from 40 categories, such as plane, guitar, and table. We adopt the official dataset split strategy. 3DMatch [41] and Redwood [5] are RGB-D reconstruction datasets for indoor scenes. Following [15], we train the model on 3DMatch and test it on Redwood to show the generalization performance. The training set contains around 19k samples and the test set consists of 207 point clouds. Metric We adopt the relative repeatability proposed in USIP [15] as the evaluation metric. Given two point clouds captured from different viewpoints, a keypoint in the first point cloud is repeatable if its distance to the nearest keypoint in the other point cloud is below a threshold $\epsilon$.
Relative repeatability is the number of repeatable keypoints divided by the total number of detected keypoints. Evaluation and Results Random detection, traditional methods and USIP are chosen as our baselines. Since UKPGAN does not provide pre-trained models on these two datasets, we do not report its results in Fig. 5 but make an additional comparison on KeypointNet, which is described in the next paragraph. We use NMS with a small radius (0.01 in normalized distance on ModelNet40 and 0.04 meters on Redwood) to select locally peaked keypoints for our method and the baselines. We generate 64 keypoints in each sample and show the performance under different distance thresholds $\epsilon$, downsample rates, and Gaussian noise scales. We set a fixed $\epsilon$ of 0.04 (normalized distance) on ModelNet40 and 0.2 meters on Redwood when testing under the last two cases. As shown in Fig. 5-(b,f), SNAKE outperforms the state of the art at most distance thresholds. We do not surpass USIP on Redwood at the lower thresholds. Note that it is challenging to get higher repeatability on Redwood because the paired inputs have very small overlapping regions. Fig. 5-(c,d,g,h) show the repeatability robustness to different downsample rates (d.r.) and Gaussian noise N(0, σ) levels. SNAKE gets the highest repeatability in most cases because the shape-aware strategy helps the model reason about the underlying shapes of the objects/scenes, which makes the keypoints robust to input variations. Fig. 6 provides visualizations of object-level and scene-level keypoints of the original and disturbed inputs. SNAKE can generate more consistent keypoints than other methods under drastic input changes. We tried to train UKPGAN (official implementation) on the ModelNet40 and 3DMatch datasets from scratch but observed divergence under the default hyper-parameters. As such, we provide a new experiment to compare repeatability on the KeypointNet dataset, on which UKPGAN provides a pre-trained model. We randomly perform SE(3) transformations on the test point clouds to generate the second-view point clouds. Then, we select the top-32 salient keypoints with NMS (radius=0.03) in each sample and report the keypoint repeatability under different distance thresholds $\epsilon$, downsample rates, and Gaussian noise scales. The results are summarized in Tables 1 and 2, which show that SNAKE achieves significant gains over UKPGAN in most cases. More discussions can be found in the Appendix. 4.3 Zero-shot Point Cloud Registration Datasets We follow the same protocols as in [38] to train the model on KeypointNet and then directly test it on the 3DMatch [41] dataset, evaluating how well two-view point clouds can be registered. The test set consists of 8 scenes which include partially overlapping point cloud fragments and the ground-truth SE(3) transformation matrices. Metric To evaluate geometric registration, we need both keypoint detectors and descriptors. Thus, we combine an off-the-shelf, state-of-the-art descriptor, D3Feat [1], with our and other keypoint detectors. Following [38], we compute three metrics: Feature Matching Recall, Inlier Ratio, and Registration Recall for a pair of point clouds. Evaluation and Results As baselines, we choose random detection, ISS, SIFT-3D, UKPGAN, and D3Feat. Note that D3Feat is a task-specific learning-based detector trained on the 3DMatch dataset and is therefore not a zero-shot method in this comparison. Ours and UKPGAN are trained on the synthetic object dataset KeypointNet only.
The results are reported under different numbers of keypoints (i.e., 2500, 1000, 500, 250, 100). NMS with a radius of 0.05 m is used for D3Feat, UKPGAN, and ours. As shown in Table 3, SNAKE outperforms the other methods consistently under all three metrics. For registration recall and inlier ratio, we achieve significant gains over UKPGAN and the traditional keypoint methods. Notably, when the number of keypoints is high, SNAKE even outperforms D3Feat, which has seen the target domain. Local shape primitives like planes, corners, or curves may be shared between objects and scenes, so our shape-aware formulation allows a superior generalization from objects to scenes. 4.4 Ablation Study Loss Function Table 4 reports the performance w.r.t. the design of the loss functions. (Row 1) If the surface occupancy decoder is removed, the surface constraint cannot be applied according to Eq. (6), so they are removed simultaneously. Although the model can still detect highly repeatable keypoints on ModelNet40 [37], it fails to give semantically consistent keypoints on KeypointNet [39]. Fig. 7-a shows that SNAKE is unable to output symmetric and meaningful keypoints without the shape-aware technique. This indicates that repeatability cannot be the only criterion for keypoint detection if an implicit formulation is adopted. (Rows 2-4) Each loss function for training the keypoint field is vital for keypoint detection. Note that the model gives a trivial solution (0) for the saliency field and cannot extract distinctive points when the sparsity loss is removed. Grid Size and Volumetric Resolution The grid size $1/U$ controls the number of keypoints because $L_s$ enforces the model to predict a single local maximum per grid cell of size $(1/U)^3$. Fig. 7-b shows different saliency field slices obtained from the same input with various $1/U$. When $U$ is small, SNAKE outputs fewer salient responses, and more for larger values of $U$. We also give the relative repeatability results on ModelNet40 under distance threshold $\epsilon = 0.04$ in Table 5, indicating that $U = 6$ gives the best results. Table 5: Impact of the local grid size used in $L_o$ and $L_s$ on ModelNet40. For $U$ = 4, 6, 8, 10, the relative repeatability ($\epsilon$ = 0.04) is 0.79, 0.85, 0.79, 0.77, respectively. Table 6: Impact of the global volumetric resolution on ModelNet40. For $H$ (= $W$ = $D$) = 32, 48, 64, 80, the relative repeatability ($\epsilon$ = 0.04) is 0.62, 0.79, 0.85, 0.78, respectively. From Table 6, we can see that a higher resolution improves performance. However, the performance drops when the resolution reaches 80. A potential reason is as follows: the number of queries in a single grid increases when the resolution becomes higher, as mentioned in Section 3.2; in this case, the inputs to the cosine similarity become too long and contain spurious values. Optimization Step and Learning Rate Fig. 7-c shows the importance of the optimization (see Alg. 1) for refining keypoint coordinates on the ModelNet40 dataset. It is noted that too many optimization steps do not bring further gains but increase the computational overhead. In this paper, we set the number of update steps to 10. The learning rate for the optimization is also key to the final result. When the learning rate is set to 0.1, 0.01, 0.001 and 0.0001, the relative repeatability on the ModelNet40 dataset with the same experimental settings as Table 6 is 0.002, 0.622, 0.854 and 0.826, respectively. In addition, a comparison of the computation cost of the baselines and ours can be found in the Appendix. 5 Conclusion and Discussion We propose SNAKE, a method for 3D keypoint detection based on implicit neural representations.
Extensive evaluations show that our keypoints are semantically consistent, repeatable, robust to down-sampling, and generalizable to unseen scenarios. Limitations. The optimization for keypoint extraction during inference requires considerable computational cost and time, which may make it unsuitable for scenarios that require real-time keypoint detection. Negative Social Impact. Industry may use the method for pose estimation in autonomous robots. Since our method is not perfect, it may lead to wrong decision making and potential human injury. 6 Acknowledgments This research is jointly supported by the following projects: the Scientific Innovation 2030 Major Project for New Generation of AI under Grant No.2020AAA0107300, Ministry of Science and Technology of the People’s Republic of China; the Key Field R&D Program of Guangdong Province (No.2021B0101410002); Sino-German Collaborative Research Project Crossmodal Learning (NSFC 61621136008/DFG SFB/TRR169); the National Natural Science Foundation of China (No.62006137); Beijing Outstanding Young Scientist Program (No.BJJWZYJH012019100020098). We would like to thank Pengfei Li for discussions about implicit field learning. We would also like to thank the anonymous reviewers for their insightful comments.
1. What is the novel approach introduced by the paper in keypoint detection? 2. What are the strengths and weaknesses of the proposed method, particularly in its technical details and effectiveness? 3. Do you have any suggestions for additional visualizations to better demonstrate the contribution of the paper? 4. What are the limitations of the proposed method regarding its computational cost and suitability for real-world applications?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper is well written; the motivation and technical details are clearly presented. The idea of estimating a saliency field from sparse keypoints seems novel, and it is shown to be effective in producing repeatable and consistent keypoint detection results. Strengths And Weaknesses Strength: The paper is well written; the motivation and technical details are clearly presented. The idea of estimating a saliency field from sparse keypoints seems novel, and it is shown to be effective in producing repeatable and consistent keypoint detection results. Weakness: Keypoint extraction at inference time requires iterative gradient descent and querying the implicitly defined saliency field. Thus the computational cost is high, and the method is not suitable for real-world applications in its current form. Questions Additional visualization of the repeatability of keypoints under SE(3) transformations, as well as different point sparsities, would make the contribution of the paper clearer. The plots in Figure 5 alone are not visual enough to show the quality of the proposed method in terms of repeatability. Limitations Keypoint extraction during inference requires considerable computational cost, making it not suitable for real-time on-device applications.
NIPS
1. What is the main contribution of the paper in 3D keypoint prediction? 2. What are the strengths of the proposed approach, particularly in terms of novel losses and architecture? 3. What are the weaknesses of the paper regarding the lack of related work discussion? 4. Do you have any questions regarding the difference between discrete and continuous space in this context? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper presents an unsupervised method to predict 3D keypoints from point clouds. Several novel losses are proposed to enforce repeatability, surface constraint, and sparsity. They also achieve superior performance on various public benchmarks. They also use a novel two-head design to model occupancy and saliency separately, which better disentangles these two tasks and lets them serve as independent functions. Strengths And Weaknesses Strengths: I like the intuition that starts from continuous instead of discrete space. The proposed architectures and losses are novel as far as I see. The presentation is very good. It achieves SOTA performance on several datasets. Weaknesses: Lack of related work discussion, e.g., "Fully Convolutional Mesh Autoencoder Using Efficient Spatially Varying Kernels". Questions Are there any other newer methods to compare with? UKPGAN seems to be from 2020. Can you elaborate more on discrete space versus continuous space? Can you elaborate more on the difference compared with "R2D2: Reliable and Repeatable Detector and Descriptor"? Limitations Yes
NIPS
The volumetric embeddings serve as input to the 3D UNet [6] to further integrate local and global information, resulting in the output G ∈ RC2×H×W×D, where C2 is the output feature dimension. More details can be found in the Appendix. Shape Implicit Decoder As shown in the top panel of Fig. 2, each point q ∈ R3 from a query set Q is encoded into a Ce-dimensional vector qe via a multi-layer perceptron that is denoted the positional encoder fθpos , i.e. qe = fθpos(q). Then, the local feature Gq is retrieved from the feature volume G according to the coordinate of q via trilinear interpolation. The generated qe and Gq are concatenated and mapped to the surface occupancy probability Probo(q|P ) ∈ [0, 1] by the occupancy decoder fθo , as given in Eq. (1). If q is on the input surface, the Probo(q|P ) would be 1, otherwise be 0. In our formulation, the points inside the surface are also considered unoccupied. fθo(qe, Gq) → Probo(q|P ) (1) Keypoint Implicit Decoder Most of the process here is the same as in shape implicit decoder, except for the last mapping function. The goal of keypoint implicit decoder fθs is to estimate the saliency of the query point q conditioned on input points P , which is denoted as Probs(q|P ) ∈ [0, 1] and formulated by: fθs(qe, Gq) → Probs(q|P ). (2) Here, saliency of the query point q is the likelihood that it is a keypoint. 3.2 Implicit Field Training The implicit field is jointly optimized for surface occupancy and saliency estimation by several selfsupervised losses. In contrast to former arts [12, 11] with a similar architecture that learn multiple tasks separately, we leverage the geometry knowledge from shape field to enhance the performance of keypoint field, as shown in the green arrows of Fig. 2. Specifically, the total loss is given by: L = Lo + Lr + Lm + Ls, (3) where Lo encourages the model to learn the shape from the sparse input, Lr, Lm and Ls respectively help the predicted keypoint to be repeatable, located on the underlying surface and sparse. Surface Occupancy Loss The binary cross-entropy loss lBCE between the predicted surface occupancy Probo(q|P ) and the ground-truth label Probgto is used for shape recovery. The queries Q are randomly sampled from the whole volume size H × W × D. The average over all queries is as follows: Lo = 1 |Q| ∑ q∈Q lBCE ( Probo(q|P ), P robgto (q|P ) ) , (4) where |Q| is the number of queries Q. Repeatability Loss Detecting keypoints with high repeatability is essential for downstream tasks like registration between two-view point clouds. That indicates the positions of keypoint are covariant to the rigid transformation of the input. To achieve a similar goal, 2D keypoint detection methods [27, 7, 43] enforce the similarity of corresponding local salient patches from multiple views. Inspired by them, we enforce the similarity of local overlapped saliency fields from two-view point clouds. Since the implicit field is continuous, we uniformly sample some values from a local field to represent the local saliency distribution. Specifically, as shown in the top and the middle part of Fig. 2, we build several local 3D Cartesian grids {Qi}ni=1 with resolution of Hl × Wl × Dl and size of 1/U . We empirically set the resolution of Qi to be almost the same as the feature volume G. As non-occupied regions are uninformative, the center of Qi is randomly sampled from the input. Then, we perform random rigid transformation T on the P and Qi to generate TP and TQi. 
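As a concrete illustration of the decoder stage just described, the following PyTorch-style sketch shows the shared trilinear feature sampling and the two heads of Eq. (1)-(2), together with the occupancy loss of Eq. (4). The class name, layer sizes, and tensor layouts are illustrative assumptions and are not taken from the official SNAKE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoders(nn.Module):
    """Minimal sketch of SNAKE's two implicit decoders (Eq. (1)-(2)).

    `feat_dim` must match the channel count C2 of the feature volume G produced
    by the ConvONet-style encoder described above; all sizes are illustrative.
    """

    def __init__(self, feat_dim=32, pos_dim=64, hidden=128):
        super().__init__()
        # positional encoder f_theta_pos: R^3 -> R^Ce
        self.pos_enc = nn.Sequential(nn.Linear(3, pos_dim), nn.ReLU(),
                                     nn.Linear(pos_dim, pos_dim))
        def head():
            return nn.Sequential(nn.Linear(pos_dim + feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.occ_head = head()  # f_theta_o -> Prob_o(q|P)
        self.sal_head = head()  # f_theta_s -> Prob_s(q|P)

    def forward(self, G, q):
        # G: (B, C2, H, W, D) feature volume; q: (B, M, 3) queries in [-1, 1]^3
        grid = q.view(q.shape[0], 1, 1, -1, 3)         # (B, 1, 1, M, 3); note grid_sample's (x, y, z) ordering
        Gq = F.grid_sample(G, grid, mode='bilinear',   # trilinear interpolation for 5-D inputs
                           align_corners=True)         # (B, C2, 1, 1, M)
        Gq = Gq.reshape(G.shape[0], G.shape[1], -1).transpose(1, 2)  # (B, M, C2)
        qe = self.pos_enc(q)                            # (B, M, Ce)
        h = torch.cat([qe, Gq], dim=-1)
        prob_o = torch.sigmoid(self.occ_head(h)).squeeze(-1)  # surface occupancy Prob_o(q|P)
        prob_s = torch.sigmoid(self.sal_head(h)).squeeze(-1)  # keypoint saliency Prob_s(q|P)
        return prob_o, prob_s

def occupancy_loss(prob_o, gt_occ):
    # Eq. (4): binary cross-entropy averaged over the query set Q
    return F.binary_cross_entropy(prob_o, gt_occ)
```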
Similar to [27], the cosine similarity, denoted as cosim, is exploited for the corresponding saliency grids of Qi and TQi: Lr = 1− 1 n ∑ i∈n cosim ( Probs(Qi|P ), P robs(TQi|TP ) ) . (5) Surface Constraint Loss As discussed in [15], 3D keypoints are encouraged to close to the input. They propose a loss to constrain the distance between the keypoint and its nearest neighbor from the input. Yet, the generated keypoints are inconsistent when given the same input but with a different density. Thanks to the shape decoder, SNAKE can reconstruct the underlying surface of the input, which is robust to the resolution change. Hence, we use the surface occupancy probability to represent the inverse distance between the query and the input. As can be seen in Fig. 2-(surface constraint), we enforce the saliency of the query that is far from input P close to 0, which is defined as Lm = 1 |Q| ∑ q∈Q ( 1− Probo(q|P ) ) · Probs(q|P ). (6) Sparsity Loss Similar to 2D keypoint detection methods [27], we design a sparsity loss to avoid the trivial solution (Probs(Q|P )=0) in Eq.( 5)( 6). As can be seen in Fig. 2, the goal is to maximize the local peakiness of the local saliency grids. As the sailency values of non-occupied points are enforced to 0 by Lm, we only impose the sparsity loss on the points with high surface occupancy probability. Hence, we derive the sparsity loss with the help of decoded geometry by Ls = 1− 1 n ∑ i∈n ( maxProbs(Q 1 i |P )−meanProbs(Q1i |P ) ) , (7) where Q1i = {q|q ∈ Qi, P robo(q|P ) > 1− thro}, thro ∈ (0, 0.5] is a constant, and n is the number of grids. It is noted that the spatial frequency of local peakiness is dependent on the grid size 1/U , see section 4.4. Since the network is not only required to find sparse keypoints, but also expected to recover the object shape, it would generate high saliency at the critical parts of the input, like joint points of a desk and corners of a house, as shown in the Fig. 2-result. 3.3 Explicit Keypoint Extraction The query point q whose saliency is above a predefined threshold thrs ∈ (0, 1) would be selected as a keypoint at the inference stage. Although SNAKE can obtain the saliency of any query point, a higher resolution query set results in a high computational cost. Hence, as shown in Fig. 2-inference, we build a relatively low-resolution query sets Qinfer which are evenly distributed in the input space and further refine the coordinates of Qinfer by gradient-based optimization on this energy function: E(Qinfer, P ) = 1 |Qinfer| ∑ q∈Qinfer 1− Probs(q|P ). (8) Specifically, details of the explicit keypoint extraction algorithm are summarized in Alg. 1. Algorithm 1 Optimization for Explicit Keypoint Extraction Require: P,Qinfer, fθen , fθpos , fθo , fθs . Hyper-parameters: λ, J , thro, thrs. Get initial Probo(Qinfer|P ) according to Eq.( 1). Filter to get new query set Qinfer′ = {q|q ∈ Qinfer, P robo(q|P ) > 1− thro}. for 1 to J do Evaluate energy function E(Qinfer′ , P ). Update coordinates with gradient descent: Qinfer′ = Qinfer′ − λ∇Qinfer′E(Qinfer′ , P ). end for Sample final keypoints Qk = {q|q ∈ Qinfer′ , P robs(q|P ) > thrs}. 4 Experiment In this section, we evaluate SNAKE under three settings. First, we compare keypoint semantic consistency across different instances of the same category, using both rigid and deformable objects. Next, keypoint repeatability of the same instance under disturbances such as SE(3) transformation, noise and downsample is evaluated. 
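For reference, Algorithm 1 above can be realized in a few lines of PyTorch. The sketch below reuses the hypothetical decoders/feature-volume interface from the previous snippet; the threshold and step-size values are placeholders rather than the paper's tuned hyper-parameters.

```python
import torch

def extract_keypoints(decoders, G, q_infer, thr_o=0.1, thr_s=0.7, lam=1e-3, J=10):
    """Minimal sketch of Algorithm 1 (explicit keypoint extraction)."""
    with torch.no_grad():
        prob_o, _ = decoders(G, q_infer)
    # keep only queries close to the underlying surface: Prob_o > 1 - thr_o
    q = q_infer[:, prob_o[0] > 1.0 - thr_o].clone().requires_grad_(True)

    for _ in range(J):                               # refine coordinates by gradient descent
        _, prob_s = decoders(G, q)
        energy = (1.0 - prob_s).mean()               # energy function of Eq. (8)
        grad, = torch.autograd.grad(energy, q)
        with torch.no_grad():
            q -= lam * grad

    with torch.no_grad():                            # final thresholding on saliency
        _, prob_s = decoders(G, q)
        return q[:, prob_s[0] > thr_s]               # keypoints Q_k
```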
Finally, we inspect the point cloud registration task on the 3DMatch benchmark, notably in a zero-shot generalization setting. Besides, an ablation study is done to verify the effect of each design choice in SNAKE. The implementation details and hyper-parameters for SNAKE in three settings can be found in the Appendix. 4.1 Semantic Consistency Datasets The KeypointNet [39] dataset and meshes generated with the SMPL model [18] are utilized. KeypointNet has numerous human-annotated 3D keypoints for 16 object categories from ShapeNet [3]. The training set covers all categories that contain 5500 instances. Following [38], we evaluate 630 unseen instances from airplanes, chairs, and tables. SMPL is a skinned vertex-based deformable model that accurately captures body shape variations in natural human poses. We use the same strategy in [38] to generate both training and testing data. Metric Mean Intersection over Union (mIoU) is adopted to show whether the keypoints across intra-class instances have the same semantics or not. For KeypointNet, a predicted keypoint is considered the same as a human-annotated semantic point if the geodesic distance between them is under some threshold. Due to the lack of human-labeled keypoints on SMPL, we compare the keypoint consistency in a pair of human models. A keypoint in the first model is regarded semantically consistent if the distance between its corresponding point and the nearest keypoint in the second model is below some threshold. Evaluation and Results We compare SNAKE with random detection, hand-crafted detectors: ISS [44], Harris-3D [30] and SIFT-3D [28], and DL-based unsupervised detectors: USIP [15] and UKPGAN [38]. As USIP has not performed semantic consistency evaluations, we train the model with the code they provided. We follow the same protocols in [38] to filter the keypoints via NMS with a Euclidean radius of 0.1. Quantitative results are provided in Fig. 5-(a,e). SNAKE obtains higher mIoU than other methods under most thresholds on KeypointNet and SMPL. Qualitative results in Fig. 3 show our keypoints make good alignment with human annotations. Fig. 4 provides qualitative comparisons of semantically consistent keypoints on rigid and deformable objects. Owing to entangling shape reconstruction and keypoint detection, SNAKE can extract aligned representation for intra-class instances. Thus, our keypoints better outline the object shapes and are more semantically consistent under large shape variations. As shown in the saliency field projected slices, we can get symmetrical keypoints, although without any explicit constraint like the one used in [38]. Here, a projected slice is obtained by taking the maximum value of a given field along the projection direction. 4.2 Repeatability Datasets ModelNet40 [37] is a synthetic object-level dataset that contains 12,311 pre-aligned shapes from 40 categories, such as plane, guitar, and table. We adopt the official dataset split strategy. 3DMatch [41] and Redwood [5] are RGB-D reconstruction datasets for indoor scenes. Following [15], we train the model on 3DMatch and test it on Redwood to show the generalization performance. The training set contains around 19k samples and the test set consists of 207 point clouds. Metric We adopt the relative repeatability proposed in USIP [15] as the evaluation metric. Given two point clouds captured from different viewpoints, a keypoint in the first point cloud is repeatable if its distance to the nearest keypoint in the other point cloud is below a threshold ϵ. 
Relative repeatability means the number of repeatable points divided by the total number of detected keypoints. Evaluation and Results Random detection, traditional methods and USIP are chosen as our baselines. Since UKPGAN does not provide pre-trained models on these two datasets, we do not report its results in Fig. 5 but make an additional comparison on KeypointNet, which is illustrated in the next paragraph. We use NMS to select the local peaky keypoints with a small radius (0.01 normalized distance on ModelNet40 and 0.04 meters on Redwood) for ours and baselines. We generate 64 keypoints in each sample and show the performance under different distance thresholds ϵ, downsample rates, and Gaussian noise scales. We set a fixed ϵ of 0.04 normalized distance and 0.2 meters on the ModelNet40 and Redwood dataset when testing under the last two cases. As shown in Fig. 5- (b,f), SNAKE outperforms state-of-the-art at most distance thresholds. We do not surpass USIP on Redwood in the lower thresholds. Note that it is challenging to get higher repeatability on Redwood because the paired inputs have very small overlapping regions. Fig. 5-(c,d,g,h) show the repeatability robustness to different downsample rates (d.r.) and Gaussian noise N(0, σ) levels. SNAKE gets the highest repeatability in most cases because the shape-aware strategy helps the model reason about the underlying shapes of the objects/scenes, which makes keypoints robust to the input variations. Fig. 6 provides visualization of object-level and scene-level keypoints of the original and disturbed inputs. SNAKE can generate more consistent keypoints than other methods under drastic input changes. We have tried to train UKPGAN (official implementation) on ModelNet40 and 3DMatch datasets from scratch but observed divergence under default hyper-parameters. As such, we provide a new experiment to compare their repeatability on the KeypointNet dataset, on which UKPGAN provided a pre-trained model. We randomly perform SE(3) transformation on the test point clouds to generate the second view point clouds. Then, we select top-32 salient keypoints with NMS (radius=0.03) in each sample and report the keypoint repeatability under different distance thresholds ϵ, downsample rates, and Gaussian noise scales. The results are summarized in Table 1, 2, which show that SNAKE achieves significant gains over UKPGAN in most cases. More discussions can be found in the Appendix. 4.3 Zero-shot Point Cloud Registration Datasets We follow the same protocols in [38] to train the model on KeypointNet and then directly test it on 3DMatch [41] dataset, evaluating how well two-view point clouds can be registered. The test set consists of 8 scenes which include some partially overlapped point cloud fragments and the ground truth SE(3) transformation matrices. Metric To evaluate geometric registration, we need both keypoint detectors and descriptors. Thus, we combine an off-the-shelf and state-of-the-art descriptor D3Feat [1] with our and other keypoint detectors. Following [38], we compute three metrics: Feature Matching Recall, Inlier Ratio, and Registration Recall for a pair of point clouds. Evaluation and Results As baselines, we choose random detection, ISS, SIFT-3D, UKPGAN, and D3Feat. Note that D3Feat is a task-specific learning-based detector trained on the 3DMatch dataset, thus not included in this zero-shot comparison. Ours and UKPGAN are trained on the synthetic object dataset KeypointNet only. 
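As a reference point for the numbers reported in this section, the relative repeatability metric defined above can be computed as in the following sketch. It assumes keypoints given as NumPy arrays and a known ground-truth transform between the two views; this is our paraphrase of the USIP metric, not code from either paper.

```python
import numpy as np

def relative_repeatability(kp1, kp2, T, eps):
    """kp1: (N, 3), kp2: (M, 3) keypoints detected in two views;
    T: (4, 4) ground-truth transform from view 1 to view 2; eps: distance threshold."""
    kp1_h = np.concatenate([kp1, np.ones((len(kp1), 1))], axis=1)   # homogeneous coordinates
    kp1_in_2 = (T @ kp1_h.T).T[:, :3]                               # align view 1 into view 2
    dists = np.linalg.norm(kp1_in_2[:, None] - kp2[None], axis=-1)  # (N, M) pairwise distances
    repeatable = (dists.min(axis=1) < eps).sum()                    # nearest neighbour within eps
    return repeatable / len(kp1)
```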
The results are reported under different numbers of keypoints (i.e., 2500, 1000, 500, 250, 100). The NMS with a radius of 0.05m is used for D3Feat, UKPGAN, and ours. As shown in Table 3, SNAKE outperforms other methods consistently under three metrics. For registration recall and inlier ratio, we achieve significant gains over UKPGAN and other traditional keypoint methods. Notably, when the keypoints are high in numbers, SNAKE even outperforms D3Feat which has seen the target domain. Local shape primitives like planes, corners, or curves may be shared between objects and scenes, so our shape-aware formulation allows a superior generalization from objects to scenes. 4.4 Ablation Study Loss Function Table 4 reports the performance w.r.t. designs of loss functions. (Row 1) If the surface occupancy decoder is removed, the surface constraint cannot be performed according to Eq.( 6), so they are removed simultaneously. Although the model could detect significantly repeatable keypoints on ModelNet40 [37], it fails to give semantically consistent keypoints on KeypointNet [39]. Fig. 7-a shows that SNAKE is unable to output symmetric and meaningful keypoints without the shape-aware technique. That indicates the repeatability could not be the only criterion for keypoint detection if an implicit formulation is adopted. (Row 2-4) Each loss function for training keypoint field is vital for keypoint detection. Note that the model gives a trivial solution (0) for the saliency field and cannot extract distinctive points when removing the sparsity loss. Grid Size and Volumetric Resolution The grid size 1/U controls the number of keypoints because Ls enforces the model to predict a single local maxima per grid of size (1/U)3. Fig. 7-b shows Table 5: Impact of different local grid size used in the Lo and Ls on ModelNet40. U 4 6 8 10 rr. (%) (ϵ=0.04) 0.79 0.85 0.79 0.77 Table 6: Impact of different global volumetric resolution on ModelNet40. H(= W = D) 32 48 64 80 rr. (%) (ϵ=0.04) 0.62 0.79 0.85 0.78 different saliency field slices obtained from the same input with various 1/U . When U is small, SNAKE outputs fewer salient responses, and more for larger values of U . We also give the relative repeatability results on ModelNet40 under distance threshold ϵ = 0.04 in Table 5, indicating that U = 6 gives the best results. From Table 6, we can see that higher resolution improves performance. However, the performance drops when it reaches the resolution of 80. The potential reason is as such: the number of queries in a single grid increases when the resolution becomes higher, as mentioned in 3.2. In this case, finer details make the input to cosine similarity too long and contain spurious values. Optimization Step and Learning Rate Fig. 7-c shows the importance of optimization (see Alg. 1) for refining keypoint coordinates on the ModelNet40 dataset. It is noted that too many optimization steps will not bring more gains but increase the computational overhead. In this paper, we set the number of update steps to 10. The learning rate for optimization is also key to the final result. When the learning rate is set to 0.1, 0.01, 0.001 and 0.0001, the relative repeatability (%) on ModelNet40 dataset with the same experimental settings as Table 6 are 0.002, 0.622, 0.854 and 0.826, respectively. In addition, the comparison of computation cost of baselines and ours can be found in the Appendix. 5 Conclusion and Discussion We propose SNAKE, a method for 3D keypoint detection based on implicit neural representations. 
Extensive evaluations show our keypoints are semantically consistent, repeatable, robust to downsample, and generalizable to unseen scenarios. Limitations. The optimization for keypoint extraction during inference requires considerable computational cost and time, which may not be applicable for use in scenarios that require real-time keypoint detection. Negative Social Impact. The industry may use the method for pose estimation in autonomous robots. Since our method is not perfect, it may lead to wrong decision making and potential human injury. 6 Acknowledgments This research is jointly supported by following projects: the Scientific Innovation 2030 Major Project for New Generation of AI under Grant No.2020AAA0107300, Ministry of Science and Technology of the People’s Republic of China; the Key Field R&D Program of Guangdong Province (No.2021B0101410002); Sino-German Collaborative Research Project Crossmodal Learning (NSFC 61621136008/DFG SFB/TRR169); the National Natural Science Foundation of China (No.62006137); Beijing Outstanding Young Scientist Program (No.BJJWZYJH012019100020098). We would like to thank Pengfei Li for discussions about implicit field learning. We would also like to thank the anonymous reviewers for their insightful comments.
1. What is the main contribution of the paper regarding keypoint detection from point clouds?
2. What are the strengths and weaknesses of the proposed method, particularly in its novelty and effectiveness?
3. Do you have any concerns or questions about the method's approach, such as learning keypoint probability fields or combining implicit fields and keypoint fields?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any ethical or societal implications of the research that the authors should consider?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors introduce a method to detect keypoints from point clouds. They leverage a deep learning model to learn an implicit function that maps each location to a probability of being a keypoint. To make the method shape-aware, the authors learn the keypoint field alongside an occupancy field, exploring whether shape information can improve keypoint detection. The contribution lies in the way the learning of implicit fields and keypoint fields is combined. The authors evaluate the effectiveness by comparing against the latest methods on widely used benchmarks. Strengths And Weaknesses Strengths: The visualization is good. The paper is easy to follow. Weaknesses: The motivation is not convincing at all. The experimental results cannot justify the effectiveness of the method. Questions First of all, I do not think it makes sense to learn the probability of keypoints as a field. Keypoints should be located on surfaces; they are not floating in space. Specifically, since the authors aim to detect keypoints from input point clouds, the keypoints to be determined should be some of the input points. It should be much easier to learn the probability of keypoints among the discrete input points than in the continuous whole space. The motivation of improving performance with surface reconstruction does not make sense either. Surface reconstruction is a harder task than keypoint detection, since the point cloud is given as input and already provides a good constraint on where keypoint solutions can lie. We already know the structure of the shapes represented by the point clouds, so why do we have to learn an implicit function to reconstruct them? I believe it just makes the problem more complex to resolve. The authors claim that the proposed method is a novel unsupervised method. This is also not correct, since the method requires ground-truth occupancy information as supervision. The proposed method is also not novel. UKPGAN has explored the feasibility of combining shape reconstruction and keypoint detection, although UKPGAN reconstructs the input point cloud rather than learning an implicit function to represent the same shape. The minor difference here is which representation is used to represent 3D shapes. Learning an occupancy field results in the use of an additional surface constraint; this term would not be needed were it not for the learned occupancy field. Fig. 2(b) may be confusing; I think the "close to 0" line should be parallel to the x axis rather than the y axis. The repeatability loss term is hard to understand. Why do we need a term like this? The shapes are well aligned, so why do we need to consider rigid transformations? Another question is about the cosine similarity: the probabilities are scalars, right? Is there any reason to use cosine similarity to evaluate the difference between two numbers? The authors use the occupancy probability to represent the inverse distance between the query and the input. I do not think it works here, since the occupancy probability always decreases from 1 to 0 when moving a query from inside to outside, while the inverse distance increases as the query approaches the surface and then decreases after crossing it. Explicit keypoint extraction via optimization is also an operation that I do not understand. 
An intuitive idea for extracting keypoints is to use the points of the input point clouds as queries to predict the occupancy values, since keypoints can only be located on the surface. Why do we have to extract keypoints by updating the locations of randomly sampled queries in 3D space? In the visualization of saliency field slices, how are the slices to visualize selected? How is the number of keypoints determined? I noticed that the numbers of keypoints produced by different methods differ even in the same case, such as the visual comparison in Fig. 6. Why not use the same number of points to perform the comparison? The reason I do not think the reconstruction can improve the performance of keypoint detection is the results in Table 2: the results without surface reconstruction are better than the results with surface reconstruction. I do not think this is good support for the argument. With the ground-truth occupancy supervision, it is not a fair comparison with other methods. Limitations Yes, the authors addressed the limitations and potential negative societal impact of their work.
NIPS
Title Near-Optimal Reinforcement Learning with Self-Play Abstract This paper considers the problem of designing optimal algorithms for reinforcement learning in two-player zero-sum games. We focus on self-play algorithms which learn the optimal policy by playing against itself without any direct supervision. In a tabular episodic Markov game with S states, A max-player actions and B min-player actions, the best existing algorithm for finding an approximate Nash equilibrium requires Õ(SAB) steps of game playing, when only highlighting the dependency on (S,A,B). In contrast, the best existing lower bound scales as Ω(S(A + B)) and has a significant gap from the upper bound. This paper closes this gap for the first time: we propose an optimistic variant of the Nash Qlearning algorithm with sample complexity Õ(SAB), and a new Nash V-learning algorithm with sample complexity Õ(S(A + B)). The latter result matches the information-theoretic lower bound in all problem-dependent parameters except for a polynomial factor of the length of each episode. In addition, we present a computational hardness result for learning the best responses against a fixed opponent in Markov games—a learning objective different from finding the Nash equilibrium. 1 Introduction A wide range of modern artificial intelligence challenges can be cast as a multi-agent reinforcement learning (multi-agent RL) problem, in which more than one agent performs sequential decision making in an interactive environment. Multi-agent RL has achieved significant recent success on traditionally challenging tasks, for example in the game of GO [30, 31], Poker [6], real-time strategy games [33, 22], decentralized controls or multiagent robotics systems [5], autonomous driving [27], as well as complex social scenarios such as hide-and-seek [3]. In many scenarios, the learning agents even outperform the best human experts . Despite the great empirical success, a major bottleneck for many existing RL algorithms is that they require a tremendous number of samples. For example, the biggest AlphaGo Zero model is trained on tens of millions of games and took more than a month to train [31]. While requiring such amount of samples may be acceptable in simulatable environments such as GO, it is not so in other sampleexpensive real world settings such as robotics and autonomous driving. It is thus important for us to understand the sample complexity in RL—how can we design algorithms that find a near optimal policy with a small number of samples, and what is the fundamental limit, i.e. the minimum number of samples required for any algorithm to find a good policy. Theoretical understandings on the sample complexity for multi-agent RL are rather limited, especially when compared with single-agent settings. The standard model for a single-agent setting is an episodic Markov Decision Process (MDP) with S states, and A actions, and H steps per episode. The best known algorithm can find an near-optimal policy in Θ̃(poly(H)SA/ 2) episodes, which matches the lower bound up to a single H factor [1, 8]. In contrast, in multi-agent settings, the optimal sample complexity remains open even in the basic setting of two-player tabular Markov games 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. [28], where the agents are required to find the solutions of the games—the Nash equilibria. 
The best known algorithm, VI-ULCB, finds an -approximate Nash equilibrium in Õ(poly(H)S2AB/ 2) episodes [2], where B is the number of actions for the other player. The information theoretical lower bound is Ω(poly(H)S(A + B)/ 2). Specifically, the number of episodes required for the algorithm scales quadratically in both S and (A,B), and exhibits a gap from the linear dependency in the lower bound. This motivates the following question: Can we design algorithms with near-optimal sample complexity for learning Markov games? In this paper, we present the first line of near-optimal algorithms for two-player Markov games that match the aforementioned lower bound up to a poly(H) factor. This closes the open problem for achieving the optimal sample complexity in all (S,A,B) dependency. Our algorithm learns by playing against itself without requiring any direct supervision, and is thus a self-play algorithm. 1.1 Our contributions • We propose an optimistic variant of Nash Q-learning [11], and prove that it achieves sample complexity Õ(H5SAB/ 2) for finding an -approximate Nash equilibrium in two-player Markov games (Section 3). Our algorithm builds optimistic upper and lower estimates of Q-values, and computes the Coarse Correlated Equilibrium (CCE) over this pair of Q estimates as its execution policies for both players. • We design a new algorithm—Nash V-learning—for finding approximate Nash equilibria, and show that it achieves sample complexity Õ(H6S(A + B)/ 2) (Section 4). This improves upon Nash Q-learning in case min {A,B} > H . It is also the first result that matches the minimax lower bound up to only a poly(H) factor. This algorithm builds optimistic upper and lower estimates of V -values, and features a novel combination of Follow-the-Regularized-Leader (FTRL) and standard Q-learning algorithm to determine its execution policies. • Apart from finding Nash equilibria, we prove that learning the best responses of fixed opponents in Markov games is as hard as learning parity with noise—a notoriously difficult problem that is believed to be computationally hard (Section 5). As a corollary, this hardness result directly implies that achieving sublinear regret against adversarial opponents in Markov games is also computationally hard, a result that first appeared in [25]. This in turn rules out the possibility of designing efficient algorithms for finding Nash equilibria by running no-regret algorithms for each player separately. In addition to above contributions, this paper also features a novel approach of extracting certified policies—from the estimates produced by reinforcement learning algorithms such as Nash Qlearning and Nash V-learning—that are certified to have similar performance as Nash equilibrium policies, even when facing against their best response (see Section 3 for more details). We believe this technique could be of broader interest to the community. 1.2 Related Work Markov games Markov games (or stochastic games) are proposed in the early 1950s [28]. They are widely used to model multi-agent RL. Learning the Nash equilibria of Markov games has been studied in classical work [18, 19, 11, 10], where the transition matrix and reward are assumed to be known, or in the asymptotic setting where the number of data goes to infinity. These results do not directly apply to the non-asymptotic setting where the transition and reward are unknown and only a limited amount of data are available for estimating them. 
A recent line of work tackles self-play algorithms for Markov games in the non-asymptotic setting with strong reachability assumptions. Specifically, Wei et al. [35] assumes no matter what strategy one agent sticks to, the other agent can always reach all states by playing a certain policy, and Jia et al. [13], Sidford et al. [29] assume access to simulators (or generative models) that enable the agent to directly sample transition and reward information for any state-action pair. These settings ensure that all states can be reached directly, so no sophisticated exploration is not required. Very recently, [2, 36] study learning Markov games without these reachability assumptions, where exploration becomes essential. However, both results suffer from highly suboptimal sample complexity. We compare them with our results in Table 1. The results of [36] also applies to the linear function approximation setting. We remark that the R-max algorithm [4] does provide provable guarantees for learning Markov game, even in the setting of playing against the adversarial opponent, but using a definition of regret that is weaker than the standard regret. Their result does not imply any sample complexity result for finding Nash equilibrium policies. Adversarial MDP Another line of related work focuses on provably efficient algorithms for adversarial MDPs. Most work in this line considers the setting with adversarial rewards [38, 26, 15], because adversarial MDP with changing dynamics is computationally hard even under fullinformation feedback [37]. These results do not direcly imply provable self-play algorithms in our setting, because the opponent in Markov games can affect both the reward and the transition. Single-agent RL There is a rich literature on reinforcement learning in MDPs [see e.g. 12, 24, 1, 7, 32, 14]. MDP is a special case of Markov games, where only a single agent interacts with a stochastic environment. For the tabular episodic setting with nonstationary dynamics and no simulators, the best sample complexity achieved by existing model-based and model-free algorithms are Õ(H3SA/ 2) [1] and Õ(H4SA/ 2) [14], respectively, where S is the number of states, A is the number of actions, H is the length of each episode. Both of them (nearly) match the lower bound Ω(H3SA/ 2) [12, 23, 14]. 2 Preliminaries We consider zero-sum Markov Games (MG) [28, 18], which are also known as stochastic games in the literature. Zero-sum Markov games are generalization of standard Markov Decision Processes (MDP) into the two-player setting, in which the max-player seeks to maximize the total return and the min-player seeks to minimize the total return. Formally, we denote a tabular episodic Markov game as MG(H,S,A,B,P, r), where H is the number of steps in each episode, S is the set of states with |S| ≤ S, (A,B) are the sets of actions of the max-player and the min-player respectively, P = {Ph}h∈[H] is a collection of transition matrices, so that Ph(·|s, a, b) gives the distribution over states if action pair (a, b) is taken for state s at step h, and r = {rh}h∈[H] is a collection of reward functions, and rh : S ×A×B → [0, 1] is the deterministic reward function at step h. 1 In each episode of this MG, we start with a fixed initial state s1. Then, at each step h ∈ [H], both players observe state sh ∈ S, and the max-player picks action ah ∈ A while the min-player picks action bh ∈ B simultaneously. 
Both players observe the actions of the opponents, receive reward rh(sh, ah, bh), and then the environment transitions to the next state sh+1 ∼ Ph(·|sh, ah, bh). The episode ends when sH+1 is reached. 1We assume the rewards in [0, 1] for normalization. Our results directly generalize to randomized reward functions, since learning the transition is more difficult than learning the reward. Markov policy, value function A Markov policy µ of the max-player is a collection of H functions {µh : S → ∆A}h∈[H], which maps from a state to a distribution of actions. Here ∆A is the probability simplex over action set A. Similarly, a policy ν of the min-player is a collection of H functions {νh : S → ∆B}h∈[H]. We use the notation µh(a|s) and νh(b|s) to present the probability of taking action a or b for state s at step h under Markov policy µ or ν respectively. We use V µ,νh : S → R to denote the value function at step h under policy µ and ν, so that V µ,ν h (s) gives the expected cumulative rewards received under policy µ and ν, starting from s at step h: V µ,νh (s) := Eµ,ν [∑H h′=h rh′(sh′ , ah′ , bh′) ∣∣∣ sh = s] . (1) We also define Qµ,νh : S × A × B → R to denote Q-value function at step h so that Q µ,ν h (s, a, b) gives the cumulative rewards received under policy µ and ν, starting from (s, a, b) at step h: Qµ,νh (s, a, b) := Eµ,ν [∑H h′=h rh′(sh′ , ah′ , bh′) ∣∣∣ sh = s, ah = a, bh = b] . (2) For simplicity, we use notation of operator Ph so that [PhV ](s, a, b) := Es′∼Ph(·|s,a,b)V (s′) for any value function V . We also use notation [DπQ](s) := E(a,b)∼π(·,·|s)Q(s, a, b) for any action-value function Q. By definition of value functions, we have the Bellman equation Qµ,νh (s, a, b) = (rh + PhV µ,ν h+1)(s, a, b), V µ,ν h (s) = (Dµh×νhQ µ,ν h )(s) for all (s, a, b, h) ∈ S ×A× B × [H]. We define V µ,νH+1(s) = 0 for all s ∈ SH+1. Best response and Nash equilibrium For any Markov policy of the max-player µ, there exists a best response of the min-player, which is a Markov policy ν†(µ) satisfying V µ,ν †(µ) h (s) = infν V µ,ν h (s) for any (s, h) ∈ S × [H]. Here the infimum is taken over all possible policies which are not necessarily Markovian (we will define later in this section). We define V µ,†h := V µ,ν†(µ) h . By symmetry, we can also define µ†(ν) and V †,νh . It is further known (cf. [9]) that there exist Markov policies µ?, ν? that are optimal against the best responses of the opponents, in the sense that V µ ?,† h (s) = supµ V µ,† h (s), V †,ν? h (s) = infν V †,ν h (s), for all (s, h). We call these optimal strategies (µ?, ν?) the Nash equilibrium of the Markov game, which satisfies the following minimax equation: 2 supµ infν V µ,ν h (s) = V µ?,ν? h (s) = infν supµ V µ,ν h (s). Intuitively, a Nash equilibrium gives a solution in which no player has anything to gain by changing only her own policy. We further abbreviate the values of Nash equilibrium V µ ?,ν? h and Q µ?,ν? h as V ?h and Q ? h. We refer readers to Appendix A for Bellman optimality equations for values of best responses or Nash equilibria. General (non-Markovian) policy In certain situations, it is beneficial to consider general, historydependent policies that are not necessarily Markovian. A (general) policy µ of the max-player is a set of H maps µ := { µh : R × (S × A × B × R)h−1 × S → ∆A } h∈[H], from a random number z ∈ R and a history of length h—say (s1, a1, b1, r1, · · · , sh), to a distribution over actions inA. 
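For readers who prefer code, the Bellman equations above translate directly into a short backward-induction routine for evaluating a fixed pair of Markov policies when the transition and reward are known. The tabular array layout below is a hypothetical convention chosen for illustration, not part of the paper.

```python
import numpy as np

def evaluate_markov_policies(P, r, mu, nu):
    """Backward induction for V^{mu,nu} in a tabular zero-sum Markov game.

    P[h][s, a, b] is a distribution over next states, r[h][s, a, b] the reward,
    mu[h][s] and nu[h][s] distributions over the two players' actions.
    """
    H = len(r)
    S, A, B = r[0].shape
    V = np.zeros((H + 1, S))                    # V_{H+1} = 0
    for h in reversed(range(H)):
        Q = r[h] + P[h] @ V[h + 1]              # Q_h = r_h + P_h V_{h+1}
        for s in range(S):
            pi = np.outer(mu[h][s], nu[h][s])   # product policy mu_h x nu_h at state s
            V[h, s] = (pi * Q[s]).sum()         # V_h = D_{mu_h x nu_h} Q_h
    return V
```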
By symmetry, we can also define the (general) policy ν of the min-player, by replacing the action set A in the definition by set B. The random number z is sampled from some underlying distribution D, but may be shared among all steps h ∈ [H]. For a pair of general policy (µ, ν), we can still use the same definitions (1) to define their value V µ,ν1 (s1) at step 1. We can also define the best response ν †(µ) of a general policy µ as the minimizing policy so that V µ,†1 (s1) ≡ V µ,ν†(µ) 1 (s1) = infν V µ,ν h (s1) at step 1. We remark that the best response of a general policy is not necessarily Markovian. 2The minimax theorem here is different from the one for matrix games, i.e. maxφminψ φ>Aψ = minψmaxφ φ >Aψ for any matrix A, since here V µ,νh (s) is in general not bilinear in µ, ν. Algorithm 1 Optimistic Nash Q-learning 1: Initialize: for any (s, a, b, h), Qh(s, a, b)← H , Qh(s, a, b)← 0, Nh(s, a, b)← 0, πh(a, b|s)← 1/(AB). 2: for episode k = 1, . . . ,K do 3: receive s1. 4: for step h = 1, . . . ,H do 5: take action (ah, bh) ∼ πh(·, ·|sh). 6: observe reward rh(sh, ah, bh) and next state sh+1. 7: t = Nh(sh, ah, bh)← Nh(sh, ah, bh) + 1. 8: Qh(sh, ah, bh)← (1− αt)Qh(sh, ah, bh) + αt(rh(sh, ah, bh) + V h+1(sh+1) + βt) 9: Q h (sh, ah, bh)← (1− αt)Qh(sh, ah, bh) + αt(rh(sh, ah, bh) + V h+1(sh+1)− βt) 10: πh(·, ·|sh)← CCE(Qh(sh, ·, ·), Qh(sh, ·, ·)) 11: V h(sh)← (DπhQh)(sh); V h(sh)← (DπhQh)(sh). Learning Objective There are two possible learning objectives in the setting of Markov games. The first one is to find the best response for a fixed opponent. Without loss of generality, we consider the case where the learning agent is the max-player, and the min-player is the opponent. Definition 1 ( -approximate best response). For an opponent with an fixed unknown general policy ν, a general policy µ̂ is the -approximate best response if V †,ν1 (s1)− V µ̂,ν 1 (s1) ≤ . The second goal is to find a Nash equilibrium of the Markov games. We measure the suboptimality of any pair of general policies (µ̂, ν̂) using the gap between their performance and the performance of the optimal strategy (i.e. Nash equilibrium) when playing against the best responses respectively: V †,ν̂1 (s1)− V µ̂,† 1 (s1) = [ V †,ν̂1 (s1)− V ?1 (s1) ] + [ V ?1 (s1)− V µ̂,† 1 (s1) ] Definition 2 ( -approximate Nash equilibrium). A pair of general policies (µ̂, ν̂) is an - approximate Nash equilibrium, if V †,ν̂1 (s1)− V µ̂,† 1 (s1) ≤ . Loosely speaking, Nash equilibria can be viewed as “the best responses to the best responses”. In most applications, they are the ultimate solutions to the games. In Section 3 and 4, we present sharp guarantees for learning an approximate Nash equilibrium with near-optimal sample complexity. However, rather surprisingly, learning a best response in the worst case is more challenging than learning the Nash equilibrium. In Section 5, we present a computational hardness result for learning an approximate best response. 3 Optimistic Nash Q-learning In this section, we present our first algorithm Optimistic Nash Q-learning and its corresponding theoretical guarantees. Algorithm part I: learning values Our algorithm Optimistic Nash Q-learning (Algorithm 1) is an optimistic variant of Nash Q-learning [11]. For each step in each episode, it (a) takes actions according to the previously computed policy πh, and observes the reward and next state, (b) performs incremental updates on Q-values, and (c) computes new greedy policies and updates V -values. 
Part (a) is straightforward; we now focus on explaining part (b) and part (c). In part (b), the incremental updates on Q-values (Line 8, 9) are almost the same as standard Qlearning [34], except here we maintain two separate Q-values—Qh and Qh, as upper and lower confidence versions respectively. We add and subtract a bonus term βt in the corresponding updates, which depends on t = Nh(sh, ah, bh)—the number of times (sh, ah, bh) has been visited at step h. We pick parameter αt and βt as follows for some large constant c , and log factors ι: αt = (H + 1)/(H + t), βt = c √ H3ι/t (3) In part (c), our greedy policies are computed using a Coarse Correlated Equilibrium (CCE) subroutine, which is first introduced by [36] to solve Markov games using value iteration algorithms. For Algorithm 2 Certified Policy µ̂ of Nash Q-learning 1: sample k ← Uniform([K]). 2: for step h = 1, . . . ,H do 3: observe sh, and take action ah ∼ µkh(·|sh). 4: observe bh, and set t← Nkh (sh, ah, bh). 5: sample m ∈ [t] with P(m = i) = αit. 6: k ← kmh (sh, ah, bh) any pair of matrices Q,Q ∈ [0, H]A×B , CCE(Q,Q) returns a distribution π ∈ ∆A×B such that E(a,b)∼πQ(a, b) ≥max a? E(a,b)∼πQ(a?, b) (4) E(a,b)∼πQ(a, b) ≤min b? E(a,b)∼πQ(a, b?) It can be shown that a CCE always exists, and it can be computed by linear programming in polynomial time (see Appendix B for more details). Now we are ready to state an intermediate guarantee for optimistic Nash Q-learning. We assume the algorithm has played the game for K episodes, and we use V k, Qk, Nk, πk to denote values, visitation counts, and policies at the beginning of the k-th episode in Algorithm 1. Lemma 3. For any p ∈ (0, 1], choose hyperparameters αt, βt as in (3) for a large absolute constant c and ι = log(SABT/p). Then, with probability at least 1−p, Algorithm 1 has following guarantees • V kh(s) ≥ V ?h (s) ≥ V k h(s) for all (s, h, k) ∈ S × [H]× [K]. • (1/K) · ∑K k=1(V k 1 − V k 1)(s1) ≤ O (√ H5SABι/K ) . Lemma 3 makes two statements. First, it claims that the V k h(s) and V k h(s) computed in Algorithm 1 are indeed upper and lower bounds of the value of the Nash equilibrium. Second, Lemma 3 claims that the averages of the upper bounds and the lower bounds are also very close to the value of Nash equilibrium V ?1 (s1), where the gap decrease as 1/ √ K. This implies that in order to learn the value V ?1 (s1) up to -accuracy, we only need O(H5SABι/ 2) episodes. However, Lemma 3 has a significant drawback: it only guarantees the learning of the value of Nash equilibrium. It does not imply that the policies (µk, νk) used in Algorithm 1 are close to the Nash equilibrium, which requires the policies to have a near-optimal performance even against their best responses. This is a major difference between Markov games and standard MDPs, and is the reason why standard techniques from the MDP literature does not apply here. To resolve this problem, we propose a novel way to extract a certified policy from the optimistic Nash Q-learning algorithm. Algorithm part II: certified policies We describe our procedure of executing the certified policy µ̂ of the max-player is described in Algorithm 2. Above, µkh, ν k h denote the marginal distributions of πkh produced in Algorithm 1 over action set A,B respectively. We also introduce the following quantities that directly induced by αt: α0t := ∏t j=1 (1− αj), αit := αi ∏t j=i+1 (1− αj) (5) whose properties are listed in the following Lemma 11. Especially, ∑t i=1 α i t = 1, so {αit}ti=1 defines a distribution over [t]. 
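To make parts (b) and (c) concrete, the sketch below shows the optimistic Q-value updates of lines 8-9 of Algorithm 1 and one way to realize the CCE subroutine of Eq. (4) as a feasibility linear program. The use of scipy's linprog and all variable names are illustrative choices rather than part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def q_update(Q_up, Q_low, s, a, b, reward, V_up_next, V_low_next, t, H, c, iota):
    """Lines 8-9 of Algorithm 1: optimistic upper/lower Q-value updates."""
    alpha = (H + 1) / (H + t)                    # learning rate alpha_t from Eq. (3)
    beta = c * np.sqrt(H ** 3 * iota / t)        # exploration bonus beta_t from Eq. (3)
    Q_up[s, a, b] = (1 - alpha) * Q_up[s, a, b] + alpha * (reward + V_up_next + beta)
    Q_low[s, a, b] = (1 - alpha) * Q_low[s, a, b] + alpha * (reward + V_low_next - beta)

def cce(Q_up, Q_low):
    """CCE(Q_up, Q_low) of Eq. (4) as a feasibility LP over joint distributions on A x B."""
    A, B = Q_up.shape
    A_ub, b_ub = [], []
    for a_star in range(A):   # max-player cannot gain under Q_up by deviating to a_star
        A_ub.append((np.tile(Q_up[a_star], (A, 1)) - Q_up).ravel()); b_ub.append(0.0)
    for b_star in range(B):   # min-player cannot gain under Q_low by deviating to b_star
        A_ub.append((Q_low - np.tile(Q_low[:, b_star:b_star + 1], (1, B))).ravel()); b_ub.append(0.0)
    res = linprog(c=np.zeros(A * B), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, A * B)), b_eq=[1.0], bounds=(0.0, 1.0))
    return res.x.reshape(A, B)                   # joint execution policy pi_h(., .|s)
```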
We use kmh (s, a, b) to denote the index of the episode where (s, a, b) is observed in step h for the m-th time. The certified policy ν̂ of the min-player is easily defined by symmetry. We note that µ̂, ν̂ are clearly general policies, but they are no longer Markov policies. The intuitive reason why such policy µ̂ defined in Algorithm 2 is certified by Nash Q-learning algorithm, is because the update equation in line 8 of Algorithm 1 and equation (5) gives relation: Q k h(s, a, b) = α 0 tH + ∑t i=1 α i t [ rh(s, a, b) + V kih(s,a,b) h+1 (s kih(s,a,b) h+1 ) + βi ] Algorithm 3 Optimistic Nash V-learning (the max-player version) 1: Initialize: for any (s, a, b, h), V h(s)← H , Lh(s, a)← 0, Nh(s)← 0, µh(a|s)← 1/A. 2: for episode k = 1, . . . ,K do 3: receive s1. 4: for step h = 1, . . . ,H do 5: take action ah ∼ µh(·|sh), observe the action bh from opponent. 6: observe reward rh(sh, ah, bh) and next state sh+1. 7: t = Nh(sh)← Nh(sh) + 1. 8: V h(sh)← min{H, (1− αt)V h(sh) + αt(rh(sh, ah, bh) + V h+1(sh+1) + βt)}. 9: for all a ∈ A do 10: `h(sh, a)← [H − rh(sh, ah, bh)− V h+1(sh+1)]I{ah = a}/[µh(ah|sh) + ηt]. 11: Lh(sh, a)← (1− αt)Lh(sh, a) + αt`h(sh, a). 12: set µh(·|sh) ∝ exp[−(ηt/αt)Lh(sh, ·)]. This certifies the good performance against the best responses if the max-player plays a mixture of policies {µk i h(s,a,b) h+1 }ti=1 at step h + 1 with mixing weights {αit}ti=1 (see Appendix C.2 for more details). A recursion of this argument leads to the certified policy µ̂—a nested mixture of policies. We now present our main result for Nash Q-learning, using the certified policies (µ̂, ν̂). Theorem 4 (Sample Complexity of Nash Q-learning). For any p ∈ (0, 1], choose hyperparameters αt, βt as in (3) for large absolute constant c and ι = log(SABT/p). Then, with probability at least 1− p, if we run Nash Q-learning (Algorithm 1) for K episodes where K ≥ Ω ( H5SABι/ 2 ) , the certified policies (µ̂, ν̂) (Algorithm 2) will be -approximate Nash, i.e. V †,ν̂1 (s1)−V µ̂,† 1 (s1) ≤ . Theorem 4 asserts that if we run the optimistic Nash Q-learning algorithm for more than O(H5SABι/ 2) episodes, the certified policies (µ̂, ν̂) extracted using Algorithm 2 will be - approximate Nash equilibrium (Definition 2). We make two remarks. First, the executions of the certified policies µ̂, ν̂ require the storage of {µkh} and {νkh} for all k, h ∈ [H] × [K]. This makes the space complexity of our algorithm scales up linearly in the total number of episodes K. Second, Q-learning style algorithms (especially online updates) are crucial in our analysis for achieving sample complexity linear in S. They enjoy the property that every sample is only been used once, on the value function that is independent of this sample. In contrast, value iteration type algorithms do not enjoy such an independence property, which is why the best existing sample complexity scales as S2 [2]. 3 4 Optimistic Nash V-learning In this section, we present our new algorithm Optimistic Nash V-learning and its corresponding theoretical guarantees. This algorithm improves over Nash Q-learning in sample complexity from Õ(SAB) to Õ(S(A+B)), when only highlighting the dependency on S,A,B. Algorithm description Nash V-learning combines the idea of Follow-The-Regularized-Leader (FTRL) in the bandit literature with the Q-learning algorithm in reinforcement learning. This algorithm does not require extra information exchange between players other than standard game playing, thus can be ran separately by the two players. 
We describe the max-player version in Algorithm 3. See Algorithm 7 in Appendix D for the min-player version, where V h, Lh, νh, ηt and βt are defined symmetrically. For each step in each episode, the algorithm (a) first takes action according to µh, observes the action of the opponent, the reward, and the next state, (b) performs an incremental update on V , and (c) 3Despite [1] provides techniques to improve the sample complexity from S2 to S for value iteration in MDP, the same techniques can not be applied to Markov games due to the unique challenge that, in Markov games, we aim at finding policies that are good against their best responses. Algorithm 4 Certified Policy µ̂ of Nash V-learning 1: sample k ← Uniform([K]). 2: for step h = 1, . . . ,H do 3: observe sh, and set t← Nkh (sh). 4: sample m ∈ [t] with P(m = i) = αit. 5: k ← kmh (sh). 6: take action ah ∼ µkh(·|sh). updates policy µh. The first two parts are very similar to Nash Q-learning. In the third part, the agent first computes `h(sh, ·) as the importance weighted estimator of the current loss. She then computes the weighted cumulative loss Lh(sh, ·). Finally, the policy µh is updated using FTRL principle: µh(·|sh)← argminµ∈∆A ηt〈Lh(sh, ·), µ〉+ αtKL(µ‖µ0) Here µ0 is the uniform distribution over all actions A. Solving above minimization problem gives the update equation as in Line 12 in Algorithm 3. In multi-arm bandit, FTRL can defend against adversarial losses, with regret independent of the number of the opponent’s actions. This property turns out to be crucial for Nash V-learning to achieve sharper sample complexity than Nash Qlearning (see the analog of Lemma 3 in Lemma 15). Similar to Nash Q-learning, we also propose a new algorithm (Algorithm 4) to extract a certified policy from the optimistic Nash V-learning algorithm. The certified policies are again non-Markovian. We choose all hyperparameters as follows, for some large constant c , and log factors ι. αt = H + 1 H + t , ηt = √ logA At , η t = √ logB Bt , βt = c √ H4Aι t , β t = c √ H4Bι t , (6) We now present our main result on the sample complexity of Nash V-learning. Theorem 5 (Sample Complexity of Nash V-learning). For any p ∈ (0, 1], choose hyperparameters as in (6) for large absolute constant c and ι = log(SABT/p). Then, with probability at least 1− p, if we run Nash V-learning (Algorithm 3 and 7) for K episodes with K ≥ Ω ( H6S(A+B)ι/ 2 ) , its induced policies (µ̂, ν̂) (Algorithm 4) will be -approximate Nash, i.e. V †,ν̂1 (s1)− V µ̂,† 1 (s1) ≤ . Theorem 4 claims that if we run the optimistic Nash V-learning for more thanO(H6S(A+B)ι/ 2) episodes, the certified policies (µ̂, ν̂) extracted from Algorithm 4 will be -approximate Nash (Definition 2). Nash V-learning is the first algorithm of which the sample complexity matches the information theoretical lower bound Ω(H3S(A+B)/ 2) up to poly(H) factors and logarithmic terms. 5 Hardness for Learning the Best Response In this section, we present a computational hardness result for computing the best response against an opponent with a fixed unknown policy. We further show that this implies the computational hardness result for achieving sublinear regret in Markov games when playing against adversarial opponents, which rules out a popular approach to design algorithms for finding Nash equilibria. 
We first remark that if the opponent is restricted to playing only Markov policies, then learning the best response is as easy as learning an optimal policy in a standard single-agent Markov decision process, for which efficient algorithms are known to exist. Nevertheless, when the opponent may play arbitrary policies, which can be non-Markovian, we show that finding the best response against such policies is computationally challenging.

We say an algorithm is a polynomial time algorithm for learning the best response if, for any policy ν of the opponent and any ε > 0, the algorithm finds an ε-approximate best response to ν (Definition 1) with probability at least 1/2, in time polynomial in S, H, A, B, ε^{−1}. We can show the following hardness result for finding the best response in polynomial time.

Theorem 6 (Hardness for learning the best response). There exists a Markov game with deterministic transitions and rewards defined for any horizon H ≥ 1 with S = 2, A = 2, and B = 2, such that if there exists a polynomial time algorithm for learning the best response for this Markov game, then there exists a polynomial time algorithm for learning parity with noise (see the problem description in Appendix E).

We remark that learning parity with noise is a notoriously difficult problem that has been used to design efficient cryptographic schemes. It is conjectured by the community to be hard.

Conjecture 7 ([16]). There is no polynomial time algorithm for learning parity with noise.

Theorem 6 together with Conjecture 7 demonstrates the fundamental difficulty—if not strict impossibility—of designing a polynomial time algorithm for learning best responses in Markov games. The intuitive reason for this computational hardness is that, while the underlying system has Markov transitions, the opponent can play policies that encode long-term, non-Markovian correlations, such as parity with noise, which makes it very challenging to find the best response. It is known that learning many other sequential models with long-term correlations (such as hidden Markov models or partially observable MDPs) is as hard as learning parity with noise [20].

5.1 Hardness for Playing Against an Adversarial Opponent

Theorem 6 directly implies the difficulty of achieving sublinear regret when playing against adversarial opponents in Markov games. Our construction of hard instances in the proof of Theorem 6 further allows the adversarial opponent to play only Markov policies in each episode. Since playing against an adversarial opponent is a problem of independent interest, we present the full result here.

Without loss of generality, we consider the setting where the algorithm controls only the max-player, while the min-player is an adversarial opponent. At the beginning of every episode k, both players pick their own policies µ^k and ν^k, and execute them throughout the episode. The adversarial opponent may pick her policy ν^k adaptively, based on all observations from the earlier episodes. We say an algorithm for the learner is a polynomial time no-regret algorithm if there exists a δ > 0 such that for any adversarial opponent and any fixed K > 0, the algorithm outputs policies {µ^k}_{k=1}^K that satisfy the following, with probability at least 1/2, in time polynomial in S, H, A, B, K:
$$\mathrm{Regret}(K) = \sup_{\mu} \sum_{k=1}^{K} V^{\mu,\nu^k}_1(s_1) - \sum_{k=1}^{K} V^{\mu^k,\nu^k}_1(s_1) \le \mathrm{poly}(S,H,A,B)\, K^{1-\delta}. \tag{7}$$

Theorem 6 directly implies the following hardness result for achieving no-regret against adversarial opponents, a result that first appeared in [25].

Corollary 8 (Hardness for playing against an adversarial opponent). There exists a Markov game with deterministic transitions and rewards defined for any horizon H ≥ 1 with S = 2, A = 2, and B = 2, such that if there exists a polynomial time no-regret algorithm for this Markov game, then there exists a polynomial time algorithm for learning parity with noise (see the problem description in Appendix E). The claim continues to hold even if we restrict the adversarial opponents in the Markov game to be non-adaptive and to play only Markov policies in each episode.

Similar to Theorem 6, Corollary 8 combined with Conjecture 7 demonstrates the fundamental difficulty of designing a polynomial time no-regret algorithm against adversarial opponents in Markov games.

Implications for algorithm design for finding Nash equilibria Corollary 8 also rules out a natural approach to designing efficient algorithms for finding approximate Nash equilibria by combining two no-regret algorithms. In fact, it is not hard to see that if the min-player also runs a no-regret algorithm and obtains a regret bound symmetric to (7), then summing the two regret bounds shows that the mixture policies (µ̂, ν̂)—which assign uniform mixing weights to the policies {µ^k}_{k=1}^K and {ν^k}_{k=1}^K respectively—form an approximate Nash equilibrium. Corollary 8 with Conjecture 7 implies that any algorithm designed using this approach is not a polynomial time algorithm.

Broader Impact

As this is a theoretical contribution, we do not envision that our direct results will have a tangible societal impact. Our broader line of inquiry could influence how the community thinks about designing more sample-efficient algorithms for multi-agent reinforcement learning, which could be useful for making artificial intelligence more resource and energy efficient.

Acknowledgments

TY is partially supported by NSF BIGDATA grant IIS1741341.
1. What are the key contributions and strengths of the paper regarding sample-efficient learning in two-player zero-sum model-free games? 2. What are the weaknesses or limitations of the paper, particularly in terms of its compact nature and lack of intuition in some parts? 3. How do the proposed algorithms, Optimistic Nash Q-Learning and Optimistic Nash V-Learning, compare to prior works in terms of their sample complexity and applicability to different scenarios? 4. What is the significance of the theoretical result that learning a policy against an opponent with no regret cannot be done with a polynomial complexity? 5. Are there any questions or concerns regarding the implementation or practical application of the proposed algorithms and theories?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper presents three theoretical results, focusing on sample-efficient learning in two-player zero-sum model-free games with discrete states and discrete actions:
- Optimistic Nash Q-Learning, which maintains an upper and a lower bound on the Q-value of every (state, action-A, action-B) tuple. This algorithm is proven to have a sample complexity of O(H^5 SAB).
- Optimistic Nash V-Learning, which learns an upper bound on the V-function of every state and iterates on the actions of the A agent to compute a loss used to train a policy. This algorithm has a complexity of O(H^6 S(A+B)).
- A theoretical result that learning a policy against an opponent with no regret cannot be done with polynomial complexity.

Strengths The theoretical results are strong and well-explained. They appear novel, and they improve substantially on the current state of the art. The two algorithms proposed in the paper allow one to choose whichever suits a problem best, depending on the length of an episode and the number of actions available to the A and B agents. The Appendix contains many proofs and additional details that are useful for understanding the main results presented in the paper.

Weaknesses A "weakness" of the paper is that it introduces three contributions at once, and is therefore very compact. While the paper is well-written and easy to read for the amount of information it contains, many parts of the paper lack intuition (such as Equation 5, the meaning of alpha, and the impact of how it is computed), and the many references to the Appendix make reading the paper a bit difficult at times. This is really a paper that the reader must accept reading without understanding everything, even though the Appendix answers many questions after the paper is read.
NIPS
Title Near-Optimal Reinforcement Learning with Self-Play Abstract This paper considers the problem of designing optimal algorithms for reinforcement learning in two-player zero-sum games. We focus on self-play algorithms which learn the optimal policy by playing against itself without any direct supervision. In a tabular episodic Markov game with S states, A max-player actions and B min-player actions, the best existing algorithm for finding an approximate Nash equilibrium requires Õ(SAB) steps of game playing, when only highlighting the dependency on (S,A,B). In contrast, the best existing lower bound scales as Ω(S(A + B)) and has a significant gap from the upper bound. This paper closes this gap for the first time: we propose an optimistic variant of the Nash Qlearning algorithm with sample complexity Õ(SAB), and a new Nash V-learning algorithm with sample complexity Õ(S(A + B)). The latter result matches the information-theoretic lower bound in all problem-dependent parameters except for a polynomial factor of the length of each episode. In addition, we present a computational hardness result for learning the best responses against a fixed opponent in Markov games—a learning objective different from finding the Nash equilibrium. 1 Introduction A wide range of modern artificial intelligence challenges can be cast as a multi-agent reinforcement learning (multi-agent RL) problem, in which more than one agent performs sequential decision making in an interactive environment. Multi-agent RL has achieved significant recent success on traditionally challenging tasks, for example in the game of GO [30, 31], Poker [6], real-time strategy games [33, 22], decentralized controls or multiagent robotics systems [5], autonomous driving [27], as well as complex social scenarios such as hide-and-seek [3]. In many scenarios, the learning agents even outperform the best human experts . Despite the great empirical success, a major bottleneck for many existing RL algorithms is that they require a tremendous number of samples. For example, the biggest AlphaGo Zero model is trained on tens of millions of games and took more than a month to train [31]. While requiring such amount of samples may be acceptable in simulatable environments such as GO, it is not so in other sampleexpensive real world settings such as robotics and autonomous driving. It is thus important for us to understand the sample complexity in RL—how can we design algorithms that find a near optimal policy with a small number of samples, and what is the fundamental limit, i.e. the minimum number of samples required for any algorithm to find a good policy. Theoretical understandings on the sample complexity for multi-agent RL are rather limited, especially when compared with single-agent settings. The standard model for a single-agent setting is an episodic Markov Decision Process (MDP) with S states, and A actions, and H steps per episode. The best known algorithm can find an near-optimal policy in Θ̃(poly(H)SA/ 2) episodes, which matches the lower bound up to a single H factor [1, 8]. In contrast, in multi-agent settings, the optimal sample complexity remains open even in the basic setting of two-player tabular Markov games 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. [28], where the agents are required to find the solutions of the games—the Nash equilibria. 
The best known algorithm, VI-ULCB, finds an -approximate Nash equilibrium in Õ(poly(H)S2AB/ 2) episodes [2], where B is the number of actions for the other player. The information theoretical lower bound is Ω(poly(H)S(A + B)/ 2). Specifically, the number of episodes required for the algorithm scales quadratically in both S and (A,B), and exhibits a gap from the linear dependency in the lower bound. This motivates the following question: Can we design algorithms with near-optimal sample complexity for learning Markov games? In this paper, we present the first line of near-optimal algorithms for two-player Markov games that match the aforementioned lower bound up to a poly(H) factor. This closes the open problem for achieving the optimal sample complexity in all (S,A,B) dependency. Our algorithm learns by playing against itself without requiring any direct supervision, and is thus a self-play algorithm. 1.1 Our contributions • We propose an optimistic variant of Nash Q-learning [11], and prove that it achieves sample complexity Õ(H5SAB/ 2) for finding an -approximate Nash equilibrium in two-player Markov games (Section 3). Our algorithm builds optimistic upper and lower estimates of Q-values, and computes the Coarse Correlated Equilibrium (CCE) over this pair of Q estimates as its execution policies for both players. • We design a new algorithm—Nash V-learning—for finding approximate Nash equilibria, and show that it achieves sample complexity Õ(H6S(A + B)/ 2) (Section 4). This improves upon Nash Q-learning in case min {A,B} > H . It is also the first result that matches the minimax lower bound up to only a poly(H) factor. This algorithm builds optimistic upper and lower estimates of V -values, and features a novel combination of Follow-the-Regularized-Leader (FTRL) and standard Q-learning algorithm to determine its execution policies. • Apart from finding Nash equilibria, we prove that learning the best responses of fixed opponents in Markov games is as hard as learning parity with noise—a notoriously difficult problem that is believed to be computationally hard (Section 5). As a corollary, this hardness result directly implies that achieving sublinear regret against adversarial opponents in Markov games is also computationally hard, a result that first appeared in [25]. This in turn rules out the possibility of designing efficient algorithms for finding Nash equilibria by running no-regret algorithms for each player separately. In addition to above contributions, this paper also features a novel approach of extracting certified policies—from the estimates produced by reinforcement learning algorithms such as Nash Qlearning and Nash V-learning—that are certified to have similar performance as Nash equilibrium policies, even when facing against their best response (see Section 3 for more details). We believe this technique could be of broader interest to the community. 1.2 Related Work Markov games Markov games (or stochastic games) are proposed in the early 1950s [28]. They are widely used to model multi-agent RL. Learning the Nash equilibria of Markov games has been studied in classical work [18, 19, 11, 10], where the transition matrix and reward are assumed to be known, or in the asymptotic setting where the number of data goes to infinity. These results do not directly apply to the non-asymptotic setting where the transition and reward are unknown and only a limited amount of data are available for estimating them. 
A recent line of work tackles self-play algorithms for Markov games in the non-asymptotic setting with strong reachability assumptions. Specifically, Wei et al. [35] assumes no matter what strategy one agent sticks to, the other agent can always reach all states by playing a certain policy, and Jia et al. [13], Sidford et al. [29] assume access to simulators (or generative models) that enable the agent to directly sample transition and reward information for any state-action pair. These settings ensure that all states can be reached directly, so no sophisticated exploration is not required. Very recently, [2, 36] study learning Markov games without these reachability assumptions, where exploration becomes essential. However, both results suffer from highly suboptimal sample complexity. We compare them with our results in Table 1. The results of [36] also applies to the linear function approximation setting. We remark that the R-max algorithm [4] does provide provable guarantees for learning Markov game, even in the setting of playing against the adversarial opponent, but using a definition of regret that is weaker than the standard regret. Their result does not imply any sample complexity result for finding Nash equilibrium policies. Adversarial MDP Another line of related work focuses on provably efficient algorithms for adversarial MDPs. Most work in this line considers the setting with adversarial rewards [38, 26, 15], because adversarial MDP with changing dynamics is computationally hard even under fullinformation feedback [37]. These results do not direcly imply provable self-play algorithms in our setting, because the opponent in Markov games can affect both the reward and the transition. Single-agent RL There is a rich literature on reinforcement learning in MDPs [see e.g. 12, 24, 1, 7, 32, 14]. MDP is a special case of Markov games, where only a single agent interacts with a stochastic environment. For the tabular episodic setting with nonstationary dynamics and no simulators, the best sample complexity achieved by existing model-based and model-free algorithms are Õ(H3SA/ 2) [1] and Õ(H4SA/ 2) [14], respectively, where S is the number of states, A is the number of actions, H is the length of each episode. Both of them (nearly) match the lower bound Ω(H3SA/ 2) [12, 23, 14]. 2 Preliminaries We consider zero-sum Markov Games (MG) [28, 18], which are also known as stochastic games in the literature. Zero-sum Markov games are generalization of standard Markov Decision Processes (MDP) into the two-player setting, in which the max-player seeks to maximize the total return and the min-player seeks to minimize the total return. Formally, we denote a tabular episodic Markov game as MG(H,S,A,B,P, r), where H is the number of steps in each episode, S is the set of states with |S| ≤ S, (A,B) are the sets of actions of the max-player and the min-player respectively, P = {Ph}h∈[H] is a collection of transition matrices, so that Ph(·|s, a, b) gives the distribution over states if action pair (a, b) is taken for state s at step h, and r = {rh}h∈[H] is a collection of reward functions, and rh : S ×A×B → [0, 1] is the deterministic reward function at step h. 1 In each episode of this MG, we start with a fixed initial state s1. Then, at each step h ∈ [H], both players observe state sh ∈ S, and the max-player picks action ah ∈ A while the min-player picks action bh ∈ B simultaneously. 
Both players observe the actions of the opponents, receive reward rh(sh, ah, bh), and then the environment transitions to the next state sh+1 ∼ Ph(·|sh, ah, bh). The episode ends when sH+1 is reached. 1We assume the rewards in [0, 1] for normalization. Our results directly generalize to randomized reward functions, since learning the transition is more difficult than learning the reward. Markov policy, value function A Markov policy µ of the max-player is a collection of H functions {µh : S → ∆A}h∈[H], which maps from a state to a distribution of actions. Here ∆A is the probability simplex over action set A. Similarly, a policy ν of the min-player is a collection of H functions {νh : S → ∆B}h∈[H]. We use the notation µh(a|s) and νh(b|s) to present the probability of taking action a or b for state s at step h under Markov policy µ or ν respectively. We use V µ,νh : S → R to denote the value function at step h under policy µ and ν, so that V µ,ν h (s) gives the expected cumulative rewards received under policy µ and ν, starting from s at step h: V µ,νh (s) := Eµ,ν [∑H h′=h rh′(sh′ , ah′ , bh′) ∣∣∣ sh = s] . (1) We also define Qµ,νh : S × A × B → R to denote Q-value function at step h so that Q µ,ν h (s, a, b) gives the cumulative rewards received under policy µ and ν, starting from (s, a, b) at step h: Qµ,νh (s, a, b) := Eµ,ν [∑H h′=h rh′(sh′ , ah′ , bh′) ∣∣∣ sh = s, ah = a, bh = b] . (2) For simplicity, we use notation of operator Ph so that [PhV ](s, a, b) := Es′∼Ph(·|s,a,b)V (s′) for any value function V . We also use notation [DπQ](s) := E(a,b)∼π(·,·|s)Q(s, a, b) for any action-value function Q. By definition of value functions, we have the Bellman equation Qµ,νh (s, a, b) = (rh + PhV µ,ν h+1)(s, a, b), V µ,ν h (s) = (Dµh×νhQ µ,ν h )(s) for all (s, a, b, h) ∈ S ×A× B × [H]. We define V µ,νH+1(s) = 0 for all s ∈ SH+1. Best response and Nash equilibrium For any Markov policy of the max-player µ, there exists a best response of the min-player, which is a Markov policy ν†(µ) satisfying V µ,ν †(µ) h (s) = infν V µ,ν h (s) for any (s, h) ∈ S × [H]. Here the infimum is taken over all possible policies which are not necessarily Markovian (we will define later in this section). We define V µ,†h := V µ,ν†(µ) h . By symmetry, we can also define µ†(ν) and V †,νh . It is further known (cf. [9]) that there exist Markov policies µ?, ν? that are optimal against the best responses of the opponents, in the sense that V µ ?,† h (s) = supµ V µ,† h (s), V †,ν? h (s) = infν V †,ν h (s), for all (s, h). We call these optimal strategies (µ?, ν?) the Nash equilibrium of the Markov game, which satisfies the following minimax equation: 2 supµ infν V µ,ν h (s) = V µ?,ν? h (s) = infν supµ V µ,ν h (s). Intuitively, a Nash equilibrium gives a solution in which no player has anything to gain by changing only her own policy. We further abbreviate the values of Nash equilibrium V µ ?,ν? h and Q µ?,ν? h as V ?h and Q ? h. We refer readers to Appendix A for Bellman optimality equations for values of best responses or Nash equilibria. General (non-Markovian) policy In certain situations, it is beneficial to consider general, historydependent policies that are not necessarily Markovian. A (general) policy µ of the max-player is a set of H maps µ := { µh : R × (S × A × B × R)h−1 × S → ∆A } h∈[H], from a random number z ∈ R and a history of length h—say (s1, a1, b1, r1, · · · , sh), to a distribution over actions inA. 
By symmetry, we can also define the (general) policy ν of the min-player, by replacing the action set A in the definition by set B. The random number z is sampled from some underlying distribution D, but may be shared among all steps h ∈ [H]. For a pair of general policy (µ, ν), we can still use the same definitions (1) to define their value V µ,ν1 (s1) at step 1. We can also define the best response ν †(µ) of a general policy µ as the minimizing policy so that V µ,†1 (s1) ≡ V µ,ν†(µ) 1 (s1) = infν V µ,ν h (s1) at step 1. We remark that the best response of a general policy is not necessarily Markovian. 2The minimax theorem here is different from the one for matrix games, i.e. maxφminψ φ>Aψ = minψmaxφ φ >Aψ for any matrix A, since here V µ,νh (s) is in general not bilinear in µ, ν. Algorithm 1 Optimistic Nash Q-learning 1: Initialize: for any (s, a, b, h), Qh(s, a, b)← H , Qh(s, a, b)← 0, Nh(s, a, b)← 0, πh(a, b|s)← 1/(AB). 2: for episode k = 1, . . . ,K do 3: receive s1. 4: for step h = 1, . . . ,H do 5: take action (ah, bh) ∼ πh(·, ·|sh). 6: observe reward rh(sh, ah, bh) and next state sh+1. 7: t = Nh(sh, ah, bh)← Nh(sh, ah, bh) + 1. 8: Qh(sh, ah, bh)← (1− αt)Qh(sh, ah, bh) + αt(rh(sh, ah, bh) + V h+1(sh+1) + βt) 9: Q h (sh, ah, bh)← (1− αt)Qh(sh, ah, bh) + αt(rh(sh, ah, bh) + V h+1(sh+1)− βt) 10: πh(·, ·|sh)← CCE(Qh(sh, ·, ·), Qh(sh, ·, ·)) 11: V h(sh)← (DπhQh)(sh); V h(sh)← (DπhQh)(sh). Learning Objective There are two possible learning objectives in the setting of Markov games. The first one is to find the best response for a fixed opponent. Without loss of generality, we consider the case where the learning agent is the max-player, and the min-player is the opponent. Definition 1 ( -approximate best response). For an opponent with an fixed unknown general policy ν, a general policy µ̂ is the -approximate best response if V †,ν1 (s1)− V µ̂,ν 1 (s1) ≤ . The second goal is to find a Nash equilibrium of the Markov games. We measure the suboptimality of any pair of general policies (µ̂, ν̂) using the gap between their performance and the performance of the optimal strategy (i.e. Nash equilibrium) when playing against the best responses respectively: V †,ν̂1 (s1)− V µ̂,† 1 (s1) = [ V †,ν̂1 (s1)− V ?1 (s1) ] + [ V ?1 (s1)− V µ̂,† 1 (s1) ] Definition 2 ( -approximate Nash equilibrium). A pair of general policies (µ̂, ν̂) is an - approximate Nash equilibrium, if V †,ν̂1 (s1)− V µ̂,† 1 (s1) ≤ . Loosely speaking, Nash equilibria can be viewed as “the best responses to the best responses”. In most applications, they are the ultimate solutions to the games. In Section 3 and 4, we present sharp guarantees for learning an approximate Nash equilibrium with near-optimal sample complexity. However, rather surprisingly, learning a best response in the worst case is more challenging than learning the Nash equilibrium. In Section 5, we present a computational hardness result for learning an approximate best response. 3 Optimistic Nash Q-learning In this section, we present our first algorithm Optimistic Nash Q-learning and its corresponding theoretical guarantees. Algorithm part I: learning values Our algorithm Optimistic Nash Q-learning (Algorithm 1) is an optimistic variant of Nash Q-learning [11]. For each step in each episode, it (a) takes actions according to the previously computed policy πh, and observes the reward and next state, (b) performs incremental updates on Q-values, and (c) computes new greedy policies and updates V -values. 
Part (a) is straightforward; we now focus on explaining part (b) and part (c). In part (b), the incremental updates on Q-values (Line 8, 9) are almost the same as standard Qlearning [34], except here we maintain two separate Q-values—Qh and Qh, as upper and lower confidence versions respectively. We add and subtract a bonus term βt in the corresponding updates, which depends on t = Nh(sh, ah, bh)—the number of times (sh, ah, bh) has been visited at step h. We pick parameter αt and βt as follows for some large constant c , and log factors ι: αt = (H + 1)/(H + t), βt = c √ H3ι/t (3) In part (c), our greedy policies are computed using a Coarse Correlated Equilibrium (CCE) subroutine, which is first introduced by [36] to solve Markov games using value iteration algorithms. For Algorithm 2 Certified Policy µ̂ of Nash Q-learning 1: sample k ← Uniform([K]). 2: for step h = 1, . . . ,H do 3: observe sh, and take action ah ∼ µkh(·|sh). 4: observe bh, and set t← Nkh (sh, ah, bh). 5: sample m ∈ [t] with P(m = i) = αit. 6: k ← kmh (sh, ah, bh) any pair of matrices Q,Q ∈ [0, H]A×B , CCE(Q,Q) returns a distribution π ∈ ∆A×B such that E(a,b)∼πQ(a, b) ≥max a? E(a,b)∼πQ(a?, b) (4) E(a,b)∼πQ(a, b) ≤min b? E(a,b)∼πQ(a, b?) It can be shown that a CCE always exists, and it can be computed by linear programming in polynomial time (see Appendix B for more details). Now we are ready to state an intermediate guarantee for optimistic Nash Q-learning. We assume the algorithm has played the game for K episodes, and we use V k, Qk, Nk, πk to denote values, visitation counts, and policies at the beginning of the k-th episode in Algorithm 1. Lemma 3. For any p ∈ (0, 1], choose hyperparameters αt, βt as in (3) for a large absolute constant c and ι = log(SABT/p). Then, with probability at least 1−p, Algorithm 1 has following guarantees • V kh(s) ≥ V ?h (s) ≥ V k h(s) for all (s, h, k) ∈ S × [H]× [K]. • (1/K) · ∑K k=1(V k 1 − V k 1)(s1) ≤ O (√ H5SABι/K ) . Lemma 3 makes two statements. First, it claims that the V k h(s) and V k h(s) computed in Algorithm 1 are indeed upper and lower bounds of the value of the Nash equilibrium. Second, Lemma 3 claims that the averages of the upper bounds and the lower bounds are also very close to the value of Nash equilibrium V ?1 (s1), where the gap decrease as 1/ √ K. This implies that in order to learn the value V ?1 (s1) up to -accuracy, we only need O(H5SABι/ 2) episodes. However, Lemma 3 has a significant drawback: it only guarantees the learning of the value of Nash equilibrium. It does not imply that the policies (µk, νk) used in Algorithm 1 are close to the Nash equilibrium, which requires the policies to have a near-optimal performance even against their best responses. This is a major difference between Markov games and standard MDPs, and is the reason why standard techniques from the MDP literature does not apply here. To resolve this problem, we propose a novel way to extract a certified policy from the optimistic Nash Q-learning algorithm. Algorithm part II: certified policies We describe our procedure of executing the certified policy µ̂ of the max-player is described in Algorithm 2. Above, µkh, ν k h denote the marginal distributions of πkh produced in Algorithm 1 over action set A,B respectively. We also introduce the following quantities that directly induced by αt: α0t := ∏t j=1 (1− αj), αit := αi ∏t j=i+1 (1− αj) (5) whose properties are listed in the following Lemma 11. Especially, ∑t i=1 α i t = 1, so {αit}ti=1 defines a distribution over [t]. 
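Since the CCE subroutine in (4) is only specified implicitly, the following sketch shows one way to compute such a distribution by linear programming, as mentioned above. This is our own illustration (using NumPy and SciPy), not the authors' implementation; it solves a pure feasibility LP over π ∈ Δ_{A×B}.

```python
import numpy as np
from scipy.optimize import linprog

def cce(Q_up, Q_low):
    """Return pi in Delta_{A x B} satisfying the CCE constraints in (4).

    Q_up, Q_low: (A, B) arrays with entries in [0, H]. Any feasible point is returned.
    """
    A, B = Q_up.shape
    n = A * B  # one variable per joint action pair; pi is flattened row-major

    rows, rhs = [], []
    # No max-player deviation a* improves on pi under Q_up:
    # sum_{a,b} pi(a,b) * (Q_up[a*, b] - Q_up[a, b]) <= 0 for every a*.
    for a_star in range(A):
        rows.append((Q_up[a_star, :][None, :] - Q_up).reshape(-1))
        rhs.append(0.0)
    # No min-player deviation b* improves on pi under Q_low:
    # sum_{a,b} pi(a,b) * (Q_low[a, b] - Q_low[a, b*]) <= 0 for every b*.
    for b_star in range(B):
        rows.append((Q_low - Q_low[:, b_star][:, None]).reshape(-1))
        rhs.append(0.0)

    res = linprog(
        c=np.zeros(n),                              # pure feasibility problem
        A_ub=np.array(rows), b_ub=np.array(rhs),
        A_eq=np.ones((1, n)), b_eq=np.array([1.0]),
        bounds=[(0.0, None)] * n,
        method="highs",
    )
    assert res.success, "no CCE found (a CCE always exists per Appendix B)"
    return res.x.reshape(A, B)

# toy check with identical payoff matrices (matching pennies)
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
pi = cce(Q, Q)
print(np.round(pi, 3))  # some feasible CCE; the uniform distribution is one example here
```

In the special case Q̄ = Q, any Nash equilibrium of the corresponding matrix game is feasible for this LP, which is one way to see that the feasibility problem is never empty in that case.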
1. What is the focus and contribution of the paper regarding self-play RL settings? 2. What are the strengths of the proposed approach, particularly its performance and achievements? 3. What are the weaknesses of the paper, especially regarding memory complexity and policy sampling? 4. Do you have any concerns about the necessity and effectiveness of the policy certification algorithm? 5. Why was Algorithm 1 included in the paper despite its inferior performance compared to Algorithm 2?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors study the self-play RL setting and analyze algorithms with provable guarantees (when the control is centralized, i.e., a single algorithm controls the actions of both players). Furthermore, the authors also give some hardness results which suggest that decentralized learning is generally hard in this setting. ** I read the authors' response and decided to keep my score. Thanks for the clarifications.

Strengths
*) The setting is interesting, and the fact that the algorithm achieves performance nearly matching the lower bound (up to factors of the horizon) is very impressive.

Weaknesses
*) Memory complexity. As far as I understood, the algorithms need to store K policies. This makes the algorithm impractical. I also suspect that in practice using only the last policies would improve the algorithm.
*) The sampling of the policies. The authors suggest a policy certification algorithm from which they can read off the final policies for which they prove the PAC guarantees. I felt that this procedure is not explained well enough (it is discussed in half a page, yet there is not much discussion of why the procedure works). For example, there is no discussion of whether the policy certification procedure is necessary, or why there is no 'natural' way to obtain a policy besides performing this procedure. I believe that the lack of clarity on this subject harms the quality of the paper.
*) What is the reason to include Algorithm 1 in the paper? It seems to result in worse performance relative to Algorithm 2 and does not improve on it in any other way.
NIPS
Title Near-Optimal Reinforcement Learning with Self-Play Abstract This paper considers the problem of designing optimal algorithms for reinforcement learning in two-player zero-sum games. We focus on self-play algorithms which learn the optimal policy by playing against itself without any direct supervision. In a tabular episodic Markov game with S states, A max-player actions and B min-player actions, the best existing algorithm for finding an approximate Nash equilibrium requires Õ(SAB) steps of game playing, when only highlighting the dependency on (S,A,B). In contrast, the best existing lower bound scales as Ω(S(A + B)) and has a significant gap from the upper bound. This paper closes this gap for the first time: we propose an optimistic variant of the Nash Qlearning algorithm with sample complexity Õ(SAB), and a new Nash V-learning algorithm with sample complexity Õ(S(A + B)). The latter result matches the information-theoretic lower bound in all problem-dependent parameters except for a polynomial factor of the length of each episode. In addition, we present a computational hardness result for learning the best responses against a fixed opponent in Markov games—a learning objective different from finding the Nash equilibrium. 1 Introduction A wide range of modern artificial intelligence challenges can be cast as a multi-agent reinforcement learning (multi-agent RL) problem, in which more than one agent performs sequential decision making in an interactive environment. Multi-agent RL has achieved significant recent success on traditionally challenging tasks, for example in the game of GO [30, 31], Poker [6], real-time strategy games [33, 22], decentralized controls or multiagent robotics systems [5], autonomous driving [27], as well as complex social scenarios such as hide-and-seek [3]. In many scenarios, the learning agents even outperform the best human experts . Despite the great empirical success, a major bottleneck for many existing RL algorithms is that they require a tremendous number of samples. For example, the biggest AlphaGo Zero model is trained on tens of millions of games and took more than a month to train [31]. While requiring such amount of samples may be acceptable in simulatable environments such as GO, it is not so in other sampleexpensive real world settings such as robotics and autonomous driving. It is thus important for us to understand the sample complexity in RL—how can we design algorithms that find a near optimal policy with a small number of samples, and what is the fundamental limit, i.e. the minimum number of samples required for any algorithm to find a good policy. Theoretical understandings on the sample complexity for multi-agent RL are rather limited, especially when compared with single-agent settings. The standard model for a single-agent setting is an episodic Markov Decision Process (MDP) with S states, and A actions, and H steps per episode. The best known algorithm can find an near-optimal policy in Θ̃(poly(H)SA/ 2) episodes, which matches the lower bound up to a single H factor [1, 8]. In contrast, in multi-agent settings, the optimal sample complexity remains open even in the basic setting of two-player tabular Markov games 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. [28], where the agents are required to find the solutions of the games—the Nash equilibria. 
The best known algorithm, VI-ULCB, finds an -approximate Nash equilibrium in Õ(poly(H)S2AB/ 2) episodes [2], where B is the number of actions for the other player. The information theoretical lower bound is Ω(poly(H)S(A + B)/ 2). Specifically, the number of episodes required for the algorithm scales quadratically in both S and (A,B), and exhibits a gap from the linear dependency in the lower bound. This motivates the following question: Can we design algorithms with near-optimal sample complexity for learning Markov games? In this paper, we present the first line of near-optimal algorithms for two-player Markov games that match the aforementioned lower bound up to a poly(H) factor. This closes the open problem for achieving the optimal sample complexity in all (S,A,B) dependency. Our algorithm learns by playing against itself without requiring any direct supervision, and is thus a self-play algorithm. 1.1 Our contributions • We propose an optimistic variant of Nash Q-learning [11], and prove that it achieves sample complexity Õ(H5SAB/ 2) for finding an -approximate Nash equilibrium in two-player Markov games (Section 3). Our algorithm builds optimistic upper and lower estimates of Q-values, and computes the Coarse Correlated Equilibrium (CCE) over this pair of Q estimates as its execution policies for both players. • We design a new algorithm—Nash V-learning—for finding approximate Nash equilibria, and show that it achieves sample complexity Õ(H6S(A + B)/ 2) (Section 4). This improves upon Nash Q-learning in case min {A,B} > H . It is also the first result that matches the minimax lower bound up to only a poly(H) factor. This algorithm builds optimistic upper and lower estimates of V -values, and features a novel combination of Follow-the-Regularized-Leader (FTRL) and standard Q-learning algorithm to determine its execution policies. • Apart from finding Nash equilibria, we prove that learning the best responses of fixed opponents in Markov games is as hard as learning parity with noise—a notoriously difficult problem that is believed to be computationally hard (Section 5). As a corollary, this hardness result directly implies that achieving sublinear regret against adversarial opponents in Markov games is also computationally hard, a result that first appeared in [25]. This in turn rules out the possibility of designing efficient algorithms for finding Nash equilibria by running no-regret algorithms for each player separately. In addition to above contributions, this paper also features a novel approach of extracting certified policies—from the estimates produced by reinforcement learning algorithms such as Nash Qlearning and Nash V-learning—that are certified to have similar performance as Nash equilibrium policies, even when facing against their best response (see Section 3 for more details). We believe this technique could be of broader interest to the community. 1.2 Related Work Markov games Markov games (or stochastic games) are proposed in the early 1950s [28]. They are widely used to model multi-agent RL. Learning the Nash equilibria of Markov games has been studied in classical work [18, 19, 11, 10], where the transition matrix and reward are assumed to be known, or in the asymptotic setting where the number of data goes to infinity. These results do not directly apply to the non-asymptotic setting where the transition and reward are unknown and only a limited amount of data are available for estimating them. 
A recent line of work tackles self-play algorithms for Markov games in the non-asymptotic setting with strong reachability assumptions. Specifically, Wei et al. [35] assumes no matter what strategy one agent sticks to, the other agent can always reach all states by playing a certain policy, and Jia et al. [13], Sidford et al. [29] assume access to simulators (or generative models) that enable the agent to directly sample transition and reward information for any state-action pair. These settings ensure that all states can be reached directly, so no sophisticated exploration is not required. Very recently, [2, 36] study learning Markov games without these reachability assumptions, where exploration becomes essential. However, both results suffer from highly suboptimal sample complexity. We compare them with our results in Table 1. The results of [36] also applies to the linear function approximation setting. We remark that the R-max algorithm [4] does provide provable guarantees for learning Markov game, even in the setting of playing against the adversarial opponent, but using a definition of regret that is weaker than the standard regret. Their result does not imply any sample complexity result for finding Nash equilibrium policies. Adversarial MDP Another line of related work focuses on provably efficient algorithms for adversarial MDPs. Most work in this line considers the setting with adversarial rewards [38, 26, 15], because adversarial MDP with changing dynamics is computationally hard even under fullinformation feedback [37]. These results do not direcly imply provable self-play algorithms in our setting, because the opponent in Markov games can affect both the reward and the transition. Single-agent RL There is a rich literature on reinforcement learning in MDPs [see e.g. 12, 24, 1, 7, 32, 14]. MDP is a special case of Markov games, where only a single agent interacts with a stochastic environment. For the tabular episodic setting with nonstationary dynamics and no simulators, the best sample complexity achieved by existing model-based and model-free algorithms are Õ(H3SA/ 2) [1] and Õ(H4SA/ 2) [14], respectively, where S is the number of states, A is the number of actions, H is the length of each episode. Both of them (nearly) match the lower bound Ω(H3SA/ 2) [12, 23, 14]. 2 Preliminaries We consider zero-sum Markov Games (MG) [28, 18], which are also known as stochastic games in the literature. Zero-sum Markov games are generalization of standard Markov Decision Processes (MDP) into the two-player setting, in which the max-player seeks to maximize the total return and the min-player seeks to minimize the total return. Formally, we denote a tabular episodic Markov game as MG(H,S,A,B,P, r), where H is the number of steps in each episode, S is the set of states with |S| ≤ S, (A,B) are the sets of actions of the max-player and the min-player respectively, P = {Ph}h∈[H] is a collection of transition matrices, so that Ph(·|s, a, b) gives the distribution over states if action pair (a, b) is taken for state s at step h, and r = {rh}h∈[H] is a collection of reward functions, and rh : S ×A×B → [0, 1] is the deterministic reward function at step h. 1 In each episode of this MG, we start with a fixed initial state s1. Then, at each step h ∈ [H], both players observe state sh ∈ S, and the max-player picks action ah ∈ A while the min-player picks action bh ∈ B simultaneously. 
Both players observe the actions of the opponents, receive reward rh(sh, ah, bh), and then the environment transitions to the next state sh+1 ∼ Ph(·|sh, ah, bh). The episode ends when sH+1 is reached. 1We assume the rewards in [0, 1] for normalization. Our results directly generalize to randomized reward functions, since learning the transition is more difficult than learning the reward. Markov policy, value function A Markov policy µ of the max-player is a collection of H functions {µh : S → ∆A}h∈[H], which maps from a state to a distribution of actions. Here ∆A is the probability simplex over action set A. Similarly, a policy ν of the min-player is a collection of H functions {νh : S → ∆B}h∈[H]. We use the notation µh(a|s) and νh(b|s) to present the probability of taking action a or b for state s at step h under Markov policy µ or ν respectively. We use V µ,νh : S → R to denote the value function at step h under policy µ and ν, so that V µ,ν h (s) gives the expected cumulative rewards received under policy µ and ν, starting from s at step h: V µ,νh (s) := Eµ,ν [∑H h′=h rh′(sh′ , ah′ , bh′) ∣∣∣ sh = s] . (1) We also define Qµ,νh : S × A × B → R to denote Q-value function at step h so that Q µ,ν h (s, a, b) gives the cumulative rewards received under policy µ and ν, starting from (s, a, b) at step h: Qµ,νh (s, a, b) := Eµ,ν [∑H h′=h rh′(sh′ , ah′ , bh′) ∣∣∣ sh = s, ah = a, bh = b] . (2) For simplicity, we use notation of operator Ph so that [PhV ](s, a, b) := Es′∼Ph(·|s,a,b)V (s′) for any value function V . We also use notation [DπQ](s) := E(a,b)∼π(·,·|s)Q(s, a, b) for any action-value function Q. By definition of value functions, we have the Bellman equation Qµ,νh (s, a, b) = (rh + PhV µ,ν h+1)(s, a, b), V µ,ν h (s) = (Dµh×νhQ µ,ν h )(s) for all (s, a, b, h) ∈ S ×A× B × [H]. We define V µ,νH+1(s) = 0 for all s ∈ SH+1. Best response and Nash equilibrium For any Markov policy of the max-player µ, there exists a best response of the min-player, which is a Markov policy ν†(µ) satisfying V µ,ν †(µ) h (s) = infν V µ,ν h (s) for any (s, h) ∈ S × [H]. Here the infimum is taken over all possible policies which are not necessarily Markovian (we will define later in this section). We define V µ,†h := V µ,ν†(µ) h . By symmetry, we can also define µ†(ν) and V †,νh . It is further known (cf. [9]) that there exist Markov policies µ?, ν? that are optimal against the best responses of the opponents, in the sense that V µ ?,† h (s) = supµ V µ,† h (s), V †,ν? h (s) = infν V †,ν h (s), for all (s, h). We call these optimal strategies (µ?, ν?) the Nash equilibrium of the Markov game, which satisfies the following minimax equation: 2 supµ infν V µ,ν h (s) = V µ?,ν? h (s) = infν supµ V µ,ν h (s). Intuitively, a Nash equilibrium gives a solution in which no player has anything to gain by changing only her own policy. We further abbreviate the values of Nash equilibrium V µ ?,ν? h and Q µ?,ν? h as V ?h and Q ? h. We refer readers to Appendix A for Bellman optimality equations for values of best responses or Nash equilibria. General (non-Markovian) policy In certain situations, it is beneficial to consider general, historydependent policies that are not necessarily Markovian. A (general) policy µ of the max-player is a set of H maps µ := { µh : R × (S × A × B × R)h−1 × S → ∆A } h∈[H], from a random number z ∈ R and a history of length h—say (s1, a1, b1, r1, · · · , sh), to a distribution over actions inA. 
By symmetry, we can also define the (general) policy ν of the min-player, by replacing the action set A in the definition by set B. The random number z is sampled from some underlying distribution D, but may be shared among all steps h ∈ [H]. For a pair of general policy (µ, ν), we can still use the same definitions (1) to define their value V µ,ν1 (s1) at step 1. We can also define the best response ν †(µ) of a general policy µ as the minimizing policy so that V µ,†1 (s1) ≡ V µ,ν†(µ) 1 (s1) = infν V µ,ν h (s1) at step 1. We remark that the best response of a general policy is not necessarily Markovian. 2The minimax theorem here is different from the one for matrix games, i.e. maxφminψ φ>Aψ = minψmaxφ φ >Aψ for any matrix A, since here V µ,νh (s) is in general not bilinear in µ, ν. Algorithm 1 Optimistic Nash Q-learning 1: Initialize: for any (s, a, b, h), Qh(s, a, b)← H , Qh(s, a, b)← 0, Nh(s, a, b)← 0, πh(a, b|s)← 1/(AB). 2: for episode k = 1, . . . ,K do 3: receive s1. 4: for step h = 1, . . . ,H do 5: take action (ah, bh) ∼ πh(·, ·|sh). 6: observe reward rh(sh, ah, bh) and next state sh+1. 7: t = Nh(sh, ah, bh)← Nh(sh, ah, bh) + 1. 8: Qh(sh, ah, bh)← (1− αt)Qh(sh, ah, bh) + αt(rh(sh, ah, bh) + V h+1(sh+1) + βt) 9: Q h (sh, ah, bh)← (1− αt)Qh(sh, ah, bh) + αt(rh(sh, ah, bh) + V h+1(sh+1)− βt) 10: πh(·, ·|sh)← CCE(Qh(sh, ·, ·), Qh(sh, ·, ·)) 11: V h(sh)← (DπhQh)(sh); V h(sh)← (DπhQh)(sh). Learning Objective There are two possible learning objectives in the setting of Markov games. The first one is to find the best response for a fixed opponent. Without loss of generality, we consider the case where the learning agent is the max-player, and the min-player is the opponent. Definition 1 ( -approximate best response). For an opponent with an fixed unknown general policy ν, a general policy µ̂ is the -approximate best response if V †,ν1 (s1)− V µ̂,ν 1 (s1) ≤ . The second goal is to find a Nash equilibrium of the Markov games. We measure the suboptimality of any pair of general policies (µ̂, ν̂) using the gap between their performance and the performance of the optimal strategy (i.e. Nash equilibrium) when playing against the best responses respectively: V †,ν̂1 (s1)− V µ̂,† 1 (s1) = [ V †,ν̂1 (s1)− V ?1 (s1) ] + [ V ?1 (s1)− V µ̂,† 1 (s1) ] Definition 2 ( -approximate Nash equilibrium). A pair of general policies (µ̂, ν̂) is an - approximate Nash equilibrium, if V †,ν̂1 (s1)− V µ̂,† 1 (s1) ≤ . Loosely speaking, Nash equilibria can be viewed as “the best responses to the best responses”. In most applications, they are the ultimate solutions to the games. In Section 3 and 4, we present sharp guarantees for learning an approximate Nash equilibrium with near-optimal sample complexity. However, rather surprisingly, learning a best response in the worst case is more challenging than learning the Nash equilibrium. In Section 5, we present a computational hardness result for learning an approximate best response. 3 Optimistic Nash Q-learning In this section, we present our first algorithm Optimistic Nash Q-learning and its corresponding theoretical guarantees. Algorithm part I: learning values Our algorithm Optimistic Nash Q-learning (Algorithm 1) is an optimistic variant of Nash Q-learning [11]. For each step in each episode, it (a) takes actions according to the previously computed policy πh, and observes the reward and next state, (b) performs incremental updates on Q-values, and (c) computes new greedy policies and updates V -values. 
Part (a) is straightforward; we now focus on explaining part (b) and part (c). In part (b), the incremental updates on Q-values (Line 8, 9) are almost the same as standard Qlearning [34], except here we maintain two separate Q-values—Qh and Qh, as upper and lower confidence versions respectively. We add and subtract a bonus term βt in the corresponding updates, which depends on t = Nh(sh, ah, bh)—the number of times (sh, ah, bh) has been visited at step h. We pick parameter αt and βt as follows for some large constant c , and log factors ι: αt = (H + 1)/(H + t), βt = c √ H3ι/t (3) In part (c), our greedy policies are computed using a Coarse Correlated Equilibrium (CCE) subroutine, which is first introduced by [36] to solve Markov games using value iteration algorithms. For Algorithm 2 Certified Policy µ̂ of Nash Q-learning 1: sample k ← Uniform([K]). 2: for step h = 1, . . . ,H do 3: observe sh, and take action ah ∼ µkh(·|sh). 4: observe bh, and set t← Nkh (sh, ah, bh). 5: sample m ∈ [t] with P(m = i) = αit. 6: k ← kmh (sh, ah, bh) any pair of matrices Q,Q ∈ [0, H]A×B , CCE(Q,Q) returns a distribution π ∈ ∆A×B such that E(a,b)∼πQ(a, b) ≥max a? E(a,b)∼πQ(a?, b) (4) E(a,b)∼πQ(a, b) ≤min b? E(a,b)∼πQ(a, b?) It can be shown that a CCE always exists, and it can be computed by linear programming in polynomial time (see Appendix B for more details). Now we are ready to state an intermediate guarantee for optimistic Nash Q-learning. We assume the algorithm has played the game for K episodes, and we use V k, Qk, Nk, πk to denote values, visitation counts, and policies at the beginning of the k-th episode in Algorithm 1. Lemma 3. For any p ∈ (0, 1], choose hyperparameters αt, βt as in (3) for a large absolute constant c and ι = log(SABT/p). Then, with probability at least 1−p, Algorithm 1 has following guarantees • V kh(s) ≥ V ?h (s) ≥ V k h(s) for all (s, h, k) ∈ S × [H]× [K]. • (1/K) · ∑K k=1(V k 1 − V k 1)(s1) ≤ O (√ H5SABι/K ) . Lemma 3 makes two statements. First, it claims that the V k h(s) and V k h(s) computed in Algorithm 1 are indeed upper and lower bounds of the value of the Nash equilibrium. Second, Lemma 3 claims that the averages of the upper bounds and the lower bounds are also very close to the value of Nash equilibrium V ?1 (s1), where the gap decrease as 1/ √ K. This implies that in order to learn the value V ?1 (s1) up to -accuracy, we only need O(H5SABι/ 2) episodes. However, Lemma 3 has a significant drawback: it only guarantees the learning of the value of Nash equilibrium. It does not imply that the policies (µk, νk) used in Algorithm 1 are close to the Nash equilibrium, which requires the policies to have a near-optimal performance even against their best responses. This is a major difference between Markov games and standard MDPs, and is the reason why standard techniques from the MDP literature does not apply here. To resolve this problem, we propose a novel way to extract a certified policy from the optimistic Nash Q-learning algorithm. Algorithm part II: certified policies We describe our procedure of executing the certified policy µ̂ of the max-player is described in Algorithm 2. Above, µkh, ν k h denote the marginal distributions of πkh produced in Algorithm 1 over action set A,B respectively. We also introduce the following quantities that directly induced by αt: α0t := ∏t j=1 (1− αj), αit := αi ∏t j=i+1 (1− αj) (5) whose properties are listed in the following Lemma 11. Especially, ∑t i=1 α i t = 1, so {αit}ti=1 defines a distribution over [t]. 
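To make equation (5) concrete, the following small sketch (our own illustrative code, not the authors') computes the mixing weights α_t^i for the learning rate α_t = (H + 1)/(H + t) and checks that they sum to one:

```python
import numpy as np

def mixing_weights(t, H):
    """Compute {alpha_t^i}_{i=1..t} from equation (5), with alpha_j = (H + 1)/(H + j)."""
    alpha = np.array([(H + 1) / (H + j) for j in range(1, t + 1)])
    w = np.empty(t)
    for i in range(1, t + 1):
        # alpha_t^i = alpha_i * prod_{j=i+1}^{t} (1 - alpha_j)
        w[i - 1] = alpha[i - 1] * np.prod(1.0 - alpha[i:])
    return w

w = mixing_weights(t=50, H=5)
print(w.sum())   # 1.0 up to floating point: the weights form a distribution over [t]
print(w[-3:])    # the most recent episodes receive the largest weights
```

Sampling an index m from this distribution is exactly what line 5 of Algorithm 2 does.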
We use kmh (s, a, b) to denote the index of the episode where (s, a, b) is observed in step h for the m-th time. The certified policy ν̂ of the min-player is easily defined by symmetry. We note that µ̂, ν̂ are clearly general policies, but they are no longer Markov policies. The intuitive reason why such policy µ̂ defined in Algorithm 2 is certified by Nash Q-learning algorithm, is because the update equation in line 8 of Algorithm 1 and equation (5) gives relation: Q k h(s, a, b) = α 0 tH + ∑t i=1 α i t [ rh(s, a, b) + V kih(s,a,b) h+1 (s kih(s,a,b) h+1 ) + βi ] Algorithm 3 Optimistic Nash V-learning (the max-player version) 1: Initialize: for any (s, a, b, h), V h(s)← H , Lh(s, a)← 0, Nh(s)← 0, µh(a|s)← 1/A. 2: for episode k = 1, . . . ,K do 3: receive s1. 4: for step h = 1, . . . ,H do 5: take action ah ∼ µh(·|sh), observe the action bh from opponent. 6: observe reward rh(sh, ah, bh) and next state sh+1. 7: t = Nh(sh)← Nh(sh) + 1. 8: V h(sh)← min{H, (1− αt)V h(sh) + αt(rh(sh, ah, bh) + V h+1(sh+1) + βt)}. 9: for all a ∈ A do 10: `h(sh, a)← [H − rh(sh, ah, bh)− V h+1(sh+1)]I{ah = a}/[µh(ah|sh) + ηt]. 11: Lh(sh, a)← (1− αt)Lh(sh, a) + αt`h(sh, a). 12: set µh(·|sh) ∝ exp[−(ηt/αt)Lh(sh, ·)]. This certifies the good performance against the best responses if the max-player plays a mixture of policies {µk i h(s,a,b) h+1 }ti=1 at step h + 1 with mixing weights {αit}ti=1 (see Appendix C.2 for more details). A recursion of this argument leads to the certified policy µ̂—a nested mixture of policies. We now present our main result for Nash Q-learning, using the certified policies (µ̂, ν̂). Theorem 4 (Sample Complexity of Nash Q-learning). For any p ∈ (0, 1], choose hyperparameters αt, βt as in (3) for large absolute constant c and ι = log(SABT/p). Then, with probability at least 1− p, if we run Nash Q-learning (Algorithm 1) for K episodes where K ≥ Ω ( H5SABι/ 2 ) , the certified policies (µ̂, ν̂) (Algorithm 2) will be -approximate Nash, i.e. V †,ν̂1 (s1)−V µ̂,† 1 (s1) ≤ . Theorem 4 asserts that if we run the optimistic Nash Q-learning algorithm for more than O(H5SABι/ 2) episodes, the certified policies (µ̂, ν̂) extracted using Algorithm 2 will be - approximate Nash equilibrium (Definition 2). We make two remarks. First, the executions of the certified policies µ̂, ν̂ require the storage of {µkh} and {νkh} for all k, h ∈ [H] × [K]. This makes the space complexity of our algorithm scales up linearly in the total number of episodes K. Second, Q-learning style algorithms (especially online updates) are crucial in our analysis for achieving sample complexity linear in S. They enjoy the property that every sample is only been used once, on the value function that is independent of this sample. In contrast, value iteration type algorithms do not enjoy such an independence property, which is why the best existing sample complexity scales as S2 [2]. 3 4 Optimistic Nash V-learning In this section, we present our new algorithm Optimistic Nash V-learning and its corresponding theoretical guarantees. This algorithm improves over Nash Q-learning in sample complexity from Õ(SAB) to Õ(S(A+B)), when only highlighting the dependency on S,A,B. Algorithm description Nash V-learning combines the idea of Follow-The-Regularized-Leader (FTRL) in the bandit literature with the Q-learning algorithm in reinforcement learning. This algorithm does not require extra information exchange between players other than standard game playing, thus can be ran separately by the two players. 
We describe the max-player version in Algorithm 3. See Algorithm 7 in Appendix D for the min-player version, where V h, Lh, νh, ηt and βt are defined symmetrically. For each step in each episode, the algorithm (a) first takes action according to µh, observes the action of the opponent, the reward, and the next state, (b) performs an incremental update on V , and (c) 3Despite [1] provides techniques to improve the sample complexity from S2 to S for value iteration in MDP, the same techniques can not be applied to Markov games due to the unique challenge that, in Markov games, we aim at finding policies that are good against their best responses. Algorithm 4 Certified Policy µ̂ of Nash V-learning 1: sample k ← Uniform([K]). 2: for step h = 1, . . . ,H do 3: observe sh, and set t← Nkh (sh). 4: sample m ∈ [t] with P(m = i) = αit. 5: k ← kmh (sh). 6: take action ah ∼ µkh(·|sh). updates policy µh. The first two parts are very similar to Nash Q-learning. In the third part, the agent first computes `h(sh, ·) as the importance weighted estimator of the current loss. She then computes the weighted cumulative loss Lh(sh, ·). Finally, the policy µh is updated using FTRL principle: µh(·|sh)← argminµ∈∆A ηt〈Lh(sh, ·), µ〉+ αtKL(µ‖µ0) Here µ0 is the uniform distribution over all actions A. Solving above minimization problem gives the update equation as in Line 12 in Algorithm 3. In multi-arm bandit, FTRL can defend against adversarial losses, with regret independent of the number of the opponent’s actions. This property turns out to be crucial for Nash V-learning to achieve sharper sample complexity than Nash Qlearning (see the analog of Lemma 3 in Lemma 15). Similar to Nash Q-learning, we also propose a new algorithm (Algorithm 4) to extract a certified policy from the optimistic Nash V-learning algorithm. The certified policies are again non-Markovian. We choose all hyperparameters as follows, for some large constant c , and log factors ι. αt = H + 1 H + t , ηt = √ logA At , η t = √ logB Bt , βt = c √ H4Aι t , β t = c √ H4Bι t , (6) We now present our main result on the sample complexity of Nash V-learning. Theorem 5 (Sample Complexity of Nash V-learning). For any p ∈ (0, 1], choose hyperparameters as in (6) for large absolute constant c and ι = log(SABT/p). Then, with probability at least 1− p, if we run Nash V-learning (Algorithm 3 and 7) for K episodes with K ≥ Ω ( H6S(A+B)ι/ 2 ) , its induced policies (µ̂, ν̂) (Algorithm 4) will be -approximate Nash, i.e. V †,ν̂1 (s1)− V µ̂,† 1 (s1) ≤ . Theorem 4 claims that if we run the optimistic Nash V-learning for more thanO(H6S(A+B)ι/ 2) episodes, the certified policies (µ̂, ν̂) extracted from Algorithm 4 will be -approximate Nash (Definition 2). Nash V-learning is the first algorithm of which the sample complexity matches the information theoretical lower bound Ω(H3S(A+B)/ 2) up to poly(H) factors and logarithmic terms. 5 Hardness for Learning the Best Response In this section, we present a computational hardness result for computing the best response against an opponent with a fixed unknown policy. We further show that this implies the computational hardness result for achieving sublinear regret in Markov games when playing against adversarial opponents, which rules out a popular approach to design algorithms for finding Nash equilibria. 
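As an aside before turning to the hardness results: to make the Nash V-learning policy update of Section 4 concrete, here is a minimal sketch of the per-state FTRL step (Algorithm 3, lines 10-12). The function and variable names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def ftrl_policy_update(mu, L, a_taken, r, v_next, t, H):
    """One per-state FTRL update for the max-player (Algorithm 3, lines 10-12).

    mu      -- current policy over this player's A actions at state (s, h)
    L       -- running weighted loss estimates, one entry per action
    a_taken -- index of the action actually played
    r       -- observed reward r_h(s, a, b)
    v_next  -- lower-bound value estimate of the next state
    t       -- visit count N_h(s) after incrementing
    """
    A = mu.size
    alpha_t = (H + 1) / (H + t)
    eta_t = np.sqrt(np.log(A) / (A * t))

    # importance-weighted loss estimate: nonzero only for the action actually taken
    ell = np.zeros(A)
    ell[a_taken] = (H - r - v_next) / (mu[a_taken] + eta_t)

    # incremental averaging of losses, then exponential weights (FTRL)
    L = (1 - alpha_t) * L + alpha_t * ell
    w = np.exp(-(eta_t / alpha_t) * (L - L.min()))   # shift by the minimum for stability
    return w / w.sum(), L
```

Because this step only needs the player's own loss estimates, the two players can indeed run their updates independently, as noted in Section 4.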
We first remark that if the opponent is restricted to only play Markov policies, then learning the best response is as easy as learning a optimal policy in the standard single-agent Markov decision process, where efficient algorithms are known to exist. Nevertheless, when the opponent can as well play any policy which may be non-Markovian, we show that finding the best response against those policies is computationally challenging. We say an algorithm is a polynomial time algorithm for learning the best response if for any policy of the opponent ν, and for any > 0, the algorithm finds the -approximate best response of policy ν (Definition 1) with probability at least 1/2, in time polynomial in S,H,A,B, −1. We can show the following hardness result for finding the best response in polynomial time. Theorem 6 (Hardness for learning the best response). There exists a Markov game with deterministic transitions and rewards defined for any horizon H ≥ 1 with S = 2, A = 2, and B = 2, such that if there exists a polynomial time algorithm for learning the best response for this Markov game, then there exists a polynomial time algorithm for learning parity with noise (see problem description in Appendix E). We remark that learning parity with noise is a notoriously difficult problem that has been used to design efficient cryptographic schemes. It is conjectured by the community to be hard. Conjecture 7 ([16]). There is no polynomial time algorithm for learning party with noise. Theorem 6 with Conjecture 7 demonstrates the fundamental difficulty—if not strict impossibility— of designing a polynomial time for learning the best responses in Markov games. The intuitive reason for such computational hardness is that, while the underlying system has Markov transitions, the opponent can play policies that encode long-term correlations with non-Markovian nature, such as parity with noise, which makes it very challenging to find the best response. It is known that learning many other sequential models with long-term correlations (such as hidden Markov models or partially observable MDPs) is as hard as learning parity with noise [20]. 5.1 Hardness for Playing Against Adversarial Opponent Theorem 6 directly implies the difficulty for achieving sublinear regret in Markov games when playing against adversarial opponents in Markov games. Our construction of hard instances in the proof of Theorem 6 further allows the adversarial opponent to only play Markov policies in each episode. Since playing against adversarial opponent is a different problem with independent interest, we present the full result here. Without loss of generality, we still consider the setting where the algorithm can only control the max-player, while the min-player is an adversarial opponent. In the beginning of every episode k, both players pick their own policies µk and νk, and execute them throughout the episode. The adversarial opponent can possibly pick her policy νk adaptive to all the observations in the earlier episodes. We say an algorithm for the learner is a polynomial time no-regret algorithm if there exists a δ > 0 such that for any adversarial opponent, and any fixedK > 0, the algorithm outputs policies {µk}Kk=1 which satisfies the following, with probability at least 1/2, in time polynomial in S,H,A,B,K. 
Regret(K) = sup_µ ∑_{k=1}^K V_1^{µ, ν^k}(s_1) − ∑_{k=1}^K V_1^{µ^k, ν^k}(s_1) ≤ poly(S, H, A, B) · K^{1−δ}. (7) Theorem 6 directly implies the following hardness result for achieving no-regret against adversarial opponents, a result that first appeared in [25]. Corollary 8 (Hardness for playing against an adversarial opponent). There exists a Markov game with deterministic transitions and rewards, defined for any horizon H ≥ 1 with S = 2, A = 2, and B = 2, such that if there exists a polynomial time no-regret algorithm for this Markov game, then there exists a polynomial time algorithm for learning parity with noise (see the problem description in Appendix E). The claim continues to hold even if we restrict the adversarial opponents in the Markov game to be non-adaptive and to only play Markov policies in each episode. Similar to Theorem 6, Corollary 8 combined with Conjecture 7 demonstrates the fundamental difficulty of designing a polynomial time no-regret algorithm against adversarial opponents in Markov games. Implications on algorithm design for finding Nash equilibria Corollary 8 also rules out a natural approach for designing efficient algorithms for finding approximate Nash equilibria by combining two no-regret algorithms. In fact, it is not hard to see that if the min-player also runs a no-regret algorithm and obtains a regret bound symmetric to (7), then summing the two regret bounds shows that the mixture policies (µ̂, ν̂), which assign uniform mixing weights to the policies {µ^k}_{k=1}^K and {ν^k}_{k=1}^K respectively, form an approximate Nash equilibrium. Corollary 8 with Conjecture 7 implies that any algorithm designed using this approach is not a polynomial time algorithm. Broader Impact As this is a theoretical contribution, we do not envision that our direct results will have a tangible societal impact. Our broader line of inquiry could influence how to design more sample-efficient algorithms for multi-agent reinforcement learning, which could be useful towards making artificial intelligence more resource and energy efficient. Acknowledgments TY is partially supported by NSF BIGDATA grant IIS1741341.
1. What is the focus and contribution of the paper on two player zero sum games? 2. What are the strengths of the proposed approach, particularly in terms of sample complexity? 3. What are the weaknesses of the paper, especially regarding empirical experiments? 4. Do you have any concerns about the algorithm's ability to reduce sample complexity? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work proves that for two-player zero-sum games, an optimistic variant of Nash Q-learning can approximate the Nash equilibrium with sample complexity O(SAB), and an optimistic variant of Nash V-learning with sample complexity O(S(A+B)), improving on the previous result of O(S^2AB). Additionally, achieving sublinear regret when playing against adversarial opponents in Markov games is proved to be computationally hard (not achievable in polynomial time). Strengths This is the first time the sample complexity for two-player zero-sum games has been reduced to O(S(A+B)), which matches the theoretical lower bound up to polynomial factors of the episode length. The algorithm itself is described clearly; it is built upon a Coarse Correlated Equilibrium subroutine and the Follow-the-Regularized-Leader algorithm. A novel approach to compute the certified policies given the near-optimal Nash equilibrium values is also provided, for both Nash Q-learning and Nash V-learning. The new bound is a significant contribution in terms of sample complexity. Weaknesses A major drawback is that not a single empirical experiment is done, in contrast to a few prior works, e.g., Nash Q-Learning for General-Sum Stochastic Games, J. Hu and M. P. Wellman. At least experiments on matrix games or grid worlds should be conducted to verify the correctness of the proof. It would be great to see that the sample complexity increases linearly with S, A, and B through detailed ablation studies, and what percentage of runs reach an ε-Nash equilibrium with the new algorithms. It is also not clear why the algorithm can bring the sample complexity down. What is the intuition for why the bound can be reduced? What is the intuition for picking α_t, β_t, and the other hyperparameters in the formulas? Since the algorithm is built upon Q-learning, how can it extend to other, more general cases? Are there any intuitions that can help with real-world algorithms in terms of sample complexity? === Post rebuttal: Some of the intuitions are explained. After the discussion with other reviewers, I still think this paper needs to be improved with better presentation and, if possible, preliminary experiments.
NIPS
Title Reinforced Continual Learning Abstract Most artificial intelligence models are limited in their ability to solve new tasks faster, without forgetting previously acquired knowledge. The recently emerging paradigm of continual learning aims to solve this issue, in which the model learns various tasks in a sequential fashion. In this work, a novel approach for continual learning is proposed, which searches for the best neural architecture for each coming task via sophisticatedly designed reinforcement learning strategies. We name it as Reinforced Continual Learning. Our method not only has good performance on preventing catastrophic forgetting but also fits new tasks well. The experiments on sequential classification tasks for variants of MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks. 1 Introduction Continual learning, or lifelong learning [15], the ability to learn consecutive tasks without forgetting how to perform previously trained tasks, is an important topic for developing artificial intelligence. The primary goal of continual learning is to overcome the forgetting of learned tasks and to leverage the earlier knowledge for obtaining better performance or faster convergence/training speed on the newly coming tasks. In the deep learning community, two groups of strategies have been developed to alleviate the problem of forgetting the previously trained tasks, distinguished by whether the network architecture changes during learning. The first category of approaches maintain a fixed network architecture with large capacity. When training the network for consecutive tasks, some regularization term is enforced to prevent the model parameters from deviating too much from the previous learned parameters according to their significance to old tasks [4, 19]. In [6], the authors proposed to incrementally match the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. Alternatively, an episodic memory [7] is budgeted to store the subsets of previous datasets, and then trained together with the new task. FearNet [3] mitigates catastrophic forgetting by consolidating recent memories into long-term storage using pseudorehearsal [10] which employs a generative autoencoder to generate previously learned examples that are replayed alongside novel information during consolidation. Fernando et al. [2] proposed PathNet, in which a neural network has ten or twenty modules in each layer, and three or four modules are picked for one task in each layer by an evolutionary approach. However, these methods typically require unnecessarily largecapacity networks, particularly when the number of tasks is large, since the network architecture is never dynamically adjusted during training. ∗Corresponding author. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. The other group of methods for overcoming catastrophic forgetting dynamically expand the network to accommodate the new coming task while keeping the parameters of previous architecture unchanged. Progressive networks [11] expand the architectures with a fixed size of nodes or layers, leading to an extremely large network structure particularly faced with a large number of sequential tasks. The resultant complex architecture might be expensive to store and even unnecessary due to its high redundancy. 
Dynamically Expandable Network (DEN, [17] alleviated this issue slightly by introducing group sparsity regularization when adding new parameters to the original network; unfortunately, there involves many hyperparameters in DEN, including various regularization and thresholding ones, which need to be tuned carefully due to the high sensitivity to the model performance. In this work, in order to better facilitate knowledge transfer and avoid catastrophic forgetting, we propose a novel framework to adaptively expand the network. Faced with a new task, deciding optimal number of nodes/filters to add for each layer is posed as a combinatorial optimization problem. We provide a sophisticatedly designed reinforcement learning method to solve this problem. Thus, we name it as Reinforced Continual Learning (RCL). In RCL, a controller implemented as a recurrent neural network is adopted to determine the best architectural hyper-parameters of neural networks for each task. We train the controller by an actor-critic strategy guided by a reward signal deriving from both validation accuracy and network complexity. This can maintain the prediction accuracy on older tasks as much as possible while reducing the overall model complexity. To the best of our knowledge, the proposal is the first attempt that employs the reinforcement learning for solving the continual learning problems. RCL not only differs from adding a fixed number of units to the old network for solving a new task [11], which might be suboptimal and computationally expensive, but also distinguishes from [17] as well that performs group sparsity regularization on the added parameters. We validate the effectiveness of RCL on various sequential tasks. And the results show that RCL can obtain better performance than existing methods even with adding much less units. The rest of this paper is organized as follows. In Section 2, we introduce the preliminary knowledge on reinforcement learning. We propose the new method RCL in Section 3, a model to learn a sequence of tasks dynamically based on reinforcement learning. In Section 4, we implement various experiments to demonstrate the superiority of RCL over other state-of-the-art methods. Finally, we conclude our paper in Section 5 and provide some directions for future research. 2 Preliminaries of Reinforcement learning Reinforcement learning [13] deals with learning a policy for an agent interacting in an unknown environment. It has been applied successfully to various problems, such as games [8, 12], natural language processing [18], neural architecture/optimizer search [20, 1] and so on. At each step, an agent observes the current state st of the environment, decides of an action at according to a policy π(at|st), and observes a reward signal rt+1. The goal of the agent is to find a policy that maximizes the expected sum of discounted rewards Rt, Rt = ∑∞ t′=t+1 γ t′−t−1rt′ , where γ ∈ (0, 1] is a discount factor that determines the importance of future rewards. The value function of a policy π is defined as the expected return Vπ(s) = Eπ[ ∑∞ t=0 γ trt+1|s0 = s] and its action-value function as Qπ(s, a) = Eπ[ ∑∞ t=0 γ trt+1|s0 = s, a0 = a]. Policy gradient methods address the problem of finding a good policy by performing stochastic gradient descent to optimize a performance objective over a given family of parametrized stochastic policies πθ(a|s) parameterized by θ. 
The policy gradient theorem [14] provides expressions for the gradient of the average reward and discounted reward objectives with respect to θ. In the discounted setting, the objective is defined with respect to a designated start state (or distribution) s0: ρ(θ, s0) = Eπθ [ ∑∞ t=0 γ trt+1|s0]. The policy gradient theorem shows that: ∂ρ(θ, s0) ∂θ = ∑ s µπθ (s|s0) ∑ a ∂ππθ (a|s) ∂θ Qπθ (s, a). (1) where µπθ (s|s0) = ∑∞ t=0 γ tP (st = s|s0). 3 Our Proposal: Reinforced Continual Learning In this section, we elaborate on the new framework for continual learning, Reinforced Continual Learning(RCL). RCL consists of three networks, controller, value network, and task network. The controller is implemented as a Long Short-Term Memory network (LSTM) for generating policies and determining how many filters or nodes will be added for each task. We design the value network as a fully-connected network, which approximates the value of the state. The task network can be any network of interest for solving a particular task, such as image classification or object detection. In this paper, we use a convolutional network (CNN) as the task network to demonstrate how RCL adaptively expands this CNN to prevent forgetting, though our method can not only adapt to convolutional networks, but also to fully-connected networks. 3.1 The Controller Figure 1(a) visually shows how RCL expands the network when a new task arrives. After the learning process of task t − 1 finishes and task t arrives, we use a controller to decide how many filters or nodes should be added to each layer. In order to prevent semantic drift, we withhold modification of network weights for previous tasks and only train the newly added filters. After we have trained the model for task t, we timestamp each newly added filter by the shape of every layer. During the inference time, each task only employs the parameters introduced in stage t, and does not consider the new filters added in the later tasks to prevent the caused semantic drift. Suppose the task network has m layers, when faced with a newly coming task, for each layer i, we specify the the number of filters to add in the range between 0 and ni − 1. A straightforward idea to obtain the optimal configuration of added filters for m layers is to traverse all the combinatorial combinations of actions. However, for an m-layer network, the time complexity of collecting the best action combination is O( ∏m 1 ni), which is NP-hard and unacceptable for very deep architectures such as VGG and ResNet. To deal with this issue, we treat a series of actions as a fixed-length string. It is possible to use a controller to generate such a string, representing how many filters should be added in each layer. Since there is a recurrent relationship between consecutive layers, the controller can be naturally designed as a LSTM network. At the first step, the controller network receives an empty embedding as input (i.e. the state s) for the current task, which will be fixed during the training. For each task t, we equip the network with softmax output, pt,i ∈ Rni representing the probabilities of sampling each action for layer i, i.e. the number of filters to be added. We design the LSTM in an autoregressive manner, as Figure 1(b) shows, the probability pt,i in the previous step is fed as input into the next step. This process is circulated until we obtain the actions and probabilities for all the m layers. 
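A minimal sketch of this autoregressive sampling loop is given below (plain NumPy; the LSTM width, a shared per-layer action space of size n, and all variable names are illustrative assumptions on our part rather than the authors' TensorFlow implementation). Each step emits a softmax distribution p_{t,i} over how many filters to add to layer i, samples an action, and feeds p_{t,i} back in as the next input.

```python
import numpy as np

rng = np.random.default_rng(0)
n, hidden, m = 10, 32, 5            # assumed: action-space size, LSTM width, number of layers
Wx = rng.normal(0, 0.1, (4 * hidden, n))        # input-to-gates weights
Wh = rng.normal(0, 0.1, (4 * hidden, hidden))   # hidden-to-gates weights
b  = np.zeros(4 * hidden)
Wo = rng.normal(0, 0.1, (n, hidden))            # hidden-to-softmax weights
bo = np.zeros(n)

def lstm_step(x, h, c):
    """One LSTM cell step; gates stacked as [input, forget, output, candidate]."""
    z = Wx @ x + Wh @ h + b
    H = hidden
    i = 1 / (1 + np.exp(-z[0:H]))
    f = 1 / (1 + np.exp(-z[H:2 * H]))
    o = 1 / (1 + np.exp(-z[2 * H:3 * H]))
    g = np.tanh(z[3 * H:4 * H])
    c = f * c + i * g
    return o * np.tanh(c), c

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

h, c = np.zeros(hidden), np.zeros(hidden)
x = np.zeros(n)                      # the fixed "empty embedding" fed at the first step
actions, chosen_probs = [], []
for layer in range(m):
    h, c = lstm_step(x, h, c)
    p = softmax(Wo @ h + bo)         # p_{t,layer}: distribution over filters to add
    a = rng.choice(n, p=p)
    actions.append(int(a))
    chosen_probs.append(p[a])
    x = p                            # autoregressive: feed p_{t,layer} into the next step
policy_prob = float(np.prod(chosen_probs))   # product over the m layers, cf. equation (2)
```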
And the policy probability of the sequence of actions a1:m follows the product rule, π(a1:m|s; θc) = m∏ i=1 pt,i,ai , (2) where θc denotes the parameters of the controller network. 3.2 The Task Network We deal with T tasks arriving in a sequential manner with training dataset Dt = {xi, yi}Nti=1 , validation dataset Vt = {xi, yi}Mti=1, test dataset Tt = {xi, yi} Kt i=1 at time t. For the first task, we train a basic task network that performs well enough via solving a standard supervised learning problem, min W1 L1(W1;D1). (3) We define the well-trained parameters as W at for task t. When the t-th task arrives, we already know the best parameters W at−1 for task t − 1. Now we use the controller to decide how many filters should be added to each layer, and then we obtain an expanded child network, whose parameters to be learned are denoted as Wt (including W at−1). The training procedure for the new task is as follows, keeping W at−1 fixed and only back-propagating the newly added parameters of Wt\W at−1. Thus, the optimization formula for the new task is, min Wt\Wat−1 Lt(Wt;Dt). (4) We use stochastic gradient descent to learn the newly added filters with η as the learning rate, Wt\W at−1 ←−Wt\W at−1 − η∇Wt\Wat−1Lt. (5) The expanded child network will be trained until the required number of epochs or convergence are reached. And then we test the child network on the validation dataset Vt and the corresponding accuracy At will be returned. The parameters of the expanded network achieving the maximal reward (described in Section 3.3) will be the optimal ones for task t, and we store them for later tasks. 3.3 Reward Design In order to facilitate our controller to generate better actions over time, we need design a reward function to reflect the performance of our actions. Considering both the validation accuracy and complexity of the expanded network, we design the reward for task t by the combination of the two terms, Rt = At(St, a1:m) + αCt(St, a1:m), (6) where At represents the validation accuracy on Vt, the network complexity as Ct = − m∑ i=1 ki, ki is the number of filters added in layer i, and α is a parameter to balance between the prediction performance and model complexity. Since Rt is non-differentiable, we use policy gradient to update the controller, described in the following section. 3.4 Training Procedures The controller’s prediction can be viewed as a list of actions a1:m, which means the number of filters added in m layers , to design an new architecture for a child network and then be trained in a new task. At convergence, this child network will achieve an accuracy At on a validation dataset and the model complexity Ct, finally we can obtain the reward Rt as defined in Eq. (6). We can use this reward Rt and reinforcement learning to train the controller. To find the optimal incremental architecture the new task t, the controller aims to maximize its expected reward, J(θc) = Vθc(st). (7) where Vθc is the true value function. In order to accelerate policy gradient training over θc, we use actorcritic methods with a value network parameterized by θv to approximate the state value V (st; θv). The REINFORCE algorithm [16] can be used to learn θc, ∇θcJ(θc) = E [∑ a1:m π(a1:m|st, θc)(R(st, a1:m)− V (st, θv)) ∇θcπ(a1:m|st, θc) π(a1:m|st, θc) ] . (8) Algorithm 1 RCL for Continual Learning 1: Input: A sequence of dataset D = {D1,D2, . . . ,DT } 2: Output: W aT 3: for t = 1, ..., T do 4: if t = 1 then 5: Train the base network using ( 3) on the first datasest D1 and obtain W a1 . 
6: else 7: Expand the network by Algorithm 2, and obtain the trained W at . 8: end if 9: end for Algorithm 2 Routine for Network Expansion 1: Input: Current dataset Dt; previous parameter W at−1; the size of action space for each layer ni, i = 1 . . . ,m; number of epochs for training the controller and value network, Te. 2: Output: Network parameter W at 3: for i = 1, . . . , Te do 4: Generate actions a1:m by controller’s policy; 5: Generate W (i)t by expanding parameters W a t−1 according to a1:m; 6: Train the expanded network using Eq. (5) to obtain W (i)t . 7: Evaluate the gradients of the controller and value network by Eq. (9) and Eq.(10), θc = θc + ηc∇θcJ(θc), θv = θv − ηv∇θvLv(θv). 8: end for 9: Return the best network parameter configuration, W at = argmaxW (i)t Rt(W (i) t ). A Monte Carlo approximation for the above quantity is, 1 N N∑ i=1 ∇θc log π(a (i) 1:m|st; θc) ( R(st, a (i) 1:m)− V (st, θv) ) . (9) where N is the batch size. For the value network, we utilize gradient-based method to update θv , the gradient of which can be evaluated as follows, Lv = 1 N N∑ i=1 (V (st; θv)−R(st, a(i)1:m))2, ∇θvLv = 2 N N∑ i=1 ( V (st; θv)−R(st, a(i)1:m) ) ∂V (st; θv) ∂θv . (10) Finally we summarize our RCL approach for continual learning in Algorithm 1, in which the subroutine for network expansion is described in Algorithm 2. 3.5 Comparison with Other Approaches As a new framework for network expansion to achieve continual learning, RCL distinguishes from progressive network [11] and DEN [17] from the following aspects. • Compared with DEN, instead of performing selective retraining and network split, RCL keeps the learned parameters for previous tasks fixed and only updates the added parameters. Through this training strategy, RCL can totally prevent catastrophic forgetting due to the freezing parameters for corresponding tasks. • Progressive neural networks expand the architecture with a fixed number of units or filters. To obtain a satisfying model accuracy when number of sequential tasks is large, the final complexity of progressive nets is required to be extremely high. This directly leads to high computational burden both in training and inference, even difficult for the storage of the entire model. To handle this issue, both RCL and DEN dynamically adjust the networks to reach a more economic architecture. • While DEN achieves the expandable network by sparse regularization, RCL adaptively expands the network by reinforcement learning. However, the performance of DEN is quite sensitive to the various hyperparameters, including regularization parameters and thresholding coefficients. RCL largely reduces the number of hyperparameters and boils down to only balancing the average validation accuracy and model complexity when the designed reward function. Through different experiments in Section 4, we demonstrate that RCL could achieve more stable results, and better model performance could be achieved simultaneously with even much less neurons than DEN. 4 Experiments We perform a variety of experiments to access the performance of RCL in continual learning. We will report the accuracy, the model complexity and the training time consumption between our RCL and the state-of-the-art baselines. We implemented all the experiments in Tensorfolw framework on GPU Tesla K80. Datasets (1) MNIST Permutations [4]. Ten variants of the MNIST data, where each task is transformed by a fixed permutation of pixels. 
In the dataset, the samples from different task are not independent and identically distributed; (2) MNIST Mix. Five MNIST permutations (P1, . . . , P5) and five variants of the MNIST dataset (R1, . . . , R5) where each contains digits rotated by a fixed angle between 0 and 180 degrees. These tasks are arranged in the order P1, R1, P2, . . . , P5, R5. (3) Incremental CIFAR-100 [9]. Different from the original CIFAR-100, each task introduces a new set of classes. For the total number of tasks T , each new task contains digits from a subset of 100/T classes. In this dataset, the distribution of the input is similar for all tasks, but the distribution of the output is different. For all of the above datasets, we set the number of tasks to be learned as T = 10. For the MNIST datasets, each task contains 60000 training examples and 10000 test examples from 10 different classes. For the CIFAR-100 datasets, each task contains 5000 train examples and 1000 examples from 10 different classes. The model observes the tasks one by one, and once the task had been observed, the task will not be observed later during the training. Baselines (1) SN, a single network trained across all tasks; (2) EWC, deep network trained with elastic weight consolidation [4] for regularization; (3) GEM, gradient episodic memory [7]; (4) PGN, progressive neural network proposed in [11]; (5) DEN, dynamically expandable network [17]. Base network settings (1) Fully connected networks for MNIST Permutations and MNIST Mix datasets. We use a three-layer network with 784-312-128-10 neurons with RELU activations; (2) LeNet is used for Incremental CIFAR-100. LeNet has two convolutional layers and three fullyconnected layers, the detailed structure of LeNet can be found in [5]. 4.1 Results We evaluate each compared approach by considering average test accuracy on all the tasks, model complexity and training time. Model complexity is measured via the number of model parameters after training all the tasks. We first report the test accuracy and model complexity of baselines and our proposed RCL for the three datasets in Figure 2. Comparison between fixed-size and expandable networks. From Figure 2, we can easily observe that the approaches with fixed-size network architectures, such as IN, EWC and GEM, own low model complexity, but their prediction accuracy is much worse than those methods with expandable networks, including PGN, DEN and RCL. This shows that dynamically expanding networks can indeed contribute to the model performance by a large margin. Comparison between PGN, DEN and RCL. Regarding to the expandable networks, RCL outperforms PGN and DEN on both test accuracy and model complexity. Particularly, RCL achieves significant reduction on the number of parameters compared with PGN and DEN, e.g. for incremental Cifar100 data, 42% and 53% parameter reduction, respectively. To further see the difference of the three methods, we vary the hyperparameters settings and train the networks accordingly, and obtain how test accuracy changes with respect to the number of parameters, as shown in Figure 3. We can clearly observe that RCL can achieve significant model reduction with the same test accuracy as that of PGN and DEN, and accuracy improvement with same size of networks. This demonstrates the benefits of employing reinforcement learning to adaptively control the complexity of the entire model architecture. Comparison between RCL and Random Search. 
We compare our policy gradient controller and random search controller on different datasets. In every experiment setup, hyper-parameters are the same except the controller (random search controller v.s. policy gradient controller). We run each experiment for four times. We found that random search achieves more than 0.1% less accuracy and almost the same number of parameters on these three datasets compared with policy gradient. We note that random search performs surprisingly well, which we attribute to the representation power of our reward design. This demonstrates that our well-constructed reward strikes a balance between accuracy and model complexity very effectively. Evaluating the forgetting behavior. Figure 4 shows the evolution of the test accuracy on the first task as more tasks are learned. RCL and PGN exhibit no forgetting while the approaches without expanding the networks raise catastrophic forgetting. Moreover, DEN can not completely prevent forgetting since it retrains the previous parameters when learning new tasks. Training time We report the wall clock training time for each compared method in Table 1). Since RCL is based on reinforcement learning, a large number of trials are typically required that leads to more training time than other methods. Improving the training efficiency of reinforcement learning is still an open problem, and we leave it as future work. Balance between test accuracy and model complexity. We control the tradeoff between the model performance and complexity through the coefficient α in the reward function (6). Figure 5 shows how varying α affects the test accuracy and number of model parameters. As expected, with increasing α the model complexity drops significantly while the model performance also deteriorate gradually. Interestingly, when α is small, accuracy drops much slower compared with the decreasing of the number of parameters. This observation could help to choose a suitable α such that a mediumsized network can still achieve a relatively good model performance. 5 Conclusion We propose a novel framework for continual learning, Reinforced Continual Learning. Our method searches for the best neural architecture for coming task by reinforcement learning, which increases its capacity when necessary and effectively prevents semantic drift. We implement both fully connected and convolutional neural networks as our task networks, and validate them on different datasets. The experiments demonstrate that our proposal outperforms the exiting baselines significantly both on prediction accuracy and model complexity. As for future works, two directions are worthy of consideration. Firstly, we will develop new strategies for RCL to facilitate backward transfer, i.e. improve previous tasks’ performance by learning new tasks. Moreover, how to reduce the training time of RCL is particularly important for large networks with more layers. Acknowledgments Supported by National Natural Science Foundation of China (Grant No: 61806009) and Beijing Natural Science Foundation (Grant No: 4184090).
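To connect the pieces of Section 3.4 in code, the following is a minimal sketch of the REINFORCE-with-baseline update of equations (8)-(10) using the reward of equation (6). For brevity the LSTM controller is replaced by independent per-layer logits, the value network by a scalar baseline, and the child-network training by a synthetic stand-in; all of these simplifications, along with every name below, are our own illustrative assumptions rather than the authors' TensorFlow implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8                  # assumed: m expandable layers, up to n - 1 filters added each
theta = np.zeros((m, n))     # simplified controller: independent per-layer logits
v = 0.0                      # simplified value network: a single scalar baseline
alpha_tradeoff, lr_c, lr_v, N, T_e = 1e-3, 0.5, 0.5, 4, 50

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_and_evaluate(actions):
    # stand-in for training the expanded child network; returns a synthetic
    # "validation accuracy" that mildly prefers mid-sized expansions
    return 0.9 - 0.01 * sum(abs(a - n // 2) for a in actions)

def reward(actions):
    # equation (6): accuracy plus alpha * C_t, where C_t = -(sum of added filters)
    return train_and_evaluate(actions) + alpha_tradeoff * (-sum(actions))

for _ in range(T_e):
    grad_theta, grad_v = np.zeros_like(theta), 0.0
    for _ in range(N):                       # Monte Carlo batch, equation (9)
        probs = [softmax(theta[i]) for i in range(m)]
        acts = [int(rng.choice(n, p=probs[i])) for i in range(m)]
        R = reward(acts)
        adv = R - v                          # advantage against the baseline
        for i, a in enumerate(acts):
            g = -probs[i]
            g[a] += 1.0                      # gradient of log softmax w.r.t. the logits
            grad_theta[i] += adv * g
        grad_v += 2.0 * (v - R)              # equation (10)
    theta += lr_c * grad_theta / N           # ascend the policy-gradient estimate
    v -= lr_v * grad_v / N                   # descend the value-regression loss
```

In the full method the per-layer distributions come from the LSTM controller of Section 3.1 and the baseline from the value network, but the structure of the update is the same.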
1. What is the focus of the paper regarding catastrophic forgetting? 2. What are the strengths of the proposed approach, particularly in terms of adaptive network expansion? 3. What are the weaknesses of the paper, especially regarding experimentation? 4. Do you have any concerns about the use of REINFORCE objective in this setup? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review Summary: This paper proposes a way to deal with "catastrophic forgetting" by assigning parts of the model to the particular tasks being learned and learning to grow the network by adding more filters as more and more tasks are introduced. In this way, parts of the model become more task-specific. The controller, which decides how many filters or units to add to each layer, is an LSTM network over the layers that learns the sequential relation between consecutive layers. They train the controller via reinforcement learning using REINFORCE with a learned actor and critic. They have done smart reward shaping in order to train the controller. Overall Comments: The idea of adaptively expanding the network according to the new task being added, with the capacity assigned by the controller being task-dependent, is very neat. However, I am quite surprised that the REINFORCE objective actually worked in this setup, because in similar settings, for instance adaptive computation time, REINFORCE has not really worked that well. I think the main reason it worked in this setup might be the reward shaping. I think overall the paper is well written besides some small typos, for example "s/matche/matches/" in line 24. I think one main part this paper lacks is the experiments section: 1) Experiments are limited to CIFAR-100 and MNIST. These are very toy-ish tasks. I would be particularly interested in seeing results on some NLP or reinforcement learning task, for example one of the tasks that Progressive Neural Networks were tested on. 2) More ablations are needed. In particular, ablations on the controller would be super useful. For example, one ablation I would like to see is a fixed controller or a random controller. Overall, I like the paper and the idea, but I don't fully buy the experiments part of the paper.
NIPS
Title Reinforced Continual Learning Abstract Most artificial intelligence models are limited in their ability to solve new tasks faster, without forgetting previously acquired knowledge. The recently emerging paradigm of continual learning aims to solve this issue, in which the model learns various tasks in a sequential fashion. In this work, a novel approach for continual learning is proposed, which searches for the best neural architecture for each coming task via sophisticatedly designed reinforcement learning strategies. We name it as Reinforced Continual Learning. Our method not only has good performance on preventing catastrophic forgetting but also fits new tasks well. The experiments on sequential classification tasks for variants of MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks. 1 Introduction Continual learning, or lifelong learning [15], the ability to learn consecutive tasks without forgetting how to perform previously trained tasks, is an important topic for developing artificial intelligence. The primary goal of continual learning is to overcome the forgetting of learned tasks and to leverage the earlier knowledge for obtaining better performance or faster convergence/training speed on the newly coming tasks. In the deep learning community, two groups of strategies have been developed to alleviate the problem of forgetting the previously trained tasks, distinguished by whether the network architecture changes during learning. The first category of approaches maintain a fixed network architecture with large capacity. When training the network for consecutive tasks, some regularization term is enforced to prevent the model parameters from deviating too much from the previous learned parameters according to their significance to old tasks [4, 19]. In [6], the authors proposed to incrementally match the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. Alternatively, an episodic memory [7] is budgeted to store the subsets of previous datasets, and then trained together with the new task. FearNet [3] mitigates catastrophic forgetting by consolidating recent memories into long-term storage using pseudorehearsal [10] which employs a generative autoencoder to generate previously learned examples that are replayed alongside novel information during consolidation. Fernando et al. [2] proposed PathNet, in which a neural network has ten or twenty modules in each layer, and three or four modules are picked for one task in each layer by an evolutionary approach. However, these methods typically require unnecessarily largecapacity networks, particularly when the number of tasks is large, since the network architecture is never dynamically adjusted during training. ∗Corresponding author. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. The other group of methods for overcoming catastrophic forgetting dynamically expand the network to accommodate the new coming task while keeping the parameters of previous architecture unchanged. Progressive networks [11] expand the architectures with a fixed size of nodes or layers, leading to an extremely large network structure particularly faced with a large number of sequential tasks. The resultant complex architecture might be expensive to store and even unnecessary due to its high redundancy. 
Dynamically Expandable Network (DEN, [17] alleviated this issue slightly by introducing group sparsity regularization when adding new parameters to the original network; unfortunately, there involves many hyperparameters in DEN, including various regularization and thresholding ones, which need to be tuned carefully due to the high sensitivity to the model performance. In this work, in order to better facilitate knowledge transfer and avoid catastrophic forgetting, we propose a novel framework to adaptively expand the network. Faced with a new task, deciding optimal number of nodes/filters to add for each layer is posed as a combinatorial optimization problem. We provide a sophisticatedly designed reinforcement learning method to solve this problem. Thus, we name it as Reinforced Continual Learning (RCL). In RCL, a controller implemented as a recurrent neural network is adopted to determine the best architectural hyper-parameters of neural networks for each task. We train the controller by an actor-critic strategy guided by a reward signal deriving from both validation accuracy and network complexity. This can maintain the prediction accuracy on older tasks as much as possible while reducing the overall model complexity. To the best of our knowledge, the proposal is the first attempt that employs the reinforcement learning for solving the continual learning problems. RCL not only differs from adding a fixed number of units to the old network for solving a new task [11], which might be suboptimal and computationally expensive, but also distinguishes from [17] as well that performs group sparsity regularization on the added parameters. We validate the effectiveness of RCL on various sequential tasks. And the results show that RCL can obtain better performance than existing methods even with adding much less units. The rest of this paper is organized as follows. In Section 2, we introduce the preliminary knowledge on reinforcement learning. We propose the new method RCL in Section 3, a model to learn a sequence of tasks dynamically based on reinforcement learning. In Section 4, we implement various experiments to demonstrate the superiority of RCL over other state-of-the-art methods. Finally, we conclude our paper in Section 5 and provide some directions for future research. 2 Preliminaries of Reinforcement learning Reinforcement learning [13] deals with learning a policy for an agent interacting in an unknown environment. It has been applied successfully to various problems, such as games [8, 12], natural language processing [18], neural architecture/optimizer search [20, 1] and so on. At each step, an agent observes the current state st of the environment, decides of an action at according to a policy π(at|st), and observes a reward signal rt+1. The goal of the agent is to find a policy that maximizes the expected sum of discounted rewards Rt, Rt = ∑∞ t′=t+1 γ t′−t−1rt′ , where γ ∈ (0, 1] is a discount factor that determines the importance of future rewards. The value function of a policy π is defined as the expected return Vπ(s) = Eπ[ ∑∞ t=0 γ trt+1|s0 = s] and its action-value function as Qπ(s, a) = Eπ[ ∑∞ t=0 γ trt+1|s0 = s, a0 = a]. Policy gradient methods address the problem of finding a good policy by performing stochastic gradient descent to optimize a performance objective over a given family of parametrized stochastic policies πθ(a|s) parameterized by θ. 
The policy gradient theorem [14] provides expressions for the gradient of the average reward and discounted reward objectives with respect to θ. In the discounted setting, the objective is defined with respect to a designated start state (or distribution) s0: ρ(θ, s0) = Eπθ [ ∑∞ t=0 γ trt+1|s0]. The policy gradient theorem shows that: ∂ρ(θ, s0) ∂θ = ∑ s µπθ (s|s0) ∑ a ∂ππθ (a|s) ∂θ Qπθ (s, a). (1) where µπθ (s|s0) = ∑∞ t=0 γ tP (st = s|s0). 3 Our Proposal: Reinforced Continual Learning In this section, we elaborate on the new framework for continual learning, Reinforced Continual Learning(RCL). RCL consists of three networks, controller, value network, and task network. The controller is implemented as a Long Short-Term Memory network (LSTM) for generating policies and determining how many filters or nodes will be added for each task. We design the value network as a fully-connected network, which approximates the value of the state. The task network can be any network of interest for solving a particular task, such as image classification or object detection. In this paper, we use a convolutional network (CNN) as the task network to demonstrate how RCL adaptively expands this CNN to prevent forgetting, though our method can not only adapt to convolutional networks, but also to fully-connected networks. 3.1 The Controller Figure 1(a) visually shows how RCL expands the network when a new task arrives. After the learning process of task t − 1 finishes and task t arrives, we use a controller to decide how many filters or nodes should be added to each layer. In order to prevent semantic drift, we withhold modification of network weights for previous tasks and only train the newly added filters. After we have trained the model for task t, we timestamp each newly added filter by the shape of every layer. During the inference time, each task only employs the parameters introduced in stage t, and does not consider the new filters added in the later tasks to prevent the caused semantic drift. Suppose the task network has m layers, when faced with a newly coming task, for each layer i, we specify the the number of filters to add in the range between 0 and ni − 1. A straightforward idea to obtain the optimal configuration of added filters for m layers is to traverse all the combinatorial combinations of actions. However, for an m-layer network, the time complexity of collecting the best action combination is O( ∏m 1 ni), which is NP-hard and unacceptable for very deep architectures such as VGG and ResNet. To deal with this issue, we treat a series of actions as a fixed-length string. It is possible to use a controller to generate such a string, representing how many filters should be added in each layer. Since there is a recurrent relationship between consecutive layers, the controller can be naturally designed as a LSTM network. At the first step, the controller network receives an empty embedding as input (i.e. the state s) for the current task, which will be fixed during the training. For each task t, we equip the network with softmax output, pt,i ∈ Rni representing the probabilities of sampling each action for layer i, i.e. the number of filters to be added. We design the LSTM in an autoregressive manner, as Figure 1(b) shows, the probability pt,i in the previous step is fed as input into the next step. This process is circulated until we obtain the actions and probabilities for all the m layers. 
The policy probability of the sequence of actions $a_{1:m}$ then follows the product rule,
$$\pi(a_{1:m} \mid s; \theta_c) = \prod_{i=1}^{m} p_{t,i,a_i}, \qquad (2)$$
where $\theta_c$ denotes the parameters of the controller network.

3.2 The Task Network

We deal with T tasks arriving sequentially, with training dataset $\mathcal{D}_t = \{x_i, y_i\}_{i=1}^{N_t}$, validation dataset $\mathcal{V}_t = \{x_i, y_i\}_{i=1}^{M_t}$, and test dataset $\mathcal{T}_t = \{x_i, y_i\}_{i=1}^{K_t}$ at time t. For the first task, we train a basic task network that performs well enough by solving a standard supervised learning problem,
$$\min_{W_1} L_1(W_1; \mathcal{D}_1). \qquad (3)$$
We denote the well-trained parameters for task t by $W_t^a$. When the t-th task arrives, we already have the best parameters $W_{t-1}^a$ for task t − 1. We use the controller to decide how many filters should be added to each layer, and obtain an expanded child network whose parameters to be learned are denoted $W_t$ (including $W_{t-1}^a$). Training for the new task keeps $W_{t-1}^a$ fixed and back-propagates only through the newly added parameters $W_t \setminus W_{t-1}^a$. The optimization problem for the new task is therefore
$$\min_{W_t \setminus W_{t-1}^a} L_t(W_t; \mathcal{D}_t). \qquad (4)$$
We use stochastic gradient descent with learning rate $\eta$ to learn the newly added filters,
$$W_t \setminus W_{t-1}^a \leftarrow W_t \setminus W_{t-1}^a - \eta \nabla_{W_t \setminus W_{t-1}^a} L_t. \qquad (5)$$
The expanded child network is trained until the required number of epochs or convergence is reached. We then evaluate the child network on the validation dataset $\mathcal{V}_t$ and obtain the corresponding accuracy $A_t$. The parameters of the expanded network achieving the maximal reward (described in Section 3.3) are taken as the optimal ones for task t, and we store them for later tasks.

3.3 Reward Design

To encourage the controller to generate better actions over time, we need to design a reward function that reflects the quality of its actions. Considering both the validation accuracy and the complexity of the expanded network, we define the reward for task t as the combination of two terms,
$$R_t = A_t(S_t, a_{1:m}) + \alpha C_t(S_t, a_{1:m}), \qquad (6)$$
where $A_t$ is the validation accuracy on $\mathcal{V}_t$, the network complexity is $C_t = -\sum_{i=1}^{m} k_i$ with $k_i$ the number of filters added in layer i, and $\alpha$ is a parameter balancing prediction performance against model complexity. Since $R_t$ is non-differentiable, we use policy gradients to update the controller, as described in the following section.

3.4 Training Procedures

The controller's prediction can be viewed as a list of actions $a_{1:m}$, i.e., the numbers of filters added in the m layers, which defines a new architecture for a child network that is then trained on the new task. At convergence, this child network achieves an accuracy $A_t$ on the validation dataset and has model complexity $C_t$, from which we obtain the reward $R_t$ defined in Eq. (6). We use this reward and reinforcement learning to train the controller. To find the optimal incremental architecture for the new task t, the controller aims to maximize its expected reward,
$$J(\theta_c) = V_{\theta_c}(s_t), \qquad (7)$$
where $V_{\theta_c}$ is the true value function. To accelerate policy gradient training over $\theta_c$, we use actor-critic methods with a value network parameterized by $\theta_v$ that approximates the state value $V(s_t; \theta_v)$. The REINFORCE algorithm [16] can be used to learn $\theta_c$:
$$\nabla_{\theta_c} J(\theta_c) = \mathbb{E}\left[\sum_{a_{1:m}} \pi(a_{1:m} \mid s_t, \theta_c)\,\big(R(s_t, a_{1:m}) - V(s_t, \theta_v)\big)\, \frac{\nabla_{\theta_c} \pi(a_{1:m} \mid s_t, \theta_c)}{\pi(a_{1:m} \mid s_t, \theta_c)}\right]. \qquad (8)$$

Algorithm 1 RCL for Continual Learning
1: Input: a sequence of datasets $\mathcal{D} = \{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_T\}$
2: Output: $W_T^a$
3: for t = 1, ..., T do
4:   if t = 1 then
5:     Train the base network using Eq. (3) on the first dataset $\mathcal{D}_1$ and obtain $W_1^a$.
6:   else
7:     Expand the network by Algorithm 2 and obtain the trained $W_t^a$.
8:   end if
9: end for

Algorithm 2 Routine for Network Expansion
1: Input: current dataset $\mathcal{D}_t$; previous parameters $W_{t-1}^a$; the size of the action space for each layer, $n_i$, $i = 1, \ldots, m$; number of epochs for training the controller and value network, $T_e$.
2: Output: network parameters $W_t^a$
3: for i = 1, ..., $T_e$ do
4:   Generate actions $a_{1:m}$ from the controller's policy;
5:   Generate $W_t^{(i)}$ by expanding the parameters $W_{t-1}^a$ according to $a_{1:m}$;
6:   Train the expanded network using Eq. (5) to obtain $W_t^{(i)}$;
7:   Evaluate the gradients of the controller and value network by Eq. (9) and Eq. (10), and update $\theta_c \leftarrow \theta_c + \eta_c \nabla_{\theta_c} J(\theta_c)$, $\theta_v \leftarrow \theta_v - \eta_v \nabla_{\theta_v} L_v(\theta_v)$.
8: end for
9: Return the best network parameter configuration, $W_t^a = \arg\max_{W_t^{(i)}} R_t(W_t^{(i)})$.

A Monte Carlo approximation of the gradient in Eq. (8) is
$$\nabla_{\theta_c} J(\theta_c) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta_c} \log \pi(a_{1:m}^{(i)} \mid s_t; \theta_c)\,\big(R(s_t, a_{1:m}^{(i)}) - V(s_t, \theta_v)\big), \qquad (9)$$
where N is the batch size. For the value network, we use a gradient-based method to update $\theta_v$; the loss and its gradient are
$$L_v = \frac{1}{N} \sum_{i=1}^{N} \big(V(s_t; \theta_v) - R(s_t, a_{1:m}^{(i)})\big)^2, \qquad \nabla_{\theta_v} L_v = \frac{2}{N} \sum_{i=1}^{N} \big(V(s_t; \theta_v) - R(s_t, a_{1:m}^{(i)})\big)\, \frac{\partial V(s_t; \theta_v)}{\partial \theta_v}. \qquad (10)$$
Finally, we summarize our RCL approach for continual learning in Algorithm 1, with the subroutine for network expansion described in Algorithm 2.

3.5 Comparison with Other Approaches

As a new framework for network expansion in continual learning, RCL differs from progressive networks [11] and DEN [17] in the following respects.
• Compared with DEN, instead of performing selective retraining and network splitting, RCL keeps the parameters learned for previous tasks fixed and only updates the added parameters. This training strategy completely prevents catastrophic forgetting, because the parameters used by earlier tasks are frozen.
• Progressive neural networks expand the architecture by a fixed number of units or filters. To obtain satisfactory accuracy when the number of sequential tasks is large, the final complexity of progressive nets must be extremely high. This leads to a high computational burden in both training and inference, and can even make storing the entire model difficult. To address this issue, both RCL and DEN dynamically adjust the network to reach a more economical architecture.
• While DEN obtains an expandable network through sparse regularization, RCL adaptively expands the network with reinforcement learning. The performance of DEN is quite sensitive to its various hyperparameters, including regularization parameters and thresholding coefficients. RCL largely reduces the number of hyperparameters, which boil down to balancing validation accuracy against model complexity in the designed reward function. Through the experiments in Section 4, we demonstrate that RCL achieves more stable results and better model performance, with far fewer neurons than DEN.

4 Experiments

We perform a variety of experiments to assess the performance of RCL in continual learning. We report the accuracy, model complexity, and training time of RCL and state-of-the-art baselines. All experiments were implemented in the TensorFlow framework on a Tesla K80 GPU.

Datasets (1) MNIST Permutations [4]. Ten variants of the MNIST data, where each task is transformed by a fixed permutation of pixels.
In this dataset, the samples from different tasks are not independent and identically distributed. (2) MNIST Mix. Five MNIST permutations (P1, . . . , P5) and five variants of the MNIST dataset (R1, . . . , R5), each containing digits rotated by a fixed angle between 0 and 180 degrees. These tasks are arranged in the order P1, R1, P2, . . . , P5, R5. (3) Incremental CIFAR-100 [9]. Different from the original CIFAR-100, each task introduces a new set of classes: for a total number of tasks T, each new task contains images from a disjoint subset of 100/T classes. In this dataset, the distribution of the input is similar across tasks, but the distribution of the output is different.

For all of the above datasets, we set the number of tasks to be learned to T = 10. For the MNIST datasets, each task contains 60000 training examples and 10000 test examples from 10 different classes. For the CIFAR-100 dataset, each task contains 5000 training examples and 1000 test examples from 10 different classes. The model observes the tasks one by one, and once a task has been observed it is not revisited during training.

Baselines (1) SN, a single network trained across all tasks; (2) EWC, a deep network trained with elastic weight consolidation [4] for regularization; (3) GEM, gradient episodic memory [7]; (4) PGN, the progressive neural network proposed in [11]; (5) DEN, the dynamically expandable network [17].

Base network settings (1) Fully connected networks for the MNIST Permutations and MNIST Mix datasets: a three-layer network with 784-312-128-10 neurons and ReLU activations; (2) LeNet for Incremental CIFAR-100: LeNet has two convolutional layers and three fully-connected layers, and its detailed structure can be found in [5].

4.1 Results

We evaluate each compared approach by its average test accuracy over all tasks, its model complexity, and its training time. Model complexity is measured by the number of model parameters after training on all tasks. We first report the test accuracy and model complexity of the baselines and our proposed RCL on the three datasets in Figure 2.

Comparison between fixed-size and expandable networks. From Figure 2, we observe that the approaches with fixed-size network architectures, such as SN, EWC and GEM, have low model complexity, but their prediction accuracy is much worse than that of methods with expandable networks, including PGN, DEN and RCL. This shows that dynamically expanding the network can improve model performance by a large margin.

Comparison between PGN, DEN and RCL. Among the expandable networks, RCL outperforms PGN and DEN on both test accuracy and model complexity. In particular, RCL achieves a significant reduction in the number of parameters compared with PGN and DEN, e.g., 42% and 53% fewer parameters, respectively, on Incremental CIFAR-100. To further examine the differences among the three methods, we vary the hyperparameter settings, train the networks accordingly, and plot how test accuracy changes with the number of parameters in Figure 3. We clearly observe that RCL achieves a significant reduction in model size at the same test accuracy as PGN and DEN, and higher accuracy at the same network size. This demonstrates the benefit of employing reinforcement learning to adaptively control the complexity of the model architecture.

Comparison between RCL and Random Search.
We compare our policy gradient controller with a random search controller on the different datasets. In every experimental setup, the hyper-parameters are the same except for the controller (random search vs. policy gradient). We run each experiment four times. We find that random search achieves more than 0.1% lower accuracy with almost the same number of parameters as policy gradient on these three datasets. We note that random search performs surprisingly well, which we attribute to the representational power of our reward design. This demonstrates that our reward effectively balances accuracy against model complexity.

Evaluating the forgetting behavior. Figure 4 shows the evolution of the test accuracy on the first task as more tasks are learned. RCL and PGN exhibit no forgetting, while the approaches that do not expand the network suffer from catastrophic forgetting. Moreover, DEN cannot completely prevent forgetting, since it retrains the previous parameters when learning new tasks.

Training time. We report the wall-clock training time of each compared method in Table 1. Since RCL is based on reinforcement learning, it typically requires a large number of trials, which leads to longer training time than the other methods. Improving the training efficiency of reinforcement learning is still an open problem, and we leave it as future work.

Balance between test accuracy and model complexity. We control the tradeoff between model performance and complexity through the coefficient α in the reward function (6). Figure 5 shows how varying α affects the test accuracy and the number of model parameters. As expected, with increasing α the model complexity drops significantly while the model performance deteriorates gradually. Interestingly, when α is small, accuracy drops much more slowly than the number of parameters. This observation can help choose a suitable α such that a medium-sized network still achieves relatively good performance.

5 Conclusion

We propose a novel framework for continual learning, Reinforced Continual Learning. Our method searches for the best neural architecture for each incoming task by reinforcement learning, increasing capacity when necessary and effectively preventing semantic drift. We implement both fully connected and convolutional neural networks as task networks and validate them on different datasets. The experiments demonstrate that our proposal significantly outperforms existing baselines in both prediction accuracy and model complexity. Two directions are worth exploring in future work. First, we will develop new strategies for RCL to facilitate backward transfer, i.e., improving previous tasks' performance by learning new tasks. Second, reducing the training time of RCL is particularly important for large networks with more layers.

Acknowledgments This work was supported by the National Natural Science Foundation of China (Grant No. 61806009) and the Beijing Natural Science Foundation (Grant No. 4184090).
1. What is the focus of the paper regarding continual learning?
2. What are the strengths of the proposed approach in terms of model architecture and RL application?
3. What are the weaknesses of the paper regarding training time and model complexity?
4. How does the reviewer assess the novelty of the idea and its comparison to prior works like Progressive Net and DEN?
Review
Review The work gives a nice application of RL to the continual learning problem, particularly focusing on the forward transfer case. Namely, as in Progressive Net or DEN, the proposed method expands the model architecture for each new task. But, unlike Progressive Net, which expands with a fixed-size network, and DEN, which has various hyperparameters to tune, the proposed method applies an RL framework to learn the model expansion step for each task. The specific RL technique is not necessarily novel, but quite standard: an LSTM controller and an actor-critic method for learning. However, I think the idea of applying RL to continual learning is novel enough for publication. In their experimental results, RCL achieves essentially the same accuracy as PGN and DEN, but with fewer parameters and hyperparameters to tune. The downside is that the training time is much longer than for the other methods. Also, since the model complexity grows with the number of tasks, it cannot handle too many tasks, which is a common limitation of PGN and DEN. The result on the forgetting behavior is not too surprising since they freeze the network for the older tasks. Hence, there is no backward transfer happening if data from an old task arrives again.
NIPS
Title Reinforced Continual Learning

Abstract Most artificial intelligence models are limited in their ability to solve new tasks quickly without forgetting previously acquired knowledge. The recently emerging paradigm of continual learning aims to solve this issue, in which the model learns various tasks in a sequential fashion. In this work, a novel approach for continual learning is proposed, which searches for the best neural architecture for each incoming task via carefully designed reinforcement learning strategies. We name it Reinforced Continual Learning. Our method not only performs well at preventing catastrophic forgetting but also fits new tasks well. Experiments on sequential classification tasks for variants of the MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks.

1 Introduction

Continual learning, or lifelong learning [15], the ability to learn consecutive tasks without forgetting how to perform previously trained tasks, is an important topic for developing artificial intelligence. The primary goal of continual learning is to overcome the forgetting of learned tasks and to leverage earlier knowledge to obtain better performance or faster convergence/training speed on newly arriving tasks. In the deep learning community, two groups of strategies have been developed to alleviate the problem of forgetting previously trained tasks, distinguished by whether the network architecture changes during learning. The first category of approaches maintains a fixed network architecture with large capacity. When training the network for consecutive tasks, a regularization term is enforced to prevent the model parameters from deviating too much from the previously learned parameters according to their significance to old tasks [4, 19]. In [6], the authors proposed to incrementally match the moments of the posterior distributions of the neural network trained on the first and the second task, respectively. Alternatively, an episodic memory [7] is budgeted to store subsets of previous datasets, which are then trained together with the new task. FearNet [3] mitigates catastrophic forgetting by consolidating recent memories into long-term storage using pseudorehearsal [10], which employs a generative autoencoder to generate previously learned examples that are replayed alongside novel information during consolidation. Fernando et al. [2] proposed PathNet, in which a neural network has ten or twenty modules in each layer, and three or four modules are picked for one task in each layer by an evolutionary approach. However, these methods typically require unnecessarily large-capacity networks, particularly when the number of tasks is large, since the network architecture is never dynamically adjusted during training. The other group of methods for overcoming catastrophic forgetting dynamically expands the network to accommodate the newly arriving task while keeping the parameters of the previous architecture unchanged. Progressive networks [11] expand the architecture by a fixed number of nodes or layers, leading to an extremely large network structure, particularly when faced with a large number of sequential tasks. The resulting complex architecture may be expensive to store and even unnecessary due to its high redundancy.
1. What is the main contribution of the paper regarding dynamic neural network growth in continual learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, such as concerns about hyperparameter sensitivity and stability, and lack of comparison to newer methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions or suggestions for improvement regarding the presentation of results, experiment design, and analysis?
Review
Review This paper presents a simple but effective way to dynamically grow a neural network in a continual learning setup. The framework uses reinforcement learning to learn a policy that outputs decisions in terms of the number of nodes/filters to add, and those additional filters are trained specifically for the new task (while the previous layers are held constant to prevent a reduction in performance on old tasks). This is done progressively layer by layer, where previous decisions for earlier layers serve as input to an LSTM in order to make decisions for the subsequent layer. Results show similar performance to approaches such as DEN, with somewhat better results for CIFAR-100 especially, and the method results in networks with lower complexity.

Strengths
- Good overview of the two groups of approaches on this topic.
- The idea of using RL for continual learning is a good one, and one that makes sense given recent advancements in RL for architecture search.
- The results seem positive, especially for CIFAR-100 and for lowering the overall complexity of the solutions.
- The method is simple, which means that it should be easy to implement, barring issues in RL training stability. Releasing source code would be very helpful in this regard for reproducing the results in the paper.

Weaknesses
- An argument against DEN, a competitor, is hyper-parameter sensitivity. First, this isn't really shown, but second (and more importantly) reinforcement learning is well-known to be extremely unstable and to require a great deal of tuning. For example, even random seed changes are known to change the behavior of the same algorithm, and different implementations of the same algorithm can get very different results (this has been heavily discussed in the community; see the keynote ICLR talk by Joelle Pineau as an example). This is not to say the proposed method doesn't have an advantage, but the argument that other methods require more tuning is not shown or consistent with known characteristics of RL.
  * Related to this, I am not sure I understand the experiments for Figure 3. The authors say they vary the hyper-parameters but then show results with respect to # of parameters. Is that the # of parameters of the final models at each timestep? Isn't that just varying one hyperparameter? I am not sure how this shows that RCL is more stable.
- Newer approaches such as FearNet [1] should be compared to, as they demonstrated significant improvement in performance (although they did not compare to all of the methods compared to here). [1] FearNet: Brain-Inspired Model for Incremental Learning, Ronald Kemker, Christopher Kanan, ICLR 2018.
- There is a deeper tie to meta-learning, which has several approaches as well. While these works don't target continual learning directly, they should be cited and the authors should try to distinguish those approaches. The work on RL for architecture search and/or as optimizers for learning (which is already cited) should be more heavily linked to this work, as it seems to directly follow as an application to continual learning.
- It seems to me that continuously adding capacity while not fine-tuning the underlying features (which training on task 1 will determine) is extremely limiting. If the task is too different and the underlying feature space in the early layers is not appropriate for new tasks, then the method will never be able to overcome the performance gap. Perhaps the authors can comment on this.
- Please review the language in the paper and fix typos/grammatical issues; a few examples:
  * [1] "have limitation to solve" => "are limited in their ability to solve"
  * [18] "In deep learning community" => "In THE deep learning community"
  * [24] "incrementally matche" => "incrementally MATCH"
  * [118] "we have already known" => "we already know"
  * and so on

Some more specific comments/questions:
- This sentence is confusing [93-95] "After we have trained the model for task t, we memorize each newly added filter by the shape of every layer to prevent the caused semantic drift." I believe I understood it after re-reading it and the subsequent sentences but it is not immediately obvious what is meant.
- [218] Please use more objective terms than remarkable: "and remarkable accuracy improvement with same size of networks". Looking at the axes, which are rather squished, the improvement is definitely there but it would be difficult to characterize it as remarkable.
- The symbols in the graphs across the conditions/algorithms are sometimes hard to distinguish (e.g. + vs *). Please make the graphs more readable in that regard.

Overall, the idea of using reinforcement learning for continual learning is an interesting one, and one that makes sense considering recent advances in architecture search using RL. However, this paper could be strengthened by 1) Strengthening the analysis in terms of the claims made, especially with respect to not requiring as much hyper-parameter tuning, which requires more evidence given that RL often does require significant tuning, and 2) comparison to more recent methods and demonstration of more challenging continual learning setups where tasks can differ more widely. It would be good to have more in-depth analysis of the trade-offs between three approaches (regularization of large-capacity networks, growing networks, and meta-learning).

==============================================

Update after rebuttal: Thank you for the rebuttal. However, there wasn't much new information in the rebuttal to change the overall conclusions. In terms of hyper-parameters, there are actually more hyper-parameters for reinforcement learning that you are not mentioning (gamma, learning rate, etc.) which your algorithm might still be sensitive to. You cannot consider only the hyper-parameter related to the continual learning part. Given this and the other limitations mentioned, overall this paper is marginally above acceptance so the score has been kept the same.
NIPS
Title BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images

Abstract We present BlockGAN, an image generative model that learns object-aware 3D scene representations directly from unlabelled 2D images. Current work on scene representation learning either ignores scene background or treats the whole scene as one object. Meanwhile, work that considers scene compositionality treats scene objects only as image patches or 2D layers with alpha maps. Inspired by the computer graphics pipeline, we design BlockGAN to learn to first generate 3D features of background and foreground objects, then combine them into 3D features for the whole scene, and finally render them into realistic images. This allows BlockGAN to reason over occlusion and interaction between objects' appearance, such as shadow and lighting, and provides control over each object's 3D pose and identity, while maintaining image realism. BlockGAN is trained end-to-end, using only unlabelled single images, without the need for 3D geometry, pose labels, object masks, or multiple views of the same scene. Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity). Our code is available at https://github.com/thunguyenphuoc/BlockGAN.

1 Introduction

The computer graphics pipeline has achieved impressive results in generating high-quality images, while offering users a great level of freedom and controllability over the generated images. This has many applications in creating and editing content for the creative industries, such as films, games, scientific visualisation, and more recently, in generating training data for computer vision tasks. However, the current pipeline, ranging from generating 3D geometry and textures, rendering, compositing and image post-processing, can be very expensive in terms of labour, time, and costs. Recent image generative models, in particular generative adversarial networks [GANs; 14], have greatly improved the visual fidelity and resolution of generated images [5, 23, 24]. Conditional GANs [36] allow users to manipulate images, but require labels during training. Recent work on unsupervised disentangled representations using GANs [9, 24, 38] relaxes this need for labels. The ability to produce high-quality, controllable images has made GANs an increasingly attractive alternative to the traditional graphics pipeline for content generation. However, most work focuses on property disentanglement, such as shape, pose and appearance, without considering the compositionality of the images, i.e., scenes being made up of multiple objects. Therefore, they do not offer control over individual objects in a way that respects the interaction of objects, such as consistent lighting and shadows. This is a major limitation of current image generative models, compared to the graphics pipeline, where 3D objects are modelled individually in terms of geometry and appearance, and combined into 3D scenes with consistent lighting. Even when considering object compositionality, most approaches treat objects as 2D layers combined using alpha compositing [12, 50, 53]. Moreover, they also assume that each object's appearance is independent [3, 6, 12].
While this layering approach has led to good results in terms of object separation and visual fidelity, it is fundamentally limited by the choice of 2D representation. Firstly, it is hard to manipulate properties that require 3D understanding, such as pose or perspective. Secondly, object layers tend to bake in appearance and cannot adequately represent view-specific appearance, such as shadows or material highlights changing as objects move around in the scene. Finally, it is non-trivial to model the appearance interactions between objects, such as scene lighting that affects objects' shadows on a background. We introduce BlockGAN, a generative adversarial network that learns 3D object-oriented scene representations directly from unlabelled 2D images. Instead of learning 2D layers of objects and combining them with alpha compositing, BlockGAN learns to generate 3D object features and to combine them into deep 3D scene features that are projected and rendered as 2D images. This process closely resembles the computer graphics pipeline where scenes are modelled in 3D, enabling reasoning over occlusion and interaction between object appearance, such as shadows or highlights. During test time, each object's pose can be manipulated using 3D transforms directly applied to the object's deep 3D features. We can also add new objects and remove existing objects in the generated image by changing the number of 3D object features in the 3D scene features at inference time. This shows that BlockGAN has learnt a non-trivial representation of objects and their interaction, instead of merely memorizing images. BlockGAN is trained end-to-end in an unsupervised manner directly from unlabelled 2D images, without any multi-view images, paired images, pose labels, or 3D shapes. We experiment with BlockGAN on a variety of synthetic and natural image datasets. In summary, our main contributions are:
• BlockGAN, an unsupervised image generative model that learns an object-aware 3D scene representation directly from unlabelled 2D images, disentangling both between objects and individual object properties (pose and identity);
• showing that BlockGAN can learn to separate objects even from cluttered backgrounds; and
• demonstrating that BlockGAN's object features can be added, removed and manipulated to create novel scenes that are not observed during training.

2 Related work

GANs. Unsupervised GANs learn to map samples from a latent distribution to data categorised as real by a discriminator network. Conditional GANs enable control over the generated image content, but require labels during training. Recent work on unsupervised disentangled representation learning using GANs provides controllability over the final images without the need for labels. Loss functions can be designed to maximize mutual information between generated images and latent variables [9, 20]. However, these models do not guarantee which factors can be learnt, and have limited success when applied to natural images. Network architectures can play a vital role in both improving training stability [7] and controllability of generated images [24, 38]. We also focus on designing an appropriate architecture to learn object-level disentangled representations. We show that injecting inductive biases about how the 3D world is composed of 3D objects enables BlockGAN to learn 3D object-aware scene representations directly from 2D images, thus providing control over both 3D pose and appearance of individual objects.

3D-aware neural image synthesis.
Introducing 3D structures into neural networks can improve the quality [37, 41, 44, 48] and controllability of the image generation process [38, 39, 59]. This can be achieved with explicit 3D representations, like appearance flow [58], occupancy voxel grids [43, 59], meshes, or shape templates [27, 46, 56], in conjunction with handcrafted differentiable renderers [8, 17, 31, 33]. Renderable deep 3D representations can also be learnt directly from images [38, 47, 48]. HoloGAN [38] further shows that adding inductive biases about the 3D structure of the world enables unsupervised disentangled feature learning between shape, appearance and pose. However, these learnt representations are either object-centric (i.e., no background), or treat the whole scene as one object. Thus, they do not consider scene compositionality, i.e., components that can move independently. BlockGAN, in contrast, is designed to learn object-aware 3D representations that are combined into a unified 3D scene representation.

Object-aware image synthesis. Recent methods decompose image synthesis into generating components like layers or image patches, and combining them into the final image [28, 50, 53]. This includes conditional GANs that use segmentation masks [40, 49], scene graphs [22], object labels, key points or bounding boxes [18, 42], which have shown impressive results for natural image datasets. Recently, unsupervised methods [2, 12, 13, 26, 50, 55] learned object disentanglement for multi-object scenes on simpler synthetic datasets (single-colour objects, simple lighting and material). Other approaches successfully separate foreground from background objects in natural images, but make strong assumptions about the size of objects [53] or independent object appearance [3, 6]. These methods treat object components as image patches or 2D layers with corresponding masks, which are combined via alpha compositing at the pixel level to generate the final image. The work closest to ours learns to generate multiple 3D primitives (cuboids, spheres and point clouds), renders them into separate 2D layers with a handcrafted differentiable renderer, and alpha-composes them based on their depth ordering to create the final image [29]. Despite the explicit 3D geometry, this method does not handle cluttered backgrounds and requires extra supervision in the form of labelled images with and without foreground objects. BlockGAN takes a different approach. We treat objects as learnt 3D features with corresponding 3D poses, and learn to combine them into 3D scene features. Not only does this provide control over 3D pose, but it also enables learning of realistic lighting and shadows. Our approach allows adding more objects into the 3D scene features to generate images with multiple objects, which are not observed at training time.

3 Method

Inspired by the computer graphics pipeline, we assume that each image x is a rendered 2D image of a 3D scene composed of K 3D foreground objects $\{O_1, \ldots, O_K\}$ in addition to the background $O_0$:
$$x = p\big(f(\underbrace{O_0}_{\text{background}}, \underbrace{O_1, \ldots, O_K}_{\text{foreground}})\big), \qquad (1)$$
where the function f combines multiple objects into unified scene features that are projected to the image x by p. We assume each object $O_i$ is defined in a canonical orientation and generated from a noise vector $z_i$ by a function $g_i$ before being individually posed using parameters $\theta_i$: $O_i = g_i(z_i, \theta_i)$. We inject the inductive bias of compositionality of the 3D world into BlockGAN in two ways.
(1) The generator is designed to first generate 3D features for each object independently, before transforming and combining them into unified scene features, in which objects interact. (2) Unlike other methods that use 2D image patches or layers to represent objects, BlockGAN directly learns from unlabelled images how to generate objects as 3D features. This allows our model to disentangle the scene into separate 3D objects and allows the generator to reason over 3D space, enabling object pose manipulation and appearance interaction between objects. BlockGAN therefore learns both to generate and to render the scene features into images that can fool the discriminator. Figure 1 illustrates the BlockGAN generator architecture. Each noise vector $z_i$ is mapped to 3D object features $O_i$. Objects are then transformed according to their pose $\theta_i$ using a 3D similarity transform, before being combined into 3D scene features using the scene composer f. The scene features are transformed into the camera coordinate system before being projected to 2D features to render the final images using the camera projector function p. During training, we randomly sample both the noise vectors $z_i$ and poses $\theta_i$. At test time, objects can be generated with a given identity $z_i$ in the desired pose $\theta_i$. BlockGAN is trained end-to-end using only unlabelled 2D images, without the need for any labels, such as poses, 3D shapes, multi-view inputs, masks, or geometry priors like shape templates, symmetry or smoothness terms. We next explain each component of the generator in more detail.

3.1. Learning 3D object representations. Each object $O_i \in \mathbb{R}^{H_o \times W_o \times D_o \times C_o}$ is a deep 3D feature grid generated by $O_i = g_i(z_i, \theta_i)$, where $g_i$ is an object generator that takes as input a noise vector $z_i$ controlling the object appearance, and the object's 3D pose $\theta_i = (s_i, R_i, t_i)$, which comprises its uniform scale $s_i \in \mathbb{R}$, rotation $R_i \in SO(3)$ and translation $t_i \in \mathbb{R}^3$. The object generator $g_i$ is specific to each category of objects, and is shared between objects of the same category. We assume that 3D scenes consist of at least two objects: the background $O_0$ and one or more foreground objects $\{O_1, \ldots, O_K\}$. This is different from object-centric methods that only assume a single object with a simple white background [47], or only deal with static scenes whose object components cannot move independently [38]. We show that, even when BlockGAN is trained with only one foreground and one background object, we can add an arbitrary number of foreground objects to the scene at test time. To generate 3D object features, BlockGAN adopts a style-based strategy, which helps to disentangle pose from identity [38] while improving training stability [24]. As illustrated in Figure 2, the noise vector $z_i$ is mapped to affine parameters (the "style controller") for adaptive instance normalization [AdaIN; 19] after each 3D convolution layer. However, unlike HoloGAN [38], which learns 3D features directly for the whole scene, BlockGAN learns 3D features for each object, which are then transformed to their target poses using similarity transforms, and combined into 3D scene features. We implement these 3D similarity transforms by trilinear resampling of the 3D features according to the translation, rotation and scale parameters $\theta_i$; samples falling outside the feature tensor are clamped to zero. This allows BlockGAN not only to separate object pose from identity, but also to disentangle multiple objects in the same scene. A minimal sketch of this feature resampling is given below.
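The sketch below is our own PyTorch illustration of such a differentiable similarity transform, not the released BlockGAN code. It assumes features in (N, C, D, H, W) layout, rotations given as 3×3 matrices, and translations expressed in the normalized [-1, 1] grid coordinates; out-of-bounds samples are clamped to zero, as described above.

```python
import torch
import torch.nn.functional as F

def transform_object_features(obj, rotation, scale, translation):
    """Resample a deep 3D feature grid under a similarity transform.

    obj:         (N, C, D, H, W) object features
    rotation:    (N, 3, 3) rotation matrices
    scale:       (N,) uniform scales
    translation: (N, 3) offsets in normalized [-1, 1] coordinates
    """
    n = obj.shape[0]
    # grid_sample pulls values from input coordinates, so theta must encode the
    # inverse transform: R^T / s for the linear part and -R^T t / s for the offset.
    inv_rot = rotation.transpose(1, 2) / scale.view(n, 1, 1)
    inv_t = -torch.bmm(inv_rot, translation.view(n, 3, 1))
    theta = torch.cat([inv_rot, inv_t], dim=2)                 # (N, 3, 4)
    grid = F.affine_grid(theta, list(obj.shape), align_corners=False)
    # mode="bilinear" performs trilinear interpolation for 5-D inputs;
    # padding_mode="zeros" clamps out-of-bounds samples to zero.
    return F.grid_sample(obj, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=False)

# Example: rotate one object's features by 90 degrees about the vertical axis.
feat = torch.randn(1, 64, 16, 16, 16)
rot = torch.tensor([[[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]]])
out = transform_object_features(feat, rot, torch.ones(1), torch.zeros(1, 3))
```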
3.2. Scene composer function. We combine the 3D object features {O_i} into scene features S = f(O_0, O_1, ..., O_K) ∈ R^{H_s × W_s × D_s × C_s} using a scene composer function f. For this, we use the element-wise maximum as it achieves the best image quality compared to element-wise summation and a multi-layer perceptron (MLP); please see our supplemental document for an ablation. Additionally, the maximum is invariant to permutation and allows a flexible number of input objects, so new objects can be added to the scene features during test time, even when the model was trained with fewer objects (see Section 4.3).

3.3. Learning to render. Instead of using a handcrafted differentiable renderer, we aim to learn rendering directly from unlabelled images. HoloGAN showed that this approach is more expressive as it is capable of handling unlabelled, natural image data. However, their projection model is limited to a weak perspective, which does not support foreshortening – an effect that is observed when objects are close to real (perspective) cameras. We therefore introduce a graphics-based perspective projection function p: R^{H_s × W_s × D_s × C_s} → R^{H_c × W_c × C_c} that transforms the 3D scene features into camera space using a projective transform, and then learns the projection of the 3D features to a 2D feature map. The computer graphics pipeline implements perspective projection using a projective transform that converts objects from world coordinates (our scene space) to camera coordinates [34]. We implement this camera transform like the similarity transforms used to manipulate objects in Section 3.1, by resampling the 3D scene features according to the viewing volume (frustum) of the virtual perspective camera (see Figure 3). For correct perspective projection, this transform must be a projective transform, the superset of similarity transforms [52]. Specifically, the viewing frustum, in scene space, can be defined relative to the camera's pose θ_cam using the angle of view, and the distance of the near and far planes. The camera-space features are a new 3D tensor of features, of size H_c × W_c × D_c × C_s, whose corners are mapped to the corners of the camera's viewing frustum using the unique projective 3D transform computed from the coordinates of corresponding corners using the direct linear transform [16]. In practice, we combine the object and camera transforms into a single transform by multiplying both transform matrices and resampling the object features in a single step, directly from object to camera space. This is computationally more efficient than resampling twice, and advantageous from a sampling theory point of view, as the features are only interpolated once, not twice, and thus less information is lost by the resampling. The combined transform is a fixed, differentiable function with parameters (θ_i, θ_cam). The individual objects are then combined in camera space before the final projection. After the camera transform, the 3D features are projected into view-specific 2D feature maps using the learnt camera projection p': R^{H_c × W_c × D_c × C_s} → R^{H_c × W_c × C_c}. This function ensures that occlusion correctly shows nearby objects in front of distant objects. Following the RenderNet projection unit [37], we reshape the 3D camera-space features (with depth D_c and C_s channels) into a 2D feature map with (D_c · C_s) channels, followed by a per-pixel MLP (i.e., 1×1 convolution) that outputs C_c channels. We choose this learnt renderer following HoloGAN, which demonstrated its effectiveness for learning powerful 3D representations directly from unlabelled images.
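A minimal sketch of the element-wise-maximum scene composer and a RenderNet-style projection unit is shown below (an illustration in PyTorch; the tensor layout, sizes and absence of extra nonlinearities are simplifying assumptions, not the exact architecture used here).

```python
import torch
import torch.nn as nn

def compose_scene(objects):
    """Element-wise maximum over object feature grids, each of shape (N, C, D, H, W).
    Permutation-invariant and agnostic to the number of objects."""
    return torch.stack(objects, dim=0).max(dim=0).values

class ProjectionUnit(nn.Module):
    """Fold the depth axis of camera-space features into channels, then mix per pixel
    with a 1x1 convolution (a per-pixel MLP), following the RenderNet projection unit."""
    def __init__(self, depth, in_channels, out_channels):
        super().__init__()
        self.mlp = nn.Conv2d(depth * in_channels, out_channels, kernel_size=1)

    def forward(self, x):                      # x: (N, C, D, H, W) in camera space
        n, c, d, h, w = x.shape
        x = x.reshape(n, c * d, h, w)          # depth folded into the channel axis
        return self.mlp(x)                     # (N, out_channels, H, W)

# Example with three objects (background + two foreground):
objs = [torch.randn(2, 64, 16, 16, 16) for _ in range(3)]
scene = compose_scene(objs)                    # (2, 64, 16, 16, 16)
proj = ProjectionUnit(depth=16, in_channels=64, out_channels=256)
feat2d = proj(scene)                           # (2, 256, 16, 16)
```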
This is different from the supervised multi-view setting with pose labels in the renderer of DeepVoxels [47], which learns occlusion values, or Neural Volumes [32] and NeRF [35], which learn explicit density values.

3.4. Loss functions. We train BlockGAN adversarially using the non-saturating GAN loss [14]. For natural images with cluttered backgrounds, we also add a style discriminator loss [38]. In addition to classifying the images as real or fake, this discriminator also looks at images at the feature level. Given image features Φ_l at layer l, the style discriminator classifies the mean µ(Φ_l) and standard deviation σ(Φ_l) over the spatial dimensions, which describe the image "style" [19]. This more powerful discriminator discourages the foreground generator from including parts of the background within the foreground object(s). We provide detailed network and loss definitions in the supplemental material.

4 Experiments

Datasets. We train BlockGAN on images at 64×64 pixels, with increasing complexity in terms of the number of foreground objects (1–4) and texture (from synthetic images with simple shapes and simple textures to natural images with complex textures and cluttered backgrounds). These datasets include the synthetic CLEVRn [21], SYNTH-CARn and SYNTH-CHAIRn, and the real REAL-CAR [54], where n is the number of foreground objects. Additional details and results are included in the supplementary material. Implementation details. We assume a fixed and known number of objects of the same type. Fore- and background generators have similar architectures and the same number of output channels, but foreground generators have twice as many channels in the learnt constant tensor. Since foreground objects are smaller than the background, we set scale = 1 for the background object, and randomly sample scales < 1 for foreground objects. Please see our supplemental material for more implementation details and an ablation experiment. We make our code publicly available at github.com/thunguyenphuoc/BlockGAN. 4.1. Qualitative results. Despite being trained with only unlabelled images, Figure 4 shows that BlockGAN learns to disentangle different objects within a scene: foreground from background, and between multiple foreground objects. More importantly, BlockGAN also provides explicit control and enables smooth manipulation of each object's pose θ_i and identity z_i. Figure 6 shows results on natural images with a cluttered background, where BlockGAN is still able to separate objects and enables 3D object-centric modifications. Since BlockGAN combines deep object features into scene features, changes in an object's properties also influence its shadows, and highlights adapt to the object's movement. These effects can be better observed in the supplementary animations. 4.2. Quantitative results. We evaluate the visual fidelity of BlockGAN's results using Kernel Inception Distance [KID; 4], which has an unbiased estimator and works even for a small number of images. Note that KID does not measure the quality of object disentanglement, which is the main contribution of BlockGAN. We first compare with a vanilla GAN [WGAN-GP; 15] using a publicly available implementation (https://github.com/LynnHo/DCGAN-LSGAN-WGAN-WGAN-GP-Tensorflow). Second, we compare with LR-GAN [53], a 2D-based method that learns to generate image background and foregrounds separately and recursively. Finally, we compare with HoloGAN, which learns 3D scene representations that separate camera pose and identity, but does not consider object disentanglement. For LR-GAN and HoloGAN, we use the authors' code.
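For reference, a minimal sketch of the unbiased KID/MMD² estimator with the standard cubic polynomial kernel of [4] is given below; it operates on precomputed Inception features and is an illustration only, not the evaluation code used for Table 1. In practice, the estimate is typically averaged over several random subsets of the features.

```python
import numpy as np

def polynomial_kernel(x, y):
    """KID kernel k(a, b) = (a.b / d + 1)^3, with d the feature dimension."""
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    """Unbiased MMD^2 estimate between two sets of Inception features (n x d and m x d)."""
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    n, m = k_rr.shape[0], k_ff.shape[0]
    # Exclude diagonal terms for the unbiased estimator.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (n * (n - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (m * (m - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()

# Example with random stand-in features (real use: pool3 features of an Inception network).
rng = np.random.default_rng(0)
print(kid(rng.normal(size=(200, 2048)), rng.normal(size=(200, 2048))))
```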
We tune hyperparameters and then compute the KID for 10,000 images generated by each model (samples by all methods are included in the supplementary material). Table 1 shows that BlockGAN generates images with competitive or better visual fidelity than other methods. 4.3. Scene manipulation beyond training data. We show that, at test time, 3D object features learnt by BlockGAN can be realistically manipulated in ways that have not been observed during training. First, we show that the learnt 3D object features can also be reused to add more objects to the scene at test time, thanks to the compositionality inductive bias and our choice of scene composer function. We use BlockGAN trained on datasets with only one foreground object and one background, and show that more foreground objects of the same category can be added to the same scene at test time. Figure 6 shows that 2–4 new objects are added and manipulated just like the original objects while maintaining realistic shadows and highlights. In Figure 5, we use BlockGAN trained on CLEVR4 and then remove (top) and add (bottom) more objects to the scene. Note how BlockGAN generates realistic shadows and occlusion for scenes that the model has never seen before. Second, we apply spatial manipulations that were not part of the similarity transform used during training, such as horizontal stretching, or slicing and combining different foreground objects. Figure 6 shows that object features can be geometrically modified intuitively, without needing explicit 3D geometry or multi-view supervision during training. 4.4. Comparison to 2D-based LR-GAN. LR-GAN [53] first generates a 2D background layer, and then generates and combines foreground layers with the generated background using alpha-compositing. Both BlockGAN and LR-GAN show the importance of combining objects in a contextually relevant manner to generate visually realistic images (see Table 1). However, LR-GAN does not offer explicit control over object location. More importantly, LR-GAN learns an entangled representation of the scene: sampling a different background noise vector also changes the foreground (Figure 7). Finally, unlike BlockGAN, LR-GAN does not allow adding more foreground objects during test time. This demonstrates the benefits of learning disentangled 3D object features compared to a 2D-based approach.

Figure 7: Comparison between (i) LR-GAN [53] and (ii) BlockGAN for SYNTH-CAR1 (left) and REAL-CAR (right).

4.5. Ablation study: Non-uniform pose distribution. For the natural REAL-CAR dataset, we observe that BlockGAN has difficulties learning the full 360° rotation of the car, even though fore- and background are disentangled well. We hypothesise that this is due to the mismatch between the true (unknown) pose distribution of the car, and the uniform pose distribution we assume during training. To test this, we create a synthetic dataset similar to SYNTH-CAR1 with a limited range of rotation, and train BlockGAN with a uniform pose distribution. To generate the imbalanced rotation dataset, we sample the rotation uniformly from the front/left/back/right viewing directions ±15°.
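As a concrete sketch of this sampling scheme (an illustration in Python, not the released data-generation code; angles in degrees):

```python
import numpy as np

def sample_imbalanced_rotation(rng):
    """Draw a yaw angle within +/-15 degrees of one of the four canonical viewing
    directions (front/left/back/right), leaving four 60-degree gaps unobserved."""
    centre = rng.choice([0.0, 90.0, 180.0, 270.0])      # front, left, back, right
    return (centre + rng.uniform(-15.0, 15.0)) % 360.0

rng = np.random.default_rng(0)
angles = [sample_imbalanced_rotation(rng) for _ in range(8)]
```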
In other words, the car is only seen within a 30° arc centred on each of the front/left/back/right directions, and there are four evenly spaced gaps of 60° that are never observed, for example views from the front-right. With the imbalanced dataset, Figure 8 (bottom) shows correct disentangling of foreground and background. However, rotation of the car only produces images with (near-)frontal views (top), while depth translation results in cars that are randomly rotated sideways (middle). We observe similar behaviour for the natural REAL-CAR dataset. This suggests that learning object disentanglement and full 3D pose rotation might be two independent problems. While assuming a uniform pose distribution already enables good object disentanglement, learning the pose distribution from the training data would likely improve the quality of 3D transforms. In our supplemental material, we include comparisons to HoloGAN [38] as well as additional ablation studies comparing different scene composer functions, using a perspective camera versus a weak-perspective camera, adopting the style discriminator for scenes with cluttered backgrounds, and training on images with an incorrect number of objects.

5 Discussion and Future Work

We introduced BlockGAN, an image generative model that learns 3D object-aware scene representations from unlabelled images. We show that BlockGAN can learn a disentangled scene representation both in terms of objects and their properties, which allows geometric manipulations not observed during training. Most excitingly, even when BlockGAN is trained with fewer or even single objects, additional 3D object features can be added to the scene features at test time to create novel scenes with multiple objects. In addition to computer graphics applications, this opens up exciting possibilities, such as combining BlockGAN with models like BiGAN [10] or ALI [11] to learn powerful object representations for scene understanding and reasoning. Future work can adopt more powerful relational learning models [25, 45, 51] to learn more complex object interactions such as inter-object shadowing or reflections. Currently, we assume prior knowledge of object category and the number of objects for training. We also assume object poses are uniformly distributed and independent from each other. Therefore, the ability to learn this information directly from training images would allow BlockGAN to be applied to more complex datasets with a varying number of objects and different object categories, such as COCO [30] or LSUN [57].

Acknowledgments and Disclosure of Funding

We received support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 665992, the EPSRC Centre for Doctoral Training in Digital Entertainment (EP/L016540/1), RCUK grant CAMERA (EP/M023281/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1), and an NVIDIA Corporation GPU Grant. We received a gift from Adobe.

Broader Impact

BlockGAN is an image generative model that learns an object-oriented 3D scene representation directly from unlabelled 2D images. Our approach is a new machine learning technique that makes it possible to generate unseen images from a noise vector, with unprecedented control over the identity and pose of multiple independent objects as well as the background.
In the long term, our approach could enable powerful tools for digital artists that facilitate artistic control over realistic procedurally generated digital content. However, any tool can in principle be abused, for example by adding new objects, or by manipulating or removing existing objects or people in images. At training time, our network performs a task somewhat akin to scene understanding, as our approach learns to disentangle multiple objects and individual object properties (specifically their pose and identity). At test time, our approach enables sampling new images with control over pose and identity for each object in the scene, but does not directly take any image input. However, it is possible to embed images into the latent space of generative models [1]. A highly realistic generative image model and a good image fit would then make it possible to approximate the input image and, more importantly, to edit the individual objects in a pictured scene. Similar to existing image editing software, this enables the creation of image manipulations that could be used for ill-intended misinformation (fake news), but also for a wide range of creative and other positive applications. We expect the benefits of positive applications to clearly outweigh the potential downsides of malicious applications.
1. What is the main contribution of the paper regarding 3D scene representations? 2. How does the proposed method differ from other approaches in terms of generating 3D scenes? 3. Can you explain the significance of learning 3D representations without supervision? 4. What are the strengths and weaknesses of the proposed method according to the reviewer? 5. Are there any concerns regarding the applicability of the proposed approach?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

[Main Contributions]
1- The proposed method allows generating 3D scene representations flexibly in a compositional fashion. Significance: very high
2- The proposed approach is a good step in the direction of modeling the computer graphics pipeline (creating geometry, manipulating it and rendering) using neural networks. Significance: high
3- The proposed method learns such 3D representations without any supervision and only requires 2D RGB images during training. Significance: high

Reviewer's legend for the level of significance: [very low, low, medium, high, very high]. The ratings are a function of the results/analysis shown in the paper, current state-of-the-art methods and the reviewer's perspective on the authors' approach.

[High-level summary of the paper]
This paper proposes a generative model that operates on compositional, implicit 3D scene representations and is trained using only low-resolution (64 x 64) 2D images without any additional supervision signal. The model is trained on scenes with 1-4 objects but, thanks to compositionality, it can generate scenes with a larger number of objects.

[Low-level summary]
The model first generates {1,…,K} foreground object(s) and a background separately using two decoder neural networks that map a noise vector Z, whose elements control object appearance properties, and some transformation parameters (rotation, translation, scaling) to "3D features". The 3D features are then composed by taking their element-wise maximum and then concatenation to form a "unified 3D scene representation". For rendering, the unified scene representation is fed to another neural network (the renderer), which also takes as input the camera parameters and produces a rendering of the scene. The model is trained with a GAN-based objective. The authors evaluate their model on multiple datasets such as CLEVR, Real-Car and synthetic cars and chairs. The main experiments they conduct are as follows: 1) They show that their model has learned disentangled representations (i.e., they change attributes of the foreground objects or the background and show renderings that reflect those changes). 2) To evaluate visual fidelity, they show that their model achieves nearly the same or lower KID estimates compared to other methods for all datasets. 3) They train a model on scenes with 1 object but show that they can use that model to generate scenes with up to 5 objects of the same category with different attributes. 4) They show that they can manipulate some of the attributes (e.g., stretching) that were not seen during training and get renderings that reflect such changes to a degree.

Strengths
- The proposed method in this work is novel (especially the compositionality part) and the authors have conducted good experiments to evaluate their model and make their case.
- The paper's organization is good and each section is well-developed overall (the introduction may need a bit of work).
- The results clearly show that the authors' proposed approach works well compared to prior work.

Technical strengths:
- Out-of-distribution generalization: due to compositionality, a model trained on 1-4 objects can generate scenes with a higher number of objects
- No supervision except unlabeled 2D images
- Given the difficulty of the problem, the renderer seems to be doing a good job
- The authors show that their model can deal with basic forms of occlusion

Weaknesses
- Although the contributions of the work are good, the results do not meet the expectations set by the paper's tone and the claims the authors make (see below for more details on this)
- The fact that the object generator is category-specific limits the applicability of the proposed approach

Technical weaknesses:
- The proposed model does not have/use an inference mechanism
- There is no guarantee that the elements of Z are interpretable or correspond to a meaningful disentangled 3D feature
- The background is plain white for all of the experiments
- The authors mention in the methods section that the object generator is category-specific, but it is possible that the renderer is also category-specific; the authors do not say whether this is the case in the text and do not provide any results on that either.
NIPS
Title BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images

Abstract We present BlockGAN, an image generative model that learns object-aware 3D scene representations directly from unlabelled 2D images. Current work on scene representation learning either ignores scene background or treats the whole scene as one object. Meanwhile, work that considers scene compositionality treats scene objects only as image patches or 2D layers with alpha maps. Inspired by the computer graphics pipeline, we design BlockGAN to learn to first generate 3D features of background and foreground objects, then combine them into 3D features for the whole scene, and finally render them into realistic images. This allows BlockGAN to reason over occlusion and interaction between objects' appearance, such as shadow and lighting, and provides control over each object's 3D pose and identity, while maintaining image realism. BlockGAN is trained end-to-end, using only unlabelled single images, without the need for 3D geometry, pose labels, object masks, or multiple views of the same scene. Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity). Our code is available at https://github.com/thunguyenphuoc/BlockGAN.

1 Introduction

The computer graphics pipeline has achieved impressive results in generating high-quality images, while offering users a great level of freedom and controllability over the generated images. This has many applications in creating and editing content for the creative industries, such as films, games, scientific visualisation, and more recently, in generating training data for computer vision tasks. However, the current pipeline, ranging from generating 3D geometry and textures, rendering, compositing and image post-processing, can be very expensive in terms of labour, time, and costs. Recent image generative models, in particular generative adversarial networks [GANs; 14], have greatly improved the visual fidelity and resolution of generated images [5, 23, 24]. Conditional GANs [36] allow users to manipulate images, but require labels during training. Recent work on unsupervised disentangled representations using GANs [9, 24, 38] relaxes this need for labels. The ability to produce high-quality, controllable images has made GANs an increasingly attractive alternative to the traditional graphics pipeline for content generation. However, most work focuses on property disentanglement, such as shape, pose and appearance, without considering the compositionality of the images, i.e., scenes being made up of multiple objects. Therefore, they do not offer control over individual objects in a way that respects the interaction of objects, such as consistent lighting and shadows. This is a major limitation of current image generative models, compared to the graphics pipeline, where 3D objects are modelled individually in terms of geometry and appearance, and combined into 3D scenes with consistent lighting. Even when considering object compositionality, most approaches treat objects as 2D layers combined using alpha compositing [12, 50, 53]. Moreover, they also assume that each object's appearance is independent [3, 6, 12].
While this layering approach has led to good results in terms of object separation and visual fidelity, it is fundamentally limited by the choice of 2D representation. Firstly, it is hard to manipulate properties that require 3D understanding, such as pose or perspective. Secondly, object layers tend to bake in appearance and cannot adequately represent view-specific appearance, such as shadows or material highlights changing as objects move around in the scene. Finally, it is non-trivial to model the appearance interactions between objects, such as scene lighting that affects objects' shadows on a background. We introduce BlockGAN, a generative adversarial network that learns 3D object-oriented scene representations directly from unlabelled 2D images. Instead of learning 2D layers of objects and combining them with alpha compositing, BlockGAN learns to generate 3D object features and to combine them into deep 3D scene features that are projected and rendered as 2D images. This process closely resembles the computer graphics pipeline where scenes are modelled in 3D, enabling reasoning over occlusion and interaction between object appearance, such as shadows or highlights. During test time, each object's pose can be manipulated using 3D transforms directly applied to the object's deep 3D features. We can also add new objects and remove existing objects in the generated image by changing the number of 3D object features in the 3D scene features at inference time. This shows that BlockGAN has learnt a non-trivial representation of objects and their interaction, instead of merely memorizing images. BlockGAN is trained end-to-end in an unsupervised manner directly from unlabelled 2D images, without any multi-view images, paired images, pose labels, or 3D shapes. We experiment with BlockGAN on a variety of synthetic and natural image datasets. In summary, our main contributions are:
• BlockGAN, an unsupervised image generative model that learns an object-aware 3D scene representation directly from unlabelled 2D images, disentangling both between objects and individual object properties (pose and identity);
• showing that BlockGAN can learn to separate objects even from cluttered backgrounds; and
• demonstrating that BlockGAN's object features can be added, removed and manipulated to create novel scenes that are not observed during training.

2 Related work

GANs. Unsupervised GANs learn to map samples from a latent distribution to data categorised as real by a discriminator network. Conditional GANs enable control over the generated image content, but require labels during training. Recent work on unsupervised disentangled representation learning using GANs provides controllability over the final images without the need for labels. Loss functions can be designed to maximize mutual information between generated images and latent variables [9, 20]. However, these models do not guarantee which factors can be learnt, and have limited success when applied to natural images. Network architectures can play a vital role in both improving training stability [7] and controllability of generated images [24, 38]. We also focus on designing an appropriate architecture to learn object-level disentangled representations. We show that injecting inductive biases about how the 3D world is composed of 3D objects enables BlockGAN to learn 3D object-aware scene representations directly from 2D images, thus providing control over both 3D pose and appearance of individual objects.
1. What is the focus and contribution of the paper on generative image modeling? 2. What are the strengths of the proposed method, particularly in terms of its ability to disentangle objects/background appearance, orientation, and viewpoint? 3. What are the weaknesses of the paper, especially regarding its limitations in resolution and scene complexity, and the need for better justification of certain method details? 4. Do you have any concerns or questions about the rendering method used in the paper, and how it relates to previous works such as DeepVoxels, Neural Volumes, and NeRF? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper presents a generative image model that disentangles objects/background appearance, orientation, and viewpoint. The proposed method samples noise vectors for the background and a set of objects, which are oriented in space and then mapped to 3D voxel grids with deep features, combined into a single 3D volume, and rendered. At test time, this enables effects when sampling the generative model such as rendering views with different numbers of objects, rotating the objects, and moving the camera viewpoint. After rebuttal discussion comments: I appreciate the authors' rebuttal experiment that helped satisfy my curiosity regarding a mismatched number of expected vs. actual objects. The rebuttal did not really address my concerns regarding the rendering method, but I still stand by the paper's strengths and still vote that the paper should be accepted. Strengths The overall method and strategy seem sound, and the results are quite impressive. I am generally strongly in favor of this research direction of baking in more 3D graphics knowledge into deep learning models (including generative image models in the case of this paper), and I think that this paper is a good point of evidence that supports this overall trend. Weaknesses The approach is limited to low-resolution images of relatively simple scenes where we know the number of objects. What happens if an incorrect number of foreground objects are generated/used? Additionally, some of the specific method details regarding composing and rendering could be better justified. In 3.2, the scene composition is described as an elementwise maximum of the features for each object at each location. Wouldn't this result in a composed feature vector at a 3D location being a mix of different dimensions of different objects? Or is elementwise maximum actually supposed to be a max-pooling over just the object dimension and not the feature dimension? Why not something more straightforward like a linear combination of features where one of the feature dimensions is treated as the weight? Some of the details regarding the scene composition and learned rendering seem quite similar to the DeepVoxels paper, and should be discussed. I am also curious why the projection/depth composition is learned. It seems like this would require the network to learn correct occlusion (which appeared to be an issue in the results of the DeepVoxels paper by causing slight viewpoint inconsistencies, and later works such as Neural Volumes and NeRF remedied this issue by using explicit opacity/density instead of learning the occlusion composition). Maybe this choice of rendering function should be discussed more.
NIPS
Title BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images Abstract We present BlockGAN, an image generative model that learns object-aware 3D scene representations directly from unlabelled 2D images. Current work on scene representation learning either ignores scene background or treats the whole scene as one object. Meanwhile, work that considers scene compositionality treats scene objects only as image patches or 2D layers with alpha maps. Inspired by the computer graphics pipeline, we design BlockGAN to learn to first generate 3D features of background and foreground objects, then combine them into 3D features for the whole scene, and finally render them into realistic images. This allows BlockGAN to reason over occlusion and interaction between objects’ appearance, such as shadow and lighting, and provides control over each object’s 3D pose and identity, while maintaining image realism. BlockGAN is trained end-to-end, using only unlabelled single images, without the need for 3D geometry, pose labels, object masks, or multiple views of the same scene. Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity). Our code is available at https://github.com/thunguyenphuoc/BlockGAN. 1 Introduction The computer graphics pipeline has achieved impressive results in generating high-quality images, while offering users a great level of freedom and controllability over the generated images. This has many applications in creating and editing content for the creative industries, such as films, games, scientific visualisation, and more recently, in generating training data for computer vision tasks. However, the current pipeline, ranging from generating 3D geometry and textures, rendering, compositing and image post-processing, can be very expensive in terms of labour, time, and costs. Recent image generative models, in particular generative adversarial networks [GANs; 14], have greatly improved the visual fidelity and resolution of generated images [5, 23, 24]. Conditional GANs [36] allow users to manipulate images, but require labels during training. Recent work on unsupervised disentangled representations using GANs [9, 24, 38] relaxes this need for labels. The ability to produce high-quality, controllable images has made GANs an increasingly attractive alternative to the traditional graphics pipeline for content generation. However, most work focuses on property disentanglement, such as shape, pose and appearance, without considering the compositionality of the images, i.e., scenes being made up of multiple objects. Therefore, they do not offer control over individual objects in a way that respects the interaction of objects, such as consistent lighting and shadows. This is a major limitation of current image generative models, compared to the graphics pipeline, where 3D objects are modelled individually in terms of geometry and appearance, and combined into 3D scenes with consistent lighting. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Even when considering object compositionality, most approaches treat objects as 2D layers combined using alpha compositing [12, 50, 53]. Moreover, they also assume that each object’s appearance is independent [3, 6, 12]. 
While this layering approach has led to good results in terms of object separation and visual fidelity, it is fundamentally limited by the choice of 2D representation. Firstly, it is hard to manipulate properties that require 3D understanding, such as pose or perspective. Secondly, object layers tend to bake in appearance and cannot adequately represent view-specific appearance, such as shadows or material highlights changing as objects move around in the scene. Finally, it is non-trivial to model the appearance interactions between objects, such as scene lighting that affects objects’ shadows on a background. We introduce BlockGAN, a generative adversarial network that learns 3D object-oriented scene representations directly from unlabelled 2D images. Instead of learning 2D layers of objects and combining them with alpha compositing, BlockGAN learns to generate 3D object features and to combine them into deep 3D scene features that are projected and rendered as 2D images. This process closely resembles the computer graphics pipeline where scenes are modelled in 3D, enabling reasoning over occlusion and interaction between object appearance, such as shadows or highlights. During test time, each object’s pose can be manipulated using 3D transforms directly applied to the object’s deep 3D features. We can also add new objects and remove existing objects in the generated image by changing the number of 3D object features in the 3D scene features at inference time. This shows that BlockGAN has learnt a non-trivial representation of objects and their interaction, instead of merely memorizing images. BlockGAN is trained end-to-end in an unsupervised manner directly from unlabelled 2D images, without any multi-view images, paired images, pose labels, or 3D shapes. We experiment with BlockGAN on a variety of synthetic and natural image datasets. In summary, our main contributions are: • BlockGAN, an unsupervised image generative model that learns an object-aware 3D scene representation directly from unlabelled 2D images, disentangling both between objects and individual object properties (pose and identity); • showing that BlockGAN can learn to separate objects even from cluttered backgrounds; and • demonstrating that BlockGAN’s object features can be added, removed and manipulated to create novel scenes that are not observed during training. 2 Related work GANs. Unsupervised GANs learn to map samples from a latent distribution to data categorised as real by a discriminator network. Conditional GANs enable control over the generated image content, but require labels during training. Recent work on unsupervised disentangled representation learning using GANs provides controllability over the final images without the need for labels. Loss functions can be designed to maximize mutual information between generated images and latent variables [9, 20]. However, these models do not guarantee which factors can be learnt, and have limited success when applied to natural images. Network architectures can play a vital role in both improving training stability [7] and controllability of generated images [24, 38]. We also focus on designing an appropriate architecture to learn object-level disentangled representations. We show that injecting inductive biases about how the 3D world is composed of 3D objects enables BlockGAN to learn 3D object-aware scene representations directly from 2D images, thus providing control over both 3D pose and appearance of individual objects. 3D-aware neural image synthesis. 
Introducing 3D structures into neural networks can improve the quality [37, 41, 44, 48] and controllability of the image generation process [38, 39, 59]. This can be achieved with explicit 3D representations, like appearance flow [58], occupancy voxel grids [43, 59], meshes, or shape templates [27, 46, 56], in conjunction with handcrafted differentiable renderers [8, 17, 31, 33]. Renderable deep 3D representations can also be learnt directly from images [38, 47, 48]. HoloGAN [38] further shows that adding inductive biases about the 3D structure of the world enables unsupervised disentangled feature learning between shape, appearance and pose. However, these learnt representations are either object-centric (i.e., no background), or treat the whole scene as one object. Thus, they do not consider scene compositionality, i.e., components that can move independently. BlockGAN, in contrast, is designed to learn object-aware 3D representations that are combined into a unified 3D scene representation. Object-aware image synthesis. Recent methods decompose image synthesis into generating components like layers or image patches, and combining them into the final image [28, 50, 53]. This includes conditional GANs that use segmentation masks [40, 49], scene graphs [22], object labels, key points or bounding boxes [18, 42], which have shown impressive results for natural image datasets. Recently, unsupervised methods [2, 12, 13, 26, 50, 55] learned object disentanglement for multi-object scenes on simpler synthetic datasets (single-colour objects, simple lighting, and material). Other approaches successfully separate foreground from background objects in natural images, but make strong assumptions about the size of objects [53] or independent object appearance [3, 6]. These methods treat object components as image patches or 2D layers with corresponding masks, which are combined via alpha compositing at the pixel level to generate the final image. The work closest to ours learns to generate multiple 3D primitives (cuboids, spheres and point clouds), renders them into separate 2D layers with a handcrafted differentiable renderer, and alpha-composes them based on their depth ordering to create the final image [29]. Despite the explicit 3D geometry, this method does not handle cluttered backgrounds and requires extra supervision in the form of labelled images with and without foreground objects. BlockGAN takes a different approach. We treat objects as learnt 3D features with corresponding 3D poses, and learn to combine them into 3D scene features. Not only does this provide control over 3D pose, but it also enables learning of realistic lighting and shadows. Our approach allows adding more objects into the 3D scene features to generate images with multiple objects, which are not observed at training time. 3 Method Inspired by the computer graphics pipeline, we assume that each image x is a rendered 2D image of a 3D scene composed of K 3D foreground objects {O1, ..., OK} in addition to the background O0: x = p(f(O0, O1, ..., OK)), (1) where the function f combines the background and foreground objects into unified scene features that are projected to the image x by p. We assume each object Oi is defined in a canonical orientation and generated from a noise vector zi by a function gi before being individually posed using parameters θi: Oi = gi(zi, θi). We inject the inductive bias of compositionality of the 3D world into BlockGAN in two ways.
(1) The generator is designed to first generate 3D features for each object independently, before transforming and combining them into unified scene features, in which objects interact. (2) Unlike other methods that use 2D image patches or layers to represent objects, BlockGAN directly learns from unlabelled images how to generate objects as 3D features. This allows our model to disentangle the scene into separate 3D objects and allows the generator to reason over 3D space, enabling object pose manipulation and appearance interaction between objects. BlockGAN, therefore, learns to both generate and render the scene features into images that can fool the discriminator. Figure 1 illustrates the BlockGAN generator architecture. Each noise vector zi is mapped to 3D object features Oi. Objects are then transformed according to their pose θi using a 3D similarity transform, before being combined into 3D scene features using the scene composer f. The scene features are transformed into the camera coordinate system before being projected to 2D features to render the final images using the camera projector function p. During training, we randomly sample both the noise vectors zi and poses θi. During test time, objects can be generated with a given identity zi in the desired pose θi. BlockGAN is trained end-to-end using only unlabelled 2D images, without the need for any labels, such as poses, 3D shapes, multi-view inputs, masks, or geometry priors like shape templates, symmetry or smoothness terms. We next explain each component of the generator in more detail. 3.1. Learning 3D object representations. Each object Oi ∈ R^{Ho×Wo×Do×Co} is a deep 3D feature grid generated by Oi = gi(zi, θi), where gi is an object generator that takes as input a noise vector zi controlling the object appearance, and the object’s 3D pose θi = (si, Ri, ti), which comprises its uniform scale si ∈ R, rotation Ri ∈ SO(3) and translation ti ∈ R^3. The object generator gi is specific to each category of objects, and is shared between objects of the same category. We assume that 3D scenes consist of at least two objects: the background O0 and one or more foreground objects {O1, ..., OK}. This is different to object-centric methods that only assume a single object with a simple white background [47], or only deal with static scenes whose object components cannot move independently [38]. We show that, even when BlockGAN is trained with only one foreground and background object, we can add an arbitrary number of foreground objects to the scene at test time. To generate 3D object features, BlockGAN implements the style-based strategy, which helps to disentangle pose from identity [38] while improving training stability [24]. As illustrated in Figure 2, the noise vector zi is mapped to affine parameters – the “style controller” – for adaptive instance normalization [AdaIN; 19] after each 3D convolution layer. However, unlike HoloGAN [38], which learns 3D features directly for the whole scene, BlockGAN learns 3D features for each object, which are then transformed to their target poses using similarity transforms, and combined into 3D scene features. We implement these 3D similarity transforms by trilinear resampling of the 3D features according to the translation, rotation and scale parameters θi; samples falling outside the feature tensor are clamped to zero. This allows BlockGAN to not only separate object pose from identity, but also to disentangle multiple objects in the same scene. 3.2. Scene composer function.
We combine the 3D object features {Oi} into scene features S = f(O0, O1, ..., OK) ∈ R^{Hs×Ws×Ds×Cs} using a scene composer function f. For this, we use the element-wise maximum as it achieves the best image quality compared to element-wise summation and a multi-layer perceptron (MLP); please see our supplemental document for an ablation. Additionally, the maximum is invariant to permutation and allows a flexible number of input objects to add new objects into the scene features during test time, even when trained with fewer objects (see Section 4.3). 3.3. Learning to render. Instead of using a handcrafted differentiable renderer, we aim to learn rendering directly from unlabelled images. HoloGAN showed that this approach is more expressive as it is capable of handling unlabelled, natural image data. However, their projection model is limited to a weak perspective, which does not support foreshortening – an effect that is observed when objects are close to real (perspective) cameras. We therefore introduce a graphics-based perspective projection function p : R^{Hs×Ws×Ds×Cs} → R^{Hc×Wc×Cc} that transforms the 3D scene features into camera space using a projective transform, and then learns the projection of the 3D features to a 2D feature map. The computer graphics pipeline implements perspective projection using a projective transform that converts objects from world coordinates (our scene space) to camera coordinates [34]. We implement this camera transform like the similarity transforms used to manipulate objects in Section 3.1, by resampling the 3D scene features according to the viewing volume (frustum) of the virtual perspective camera (see Figure 3). For correct perspective projection, this transform must be a projective transform, the superset of similarity transforms [52]. Specifically, the viewing frustum, in scene space, can be defined relative to the camera’s pose θcam using the angle of view, and the distance of the near and far planes. The camera-space features are a new 3D tensor of features, of size Hc×Wc×Dc×Cs, whose corners are mapped to the corners of the camera’s viewing frustum using the unique projective 3D transform computed from the coordinates of corresponding corners using the direct linear transform [16]. In practice, we combine the object and camera transforms into a single transform by multiplying both transform matrices and resampling the object features in a single step, directly from object to camera space. This is computationally more efficient than resampling twice, and advantageous from a sampling theory point of view, as the features are only interpolated once, not twice, and thus less information is lost by the resampling. The combined transform is a fixed, differentiable function with parameters (θi, θcam). The individual objects are then combined in camera space before the final projection. After the camera transform, the 3D features are projected into view-specific 2D feature maps using the learnt camera projection p′ : R^{Hc×Wc×Dc×Cs} → R^{Hc×Wc×Cc}. This function ensures that occlusion correctly shows nearby objects in front of distant objects. Following the RenderNet projection unit [37], we reshape the 3D camera-space features (with depth Dc and Cs channels) into a 2D feature map with (Dc·Cs) channels, followed by a per-pixel MLP (i.e., 1×1 convolution) that outputs Cc channels. We choose to use this learnt renderer following HoloGAN, which shows the effectiveness of the renderer in learning powerful 3D representations directly from unlabelled images.
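As a rough illustration of this resample–compose–project pipeline, the following PyTorch-style sketch shows how posed object features could be resampled into a shared grid, combined with an element-wise maximum, and projected to 2D features with a per-pixel 1×1 convolution. The helper names, tensor sizes, and the use of affine_grid/grid_sample are our own illustrative assumptions, not the authors' implementation; in particular, the combined object-and-camera transform is folded into the (R, t) arguments below.

# Minimal sketch (PyTorch); illustrative assumptions only, not the authors' code.
import torch
import torch.nn.functional as F

def resample_features(obj_feat, s, R, t):
    """Resample one object's 3D features (1, C, D, H, W) into the shared grid.

    grid_sample pulls values, so theta must encode the inverse mapping from
    output (scene/camera) coordinates back to object coordinates:
        x_obj = (1/s) * R^T (x_out - t).
    R and t are expressed in the normalized [-1, 1] cube used by affine_grid.
    Samples falling outside the object tensor are clamped to zero.
    """
    R_inv = R.t() / s                                          # (3, 3)
    t_inv = -R_inv @ t                                         # (3,)
    theta = torch.cat([R_inv, t_inv[:, None]], dim=1)[None]    # (1, 3, 4)
    grid = F.affine_grid(theta, list(obj_feat.shape), align_corners=False)
    return F.grid_sample(obj_feat, grid, mode='bilinear',      # trilinear for 5D input
                         padding_mode='zeros', align_corners=False)

def compose_scene(posed_objects):
    """Element-wise maximum over objects: permutation-invariant and agnostic to
    the number of objects, so more objects can be added at test time."""
    return torch.stack(posed_objects, dim=0).max(dim=0).values

class LearntProjection(torch.nn.Module):
    """RenderNet-style projection unit: collapse the depth axis into channels,
    then a per-pixel MLP (1x1 convolution) outputs the 2D feature channels."""
    def __init__(self, depth, c_in, c_out):
        super().__init__()
        self.mlp = torch.nn.Conv2d(depth * c_in, c_out, kernel_size=1)

    def forward(self, scene_feat):                             # (N, C, D, H, W)
        n, c, d, h, w = scene_feat.shape
        return torch.relu(self.mlp(scene_feat.reshape(n, c * d, h, w)))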
This learnt renderer is different from the supervised multi-view setting with pose labels in the renderer of DeepVoxels [47], which learns occlusion values, or Neural Volumes [32] and NeRF [35], which learn explicit density values. 3.4. Loss functions. We train BlockGAN adversarially using the non-saturating GAN loss [14]. For natural images with cluttered backgrounds, we also add a style discriminator loss [38]. In addition to classifying the images as real or fake, this discriminator also looks at images at the feature level. Given image features Φl at layer l, the style discriminator classifies the mean µ(Φl) and standard deviation σ(Φl) over the spatial dimensions, which describe the image “style” [19]. This more powerful discriminator discourages the foreground generator from including parts of the background within the foreground object(s). We provide detailed network and loss definitions in the supplemental material. 4 Experiments Datasets. We train BlockGAN on images at 64×64 pixels, with increasing complexity in terms of number of foreground objects (1–4) and texture (from synthetic images with simple shapes and simple textures to natural images with complex texture and cluttered background). These datasets include the synthetic CLEVRn [21], SYNTH-CARn and SYNTH-CHAIRn, and the real REAL-CAR [54], where n is the number of foreground objects. Additional details and results are included in the supplementary material. Implementation details. We assume a fixed and known number of objects of the same type. Fore- and background generators have similar architectures and the same number of output channels, but foreground generators have twice as many channels in the learnt constant tensor. Since foreground objects are smaller than the background, we set scale=1 for the background object, and randomly sample scales <1 for foreground objects. Please see our supplemental material for more implementation details and an ablation experiment. We make our code publicly available at github.com/thunguyenphuoc/BlockGAN. 4.1. Qualitative results. Despite being trained with only unlabelled images, Figure 4 shows that BlockGAN learns to disentangle different objects within a scene: foreground from background, and between multiple foreground objects. More importantly, BlockGAN also provides explicit control and enables smooth manipulation of each object’s pose θi and identity zi. Figure 6 shows results on natural images with a cluttered background, where BlockGAN is still able to separate objects and enables 3D object-centric modifications. Since BlockGAN combines deep object features into scene features, changes in an object’s properties also influence its shadows, and highlights adapt to the object’s movement. These effects can be better observed in the supplementary animations. 4.2. Quantitative results. We evaluate the visual fidelity of BlockGAN’s results using Kernel Inception Distance [KID; 4], which has an unbiased estimator and works even for a small number of images. Note that KID does not measure the quality of object disentanglement, which is the main contribution of BlockGAN. We first compare with a vanilla GAN [WGAN-GP; 15] using a publicly available implementation1. Secondly, we compare with LR-GAN [53], a 2D-based method that learns to generate image background and foregrounds separately and recursively. Finally, we compare with HoloGAN, which learns 3D scene representations that separate camera pose and identity, but does not consider object disentanglement. For LR-GAN and HoloGAN, we use the authors’ code.
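Since KID is the quantitative metric used throughout, a minimal sketch of its unbiased estimator may help: KID is the squared maximum mean discrepancy between Inception features of real and generated images under a polynomial kernel. The feature extraction, the choice of feature layer, and the usual averaging over repeated subsets are omitted below; they are our assumptions about a typical setup rather than the authors' exact protocol.

# Sketch of the unbiased KID estimator (Python/NumPy); illustrative only.
import numpy as np

def polynomial_kernel(a, b, degree=3, coef0=1.0):
    # k(x, y) = (x.y / d + coef0) ** degree, the kernel commonly used for KID.
    d = a.shape[1]
    return (a @ b.T / d + coef0) ** degree

def kid_unbiased(real_feats, fake_feats):
    """Unbiased estimate of the squared MMD between two feature sets.

    real_feats: (m, d) Inception features of real images (assumed precomputed).
    fake_feats: (n, d) Inception features of generated images.
    """
    m, n = real_feats.shape[0], fake_feats.shape[0]
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    # Diagonal terms are dropped so the within-set averages are unbiased.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()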
We tune hyperparameters and then compute the KID for 10,000 images generated by each model (samples by all methods are included in the supplementary material). Table 1 shows that BlockGAN generates images with competitive or better visual fidelity than other methods. 4.3. Scene manipulation beyond training data. We show that at test time, 3D object features learnt by BlockGAN can be realistically manipulated in ways that have not been observed during training time. First, we show that the learnt 3D object features can also be reused to add more objects to the scene at test time, thanks to the compositionality inductive bias and our choice of scene composer function. Firstly, we use BlockGAN trained on datasets with only one foreground object and one background, and show that more foreground objects of the same category can be added to the same scene at test time. Figure 6 shows that 2–4 new objects are added and manipulated just like the original objects while maintaining realistic shadows and highlights. In Figure 5, we use BlockGAN trained on CLEVR4 and then remove (top) and add (bottom) more objects to the scene. Note how BlockGAN generates realistic shadows and occlusion for scenes that the model has never seen before. Secondly, we apply spatial manipulations that were not part of the similarity transform used during training, such as horizontal stretching, or slicing and combining different foreground objects. Figure 6 shows that object features can be geometrically modified intuitively, without needing explicit 3D geometry or multi-view supervision during training. 4.4. Comparison to 2D-based LR-GAN. LR-GAN [53] first generates a 2D background layer, and then generates and combines foreground layers with the generated background using alpha-compositing. Both BlockGAN and LR-GAN show the importance of combining objects in a contextually relevant manner to generate visually realistic images (see Table 1). However, LR-GAN does not offer explicit control over object location. More importantly, LR-GAN learns an entangled representation of the scene: sampling a different background noise vector also changes the foreground (Figure 7). Finally, unlike BlockGAN, LR-GAN does not allow adding more foreground objects during test time. This demonstrates the benefits of learning disentangled 3D object features compared to a 2D-based approach. 4.5. Ablation study: Non-uniform pose distribution. For the natural REAL-CAR dataset, we observe that BlockGAN has difficulties learning the full 360° rotation of the car, even though fore- and background are disentangled well. We hypothesise that this is due to the mismatch between the true (unknown) pose distribution of the car, and the uniform pose distribution we assume during training. To test this, we create a synthetic dataset similar to SYNTH-CAR1 with a limited range of rotation, and train BlockGAN with a uniform pose distribution. To generate the imbalanced rotation dataset, we sample the rotation uniformly from the front/left/back/right viewing directions ±15°. In other words, the car is only seen from the front/left/back/right 30°, respectively, and there are four evenly spaced gaps of 60° that are never observed, for example views from the front-right.
1 https://github.com/LynnHo/DCGAN-LSGAN-WGAN-WGAN-GP-Tensorflow
Figure 7: Comparison between (i) LR-GAN [53] and (ii) BlockGAN for SYNTH-CAR1 (left) and REAL-CAR (right). Rows: changing foreground, changing background, and adding objects.
With the imbalanced dataset, Figure 8 (bottom) shows correct disentangling of foreground and background. However, rotation of the car only produces images with (near-)frontal views (top), while depth translation results in cars that are randomly rotated sideways (middle). We observe similar behaviour for the natural REAL-CAR dataset. This suggests that learning object disentanglement and full 3D pose rotation might be two independent problems. While assuming a uniform pose distribution already enables good object disentanglement, learning the pose distribution from the training data would likely improve the quality of 3D transforms. In our supplemental material, we include comparisons to HoloGAN [38] as well as additional ablation studies on comparing different scene composer functions, using a perspective camera versus a weak-perspective camera, adopting the style discriminator for scenes with cluttered backgrounds, and training on images with an incorrect number of objects. 5 Discussion and Future Work We introduced BlockGAN, an image generative model that learns 3D object-aware scene representations from unlabelled images. We show that BlockGAN can learn a disentangled scene representation both in terms of objects and their properties, which allows geometric manipulations not observed during training. Most excitingly, even when BlockGAN is trained with fewer or even single objects, additional 3D object features can be added to the scene features at test time to create novel scenes with multiple objects. In addition to computer graphics applications, this opens up exciting possibilities, such as combining BlockGAN with models like BiGAN [10] or ALI [11] to learn powerful object representations for scene understanding and reasoning. Future work can adopt more powerful relational learning models [25, 45, 51] to learn more complex object interactions such as inter-object shadowing or reflections. Currently, we assume prior knowledge of object category and the number of objects for training. We also assume object poses are uniformly distributed and independent from each other. Therefore, the ability to learn this information directly from training images would allow BlockGAN to be applied to more complex datasets with a varying number of objects and different object categories, such as COCO [30] or LSUN [57]. Acknowledgments and Disclosure of Funding We received support from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 665992, the EPSRC Centre for Doctoral Training in Digital Entertainment (EP/L016540/1), RCUK grant CAMERA (EP/M023281/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1), and an NVIDIA Corporation GPU Grant. We received a gift from Adobe. Broader Impact BlockGAN is an image generative model that learns an object-oriented 3D scene representation directly from unlabelled 2D images. Our approach is a new machine learning technique that makes it possible to generate unseen images from a noise vector, with unprecedented control over the identity and pose of multiple independent objects as well as the background.
In the long term, our approach could enable powerful tools for digital artists that facilitate artistic control over realistic procedurally generated digital content. However, any tool can in principle be abused, for example by adding new objects, or by manipulating or removing existing objects or people in images. At training time, our network performs a task somewhat akin to scene understanding, as our approach learns to disentangle multiple objects and individual object properties (specifically their pose and identity). At test time, our approach enables sampling new images with control over pose and identity for each object in the scene, but does not directly take any image input. However, it is possible to embed images into the latent space of generative models [1]. A highly realistic generative image model and a good image fit would then make it possible to approximate the input image and, more importantly, to edit the individual objects in a pictured scene. Similar to existing image editing software, this enables the creation of image manipulations that could be used for ill-intended misinformation (fake news), but also for a wide range of creative and other positive applications. We expect the benefits of positive applications to clearly outweigh the potential downsides of malicious applications.
1. What is the focus of the paper regarding scene representation? 2. What are the strengths of the proposed approach, particularly in terms of visual results and decomposition of objects? 3. What are the weaknesses of the paper regarding implementation details?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors propose a generative model that represents a scene using 3D representations of the scene's constituent objects. Strengths The paper is well motivated, the method is explained well, and the visual results are clear, informative and demonstrated on a variety of datasets. It is interesting to decompose the object into pose and identity features, and this leads to some really nice visualisations in Figure 4 showing how the model deals well with translation and rotation. Results in Figure 6 are also very nice, showing that objects can be added to the scene. Weaknesses A number of implementation details are missing from the main body of the paper, but the authors do provide code (though I have not tried to run it).
NIPS
1. What is the focus and contribution of the paper on BlockGAN? 2. What are the strengths of the proposed approach, particularly in incorporating 3D representations and object compositionality into GANs? 3. What are the weaknesses of the paper, especially regarding the voxel-like 3D object features and the practicability of the claimed "manipulation"? 4. How does the reviewer assess the novelty and significance of the paper's contributions? 5. Are there any suggestions or recommendations for future improvements or research directions related to BlockGAN?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes BlockGAN, a GAN variant that trains on unlabeled images and learns to explicitly model objects with 3D features and compose them to render, using a graphics-inspired neural pipeline. Strengths Incorporating 3D representations and object compositionality into GANs has been an important direction. This paper contributes to this direction by bringing together several components: voxel-like 3D object features disentangled from pose, scene composer as element-wise maximum over object features, and scene rendering by perspective projection. These components are not brand new ideas individually, but they follow graphics insight and are put together in a novel way to model object geometry and composition without explicit supervision. Weaknesses - The voxel-like 3D object features are low-resolution, and seem hard to scale due to their cubic size. This paper only trains on 64x64 images. - The voxel-like 3D object features also lack geometry-texture disentanglement, which limits their modeling capacity for objects with more complex geometry and texture. For example, cars in Figure 5 Left do not look totally realistic despite their low resolution. * After rebuttal: some of my above concerns about representation choices are relieved. - I also have concerns about the practicability of the claimed "manipulation", as it can neither control the source image to operate on (e.g. given a user image, as opposed to a randomly sampled one), nor fully control the manipulation direction (e.g. the user wants the car to be green, as opposed to a random color). As pointed out in "Broader Impact", learning an image encoding along with the generative network could be a fix, but seems hard.
NIPS
Title Doubly Robust Thompson Sampling with Linear Payoffs Abstract A challenging aspect of the bandit problem is that a stochastic reward is observed only for the chosen arm and the rewards of other arms remain missing. The dependence of the arm choice on the past context and reward pairs compounds the complexity of regret analysis. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust (DR) Thompson Sampling, which applies the doubly robust estimator used in the missing data literature to Thompson Sampling with contexts (LinTS). Different from previous works relying on missing data techniques (Dimakopoulou et al. [2019], Kim and Paik [2019]), the proposed algorithm is designed to allow a novel additive regret decomposition leading to an improved regret bound of order Õ(φ^{-2}√T), where φ^2 is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of LinTS using φ^2 without the dimension of the context, d. Applying the relationship between φ^2 and d, the regret bound of the proposed algorithm is Õ(d√T) in many practical scenarios, improving the bound of LinTS by a factor of √d. A benefit of the proposed method is that it utilizes all the context data, chosen or not chosen, thus allowing it to circumvent the technical definition of unsaturated arms used in the theoretical analysis of LinTS. Empirical studies show the advantage of the proposed algorithm over LinTS. 1 Introduction Contextual bandits have been popular in sequential decision tasks such as news article recommendation systems. In bandit problems, the learner sequentially pulls one arm among multiple arms and receives random rewards on each round of time. While not knowing the compensation mechanisms of rewards, the learner should make his/her decision to maximize the cumulative sum of rewards. In the course of gaining information about the compensation mechanisms through feedback, the learner should carefully balance between exploitation, pulling the best arm based on information accumulated so far, and exploration, pulling the arm that will assist in future choices, although it does not seem to be the best option at the moment. Therefore in the bandit problem, estimation or learning is an important element besides decision making. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). A challenging aspect of estimation in the bandit problem is that a stochastic reward is observed only for the chosen arm. Consequently, only the context and reward pair of the chosen arm is used for estimation, which causes dependency of the context data at each round on the past contexts and rewards. To handle this difficulty, we view bandit problems as missing data problems. The first step in handling missing data is to define full, observed, and missing data.
In bandit settings, full data consist of rewards and contexts of all arms; observed data consist of full contexts for all arms and the reward for the chosen arm; missing data consist of the rewards for the arms that are not chosen. Typical estimation procedures require both rewards and contexts pairs to be observed, and the observed contexts from the unselected are discarded (see Table 1). The analysis based on the completely observed pairs only is called complete record analysis. Most stochastic bandit algorithms utilize estimates based on complete record analysis. Estimators from complete record analysis are known to be inefficient. In bandit setting, using the observed data whose probability of observation depends on previous rewards requires special theoretical treatment. There are two main approaches to missing data: imputation and inverse probability weighting (IPW). Imputation is to fill in the predicted value of missing data from a specified model, and IPW is to use the observed records only but weight them by the inverse of the observation probability. The doubly robust (DR) method [Robins et al., 1994, Bang and Robins, 2005] is a combination of imputation and IPW tools. We provide a review of missing data and DR methods in supplementary materials. The robustness against model misspecification in missing data settings is insignificant in the bandit setting since the probability of observation or allocation to an arm is known. The merit of the DR method in the bandit setting is its ability to employ all the contexts including unselected arms. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust Thompson Sampling (DRTS) that applies the DR technique used in missing data literature to Thompson Sampling with linear contextual bandits (LinTS). The main thrust of DRTS is to utilize contexts information for all arms, not just chosen arms. By using the unselected, yet observed contexts, along with a novel algorithmic device, the proposed algorithm renders a unique regret decomposition which leads to a novel regret bound without resorting to the technical definition of unsaturated arms used by Agrawal and Goyal [2014]. Since categorizing the arms into saturated vs. unsaturated plays a critical role in costing extra √ d, by circumventing it, we prove a Õ(d √ T ) bound of the cumulative regret in many practical occasions compared to Õ(d3/2 √ T ) shown in Agrawal and Goyal [2014]. The main contributions of this paper are as follows. • We propose a novel contextual bandit algorithm that improves the cumulative regret bound of LinTS by a factor of √ d (Theorem 1) in many practical scenarios (Section 4.1). This improvement is attained mainly by defining a novel set called super-unsaturated arms, that is utilizable due to the proposed estimator and resampling technique adopted in the algorithm. • We provide a novel estimation error bound of the proposed estimator (Theorem 3) which depends on the minimum eigenvalue of the covariance matrix of the contexts from all arms without d. • We develop a novel dimension-free concentration inequality for sub-Gaussian vector martingale (Lemma 4) and use it in deriving our regret bound in place of the self-normalized theorem by Abbasi-Yadkori et al. [2011]. • We develop a novel concentration inequality for the bounded matrix martingale (Lemma 6) which improves the existing result (Proposition 5) by removing the dependency on d in the bound. 
Lemma 6 also allows eliminating the forced sampling phases required in some bandit algorithms relying on Proposition 5 [Amani et al., 2019, Bastani and Bayati, 2020]. All missing proofs are in supplementary materials. 2 Related works Thompson Sampling [Thompson, 1933] has been extensively studied and shown solid performances in many applications (e.g. Chapelle and Li [2011]). Agrawal and Goyal [2013] is the first to prove theoretical bounds for LinTS and an alternative proof is given by Abeille et al. [2017]. Both papers show Õ(d3/2 √ T ) regret bound, which is known as the best regret bound for LinTS. Recently, Hamidi and Bayati [2020] points out that Õ(d3/2 √ T ) could be the best possible one can get when the estimator used by LinTS is employed. In our work, we improve this regret bound by a factor of√ d in many practical scenarios through a novel definition of super-unsaturated arms, which becomes utilizable due to the proposed estimator and resampling device implemented in the algorithm. Our work assumes the independence of the contexts from all arms across time rounds. Some notable works have used the assumption that the contexts are independently identically distributed (IID). Leveraging the IID assumption with a margin condition, Goldenshluger and Zeevi [2013] derives a two-armed linear contextual bandit algorithm with a regret upper bound of order O(d3logT ). Bastani and Bayati [2020] has extended this algorithm to any number of arms and improves the regret bound to O(d2log 3 2 d · logT ). The margin condition states that the gap between the expected rewards of the optimal arm and the next best arm is nonzero with some constant probability. This condition is crucial in achieving a O(logT ) regret bound instead of Õ( √ T ). In this paper, we do not assume this margin condition, and focus on the dependence on the dimension of contexts d. From a missing data point of view, most stochastic contextual bandit algorithms use the estimator from complete record analysis except Dimakopoulou et al. [2019] and Kim and Paik [2019]. Dimakopoulou et al. [2019] employs an IPW estimator that is based on the selected contexts alone. Dimakopoulou et al. [2019] proves a Õ(d √ −1T 1+ N) regret bound for their algorithm which depends on the number of arms, N . Kim and Paik [2019] considers the high-dimensional settings with sparsity, utilizes a DR technique, and improves the regret bound in terms of the sparse dimension instead of the actual dimension of the context, d. Kim and Paik [2019] is different from ours in several aspects: the mode of exploration ( -greedy vs. Thompson Sampling), the mode of regularization (Lasso vs. ridge regression); and the form of the estimator. A sharp distinction between the two estimators lies in that Kim and Paik [2019] aggregates contexts and rewards over the arms although they employ all the contexts. If we apply this aggregating estimator and DR-Lasso bandit algorithm to the low-dimensional setting, we obtain a regret bound of order O(Ndφ2 √ T ) when the contexts from the arms are independent. This bound is bigger than our bound by a factor of d and N . It is because the aggregated form of the estimator does not permit the novel regret decomposition derived in Section 4.2. The proposed estimator coupled with a novel algorithmic device renders the additive regret decomposition which in turn improves the order of the regret bound. 
3 Proposed estimator and algorithm 3.1 Settings and assumptions We denote a d-dimensional context for the ith arm at round t by Xi(t) ∈ Rd, and the corresponding random reward by Yi(t) for i = 1, . . . , N . We assume E [Yi(t)|Xi(t)] = Xi(t)Tβ for some unknown parameter β ∈ Rd. At round t, the arm that the learner chooses is denoted by at ∈ {1, . . . , N}, and the optimal arm by a∗t := arg maxi=1,...,N { Xi(t) Tβ } . Let regret(t) be the difference between the expected reward of the chosen arm and the optimal arm at round t, i.e., regret(t) := Xa∗t (t) Tβ − Xat(t)Tβ. The goal is to minimize the sum of regrets over T rounds, R(T ) := ∑T t=1 regret(t). The total round T is finite but possibly unknown. We also make the following assumptions. Assumption 1. Boundedness for scale-free regrets. For all i = 1, . . . , N and t = 1, . . . , T , we have ‖Xi(t)‖2 ≤ 1 and ‖β‖2 ≤ 1. Assumption 2. Sub-Gaussian error. Let Ht := ⋃t−1 τ=1 [ {Xi(τ)}Ni=1 ∪ {aτ} ∪ {Yaτ (τ)} ] ∪ {Xi(t)}Ni=1 be the set of observed data at round t. For each t and i, the error ηi(t) := Yi(t)−Xi(t)Tβ is conditionally zero-mean σ-sub-Gaussian for a fixed constant σ ≥ 0, i.e, E [ηi(t)|Ht] = 0 and E [ exp (ληi(t))|Ht] ≤ exp(λ2σ2/2), for all λ ∈ R. Furthermore, the distribution of ηi(t) does not depend on the choice at round t, i.e. at. Assumption 3. Independently distributed contexts. The stacked contexts vectors {Xi(1)}Ni=1, . . . , {Xi(T )}Ni=1 ∈ RdN are independently distributed. Assumption 4. Positive minimum eigenvalue of the average of covariance matrices. For each t, there exists a constant φ2 > 0 such that λmin ( E [ 1 N ∑N i=1Xi(t)Xi(t) T ]) ≥ φ2. Assumptions 1 and 2 are standard in stochastic bandit literature Agrawal and Goyal [2013]. We point out that given round t, Assumption 3 allows that the contexts among different arms,X1(t), . . . , XN (t) are correlated to each other. Assumption 3 is weaker than the assumption of IID, and the IID condition is considered by Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020]. As Bastani and Bayati [2020] points out, the IID assumption is reasonable in some practical settings, including clinical trials, where health outcomes of patients are independent of those of other patients. Both Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020] address the problem where the contexts are equal across all arms, i.e. X(t) = X1(t) = . . . = XN (t), while our work admits different contexts over all arms. Assumption 4 guarantees that the average of covariance matrices of contexts over the arms is well-behaved so that the inverse of the sample covariance matrix is bounded by the spectral norm. This assumption helps controlling the estimation error of β in linear regression models. Similar assumptions are adopted in existing works in the bandit setting [Goldenshluger and Zeevi, 2013, Amani et al., 2019, Li et al., 2017, Bastani and Bayati, 2020]. 3.2 Doubly robust estimator To describe the contextual bandit DR estimator, let πi(t) := P (at = i|Ht) > 0 be the probability of selecting arm i at round t. We define a DR pseudo-reward as Y DRi (t) = { 1− I (i = at) πi(t) } Xi(t) T β̆t + I (i = at) πi(t) Yat(t), (1) for some β̆t depending on Ht. Background of missing data methods and derivation of the DR pseudo-reward is provided in the supplementary material. Now, we propose our new estimator β̂t with a regularization parameter λt as below: β̂t = ( t∑ τ=1 N∑ i=1 Xi(τ)Xi(τ) T + λtI )−1( t∑ τ=1 N∑ i=1 Xi(τ)Y DR i (τ) ) . 
(2) Harnessing the pseudo-rewards defined in (1), we can make use of all contexts rather than just the selected contexts. The DR estimator by Kim and Paik [2019] utilizes all contexts but has a different form from ours. While Kim and Paik [2019] uses a Lasso estimator with pseudo-rewards aggregated over all arms, we use a ridge regression estimator with the pseudo-rewards in (1), which are defined separately for each i = 1, . . . , N. This seemingly small but important difference in form paves the way for our unique regret decomposition and the improved regret bound. 3.3 Algorithm In this subsection, we describe our proposed algorithm, DRTS, which adapts the DR technique to LinTS. DRTS is presented in Algorithm 1. Distinctive features of DRTS compared to LinTS include the novel estimator and the resampling technique. At each round t ≥ 1, the algorithm samples β̃i(t) from the distribution N(β̂_{t−1}, v² V_{t−1}^{−1}) for each i independently. Let Ỹi(t) := Xi(t)ᵀβ̃i(t) and mt := arg maxi Ỹi(t). We set mt as a candidate action and compute π̃mt(t) := P(Ỹmt(t) = maxi Ỹi(t) | Ht). (This computation is known to be challenging, but employing the independence among β̃1(t), . . . , β̃N(t), we derive an explicit form approximating π̃mt(t) in supplementary materials Section H.1.) If π̃mt(t) > γ, then the arm mt is selected, i.e., at = mt. Otherwise, the algorithm resamples β̃i(t) until it finds another arm satisfying π̃i(t) > γ, up to a predetermined fixed number of attempts Mt. Section A.3 in the supplementary materials describes issues related to Mt, including a suitable choice of Mt.

Algorithm 1 Doubly Robust Thompson Sampling for Linear Contextual Bandits (DRTS)
Input: exploration parameter v > 0, regularization parameter λ > 0, selection probability threshold γ ∈ [1/(N+1), 1/N), imputation estimator β̆u = f({X(τ), Y_{aτ}(τ)}_{τ=1}^{u−1}), maximum number of resampling attempts Mt.
Set F0 = 0, W0 = 0, β̂0 = 0 and V0 = λI.
for t = 1 to T do
  Observe contexts {Xi(t)}_{i=1}^{N}.
  Sample β̃1(t), . . . , β̃N(t) from N(β̂_{t−1}, v² V_{t−1}^{−1}) independently.
  Compute Ỹi(t) = Xi(t)ᵀβ̃i(t).
  Observe a candidate action mt := arg maxi Ỹi(t).
  Compute π̃mt(t) := P(maxi Ỹi(t) = Ỹmt(t) | Ht).
  for l = 1 to Mt do
    if π̃mt(t) ≤ γ then
      Sample another β̃1(t), . . . , β̃N(t), observe another mt, and update π̃mt(t).
    else
      Break.
    end if
  end for
  Set at = mt and play arm at.
  Observe reward Y_{at}(t) and compute Y_i^DR(t) for i = 1, . . . , N.
  Update Ft = F_{t−1} + Σ_{i=1}^{N} Xi(t)Y_i^DR(t); Wt = W_{t−1} + Σ_{i=1}^{N} Xi(t)Xi(t)ᵀ; Vt = Wt + λ√t I.
  Compute β̂t = V_t^{−1} Ft.
  Update β̆_{t+1} for the next round.
end for

The resampling step is incorporated to avoid small values of the selection probability so that the pseudo-reward in (1) is numerically stable. A naive remedy to stabilize the pseudo-reward is to use max{πi(t), γ}, which fails to lead to our regret bound since it induces bias and also cannot guarantee that the selected arm is in the super-unsaturated arms defined in (5) with high probability (for details, see Section 4.2). The resampling step implemented in the proposed algorithm is designed to solve these problems. 4 Theoretical results Our theoretical results are organized as follows. In Section 4.1, we provide the main result, the Õ(φ−2 √T) cumulative regret bound of DRTS. The main thrust of deriving the regret bound is to define super-unsaturated arms. In Section 4.2 we introduce the definition of super-unsaturated arms and show how it admits a novel decomposition of the regret into two additive terms as in (6). In Section 4.3 we bound each term of the decomposed regret (6).
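As a concrete illustration of the pseudo-reward (1), the ridge DR update (2), and Algorithm 1 above, here is a minimal NumPy sketch of a single DRTS round. It is not the authors' implementation: the function and variable names (estimate_pi, drts_round, y_fn) are mine, π̃ is approximated by plain Monte Carlo rather than the closed form of supplementary Section H.1, the pseudo-reward uses the per-draw π̃ of the played arm as a stand-in for the full selection probability, and the handling of an exhausted resampling budget is simplified (Section A.3 of the supplement discusses the recommended choice of Mt).

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pi(X_t, beta_hat, V_inv, v, m, n_mc=2000):
    """Monte Carlo estimate of pi_tilde_m(t) = P(arm m maximizes X_i(t)^T beta_tilde_i)."""
    L = np.linalg.cholesky(v ** 2 * V_inv)
    # Independent beta_tilde_i ~ N(beta_hat, v^2 V^{-1}) for every arm and every MC draw.
    draws = beta_hat + rng.standard_normal((n_mc, X_t.shape[0], X_t.shape[1])) @ L.T
    scores = np.einsum('mnd,nd->mn', draws, X_t)            # score of each arm in each draw
    return float(np.mean(scores.argmax(axis=1) == m))

def drts_round(X_t, y_fn, F, W, beta_hat, beta_breve, t, v, gamma, M_t, lam):
    """One round of DRTS (Algorithm 1); y_fn(i) returns the observed reward of arm i.
    Assumes M_t >= 1; if the resampling budget runs out, the last candidate is played."""
    N, d = X_t.shape
    V_prev = W + lam * np.sqrt(max(t - 1, 1)) * np.eye(d)    # V_{t-1}, with V_0 = lam * I
    V_inv = np.linalg.inv(V_prev)
    L = np.linalg.cholesky(v ** 2 * V_inv)

    # Thompson candidate with resampling until pi_tilde_{m_t}(t) > gamma.
    for _ in range(M_t):
        beta_tilde = beta_hat + rng.standard_normal((N, d)) @ L.T
        m = int(np.argmax(np.einsum('nd,nd->n', X_t, beta_tilde)))
        pi_m = estimate_pi(X_t, beta_hat, V_inv, v, m)
        if pi_m > gamma:
            break
    a = m                                                    # play a_t = m_t
    y = y_fn(a)

    # DR pseudo-rewards, eq. (1): imputed value X_i(t)^T beta_breve_t for every arm,
    # plus an inverse-probability correction for the arm that was actually played.
    imputed = X_t @ beta_breve
    y_dr = imputed.copy()
    y_dr[a] = (1.0 - 1.0 / pi_m) * imputed[a] + y / pi_m

    # Ridge DR update, eq. (2), built from the contexts of *all* N arms.
    F = F + X_t.T @ y_dr
    W = W + X_t.T @ X_t
    V_t = W + lam * np.sqrt(t) * np.eye(d)
    beta_hat_new = np.linalg.solve(V_t, F)
    return a, y, F, W, beta_hat_new
```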
The first term is the estimation error, and Theorem 3 finds its bound. In the course of proving Theorem 3, we need Lemma 4, which plays a similar role to the self-normalized theorem of Abbasi-Yadkori et al. [2011]. We conclude the section by presenting Lemma 6 and bound the second term of (6). 4.1 An improved regret bound Theorem 1 provides the regret bound of DRTS in terms of the minimum eigenvalue without d. Theorem 1. Suppose that Assumptions 1-4 hold. If β̆t in Algorithm 1 satisfies ‖β̆t − β‖2 ≤ b for a constant b > 0, for all t = 1, . . . , T , then with probability 1− 2δ, the cumulative regret by time T for DRTS algorithm is bounded by R(T ) ≤ 2 + 4Cb,σ φ2 √ T log 12T 2 δ + 2 √ 2T φ √ N , (3) where Cb,σ is a constant which depends only on b and σ. The bound (3) has a rate of O(φ−2 √ T ). The relationship between the dimension d and the minimum eigenvalue φ2 can be shown by dφ2 = d N λmin ( E N∑ i=1 Xi(t)Xi(t) T ) ≤ 1 N E N∑ i=1 Tr ( Xi(t)Xi(t) T ) = 1 N E N∑ i=1 ‖Xi(t)‖22 ≤ 1. This implies φ−2 ≥ d, 2 but there are many practical scenarios such that φ−2 = O(d) holds. Bastani et al. [2021] identifies such examples including the uniform distribution and truncated multivariate normal distributions. When the context has uniform distribution on the unit ball, φ−2 = d+ 2. When the context has truncated multivariate normal distribution with mean 0 and covariance Σ, we can set φ−2 = (d+ 2) exp( 12λmin(Σ) ). For more examples, we refer to Bastani et al. [2021]. Furthermore, regardless of distributions, φ−2 = O(d) holds when the correlation structure has the row sum of offdiagonals independent of the dimension, for example, AR(1), tri-diagonal, block-diagonal matrices. In these scenarios, the regret bound in (3) becomes Õ(d √ T ). Compared to the previous bound of LinTS [Agrawal and Goyal, 2014, Abeille et al., 2017], we obtain a better regret bound by the factor of √ d for identified practical cases. As for the imputation estimator β̌t, we assume that ‖β̌t − β‖2 ≤ b, where b is an absolute constant. We suggest two cases which guarantee this assumption. First, if a biased estimator is used, we can rescale the estimator so that its l2-norm is bounded by some constant C > 0. Then, ‖β̌t − β‖2 ≤ ‖β̌t‖2 + ‖β‖2 ≤ C + 1 and b = C + 1. Second, consistent estimators such as ridge estimator or the least squared estimator satisfy the condition since ‖β̌t − β‖2 = O(d √ log t/t). The term d is cancelled out when t ≥ td, where td is the minimum integer that satisfies log t/t ≤ d−2. In these two cases, we can find a constant b which satisfies the assumption on the imputation estimator β̌t. 4.2 Super-unsaturated arms and a novel regret decomposition The key element in deriving (3) is to decompose the regret into two additive terms as in (6). To allow such decomposition to be utilizable, we need to define a novel set of arms called super-unsaturated arms, which replaces the role of unsaturated arms in [Agrawal and Goyal, 2014]. The superunsaturated arms are formulated so that the chosen arm is included in this set with high probability. For each i and t, let ∆i(t) := Xa∗t (t) Tβ−Xi(t)Tβ. DefineAt := ∑t τ=1Xaτ (τ)Xaτ (τ) T +λI and Vt := ∑t τ=1 ∑N i=1Xi(τ)Xi(τ) T + λtI . For the sake of contrast, recall the definition of unsaturated arms by Agrawal and Goyal [2014] Ut := { i : ∆i(t) ≤ gt ‖Xi(t)‖A−1t−1 } , (4) where gt := C √ d log(t/δ) min{ √ d, √ logN} for some constant C > 0. This gt is constructed to ensure that there exists a positive lower bound for the probability that the selected arm is unsaturated. 
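To make the relationship dφ² ≤ 1 and the example φ⁻² = d + 2 for uniform-on-the-ball contexts (from Section 4.1 above) concrete, here is a small NumPy check. The sample sizes and dimensions are arbitrary choices of mine, not the paper's experiments; when every arm's context has the same marginal distribution, pooling arms and rounds together does not change φ².

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_unit_ball(n, d):
    """n points drawn uniformly from the d-dimensional unit ball."""
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)           # uniform direction on the sphere
    return x * (rng.random(n) ** (1.0 / d))[:, None]        # radius with density ~ r^(d-1)

for d in (5, 20, 30):
    X = sample_unit_ball(200_000, d)
    cov = X.T @ X / X.shape[0]                               # estimate of E[X X^T]
    phi2 = np.linalg.eigvalsh(cov).min()                     # estimated phi^2
    print(f"d={d:3d}  1/phi^2 ~ {1 / phi2:7.2f}   (d + 2 = {d + 2})")
    assert d * phi2 <= 1.0                                   # the inequality d * phi^2 <= 1 above
```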
In place of (4), we define a set of super-unsaturated arms for each round t by

$$N_t := \Big\{ i : \Delta_i(t) \le 2\big\|\hat{\beta}_{t-1} - \beta\big\|_2 + \sqrt{\big\|X_{a_t^*}(t)\big\|_{V_{t-1}^{-1}}^2 + \big\|X_i(t)\big\|_{V_{t-1}^{-1}}^2} \Big\}. \qquad (5)$$

While g_t‖X_i(t)‖_{A_{t−1}^{−1}} in (4) is normalized with only the selected contexts, the second term on the right-hand side of (5) is normalized with all contexts, including X_{a_t^*}(t), the context of the optimal arm. This bound on ∆i(t) plays a crucial role in bounding the regret with a novel decomposition as in (6). The following Lemma shows a lower bound on the probability that the candidate arm is super-unsaturated. Lemma 2. For each t, let mt := arg maxi Ỹi(t) and let Nt be the super-unsaturated arms defined in (5). For any given γ ∈ [1/(N+1), 1/N), set v = (2 log(N/(1−γN)))^{−1/2}. Then, P(mt ∈ Nt | Ht) ≥ 1 − γ. Lemma 2 directly contributes to the reduction of √d in the hyperparameter v. In Agrawal and Goyal [2014], to prove a lower bound on P(at ∈ Ut | Ht), it is required to set v = √(9d log(t/δ)), which is of the order of √d. (Footnote 2: Some previous works assume φ−2 = O(1) even when ‖Xi(t)‖2 ≤ 1 (e.g. Li et al. [2017]). As pointed out by Ding et al. [2021], this assumption is unrealistic and the reported regret bound should be multiplied by O(d).) In contrast, Lemma 2 shows that v does not need to depend on d due to the definition of super-unsaturated arms in (5). In this way, we obtain a lower bound on P(mt ∈ Nt | Ht) without costing an extra √d. Using the lower bound, we can show that the resampling scheme allows the algorithm to choose super-unsaturated arms with high probability. For all i ∉ Nt,

$$\tilde{\pi}_i(t) := P(m_t = i \mid H_t) \le P\big(\textstyle\bigcup_{j \notin N_t}\{m_t = j\} \mid H_t\big) = P(m_t \notin N_t \mid H_t) \le \gamma,$$

where the last inequality holds due to Lemma 2. Thus, in turn, if π̃i(t) > γ, then i ∈ Nt. This means that {i : π̃i(t) > γ} is a subset of Nt and {at ∈ {i : π̃i(t) > γ}} ⊂ {at ∈ Nt}. Hence, the probability of the event {at ∈ Nt} is greater than the probability of sampling any arm which satisfies π̃i(t) > γ. Therefore, with resampling, the event {at ∈ Nt} occurs with high probability. (See supplementary materials Section A for details.) When the algorithm chooses the arm from the super-unsaturated set, i.e., when at ∈ Nt happens, (5) implies

$$\Delta_{a_t}(t) \le 2\big\|\hat{\beta}_{t-1} - \beta\big\|_2 + \sqrt{\big\|X_{a_t^*}(t)\big\|_{V_{t-1}^{-1}}^2 + \big\|X_{a_t}(t)\big\|_{V_{t-1}^{-1}}^2}. \qquad (6)$$

By definition, ∆at(t) = regret(t), so the regret at round t can be expressed as two additive terms, which presents a stark contrast with the multiplicative decomposition of the regret in Agrawal and Goyal [2014]. In Section 4.3 we show how each term can be bounded at a separate rate. 4.3 Bounds for the cumulative regret We first bound the leading term of (6) and introduce a novel estimation error bound free of d for the contextual bandit DR estimator. Theorem 3. (A dimension-free estimation error bound for the contextual bandit DR estimator.) Suppose Assumptions 1-4 hold. For each t = 1, . . . , T, let β̆t be any Ht-measurable estimator satisfying ‖β̆t − β‖2 ≤ b, for some constant b > 0. For each i and t, assume that πi(t) > 0 and that there exists γ ∈ [1/(N+1), 1/N) such that πat(t) > γ. Given any δ ∈ (0, 1), set λt = 4√2 N √(t log(12t²/δ)). Then with probability at least 1 − δ, the estimator β̂t in (2) satisfies

$$\big\|\hat{\beta}_t - \beta\big\|_2 \le \frac{C_{b,\sigma}}{\phi^2\sqrt{t}}\sqrt{\log\frac{12t^2}{\delta}}, \qquad (7)$$

for all t = 1, . . . , T, where Cb,σ is a constant which depends only on b and σ.
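Returning to Lemma 2 and definition (5) above, the guarantee P(mt ∈ Nt | Ht) ≥ 1 − γ holds conditionally on any fixed history, so it can be checked numerically. The sketch below builds one synthetic history (the contexts, true β, estimate β̂, and matrix V are arbitrary stand-ins of my own choosing, not the paper's experiment) and estimates the probability by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 10, 20
gamma = 1.0 / (N + 1)
v = (2.0 * np.log(N / (1.0 - gamma * N))) ** -0.5          # the choice of v in Lemma 2

# One arbitrary fixed history: contexts, true beta, an estimate, and a matrix V_{t-1}.
X = rng.standard_normal((N, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
beta = rng.standard_normal(d);   beta /= np.linalg.norm(beta)
beta_hat = beta + 0.05 * rng.standard_normal(d)             # some estimation error
V = 50.0 * (X.T @ X) + np.sqrt(50.0) * np.eye(d)            # a stand-in for V_{t-1}
V_inv = np.linalg.inv(V)

# The super-unsaturated set N_t of definition (5).
means = X @ beta
a_star = int(np.argmax(means))
delta = means[a_star] - means
vnorm2 = np.einsum('nd,dk,nk->n', X, V_inv, X)              # ||X_i(t)||^2 in the V^{-1} norm
thresh = 2.0 * np.linalg.norm(beta_hat - beta) + np.sqrt(vnorm2[a_star] + vnorm2)
in_N_t = delta <= thresh

# Monte Carlo estimate of P(m_t in N_t | H_t), which Lemma 2 lower-bounds by 1 - gamma.
L = np.linalg.cholesky(v ** 2 * V_inv)
draws = beta_hat + rng.standard_normal((100_000, N, d)) @ L.T
m_t = np.einsum('mnd,nd->mn', draws, X).argmax(axis=1)
print("P(m_t in N_t) ~", in_N_t[m_t].mean(), "  1 - gamma =", 1.0 - gamma)
```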
In bandit literature, estimation error bounds typically include a term involving d which emerges from using the following two Lemmas: (i) the self-normalized bound for vector-valued martingales [Abbasi-Yadkori et al., 2011, Theorem 1], and (ii) the concentration inequality for the covariance matrix [Tropp, 2015, Corollary 5.2]. Instead of using (i) and (ii), we develop the two dimension-free bounds in Lemmas 4 and 6, to replace (i) and (ii), respectively. With the two Lemmas, we eliminate the dependence on d and express the estimation error bound with φ2 alone. Lemma 4. (A dimension-free bound for vector-valued martingales.) Let {Fτ}tτ=1 be a filtration and {η(τ)}tτ=1 be a real-valued stochastic process such that η(τ) is Fτ -measurable. Let {X(τ)} t τ=1 be an Rd-valued stochastic process where X(τ) is Fτ−1-measurable and ‖X(τ)‖2 ≤ 1. Assume that {η(τ)}tτ=1 are σ-sub-Gaussian as in Assumption 2. Then with probability at least 1− δ/t2, there exists an absolute constant C > 0 such that∥∥∥∥∥ t∑ τ=1 η(τ)X(τ) ∥∥∥∥∥ 2 ≤ Cσ √ t √ log 4t2 δ . (8) Compared to Theorem 1 of Abbasi-Yadkori et al. [2011], our bound (8) does not involve d, yielding a dimension-free bound for vector-valued martingales. However, the bound (8) has √ t term which comes from using ‖·‖2 instead of the self-normalized norm ‖·‖V −1t . To complete the proof of Theorem 3, we need the following condition, λmin (Vt) ≥ ct, (9) for some constant c > 0. Li et al. [2017] points out that satisfying (9) is challenging. To overcome this difficulty, Amani et al. [2019] and Bastani and Bayati [2020] use an assumption on the covariance matrix of contexts and a concentration inequality for matrix to prove (9), described as follows. Proposition 5. [Tropp, 2015, Theorem 5.1.1] Let P (1), . . . , P (t) ∈ Rd×d be the symmetric matrices such that λmin(P (τ)) ≥ 0, λmax(P (τ)) ≤ L and λmin(E[P (τ)]) ≥ φ2, for all τ = 1, 2, . . . , t. Then, P ( λmin ( t∑ τ=1 P (τ) ) ≤ tφ 2 2 ) ≤ d exp ( − tφ 2 8L ) . (10) To prove (9) using (10) with probability at least 1 − δ, for δ ∈ (0, 1), it requires t ≥ 8Lφ2 log d δ . Thus, one can use (10) only after O(φ−2 log d) rounds. Due to this requirement, Bastani and Bayati [2020] implements the forced sampling techniques for O ( N2d4(log d)2 ) rounds, and Amani et al. [2019] forces to select arms randomly for O ( φ−2 log d ) rounds. These mandatory exploration phase empirically prevents the algorithm choosing the optimal arm. An alternative form of matrix Chernoff inequality for adapted sequences is Theorem 3 in Tropp [2011], but the bound also has a multiplicative factor of d. Instead of applying Proposition 5 to prove (9), we utilize a novel dimension-free concentration inequality stated in the following Lemma. Lemma 6. (A dimension-free concentration bound for symmetric bounded matrices.) Let ‖A‖F be a Frobenious norm of a matrix A. Let {P (τ)}tτ=1 ∈ Rd×d be the symmetric matrices adapted to a filtration {Fτ}tτ=1. For each τ = 1, . . . , t, suppose that ‖P (τ)‖F ≤ c, for some c > 0 and λmin (E [P (τ)| Fτ−1]) ≥ φ2 > 0, almost surely. For given any δ ∈ (0, 1), set λt ≥ 4 √ 2c √ t √ log 4t 2 δ . Then with probability at least 1− δ/t 2, λmin ( t∑ τ=1 P (τ) + λtI ) ≥ φ2t. (11) Lemma 6 shows that setting λt with √ t rate guarantees (9) for all t ≥ 1. We incorporate λt stated in Lemma 6 in our estimator (2), and show in Section 5 that the DR estimator regularized with λt outperforms estimators from other contextual bandit algorithms in early rounds. 
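A quick way to see Lemma 6 and condition (9) in action is to simulate the guarantee (11) for a simple choice of matrices, e.g. rank-one P(τ) = x_τ x_τᵀ with x_τ uniform on the unit ball, so that ‖P(τ)‖_F ≤ 1 and φ² = 1/(d + 2). This toy instance is my own; in the paper the lemma is applied with P(τ) = Σᵢ Xᵢ(τ)Xᵢ(τ)ᵀ and c = N.

```python
import numpy as np

rng = np.random.default_rng(3)
d, delta, c = 20, 0.05, 1.0           # P(tau) = x x^T with ||x||_2 <= 1, so ||P||_F <= 1
phi2 = 1.0 / (d + 2)                  # lambda_min(E[P(tau)]) for x uniform on the unit ball

def sample_unit_ball(n, d):
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x * (rng.random(n) ** (1.0 / d))[:, None]

X = sample_unit_ball(2000, d)
S = np.zeros((d, d))
for t in range(1, X.shape[0] + 1):
    S += np.outer(X[t - 1], X[t - 1])                        # running sum of P(tau)
    lam_t = 4.0 * np.sqrt(2.0) * c * np.sqrt(t * np.log(4.0 * t ** 2 / delta))
    if t % 500 == 0:
        lmin = np.linalg.eigvalsh(S + lam_t * np.eye(d)).min()
        # Eq. (11): the regularized minimum eigenvalue should dominate phi^2 * t from round one;
        # at these horizons the lam_t term alone already carries most of the bound.
        print(f"t={t:5d}   lambda_min = {lmin:8.1f}   phi^2 * t = {phi2 * t:6.1f}")
```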
We obtain the bounds free of d in Lemmas 4 and 6 mainly by applying Lemma 2.3 in Lee et al. [2016] which states that any Hilbert space martingale can be reduced to R2. Thus, we can project the vector-valued (or the matrix) martingales to R2-martingales, and reduce the dimension from d (or d2) to 2. Then we apply Azuma-Hoeffding inequality just twice, instead of d times. In this way, Lemma 6 provides a novel dimension-free bound for the covariance matrix. Lemmas 4 and 6 can be applied to other works to improve the existing bounds. For example, using these Lemmas, the estimation error bound of Bastani and Bayati [2020] can be improved by a factor of log d. Proposition EC.1 of Bastani and Bayati [2020] provides an estimation error bound for the ordinary least square estimator by using Proposition 5 and bounding all values of d coordinates. By applying Lemmas 4 and 6, one does not have to deal with each coordinate and eliminate dependence on d. Using Lemma 6, we can bound the second term of the regret in (6) as follows. For j = 1, . . . , N ‖Xj(t)‖V −1t−1 ≤ ‖Xj(t)‖2 √∥∥V −1t−1∥∥2 ≤ λmin (Vt−1)−1/2 ≤ 1√φ2N(t− 1) . (12) Finally, we are ready to bound regret(t) in (6). Lemma 7. Suppose the assumptions in Theorem 1 hold. Then with probability at least 1− 2δ, regret(t) ≤ 2Cb,σ φ2 √ t− 1 √ log 12t2 δ + √ 2 φ √ N(t− 1) , (13) for all t = 2, . . . , T . Proof. Since at is shown to be super-unsaturated with high probability, we can use (6) to have regret(t) ≤ 2‖β̂t−1 − β‖2 + √ ‖Xa∗t (t)‖ 2 V −1t−1 + ‖Xat(t)‖2V −1t−1 , for all t = 2, . . . , T . We see that the first term is bounded by Theorem 3, and the second term by (12). Note that to prove Theorem 1, Lemma 6 is invoked, and the event (11) of Lemma 6 is a subset of that in (7). Therefore (13) holds with probability at least 1− 2δ instead of 1− 3δ. Details are given in supplementary materials. Lemma 7 shows that the regret at round t does not exceed a O(φ−2t−1/2) bound when at ∈ Nt, which is guaranteed in our algorithm via resampling with high probability (See Section A.3 for details). This concludes the proof of Theorem 1. 5 Simulation studies In this section, we compare the performances of the three algorithms: (i) LinTS [Agrawal and Goyal, 2013], (ii) BLTS [Dimakopoulou et al., 2019], and (iii) the proposed DRTS. We use simulated data described as follows. The number of arms N is set to 10 or 20, and the dimension of contexts d is set to 20 or 30. For each element of the contexts j = 1, · · · , d, we generate [X1j(t), · · · , XNj(t)] from a normal distribution N (µN , VN ) with mean µ10 = [−10,−8, · · · ,−2, 2, · · · , 8, 10]T , or µ20 = [−20,−18, · · · ,−2, 2, · · · , 18, 20]T , and the covariance matrix VN ∈ RN×N has VN (i, i) = 1 for every i and VN (i, k) = ρ for every i 6= k. We set ρ = 0.5 and truncate the sampled contexts to satisfy ‖Xi(t)‖2 ≤ 1. To generate the stochastic rewards, we sample ηi(t) independently from N (0, 1). Each element of β follows a uniform distribution, U(−1/ √ d, 1/ √ d). All three algorithms have v as an input parameter which controls the variance of β̃i(t). BLTS and DRTS require a positive threshold γ which truncates the selection probability. We consider v ∈ {0.001, 0.01, 0.1, 1} in all three algorithms, γ ∈ {0.01, 0.05, 0.1} for BLTS, and set γ = 1/(N + 1) in DRTS. Then we report the minimum regrets among all combinations. The regularization parameter is λt = √ t in DRTS and λt = 1 in both LinTS and BLTS. 
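For reference, here is a sketch of the simulated-data generator as I read the description above. The description does not say exactly how the contexts are truncated to the unit ball, so the rescaling step below, the helper structure, and the even-N assumption for the mean vector are my own choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_instance(N=10, d=20, rho=0.5):
    """Simulated-data generator for Section 5 as I read it (assumes N is even)."""
    mu = np.array([x for x in range(-N, N + 1, 2) if x != 0], dtype=float)   # e.g. [-10,-8,...,8,10]
    V_N = np.full((N, N), rho) + (1.0 - rho) * np.eye(N)                     # unit variances, corr rho
    beta = rng.uniform(-1.0 / np.sqrt(d), 1.0 / np.sqrt(d), size=d)

    def contexts():
        # One correlated N-vector per coordinate j, then arms as rows.
        X = rng.multivariate_normal(mu, V_N, size=d).T                       # shape (N, d)
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.maximum(norms, 1.0)                                    # enforce ||X_i(t)||_2 <= 1

    def rewards(X):
        return X @ beta + rng.standard_normal(X.shape[0])                    # eta_i(t) ~ N(0, 1)

    return beta, contexts, rewards

beta, contexts, rewards = make_instance(N=10, d=20)
X_t = contexts()
print(X_t.shape, rewards(X_t).shape)       # (10, 20) (10,)
```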
To obtain the imputation estimator β̆t required in DRTS, we use ridge regression with {X_{aτ}(τ), Y_{aτ}(τ)}_{τ=1}^{t−1} at each round t. Other implementation details are in the supplementary materials. Figure 1 shows the average of the cumulative regrets and the estimation error ‖β̂t − β‖2 of the three algorithms based on 10 replications. The figures in the two left columns show the average cumulative regret as a function of the number of rounds with the best set of hyperparameters for each algorithm. The total number of rounds is T = 20000. The figures in the third column show the average of the estimation error ‖β̂t − β‖2. In the early stage, the estimation errors of LinTS and BLTS increase rapidly, while that of DRTS is stable. The stability of the DR estimator likely follows from using the full contexts and the regularization parameter λt = √t. This yields a large margin in estimation error among LinTS, BLTS, and DRTS, especially when the dimension is large. 6 Conclusion In this paper, we propose a novel algorithm for stochastic contextual linear bandits. Viewing the bandit problem as a missing data problem, we use the DR technique to employ all contexts, including those that are not chosen. With the definition of super-unsaturated arms, we show a regret bound which depends only on the minimum eigenvalue of the sample covariance matrices. This new bound has an Õ(d√T) rate in many practical scenarios, improving the previous LinTS regret bounds by a factor of √d. Simulation studies show that the proposed algorithm performs better than other LinTS algorithms in large dimensions. Acknowledgements This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No.2020R1A2C1A01011950) (Wonyoung Kim and Myunghee Cho Paik), and by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01336, Artificial Intelligence Graduate School Program(UNIST)) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No.2021R1G1A100980111) (Gi-Soo Kim). Wonyoung Kim was also supported by the Hyundai Chung Mong-koo foundation.
1. What is the focus of the paper regarding Thompson Sampling? 2. What are the strengths of the proposed approach, particularly in its application of the doubly robust estimator? 3. Do you have any concerns or questions about the choice of M_t in implementing the algorithm? 4. How does the reviewer assess the theoretical results and empirical performance of the algorithm? 5. Are there any minor issues or typos in the paper that the reviewer has identified?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a variant of Thompson Sampling for the linear contextual bandit problem. Using a doubly robust estimator for estimating the underlying parameter, the paper shows a regret bound of the order \sqrt{T}/\phi^2, where \phi^2 is a lower bound on the minimum eigenvalue of the covariance matrix of contexts. In some special cases, this bound implies a bound of the order d\sqrt{T}, which is an improvement over the current result. Review This paper presents an interesting application of the doubly robust estimator in the online setting (Thompson Sampling). In particular, it uses a resampling technique to provide the propensity score with a lower bound without introducing additional bias (when compared with other methods, say, propensity score clipping). The resulting bound improves the current result in some special cases. How should one choose M_t when implementing the algorithm? How do the theoretical results and the empirical performance of the algorithm depend on the choice of M_t? In line 534, is Lemma 4 applied conditional on the event {all \pi_{a_t} > \gamma}? In that case, do the conditions required by Lemma 4 still hold (e.g. sub-Gaussianity)? Minor issue: In Algorithm 1, when sampling another (\tilde{\beta}_1,...,\tilde{\beta}_N), do we choose another m_t?
NIPS
Title Doubly Robust Thompson Sampling with Linear Payoffs Abstract A challenging aspect of the bandit problem is that a stochastic reward is observed only for the chosen arm and the rewards of other arms remain missing. The dependence of the arm choice on the past context and reward pairs compounds the complexity of regret analysis. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust (DR) Thompson Sampling employing the doubly-robust estimator used in missing data literature to Thompson Sampling with contexts (LinTS). Different from previous works relying on missing data techniques (Dimakopoulou et al. [2019], Kim and Paik [2019]), the proposed algorithm is designed to allow a novel additive regret decomposition leading to an improved regret bound with the order of Õ(φ−2 √ T ), where φ is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of LinTS using φ without the dimension of the context, d. Applying the relationship between φ and d, the regret bound of the proposed algorithm is Õ(d √ T ) in many practical scenarios, improving the bound of LinTS by a factor of √ d. A benefit of the proposed method is that it utilizes all the context data, chosen or not chosen, thus allowing to circumvent the technical definition of unsaturated arms used in theoretical analysis of LinTS. Empirical studies show the advantage of the proposed algorithm over LinTS. N/A √ T ), where φ2 is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of LinTS using φ2 without the dimension of the context, d. Applying the relationship between φ2 and d, the regret bound of the proposed algorithm is Õ(d √ T ) in many practical scenarios, improving the bound of LinTS by a factor of √ d. A benefit of the proposed method is that it utilizes all the context data, chosen or not chosen, thus allowing to circumvent the technical definition of unsaturated arms used in theoretical analysis of LinTS. Empirical studies show the advantage of the proposed algorithm over LinTS. 1 Introduction Contextual bandit has been popular in sequential decision tasks such as news article recommendation systems. In bandit problems, the learner sequentially pulls one arm among multiple arms and receives random rewards on each round of time. While not knowing the compensation mechanisms of rewards, the learner should make his/her decision to maximize the cumulative sum of rewards. In the course of gaining information about the compensation mechanisms through feedback, the learner should carefully balance between exploitation, pulling the best arm based on information accumulated so far, and exploration, pulling the arm that will assist in future choices, although it does not seem to be the best option at the moment. Therefore in the bandit problem, estimation or learning is an important element besides decision making. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). A challenging aspect of estimation in the bandit problem is that a stochastic reward is observed only for the chosen arm. Consequently, only the context and reward pair of the chosen arm is used for estimation, which causes dependency of the context data at the round on the past contexts and rewards. To handle this difficulty, we view bandit problems as missing data problems. The first step in handling missing data is to define full, observed, and missing data. 
In bandit settings, full data consist of rewards and contexts of all arms; observed data consist of full contexts for all arms and the reward for the chosen arm; missing data consist of the rewards for the arms that are not chosen. Typical estimation procedures require both rewards and contexts pairs to be observed, and the observed contexts from the unselected are discarded (see Table 1). The analysis based on the completely observed pairs only is called complete record analysis. Most stochastic bandit algorithms utilize estimates based on complete record analysis. Estimators from complete record analysis are known to be inefficient. In bandit setting, using the observed data whose probability of observation depends on previous rewards requires special theoretical treatment. There are two main approaches to missing data: imputation and inverse probability weighting (IPW). Imputation is to fill in the predicted value of missing data from a specified model, and IPW is to use the observed records only but weight them by the inverse of the observation probability. The doubly robust (DR) method [Robins et al., 1994, Bang and Robins, 2005] is a combination of imputation and IPW tools. We provide a review of missing data and DR methods in supplementary materials. The robustness against model misspecification in missing data settings is insignificant in the bandit setting since the probability of observation or allocation to an arm is known. The merit of the DR method in the bandit setting is its ability to employ all the contexts including unselected arms. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust Thompson Sampling (DRTS) that applies the DR technique used in missing data literature to Thompson Sampling with linear contextual bandits (LinTS). The main thrust of DRTS is to utilize contexts information for all arms, not just chosen arms. By using the unselected, yet observed contexts, along with a novel algorithmic device, the proposed algorithm renders a unique regret decomposition which leads to a novel regret bound without resorting to the technical definition of unsaturated arms used by Agrawal and Goyal [2014]. Since categorizing the arms into saturated vs. unsaturated plays a critical role in costing extra √ d, by circumventing it, we prove a Õ(d √ T ) bound of the cumulative regret in many practical occasions compared to Õ(d3/2 √ T ) shown in Agrawal and Goyal [2014]. The main contributions of this paper are as follows. • We propose a novel contextual bandit algorithm that improves the cumulative regret bound of LinTS by a factor of √ d (Theorem 1) in many practical scenarios (Section 4.1). This improvement is attained mainly by defining a novel set called super-unsaturated arms, that is utilizable due to the proposed estimator and resampling technique adopted in the algorithm. • We provide a novel estimation error bound of the proposed estimator (Theorem 3) which depends on the minimum eigenvalue of the covariance matrix of the contexts from all arms without d. • We develop a novel dimension-free concentration inequality for sub-Gaussian vector martingale (Lemma 4) and use it in deriving our regret bound in place of the self-normalized theorem by Abbasi-Yadkori et al. [2011]. • We develop a novel concentration inequality for the bounded matrix martingale (Lemma 6) which improves the existing result (Proposition 5) by removing the dependency on d in the bound. 
Lemma 6 also allows eliminating the forced sampling phases required in some bandit algorithms relying on Proposition 5 [Amani et al., 2019, Bastani and Bayati, 2020]. All missing proofs are in supplementary materials. 2 Related works Thompson Sampling [Thompson, 1933] has been extensively studied and shown solid performances in many applications (e.g. Chapelle and Li [2011]). Agrawal and Goyal [2013] is the first to prove theoretical bounds for LinTS and an alternative proof is given by Abeille et al. [2017]. Both papers show Õ(d3/2 √ T ) regret bound, which is known as the best regret bound for LinTS. Recently, Hamidi and Bayati [2020] points out that Õ(d3/2 √ T ) could be the best possible one can get when the estimator used by LinTS is employed. In our work, we improve this regret bound by a factor of√ d in many practical scenarios through a novel definition of super-unsaturated arms, which becomes utilizable due to the proposed estimator and resampling device implemented in the algorithm. Our work assumes the independence of the contexts from all arms across time rounds. Some notable works have used the assumption that the contexts are independently identically distributed (IID). Leveraging the IID assumption with a margin condition, Goldenshluger and Zeevi [2013] derives a two-armed linear contextual bandit algorithm with a regret upper bound of order O(d3logT ). Bastani and Bayati [2020] has extended this algorithm to any number of arms and improves the regret bound to O(d2log 3 2 d · logT ). The margin condition states that the gap between the expected rewards of the optimal arm and the next best arm is nonzero with some constant probability. This condition is crucial in achieving a O(logT ) regret bound instead of Õ( √ T ). In this paper, we do not assume this margin condition, and focus on the dependence on the dimension of contexts d. From a missing data point of view, most stochastic contextual bandit algorithms use the estimator from complete record analysis except Dimakopoulou et al. [2019] and Kim and Paik [2019]. Dimakopoulou et al. [2019] employs an IPW estimator that is based on the selected contexts alone. Dimakopoulou et al. [2019] proves a Õ(d √ −1T 1+ N) regret bound for their algorithm which depends on the number of arms, N . Kim and Paik [2019] considers the high-dimensional settings with sparsity, utilizes a DR technique, and improves the regret bound in terms of the sparse dimension instead of the actual dimension of the context, d. Kim and Paik [2019] is different from ours in several aspects: the mode of exploration ( -greedy vs. Thompson Sampling), the mode of regularization (Lasso vs. ridge regression); and the form of the estimator. A sharp distinction between the two estimators lies in that Kim and Paik [2019] aggregates contexts and rewards over the arms although they employ all the contexts. If we apply this aggregating estimator and DR-Lasso bandit algorithm to the low-dimensional setting, we obtain a regret bound of order O(Ndφ2 √ T ) when the contexts from the arms are independent. This bound is bigger than our bound by a factor of d and N . It is because the aggregated form of the estimator does not permit the novel regret decomposition derived in Section 4.2. The proposed estimator coupled with a novel algorithmic device renders the additive regret decomposition which in turn improves the order of the regret bound. 
3 Proposed estimator and algorithm 3.1 Settings and assumptions We denote a d-dimensional context for the ith arm at round t by Xi(t) ∈ Rd, and the corresponding random reward by Yi(t) for i = 1, . . . , N . We assume E [Yi(t)|Xi(t)] = Xi(t)Tβ for some unknown parameter β ∈ Rd. At round t, the arm that the learner chooses is denoted by at ∈ {1, . . . , N}, and the optimal arm by a∗t := arg maxi=1,...,N { Xi(t) Tβ } . Let regret(t) be the difference between the expected reward of the chosen arm and the optimal arm at round t, i.e., regret(t) := Xa∗t (t) Tβ − Xat(t)Tβ. The goal is to minimize the sum of regrets over T rounds, R(T ) := ∑T t=1 regret(t). The total round T is finite but possibly unknown. We also make the following assumptions. Assumption 1. Boundedness for scale-free regrets. For all i = 1, . . . , N and t = 1, . . . , T , we have ‖Xi(t)‖2 ≤ 1 and ‖β‖2 ≤ 1. Assumption 2. Sub-Gaussian error. Let Ht := ⋃t−1 τ=1 [ {Xi(τ)}Ni=1 ∪ {aτ} ∪ {Yaτ (τ)} ] ∪ {Xi(t)}Ni=1 be the set of observed data at round t. For each t and i, the error ηi(t) := Yi(t)−Xi(t)Tβ is conditionally zero-mean σ-sub-Gaussian for a fixed constant σ ≥ 0, i.e, E [ηi(t)|Ht] = 0 and E [ exp (ληi(t))|Ht] ≤ exp(λ2σ2/2), for all λ ∈ R. Furthermore, the distribution of ηi(t) does not depend on the choice at round t, i.e. at. Assumption 3. Independently distributed contexts. The stacked contexts vectors {Xi(1)}Ni=1, . . . , {Xi(T )}Ni=1 ∈ RdN are independently distributed. Assumption 4. Positive minimum eigenvalue of the average of covariance matrices. For each t, there exists a constant φ2 > 0 such that λmin ( E [ 1 N ∑N i=1Xi(t)Xi(t) T ]) ≥ φ2. Assumptions 1 and 2 are standard in stochastic bandit literature Agrawal and Goyal [2013]. We point out that given round t, Assumption 3 allows that the contexts among different arms,X1(t), . . . , XN (t) are correlated to each other. Assumption 3 is weaker than the assumption of IID, and the IID condition is considered by Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020]. As Bastani and Bayati [2020] points out, the IID assumption is reasonable in some practical settings, including clinical trials, where health outcomes of patients are independent of those of other patients. Both Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020] address the problem where the contexts are equal across all arms, i.e. X(t) = X1(t) = . . . = XN (t), while our work admits different contexts over all arms. Assumption 4 guarantees that the average of covariance matrices of contexts over the arms is well-behaved so that the inverse of the sample covariance matrix is bounded by the spectral norm. This assumption helps controlling the estimation error of β in linear regression models. Similar assumptions are adopted in existing works in the bandit setting [Goldenshluger and Zeevi, 2013, Amani et al., 2019, Li et al., 2017, Bastani and Bayati, 2020]. 3.2 Doubly robust estimator To describe the contextual bandit DR estimator, let πi(t) := P (at = i|Ht) > 0 be the probability of selecting arm i at round t. We define a DR pseudo-reward as Y DRi (t) = { 1− I (i = at) πi(t) } Xi(t) T β̆t + I (i = at) πi(t) Yat(t), (1) for some β̆t depending on Ht. Background of missing data methods and derivation of the DR pseudo-reward is provided in the supplementary material. Now, we propose our new estimator β̂t with a regularization parameter λt as below: β̂t = ( t∑ τ=1 N∑ i=1 Xi(τ)Xi(τ) T + λtI )−1( t∑ τ=1 N∑ i=1 Xi(τ)Y DR i (τ) ) . 
(2) Harnessing the pseudo-rewards defined in (1), we can make use of all contexts rather than just selected contexts. The DR estimator by Kim and Paik [2019] utilizes all contexts but has a different form from ours. While Kim and Paik [2019] uses Lasso estimator with pseudo-rewards aggregated over all arms, we use ridge regression estimator with pseudo-rewards in (1) which are defined separately for each i = 1, . . . , N . This seemingly small but important difference in forms paves a way in rendering our unique regret decomposition and improving the regret bound. 3.3 Algorithm In this subsection, we describe our proposed algorithm, DRTS which adapts DR technique to LinTS. The DRTS is presented in Algorithm 1. Distinctive features of DRTS compared to LinTS include the novel estimator and the resampling technique. At each round t ≥ 1, the algorithm samples β̃i(t) from the distribution N(β̂t−1, v2V −1t−1) for each i independently. Let Ỹi(t) := Xi(t) T β̃i(t) and mt := arg maxi Ỹi(t). We set mt as a candidate action and compute π̃mt(t) := P(Ỹmt(t) = maxi Ỹi(t)|Ht). 1 If π̃mt(t) > γ, then the arm mt is selected, i.e., at = mt. Otherwise, the algorithm resamples β̃i(t) until it finds another arm satisfying π̃i(t) > γ up to a predetermined fixed value Mt. Section A.3 in supplementary materials describes issues related to Mt including a suitable choice of Mt. 1This computation is known to be challenging but employing the independence among β̃1(t), . . . , β̃N (t), we derive an explicit form approximating π̃mt(t) in supplementary materials Section H.1. Algorithm 1 Doubly Robust Thompson Sampling for Linear Contextual Bandits (DRTS) Input: Exploration parameter v > 0, Regularization parameter λ > 0, Selection probability threshold γ ∈ [1/(N + 1), 1/N), Imputation estimator β̆u = f({X(τ), Yaτ (τ)}u−1τ=1), Number of maximum possible resampling Mt. Set F0 = 0, W0 = 0, β̂0 = 0 and V0 = λI for t = 1 to T do Observe contexts {Xi(t)}Ni=1. Sample β̃1(t), . . . , β̃N (t) from N(β̂t−1, v2V −1t−1) independently. Compute Ỹi(t) = Xi(t) T β̃i(t) Observe a candidate action mt := arg maxi Ỹi(t). Compute π̃mt(t) := P ( maxi Ỹi(t) = Ỹmt(t) ∣∣∣Ht). for l = 1 to Mt do if π̃mt(t) ≤ γ then Sample another β̃1(t), . . . , β̃N (t), observe another mt, and update π̃mt(t). else Break. end if end for Set at = mt, and play arm at. Observe reward Yat(t) and compute Y DR i (t) Ft = Ft−1 + ∑N i=1Xi(t)Y DR i (t); Wt = Wt−1 + ∑N i=1Xi(t)Xi(t) T ; Vt = Wt + λ √ tI β̂t = V −1 t Ft Update β̆t+1 for next round. end for The resampling step is incorporated to avoid small values of the probability of selection so that the pseudo-reward in (1) is numerically stable. A naive remedy to stabilize the pseudo-reward is to use max{πi(t), γ}, which fails to leading to our regret bound since it induces bias and also cannot guarantee that the selected arm is in the super-unsaturated arms defined in (5) with high probability (For details, see Section 4.2). The resampling step implemented in the proposed algorithm is designed to solve these problems. 4 Theoretical results Our theoretical results are organized as follows. In Section 4.1, we provide the main result, the cumulative regret bound of Õ(φ−2 √ T ) of DRTS. The main thrust of deriving the regret bound is to define super-unsaturated arms. In Section 4.2 we introduce the definition of super-unsaturated arms and show how it admits a novel decomposition of the regret into two additive terms as in (6). In Section 4.3 we bound each term of the decomposed regret bounds (6). 
The first term is the estimation error, and Theorem 3 finds its bound. In the course of proving Theorem 3, we need Lemma 4, which plays a similar role to the self-normalized theorem of Abbasi-Yadkori et al. [2011]. We conclude the section by presenting Lemma 6 and bound the second term of (6). 4.1 An improved regret bound Theorem 1 provides the regret bound of DRTS in terms of the minimum eigenvalue without d. Theorem 1. Suppose that Assumptions 1-4 hold. If β̆t in Algorithm 1 satisfies ‖β̆t − β‖2 ≤ b for a constant b > 0, for all t = 1, . . . , T , then with probability 1− 2δ, the cumulative regret by time T for DRTS algorithm is bounded by R(T ) ≤ 2 + 4Cb,σ φ2 √ T log 12T 2 δ + 2 √ 2T φ √ N , (3) where Cb,σ is a constant which depends only on b and σ. The bound (3) has a rate of O(φ−2 √ T ). The relationship between the dimension d and the minimum eigenvalue φ2 can be shown by dφ2 = d N λmin ( E N∑ i=1 Xi(t)Xi(t) T ) ≤ 1 N E N∑ i=1 Tr ( Xi(t)Xi(t) T ) = 1 N E N∑ i=1 ‖Xi(t)‖22 ≤ 1. This implies φ−2 ≥ d, 2 but there are many practical scenarios such that φ−2 = O(d) holds. Bastani et al. [2021] identifies such examples including the uniform distribution and truncated multivariate normal distributions. When the context has uniform distribution on the unit ball, φ−2 = d+ 2. When the context has truncated multivariate normal distribution with mean 0 and covariance Σ, we can set φ−2 = (d+ 2) exp( 12λmin(Σ) ). For more examples, we refer to Bastani et al. [2021]. Furthermore, regardless of distributions, φ−2 = O(d) holds when the correlation structure has the row sum of offdiagonals independent of the dimension, for example, AR(1), tri-diagonal, block-diagonal matrices. In these scenarios, the regret bound in (3) becomes Õ(d √ T ). Compared to the previous bound of LinTS [Agrawal and Goyal, 2014, Abeille et al., 2017], we obtain a better regret bound by the factor of √ d for identified practical cases. As for the imputation estimator β̌t, we assume that ‖β̌t − β‖2 ≤ b, where b is an absolute constant. We suggest two cases which guarantee this assumption. First, if a biased estimator is used, we can rescale the estimator so that its l2-norm is bounded by some constant C > 0. Then, ‖β̌t − β‖2 ≤ ‖β̌t‖2 + ‖β‖2 ≤ C + 1 and b = C + 1. Second, consistent estimators such as ridge estimator or the least squared estimator satisfy the condition since ‖β̌t − β‖2 = O(d √ log t/t). The term d is cancelled out when t ≥ td, where td is the minimum integer that satisfies log t/t ≤ d−2. In these two cases, we can find a constant b which satisfies the assumption on the imputation estimator β̌t. 4.2 Super-unsaturated arms and a novel regret decomposition The key element in deriving (3) is to decompose the regret into two additive terms as in (6). To allow such decomposition to be utilizable, we need to define a novel set of arms called super-unsaturated arms, which replaces the role of unsaturated arms in [Agrawal and Goyal, 2014]. The superunsaturated arms are formulated so that the chosen arm is included in this set with high probability. For each i and t, let ∆i(t) := Xa∗t (t) Tβ−Xi(t)Tβ. DefineAt := ∑t τ=1Xaτ (τ)Xaτ (τ) T +λI and Vt := ∑t τ=1 ∑N i=1Xi(τ)Xi(τ) T + λtI . For the sake of contrast, recall the definition of unsaturated arms by Agrawal and Goyal [2014] Ut := { i : ∆i(t) ≤ gt ‖Xi(t)‖A−1t−1 } , (4) where gt := C √ d log(t/δ) min{ √ d, √ logN} for some constant C > 0. This gt is constructed to ensure that there exists a positive lower bound for the probability that the selected arm is unsaturated. 
In place of (4), we define a set of super-unsaturated arms for each round t by Nt := { i : ∆i(t) ≤ 2 ∥∥∥β̂t−1 − β∥∥∥ 2 + √∥∥Xa∗t (t)∥∥2V −1t−1 + ‖Xi(t)‖2V −1t−1 } . (5) While gt ‖Xi(t)‖A−1t−1 in (4) is normalized with only selected contexts, the second term in the right hand side of (5) is normalized with all contexts including Xa∗t (t), the contexts of the optimal arm. This bound of ∆i(t) plays a crucial role in bounding the regret with a novel decomposition as in (6). The following Lemma shows a lower bound of the probability that the candidate arm is super-unsaturated. Lemma 2. For each t, let mt := arg maxi Ỹi(t) and let Nt be the super-unsaturated arms defined in (5). For any given γ ∈ [1/(N + 1), 1/N), set v = (2 log (N/(1− γN)))−1/2. Then, P (mt ∈ Nt|Ht) ≥ 1− γ. Lemma 2 directly contributes to the reduction of √ d in the hyperparamter v. In Agrawal and Goyal [2014], to prove a lower bound of P (at ∈ Ut|Ht), it is required to set v = √ 9d log(t/δ), with the 2Some previous works assume φ−2 = O(1) even when ‖Xi(t)‖2 ≤ 1 (e.g. Li et al. [2017]). As pointed out by Ding et al. [2021], this assumption is unrealistic and the reported regret bound should be multiplied by O(d). order of √ d. In contrast, Lemma 2 shows that v does not need to depend on d due to the definition of super-unsaturated arms in (5). In this way, we obtain a lower bound of P (mt ∈ Nt|Ht) without costing extra √ d. Using the lower bound, we can show that the resampling scheme allows the algorithm to choose the super-unsaturated arms with high probability. For all i /∈ Nt, π̃i(t) := P (mt = i|Ht) ≤ P ( ∪j /∈Nt{mt = j} ∣∣Ht) = P (mt /∈ Nt|Ht) ≤ γ, where the last inequality holds due to Lemma 2. Thus, in turn, if π̃i(t) > γ, then i ∈ Nt. This means that {i : π̃i(t) > γ} is a subset of Nt and {at ∈ {i : π̃i(t) > γ}} ⊂ {at ∈ Nt}. Hence, the probability of the event {at ∈ Nt} is greater than the probability of sampling any arm which satisfies π̃i(t) > γ. Therefore, with resampling, the event {at ∈ Nt} occurs with high probability. (See supplementary materials Section A for details.) When the algorithm chooses the arm from the super-unsaturated set, i.e., when at ∈ Nt happens, (5) implies ∆at(t) ≤ 2 ∥∥∥β̂t−1 − β∥∥∥ 2 + √∥∥Xa∗t (t)∥∥2V −1t−1 + ‖Xat(t)‖2V −1t−1 . (6) By definition, ∆at(t) = regret(t) and the regret at round t can be expressed as the two additive terms, which presents a stark contrast with multiplicative decomposition of the regret in Agrawal and Goyal [2014]. In section 4.3 we show how each term can be bounded with separate rate. 4.3 Bounds for the cumulative regret We first bound the leading term of (6) and introduce a novel estimation error bound free of d for the contextual bandit DR estimator. Theorem 3. (A dimension-free estimation error bound for the contextual bandit DR estimator.) Suppose Assumptions 1-4 hold. For each t = 1, . . . , T , let β̆t be any Ht-measurable estimator satisfying ‖β̆t − β‖2 ≤ b, for some constant b > 0. For each i and t, assume that πi(t) > 0 and that there exists γ ∈ [1/(N + 1), 1/N) such that πat(t) > γ. Given any δ ∈ (0, 1), set λt = 4 √ 2N √ t log 12τ 2 δ . Then with probability at least 1− δ, the estimator β̂t in (2) satisfies∥∥∥β̂t − β∥∥∥ 2 ≤ Cb,σ φ2 √ t √ log 12t2 δ , (7) for all t = 1, . . . , T , where the constant Cb,σ which depends only on b and σ. 
In bandit literature, estimation error bounds typically include a term involving d which emerges from using the following two Lemmas: (i) the self-normalized bound for vector-valued martingales [Abbasi-Yadkori et al., 2011, Theorem 1], and (ii) the concentration inequality for the covariance matrix [Tropp, 2015, Corollary 5.2]. Instead of using (i) and (ii), we develop the two dimension-free bounds in Lemmas 4 and 6, to replace (i) and (ii), respectively. With the two Lemmas, we eliminate the dependence on d and express the estimation error bound with φ2 alone. Lemma 4. (A dimension-free bound for vector-valued martingales.) Let {Fτ}tτ=1 be a filtration and {η(τ)}tτ=1 be a real-valued stochastic process such that η(τ) is Fτ -measurable. Let {X(τ)} t τ=1 be an Rd-valued stochastic process where X(τ) is Fτ−1-measurable and ‖X(τ)‖2 ≤ 1. Assume that {η(τ)}tτ=1 are σ-sub-Gaussian as in Assumption 2. Then with probability at least 1− δ/t2, there exists an absolute constant C > 0 such that∥∥∥∥∥ t∑ τ=1 η(τ)X(τ) ∥∥∥∥∥ 2 ≤ Cσ √ t √ log 4t2 δ . (8) Compared to Theorem 1 of Abbasi-Yadkori et al. [2011], our bound (8) does not involve d, yielding a dimension-free bound for vector-valued martingales. However, the bound (8) has √ t term which comes from using ‖·‖2 instead of the self-normalized norm ‖·‖V −1t . To complete the proof of Theorem 3, we need the following condition, λmin (Vt) ≥ ct, (9) for some constant c > 0. Li et al. [2017] points out that satisfying (9) is challenging. To overcome this difficulty, Amani et al. [2019] and Bastani and Bayati [2020] use an assumption on the covariance matrix of contexts and a concentration inequality for matrix to prove (9), described as follows. Proposition 5. [Tropp, 2015, Theorem 5.1.1] Let P (1), . . . , P (t) ∈ Rd×d be the symmetric matrices such that λmin(P (τ)) ≥ 0, λmax(P (τ)) ≤ L and λmin(E[P (τ)]) ≥ φ2, for all τ = 1, 2, . . . , t. Then, P ( λmin ( t∑ τ=1 P (τ) ) ≤ tφ 2 2 ) ≤ d exp ( − tφ 2 8L ) . (10) To prove (9) using (10) with probability at least 1 − δ, for δ ∈ (0, 1), it requires t ≥ 8Lφ2 log d δ . Thus, one can use (10) only after O(φ−2 log d) rounds. Due to this requirement, Bastani and Bayati [2020] implements the forced sampling techniques for O ( N2d4(log d)2 ) rounds, and Amani et al. [2019] forces to select arms randomly for O ( φ−2 log d ) rounds. These mandatory exploration phase empirically prevents the algorithm choosing the optimal arm. An alternative form of matrix Chernoff inequality for adapted sequences is Theorem 3 in Tropp [2011], but the bound also has a multiplicative factor of d. Instead of applying Proposition 5 to prove (9), we utilize a novel dimension-free concentration inequality stated in the following Lemma. Lemma 6. (A dimension-free concentration bound for symmetric bounded matrices.) Let ‖A‖F be a Frobenious norm of a matrix A. Let {P (τ)}tτ=1 ∈ Rd×d be the symmetric matrices adapted to a filtration {Fτ}tτ=1. For each τ = 1, . . . , t, suppose that ‖P (τ)‖F ≤ c, for some c > 0 and λmin (E [P (τ)| Fτ−1]) ≥ φ2 > 0, almost surely. For given any δ ∈ (0, 1), set λt ≥ 4 √ 2c √ t √ log 4t 2 δ . Then with probability at least 1− δ/t 2, λmin ( t∑ τ=1 P (τ) + λtI ) ≥ φ2t. (11) Lemma 6 shows that setting λt with √ t rate guarantees (9) for all t ≥ 1. We incorporate λt stated in Lemma 6 in our estimator (2), and show in Section 5 that the DR estimator regularized with λt outperforms estimators from other contextual bandit algorithms in early rounds. 
We obtain the bounds free of $d$ in Lemmas 4 and 6 mainly by applying Lemma 2.3 of Lee et al. [2016], which states that any Hilbert space martingale can be reduced to $\mathbb{R}^2$. Thus, we can project the vector-valued (or matrix) martingales to $\mathbb{R}^2$-martingales, reducing the dimension from $d$ (or $d^2$) to 2. We then apply the Azuma-Hoeffding inequality just twice, instead of $d$ times. In this way, Lemma 6 provides a novel dimension-free bound for the covariance matrix.

Lemmas 4 and 6 can be applied to other works to improve existing bounds. For example, using these lemmas, the estimation error bound of Bastani and Bayati [2020] can be improved by a factor of $\log d$. Proposition EC.1 of Bastani and Bayati [2020] provides an estimation error bound for the ordinary least squares estimator by using Proposition 5 and bounding each of the $d$ coordinates. By applying Lemmas 4 and 6, one does not have to handle each coordinate separately and can eliminate the dependence on $d$.

Using Lemma 6, we can bound the second term of the regret in (6) as follows. For $j = 1, \ldots, N$,
$$\|X_j(t)\|_{V_{t-1}^{-1}} \le \|X_j(t)\|_2 \sqrt{\left\|V_{t-1}^{-1}\right\|_2} \le \lambda_{\min}(V_{t-1})^{-1/2} \le \frac{1}{\sqrt{\phi^2 N(t-1)}}. \qquad (12)$$

Finally, we are ready to bound $\mathrm{regret}(t)$ in (6).

Lemma 7. Suppose the assumptions in Theorem 1 hold. Then with probability at least $1 - 2\delta$,
$$\mathrm{regret}(t) \le \frac{2C_{b,\sigma}}{\phi^2\sqrt{t-1}}\sqrt{\log\frac{12t^2}{\delta}} + \frac{\sqrt{2}}{\phi\sqrt{N(t-1)}}, \qquad (13)$$
for all $t = 2, \ldots, T$.

Proof. Since $a_t$ is shown to be super-unsaturated with high probability, we can use (6) to obtain
$$\mathrm{regret}(t) \le 2\|\hat{\beta}_{t-1} - \beta\|_2 + \sqrt{\|X_{a_t^*}(t)\|^2_{V_{t-1}^{-1}} + \|X_{a_t}(t)\|^2_{V_{t-1}^{-1}}},$$
for all $t = 2, \ldots, T$. The first term is bounded by Theorem 3, and the second term by (12). Note that to prove Theorem 1, Lemma 6 is invoked, and the event (11) of Lemma 6 is a subset of the event in (7). Therefore (13) holds with probability at least $1-2\delta$ instead of $1-3\delta$. Details are given in the supplementary materials.

Lemma 7 shows that the regret at round $t$ does not exceed an $O(\phi^{-2}t^{-1/2})$ bound when $a_t \in N_t$, which our algorithm guarantees with high probability via resampling (see Section A.3 for details). This concludes the proof of Theorem 1.

5 Simulation studies

In this section, we compare the performances of three algorithms: (i) LinTS [Agrawal and Goyal, 2013], (ii) BLTS [Dimakopoulou et al., 2019], and (iii) the proposed DRTS. We use simulated data described as follows. The number of arms $N$ is set to 10 or 20, and the dimension of contexts $d$ is set to 20 or 30. For each element of the contexts $j = 1, \ldots, d$, we generate $[X_{1j}(t), \ldots, X_{Nj}(t)]$ from a normal distribution $\mathcal{N}(\mu_N, V_N)$ with mean $\mu_{10} = [-10, -8, \ldots, -2, 2, \ldots, 8, 10]^T$ or $\mu_{20} = [-20, -18, \ldots, -2, 2, \ldots, 18, 20]^T$, and covariance matrix $V_N \in \mathbb{R}^{N\times N}$ with $V_N(i,i) = 1$ for every $i$ and $V_N(i,k) = \rho$ for every $i \ne k$. We set $\rho = 0.5$ and truncate the sampled contexts to satisfy $\|X_i(t)\|_2 \le 1$. To generate the stochastic rewards, we sample $\eta_i(t)$ independently from $\mathcal{N}(0,1)$. Each element of $\beta$ follows a uniform distribution, $U(-1/\sqrt{d}, 1/\sqrt{d})$. All three algorithms have $v$ as an input parameter which controls the variance of $\tilde{\beta}_i(t)$. BLTS and DRTS require a positive threshold $\gamma$ which truncates the selection probability. We consider $v \in \{0.001, 0.01, 0.1, 1\}$ in all three algorithms, $\gamma \in \{0.01, 0.05, 0.1\}$ for BLTS, and set $\gamma = 1/(N+1)$ in DRTS. We then report the minimum regrets among all combinations. The regularization parameter is $\lambda_t = \sqrt{t}$ in DRTS and $\lambda_t = 1$ in both LinTS and BLTS.
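The simulated-data setup above, together with the ridge-regression imputation estimator described in the next paragraph, can be sketched as follows. This is our reconstruction, not the authors' code: the function names, the exact truncation rule, the ridge penalty, and the random seed are not specified by the paper.

```python
import numpy as np

def generate_round(N, d, beta, rho=0.5, rng=None):
    """One round of contexts and rewards following the Section 5 setup (N must be even)."""
    if rng is None:
        rng = np.random.default_rng()
    # Arm-wise means mu_N = [-N, ..., -2, 2, ..., N] (e.g. mu_10 or mu_20 in the paper).
    mu = np.concatenate([np.arange(-N, 0, 2), np.arange(2, N + 2, 2)]).astype(float)
    # Covariance across arms: unit variances, pairwise correlation rho.
    V = np.full((N, N), rho) + (1 - rho) * np.eye(N)
    L = np.linalg.cholesky(V)
    # Column j holds [X_{1j}(t), ..., X_{Nj}(t)] ~ N(mu_N, V_N).
    X = mu[:, None] + L @ rng.normal(size=(N, d))
    # Truncate each context to the unit ball (our reading of "truncate to ||X_i(t)||_2 <= 1").
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
    Y = X @ beta + rng.normal(size=N)          # rewards with N(0, 1) noise eta_i(t)
    return X, Y

def imputation_estimator(X_sel, Y_sel, alpha=1.0):
    """Ridge-regression imputation estimator fitted on the selected pairs up to round t-1;
    the penalty `alpha` is our arbitrary choice (details are deferred to the supplement)."""
    d = X_sel.shape[1]
    return np.linalg.solve(X_sel.T @ X_sel + alpha * np.eye(d), X_sel.T @ Y_sel)

rng = np.random.default_rng(0)
d, N = 20, 10
beta = rng.uniform(-1 / np.sqrt(d), 1 / np.sqrt(d), size=d)
X, Y = generate_round(N, d, beta, rng=rng)
print(X.shape, Y.shape)    # (10, 20) (10,)
```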
To obtain the imputation estimator $\breve{\beta}_t$ required in DRTS, we use ridge regression with $\{X_{a_\tau}(\tau), Y_{a_\tau}(\tau)\}_{\tau=1}^{t-1}$ at each round $t$. Other implementation details are in the supplementary materials.

Figure 1 shows the average cumulative regret and the estimation error $\|\hat{\beta}_t - \beta\|_2$ of the three algorithms based on 10 replications. The figures in the two left columns show the average cumulative regret as a function of the number of rounds with the best set of hyperparameters for each algorithm. The total number of rounds is $T = 20000$. The figures in the third column show the average estimation error $\|\hat{\beta}_t - \beta\|_2$. In the early stage, the estimation errors of LinTS and BLTS increase rapidly, while that of DRTS is stable. The stability of the DR estimator likely follows from using the full contexts and the regularization parameter $\lambda_t = \sqrt{t}$. This yields a large gap in estimation error between DRTS and both LinTS and BLTS, especially when the dimension is large.

6 Conclusion

In this paper, we propose a novel algorithm for stochastic contextual linear bandits. Viewing the bandit problem as a missing data problem, we use the DR technique to employ all contexts, including those that are not chosen. With the definition of super-unsaturated arms, we show a regret bound which depends only on the minimum eigenvalue of the sample covariance matrices. This new bound has an $\tilde{O}(d\sqrt{T})$ rate in many practical scenarios, improving the previous LinTS regret bounds by a factor of $\sqrt{d}$. Simulation studies show that the proposed algorithm performs better than other LinTS algorithms in large dimensions.

Acknowledgements

This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No.2020R1A2C1A01011950) (Wonyoung Kim and Myunghee Cho Paik), and by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01336, Artificial Intelligence Graduate School Program (UNIST)) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No.2021R1G1A100980111) (Gi-Soo Kim). Wonyoung Kim was also supported by the Hyundai Chung Mong-koo foundation.
1. What is the focus of the paper regarding contextual linear bandits?
2. What are the strengths of the proposed algorithm, particularly in improving the cumulative regret bound?
3. What are some concerns or questions regarding the definition and implementation of certain components of the algorithm?
4. How does the reviewer assess the clarity and organization of the paper's content?
5. Are there any suggestions or ideas for further improvement or refinement of the proposed method?
Summary Of The Paper Review
Summary Of The Paper

This work studies the problem of contextual linear bandits following a combination of the Thompson Sampling approach and Doubly Robust methods. It proposes an algorithm that improves the cumulative regret bound of the existing LinTS algorithm by a factor of \sqrt{d} through the new definition of super-unsaturated arms, for some specific examples where the contexts are independently distributed.

Review

The paper is well-written and the problem formulation and the contributions are clearly stated. My first comment is on the definition of \pi_i(t), which is crucial in computing Y_i^{DR}(t) and thus the algorithm implementation. The computation of \pi_i(t) is not discussed until appendix A.2. I would suggest that the authors bring up this remark earlier in the main body or make a small comment on it when they first introduce \pi_i(t) in line 144.

Questions:

Just to make sure, \tilde\pi_i(t) is P(\tilde Y_i(t) = \max_{j\in[N]}\tilde Y_j(t))?

Could the authors comment on how they make sure that \pi_i(t) is non-zero, as it is necessary in the computation of Y_i^{DR}(t)?

What part of the analysis is affected if we consider that the action set is fixed at all rounds and replace the current definition of \phi^2 with \frac{1}{N}\sum_{i=1}^N X_i X_i^T?

Could the authors comment on the computation of the imputation estimator and what the condition is to guarantee that \|\breve\beta_t - \beta\| \leq b for all t\in[T]? Can we simply replace \breve\beta_t with the LSE of \beta, and b with O(\sqrt{d \log(T)})? In this case the regret bound will become O(d^{3/2}\sqrt{T}). Do the authors have a better choice for \breve\beta_t and b?
NIPS
Title Doubly Robust Thompson Sampling with Linear Payoffs

Abstract

A challenging aspect of the bandit problem is that a stochastic reward is observed only for the chosen arm and the rewards of other arms remain missing. The dependence of the arm choice on the past context and reward pairs compounds the complexity of regret analysis. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust (DR) Thompson Sampling employing the doubly-robust estimator used in missing data literature to Thompson Sampling with contexts (LinTS). Different from previous works relying on missing data techniques (Dimakopoulou et al. [2019], Kim and Paik [2019]), the proposed algorithm is designed to allow a novel additive regret decomposition leading to an improved regret bound with the order of Õ(φ⁻²√T), where φ² is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of LinTS using φ² without the dimension of the context, d. Applying the relationship between φ² and d, the regret bound of the proposed algorithm is Õ(d√T) in many practical scenarios, improving the bound of LinTS by a factor of √d. A benefit of the proposed method is that it utilizes all the context data, chosen or not chosen, thus allowing to circumvent the technical definition of unsaturated arms used in theoretical analysis of LinTS. Empirical studies show the advantage of the proposed algorithm over LinTS.

1 Introduction

Contextual bandit has been popular in sequential decision tasks such as news article recommendation systems. In bandit problems, the learner sequentially pulls one arm among multiple arms and receives random rewards on each round of time. While not knowing the compensation mechanisms of rewards, the learner should make his/her decision to maximize the cumulative sum of rewards. In the course of gaining information about the compensation mechanisms through feedback, the learner should carefully balance between exploitation, pulling the best arm based on information accumulated so far, and exploration, pulling the arm that will assist in future choices, although it does not seem to be the best option at the moment. Therefore in the bandit problem, estimation or learning is an important element besides decision making.

A challenging aspect of estimation in the bandit problem is that a stochastic reward is observed only for the chosen arm. Consequently, only the context and reward pair of the chosen arm is used for estimation, which causes dependency of the context data at the round on the past contexts and rewards. To handle this difficulty, we view bandit problems as missing data problems. The first step in handling missing data is to define full, observed, and missing data.
In bandit settings, full data consist of rewards and contexts of all arms; observed data consist of full contexts for all arms and the reward for the chosen arm; missing data consist of the rewards for the arms that are not chosen. Typical estimation procedures require both rewards and contexts pairs to be observed, and the observed contexts from the unselected are discarded (see Table 1). The analysis based on the completely observed pairs only is called complete record analysis. Most stochastic bandit algorithms utilize estimates based on complete record analysis. Estimators from complete record analysis are known to be inefficient. In bandit setting, using the observed data whose probability of observation depends on previous rewards requires special theoretical treatment. There are two main approaches to missing data: imputation and inverse probability weighting (IPW). Imputation is to fill in the predicted value of missing data from a specified model, and IPW is to use the observed records only but weight them by the inverse of the observation probability. The doubly robust (DR) method [Robins et al., 1994, Bang and Robins, 2005] is a combination of imputation and IPW tools. We provide a review of missing data and DR methods in supplementary materials. The robustness against model misspecification in missing data settings is insignificant in the bandit setting since the probability of observation or allocation to an arm is known. The merit of the DR method in the bandit setting is its ability to employ all the contexts including unselected arms. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust Thompson Sampling (DRTS) that applies the DR technique used in missing data literature to Thompson Sampling with linear contextual bandits (LinTS). The main thrust of DRTS is to utilize contexts information for all arms, not just chosen arms. By using the unselected, yet observed contexts, along with a novel algorithmic device, the proposed algorithm renders a unique regret decomposition which leads to a novel regret bound without resorting to the technical definition of unsaturated arms used by Agrawal and Goyal [2014]. Since categorizing the arms into saturated vs. unsaturated plays a critical role in costing extra √ d, by circumventing it, we prove a Õ(d √ T ) bound of the cumulative regret in many practical occasions compared to Õ(d3/2 √ T ) shown in Agrawal and Goyal [2014]. The main contributions of this paper are as follows. • We propose a novel contextual bandit algorithm that improves the cumulative regret bound of LinTS by a factor of √ d (Theorem 1) in many practical scenarios (Section 4.1). This improvement is attained mainly by defining a novel set called super-unsaturated arms, that is utilizable due to the proposed estimator and resampling technique adopted in the algorithm. • We provide a novel estimation error bound of the proposed estimator (Theorem 3) which depends on the minimum eigenvalue of the covariance matrix of the contexts from all arms without d. • We develop a novel dimension-free concentration inequality for sub-Gaussian vector martingale (Lemma 4) and use it in deriving our regret bound in place of the self-normalized theorem by Abbasi-Yadkori et al. [2011]. • We develop a novel concentration inequality for the bounded matrix martingale (Lemma 6) which improves the existing result (Proposition 5) by removing the dependency on d in the bound. 
Lemma 6 also allows eliminating the forced sampling phases required in some bandit algorithms relying on Proposition 5 [Amani et al., 2019, Bastani and Bayati, 2020]. All missing proofs are in supplementary materials. 2 Related works Thompson Sampling [Thompson, 1933] has been extensively studied and shown solid performances in many applications (e.g. Chapelle and Li [2011]). Agrawal and Goyal [2013] is the first to prove theoretical bounds for LinTS and an alternative proof is given by Abeille et al. [2017]. Both papers show Õ(d3/2 √ T ) regret bound, which is known as the best regret bound for LinTS. Recently, Hamidi and Bayati [2020] points out that Õ(d3/2 √ T ) could be the best possible one can get when the estimator used by LinTS is employed. In our work, we improve this regret bound by a factor of√ d in many practical scenarios through a novel definition of super-unsaturated arms, which becomes utilizable due to the proposed estimator and resampling device implemented in the algorithm. Our work assumes the independence of the contexts from all arms across time rounds. Some notable works have used the assumption that the contexts are independently identically distributed (IID). Leveraging the IID assumption with a margin condition, Goldenshluger and Zeevi [2013] derives a two-armed linear contextual bandit algorithm with a regret upper bound of order O(d3logT ). Bastani and Bayati [2020] has extended this algorithm to any number of arms and improves the regret bound to O(d2log 3 2 d · logT ). The margin condition states that the gap between the expected rewards of the optimal arm and the next best arm is nonzero with some constant probability. This condition is crucial in achieving a O(logT ) regret bound instead of Õ( √ T ). In this paper, we do not assume this margin condition, and focus on the dependence on the dimension of contexts d. From a missing data point of view, most stochastic contextual bandit algorithms use the estimator from complete record analysis except Dimakopoulou et al. [2019] and Kim and Paik [2019]. Dimakopoulou et al. [2019] employs an IPW estimator that is based on the selected contexts alone. Dimakopoulou et al. [2019] proves a Õ(d √ −1T 1+ N) regret bound for their algorithm which depends on the number of arms, N . Kim and Paik [2019] considers the high-dimensional settings with sparsity, utilizes a DR technique, and improves the regret bound in terms of the sparse dimension instead of the actual dimension of the context, d. Kim and Paik [2019] is different from ours in several aspects: the mode of exploration ( -greedy vs. Thompson Sampling), the mode of regularization (Lasso vs. ridge regression); and the form of the estimator. A sharp distinction between the two estimators lies in that Kim and Paik [2019] aggregates contexts and rewards over the arms although they employ all the contexts. If we apply this aggregating estimator and DR-Lasso bandit algorithm to the low-dimensional setting, we obtain a regret bound of order O(Ndφ2 √ T ) when the contexts from the arms are independent. This bound is bigger than our bound by a factor of d and N . It is because the aggregated form of the estimator does not permit the novel regret decomposition derived in Section 4.2. The proposed estimator coupled with a novel algorithmic device renders the additive regret decomposition which in turn improves the order of the regret bound. 
3 Proposed estimator and algorithm 3.1 Settings and assumptions We denote a d-dimensional context for the ith arm at round t by Xi(t) ∈ Rd, and the corresponding random reward by Yi(t) for i = 1, . . . , N . We assume E [Yi(t)|Xi(t)] = Xi(t)Tβ for some unknown parameter β ∈ Rd. At round t, the arm that the learner chooses is denoted by at ∈ {1, . . . , N}, and the optimal arm by a∗t := arg maxi=1,...,N { Xi(t) Tβ } . Let regret(t) be the difference between the expected reward of the chosen arm and the optimal arm at round t, i.e., regret(t) := Xa∗t (t) Tβ − Xat(t)Tβ. The goal is to minimize the sum of regrets over T rounds, R(T ) := ∑T t=1 regret(t). The total round T is finite but possibly unknown. We also make the following assumptions. Assumption 1. Boundedness for scale-free regrets. For all i = 1, . . . , N and t = 1, . . . , T , we have ‖Xi(t)‖2 ≤ 1 and ‖β‖2 ≤ 1. Assumption 2. Sub-Gaussian error. Let Ht := ⋃t−1 τ=1 [ {Xi(τ)}Ni=1 ∪ {aτ} ∪ {Yaτ (τ)} ] ∪ {Xi(t)}Ni=1 be the set of observed data at round t. For each t and i, the error ηi(t) := Yi(t)−Xi(t)Tβ is conditionally zero-mean σ-sub-Gaussian for a fixed constant σ ≥ 0, i.e, E [ηi(t)|Ht] = 0 and E [ exp (ληi(t))|Ht] ≤ exp(λ2σ2/2), for all λ ∈ R. Furthermore, the distribution of ηi(t) does not depend on the choice at round t, i.e. at. Assumption 3. Independently distributed contexts. The stacked contexts vectors {Xi(1)}Ni=1, . . . , {Xi(T )}Ni=1 ∈ RdN are independently distributed. Assumption 4. Positive minimum eigenvalue of the average of covariance matrices. For each t, there exists a constant φ2 > 0 such that λmin ( E [ 1 N ∑N i=1Xi(t)Xi(t) T ]) ≥ φ2. Assumptions 1 and 2 are standard in stochastic bandit literature Agrawal and Goyal [2013]. We point out that given round t, Assumption 3 allows that the contexts among different arms,X1(t), . . . , XN (t) are correlated to each other. Assumption 3 is weaker than the assumption of IID, and the IID condition is considered by Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020]. As Bastani and Bayati [2020] points out, the IID assumption is reasonable in some practical settings, including clinical trials, where health outcomes of patients are independent of those of other patients. Both Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020] address the problem where the contexts are equal across all arms, i.e. X(t) = X1(t) = . . . = XN (t), while our work admits different contexts over all arms. Assumption 4 guarantees that the average of covariance matrices of contexts over the arms is well-behaved so that the inverse of the sample covariance matrix is bounded by the spectral norm. This assumption helps controlling the estimation error of β in linear regression models. Similar assumptions are adopted in existing works in the bandit setting [Goldenshluger and Zeevi, 2013, Amani et al., 2019, Li et al., 2017, Bastani and Bayati, 2020]. 3.2 Doubly robust estimator To describe the contextual bandit DR estimator, let πi(t) := P (at = i|Ht) > 0 be the probability of selecting arm i at round t. We define a DR pseudo-reward as Y DRi (t) = { 1− I (i = at) πi(t) } Xi(t) T β̆t + I (i = at) πi(t) Yat(t), (1) for some β̆t depending on Ht. Background of missing data methods and derivation of the DR pseudo-reward is provided in the supplementary material. Now, we propose our new estimator β̂t with a regularization parameter λt as below: β̂t = ( t∑ τ=1 N∑ i=1 Xi(τ)Xi(τ) T + λtI )−1( t∑ τ=1 N∑ i=1 Xi(τ)Y DR i (τ) ) . 
(2) Harnessing the pseudo-rewards defined in (1), we can make use of all contexts rather than just selected contexts. The DR estimator by Kim and Paik [2019] utilizes all contexts but has a different form from ours. While Kim and Paik [2019] uses Lasso estimator with pseudo-rewards aggregated over all arms, we use ridge regression estimator with pseudo-rewards in (1) which are defined separately for each i = 1, . . . , N . This seemingly small but important difference in forms paves a way in rendering our unique regret decomposition and improving the regret bound. 3.3 Algorithm In this subsection, we describe our proposed algorithm, DRTS which adapts DR technique to LinTS. The DRTS is presented in Algorithm 1. Distinctive features of DRTS compared to LinTS include the novel estimator and the resampling technique. At each round t ≥ 1, the algorithm samples β̃i(t) from the distribution N(β̂t−1, v2V −1t−1) for each i independently. Let Ỹi(t) := Xi(t) T β̃i(t) and mt := arg maxi Ỹi(t). We set mt as a candidate action and compute π̃mt(t) := P(Ỹmt(t) = maxi Ỹi(t)|Ht). 1 If π̃mt(t) > γ, then the arm mt is selected, i.e., at = mt. Otherwise, the algorithm resamples β̃i(t) until it finds another arm satisfying π̃i(t) > γ up to a predetermined fixed value Mt. Section A.3 in supplementary materials describes issues related to Mt including a suitable choice of Mt. 1This computation is known to be challenging but employing the independence among β̃1(t), . . . , β̃N (t), we derive an explicit form approximating π̃mt(t) in supplementary materials Section H.1. Algorithm 1 Doubly Robust Thompson Sampling for Linear Contextual Bandits (DRTS) Input: Exploration parameter v > 0, Regularization parameter λ > 0, Selection probability threshold γ ∈ [1/(N + 1), 1/N), Imputation estimator β̆u = f({X(τ), Yaτ (τ)}u−1τ=1), Number of maximum possible resampling Mt. Set F0 = 0, W0 = 0, β̂0 = 0 and V0 = λI for t = 1 to T do Observe contexts {Xi(t)}Ni=1. Sample β̃1(t), . . . , β̃N (t) from N(β̂t−1, v2V −1t−1) independently. Compute Ỹi(t) = Xi(t) T β̃i(t) Observe a candidate action mt := arg maxi Ỹi(t). Compute π̃mt(t) := P ( maxi Ỹi(t) = Ỹmt(t) ∣∣∣Ht). for l = 1 to Mt do if π̃mt(t) ≤ γ then Sample another β̃1(t), . . . , β̃N (t), observe another mt, and update π̃mt(t). else Break. end if end for Set at = mt, and play arm at. Observe reward Yat(t) and compute Y DR i (t) Ft = Ft−1 + ∑N i=1Xi(t)Y DR i (t); Wt = Wt−1 + ∑N i=1Xi(t)Xi(t) T ; Vt = Wt + λ √ tI β̂t = V −1 t Ft Update β̆t+1 for next round. end for The resampling step is incorporated to avoid small values of the probability of selection so that the pseudo-reward in (1) is numerically stable. A naive remedy to stabilize the pseudo-reward is to use max{πi(t), γ}, which fails to leading to our regret bound since it induces bias and also cannot guarantee that the selected arm is in the super-unsaturated arms defined in (5) with high probability (For details, see Section 4.2). The resampling step implemented in the proposed algorithm is designed to solve these problems. 4 Theoretical results Our theoretical results are organized as follows. In Section 4.1, we provide the main result, the cumulative regret bound of Õ(φ−2 √ T ) of DRTS. The main thrust of deriving the regret bound is to define super-unsaturated arms. In Section 4.2 we introduce the definition of super-unsaturated arms and show how it admits a novel decomposition of the regret into two additive terms as in (6). In Section 4.3 we bound each term of the decomposed regret bounds (6). 
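To make the pseudo-rewards in (1), the estimator (2), and Algorithm 1 (Section 3.3 above) concrete, the following is a simplified single-round sketch in numpy. It is our reconstruction, not the authors' code: the selection probabilities π̃_i(t) are estimated here by Monte Carlo, whereas the paper derives a closed-form approximation in its supplementary Section H.1; the use of that same estimate in place of π_{a_t}(t) for the pseudo-reward, the clamping away from zero, the fallback when resampling exhausts M_t, and all numeric constants are our simplifications.

```python
import numpy as np

def estimate_selection_probs(beta_hat, V_inv, X, v, rng, n_mc=500):
    """Monte Carlo estimate of pi~_i(t) = P(arm i maximizes X_i(t)^T beta~_i(t) | H_t).
    A simple stand-in for the closed-form approximation in supplementary Section H.1."""
    N, d = X.shape
    samples = rng.multivariate_normal(beta_hat, v**2 * V_inv, size=(n_mc, N))
    scores = np.einsum("mnd,nd->mn", samples, X)     # scores[m, i] = X_i^T beta~_i (draw m)
    wins = np.bincount(scores.argmax(axis=1), minlength=N)
    return wins / n_mc

def drts_round(X, beta_hat, beta_breve, F, W, t, v, gamma, M_t, reward_fn, rng, lam=1.0):
    """One round of DRTS (Algorithm 1); a simplified reconstruction."""
    N, d = X.shape
    V_prev = W + lam * np.sqrt(max(t - 1, 1)) * np.eye(d)   # V_{t-1} (V_0 = lam * I)
    V_inv = np.linalg.inv(V_prev)
    probs = estimate_selection_probs(beta_hat, V_inv, X, v, rng)

    # Sample a candidate arm and resample (up to M_t times) until pi~_{m_t} > gamma.
    beta_tilde = rng.multivariate_normal(beta_hat, v**2 * V_inv, size=N)
    m_t = int(np.argmax(np.sum(beta_tilde * X, axis=1)))
    for _ in range(M_t):
        if probs[m_t] > gamma:
            break
        beta_tilde = rng.multivariate_normal(beta_hat, v**2 * V_inv, size=N)
        m_t = int(np.argmax(np.sum(beta_tilde * X, axis=1)))
    a_t = m_t
    y = reward_fn(a_t)

    # DR pseudo-rewards, eq. (1): Y_i^DR = X_i^T beta_breve for i != a_t, plus an
    # inverse-probability-weighted correction for the chosen arm; pi_{a_t}(t) is
    # approximated by the Monte Carlo estimate and clamped away from zero (our choice).
    pi_at = max(probs[a_t], 1e-6)
    y_dr = X @ beta_breve
    y_dr[a_t] += (y - X[a_t] @ beta_breve) / pi_at

    # Updates from Algorithm 1: F_t, W_t, V_t, and the ridge-style estimator (2).
    F = F + X.T @ y_dr
    W = W + X.T @ X
    V = W + lam * np.sqrt(t) * np.eye(d)
    beta_hat = np.linalg.solve(V, F)
    return a_t, beta_hat, F, W
```

Note how the pseudo-rewards let every arm's context enter F and W each round, which is exactly the feature that drives the additive regret decomposition discussed in Section 4.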
The first term is the estimation error, and Theorem 3 finds its bound. In the course of proving Theorem 3, we need Lemma 4, which plays a similar role to the self-normalized theorem of Abbasi-Yadkori et al. [2011]. We conclude the section by presenting Lemma 6 and bound the second term of (6). 4.1 An improved regret bound Theorem 1 provides the regret bound of DRTS in terms of the minimum eigenvalue without d. Theorem 1. Suppose that Assumptions 1-4 hold. If β̆t in Algorithm 1 satisfies ‖β̆t − β‖2 ≤ b for a constant b > 0, for all t = 1, . . . , T , then with probability 1− 2δ, the cumulative regret by time T for DRTS algorithm is bounded by R(T ) ≤ 2 + 4Cb,σ φ2 √ T log 12T 2 δ + 2 √ 2T φ √ N , (3) where Cb,σ is a constant which depends only on b and σ. The bound (3) has a rate of O(φ−2 √ T ). The relationship between the dimension d and the minimum eigenvalue φ2 can be shown by dφ2 = d N λmin ( E N∑ i=1 Xi(t)Xi(t) T ) ≤ 1 N E N∑ i=1 Tr ( Xi(t)Xi(t) T ) = 1 N E N∑ i=1 ‖Xi(t)‖22 ≤ 1. This implies φ−2 ≥ d, 2 but there are many practical scenarios such that φ−2 = O(d) holds. Bastani et al. [2021] identifies such examples including the uniform distribution and truncated multivariate normal distributions. When the context has uniform distribution on the unit ball, φ−2 = d+ 2. When the context has truncated multivariate normal distribution with mean 0 and covariance Σ, we can set φ−2 = (d+ 2) exp( 12λmin(Σ) ). For more examples, we refer to Bastani et al. [2021]. Furthermore, regardless of distributions, φ−2 = O(d) holds when the correlation structure has the row sum of offdiagonals independent of the dimension, for example, AR(1), tri-diagonal, block-diagonal matrices. In these scenarios, the regret bound in (3) becomes Õ(d √ T ). Compared to the previous bound of LinTS [Agrawal and Goyal, 2014, Abeille et al., 2017], we obtain a better regret bound by the factor of √ d for identified practical cases. As for the imputation estimator β̌t, we assume that ‖β̌t − β‖2 ≤ b, where b is an absolute constant. We suggest two cases which guarantee this assumption. First, if a biased estimator is used, we can rescale the estimator so that its l2-norm is bounded by some constant C > 0. Then, ‖β̌t − β‖2 ≤ ‖β̌t‖2 + ‖β‖2 ≤ C + 1 and b = C + 1. Second, consistent estimators such as ridge estimator or the least squared estimator satisfy the condition since ‖β̌t − β‖2 = O(d √ log t/t). The term d is cancelled out when t ≥ td, where td is the minimum integer that satisfies log t/t ≤ d−2. In these two cases, we can find a constant b which satisfies the assumption on the imputation estimator β̌t. 4.2 Super-unsaturated arms and a novel regret decomposition The key element in deriving (3) is to decompose the regret into two additive terms as in (6). To allow such decomposition to be utilizable, we need to define a novel set of arms called super-unsaturated arms, which replaces the role of unsaturated arms in [Agrawal and Goyal, 2014]. The superunsaturated arms are formulated so that the chosen arm is included in this set with high probability. For each i and t, let ∆i(t) := Xa∗t (t) Tβ−Xi(t)Tβ. DefineAt := ∑t τ=1Xaτ (τ)Xaτ (τ) T +λI and Vt := ∑t τ=1 ∑N i=1Xi(τ)Xi(τ) T + λtI . For the sake of contrast, recall the definition of unsaturated arms by Agrawal and Goyal [2014] Ut := { i : ∆i(t) ≤ gt ‖Xi(t)‖A−1t−1 } , (4) where gt := C √ d log(t/δ) min{ √ d, √ logN} for some constant C > 0. This gt is constructed to ensure that there exists a positive lower bound for the probability that the selected arm is unsaturated. 
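As an illustration of the relationship dφ² ≤ 1 and of the uniform-ball example (φ⁻² = d + 2) cited in Section 4.1 above, the following Monte Carlo estimate is our own sanity check, not an experiment from the paper; the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 20, 200_000

# Draw M contexts uniformly from the d-dimensional unit ball:
# uniform direction on the sphere, radius distributed as U^(1/d).
G = rng.normal(size=(M, d))
G /= np.linalg.norm(G, axis=1, keepdims=True)
X = G * rng.uniform(size=(M, 1)) ** (1.0 / d)

phi2_hat = np.linalg.eigvalsh((X.T @ X) / M).min()
print(f"estimated phi^2     = {phi2_hat:.5f}")
print(f"theoretical 1/(d+2) = {1 / (d + 2):.5f}")
print(f"d * phi^2           = {d * phi2_hat:.3f}  (should be <= 1)")
```

The estimate lands near 1/(d+2), so for this distribution φ⁻² = O(d) and the Õ(φ⁻²√T) bound of Theorem 1 indeed becomes Õ(d√T).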
1. What is the focus of the paper regarding Thompson Sampling for linear contextual bandits?
2. What are the strengths of the proposed variant of Thompson Sampling, particularly in its regret bound improvement?
3. Are there any concerns or limitations regarding the proposed estimator and its dependence on the context distribution?
4. How does the proposed method compare with existing methods, specifically LinTS, in terms of computational efficiency and practicality?
5. What are some potential directions for future research related to this work?
Summary Of The Paper Review
Summary Of The Paper This work proposes and analyzes a variant of the classical Thompson Sampling algorithm for linear contextual bandits (DRTS) which relies on a novel doubly robust estimator inspired from the missing data literature. They show a regret bound for their algorithm which depends on the context distribution and which, for a variety of different context distributions, improves on the state of the art for linear Thompson Sampling by a factor of \sqrt{d}. This result partially resolves a long-standing open question on the worst-case performance of linear Thompson Sampling. Review Thompson Sampling for the linear contextual bandit problem has been studied extensively. In the frequentist setting, the best known worst case upper bound for the classical linear Thompson Sampling algorithm (LinTS) scales as d^{3/2}\sqrt{T}, a factor of d^{1/2} off of the minimax optimal rate. The extra \sqrt{d} arises from inflating the covariance of the posterior by an extra factor of d to ensure sufficient exploration. It has been an open question whether this inflation factor of d can be removed for LinTS and the regret improved to d\sqrt{T} (see chapter 36 of [1]). This work proposes a variant of the LinTS algorithm which relies on a novel estimator inspired by the the missing data literature. The authors show that their algorithm does not require inflating the covariance of the posterior by a factor of d. As such, instead of incurring regret of d^{3/2}\sqrt{T}, they show that they are able to obtain regret scaling as a function of the context distribution, which they show that, in many cases, yields a regret of d\sqrt{T}. While this does not completely resolve the open question around LinTS, it takes a major step forward in illustrating how the context distribution can be leveraged to obtain tighter regret bounds. Pros: The improvement of the regret bound for Thompson Sampling to d\sqrt{T} in some cases is a noteworthy contribution. Furthermore, the explicit dependence on the context distribution in the regret bound is interesting and, to my knowledge, novel in existing minimax results. As noted above, the improvements here partially answer an open question and are a major step forward in our understanding of Thompson Sampling for linear bandits. The proposed estimator of \beta is novel in the bandits literature, to my knowledge, and could potentially be used to improve dimension dependence in other linear bandit algorithms. The dimension-free estimation error bound (Theorem 3) is a novel contribution itself and may be of independent interest. Simulation results are also convincing as to the effectiveness of DRTS. Cons: While in some cases it is better, the regret bound could actually be worse than the existing d^{3/2} \sqrt{T} regret bounds for LinTS. Is it possible to instead get regret of \min{d^{3/2}, \phi^{-2}}? For really applications of contextual linear bandits, the context distribution could be nearly anything (e.g. the feature vectors corresponding to a stream of users we have no control over) so without some sort of a worst-case guarantee, this regret bound could actually be quite bad. Can the \phi be improved from being the worst-case lower bound on the minimum eigenvalue to, e.g., the average case lower bound on the minimum eigenvalue? Alternatively, if one could show a lower bound that depended on \phi when running a TS-like algorithm without inflating the covariance by a factor of d, this would also help in convincing the reader that the \phi is fundamental and necessary. 
More discussion and examples should be given on the value of \phi obtained by various context distributions. While there is a short paragraph after Theorem 1 giving some discussion of \phi, I would like to see more precise statements of the value of \phi in the various settings mentioned. \check{\beta} (the imputation estimator) is not specified in the main text. This should be defined. Furthermore, it should be clarified what the value of C_{b,\sigma} is and what value of b is achievable if | \check{\beta} - \beta |2 \le b. From the appendix, it seems C{b,\sigma} = O(b), so it should be shown that b does not need to scale with d (naively one might imagine we would have b = \Omega(\sqrt{d})). As stated, the algorithm cannot be run in a computationally efficient way, due to the computation of \tilde{\pi}_t. While an approximation to this is proposed in the appendix, it is not stated how using the approximate version affects the theoretical results. Furthermore, it is not stated how to compute \pi_t (which is needed to compute Y^{DR}(t)). It would be interesting to see an experimental comparison against the version of LinTS which does not have covariance inflated by a factor of d. While this version doesn’t have theoretical guarantees in the frequentist setting, it is known to often perform well empirically, and it would be interesting to see how it compares to DRTS. Overall, this work makes a non-trivial contribution to our understanding of Thompson Sampling in the linear contextual bandit setting, partially answering an open question, and, as such, I believe merits an accept, assuming the issues outlined above are resolved. [1] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
Title Doubly Robust Thompson Sampling with Linear Payoffs Abstract A challenging aspect of the bandit problem is that a stochastic reward is observed only for the chosen arm and the rewards of other arms remain missing. The dependence of the arm choice on the past context and reward pairs compounds the complexity of regret analysis. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust (DR) Thompson Sampling employing the doubly-robust estimator used in missing data literature to Thompson Sampling with contexts (LinTS). Different from previous works relying on missing data techniques (Dimakopoulou et al. [2019], Kim and Paik [2019]), the proposed algorithm is designed to allow a novel additive regret decomposition leading to an improved regret bound with the order of Õ(φ−2 √ T ), where φ is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of LinTS using φ without the dimension of the context, d. Applying the relationship between φ and d, the regret bound of the proposed algorithm is Õ(d √ T ) in many practical scenarios, improving the bound of LinTS by a factor of √ d. A benefit of the proposed method is that it utilizes all the context data, chosen or not chosen, thus allowing to circumvent the technical definition of unsaturated arms used in theoretical analysis of LinTS. Empirical studies show the advantage of the proposed algorithm over LinTS. N/A √ T ), where φ2 is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of LinTS using φ2 without the dimension of the context, d. Applying the relationship between φ2 and d, the regret bound of the proposed algorithm is Õ(d √ T ) in many practical scenarios, improving the bound of LinTS by a factor of √ d. A benefit of the proposed method is that it utilizes all the context data, chosen or not chosen, thus allowing to circumvent the technical definition of unsaturated arms used in theoretical analysis of LinTS. Empirical studies show the advantage of the proposed algorithm over LinTS. 1 Introduction Contextual bandit has been popular in sequential decision tasks such as news article recommendation systems. In bandit problems, the learner sequentially pulls one arm among multiple arms and receives random rewards on each round of time. While not knowing the compensation mechanisms of rewards, the learner should make his/her decision to maximize the cumulative sum of rewards. In the course of gaining information about the compensation mechanisms through feedback, the learner should carefully balance between exploitation, pulling the best arm based on information accumulated so far, and exploration, pulling the arm that will assist in future choices, although it does not seem to be the best option at the moment. Therefore in the bandit problem, estimation or learning is an important element besides decision making. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). A challenging aspect of estimation in the bandit problem is that a stochastic reward is observed only for the chosen arm. Consequently, only the context and reward pair of the chosen arm is used for estimation, which causes dependency of the context data at the round on the past contexts and rewards. To handle this difficulty, we view bandit problems as missing data problems. The first step in handling missing data is to define full, observed, and missing data. 
In bandit settings, full data consist of rewards and contexts of all arms; observed data consist of full contexts for all arms and the reward for the chosen arm; missing data consist of the rewards for the arms that are not chosen. Typical estimation procedures require both rewards and contexts pairs to be observed, and the observed contexts from the unselected are discarded (see Table 1). The analysis based on the completely observed pairs only is called complete record analysis. Most stochastic bandit algorithms utilize estimates based on complete record analysis. Estimators from complete record analysis are known to be inefficient. In bandit setting, using the observed data whose probability of observation depends on previous rewards requires special theoretical treatment. There are two main approaches to missing data: imputation and inverse probability weighting (IPW). Imputation is to fill in the predicted value of missing data from a specified model, and IPW is to use the observed records only but weight them by the inverse of the observation probability. The doubly robust (DR) method [Robins et al., 1994, Bang and Robins, 2005] is a combination of imputation and IPW tools. We provide a review of missing data and DR methods in supplementary materials. The robustness against model misspecification in missing data settings is insignificant in the bandit setting since the probability of observation or allocation to an arm is known. The merit of the DR method in the bandit setting is its ability to employ all the contexts including unselected arms. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust Thompson Sampling (DRTS) that applies the DR technique used in missing data literature to Thompson Sampling with linear contextual bandits (LinTS). The main thrust of DRTS is to utilize contexts information for all arms, not just chosen arms. By using the unselected, yet observed contexts, along with a novel algorithmic device, the proposed algorithm renders a unique regret decomposition which leads to a novel regret bound without resorting to the technical definition of unsaturated arms used by Agrawal and Goyal [2014]. Since categorizing the arms into saturated vs. unsaturated plays a critical role in costing extra √ d, by circumventing it, we prove a Õ(d √ T ) bound of the cumulative regret in many practical occasions compared to Õ(d3/2 √ T ) shown in Agrawal and Goyal [2014]. The main contributions of this paper are as follows. • We propose a novel contextual bandit algorithm that improves the cumulative regret bound of LinTS by a factor of √ d (Theorem 1) in many practical scenarios (Section 4.1). This improvement is attained mainly by defining a novel set called super-unsaturated arms, that is utilizable due to the proposed estimator and resampling technique adopted in the algorithm. • We provide a novel estimation error bound of the proposed estimator (Theorem 3) which depends on the minimum eigenvalue of the covariance matrix of the contexts from all arms without d. • We develop a novel dimension-free concentration inequality for sub-Gaussian vector martingale (Lemma 4) and use it in deriving our regret bound in place of the self-normalized theorem by Abbasi-Yadkori et al. [2011]. • We develop a novel concentration inequality for the bounded matrix martingale (Lemma 6) which improves the existing result (Proposition 5) by removing the dependency on d in the bound. 
Lemma 6 also allows eliminating the forced sampling phases required in some bandit algorithms relying on Proposition 5 [Amani et al., 2019, Bastani and Bayati, 2020]. All missing proofs are in supplementary materials. 2 Related works Thompson Sampling [Thompson, 1933] has been extensively studied and shown solid performances in many applications (e.g. Chapelle and Li [2011]). Agrawal and Goyal [2013] is the first to prove theoretical bounds for LinTS and an alternative proof is given by Abeille et al. [2017]. Both papers show Õ(d3/2 √ T ) regret bound, which is known as the best regret bound for LinTS. Recently, Hamidi and Bayati [2020] points out that Õ(d3/2 √ T ) could be the best possible one can get when the estimator used by LinTS is employed. In our work, we improve this regret bound by a factor of√ d in many practical scenarios through a novel definition of super-unsaturated arms, which becomes utilizable due to the proposed estimator and resampling device implemented in the algorithm. Our work assumes the independence of the contexts from all arms across time rounds. Some notable works have used the assumption that the contexts are independently identically distributed (IID). Leveraging the IID assumption with a margin condition, Goldenshluger and Zeevi [2013] derives a two-armed linear contextual bandit algorithm with a regret upper bound of order O(d3logT ). Bastani and Bayati [2020] has extended this algorithm to any number of arms and improves the regret bound to O(d2log 3 2 d · logT ). The margin condition states that the gap between the expected rewards of the optimal arm and the next best arm is nonzero with some constant probability. This condition is crucial in achieving a O(logT ) regret bound instead of Õ( √ T ). In this paper, we do not assume this margin condition, and focus on the dependence on the dimension of contexts d. From a missing data point of view, most stochastic contextual bandit algorithms use the estimator from complete record analysis except Dimakopoulou et al. [2019] and Kim and Paik [2019]. Dimakopoulou et al. [2019] employs an IPW estimator that is based on the selected contexts alone. Dimakopoulou et al. [2019] proves a Õ(d √ −1T 1+ N) regret bound for their algorithm which depends on the number of arms, N . Kim and Paik [2019] considers the high-dimensional settings with sparsity, utilizes a DR technique, and improves the regret bound in terms of the sparse dimension instead of the actual dimension of the context, d. Kim and Paik [2019] is different from ours in several aspects: the mode of exploration ( -greedy vs. Thompson Sampling), the mode of regularization (Lasso vs. ridge regression); and the form of the estimator. A sharp distinction between the two estimators lies in that Kim and Paik [2019] aggregates contexts and rewards over the arms although they employ all the contexts. If we apply this aggregating estimator and DR-Lasso bandit algorithm to the low-dimensional setting, we obtain a regret bound of order O(Ndφ2 √ T ) when the contexts from the arms are independent. This bound is bigger than our bound by a factor of d and N . It is because the aggregated form of the estimator does not permit the novel regret decomposition derived in Section 4.2. The proposed estimator coupled with a novel algorithmic device renders the additive regret decomposition which in turn improves the order of the regret bound. 
3 Proposed estimator and algorithm 3.1 Settings and assumptions We denote a d-dimensional context for the ith arm at round t by Xi(t) ∈ Rd, and the corresponding random reward by Yi(t) for i = 1, . . . , N . We assume E [Yi(t)|Xi(t)] = Xi(t)Tβ for some unknown parameter β ∈ Rd. At round t, the arm that the learner chooses is denoted by at ∈ {1, . . . , N}, and the optimal arm by a∗t := arg maxi=1,...,N { Xi(t) Tβ } . Let regret(t) be the difference between the expected reward of the chosen arm and the optimal arm at round t, i.e., regret(t) := Xa∗t (t) Tβ − Xat(t)Tβ. The goal is to minimize the sum of regrets over T rounds, R(T ) := ∑T t=1 regret(t). The total round T is finite but possibly unknown. We also make the following assumptions. Assumption 1. Boundedness for scale-free regrets. For all i = 1, . . . , N and t = 1, . . . , T , we have ‖Xi(t)‖2 ≤ 1 and ‖β‖2 ≤ 1. Assumption 2. Sub-Gaussian error. Let Ht := ⋃t−1 τ=1 [ {Xi(τ)}Ni=1 ∪ {aτ} ∪ {Yaτ (τ)} ] ∪ {Xi(t)}Ni=1 be the set of observed data at round t. For each t and i, the error ηi(t) := Yi(t)−Xi(t)Tβ is conditionally zero-mean σ-sub-Gaussian for a fixed constant σ ≥ 0, i.e, E [ηi(t)|Ht] = 0 and E [ exp (ληi(t))|Ht] ≤ exp(λ2σ2/2), for all λ ∈ R. Furthermore, the distribution of ηi(t) does not depend on the choice at round t, i.e. at. Assumption 3. Independently distributed contexts. The stacked contexts vectors {Xi(1)}Ni=1, . . . , {Xi(T )}Ni=1 ∈ RdN are independently distributed. Assumption 4. Positive minimum eigenvalue of the average of covariance matrices. For each t, there exists a constant φ2 > 0 such that λmin ( E [ 1 N ∑N i=1Xi(t)Xi(t) T ]) ≥ φ2. Assumptions 1 and 2 are standard in stochastic bandit literature Agrawal and Goyal [2013]. We point out that given round t, Assumption 3 allows that the contexts among different arms,X1(t), . . . , XN (t) are correlated to each other. Assumption 3 is weaker than the assumption of IID, and the IID condition is considered by Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020]. As Bastani and Bayati [2020] points out, the IID assumption is reasonable in some practical settings, including clinical trials, where health outcomes of patients are independent of those of other patients. Both Goldenshluger and Zeevi [2013] and Bastani and Bayati [2020] address the problem where the contexts are equal across all arms, i.e. X(t) = X1(t) = . . . = XN (t), while our work admits different contexts over all arms. Assumption 4 guarantees that the average of covariance matrices of contexts over the arms is well-behaved so that the inverse of the sample covariance matrix is bounded by the spectral norm. This assumption helps controlling the estimation error of β in linear regression models. Similar assumptions are adopted in existing works in the bandit setting [Goldenshluger and Zeevi, 2013, Amani et al., 2019, Li et al., 2017, Bastani and Bayati, 2020]. 3.2 Doubly robust estimator To describe the contextual bandit DR estimator, let πi(t) := P (at = i|Ht) > 0 be the probability of selecting arm i at round t. We define a DR pseudo-reward as Y DRi (t) = { 1− I (i = at) πi(t) } Xi(t) T β̆t + I (i = at) πi(t) Yat(t), (1) for some β̆t depending on Ht. Background of missing data methods and derivation of the DR pseudo-reward is provided in the supplementary material. Now, we propose our new estimator β̂t with a regularization parameter λt as below: β̂t = ( t∑ τ=1 N∑ i=1 Xi(τ)Xi(τ) T + λtI )−1( t∑ τ=1 N∑ i=1 Xi(τ)Y DR i (τ) ) . 
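A direct transcription of the pseudo-reward (1) and the ridge-regularized DR estimator (2) can be sketched as follows. The array shapes and function names are assumptions made for readability; no attempt is made to reproduce the authors' actual implementation.

```python
import numpy as np

def dr_pseudo_rewards(X_t, a_t, y_obs, pi_t, beta_breve):
    """Pseudo-rewards of Eq. (1) for one round.

    X_t:        (N, d) contexts of all arms at round t
    a_t:        index of the chosen arm
    y_obs:      observed reward Y_{a_t}(t)
    pi_t:       (N,) selection probabilities pi_i(t)
    beta_breve: (d,) imputation estimator used for the model term"""
    N = X_t.shape[0]
    chosen = np.zeros(N)
    chosen[a_t] = 1.0
    model_term = X_t @ beta_breve                       # imputed reward for every arm
    return (1.0 - chosen / pi_t) * model_term + (chosen / pi_t) * y_obs

def dr_ridge_estimate(X_hist, Y_dr_hist, lam_t):
    """Ridge DR estimator of Eq. (2), e.g. with lam_t proportional to sqrt(t).

    X_hist:    (t, N, d) contexts of all arms up to round t
    Y_dr_hist: (t, N)    pseudo-rewards Y_i^DR(tau)"""
    t, N, d = X_hist.shape
    Xf = X_hist.reshape(t * N, d)
    gram = Xf.T @ Xf + lam_t * np.eye(d)
    return np.linalg.solve(gram, Xf.T @ Y_dr_hist.reshape(t * N))
```

Note that the sums in (2) run over all N arms at every round, which is exactly how the unselected-but-observed contexts enter the estimator.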
(2) Harnessing the pseudo-rewards defined in (1), we can make use of all contexts rather than just selected contexts. The DR estimator by Kim and Paik [2019] utilizes all contexts but has a different form from ours. While Kim and Paik [2019] uses Lasso estimator with pseudo-rewards aggregated over all arms, we use ridge regression estimator with pseudo-rewards in (1) which are defined separately for each i = 1, . . . , N . This seemingly small but important difference in forms paves a way in rendering our unique regret decomposition and improving the regret bound. 3.3 Algorithm In this subsection, we describe our proposed algorithm, DRTS which adapts DR technique to LinTS. The DRTS is presented in Algorithm 1. Distinctive features of DRTS compared to LinTS include the novel estimator and the resampling technique. At each round t ≥ 1, the algorithm samples β̃i(t) from the distribution N(β̂t−1, v2V −1t−1) for each i independently. Let Ỹi(t) := Xi(t) T β̃i(t) and mt := arg maxi Ỹi(t). We set mt as a candidate action and compute π̃mt(t) := P(Ỹmt(t) = maxi Ỹi(t)|Ht). 1 If π̃mt(t) > γ, then the arm mt is selected, i.e., at = mt. Otherwise, the algorithm resamples β̃i(t) until it finds another arm satisfying π̃i(t) > γ up to a predetermined fixed value Mt. Section A.3 in supplementary materials describes issues related to Mt including a suitable choice of Mt. 1This computation is known to be challenging but employing the independence among β̃1(t), . . . , β̃N (t), we derive an explicit form approximating π̃mt(t) in supplementary materials Section H.1. Algorithm 1 Doubly Robust Thompson Sampling for Linear Contextual Bandits (DRTS) Input: Exploration parameter v > 0, Regularization parameter λ > 0, Selection probability threshold γ ∈ [1/(N + 1), 1/N), Imputation estimator β̆u = f({X(τ), Yaτ (τ)}u−1τ=1), Number of maximum possible resampling Mt. Set F0 = 0, W0 = 0, β̂0 = 0 and V0 = λI for t = 1 to T do Observe contexts {Xi(t)}Ni=1. Sample β̃1(t), . . . , β̃N (t) from N(β̂t−1, v2V −1t−1) independently. Compute Ỹi(t) = Xi(t) T β̃i(t) Observe a candidate action mt := arg maxi Ỹi(t). Compute π̃mt(t) := P ( maxi Ỹi(t) = Ỹmt(t) ∣∣∣Ht). for l = 1 to Mt do if π̃mt(t) ≤ γ then Sample another β̃1(t), . . . , β̃N (t), observe another mt, and update π̃mt(t). else Break. end if end for Set at = mt, and play arm at. Observe reward Yat(t) and compute Y DR i (t) Ft = Ft−1 + ∑N i=1Xi(t)Y DR i (t); Wt = Wt−1 + ∑N i=1Xi(t)Xi(t) T ; Vt = Wt + λ √ tI β̂t = V −1 t Ft Update β̆t+1 for next round. end for The resampling step is incorporated to avoid small values of the probability of selection so that the pseudo-reward in (1) is numerically stable. A naive remedy to stabilize the pseudo-reward is to use max{πi(t), γ}, which fails to leading to our regret bound since it induces bias and also cannot guarantee that the selected arm is in the super-unsaturated arms defined in (5) with high probability (For details, see Section 4.2). The resampling step implemented in the proposed algorithm is designed to solve these problems. 4 Theoretical results Our theoretical results are organized as follows. In Section 4.1, we provide the main result, the cumulative regret bound of Õ(φ−2 √ T ) of DRTS. The main thrust of deriving the regret bound is to define super-unsaturated arms. In Section 4.2 we introduce the definition of super-unsaturated arms and show how it admits a novel decomposition of the regret into two additive terms as in (6). In Section 4.3 we bound each term of the decomposed regret bounds (6). 
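The candidate-selection and resampling loop of Algorithm 1 can be summarized in a short sketch. The fallback when no candidate exceeds γ within M_t resamples is not specified in the excerpt above (it is deferred to the supplementary material), so returning the last candidate below is an assumption; the callable pi_tilde_fn stands in for the approximation of π̃_{m_t}(t) mentioned in the footnote.

```python
import numpy as np

def drts_candidate(X_t, beta_hat, V, v, gamma, M_t, pi_tilde_fn, rng):
    """Candidate selection with resampling, a sketch of the inner loop of Algorithm 1.

    X_t: (N, d) contexts; beta_hat, V: current estimate and Gram matrix;
    v: exploration parameter; gamma: probability threshold; M_t: resampling budget.
    pi_tilde_fn(m, X_t, beta_hat, V, v) should return an estimate of
    P(arm m attains the maximal sampled score | history)."""
    N, d = X_t.shape
    cov = v ** 2 * np.linalg.inv(V)
    m = None
    for _ in range(max(M_t, 1)):
        beta_tilde = rng.multivariate_normal(beta_hat, cov, size=N)   # one draw per arm
        scores = np.einsum("nd,nd->n", X_t, beta_tilde)
        m = int(np.argmax(scores))
        if pi_tilde_fn(m, X_t, beta_hat, V, v) > gamma:
            return m                 # candidate passes the probability threshold
    return m                         # assumed fallback: play the last candidate
```

After the arm is played and its reward observed, the pseudo-rewards feed the running sums F_t and W_t, the regularized matrix V_t = W_t + λ√t I, and the estimate β̂_t = V_t^{-1} F_t, exactly as in the algorithm listing.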
The first term is the estimation error, and Theorem 3 finds its bound. In the course of proving Theorem 3, we need Lemma 4, which plays a similar role to the self-normalized theorem of Abbasi-Yadkori et al. [2011]. We conclude the section by presenting Lemma 6 and bound the second term of (6). 4.1 An improved regret bound Theorem 1 provides the regret bound of DRTS in terms of the minimum eigenvalue without d. Theorem 1. Suppose that Assumptions 1-4 hold. If β̆t in Algorithm 1 satisfies ‖β̆t − β‖2 ≤ b for a constant b > 0, for all t = 1, . . . , T , then with probability 1− 2δ, the cumulative regret by time T for DRTS algorithm is bounded by R(T ) ≤ 2 + 4Cb,σ φ2 √ T log 12T 2 δ + 2 √ 2T φ √ N , (3) where Cb,σ is a constant which depends only on b and σ. The bound (3) has a rate of O(φ−2 √ T ). The relationship between the dimension d and the minimum eigenvalue φ2 can be shown by dφ2 = d N λmin ( E N∑ i=1 Xi(t)Xi(t) T ) ≤ 1 N E N∑ i=1 Tr ( Xi(t)Xi(t) T ) = 1 N E N∑ i=1 ‖Xi(t)‖22 ≤ 1. This implies φ−2 ≥ d, 2 but there are many practical scenarios such that φ−2 = O(d) holds. Bastani et al. [2021] identifies such examples including the uniform distribution and truncated multivariate normal distributions. When the context has uniform distribution on the unit ball, φ−2 = d+ 2. When the context has truncated multivariate normal distribution with mean 0 and covariance Σ, we can set φ−2 = (d+ 2) exp( 12λmin(Σ) ). For more examples, we refer to Bastani et al. [2021]. Furthermore, regardless of distributions, φ−2 = O(d) holds when the correlation structure has the row sum of offdiagonals independent of the dimension, for example, AR(1), tri-diagonal, block-diagonal matrices. In these scenarios, the regret bound in (3) becomes Õ(d √ T ). Compared to the previous bound of LinTS [Agrawal and Goyal, 2014, Abeille et al., 2017], we obtain a better regret bound by the factor of √ d for identified practical cases. As for the imputation estimator β̌t, we assume that ‖β̌t − β‖2 ≤ b, where b is an absolute constant. We suggest two cases which guarantee this assumption. First, if a biased estimator is used, we can rescale the estimator so that its l2-norm is bounded by some constant C > 0. Then, ‖β̌t − β‖2 ≤ ‖β̌t‖2 + ‖β‖2 ≤ C + 1 and b = C + 1. Second, consistent estimators such as ridge estimator or the least squared estimator satisfy the condition since ‖β̌t − β‖2 = O(d √ log t/t). The term d is cancelled out when t ≥ td, where td is the minimum integer that satisfies log t/t ≤ d−2. In these two cases, we can find a constant b which satisfies the assumption on the imputation estimator β̌t. 4.2 Super-unsaturated arms and a novel regret decomposition The key element in deriving (3) is to decompose the regret into two additive terms as in (6). To allow such decomposition to be utilizable, we need to define a novel set of arms called super-unsaturated arms, which replaces the role of unsaturated arms in [Agrawal and Goyal, 2014]. The superunsaturated arms are formulated so that the chosen arm is included in this set with high probability. For each i and t, let ∆i(t) := Xa∗t (t) Tβ−Xi(t)Tβ. DefineAt := ∑t τ=1Xaτ (τ)Xaτ (τ) T +λI and Vt := ∑t τ=1 ∑N i=1Xi(τ)Xi(τ) T + λtI . For the sake of contrast, recall the definition of unsaturated arms by Agrawal and Goyal [2014] Ut := { i : ∆i(t) ≤ gt ‖Xi(t)‖A−1t−1 } , (4) where gt := C √ d log(t/δ) min{ √ d, √ logN} for some constant C > 0. This gt is constructed to ensure that there exists a positive lower bound for the probability that the selected arm is unsaturated. 
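The claim that φ^{-2} = d + 2 for contexts drawn uniformly from the unit ball, and the inequality dφ² ≤ 1, can both be checked numerically. The sampling construction below (Gaussian direction times a U^{1/d} radius) is a standard recipe used here purely as an illustration; it is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200_000

# Contexts uniform on the unit ball in R^d.
g = rng.normal(size=(n, d))
x = g / np.linalg.norm(g, axis=1, keepdims=True) * rng.random(n)[:, None] ** (1.0 / d)

cov = x.T @ x / n                                  # estimate of E[X X^T]
phi2 = np.linalg.eigvalsh(cov).min()               # minimum eigenvalue, i.e. phi^2
print(1.0 / phi2, d + 2)                           # phi^{-2} is close to d + 2
print(d * phi2 <= 1.0 + 1e-6)                      # consistent with d * phi^2 <= 1
```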
In place of (4), we define a set of super-unsaturated arms for each round t by Nt := { i : ∆i(t) ≤ 2 ∥∥∥β̂t−1 − β∥∥∥ 2 + √∥∥Xa∗t (t)∥∥2V −1t−1 + ‖Xi(t)‖2V −1t−1 } . (5) While gt ‖Xi(t)‖A−1t−1 in (4) is normalized with only selected contexts, the second term in the right hand side of (5) is normalized with all contexts including Xa∗t (t), the contexts of the optimal arm. This bound of ∆i(t) plays a crucial role in bounding the regret with a novel decomposition as in (6). The following Lemma shows a lower bound of the probability that the candidate arm is super-unsaturated. Lemma 2. For each t, let mt := arg maxi Ỹi(t) and let Nt be the super-unsaturated arms defined in (5). For any given γ ∈ [1/(N + 1), 1/N), set v = (2 log (N/(1− γN)))−1/2. Then, P (mt ∈ Nt|Ht) ≥ 1− γ. Lemma 2 directly contributes to the reduction of √ d in the hyperparamter v. In Agrawal and Goyal [2014], to prove a lower bound of P (at ∈ Ut|Ht), it is required to set v = √ 9d log(t/δ), with the 2Some previous works assume φ−2 = O(1) even when ‖Xi(t)‖2 ≤ 1 (e.g. Li et al. [2017]). As pointed out by Ding et al. [2021], this assumption is unrealistic and the reported regret bound should be multiplied by O(d). order of √ d. In contrast, Lemma 2 shows that v does not need to depend on d due to the definition of super-unsaturated arms in (5). In this way, we obtain a lower bound of P (mt ∈ Nt|Ht) without costing extra √ d. Using the lower bound, we can show that the resampling scheme allows the algorithm to choose the super-unsaturated arms with high probability. For all i /∈ Nt, π̃i(t) := P (mt = i|Ht) ≤ P ( ∪j /∈Nt{mt = j} ∣∣Ht) = P (mt /∈ Nt|Ht) ≤ γ, where the last inequality holds due to Lemma 2. Thus, in turn, if π̃i(t) > γ, then i ∈ Nt. This means that {i : π̃i(t) > γ} is a subset of Nt and {at ∈ {i : π̃i(t) > γ}} ⊂ {at ∈ Nt}. Hence, the probability of the event {at ∈ Nt} is greater than the probability of sampling any arm which satisfies π̃i(t) > γ. Therefore, with resampling, the event {at ∈ Nt} occurs with high probability. (See supplementary materials Section A for details.) When the algorithm chooses the arm from the super-unsaturated set, i.e., when at ∈ Nt happens, (5) implies ∆at(t) ≤ 2 ∥∥∥β̂t−1 − β∥∥∥ 2 + √∥∥Xa∗t (t)∥∥2V −1t−1 + ‖Xat(t)‖2V −1t−1 . (6) By definition, ∆at(t) = regret(t) and the regret at round t can be expressed as the two additive terms, which presents a stark contrast with multiplicative decomposition of the regret in Agrawal and Goyal [2014]. In section 4.3 we show how each term can be bounded with separate rate. 4.3 Bounds for the cumulative regret We first bound the leading term of (6) and introduce a novel estimation error bound free of d for the contextual bandit DR estimator. Theorem 3. (A dimension-free estimation error bound for the contextual bandit DR estimator.) Suppose Assumptions 1-4 hold. For each t = 1, . . . , T , let β̆t be any Ht-measurable estimator satisfying ‖β̆t − β‖2 ≤ b, for some constant b > 0. For each i and t, assume that πi(t) > 0 and that there exists γ ∈ [1/(N + 1), 1/N) such that πat(t) > γ. Given any δ ∈ (0, 1), set λt = 4 √ 2N √ t log 12τ 2 δ . Then with probability at least 1− δ, the estimator β̂t in (2) satisfies∥∥∥β̂t − β∥∥∥ 2 ≤ Cb,σ φ2 √ t √ log 12t2 δ , (7) for all t = 1, . . . , T , where the constant Cb,σ which depends only on b and σ. 
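The resampling test requires the selection probability π̃_{m_t}(t) = P(Ỹ_{m_t}(t) = max_i Ỹ_i(t) | H_t). The paper derives an explicit approximation in its supplement; a simple Monte Carlo stand-in, which only uses the independence of the per-arm posterior draws, is sketched below with all names and shapes assumed.

```python
import numpy as np

def estimate_pi_tilde(X_t, beta_hat, V_inv, v, n_samples=10_000, rng=None):
    """Monte Carlo estimate of pi_tilde_i(t) for every arm i.

    X_t: (N, d) contexts; beta_hat: (d,) current estimate; V_inv: (d, d) inverse
    Gram matrix; v: exploration parameter.  Returns an (N,) vector of estimated
    probabilities that each arm attains the maximal sampled score."""
    rng = np.random.default_rng() if rng is None else rng
    N, d = X_t.shape
    cov = v ** 2 * V_inv
    # Independent beta_tilde_i for every arm and every Monte Carlo repetition.
    draws = rng.multivariate_normal(np.zeros(d), cov, size=(n_samples, N)) + beta_hat
    scores = np.einsum("nd,snd->sn", X_t, draws)            # (n_samples, N)
    winners = scores.argmax(axis=1)
    return np.bincount(winners, minlength=N) / n_samples
```

In the notation of the text, an arm i with estimated probability above γ is, with high probability, a member of the super-unsaturated set N_t, which is what the resampling step exploits.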
In bandit literature, estimation error bounds typically include a term involving d which emerges from using the following two Lemmas: (i) the self-normalized bound for vector-valued martingales [Abbasi-Yadkori et al., 2011, Theorem 1], and (ii) the concentration inequality for the covariance matrix [Tropp, 2015, Corollary 5.2]. Instead of using (i) and (ii), we develop the two dimension-free bounds in Lemmas 4 and 6, to replace (i) and (ii), respectively. With the two Lemmas, we eliminate the dependence on d and express the estimation error bound with φ2 alone. Lemma 4. (A dimension-free bound for vector-valued martingales.) Let {Fτ}tτ=1 be a filtration and {η(τ)}tτ=1 be a real-valued stochastic process such that η(τ) is Fτ -measurable. Let {X(τ)} t τ=1 be an Rd-valued stochastic process where X(τ) is Fτ−1-measurable and ‖X(τ)‖2 ≤ 1. Assume that {η(τ)}tτ=1 are σ-sub-Gaussian as in Assumption 2. Then with probability at least 1− δ/t2, there exists an absolute constant C > 0 such that∥∥∥∥∥ t∑ τ=1 η(τ)X(τ) ∥∥∥∥∥ 2 ≤ Cσ √ t √ log 4t2 δ . (8) Compared to Theorem 1 of Abbasi-Yadkori et al. [2011], our bound (8) does not involve d, yielding a dimension-free bound for vector-valued martingales. However, the bound (8) has √ t term which comes from using ‖·‖2 instead of the self-normalized norm ‖·‖V −1t . To complete the proof of Theorem 3, we need the following condition, λmin (Vt) ≥ ct, (9) for some constant c > 0. Li et al. [2017] points out that satisfying (9) is challenging. To overcome this difficulty, Amani et al. [2019] and Bastani and Bayati [2020] use an assumption on the covariance matrix of contexts and a concentration inequality for matrix to prove (9), described as follows. Proposition 5. [Tropp, 2015, Theorem 5.1.1] Let P (1), . . . , P (t) ∈ Rd×d be the symmetric matrices such that λmin(P (τ)) ≥ 0, λmax(P (τ)) ≤ L and λmin(E[P (τ)]) ≥ φ2, for all τ = 1, 2, . . . , t. Then, P ( λmin ( t∑ τ=1 P (τ) ) ≤ tφ 2 2 ) ≤ d exp ( − tφ 2 8L ) . (10) To prove (9) using (10) with probability at least 1 − δ, for δ ∈ (0, 1), it requires t ≥ 8Lφ2 log d δ . Thus, one can use (10) only after O(φ−2 log d) rounds. Due to this requirement, Bastani and Bayati [2020] implements the forced sampling techniques for O ( N2d4(log d)2 ) rounds, and Amani et al. [2019] forces to select arms randomly for O ( φ−2 log d ) rounds. These mandatory exploration phase empirically prevents the algorithm choosing the optimal arm. An alternative form of matrix Chernoff inequality for adapted sequences is Theorem 3 in Tropp [2011], but the bound also has a multiplicative factor of d. Instead of applying Proposition 5 to prove (9), we utilize a novel dimension-free concentration inequality stated in the following Lemma. Lemma 6. (A dimension-free concentration bound for symmetric bounded matrices.) Let ‖A‖F be a Frobenious norm of a matrix A. Let {P (τ)}tτ=1 ∈ Rd×d be the symmetric matrices adapted to a filtration {Fτ}tτ=1. For each τ = 1, . . . , t, suppose that ‖P (τ)‖F ≤ c, for some c > 0 and λmin (E [P (τ)| Fτ−1]) ≥ φ2 > 0, almost surely. For given any δ ∈ (0, 1), set λt ≥ 4 √ 2c √ t √ log 4t 2 δ . Then with probability at least 1− δ/t 2, λmin ( t∑ τ=1 P (τ) + λtI ) ≥ φ2t. (11) Lemma 6 shows that setting λt with √ t rate guarantees (9) for all t ≥ 1. We incorporate λt stated in Lemma 6 in our estimator (2), and show in Section 5 that the DR estimator regularized with λt outperforms estimators from other contextual bandit algorithms in early rounds. 
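Lemma 6 can be probed empirically. The construction below uses rank-one matrices P(τ) = x_τ x_τ^T with ‖x_τ‖₂ ≤ 1 (so ‖P(τ)‖_F ≤ 1 and c = 1) and contexts uniform on the unit ball (so φ² = 1/(d+2)); this particular matrix family is an assumption made only for the check.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, delta = 5, 100_000, 0.05
c = 1.0                                            # Frobenius bound on each P(tau)

# x_tau uniform on the unit ball, so P(tau) = x x^T satisfies the lemma's conditions
# with phi^2 = 1 / (d + 2).
g = rng.normal(size=(t, d))
x = g / np.linalg.norm(g, axis=1, keepdims=True) * rng.random(t)[:, None] ** (1.0 / d)
P_sum = x.T @ x                                    # sum of the rank-one matrices

phi2 = 1.0 / (d + 2)
lam_t = 4.0 * np.sqrt(2.0) * c * np.sqrt(t * np.log(4.0 * t ** 2 / delta))
lhs = np.linalg.eigvalsh(P_sum + lam_t * np.eye(d)).min()
print(lhs, phi2 * t, lhs >= phi2 * t)              # Lemma 6 predicts lhs >= phi^2 * t w.h.p.
```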
We obtain the bounds free of d in Lemmas 4 and 6 mainly by applying Lemma 2.3 in Lee et al. [2016] which states that any Hilbert space martingale can be reduced to R2. Thus, we can project the vector-valued (or the matrix) martingales to R2-martingales, and reduce the dimension from d (or d2) to 2. Then we apply Azuma-Hoeffding inequality just twice, instead of d times. In this way, Lemma 6 provides a novel dimension-free bound for the covariance matrix. Lemmas 4 and 6 can be applied to other works to improve the existing bounds. For example, using these Lemmas, the estimation error bound of Bastani and Bayati [2020] can be improved by a factor of log d. Proposition EC.1 of Bastani and Bayati [2020] provides an estimation error bound for the ordinary least square estimator by using Proposition 5 and bounding all values of d coordinates. By applying Lemmas 4 and 6, one does not have to deal with each coordinate and eliminate dependence on d. Using Lemma 6, we can bound the second term of the regret in (6) as follows. For j = 1, . . . , N ‖Xj(t)‖V −1t−1 ≤ ‖Xj(t)‖2 √∥∥V −1t−1∥∥2 ≤ λmin (Vt−1)−1/2 ≤ 1√φ2N(t− 1) . (12) Finally, we are ready to bound regret(t) in (6). Lemma 7. Suppose the assumptions in Theorem 1 hold. Then with probability at least 1− 2δ, regret(t) ≤ 2Cb,σ φ2 √ t− 1 √ log 12t2 δ + √ 2 φ √ N(t− 1) , (13) for all t = 2, . . . , T . Proof. Since at is shown to be super-unsaturated with high probability, we can use (6) to have regret(t) ≤ 2‖β̂t−1 − β‖2 + √ ‖Xa∗t (t)‖ 2 V −1t−1 + ‖Xat(t)‖2V −1t−1 , for all t = 2, . . . , T . We see that the first term is bounded by Theorem 3, and the second term by (12). Note that to prove Theorem 1, Lemma 6 is invoked, and the event (11) of Lemma 6 is a subset of that in (7). Therefore (13) holds with probability at least 1− 2δ instead of 1− 3δ. Details are given in supplementary materials. Lemma 7 shows that the regret at round t does not exceed a O(φ−2t−1/2) bound when at ∈ Nt, which is guaranteed in our algorithm via resampling with high probability (See Section A.3 for details). This concludes the proof of Theorem 1. 5 Simulation studies In this section, we compare the performances of the three algorithms: (i) LinTS [Agrawal and Goyal, 2013], (ii) BLTS [Dimakopoulou et al., 2019], and (iii) the proposed DRTS. We use simulated data described as follows. The number of arms N is set to 10 or 20, and the dimension of contexts d is set to 20 or 30. For each element of the contexts j = 1, · · · , d, we generate [X1j(t), · · · , XNj(t)] from a normal distribution N (µN , VN ) with mean µ10 = [−10,−8, · · · ,−2, 2, · · · , 8, 10]T , or µ20 = [−20,−18, · · · ,−2, 2, · · · , 18, 20]T , and the covariance matrix VN ∈ RN×N has VN (i, i) = 1 for every i and VN (i, k) = ρ for every i 6= k. We set ρ = 0.5 and truncate the sampled contexts to satisfy ‖Xi(t)‖2 ≤ 1. To generate the stochastic rewards, we sample ηi(t) independently from N (0, 1). Each element of β follows a uniform distribution, U(−1/ √ d, 1/ √ d). All three algorithms have v as an input parameter which controls the variance of β̃i(t). BLTS and DRTS require a positive threshold γ which truncates the selection probability. We consider v ∈ {0.001, 0.01, 0.1, 1} in all three algorithms, γ ∈ {0.01, 0.05, 0.1} for BLTS, and set γ = 1/(N + 1) in DRTS. Then we report the minimum regrets among all combinations. The regularization parameter is λt = √ t in DRTS and λt = 1 in both LinTS and BLTS. 
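The simulation design of Section 5 can be reproduced from its description; the sketch below generates one round of contexts and rewards. The way the truncation to ‖X_i(t)‖₂ ≤ 1 is carried out (rescaling rather than rejection) is an assumption, as is keeping β fixed across rounds via a separate helper.

```python
import numpy as np

def make_beta(d, rng):
    """beta with i.i.d. U(-1/sqrt(d), 1/sqrt(d)) entries, drawn once for all rounds."""
    return rng.uniform(-1.0 / np.sqrt(d), 1.0 / np.sqrt(d), size=d)

def simulate_round(beta, N=10, rho=0.5, rng=None):
    """One round of contexts and rewards following the Section 5 description.

    For each context dimension j, the vector [X_1j, ..., X_Nj] is drawn from
    N(mu_N, V_N) with unit diagonal and off-diagonal entries rho."""
    rng = np.random.default_rng() if rng is None else rng
    d = beta.shape[0]
    mu = np.concatenate([np.arange(-N, 0, 2), np.arange(2, N + 2, 2)]).astype(float)
    V = np.full((N, N), rho) + (1.0 - rho) * np.eye(N)
    X = rng.multivariate_normal(mu, V, size=d).T        # (N, d): row i = context of arm i
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # ||X_i|| <= 1
    Y = X @ beta + rng.normal(size=N)                   # rewards with N(0, 1) errors
    return X, Y
```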
To obtain an imputation estimator β̌t required in DRTS, we use ridge regression with {Xaτ (τ), Yaτ (τ)}t−1τ=1, for each round t. Other implementation details are in supplementary materials. Figure 1 shows the average of the cumulative regrets and the estimation error ‖β̂t − β‖2 of the three algorithms based on 10 replications. The figures in the two left columns show the average cumulative regret according to the number of rounds with the best set of hyperparameters for each algorithm. The total rounds are T = 20000. The figures in the third columns show the average of the estimation error ‖β̂t − β‖2. In the early stage, the estimation errors of LinTS and BLTS increase rapidly, while that of DRTS is stable. The stability of the DR estimator follows possibly by using full contexts and the regularization parameter λt = √ t. This yields a large margin of estimation error among LinTS, BLTS and DRTS, especially when the dimension is large. 6 Conclusion In this paper, we propose a novel algorithm for stochastic contextual linear bandits. Viewing the bandit problem as a missing data problem, we use the DR technique to employ all contexts including those that are not chosen. With the definition of super-unsaturated arms, we show a regret bound which only depends on the minimum eigenvalue of the sample covariance matrices. This new bound has Õ(d √ T ) rate in many practical scenarios, which is improved by a factor of √ d compared to the previous LinTS regret bounds. Simulation studies show that the proposed algorithm performs better than other LinTS algorithms in a large dimension. Acknowledgements This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No.2020R1A2C1A01011950) (Wonyoung Kim and Myunghee Cho Paik), and by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01336, Artificial Intelligence Graduate School Program(UNIST)) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No.2021R1G1A100980111) (Gi-Soo Kim). Wonyoung Kim was also supported by Hyundai Chung Mong-koo foundation.
1. What is the main contribution of the paper regarding the Thompson Sampling-based algorithm? 2. What are the strengths of the proposed algorithm compared to prior works? 3. Do you have any concerns or questions about the technical aspects of the paper, such as the proof, algorithm, and theorem? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any typos or errors in the paper that need to be addressed?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a new Thompson Sampling-based algorithm for the canonical N-armed, T-period contextual bandit problem that achieves an expected cumulative regret of O(φ^{-2}√T), where φ² is the minimum eigenvalue of the context covariance matrix. It is easily observed that φ^{-2} ≥ d, where d is the dimension of the context space. However, the authors point out that φ^{-2} = O(d) in fact holds in many practical scenarios. In such cases, therefore, the φ-dependent scaling corresponds to an improvement upon the well-known Õ(d^{3/2}√T) standard by a factor of √d. The primary algorithmic innovation driving this result is the use of a "doubly robust" model for unobserved rewards, and a ridge estimator for the underlying d-dimensional (true) parameter vector, which facilitates a regret decomposition that is amenable to the aforementioned improvement in the upper bound. Numerical studies illustrate a significant performance improvement over standard well-known algorithms for the problem. The robust performance is plausibly due to a full utilization of the entire N × d × T context stack, and use of the ridge estimator. Review The paper is well-written overall with accessible proofs. A provable √d-factor performance improvement in "typical" scenarios certainly makes it worthy of inclusion in the broader literature. Additionally, some of the technical development may be of independent interest in the analysis of related algorithms. Some broad remarks and typos are noted below: Under a similar eigenvalue positivity condition per arm, can the algorithm and results be adapted to the setting where each arm has a different β_i? Or is it a non-trivial extension? Table 1: the entry for Arm 3 at t = 1 should be Y_{a_1}(1). In Algorithm 1, inside the "for l = 1 to M_t do" loop, what if π̃_{m_t}(t) ≤ γ is always true? In this case, which arm a_t is played after this loop has finished execution? Line 181 - Theorem 1: How does the result depend on the inputs to Algorithm 1, in particular M_t? An appropriate specification of M_t seems critical to the achievability of sub-linear regret. It is unclear how this is being ensured. Where is M_t specified? Since the result is projected as the primary contribution, these points need to be appropriately addressed.
NIPS
Title Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need? Abstract Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry. Given its importance for the development of new chemical compounds, decades of research have been dedicated to this problem, but due to the large dimensionality even the best available methods do not yet reach the desired accuracy. Recently the combination of deep learning with Monte Carlo methods has emerged as a promising way to obtain highly accurate energies and moderate scaling of computational cost. In this paper we significantly contribute towards this goal by introducing a novel deep-learning architecture that achieves 40-70% lower energy error at 6x lower computational cost compared to previous approaches. Using our method we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules. We systematically break down and measure our improvements, focusing in particular on the effect of increasing physical prior knowledge. We surprisingly find that increasing the prior knowledge given to the architecture can actually decrease accuracy. 1 Introduction The challenge of the Schrödinger Equation Accurately predicting properties of molecules and materials is of utmost importance for many applications, including the development of new materials or pharmaceuticals. In principle, any property of any molecule can be calculated from its wavefunction, which is obtained by solving the Schrödinger equation. In practice, computing accurate wavefunctions and corresponding energies is computationally extremely difficult for two reasons: First, the wavefunction is a high-dimensional function, depending on all coordinates of all electrons, subjecting most methods to the curse of dimensionality. Second, the required level of accuracy is extremely high. While total energies of small molecules are typically hundreds of Hartrees, the chemically relevant energy differences are on the order of 1 milli-Hartree as depicted in Fig. 1. Decades of research have produced a plethora of methods, which all require a trade-off between accuracy and computational cost: On one end of the spectrum are approximate methods such as Hartree-Fock (HF) or Density Functional Theory (DFT), which was awarded the Nobel prize in 1998. These methods can treat thousands of particles but can often only crudely approximate chemical properties. On the other end of the spectrum are "gold-standard" methods such as FCI (Full Configuration Interaction) or CCSD(T) (Coupled Clusters Singles Doubles (Perturbative Triples)) 36th Conference on Neural Information Processing Systems (NeurIPS 2022). which yield energies that often closely agree with experiments, but can only treat up to 100 particles. Despite all these efforts, even for small molecules there do currently not exist highly accurate energy calculations. A 2020 benchmark of state-of-the-art methods for the benzene molecule found a spread of 4 mHa across different methods [1] – as a matter of fact, our results show that the absolute energies calculated in [1] are off by at least 600 mHa, due to the small basis set used in their calculations. An important characteristic of a method is the type of approximation being made: Hartree-Fock or FCI are "variational", meaning that their predicted energies at least upper-bound the ground-truth energy. 
Since a lower energy is always guaranteed to be a closer approximation of the true energy, this makes assessment of these methods straight-forward. In contrast, CCSD(T) or DFT do not have any guarantees on the accuracy of their results. The approximations resulting from such methods, while working well for many common systems, often fail for chemically challenging situations such as breaking of chemical bonds [2, 3]. Deep-learning-based variational Monte Carlo Combining deep learning and Monte Carlo methods has recently emerged as a promising new approach for solving the Schrödinger equation [4, 5]. These methods offer high accuracy, moderate scaling of computational cost with system size and obey the variational principle. Within a few years deep-learning-based methods have managed to outperform conventional high-accuracy methods for many different molecules, potentially defining a new gold-standard for high-accuracy solutions. In the Born-Oppenheimer approximation a molecule, consisting of nnuc nuclei and nel electrons, is fully described by its Hamiltonian in atomic units H = −1 2 ∑ i ∇2ri + ∑ i>j 1 |ri − rj | + ∑ I>J ZIZJ |RI −RJ | − ∑ i,I ZI |ri −RI | . Here RI , ZI , I ∈ {1, . . . , nnuc} denote the coordinates and charges of the nuclei, r = (r1, . . . , rn↑ , . . . , rnel) ∈ R3×nel denotes the set of nel Cartesian electron coordinates differentiated between n↑ spin-up and n↓ spin-down electrons. We define the inter-particle vectors rij = ri − rj and ρiJ = ri −RJ . All properties of the molecule depend on the wavefunction ψ(r), which must fulfill the antisymmetry constraint: ψ(Pr) = −ψ(r) for any permutation P of two electrons with the same spin [6]. The wavefunction ψ can be found as the solution to the Schrödinger equation Hψ = E0ψ with the ground-state energy and smallest eigenvalue E0. By the Rayleigh-Ritz principle [7], the ground-state energy and the corresponding wavefunction can be found through minimization of the loss L(ψθ) = Er∼ψ2θ(r) [ Hψθ(r) ψθ(r) ] ≥ E0 (1) for a trial wavefunction ψθ, parameterized by parameters θ. The trial function ψθ is represented by a neural network and typically has the form ψθ(r) = ndet∑ d=1 det [ Λdki(r)Ω dαi k (ri) ] k,i=1,...,nel (2) with Λdki : R3×nel → R, Ωdk : R3 → R, αi ∈ {↑, ↓}, i ∈ {1, . . . , nel}, k ∈ {1, . . . , nel}. Each determinant is taken over a nel × nel matrix, with row-indices k running over orbitals and columnindices i running over electrons. The determinant enforces antisymmetry, Ωdk are envelope functions enforcing the boundary condition lim|r|→∞ ψθ(r) = 0, and Λdki are neural networks. The local energy Hψψ can be evaluated using automatic differentiation and the loss can be minimized by gradient based methods. The computation of the expectation value in eq. 1 over the high-dimensional space R3×nel is done using Monte Carlo integration by sampling electron coordinates r distributed according to ψ2θ using the Metropolis-Hastings [8] algorithm. A thorough discussion of deep-learning-based variational Monte Carlo (DL-VMC) can be found in [9]. Related work Two major neural network architectures and their extensions have emerged throughout literature: PauliNet [9] and FermiNet [10]. PauliNet puts emphasis on maximizing physical prior knowledge, by focusing on the the envelope function. They use the output of CASSCF (Complete Active Space Self Consistent Field, a sophisticated conventional quantum-chemistry method) as Ω and use a relatively small (∼ 100k weights) neural network for Λ. 
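Given precomputed values of the orbital functions Λ and the envelopes Ω, the generic ansatz of eq. (2) and the Monte Carlo estimate of the loss (1) reduce to a few lines. The shapes and names below are assumptions; practical implementations work with log|ψ| (e.g. via slogdet) for numerical stability and obtain local energies through automatic differentiation rather than the naïve form indicated here.

```python
import numpy as np

def psi_from_orbitals(Lam, Om):
    """psi(r) = sum_d det[ Lam[d, k, i] * Om[d, k, i] ], as in Eq. (2).

    Lam: (n_det, n_el, n_el) neural-network factors Lambda^d_{ki}(r)
    Om:  (n_det, n_el, n_el) envelope factors Omega^{d, alpha_i}_k(r_i)
    Row index k runs over orbitals, column index i over electrons."""
    mats = Lam * Om                             # elementwise product per determinant
    return np.sum(np.linalg.det(mats))          # sum of dense n_el x n_el determinants

def energy_estimate(local_energies):
    """Monte Carlo estimate of the variational loss (1): the mean of H psi / psi over
    electron configurations sampled from psi^2.  The samples and local energies are
    assumed to be supplied by an external Metropolis-Hastings sampler and an
    autodiff routine, which are not shown here."""
    return float(np.mean(local_energies))
```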
FermiNet on the other hand uses a simple exponential function as envelope Ω and uses a large (∼ 700k weights) neural network for Λ. Both approaches have been applied with great success to many different systems and properties, such as energies of individual molecules [4, 10, 9], ionization energies [10], potential energy surfaces [11, 12], forces [11], excited states [13], model-systems for solids [14, 15] and actual solids [16]. Several approaches have been proposed to increase accuracy or decrease computational cost, most notably architecture simplifications [17], alternative antisymmetric layers [18], effective core potentials [19] and Diffusion Monte Carlo (DMC) [20, 21]. FermiNet commonly reaches lower (i.e. more accurate) energies than PauliNet[10], but PauliNet has been observed to converge faster [12]. It has been proposed [9] that combining the embedding of FermiNet and the physical prior knowledge of PauliNet could lead to a superior architecture. Our contribution In this work we present the counter-intuitive observation that the opposite approach might be more fruitful. By combining a PauliNet-like neural network embedding with the envelopes of FermiNet and adding several improvements to the embedding, input features, and initialization of parameter (Sec. 2), we obtain the currently best neural network architecture for the numerical solution of the electronic Schrödinger equation. Combining our new architecture with VMC we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules - both when comparing to deep-learning-based methods, as well as when comparing to classical methods (Sec. 3). Across systems we reduce energy errors by 40-100% and achieve these results with 3-4x fewer optimization epochs compared to FermiNet. In Sec. 4 we systematically break down which changes cause these improvements. We hypothesize that including too much physical prior knowledge can actually hinder optimization and thus deteriorate accuracy – we provide ample experimental evidence in Sec. 5. 2 Improved approach Similar to FermiNet, our architecture expresses Λdki as a linear combination of high-dimensional electron embeddings hLi , and the envelopes Ω dαi k as a sum of exponential functions Λdki(r) =W dαi k h L i Ω dαi k (ri) = nnuc∑ I=1 πdαikI exp(−ω dαi kI |ρiI |), (3) where W dαik , π dαi kI , ω dαi kI are trainable parameters and we enforce ω dαi kI ≥ 0. We compute these embeddings hLi by first transforming the inputs RI , ri into feature vectors h0i = [ |ρiI |, ρ̃iI ] I∈{1,...,nnuc} v0iI = [ |ρiI |, ρ̃iI ] g0ij = |rij | where [·] denotes the concatenation operation and then applying L iterations of an embedding network (Fig. 2a). The local difference vectors ρ̃iI are obtained by applying rotation matrices onto ρiI as described in Sec. 2.2. 2.1 Convolutional layers in embedding Our embedding network uses four residual neural network streams (Fig. 2b): A primary oneelectron stream that embeds a single electron, and three auxiliary streams modelling the two-particleinteractions (electrons with same spins, electrons with different spin, and electron-ion). hl+1i = A l one ( f li ) + hli g l+1 ij = A l σij ( glij ) + glij v l+1 iI = A l nuc ( vliI ) + vliI (4) Here l denotes the embedding iteration, Al denote fully connected neural networks, and glij , v l iI denote electron-electron- and electron-nucleus-embeddings. 
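Eq. (3) specifies how the entries of a single determinant are assembled from the final electron embeddings and the exponential envelopes. A sketch for one determinant index d is given below; the array layout, the per-spin indexing, and the use of an absolute value to keep ω ≥ 0 are assumptions made for readability.

```python
import numpy as np

def orbitals_and_envelopes(h, dist_el_nuc, spins, W, pi, omega):
    """Assemble Lambda and Omega of Eq. (3) for one determinant.

    h:           (n_el, emb_dim) final electron embeddings h^L_i
    dist_el_nuc: (n_el, n_nuc)   distances |rho_iI|
    spins:       (n_el,)         0 for spin-up, 1 for spin-down (selects alpha_i)
    W:           (2, n_el, emb_dim) weights W^{d, alpha}_k, one set per spin channel
    pi, omega:   (2, n_el, n_nuc)   envelope parameters per spin channel"""
    n_el = h.shape[0]
    Lam = np.empty((n_el, n_el))
    Om = np.empty((n_el, n_el))
    for i in range(n_el):
        a = spins[i]
        Lam[:, i] = W[a] @ h[i]                      # column i: all orbitals k at once
        Om[:, i] = np.sum(pi[a] * np.exp(-np.abs(omega[a]) * dist_el_nuc[i]), axis=1)
    return Lam, Om
```

Feeding the returned matrices into the determinant sketch above yields the wavefunction value for one determinant; in the full model this is repeated for every d and summed.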
We use σij = ’same’ for same-spin pairs of electrons and σij = ’diff’ for pairs of electrons with different spin. Similar to FermiNet, in each iteration we assemble the input f li to the primary stream from the auxiliary streams (Fig. 2c): f li = [ hli, 1 n↑ n↑∑ j=1 hlj , 1 n↓ nel∑ j=1+n↑ hlj , s l,el i , s l,nuc i ] . (5) Inspired by the success of SchNet [22] and the efficiency of the PauliNet embedding, we use the sum of element-wise multiplication (⊙), effectively forming a convolution, to aggregate the auxiliary two-particle streams: sl,eli = nel∑ j=1 Blσij ( glij ) ⊙Clσij ( hlj ) sl,nuci = nion∑ I=1 Blnuc ( vliI ) ⊙Clnuc ( ZembI ) (6) Eq. 5 and 6 form the core of the architecture and are the key difference between FermiNet, PauliNet and our architecture. The PauliNet architecture emphasizes two-particle interactions and essentially only uses convolutions as input features: f li = [s l,el i , s l,nuc i ]. In addition to not using the hi as input features, PauliNet also limits its effective depth by making the convolutional kernels B functions of the electron-electron distances |rij | instead of the high-dimensional embedded representations gij . The FermiNet architecture on the other hand emphasizes the one-electron stream and only uses sums over gij as input features, essentially corresponding to B l = Id, C(·) = 1. Furthermore FermiNet does not contain an explicit stream for electron-nucleus interactions. Since our architecture adequately models both the one-electron-embedding as well as the two-particle-interactions, we expect our architecture to be more expressive than either predecessor, as demonstrated in Sec. 4. 2.2 Local, invariant input features The first stage of any VMC wavefunction model is typically the computation of suitable input features from the raw electron coordinates r and nuclear coordinates {RI}. While the subsequent embedding stage could in principle take the raw coordinates, appropriate features allow to explicitly enforce symmetries and improve the model’s transferability and accuracy. Input features should have three properties: First, they should be sufficiently expressive to encode any physical wavefunction. Second, the features should be invariant under geometric transformations. Third, the features should primarily depend on the local environment of a particle, i.e. similar local geometries should generate similar local features, mostly independent of changes to the geometry far from the particle in question. Published architectures have so far not been able to address all three points: PauliNet [10] uses only distances as input features, making them invariant and local, but not sufficiently expressive, as demonstrated by [12]. FermiNet [10] uses raw distances and differences, making the inputs expressive and local, but not invariant under rotations. PESNet [12] proposes a global coordinate system along the principle axes of a molecule, making the inputs invariant and sufficiently expressive, but not local. We propose using local coordinate systems centered on every nucleus and evaluating the electronnuclei differences in these local coordinate systems. Effectively this amounts to applying a rotation a.) Ethene b.) Hypothetical chain of atoms Global coordinates (PESNet) Local coordinates (this work) Figure 3: Visualization of resulting coordinate systems for 2 example molecules: a) Ethene b) A hypothetical bent chain of atoms. matrix UJ to the raw electron-nucleus differences: ρ̃iJ = UJρiJ . 
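One iteration of the embedding network (Eqs. 4-6) combines the one-electron stream with convolution-like aggregations of the electron-electron and electron-nucleus streams. The sketch below updates the one-electron stream only; the auxiliary streams g and v receive analogous residual updates. The tanh MLP stand-ins, the parameter layout, and the assumption that all networks output vectors of the embedding dimension are illustrative choices, not the authors' exact architecture.

```python
import numpy as np

def mlp(x, W, b):
    return np.tanh(x @ W + b)              # stand-in for the networks A, B, C

def embedding_iteration(h, g, v, z_emb, spins, params):
    """One-electron stream update h^{l+1} = A(f^l) + h^l with f^l from Eqs. (5)-(6).

    h: (n_el, k) electron embeddings, g: (n_el, n_el, k_g) pair embeddings,
    v: (n_el, n_nuc, k_v) electron-nucleus embeddings, z_emb: (n_nuc, k_z) nuclear
    embeddings, spins: (n_el,) with 0 = up, 1 = down.  `params` maps names such as
    "B_same" to (W, b) pairs whose output dimension equals k."""
    n_el = h.shape[0]
    up = spins == 0
    # Electron-electron convolution s_el (Eq. 6), split by same/different spin.
    s_el = np.zeros_like(h)
    for i in range(n_el):
        for j in range(n_el):
            key = "same" if spins[i] == spins[j] else "diff"
            s_el[i] += mlp(g[i, j], *params["B_" + key]) * mlp(h[j], *params["C_" + key])
    # Electron-nucleus convolution s_nuc (Eq. 6).
    s_nuc = (mlp(v, *params["B_nuc"]) * mlp(z_emb, *params["C_nuc"])[None]).sum(axis=1)
    # Input features of Eq. (5): own embedding, spin-channel means, and both convolutions.
    f = np.concatenate([h,
                        np.tile(h[up].mean(axis=0), (n_el, 1)),
                        np.tile(h[~up].mean(axis=0), (n_el, 1)),
                        s_el, s_nuc], axis=1)
    return mlp(f, *params["A_one"]) + h    # residual update of Eq. (4)
```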
As opposed to [12] where U is constant for all nuclei, in our approach UJ can be different for every nucleus. These electronnucleus differences ρ̃iJ are invariant under rotations, contain all the information contained in raw Cartesian coordinates and depend primarily on the local environment of an atom. To compute the 3x3 rotation matrices UJ , we first run a single Hartree-Fock calculation using a minimal basis set to obtain a density matrix D. For each atom J we choose the 3x3 block of the density matrix which corresponds to the 3 p-orbitals of the atom J and compute its eigenvectors UJ = eig(DJ). To make our coordinate system unique we sort the eigenvectors of DJ by their corresponding eigenvalues. If the eigenvalues are degenerate, we pick the rotation of the eigenvectors that maximizes the overlap with the coordinate-axes of the previous nucleus. Fig. 3 shows the resulting coordinate systems spanned by UJ . Note that the local coordinate system generally depicts more physically meaningful directions such as "along the chain". We find that these local coordinates slightly increase the accuracy for single geometries, but more importantly we expect the wavefunction to generalize better across different molecules or geometries. This should improve the accuracy of approaches that attempt to learn wavefunctions for multiple geometries at once [11, 12]. 2.3 Initialization of orbital envelope weights When considering a single atom, the entries of the wavefunction determinant have essentially the form Λki(r) exp (−ωk|ρiI |) . In [10], the exponential envelope was purely motivated by the boundary condition that the orbitals must decay to zero, and initialization with ωk = 1 was proposed. However, when comparing this ansatz to analytical solutions, an interesting parallel can be found: Analytical solutions to the Schrödinger equation for atoms with a single electron – the only systems that have analytical solutions – are of the form Λ̃k(ρiI) exp ( − Z nk |ρiI | ) , where Λ̃k(ρiI) is a product of a Laguerre polynomial and a spherical harmonic, and nk ∈ N+ is known as the principal quantum number. This suggests ωk ≈ Z/nk, which we also find when analyzing the weights of a fully trained wavefunction. When initializing with ωk = Z/nk instead of ωk = 1, we observe faster convergence, lower final energies, and lower variance of the energies (Sec. 4). The effect is most notable for molecules containing nuclei with large Z, where Z/nk ≫ 1. 2.4 Improved hyperparameters Beyond the improved neural network architecture we fine-tuned the hyperparameters to reduce the number of optimization steps required for convergence. Starting from the hyperparameters proposed by [10], we increased the norm constrain by 3x for the second-order optimizer KFAC [23, 24], decreased learning rate by 0.5x, and decreased learning rate decay time by 0.4x. We observe that these changes stabilize the optimization and enable usage of 50% fewer Monte Carlo walkers, which results in ∼2x faster optimization and reduced memory allocation. A complete set of hyperparameters can be found in appendix B. 3 Results of improved approach We evaluated the accuracy of our approach by comparing our computed energies against the most accurate references available in the literature. Fig. 4 compares our energies against variational methods – for which lower energies are guaranteed to be more accurate – as well as non-variational high-accuracy methods. 
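The two physics-informed ingredients of Sections 2.2 and 2.3, the per-nucleus local frames from the p-orbital block of a Hartree-Fock density matrix and the ω = Z/n_k envelope initialization, can both be sketched compactly. The eigenvalue sort order, the assignment of principal quantum numbers to orbital indices, and the helper names are assumptions; the tie-breaking rule for degenerate eigenvalues described in the text is omitted.

```python
import numpy as np

def local_frame(D_p_block):
    """Rotation matrix U_J from the 3x3 p-orbital block of the HF density matrix;
    eigenvectors are sorted by eigenvalue (descending order is assumed here)."""
    w, U = np.linalg.eigh(D_p_block)
    return U[:, np.argsort(-w)].T          # rows form the local coordinate axes

def local_feature(rho_iJ, U_J):
    """rho_tilde_iJ = U_J rho_iJ for one electron-nucleus difference vector."""
    return U_J @ rho_iJ

def envelope_init(Z, n_orbitals):
    """Initialize omega_k = Z / n_k.  Assigning n^2 spatial orbitals to each
    principal quantum number n is an assumption; the paper does not spell out
    its exact assignment of n_k to the orbital index k in this excerpt."""
    n_k = np.concatenate([np.full(n * n, n) for n in range(1, 8)])
    return Z / n_k[:n_orbitals].astype(float)
```

With this initialization the orbitals of heavy atoms start out much more strongly localized than with ω_k = 1, which is the stated reason for the faster and more stable early optimization observed for large Z.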
We find that across many different systems (small and large atoms, molecules at equilibrium geometry, molecules in transition states), our approach yields substantially lower – and thus more accurate – energies than previous variational results. Across all tested systems, we outperform almost all existing variational methods, both deep-learning-based methods as well as classical ones. When comparing to high-accuracy FermiNet VMC calculations, we not only reach substantially lower energies, but also do so using 3-4x fewer training steps, with each step being 40% faster (cf. appendix C). Comparing to a concurrently published Diffusion Monte Carlo approach, which used ∼10x more computational resources, we achieve similar or better accuracy for molecules like N2 and cyclobutadiene and slightly lower accuracy for benzene. Non-variational methods (e.g. CCSD(T)) yield slightly lower energies than our calculations for some molecules, but since those methods do not provide upper bounds or uncertainty guarantees they do not provide a ground-truth. For many applications not only absolute energies are important, but energy differences between different molecules or geometries are of interest, for example to determine the energy required to break a chemical bond. A particularly difficult challenge is the dissociation of the N2 molecule, i.e. the energy of an N2 molecule at different bond lengths (inset Fig. 5). Even methods that are generally regarded as highly accurate, such as CCSD(T), predict energies that deviate far from experimental data at bond-lengths from 2.5 - 4 bohr. Fig. 5 depicts this deviation between experimental and computed energy for our method and the best available reference calculations. We find that our results are closer to the experimental absolute energies than all previous work, and are similar to concurrently published FermiNet-DMC results which require 5-10x more computational resources. When comparing relative energies, our approach outperforms all other deep-learning-based methods and CCSD(T), and is only beaten by the highly specialized r12-MR-ACPF method [28]. Similar to absolute energies, we also find that our relative energies converge substantially faster than for other deep-learning-based methods, with relative energies being almost fully converged after 50k epochs. 4 Ablation study To investigate which specific changes lead to the observed improvements in accuracy, we start from the improved FermiNet architecture proposed in [17] and incrementally add improvements in the following order: First, we use dense nel × nel determinants introduced by the FermiNet authors [10, 17] in their GitHub repository and described in [18] instead of block-diagonal determinants. This generalization increases computational cost and parameter count (cf. appendix C) but has been found to better describe the wavefunction’s nodal surface and thus increase expressiveness of the ansatz. Second, we change hyperparameters as described in Sec. 2.4, which increases throughput by ∼ 2x. Third, we augment the electron embedding using our new SchNet-like neural network architecture described in Sec. 2.1. This leads to a moderate increase in parameter count and computational cost. Fourth, we switch to local, invariant input features as described in Sec. 2.2 and remove the electronelectron difference vectors rij as inputs. Lastly we switch to initializing ωdkI = Z/nk as described in Sec. 2.3, resulting in our proposed final method. 
We note that the accuracy gains of these changes are not fully independent of each other and the relative attribution depends on the order in which they are applied: Earlier changes will generally generate larger energy improvements compared to later changes. At each step we compute total energies for three different molecules: ethene, N2 at the challenging bond-length of 4.0 bohr, and the K-atom. Fig. 6 depicts the energy of our implementation of FermiNet, and the energy change caused by each subsequent improvement. Each experiment was repeated two times with different RNG seeds (appendix D), the errorbars depict the spread in energy. Overall we find that all our changes combined yield a ∼3-20x improvement in the energy error. For ethene, the dominant contribution (3.7 mHa) comes from improved hyperparameters, which lead to the results being mostly converged after 50k epochs vs. the original settings which require 200k epochs for convergence. Using a lower learning rate in combination with a larger gradient-norm-constraint ensures that more optimization steps are taken according to curvature estimated by KFAC and fewer steps are clipped by the gradient-norm-constraint. Architectural improvements (embedding and input features) lower the energy error by additional 1.4 mHa. Because our embedding is a strict generalization of both FermiNet and PauliNet, our ansatz is more expressive and can therefore reach lower energies than previous ansätze. For N2 it has already been observed that a single dense determinant can outperform models with multiple block-diagonal determinants [18]. We find the corresponding result that 32 dense determinants substantially lower the energy relative to an ansatz with 32 block-diagonal determinants. Comparing N2 to ethene, we observe larger contributions from our architectural improvements and smaller contributions from improved hyperparameters. For the K atom, the overall gains are largest, totalling 60mHa, with substantial accuracy gains from all improvements. Since K has a much larger nuclear charge (Z=19) than the constituents of ethene (Z=1,6) and N2 (Z=7), also the physics-inspired initialization of the envelope parameters yields a substantial contribution. This improved initialization leads to a better initial guess for the wavefunction, which not only reduces the number of required optimization steps, but also leads to more accurate initial sampling. 5 Incorporating prior knowledge To further understand the effect of incorporating prior knowledge into neural network architectures for physical problems as the electronic Schrödinger equation, we examined two distinct ways of increasing prior information in our model: First, by including a built-in approximate physical model, analogous to PauliNet. Second, by increasing the number of pre-training steps to more closely match a reference wavefunction before starting the optimization. Explicitly include CASSCF PauliNet maximizes the physical prior information by computing the envelopes Ω with CASSCF, a sophisticated conventional method, and explicitly enforcing the Kato cusp conditions [30]. Starting from our proposed architecture, we modified our approach step-by-step until we arrived at a PauliNet-like architecture. Fig. 7a shows the energies of an NH3 molecule trained for 50k epochs at each step. First, we switch from dense determinants to block-diagonal determinants as used by the original PauliNet, leading to small loss in accuracy. 
Second, we exchange our embedding for the PauliNet-like embedding using the hyperparameters proposed in [11], leading to a substantial loss in accuracy, presumably caused by a loss in expressiveness. Next, we replace the simple exponential envelopes by the more physically inspired CASSCF-envelopes, causing a large loss in accuracy. We then remove the vector ri −RI as input feature (keeping only its absolute value) as done in the original PauliNet architecture [9]. This again deteriorates accuracy, presumably due to enforcing rotational invariance which is too restrictive of a symmetry class as pointed out by [12]. Lastly we replace the electronelectron distances |rij | (which are not smooth at rij = 0 and thus lead to cusps) by smooth, cuspless radial basis functions as input feature and add an explicit term to enforce the electron-electron cusp condition. Since the CASSCF-envelopes are designed to fulfill the electron-ion cusp condition, this change leads to an enforced cusp condition, slightly improving the energy accuracy. The largest loss in accuracy throughout these changes is caused by introducing the CASSCF-envelopes, suggesting that they introduce a strong bias of the wavefunction that cannot be easily overcome during training. Fig. 7b shows that architectures using exponential envelopes converge to lower absolute energies compared to the CASSCF-based PauliNet and outperform PauliNet already after ∼5000 epochs. Increase pre-training accuracy Before starting the unsupervised variational optimization of the wavefunction, we run a short supervised pre-training of the wavefunction to roughly match a given reference wavefunction. This is computationally inexpensive because it only requires evaluation of ψθ and a back-propagation step to update the neural network weights θ but not the second derivative of the Hamiltonian. If the reference method yields a decent approximation to the true wavefunction, this pre-training significantly speeds-up optimization and avoids unstable parameter regimes [10]. To incorporate more prior knowledge, one could either use a more sophisticated reference method (e.g. CASSCF instead of HF) or increase the number of pre-training steps. In Fig. 7c we pre-trained the wavefunction with a block diagonal determinant for the NH3 molecule using a CASSCF and Hartree-Fock reference. We increased pre-training iteration steps and evaluated the energy after subsequent 20k and 50k variational optimization epochs, each run was repeated with five different seeds. Increasing the number of pre-training steps initially increases accuracy – since it provides a better starting point for the subsequent variational optimization – but when increasing pre-training beyond 20k steps, accuracy deteriorates for both methods. Surprisingly, we observe a drop in accuracy when using CASSCF as a reference method compared to the simpler Hartree-Fock method. This effect is even more pronounced when increasing the number of pre-training steps. It suggests that excessive pre-training introduces a bias that is hard to overcome during variational optimization, similarly to a built-in reference method. 6 Discussion and Limitations Discussion We find that our approach yields substantially more accurate absolute energies than all previous work – both classical as well as deep-learning-based – and that we reach these accuracies 4-6x faster than the next best method (FermiNet). 
Especially for larger systems, such as 4th row atoms or the amino acid glycine, we outperform conventional "gold-standard" methods like MRCI-F12(Q) by ∼100 mHa. This corroborates the fact that deep-learning-based methods are emerging as a new gold-standard in computational chemistry and showcases the immense potential of machinelearning-based methods in the natural sciences. A concurrent work [21] was able to achieve similar accuracies by applying Diffusion Monte Carlo (DMC) on top of a FermiNet VMC calculation, highlighting the potential of deep-learning Monte Carlo methods. However, [21] required ∼10x more computational resources and their VMC results – already by themselves 8x more expensive then our calculations – are consistently inferior to our results. This showcases a promising route towards further improvements by using our substantially cheaper and more accurate VMC results as a starting point for a DMC calculation. Regarding the question of how much physics to include in the model, we find varying results. For exact physical constraints, such as symmetries or the cusp conditions, inclusion in the model generally appears to be helpful. However for prior knowledge from existing approximate solutions (such as CASSCF) the situation is more subtle. On the one hand, soft physical guidance such as short supervised pre-training or physics-inspired weight initialization accelerates optimization. On the other hand, we show empirically that increasing physical prior knowledge, e.g. by incorporating CASSCF or extensive supervised pre-training, does not necessarily increase accuracy, but can in fact introduce detrimental biases that are hard to overcome during wavefunction optimization. Limitations and outlook Despite the proposed improvements and favorable scaling of the method, computation of energies for large molecules still takes days of GPU-time on current hardware. While the same holds true for conventional high-accuracy approaches, substantial speed-ups are still required to make DL-VMC more accessible for practitioners. Additionally, when increasing the nuclear charges, the wavefunction becomes increasingly localised, which leads to a reduction in average Monte Carlo stepsize and potentially correlated samples. We circumvent this effect for 4th row atoms by increasing the number of intermediate Monte Carlo steps, but further research into Monte Carlo sampling methods [31, 32] is required to fully address this issue. Despite our improvements for the accuracy of energy differences between different molecules or geometries, DLVMC is still outperformed by other, computationally cheaper methods in some cases. Initial research into the regularity of the wavefunction across different molecules [11, 12] provides a promising route to improvements. We note in passing that thanks to the local coordinate input features, our architecture fulfills the required rotational invariance required for these approaches. 7 Code availability The code alongside a detailed documentation is available as part of the DeepErwin code package on the Python Package Index (PyPI) and github (https://github.com/mdsunivie/deeperwin) under the MIT license. 8 Acknowledgements We gratefully acknowledge financial support from the following grants: Austrian Science Fund FWF Project I 3403 (P.G.), WWTF-ICT19-041 (L.G.). The computational results have been achieved using the Vienna Scientific Cluster (VSC). 
7 Code availability The code, alongside detailed documentation, is available as part of the DeepErwin code package on the Python Package Index (PyPI) and GitHub (https://github.com/mdsunivie/deeperwin) under the MIT license. 8 Acknowledgements We gratefully acknowledge financial support from the following grants: Austrian Science Fund FWF Project I 3403 (P.G.), WWTF-ICT19-041 (L.G.). The computational results have been achieved using the Vienna Scientific Cluster (VSC). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Additionally, we thank Nicholas Gao for providing his results and data, and Rafael Reisenhofer for providing valuable feedback on the manuscript.
1. What is the focus of the paper, and how does it build upon prior works? 2. What are the strengths of the proposed approach, particularly in terms of its performance and thorough analysis? 3. What are the weaknesses of the paper, especially regarding the clarity of the method description and notation? 4. Do you have any suggestions for additional studies that could enhance the understanding of the proposed method? 5. Are there any limitations to the approach that should be acknowledged?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper An improved method to approximately solve the Schroedinger equation is described, which combines ideas from the PauliNet and FermiNet papers. A variety of ablation studies are performed. Good performance is achieved. Strengths And Weaknesses Strengths:
- good results
- thorough study and recombination of approaches of prior works, without reinventing the wheel
- ablation studies
- no grandiose claims
- good discussion of limitations
Weaknesses:
- I found the description of the method quite unclear, and some notation was not very clearly defined where it appeared in the text
- one could argue the paper is "only" a recombination of prior works, but if we take this argument, AlexNet would have needed to be rejected as well.
Questions Would a study of the transition states of the butadiene system (as in the PauliNet paper) be instructive? Limitations yes
NIPS
1. What are the key contributions and strengths of the paper regarding variational Monte Carlo methods? 2. What are the weaknesses or limitations of the paper, particularly concerning its relevance to the broader machine learning community? 3. Are there any questions or concerns regarding the paper's mathematical formulations, experimental design, or results? 4. Do the authors provide sufficient citations and references for the principles and techniques they employ? 5. Are there any aspects of the paper that could benefit from further explanation or clarification?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors present and analyze a list of improvements that significantly increase accuracy and reduce the computational cost of variational Monte Carlo methods based on deep learning. Strengths And Weaknesses Strengths Overall, the paper is very well-written and clear, even for someone with little domain expertise. The introduction is helpful and allows non-experts to onboard (to the extent one can reasonably expect). The key contributions are laid out. Differences and similarities to previous methods (especially PauliNet) are exhibited. Thoughtful experimental design. Impressive experimental results. Weaknesses A (minor) weakness is the lack of immediate relevance of the model improvements to the broader ML community. Questions
- The units used in the equation before Eq. (1) should be clarified.
- The (probably well-known) Rayleigh-Ritz principle mentioned in L57 warrants a citation for the interested reader.
- How is Hψ(r) evaluated?
- In Eq. (2), k seems to be, next to i, an index for an electron. In the previous equations, you use j. Why?
- The concrete form of Eq. (1) is not obvious. I assume that λ is actually a matrix of shape nel × nel?
- Below Eq. (3), you have another k. Is that related to the k from before?
- I didn't fully understand the challenges you (and the community, I assume) are facing in Section 2.2. How are these requirements different from, let's say, an ML force field?
- Do you mention anywhere the total computational cost of each method?
Limitations The authors discuss the limitations of their work.
NIPS
Title Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need? Abstract Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry. Given its importance for the development of new chemical compounds, decades of research have been dedicated to this problem, but due to the large dimensionality even the best available methods do not yet reach the desired accuracy. Recently the combination of deep learning with Monte Carlo methods has emerged as a promising way to obtain highly accurate energies and moderate scaling of computational cost. In this paper we significantly contribute towards this goal by introducing a novel deep-learning architecture that achieves 40-70% lower energy error at 6x lower computational cost compared to previous approaches. Using our method we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules. We systematically break down and measure our improvements, focusing in particular on the effect of increasing physical prior knowledge. We surprisingly find that increasing the prior knowledge given to the architecture can actually decrease accuracy. 1 Introduction The challenge of the Schrödinger Equation Accurately predicting properties of molecules and materials is of utmost importance for many applications, including the development of new materials or pharmaceuticals. In principle, any property of any molecule can be calculated from its wavefunction, which is obtained by solving the Schrödinger equation. In practice, computing accurate wavefunctions and corresponding energies is computationally extremely difficult for two reasons: First, the wavefunction is a high-dimensional function, depending on all coordinates of all electrons, subjecting most methods to the curse of dimensionality. Second, the required level of accuracy is extremely high. While total energies of small molecules are typically hundreds of Hartrees, the chemically relevant energy differences are on the order of 1 milli-Hartree as depicted in Fig. 1. Decades of research have produced a plethora of methods, which all require a trade-off between accuracy and computational cost: On one end of the spectrum are approximate methods such as Hartree-Fock (HF) or Density Functional Theory (DFT), which was awarded the Nobel prize in 1998. These methods can treat thousands of particles but can often only crudely approximate chemical properties. On the other end of the spectrum are "gold-standard" methods such as FCI (Full Configuration Interaction) or CCSD(T) (Coupled Clusters Singles Doubles (Perturbative Triples)) 36th Conference on Neural Information Processing Systems (NeurIPS 2022). which yield energies that often closely agree with experiments, but can only treat up to 100 particles. Despite all these efforts, even for small molecules there do currently not exist highly accurate energy calculations. A 2020 benchmark of state-of-the-art methods for the benzene molecule found a spread of 4 mHa across different methods [1] – as a matter of fact, our results show that the absolute energies calculated in [1] are off by at least 600 mHa, due to the small basis set used in their calculations. An important characteristic of a method is the type of approximation being made: Hartree-Fock or FCI are "variational", meaning that their predicted energies at least upper-bound the ground-truth energy. 
Since a lower energy is always guaranteed to be a closer approximation of the true energy, this makes assessment of these methods straight-forward. In contrast, CCSD(T) or DFT do not have any guarantees on the accuracy of their results. The approximations resulting from such methods, while working well for many common systems, often fail for chemically challenging situations such as breaking of chemical bonds [2, 3]. Deep-learning-based variational Monte Carlo Combining deep learning and Monte Carlo methods has recently emerged as a promising new approach for solving the Schrödinger equation [4, 5]. These methods offer high accuracy, moderate scaling of computational cost with system size and obey the variational principle. Within a few years deep-learning-based methods have managed to outperform conventional high-accuracy methods for many different molecules, potentially defining a new gold-standard for high-accuracy solutions. In the Born-Oppenheimer approximation a molecule, consisting of nnuc nuclei and nel electrons, is fully described by its Hamiltonian in atomic units H = −1 2 ∑ i ∇2ri + ∑ i>j 1 |ri − rj | + ∑ I>J ZIZJ |RI −RJ | − ∑ i,I ZI |ri −RI | . Here RI , ZI , I ∈ {1, . . . , nnuc} denote the coordinates and charges of the nuclei, r = (r1, . . . , rn↑ , . . . , rnel) ∈ R3×nel denotes the set of nel Cartesian electron coordinates differentiated between n↑ spin-up and n↓ spin-down electrons. We define the inter-particle vectors rij = ri − rj and ρiJ = ri −RJ . All properties of the molecule depend on the wavefunction ψ(r), which must fulfill the antisymmetry constraint: ψ(Pr) = −ψ(r) for any permutation P of two electrons with the same spin [6]. The wavefunction ψ can be found as the solution to the Schrödinger equation Hψ = E0ψ with the ground-state energy and smallest eigenvalue E0. By the Rayleigh-Ritz principle [7], the ground-state energy and the corresponding wavefunction can be found through minimization of the loss L(ψθ) = Er∼ψ2θ(r) [ Hψθ(r) ψθ(r) ] ≥ E0 (1) for a trial wavefunction ψθ, parameterized by parameters θ. The trial function ψθ is represented by a neural network and typically has the form ψθ(r) = ndet∑ d=1 det [ Λdki(r)Ω dαi k (ri) ] k,i=1,...,nel (2) with Λdki : R3×nel → R, Ωdk : R3 → R, αi ∈ {↑, ↓}, i ∈ {1, . . . , nel}, k ∈ {1, . . . , nel}. Each determinant is taken over a nel × nel matrix, with row-indices k running over orbitals and columnindices i running over electrons. The determinant enforces antisymmetry, Ωdk are envelope functions enforcing the boundary condition lim|r|→∞ ψθ(r) = 0, and Λdki are neural networks. The local energy Hψψ can be evaluated using automatic differentiation and the loss can be minimized by gradient based methods. The computation of the expectation value in eq. 1 over the high-dimensional space R3×nel is done using Monte Carlo integration by sampling electron coordinates r distributed according to ψ2θ using the Metropolis-Hastings [8] algorithm. A thorough discussion of deep-learning-based variational Monte Carlo (DL-VMC) can be found in [9]. Related work Two major neural network architectures and their extensions have emerged throughout literature: PauliNet [9] and FermiNet [10]. PauliNet puts emphasis on maximizing physical prior knowledge, by focusing on the the envelope function. They use the output of CASSCF (Complete Active Space Self Consistent Field, a sophisticated conventional quantum-chemistry method) as Ω and use a relatively small (∼ 100k weights) neural network for Λ. 
FermiNet on the other hand uses a simple exponential function as envelope Ω and uses a large (∼ 700k weights) neural network for Λ. Both approaches have been applied with great success to many different systems and properties, such as energies of individual molecules [4, 10, 9], ionization energies [10], potential energy surfaces [11, 12], forces [11], excited states [13], model-systems for solids [14, 15] and actual solids [16]. Several approaches have been proposed to increase accuracy or decrease computational cost, most notably architecture simplifications [17], alternative antisymmetric layers [18], effective core potentials [19] and Diffusion Monte Carlo (DMC) [20, 21]. FermiNet commonly reaches lower (i.e. more accurate) energies than PauliNet[10], but PauliNet has been observed to converge faster [12]. It has been proposed [9] that combining the embedding of FermiNet and the physical prior knowledge of PauliNet could lead to a superior architecture. Our contribution In this work we present the counter-intuitive observation that the opposite approach might be more fruitful. By combining a PauliNet-like neural network embedding with the envelopes of FermiNet and adding several improvements to the embedding, input features, and initialization of parameter (Sec. 2), we obtain the currently best neural network architecture for the numerical solution of the electronic Schrödinger equation. Combining our new architecture with VMC we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules - both when comparing to deep-learning-based methods, as well as when comparing to classical methods (Sec. 3). Across systems we reduce energy errors by 40-100% and achieve these results with 3-4x fewer optimization epochs compared to FermiNet. In Sec. 4 we systematically break down which changes cause these improvements. We hypothesize that including too much physical prior knowledge can actually hinder optimization and thus deteriorate accuracy – we provide ample experimental evidence in Sec. 5. 2 Improved approach Similar to FermiNet, our architecture expresses Λdki as a linear combination of high-dimensional electron embeddings hLi , and the envelopes Ω dαi k as a sum of exponential functions Λdki(r) =W dαi k h L i Ω dαi k (ri) = nnuc∑ I=1 πdαikI exp(−ω dαi kI |ρiI |), (3) where W dαik , π dαi kI , ω dαi kI are trainable parameters and we enforce ω dαi kI ≥ 0. We compute these embeddings hLi by first transforming the inputs RI , ri into feature vectors h0i = [ |ρiI |, ρ̃iI ] I∈{1,...,nnuc} v0iI = [ |ρiI |, ρ̃iI ] g0ij = |rij | where [·] denotes the concatenation operation and then applying L iterations of an embedding network (Fig. 2a). The local difference vectors ρ̃iI are obtained by applying rotation matrices onto ρiI as described in Sec. 2.2. 2.1 Convolutional layers in embedding Our embedding network uses four residual neural network streams (Fig. 2b): A primary oneelectron stream that embeds a single electron, and three auxiliary streams modelling the two-particleinteractions (electrons with same spins, electrons with different spin, and electron-ion). hl+1i = A l one ( f li ) + hli g l+1 ij = A l σij ( glij ) + glij v l+1 iI = A l nuc ( vliI ) + vliI (4) Here l denotes the embedding iteration, Al denote fully connected neural networks, and glij , v l iI denote electron-electron- and electron-nucleus-embeddings. 
We use σij = ’same’ for same-spin pairs of electrons and σij = ’diff’ for pairs of electrons with different spin. Similar to FermiNet, in each iteration we assemble the input f li to the primary stream from the auxiliary streams (Fig. 2c): f li = [ hli, 1 n↑ n↑∑ j=1 hlj , 1 n↓ nel∑ j=1+n↑ hlj , s l,el i , s l,nuc i ] . (5) Inspired by the success of SchNet [22] and the efficiency of the PauliNet embedding, we use the sum of element-wise multiplication (⊙), effectively forming a convolution, to aggregate the auxiliary two-particle streams: sl,eli = nel∑ j=1 Blσij ( glij ) ⊙Clσij ( hlj ) sl,nuci = nion∑ I=1 Blnuc ( vliI ) ⊙Clnuc ( ZembI ) (6) Eq. 5 and 6 form the core of the architecture and are the key difference between FermiNet, PauliNet and our architecture. The PauliNet architecture emphasizes two-particle interactions and essentially only uses convolutions as input features: f li = [s l,el i , s l,nuc i ]. In addition to not using the hi as input features, PauliNet also limits its effective depth by making the convolutional kernels B functions of the electron-electron distances |rij | instead of the high-dimensional embedded representations gij . The FermiNet architecture on the other hand emphasizes the one-electron stream and only uses sums over gij as input features, essentially corresponding to B l = Id, C(·) = 1. Furthermore FermiNet does not contain an explicit stream for electron-nucleus interactions. Since our architecture adequately models both the one-electron-embedding as well as the two-particle-interactions, we expect our architecture to be more expressive than either predecessor, as demonstrated in Sec. 4. 2.2 Local, invariant input features The first stage of any VMC wavefunction model is typically the computation of suitable input features from the raw electron coordinates r and nuclear coordinates {RI}. While the subsequent embedding stage could in principle take the raw coordinates, appropriate features allow to explicitly enforce symmetries and improve the model’s transferability and accuracy. Input features should have three properties: First, they should be sufficiently expressive to encode any physical wavefunction. Second, the features should be invariant under geometric transformations. Third, the features should primarily depend on the local environment of a particle, i.e. similar local geometries should generate similar local features, mostly independent of changes to the geometry far from the particle in question. Published architectures have so far not been able to address all three points: PauliNet [10] uses only distances as input features, making them invariant and local, but not sufficiently expressive, as demonstrated by [12]. FermiNet [10] uses raw distances and differences, making the inputs expressive and local, but not invariant under rotations. PESNet [12] proposes a global coordinate system along the principle axes of a molecule, making the inputs invariant and sufficiently expressive, but not local. We propose using local coordinate systems centered on every nucleus and evaluating the electronnuclei differences in these local coordinate systems. Effectively this amounts to applying a rotation a.) Ethene b.) Hypothetical chain of atoms Global coordinates (PESNet) Local coordinates (this work) Figure 3: Visualization of resulting coordinate systems for 2 example molecules: a) Ethene b) A hypothetical bent chain of atoms. matrix UJ to the raw electron-nucleus differences: ρ̃iJ = UJρiJ . 
As opposed to [12] where U is constant for all nuclei, in our approach UJ can be different for every nucleus. These electronnucleus differences ρ̃iJ are invariant under rotations, contain all the information contained in raw Cartesian coordinates and depend primarily on the local environment of an atom. To compute the 3x3 rotation matrices UJ , we first run a single Hartree-Fock calculation using a minimal basis set to obtain a density matrix D. For each atom J we choose the 3x3 block of the density matrix which corresponds to the 3 p-orbitals of the atom J and compute its eigenvectors UJ = eig(DJ). To make our coordinate system unique we sort the eigenvectors of DJ by their corresponding eigenvalues. If the eigenvalues are degenerate, we pick the rotation of the eigenvectors that maximizes the overlap with the coordinate-axes of the previous nucleus. Fig. 3 shows the resulting coordinate systems spanned by UJ . Note that the local coordinate system generally depicts more physically meaningful directions such as "along the chain". We find that these local coordinates slightly increase the accuracy for single geometries, but more importantly we expect the wavefunction to generalize better across different molecules or geometries. This should improve the accuracy of approaches that attempt to learn wavefunctions for multiple geometries at once [11, 12]. 2.3 Initialization of orbital envelope weights When considering a single atom, the entries of the wavefunction determinant have essentially the form Λki(r) exp (−ωk|ρiI |) . In [10], the exponential envelope was purely motivated by the boundary condition that the orbitals must decay to zero, and initialization with ωk = 1 was proposed. However, when comparing this ansatz to analytical solutions, an interesting parallel can be found: Analytical solutions to the Schrödinger equation for atoms with a single electron – the only systems that have analytical solutions – are of the form Λ̃k(ρiI) exp ( − Z nk |ρiI | ) , where Λ̃k(ρiI) is a product of a Laguerre polynomial and a spherical harmonic, and nk ∈ N+ is known as the principal quantum number. This suggests ωk ≈ Z/nk, which we also find when analyzing the weights of a fully trained wavefunction. When initializing with ωk = Z/nk instead of ωk = 1, we observe faster convergence, lower final energies, and lower variance of the energies (Sec. 4). The effect is most notable for molecules containing nuclei with large Z, where Z/nk ≫ 1. 2.4 Improved hyperparameters Beyond the improved neural network architecture we fine-tuned the hyperparameters to reduce the number of optimization steps required for convergence. Starting from the hyperparameters proposed by [10], we increased the norm constrain by 3x for the second-order optimizer KFAC [23, 24], decreased learning rate by 0.5x, and decreased learning rate decay time by 0.4x. We observe that these changes stabilize the optimization and enable usage of 50% fewer Monte Carlo walkers, which results in ∼2x faster optimization and reduced memory allocation. A complete set of hyperparameters can be found in appendix B. 3 Results of improved approach We evaluated the accuracy of our approach by comparing our computed energies against the most accurate references available in the literature. Fig. 4 compares our energies against variational methods – for which lower energies are guaranteed to be more accurate – as well as non-variational high-accuracy methods. 
We find that across many different systems (small and large atoms, molecules at equilibrium geometry, molecules in transition states), our approach yields substantially lower – and thus more accurate – energies than previous variational results. Across all tested systems, we outperform almost all existing variational methods, both deep-learning-based methods as well as classical ones. When comparing to high-accuracy FermiNet VMC calculations, we not only reach substantially lower energies, but also do so using 3-4x fewer training steps, with each step being 40% faster (cf. appendix C). Comparing to a concurrently published Diffusion Monte Carlo approach, which used ∼10x more computational resources, we achieve similar or better accuracy for molecules like N2 and cyclobutadiene and slightly lower accuracy for benzene. Non-variational methods (e.g. CCSD(T)) yield slightly lower energies than our calculations for some molecules, but since those methods do not provide upper bounds or uncertainty guarantees they do not provide a ground-truth. For many applications not only absolute energies are important, but energy differences between different molecules or geometries are of interest, for example to determine the energy required to break a chemical bond. A particularly difficult challenge is the dissociation of the N2 molecule, i.e. the energy of an N2 molecule at different bond lengths (inset Fig. 5). Even methods that are generally regarded as highly accurate, such as CCSD(T), predict energies that deviate far from experimental data at bond-lengths from 2.5 - 4 bohr. Fig. 5 depicts this deviation between experimental and computed energy for our method and the best available reference calculations. We find that our results are closer to the experimental absolute energies than all previous work, and are similar to concurrently published FermiNet-DMC results which require 5-10x more computational resources. When comparing relative energies, our approach outperforms all other deep-learning-based methods and CCSD(T), and is only beaten by the highly specialized r12-MR-ACPF method [28]. Similar to absolute energies, we also find that our relative energies converge substantially faster than for other deep-learning-based methods, with relative energies being almost fully converged after 50k epochs. 4 Ablation study To investigate which specific changes lead to the observed improvements in accuracy, we start from the improved FermiNet architecture proposed in [17] and incrementally add improvements in the following order: First, we use dense nel × nel determinants introduced by the FermiNet authors [10, 17] in their GitHub repository and described in [18] instead of block-diagonal determinants. This generalization increases computational cost and parameter count (cf. appendix C) but has been found to better describe the wavefunction’s nodal surface and thus increase expressiveness of the ansatz. Second, we change hyperparameters as described in Sec. 2.4, which increases throughput by ∼ 2x. Third, we augment the electron embedding using our new SchNet-like neural network architecture described in Sec. 2.1. This leads to a moderate increase in parameter count and computational cost. Fourth, we switch to local, invariant input features as described in Sec. 2.2 and remove the electronelectron difference vectors rij as inputs. Lastly we switch to initializing ωdkI = Z/nk as described in Sec. 2.3, resulting in our proposed final method. 
We note that the accuracy gains of these changes are not fully independent of each other and the relative attribution depends on the order in which they are applied: Earlier changes will generally generate larger energy improvements compared to later changes. At each step we compute total energies for three different molecules: ethene, N2 at the challenging bond-length of 4.0 bohr, and the K-atom. Fig. 6 depicts the energy of our implementation of FermiNet, and the energy change caused by each subsequent improvement. Each experiment was repeated two times with different RNG seeds (appendix D), the errorbars depict the spread in energy. Overall we find that all our changes combined yield a ∼3-20x improvement in the energy error. For ethene, the dominant contribution (3.7 mHa) comes from improved hyperparameters, which lead to the results being mostly converged after 50k epochs vs. the original settings which require 200k epochs for convergence. Using a lower learning rate in combination with a larger gradient-norm-constraint ensures that more optimization steps are taken according to curvature estimated by KFAC and fewer steps are clipped by the gradient-norm-constraint. Architectural improvements (embedding and input features) lower the energy error by additional 1.4 mHa. Because our embedding is a strict generalization of both FermiNet and PauliNet, our ansatz is more expressive and can therefore reach lower energies than previous ansätze. For N2 it has already been observed that a single dense determinant can outperform models with multiple block-diagonal determinants [18]. We find the corresponding result that 32 dense determinants substantially lower the energy relative to an ansatz with 32 block-diagonal determinants. Comparing N2 to ethene, we observe larger contributions from our architectural improvements and smaller contributions from improved hyperparameters. For the K atom, the overall gains are largest, totalling 60mHa, with substantial accuracy gains from all improvements. Since K has a much larger nuclear charge (Z=19) than the constituents of ethene (Z=1,6) and N2 (Z=7), also the physics-inspired initialization of the envelope parameters yields a substantial contribution. This improved initialization leads to a better initial guess for the wavefunction, which not only reduces the number of required optimization steps, but also leads to more accurate initial sampling. 5 Incorporating prior knowledge To further understand the effect of incorporating prior knowledge into neural network architectures for physical problems as the electronic Schrödinger equation, we examined two distinct ways of increasing prior information in our model: First, by including a built-in approximate physical model, analogous to PauliNet. Second, by increasing the number of pre-training steps to more closely match a reference wavefunction before starting the optimization. Explicitly include CASSCF PauliNet maximizes the physical prior information by computing the envelopes Ω with CASSCF, a sophisticated conventional method, and explicitly enforcing the Kato cusp conditions [30]. Starting from our proposed architecture, we modified our approach step-by-step until we arrived at a PauliNet-like architecture. Fig. 7a shows the energies of an NH3 molecule trained for 50k epochs at each step. First, we switch from dense determinants to block-diagonal determinants as used by the original PauliNet, leading to small loss in accuracy. 
Second, we exchange our embedding for the PauliNet-like embedding using the hyperparameters proposed in [11], leading to a substantial loss in accuracy, presumably caused by a loss in expressiveness. Next, we replace the simple exponential envelopes by the more physically inspired CASSCF-envelopes, causing a large loss in accuracy. We then remove the vector ri − RI as input feature (keeping only its absolute value) as done in the original PauliNet architecture [9]. This again deteriorates accuracy, presumably because it enforces rotational invariance, which is too restrictive a symmetry class, as pointed out by [12]. Lastly, we replace the electron-electron distances |rij| (which are not smooth at rij = 0 and thus lead to cusps) by smooth, cuspless radial basis functions as input features and add an explicit term to enforce the electron-electron cusp condition. Since the CASSCF-envelopes are designed to fulfill the electron-ion cusp condition, this change leads to fully enforced cusp conditions, slightly improving the energy accuracy. The largest loss in accuracy throughout these changes is caused by introducing the CASSCF-envelopes, suggesting that they introduce a strong bias of the wavefunction that cannot be easily overcome during training. Fig. 7b shows that architectures using exponential envelopes converge to lower absolute energies compared to the CASSCF-based PauliNet and outperform PauliNet already after ∼5000 epochs. Increase pre-training accuracy Before starting the unsupervised variational optimization of the wavefunction, we run a short supervised pre-training of the wavefunction to roughly match a given reference wavefunction. This is computationally inexpensive because it only requires evaluation of ψθ and a back-propagation step to update the neural network weights θ, but not the second derivatives appearing in the Hamiltonian. If the reference method yields a decent approximation to the true wavefunction, this pre-training significantly speeds up optimization and avoids unstable parameter regimes [10]. To incorporate more prior knowledge, one could either use a more sophisticated reference method (e.g. CASSCF instead of HF) or increase the number of pre-training steps. In Fig. 7c we pre-trained the wavefunction with a block-diagonal determinant for the NH3 molecule using a CASSCF and a Hartree-Fock reference. We increased the number of pre-training steps and evaluated the energy after subsequent 20k and 50k variational optimization epochs; each run was repeated with five different seeds. Increasing the number of pre-training steps initially increases accuracy – since it provides a better starting point for the subsequent variational optimization – but when increasing pre-training beyond 20k steps, accuracy deteriorates for both methods. Surprisingly, we observe a drop in accuracy when using CASSCF as a reference method compared to the simpler Hartree-Fock method. This effect is even more pronounced when increasing the number of pre-training steps. It suggests that excessive pre-training introduces a bias that is hard to overcome during variational optimization, similarly to a built-in reference method. 6 Discussion and Limitations Discussion We find that our approach yields substantially more accurate absolute energies than all previous work – both classical as well as deep-learning-based – and that we reach these accuracies 4-6x faster than the next best method (FermiNet).
Especially for larger systems, such as 4th-row atoms or the amino acid glycine, we outperform conventional "gold-standard" methods like MRCI-F12(Q) by ∼100 mHa. This corroborates that deep-learning-based methods are emerging as a new gold standard in computational chemistry and showcases the immense potential of machine-learning-based methods in the natural sciences. A concurrent work [21] was able to achieve similar accuracies by applying Diffusion Monte Carlo (DMC) on top of a FermiNet VMC calculation, highlighting the potential of deep-learning Monte Carlo methods. However, [21] required ∼10x more computational resources, and their VMC results – already by themselves 8x more expensive than our calculations – are consistently inferior to our results. This showcases a promising route towards further improvements: using our substantially cheaper and more accurate VMC results as a starting point for a DMC calculation. Regarding the question of how much physics to include in the model, we find varying results. For exact physical constraints, such as symmetries or the cusp conditions, inclusion in the model generally appears to be helpful. However, for prior knowledge from existing approximate solutions (such as CASSCF) the situation is more subtle. On the one hand, soft physical guidance such as short supervised pre-training or physics-inspired weight initialization accelerates optimization. On the other hand, we show empirically that increasing physical prior knowledge, e.g. by incorporating CASSCF or extensive supervised pre-training, does not necessarily increase accuracy, but can in fact introduce detrimental biases that are hard to overcome during wavefunction optimization. Limitations and outlook Despite the proposed improvements and the favorable scaling of the method, computation of energies for large molecules still takes days of GPU time on current hardware. While the same holds true for conventional high-accuracy approaches, substantial speed-ups are still required to make DL-VMC more accessible for practitioners. Additionally, when increasing the nuclear charges, the wavefunction becomes increasingly localised, which leads to a reduction in the average Monte Carlo step size and potentially to correlated samples. We circumvent this effect for 4th-row atoms by increasing the number of intermediate Monte Carlo steps, but further research into Monte Carlo sampling methods [31, 32] is required to fully address this issue. Despite our improvements to the accuracy of energy differences between different molecules or geometries, DL-VMC is still outperformed by other, computationally cheaper methods in some cases. Initial research into the regularity of the wavefunction across different molecules [11, 12] provides a promising route to improvements. We note in passing that, thanks to the local coordinate input features, our architecture fulfills the rotational invariance required for these approaches. 7 Code availability The code, alongside detailed documentation, is available as part of the DeepErwin code package on the Python Package Index (PyPI) and GitHub (https://github.com/mdsunivie/deeperwin) under the MIT license. 8 Acknowledgements We gratefully acknowledge financial support from the following grants: Austrian Science Fund FWF Project I 3403 (P.G.), WWTF-ICT19-041 (L.G.). The computational results have been achieved using the Vienna Scientific Cluster (VSC).
The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. Additionally, we thank Nicholas Gao for providing his results and data and Rafael Reisenhofer for providing valuable feedback to the manuscript.
1. What are the main contributions and improvements introduced by the paper regarding variational quantum Monte Carlo estimation? 2. What are the strengths and weaknesses of the proposed approach compared to prior works like FermiNet and PauliNet? 3. Do you have any concerns or questions regarding the experimental results, such as the lower energy errors and computational costs? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations mentioned by the authors that could be improved in future works?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper focuses on empirical improvements of variational quantum Monte Carlo estimation of molecular ground state energies. In particular, the authors focus on evaluating the various design choices in the FermiNet and the PauliNet, including dense determinants, hyperparameter tuning, the choice of envelope function and the pre-training strategies. On top of these, the authors propose to use SchNet-like embedding blocks and an input feature transformation that makes the input features invariant, local and expressive. The authors empirically evaluate all these design choices on molecules with up to ~40 electrons and obtain a model which improves upon the existing methods both in terms of accuracy and speed. The authors also conduct ablation studies by removing each design choice one by one. From the experiments, although the factors are not totally independent, it seems that most of the improvement comes from the use of dense determinants, hyperparameter tuning, and the SchNet-like embedding. The proposed feature transformation only has minor effects. Strengths And Weaknesses Strengths: The paper is well written and easy to follow. The experimental results are good and the ablation studies are extensive and well presented. The effects of different design choices can be easily identified. Since PauliNet and FermiNet differ in many aspects while both give high-quality quantum Monte Carlo results, it is interesting to compare and clarify the effects of these differences. Making improvements by fusing these two methods is well motivated. Weaknesses: Since the paper essentially integrates FermiNet and PauliNet, the novelty of the proposed methods is very limited. In particular, the proposed coordinate transform only has slight effects. Although the overall improvement and speedup are clear, for some claims it is difficult to find the supporting evidence in the main paper, e.g., "40-70% lower energy error at 8x lower computational cost" as stated in the abstract. It would be clearer to collect these comparisons in one place. Questions Some results in Figure 4 are lower than the reference energy, especially for larger molecules where the differences are significant (>100 mHa) compared to the chemical accuracy (~2 mHa). To what extent are we certain that the QMC results are correct? Although the variational principle guarantees the variational energy to be an upper bound, doesn't it assume the sampling to be accurate in the first place? For example, in Figure 3 of the FermiNet paper [Pfau et al. 2020], during VMC training the energy could overshoot when the MCMC step size is not large enough. Under which setting is the claimed "40-70% lower energy error at 8x lower computational cost" obtained? As shown in Table 5, the runtime per epoch improves from ~6s to 3.6s. How is the convergence time defined? [Pfau et al. 2020] Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Physical Review Research, 2020. After rebuttal: The authors' response clarifies the concerns about the uncertainty and the speedup. Overall I think the proposed method is well motivated and the results are convincing. Hence I will increase my score from 5 to 7 and recommend acceptance of this paper. Limitations The authors state the limitations adequately.
NIPS
Title Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need? Abstract Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry. Given its importance for the development of new chemical compounds, decades of research have been dedicated to this problem, but due to the large dimensionality even the best available methods do not yet reach the desired accuracy. Recently the combination of deep learning with Monte Carlo methods has emerged as a promising way to obtain highly accurate energies and moderate scaling of computational cost. In this paper we significantly contribute towards this goal by introducing a novel deep-learning architecture that achieves 40-70% lower energy error at 6x lower computational cost compared to previous approaches. Using our method we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules. We systematically break down and measure our improvements, focusing in particular on the effect of increasing physical prior knowledge. We surprisingly find that increasing the prior knowledge given to the architecture can actually decrease accuracy. 1 Introduction The challenge of the Schrödinger Equation Accurately predicting properties of molecules and materials is of utmost importance for many applications, including the development of new materials or pharmaceuticals. In principle, any property of any molecule can be calculated from its wavefunction, which is obtained by solving the Schrödinger equation. In practice, computing accurate wavefunctions and corresponding energies is computationally extremely difficult for two reasons: First, the wavefunction is a high-dimensional function, depending on all coordinates of all electrons, subjecting most methods to the curse of dimensionality. Second, the required level of accuracy is extremely high. While total energies of small molecules are typically hundreds of Hartrees, the chemically relevant energy differences are on the order of 1 milli-Hartree as depicted in Fig. 1. Decades of research have produced a plethora of methods, which all require a trade-off between accuracy and computational cost: On one end of the spectrum are approximate methods such as Hartree-Fock (HF) or Density Functional Theory (DFT), which was awarded the Nobel prize in 1998. These methods can treat thousands of particles but can often only crudely approximate chemical properties. On the other end of the spectrum are "gold-standard" methods such as FCI (Full Configuration Interaction) or CCSD(T) (Coupled Clusters Singles Doubles (Perturbative Triples)), which yield energies that often closely agree with experiments, but can only treat up to 100 particles. Despite all these efforts, even for small molecules highly accurate energy calculations do not yet exist. A 2020 benchmark of state-of-the-art methods for the benzene molecule found a spread of 4 mHa across different methods [1] – as a matter of fact, our results show that the absolute energies calculated in [1] are off by at least 600 mHa, due to the small basis set used in their calculations. An important characteristic of a method is the type of approximation being made: Hartree-Fock or FCI are "variational", meaning that their predicted energies at least upper-bound the ground-truth energy.
Since a lower energy is always guaranteed to be a closer approximation of the true energy, this makes assessment of these methods straight-forward. In contrast, CCSD(T) or DFT do not have any guarantees on the accuracy of their results. The approximations resulting from such methods, while working well for many common systems, often fail for chemically challenging situations such as breaking of chemical bonds [2, 3]. Deep-learning-based variational Monte Carlo Combining deep learning and Monte Carlo methods has recently emerged as a promising new approach for solving the Schrödinger equation [4, 5]. These methods offer high accuracy, moderate scaling of computational cost with system size and obey the variational principle. Within a few years deep-learning-based methods have managed to outperform conventional high-accuracy methods for many different molecules, potentially defining a new gold-standard for high-accuracy solutions. In the Born-Oppenheimer approximation a molecule, consisting of nnuc nuclei and nel electrons, is fully described by its Hamiltonian in atomic units
$$H = -\frac{1}{2}\sum_{i}\nabla_{r_i}^{2} + \sum_{i>j}\frac{1}{|r_i - r_j|} + \sum_{I>J}\frac{Z_I Z_J}{|R_I - R_J|} - \sum_{i,I}\frac{Z_I}{|r_i - R_I|}.$$
Here RI, ZI, I ∈ {1, . . . , nnuc} denote the coordinates and charges of the nuclei, and r = (r1, . . . , rn↑, . . . , rnel) ∈ R3×nel denotes the set of nel Cartesian electron coordinates, differentiated between n↑ spin-up and n↓ spin-down electrons. We define the inter-particle vectors rij = ri − rj and ρiJ = ri − RJ. All properties of the molecule depend on the wavefunction ψ(r), which must fulfill the antisymmetry constraint: ψ(Pr) = −ψ(r) for any permutation P of two electrons with the same spin [6]. The wavefunction ψ can be found as the solution to the Schrödinger equation Hψ = E0ψ with the ground-state energy and smallest eigenvalue E0. By the Rayleigh-Ritz principle [7], the ground-state energy and the corresponding wavefunction can be found through minimization of the loss
$$L(\psi_\theta) = \mathbb{E}_{r\sim\psi_\theta^2(r)}\left[\frac{H\psi_\theta(r)}{\psi_\theta(r)}\right] \geq E_0 \qquad (1)$$
for a trial wavefunction ψθ, parameterized by parameters θ. The trial function ψθ is represented by a neural network and typically has the form
$$\psi_\theta(r) = \sum_{d=1}^{n_\mathrm{det}} \det\left[\Lambda^{d}_{ki}(r)\,\Omega^{d\alpha_i}_{k}(r_i)\right]_{k,i=1,\dots,n_\mathrm{el}} \qquad (2)$$
with Λdki : R3×nel → R, Ωdk : R3 → R, αi ∈ {↑, ↓}, i ∈ {1, . . . , nel}, k ∈ {1, . . . , nel}. Each determinant is taken over an nel × nel matrix, with row indices k running over orbitals and column indices i running over electrons. The determinant enforces antisymmetry, the Ωdk are envelope functions enforcing the boundary condition lim|r|→∞ ψθ(r) = 0, and the Λdki are neural networks. The local energy Hψ/ψ can be evaluated using automatic differentiation and the loss can be minimized by gradient-based methods. The computation of the expectation value in eq. 1 over the high-dimensional space R3×nel is done using Monte Carlo integration, by sampling electron coordinates r distributed according to ψθ^2 using the Metropolis-Hastings [8] algorithm. A thorough discussion of deep-learning-based variational Monte Carlo (DL-VMC) can be found in [9]. Related work Two major neural network architectures and their extensions have emerged throughout the literature: PauliNet [9] and FermiNet [10]. PauliNet puts emphasis on maximizing physical prior knowledge, by focusing on the envelope function. They use the output of CASSCF (Complete Active Space Self Consistent Field, a sophisticated conventional quantum-chemistry method) as Ω and use a relatively small (∼ 100k weights) neural network for Λ.
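To make the Monte Carlo estimate of the loss in eq. (1) concrete, the following is a minimal sketch on a toy one-dimensional harmonic oscillator rather than a real molecule; the trial wavefunction, step size and sample counts are illustrative choices, not settings from the paper.

import numpy as np

# Toy 1D harmonic oscillator, H = -1/2 d^2/dx^2 + 1/2 x^2 (atomic-style units).
# Trial wavefunction psi_a(x) = exp(-a x^2); its local energy is analytic:
# E_loc(x) = a - 2 a^2 x^2 + 0.5 x^2. For a = 0.5 the trial function is exact
# and E_loc is constantly 0.5, the ground-state energy.
def log_psi(x, a):
    return -a * x**2

def local_energy(x, a):
    return a - 2.0 * a**2 * x**2 + 0.5 * x**2

def metropolis_sample(a, n_samples=20000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + step * rng.normal()
        # Accept with probability psi(x_new)^2 / psi(x)^2
        ratio = np.exp(2.0 * (log_psi(x_new, a) - log_psi(x, a)))
        if rng.uniform() < ratio:
            x = x_new
        samples.append(x)
    return np.array(samples[2000:])  # discard burn-in

for a in (0.4, 0.5, 0.6):
    xs = metropolis_sample(a)
    # Monte Carlo estimate of eq. (1): always >= 0.5, minimal at the exact a = 0.5
    print(a, local_energy(xs, a).mean())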
FermiNet on the other hand uses a simple exponential function as envelope Ω and uses a large (∼ 700k weights) neural network for Λ. Both approaches have been applied with great success to many different systems and properties, such as energies of individual molecules [4, 10, 9], ionization energies [10], potential energy surfaces [11, 12], forces [11], excited states [13], model-systems for solids [14, 15] and actual solids [16]. Several approaches have been proposed to increase accuracy or decrease computational cost, most notably architecture simplifications [17], alternative antisymmetric layers [18], effective core potentials [19] and Diffusion Monte Carlo (DMC) [20, 21]. FermiNet commonly reaches lower (i.e. more accurate) energies than PauliNet [10], but PauliNet has been observed to converge faster [12]. It has been proposed [9] that combining the embedding of FermiNet and the physical prior knowledge of PauliNet could lead to a superior architecture. Our contribution In this work we present the counter-intuitive observation that the opposite approach might be more fruitful. By combining a PauliNet-like neural network embedding with the envelopes of FermiNet and adding several improvements to the embedding, input features, and initialization of parameters (Sec. 2), we obtain the currently best neural network architecture for the numerical solution of the electronic Schrödinger equation. Combining our new architecture with VMC we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules - both when comparing to deep-learning-based methods, as well as when comparing to classical methods (Sec. 3). Across systems we reduce energy errors by 40-100% and achieve these results with 3-4x fewer optimization epochs compared to FermiNet. In Sec. 4 we systematically break down which changes cause these improvements. We hypothesize that including too much physical prior knowledge can actually hinder optimization and thus deteriorate accuracy – we provide ample experimental evidence in Sec. 5. 2 Improved approach Similar to FermiNet, our architecture expresses Λdki as a linear combination of high-dimensional electron embeddings hLi, and the envelopes Ωdαik as a sum of exponential functions
$$\Lambda^{d}_{ki}(r) = W^{d\alpha_i}_{k} h^{L}_{i}, \qquad \Omega^{d\alpha_i}_{k}(r_i) = \sum_{I=1}^{n_\mathrm{nuc}} \pi^{d\alpha_i}_{kI}\,\exp\!\left(-\omega^{d\alpha_i}_{kI}\,|\rho_{iI}|\right), \qquad (3)$$
where the W, π and ω are trainable parameters and we enforce ω ≥ 0. We compute these embeddings hLi by first transforming the inputs RI, ri into feature vectors
$$h^{0}_{i} = \left[\,|\rho_{iI}|,\ \tilde{\rho}_{iI}\,\right]_{I\in\{1,\dots,n_\mathrm{nuc}\}}, \qquad v^{0}_{iI} = \left[\,|\rho_{iI}|,\ \tilde{\rho}_{iI}\,\right], \qquad g^{0}_{ij} = |r_{ij}|,$$
where [·] denotes the concatenation operation, and then applying L iterations of an embedding network (Fig. 2a). The local difference vectors ρ̃iI are obtained by applying rotation matrices onto ρiI as described in Sec. 2.2. 2.1 Convolutional layers in embedding Our embedding network uses four residual neural network streams (Fig. 2b): a primary one-electron stream that embeds a single electron, and three auxiliary streams modelling the two-particle interactions (electrons with the same spin, electrons with different spin, and electron-ion).
$$h^{l+1}_{i} = A^{l}_\mathrm{one}\!\left(f^{l}_{i}\right) + h^{l}_{i}, \qquad g^{l+1}_{ij} = A^{l}_{\sigma_{ij}}\!\left(g^{l}_{ij}\right) + g^{l}_{ij}, \qquad v^{l+1}_{iI} = A^{l}_\mathrm{nuc}\!\left(v^{l}_{iI}\right) + v^{l}_{iI} \qquad (4)$$
Here l denotes the embedding iteration, the Al denote fully connected neural networks, and glij, vliI denote electron-electron and electron-nucleus embeddings.
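Before continuing with the embedding iterations, here is a minimal numerical sketch of how the orbital matrix of eqs. (2)-(3) is assembled from electron embeddings and exponential envelopes. All shapes and values are random placeholders, spin channels are ignored, and this is a simplified illustration rather than the trained model.

import numpy as np

# Sketch of eqs. (2)-(3): Phi[k, i] = Lambda[k, i] * Omega[k, i], psi = sum_d det(Phi^d).
rng = np.random.default_rng(1)
n_el, n_nuc, n_det, emb_dim = 4, 2, 2, 8

h = rng.normal(size=(n_el, emb_dim))               # final electron embeddings h^L_i
R = rng.normal(size=(n_nuc, 3))                    # nuclear positions
r = rng.normal(size=(n_el, 3))                     # electron positions
dist = np.linalg.norm(r[:, None, :] - R[None, :, :], axis=-1)   # |rho_iI|, (n_el, n_nuc)

W = rng.normal(size=(n_det, n_el, emb_dim))        # W^d_k, one row per orbital k
pi = rng.uniform(0.5, 1.5, size=(n_det, n_el, n_nuc))     # pi^d_kI
omega = rng.uniform(0.5, 1.5, size=(n_det, n_el, n_nuc))  # omega^d_kI >= 0

psi = 0.0
for d in range(n_det):
    Lam = W[d] @ h.T                                          # Lambda^d_ki, (n_el, n_el)
    decay = np.exp(-omega[d][:, None, :] * dist[None, :, :])  # (orbitals, electrons, nuclei)
    Om = np.sum(pi[d][:, None, :] * decay, axis=-1)           # Omega^d_k(r_i), (n_el, n_el)
    psi += np.linalg.det(Lam * Om)                            # eq. (2), one determinant per d
print(psi)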
We use σij = 'same' for same-spin pairs of electrons and σij = 'diff' for pairs of electrons with different spin. Similar to FermiNet, in each iteration we assemble the input fli to the primary stream from the auxiliary streams (Fig. 2c):
$$f^{l}_{i} = \left[\, h^{l}_{i},\ \frac{1}{n_\uparrow}\sum_{j=1}^{n_\uparrow} h^{l}_{j},\ \frac{1}{n_\downarrow}\sum_{j=n_\uparrow+1}^{n_\mathrm{el}} h^{l}_{j},\ s^{l,\mathrm{el}}_{i},\ s^{l,\mathrm{nuc}}_{i} \,\right]. \qquad (5)$$
Inspired by the success of SchNet [22] and the efficiency of the PauliNet embedding, we use the sum of element-wise multiplication (⊙), effectively forming a convolution, to aggregate the auxiliary two-particle streams:
$$s^{l,\mathrm{el}}_{i} = \sum_{j=1}^{n_\mathrm{el}} B^{l}_{\sigma_{ij}}\!\left(g^{l}_{ij}\right) \odot C^{l}_{\sigma_{ij}}\!\left(h^{l}_{j}\right), \qquad s^{l,\mathrm{nuc}}_{i} = \sum_{I=1}^{n_\mathrm{ion}} B^{l}_\mathrm{nuc}\!\left(v^{l}_{iI}\right) \odot C^{l}_\mathrm{nuc}\!\left(Z^\mathrm{emb}_{I}\right) \qquad (6)$$
Eq. 5 and 6 form the core of the architecture and are the key difference between FermiNet, PauliNet and our architecture. The PauliNet architecture emphasizes two-particle interactions and essentially only uses convolutions as input features: fli = [sl,el_i, sl,nuc_i]. In addition to not using the hi as input features, PauliNet also limits its effective depth by making the convolutional kernels B functions of the electron-electron distances |rij| instead of the high-dimensional embedded representations gij. The FermiNet architecture on the other hand emphasizes the one-electron stream and only uses sums over gij as input features, essentially corresponding to Bl = Id, C(·) = 1. Furthermore FermiNet does not contain an explicit stream for electron-nucleus interactions. Since our architecture adequately models both the one-electron embedding as well as the two-particle interactions, we expect our architecture to be more expressive than either predecessor, as demonstrated in Sec. 4. 2.2 Local, invariant input features The first stage of any VMC wavefunction model is typically the computation of suitable input features from the raw electron coordinates r and nuclear coordinates {RI}. While the subsequent embedding stage could in principle take the raw coordinates, appropriate features allow us to explicitly enforce symmetries and improve the model's transferability and accuracy. Input features should have three properties: First, they should be sufficiently expressive to encode any physical wavefunction. Second, the features should be invariant under geometric transformations. Third, the features should primarily depend on the local environment of a particle, i.e. similar local geometries should generate similar local features, mostly independent of changes to the geometry far from the particle in question. Published architectures have so far not been able to address all three points: PauliNet [9] uses only distances as input features, making them invariant and local, but not sufficiently expressive, as demonstrated by [12]. FermiNet [10] uses raw distances and differences, making the inputs expressive and local, but not invariant under rotations. PESNet [12] proposes a global coordinate system along the principal axes of a molecule, making the inputs invariant and sufficiently expressive, but not local. We propose using local coordinate systems centered on every nucleus and evaluating the electron-nuclei differences in these local coordinate systems. Effectively this amounts to applying a rotation matrix UJ to the raw electron-nucleus differences: ρ̃iJ = UJ ρiJ. (Figure 3: Visualization of the resulting coordinate systems for two example molecules: a) ethene, b) a hypothetical bent chain of atoms, comparing global coordinates (PESNet) with local coordinates (this work).)
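A minimal sketch of the convolution-like aggregation in eq. (6): each electron i collects messages from all electrons j as an element-wise product of a "filter" computed from the pair embedding g_ij and a "value" computed from the electron embedding h_j. The linear maps stand in for the networks B and C, spin channels and the nuclear stream are omitted, and all shapes and values are placeholders.

import numpy as np

rng = np.random.default_rng(2)
n_el, pair_dim, emb_dim = 4, 6, 8

h = rng.normal(size=(n_el, emb_dim))            # one-electron embeddings h^l_j
g = rng.normal(size=(n_el, n_el, pair_dim))     # pair embeddings g^l_ij

B = rng.normal(size=(pair_dim, emb_dim))        # stands in for the network B^l
C = rng.normal(size=(emb_dim, emb_dim))         # stands in for the network C^l

filt = g @ B                                    # B(g_ij), shape (n_el, n_el, emb_dim)
vals = h @ C                                    # C(h_j),  shape (n_el, emb_dim)
s_el = np.einsum("ijd,jd->id", filt, vals)      # s^{l,el}_i = sum_j B(g_ij) * C(h_j)
print(s_el.shape)                               # (n_el, emb_dim)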
As opposed to [12] where U is constant for all nuclei, in our approach UJ can be different for every nucleus. These electron-nucleus differences ρ̃iJ are invariant under rotations, contain all the information contained in raw Cartesian coordinates and depend primarily on the local environment of an atom. To compute the 3x3 rotation matrices UJ, we first run a single Hartree-Fock calculation using a minimal basis set to obtain a density matrix D. For each atom J we choose the 3x3 block of the density matrix which corresponds to the 3 p-orbitals of the atom J and compute its eigenvectors, UJ = eig(DJ). To make our coordinate system unique we sort the eigenvectors of DJ by their corresponding eigenvalues. If the eigenvalues are degenerate, we pick the rotation of the eigenvectors that maximizes the overlap with the coordinate axes of the previous nucleus. Fig. 3 shows the resulting coordinate systems spanned by UJ. Note that the local coordinate system generally depicts more physically meaningful directions such as "along the chain". We find that these local coordinates slightly increase the accuracy for single geometries, but more importantly we expect the wavefunction to generalize better across different molecules or geometries. This should improve the accuracy of approaches that attempt to learn wavefunctions for multiple geometries at once [11, 12]. 2.3 Initialization of orbital envelope weights When considering a single atom, the entries of the wavefunction determinant have essentially the form Λki(r) exp(−ωk |ρiI|). In [10], the exponential envelope was purely motivated by the boundary condition that the orbitals must decay to zero, and initialization with ωk = 1 was proposed. However, when comparing this ansatz to analytical solutions, an interesting parallel can be found: Analytical solutions to the Schrödinger equation for atoms with a single electron – the only systems that have analytical solutions – are of the form Λ̃k(ρiI) exp(−(Z/nk) |ρiI|), where Λ̃k(ρiI) is a product of a Laguerre polynomial and a spherical harmonic, and nk ∈ N+ is known as the principal quantum number. This suggests ωk ≈ Z/nk, which we also find when analyzing the weights of a fully trained wavefunction. When initializing with ωk = Z/nk instead of ωk = 1, we observe faster convergence, lower final energies, and lower variance of the energies (Sec. 4). The effect is most notable for molecules containing nuclei with large Z, where Z/nk ≫ 1. 2.4 Improved hyperparameters Beyond the improved neural network architecture, we fine-tuned the hyperparameters to reduce the number of optimization steps required for convergence. Starting from the hyperparameters proposed by [10], we increased the norm constraint by 3x for the second-order optimizer KFAC [23, 24], decreased the learning rate by 0.5x, and decreased the learning rate decay time by 0.4x. We observe that these changes stabilize the optimization and enable usage of 50% fewer Monte Carlo walkers, which results in ∼2x faster optimization and reduced memory allocation. A complete set of hyperparameters can be found in appendix B. 3 Results of improved approach We evaluated the accuracy of our approach by comparing our computed energies against the most accurate references available in the literature. Fig. 4 compares our energies against variational methods – for which lower energies are guaranteed to be more accurate – as well as non-variational high-accuracy methods.
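A small sketch of the per-nucleus local coordinate frames described in Sec. 2.2: take the 3x3 block of the density matrix belonging to the p-orbitals of nucleus J, diagonalize it, and use the sorted eigenvectors as the rotation matrix UJ. Here the block is a random symmetric placeholder instead of an actual Hartree-Fock result, the eigenvalue ordering convention and the right-handedness fix are assumptions, and the degenerate-eigenvalue tie-breaking is omitted.

import numpy as np

rng = np.random.default_rng(3)

def local_frame(D_J):
    eigvals, eigvecs = np.linalg.eigh(D_J)          # symmetric 3x3 p-orbital block
    order = np.argsort(eigvals)[::-1]               # sort eigenvectors by eigenvalue
    U = eigvecs[:, order].T                         # rows = local coordinate axes
    if np.linalg.det(U) < 0:                        # keep a right-handed frame (assumption)
        U[2] *= -1.0
    return U

A = rng.normal(size=(3, 3))
D_J = A + A.T                                       # placeholder density-matrix block
U_J = local_frame(D_J)

rho_iJ = rng.normal(size=3)                         # raw electron-nucleus difference
rho_tilde = U_J @ rho_iJ                            # rotated, locally defined feature
print(np.allclose(np.linalg.norm(rho_tilde), np.linalg.norm(rho_iJ)))  # True: rotation only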
1. What is the focus and contribution of the paper regarding solving ground state of many-electron systems? 2. What are the strengths of the proposed model, particularly in its novel combination of neural networks and improved embedding, input features, and parameter initialization? 3. What are the weaknesses of the paper regarding its experimental results and the need for further theoretical explanations? 4. How does the reviewer assess the clarity, quality, originality, significance, and limitations of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a deep-learning architecture to solve for the ground state of many-electron systems. The proposed model combines a PauliNet-like neural network with the envelope function of FermiNet, with additional improvements to the embedding, input features and parameter initialization of previous methods. Experimental results show that the proposed method can reduce errors by 40-70% with 4-8x lower computational cost. This paper also establishes a new benchmark against several deep-learning-based and classical methods over a number of different atoms and molecules. The authors also investigate the reasons for the improvements and find that including too much physical prior knowledge can deteriorate accuracy. Strengths And Weaknesses Strengths: originality: Related works are adequately cited and the differences from these works are clear. This work is a novel combination of PauliNet and FermiNet along with additional improvements. quality: This work provides solid experiments to demonstrate the accuracy and efficiency improvements. Also, this work systematically breaks down which changes cause the improvements, and provides the insight that including too much physical prior knowledge can hinder optimization. clarity: This paper is overall well written and easy to follow. The method and results are clearly presented. significance: This work achieves the best results for the numerical solution of the electronic Schrödinger equation and establishes a new benchmark against the currently most accurate methods on a number of molecules and atoms. The proposed method is 4-8x faster than FermiNet in terms of optimization. Weaknesses: This paper demonstrates great experimental results, but lacks the theoretical explanation needed to address the accuracy improvement at reduced computational cost. Section 4 discusses in detail the improvements obtained by each individual change and their combined effects, but the discussion is still restricted to the experimental point of view. Adding a theoretical explanation of why the proposed method can obtain better accuracy than classical as well as existing deep-learning methods would be beneficial. Questions Please see weaknesses. Limitations The limitations of this work are well addressed.
NIPS
Title Towards Accurate Binary Convolutional Neural Network Abstract We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduces memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations. 1 Introduction Convolutional neural networks (CNNs) have achieved state-of-the-art results on real-world applications such as image classification [He et al., 2016] and object detection [Ren et al., 2015], with the best results obtained with large models and sufficient computation resources. Concurrent with this progress, the deployment of CNNs on mobile devices for consumer applications is gaining more and more attention, due to the widespread commercial value and the exciting prospect. In mobile applications, it is typically assumed that training is performed on the server and test or inference is executed on the mobile devices [Courbariaux et al., 2016, Esser et al., 2016]. In the training phase, GPUs enabled substantial breakthroughs because of their greater computational speed. In the test phase, however, GPUs are usually too expensive to deploy. Thus improving the test-time performance and reducing hardware costs are likely to be crucial for further progress, as mobile applications usually require real-time operation, low power consumption and full embeddability. As a result, there is much interest in the research and development of dedicated hardware for deep neural networks (DNNs). Binary neural networks (BNNs) [Courbariaux et al., 2016, Rastegari et al., 2016], i.e., neural networks with weights and perhaps activations constrained to only two possible values (e.g., -1 or +1), would bring great benefits to specialized DNN hardware for three major reasons: (1) the binary weights/activations reduce memory usage and model size 32 times compared to the single-precision version; (2) if weights are binary, then most multiply-accumulate operations can be replaced by simple accumulations, which is beneficial because multipliers are the most space- and power-hungry components of the digital implementation of neural networks; (3) furthermore, if both activations and weights are binary, the multiply-accumulations can be replaced by the bitwise operations xnor and bitcount [Courbariaux et al., 2016]. This could have a big impact on dedicated deep learning hardware. For instance, a 32-bit floating point multiplier costs about 200 Xilinx FPGA slices [Govindu et al., 2004], whereas a 1-bit xnor gate only costs a single slice. Semiconductor
manufacturers like IBM [Esser et al., 2016] and Intel [Venkatesh et al., 2016] have been involved in the research and development of related chips. However, binarization usually cause severe prediction accuracy degradation, especially on complex tasks such as classification on ImageNet dataset. To take a closer look, Rastegari et al. [2016] shows that binarizing weights causes the accuracy of Resnet-18 drops from 69.3% to 60.8% on ImageNet dataset. If further binarize activations, the accuracy drops to 51.2%. Similar phenomenon can also be found in literatures such as [Hubara et al., 2016]. Clearly there is a considerable gap between the accuracy of a full-precision model and a binary model. This paper proposes a novel scheme for binarizing CNNs, which aims to alleviate, or even eliminate the accuracy degradation, while still significantly reducing inference time, resource requirement and power consumption. The paper makes the following major contributions. • We approximate full-precision weights with the linear combination of multiple binary weight bases. The weights values of CNNs are constrained to { 1,+1}, which means convolutions can be implemented by only addition and subtraction (without multiplication), or bitwise operation when activations are binary as well. We demonstrate that 3⇠5 binary weight bases are adequate to well approximate the full-precision weights. • We introduce multiple binary activations. Previous works have shown that the quantization of activations, especially binarization, is more difficult than that of weights [Cai et al., 2017, Courbariaux et al., 2016]. By employing five binary activations, we have been able to reduce the Top-1 and Top-5 accuracy degradation caused by binarization to around 5% on ImageNet compared to the full precision counterpart. It is worth noting that the multiple binary weight bases/activations scheme is preferable to the fixedpoint quantization in previous works. In those fixed-point quantized networks one still needs to employ arithmetic operations, such as multiplication and addition, on fixed-point values. Even though faster than floating point, they still require relatively complex logic and can consume a lot of power. Detailed discussions can be found in Section 5.2. Ideally, combining more binary weight bases and activations always leads to better accuracy and will eventually get very close to that of full-precision networks. We verify this on ImageNet using Resnet network topology. This is the first time a binary neural network achieves prediction accuracy comparable to its full-precision counterpart on ImageNet. 2 Related work Quantized Neural Networks: High precision parameters are not very necessary to reach high performance in deep neural networks. Recent research efforts (e.g., [Hubara et al., 2016]) have considerably reduced a large amounts of memory requirement and computation complexity by using low bitwidth weights and activations. Zhou et al. [2016] further generalized these schemes and proposed to train CNNs with low bitwidth gradients. By performing the quantization after network training or using the “straight-through estimator (STE)" [Bengio et al., 2013], these works avoided the issues of non-differentiable optimization. While some of these methods have produced good results on datasets such as CIFAR-10 and SVHN, none has produced low precision networks competitive with full-precision models on large-scale classification tasks, such as ImageNet. 
In fact, [Zhou et al., 2016] and [Hubara et al., 2016] experiment with different combinations of bitwidth for weights and activations, and show that the performance of their highly quantized networks deteriorates rapidly when the weights and activations are quantized to less than 4-bit numbers. Cai et al. [2017] enhance the performance of a low-bitwidth model by addressing the gradient mismatch problem; nevertheless, there is still much room for improvement. Binarized Neural Networks: The binary representation for deep models is not a new topic. At the emergence of artificial neural networks, inspired by biology, the unit step function was used as the activation function [Toms, 1990]. It is known that binary activation can use spiking responses for event-based computation and communication (consuming energy only when necessary) and is therefore energy-efficient [Esser et al., 2016]. Recently, Courbariaux et al. [2016] introduced Binarized Neural Networks (BNNs), neural networks with binary weights and activations at run-time. Different from their work, Rastegari et al. [2016] introduce simple, efficient, and accurate approximations to CNNs by binarizing the weights and even the intermediate representations in CNNs. All these works drastically reduce memory consumption, and replace most arithmetic operations with bitwise operations, which potentially leads to a substantial increase in power efficiency. In all of the above-mentioned works, binarization significantly reduces accuracy. Our experimental results on ImageNet show that we are close to filling the gap between the accuracy of a binary model and its full-precision counterpart. We relied on the idea of finding the best approximation of a full-precision convolution using multiple binary operations, and employing multiple binary activations to allow more information to pass through. 3 Binarization methods In this section, we detail our binarization method, which is termed ABC-Net (Accurate-Binary-Convolutional) for convenience. Bear in mind that during training, the real-valued weights are retained and updated at every epoch, while at test time only binary weights are used in convolution. 3.1 Weight approximation Consider an L-layer CNN architecture. Without loss of generality, we assume the weights of each convolutional layer are tensors of dimension (w, h, c_in, c_out), representing filter width, filter height, input channels and output channels respectively. We propose two variations of the binarization method for the weights at each layer: 1) approximate weights as a whole and 2) approximate weights channel-wise. 3.1.1 Approximate weights as a whole At each layer, in order to constrain a CNN to have binary weights, we estimate the real-valued weight filter W ∈ R^{w×h×c_in×c_out} using the linear combination of M binary filters B_1, B_2, ..., B_M ∈ {−1,+1}^{w×h×c_in×c_out} such that W ≈ α_1 B_1 + α_2 B_2 + ··· + α_M B_M. To find an optimal estimation, a straightforward way is to solve the following optimization problem:
$$\min_{\alpha, B} J(\alpha, B) = \|w - B\alpha\|^2, \quad \text{s.t. } B_{ij} \in \{-1, +1\}, \qquad (1)$$
where B = [vec(B_1), vec(B_2), ..., vec(B_M)], w = vec(W) and α = [α_1, α_2, ..., α_M]^T. Here the notation vec(·) refers to vectorization. Although a local minimum solution to (1) can be obtained by numerical methods, one could not backpropagate through it to update the real-valued weight filter W.
To address this issue, assuming the mean and standard deviation of W are mean(W) and std(W) respectively, we fix the B_i as follows:
$$B_i = F_{u_i}(W) := \mathrm{sign}\big(\bar{W} + u_i\,\mathrm{std}(W)\big), \quad i = 1, 2, \dots, M, \qquad (2)$$
where W̄ = W − mean(W), and u_i is a shift parameter. For example, one can choose the u_i to be u_i = −1 + (i − 1)·2/(M − 1), i = 1, 2, ..., M, to shift evenly over the range [−std(W), std(W)], or leave them to be trained by the network. This is based on the observation that the full-precision weights tend to have a symmetric, non-sparse distribution, which is close to Gaussian. To gain more intuition and illustrate the approximation effectiveness, an example is visualized in Section S2 of the supplementary material. With the B_i chosen, (1) becomes a linear regression problem
$$\min_{\alpha} J(\alpha) = \|w - B\alpha\|^2, \qquad (3)$$
in which the B_i serve as the bases in the design/dictionary matrix. We can then back-propagate through the B_i using the "straight-through estimator" (STE) [Bengio et al., 2013]. Let c be the cost function, and A and O the input and output tensors of a convolution, respectively; the forward and backward passes of an approximated convolution during training can then be computed as follows. Forward:
$$B_1, B_2, \dots, B_M = F_{u_1}(W), F_{u_2}(W), \dots, F_{u_M}(W), \qquad (4)$$
$$\text{solve (3) for } \alpha, \qquad (5)$$
$$O = \sum_{m=1}^{M} \alpha_m \,\mathrm{Conv}(B_m, A). \qquad (6)$$
Backward:
$$\frac{\partial c}{\partial W} = \frac{\partial c}{\partial O}\left(\sum_{m=1}^{M} \alpha_m \frac{\partial O}{\partial B_m}\frac{\partial B_m}{\partial W}\right) \overset{\mathrm{STE}}{=} \frac{\partial c}{\partial O}\left(\sum_{m=1}^{M} \alpha_m \frac{\partial O}{\partial B_m}\right) = \sum_{m=1}^{M} \alpha_m \frac{\partial c}{\partial B_m}. \qquad (7)$$
At test time, only (6) is required. The block structure of this approximated convolution layer is shown on the left side in Figure 1. With suitable hardware and appropriate implementations, the convolution can be efficiently computed. For example, since the weight values are binary, we can implement the convolution with additions and subtractions (thus without multiplications). Furthermore, if the input A is binary as well, we can implement the convolution with the bitwise operations xnor and bitcount [Rastegari et al., 2016]. Note that the convolution with each binary filter can be computed in parallel. 3.1.2 Approximate weights channel-wise Alternatively, we can estimate the real-valued weight filter W_i ∈ R^{w×h×c_in} of each output channel i ∈ {1, 2, ..., c_out} using the linear combination of M binary filters B_i1, B_i2, ..., B_iM ∈ {−1,+1}^{w×h×c_in} such that W_i ≈ α_i1 B_i1 + α_i2 B_i2 + ··· + α_iM B_iM. Again, to find an optimal estimation, we solve a linear regression problem analogous to (3) for each output channel. After convolution, the results are concatenated together along the output-channel dimension. If M = 1, this approach reduces to the Binary-Weights-Networks (BWN) proposed in [Rastegari et al., 2016]. Compared to approximating the weights as a whole, the channel-wise approach approximates the weights more elaborately; however, no extra cost is needed during inference. Since this approach requires more computational resources during training, we leave it as future work and focus on the former approximation approach in this paper. 3.2 Multiple binary activations and bitwise convolution As mentioned above, a convolution can be implemented without multiplications when the weights are binarized. However, to utilize the bitwise operations, the activations must be binarized as well, as they are the inputs of convolutions. Similar to the activation binarization procedure in [Zhou et al., 2016], we binarize activations after passing them through a bounded activation function h, which ensures h(x) ∈ [0, 1]. We choose the bounded rectifier as h.
3.2 Multiple binary activations and bitwise convolution

As mentioned above, a convolution can be implemented without multiplications when the weights are binarized. However, to exploit bitwise operations, the activations must be binarized as well, since they are the inputs of the convolutions. Similar to the activation binarization procedure in [Zhou et al., 2016], we binarize activations after passing them through a bounded activation function $h$, which ensures $h(x) \in [0, 1]$. We choose the bounded rectifier as $h$. Formally, it can be defined as:

$$h_v(x) = \mathrm{clip}(x + v, 0, 1), \qquad (8)$$

where $v$ is a shift parameter. If $v = 0$, then $h_v$ is the clip activation function in [Zhou et al., 2016]. We constrain the binary activations to be either 1 or -1. In order to transform the real-valued activation $R$ into a binary activation, we use the following binarization function:

$$H_v(R) := 2\, I_{h_v(R) \ge 0.5} - 1, \qquad (9)$$

where $I$ is the indicator function. The conventional forward and backward passes of the activation are then:

$$\text{Forward: } A = H_v(R). \qquad \text{Backward: } \frac{\partial c}{\partial R} = \frac{\partial c}{\partial A} \odot I_{0 \le R + v \le 1} \;\; \text{(using STE)}. \qquad (10)$$

Here $\odot$ denotes the Hadamard product. As can be expected, binarizing activations in this way is rather crude and leads to non-trivial losses in accuracy, as shown in Rastegari et al. [2016], Hubara et al. [2016]. While it is also possible to approximate activations with linear regression, as is done for the weights, another critical challenge arises: unlike the weights, the activations vary at test time. Luckily, this difficulty can be avoided by exploiting the statistical structure of the activations of deep networks.

Our scheme can be described as follows. First, to keep the distribution of the activations relatively stable, we resort to batch normalization [Ioffe and Szegedy, 2015]. This is a widely used normalization technique which forces the responses of each network layer to have zero mean and unit variance; we apply this normalization before the activation. Second, we estimate the real-valued activation $R$ using a linear combination of $N$ binary activations $A_1, A_2, \dots, A_N$ such that $R \approx \beta_1 A_1 + \beta_2 A_2 + \dots + \beta_N A_N$, where

$$A_1, A_2, \dots, A_N = H_{v_1}(R), H_{v_2}(R), \dots, H_{v_N}(R). \qquad (11)$$

Unlike the weight case, the parameters $\beta_n$ and $v_n$ ($n = 1, \dots, N$) here are both trainable, just like the scale and shift parameters in batch normalization. Without an explicit linear regression step, the $\beta_n$'s and $v_n$'s are tuned by the network itself during training and fixed at test time. They are expected to learn and exploit the statistical features of the full-precision activations. The resulting network architecture outputs multiple binary activations $A_1, A_2, \dots, A_N$ and their corresponding coefficients $\beta_1, \beta_2, \dots, \beta_N$, which allows more information to pass through compared to a single binarization. Combining this with the weight approximation, the whole convolution scheme is given by:

$$\mathrm{Conv}(W, R) \approx \mathrm{Conv}\!\left( \sum_{m=1}^{M} \alpha_m B_m, \; \sum_{n=1}^{N} \beta_n A_n \right) = \sum_{m=1}^{M} \sum_{n=1}^{N} \alpha_m \beta_n \, \mathrm{Conv}(B_m, A_n), \qquad (12)$$

which shows that it can be implemented by computing $M \times N$ bitwise convolutions in parallel. An example of the whole convolution scheme is shown in Figure 1.
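A minimal sketch of Eqs. (8)–(12), with toy shapes and illustrative parameter values of our own choosing (in the real network the β_n and v_n are trained, and the weight bases come from Eq. (2)):

```python
import torch
import torch.nn.functional as F

def binarize_act(R, v):
    # Eqs. (8)-(9): H_v(R) = 2 * 1[clip(R + v, 0, 1) >= 0.5] - 1
    return 2.0 * (torch.clamp(R + v, 0.0, 1.0) >= 0.5).float() - 1.0

def multi_binary_conv(R, weight_bases, alpha, v, beta, stride=1, padding=1):
    # Eq. (12): M*N convolutions, each weighted by alpha_m * beta_n; every term is a
    # binary-weight / binary-activation convolution, hence bitwise on suitable hardware.
    acts = [binarize_act(R, v_n) for v_n in v]                 # Eq. (11)
    out = 0.0
    for a_m, B_m in zip(alpha, weight_bases):
        for b_n, A_n in zip(beta, acts):
            out = out + a_m * b_n * F.conv2d(A_n, B_m, stride=stride, padding=padding)
    return out

R = torch.randn(1, 16, 8, 8)                                           # toy activations
bases = [2.0 * (torch.randn(32, 16, 3, 3) >= 0).float() - 1.0 for _ in range(3)]  # stand-in B_m
alpha = torch.tensor([0.6, 0.3, 0.1])                                  # illustrative values only
v = torch.tensor([-0.2, 0.0, 0.2])
beta = torch.tensor([0.5, 0.3, 0.2])
print(multi_binary_conv(R, bases, alpha, v, beta).shape)               # torch.Size([1, 32, 8, 8])
```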
3.3 Training algorithm

A typical block in a CNN contains several different layers, usually in the following order: (1) convolution, (2) batch normalization, (3) activation, and (4) pooling. The batch normalization layer [Ioffe and Szegedy, 2015] normalizes the input batch by its mean and variance. The activation is an element-wise non-linear function (e.g., Sigmoid, ReLU). The pooling layer applies some type of pooling (e.g., max, min, or average) to the input batch. In our experiments, we observe that applying max-pooling to binary input returns a tensor most of whose elements are equal to +1, resulting in a noticeable drop in accuracy. A similar phenomenon has been reported in Rastegari et al. [2016] as well. Therefore, we place the max-pooling layer before the batch normalization and activation.

Since our binarization scheme approximates the full-precision weights, using the full-precision pre-trained model serves as an ideal initialization. However, fine-tuning is always required for the weights to adapt to the new network structure. The training procedure of ABC-Net is summarized in Section S1 of the supplementary material. It is worth noting that as $M$ increases, the shift parameters get closer together and the bases of the linear combination become more correlated, which sometimes leads to rank deficiency when solving (3). This can be tackled with $\ell_2$ regularization.

4 Experiment results

In this section, the proposed ABC-Net is evaluated on the ILSVRC12 ImageNet classification dataset [Deng et al., 2009], and on the visual perception of forest trails dataset for mobile robots [Giusti et al., 2016] in Section S6 of the supplementary material.

4.1 Experiment results on ImageNet dataset

The ImageNet dataset contains about 1.2 million high-resolution natural training images spanning 1000 object categories. The validation set contains 50k images. We use Resnet [He et al., 2016] as the network topology. The images are resized to 224x224 before being fed into the network. We report our classification performance using Top-1 and Top-5 accuracies.

4.1.1 Effect of weight approximation

We first evaluate the weight approximation technique by examining the accuracy improvement for a binary model. To eliminate confounding variables, we leave the activations in full precision in this experiment. Table 1 shows the prediction accuracy of ABC-Net on ImageNet for different choices of $M$. For comparison, we add the results of the Binary-Weights-Network (denoted ‘BWN’) reported in Rastegari et al. [2016] and the full-precision network (denoted ‘FP’). The BWN binarizes weights and leaves the activations in full precision, as we do. All results in this experiment use Resnet-18 as the network topology. It can be observed that as $M$ increases, the accuracy of ABC-Net converges to that of its full-precision counterpart. The Top-1 gap between them reduces to only 0.9 percentage points when $M = 5$, which suggests that this approach nearly eliminates the accuracy degradation caused by binarizing the weights. For interested readers, Figure S4 in Section S5 of the supplementary material shows that the relationship between accuracy and $M$ appears to be linear. Also, in Section S2 of the supplementary material, a visualization of the approximated weights is provided.

4.1.2 Configuration space exploration

We explore the configuration space of combinations of the numbers of weight bases and activations. Table 2 presents the results of ABC-Net with different configurations. The parameter settings for these experiments are provided in Section S4 of the supplementary material. Since balancing multiple factors such as training time, inference time, model size, and accuracy is a matter of practical trade-offs, there is no definitive conclusion as to which combination of $(M, N)$ one should choose. In general, Table 2 shows that (1) the prediction accuracy of ABC-Net improves greatly as the number of binary activations increases, analogous to the weight approximation approach; (2) larger $M$ or $N$ gives better accuracy; (3) when $M = N = 5$, the Top-1 gap between the accuracy of the full-precision model and the binary one reduces to around 5%. To gain a visual understanding and show the possibility of extending to other tasks such as object detection, we show a sample of feature maps in Section S3 of the supplementary material.
4.1.3 Comparison with the state-of-the-art

Table 3 presents a comparison between ABC-Net and several other state-of-the-art models: the full-precision Resnet-18, BWN and XNOR-Net from [Rastegari et al., 2016], DoReFa-Net from [Zhou et al., 2016], and BNN from [Courbariaux et al., 2016]. (Courbariaux et al. [2016] did not report results on ImageNet; we implemented their method and report the result.) All comparison models use Resnet-18 as the network topology. The full-precision Resnet-18 achieves 69.3% Top-1 accuracy. Although the BWN model of Rastegari et al. [2016] and DoReFa-Net perform well, it should be noted that they use full-precision and 4-bit activations, respectively. Models that use both binary weights and activations (XNOR-Net and BNN) achieve much less satisfactory accuracy and are significantly outperformed by ABC-Net with multiple binary weight bases and activations. It can be seen that ABC-Net achieves state-of-the-art performance as a binary model.

One might argue that a 5-bit quantization scheme could reach accuracy similar to that of ABC-Net with 5 weight bases and 5 binary activations. However, the former is less efficient and requires distinctly more hardware resources. More detailed discussion can be found in Section 5.2.

5 Discussion

5.1 Why does adding a shift parameter work?

Intuitively, the multiple binarized weight bases/activations scheme works because it allows more information to pass through. Consider the case where a real value, say 1.5, is passed to a binarization function $f(x) = \mathrm{sign}(x)$, where sign maps a positive $x$ to 1 and otherwise to -1. In that case, the output $f(1.5)$ is 1, which only tells us that the input value is positive. Now imagine that we have two binarization functions $f_1(x) = \mathrm{sign}(x)$ and $f_2(x) = \mathrm{sign}(x - 2)$. In that case $f_1$ outputs 1 and $f_2$ outputs -1, which tells us that the input value is not only positive but also smaller than 2. Clearly, each function contributes differently to representing the input, and more information is gained from $f_2$ compared to the single-function case. A minimal numeric sketch of this effect appears at the end of this subsection.

From another point of view, both the coefficients (the $\beta$'s) and the shift parameters are expected to learn and exploit the statistical features of the full-precision tensors, just like the scale and shift parameters in batch normalization. If we have more binarized weight bases/activations, the network has the capacity to approximate the full-precision one more precisely. Therefore, it can be deduced that when $M$ or $N$ is large enough, the network learns to tune itself so that the combination of $M$ weight bases or $N$ binarized activations acts like the full-precision one.
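The following toy sketch (our own illustration, not from the paper) quantifies this intuition: encoding samples with N shifted sign functions and fitting the β coefficients by least squares, the reconstruction error of the full-precision values shrinks as N grows.

```python
import numpy as np

# Toy check of Section 5.1: more shifted binarizations let a linear combination
# reconstruct the full-precision values more accurately.
rng = np.random.default_rng(0)
r = rng.normal(size=10_000)                       # stand-in full-precision activations

def recon_error(r, shifts):
    A = np.stack([np.where(r + v >= 0, 1.0, -1.0) for v in shifts], axis=1)
    beta, *_ = np.linalg.lstsq(A, r, rcond=None)  # best linear combination of the sign codes
    return np.linalg.norm(r - A @ beta) / np.linalg.norm(r)

for N in (1, 2, 3, 5):
    shifts = np.linspace(-1.0, 1.0, N) if N > 1 else np.array([0.0])
    print(N, round(recon_error(r, shifts), 3))    # relative error is non-increasing in N
```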
5.2 Advantage over the fixed-point quantization scheme

It should be noted that there are key differences between the multiple binarization scheme ($M$ binarized weight bases or $N$ binarized activations) proposed in this paper and the fixed-point quantization scheme in previous works such as [Zhou et al., 2016, Hubara et al., 2016], even though at first thought a K-bit quantization scheme seems to share the same memory requirement as K binarizations. Specifically, our K binarized weight bases/activations are preferable to the fixed K-bit width scheme for the following reasons:

(1) The K binarization scheme preserves binarization for bitwise operations. One or several bitwise operations are known to be more efficient than a fixed-point multiplication, which is a major reason that BNN/XNOR-Net were proposed.

(2) A K-bit width multiplier consumes more resources than K 1-bit multipliers in a digital chip: it requires more than K bits to store and compute, otherwise it could easily overflow/underflow. For example, if a real number is quantized to a 2-bit number, a possible choice of range is {0,1,2,4}. In this 2-bit multiplication, when both numbers are 4, the output is $4 \times 4 = 16$, which is not within the range. In [Zhou et al., 2016], the range of activations is constrained within [0,1], which seems to avoid this situation. However, fractional numbers do not solve this problem: severe precision deterioration appears during the multiplication if no extra resources are available. The fact that the complexity of a multiplier is proportional to the square of the bit-width can be found in the literature (e.g., Sec. 3.1.1 in [Grabbe et al., 2003]). In contrast, our K binarization scheme does not have this issue – it always outputs within the range {-1,+1}. The saved hardware resources can be further used for parallel computing.

(3) A binary activation can use a spiking response for event-based computation and communication (consuming energy only when necessary) and is therefore energy-efficient [Esser et al., 2016]. This can be exploited in our scheme, but not in the fixed K-bit width scheme. Also, we have mentioned that a K-bit width multiplier consumes more resources than K 1-bit multipliers; it is noteworthy that these resources include power. To sum up, K-bit multipliers are among the most space- and power-hungry components of digital implementations of DNNs. Our scheme could bring great benefits to specialized DNN hardware.

5.3 Further computation reduction at run-time

On specialized hardware, the following operations in our scheme can be integrated with other operations at run-time to further reduce the computation requirements.

(1) Shift operations. The existence of shift parameters seems to require extra additions/subtractions (see (2) and (8)). However, binarization with a shift parameter can be implemented as a comparator whose threshold is determined by the shift parameter, e.g.,

$$H_v(R) = \begin{cases} 1, & R \ge 0.5 - v; \\ -1, & R < 0.5 - v, \end{cases}$$

where $0.5 - v$ is a constant, so no extra additions/subtractions are involved.

(2) Batch normalization. At run-time, a batch normalization is simply an affine function, say $\mathrm{BN}(R) = aR + b$, whose scale and shift parameters $a$, $b$ are fixed and can be integrated with the $v_n$'s. More specifically, a batch normalization can be folded into a binarization operation as follows:

$$H_v(\mathrm{BN}(R)) = \begin{cases} 1, & aR + b \ge 0.5 - v; \\ -1, & aR + b < 0.5 - v \end{cases} \;=\; \begin{cases} 1, & R \ge (0.5 - v - b)/a; \\ -1, & R < (0.5 - v - b)/a. \end{cases}$$

Therefore, there is no extra cost for the batch normalization.
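A small numeric check of this folding (our own toy example with made-up constants; it assumes a > 0): the shift v and the batch-norm parameters a, b collapse into a single precomputed comparator threshold.

```python
import numpy as np

# Verify that binarize(batch_norm(R)) equals a single comparison against a
# precomputed threshold, as in Section 5.3 (2).
a, b, v = 0.8, -0.1, 0.2          # illustrative fixed run-time constants
R = np.random.default_rng(1).normal(size=1000)

def H(x, v):                      # Eqs. (8)-(9)
    return np.where(np.clip(x + v, 0.0, 1.0) >= 0.5, 1.0, -1.0)

composed = H(a * R + b, v)                      # binarize(batch-norm(R))
threshold = (0.5 - v - b) / a                   # folded constant, computed once
folded = np.where(R >= threshold, 1.0, -1.0)    # single comparison at run-time

assert np.array_equal(composed, folded)
print("threshold:", threshold)
```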
6 Conclusion and future work

We have introduced a novel binarization scheme for weights and activations during the forward and backward propagations, called ABC-Net. We have shown that it is possible to train a binary CNN with ABC-Net on ImageNet and achieve accuracy close to its full-precision counterpart. The binarization scheme proposed in this work is parallelizable and hardware-friendly, and the impact of such a method on specialized hardware implementations of CNNs could be major, by replacing most multiplications in convolution with bitwise operations. The potential to speed up test-time inference might be very useful for real-time embedded systems. Future work includes the extension of these results to other tasks such as object detection and to other models such as RNNs. Also, it would be interesting to investigate using FPGAs/ASICs or other customized deep learning processors [Liu et al., 2016] to implement ABC-Net at run-time.

7 Acknowledgement

We acknowledge Mr Jingyang Xu for helpful discussions.
1. What is the main contribution of the paper regarding binary convolutional neural networks? 2. What are the strengths and weaknesses of the proposed method compared to other approaches? 3. Do you have any concerns or suggestions regarding the experimental setup or results? 4. How does the reviewer assess the novelty and significance of the paper's content? 5. Are there any questions or aspects that the reviewer would like to see addressed in future works related to this topic?
Review
Review

The paper describes a new method for training binary convolutional neural networks. The scheme addresses using both binary weights and activations. The key insight here is to approximate the real-valued weights via a linear combination of M binary basis weights. The coefficients for reconstructing the real weights can be found using least squares in the forward pass, and then pulled outside the convolution to allow for fast binary convolution at test time. A similar approach is taken for the activations, but in this case the weights and shifts are trained as normal during backpropagation. The result is a network that requires M times more binary convolutions than a straightforward binary neural network, but it is expected that these will be significantly more hardware-friendly. Experiments on ImageNet show that the approach outperforms competing approaches. Binary convolutional networks are an important topic with obvious commercial applications. The idea in the paper is good, and the results on ImageNet are encouraging.

Comments

1. Given that you approximate a real-valued convolution with M binary convolutions, it seems to me that the approximate cost would be similar to using M binary CNNs like XNOR nets. With this in mind, I think the approach should really be compared to an ensemble of M XNOR nets trained with different initial weights, both in terms of total train time and prediction accuracy.

2. The paper suggests the v_i shifts are trainable. Why are values for these given in Table 4 and the supplementary material? Are these the initial values?

3. The paper uses fixed values for the u_i's but mentions that these could also be trained. Why not train them too? Have you tried this?

4. It's not clear from the paper if solving regression problems for the alphas during training adds much to the train time.

5. It would be nice to have an estimate of the total cost of inference for the approach on a typical net (e.g. ResNet-18) given some realistic assumptions about the hardware, and to compare this with other methods (perhaps in the supplementary material).
NIPS
1. What is the main contribution of the paper in the field of neural networks? 2. How does the proposed method affect the inference time performance of the network? 3. Can you explain how the solution uses both local optimization and backpropagation? 4. What are the potential benefits of combining this approach with custom hardware? 5. What is the weakness of the paper regarding the quality of the test baseline?
Review
Review

This paper proposes a scheme to approximate the weights of a neural network layer by a linear combination of multiple layers with binary {+1, -1} weights. The proposed solution affects the inference-time performance of the network only, but it relies on batch normalization at training time to improve its binarizability. The solution makes use of both local optimization of the weight matrices and backpropagation to optimize them in their global context. A nice property of the approach is that it can result in a speedup even on current hardware, and it has very good potential for even greater speed and power gains with custom hardware, as it relies on very cheap and highly parallelizable operations. Given that this is the first solution that can give comparable recognition performance while massively cutting the raw computational cost of the network (using 5 binary operations for each floating-point one), and given its great potential when combined with custom hardware, this paper could end up having a great impact on the design of custom accelerators and mobile vision in general. The only weakness is that the quality was tested against a relatively weak baseline: the full-precision reference model reaches ~69% top-1 accuracy on the ILSVRC2012 dataset, while the current state of the art is over 80% top-1 accuracy.
NIPS
Title Towards Accurate Binary Convolutional Neural Network Abstract We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations. 1 Introduction Convolutional neural networks (CNNs) have achieved state-of-the-art results on real-world applications such as image classification [He et al., 2016] and object detection [Ren et al., 2015], with the best results obtained with large models and sufficient computation resources. Concurrent to these progresses, the deployment of CNNs on mobile devices for consumer applications is gaining more and more attention, due to the widespread commercial value and the exciting prospect. On mobile applications, it is typically assumed that training is performed on the server and test or inference is executed on the mobile devices [Courbariaux et al., 2016, Esser et al., 2016]. In the training phase, GPUs enabled substantial breakthroughs because of their greater computational speed. In the test phase, however, GPUs are usually too expensive to deploy. Thus improving the test-time performance and reducing hardware costs are likely to be crucial for further progress, as mobile applications usually require real-time, low power consumption and fully embeddable. As a result, there is much interest in research and development of dedicated hardware for deep neural networks (DNNs). Binary neural networks (BNNs) [Courbariaux et al., 2016, Rastegari et al., 2016], i.e., neural networks with weights and perhaps activations constrained to only two possible values (e.g., -1 or +1), would bring great benefits to specialized DNN hardware for three major reasons: (1) the binary weights/activations reduce memory usage and model size 32 times compared to single-precision version; (2) if weights are binary, then most multiply-accumulate operations can be replaced by simple accumulations, which is beneficial because multipliers are the most space and power-hungry components of the digital implementation of neural networks; (3) furthermore, if both activations and weights are binary, the multiply-accumulations can be replaced by the bitwise operations: xnor and bitcount Courbariaux et al. [2016]. This could have a big impact on dedicated deep learning hardware. For instance, a 32-bit floating point multiplier costs about 200 Xilinx FPGA slices [Govindu et al., 2004], whereas a 1-bit xnor gate only costs a single slice. Semiconductor ⇤ indicates corresponding author. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
manufacturers like IBM [Esser et al., 2016] and Intel [Venkatesh et al., 2016] have been involved in the research and development of related chips. However, binarization usually cause severe prediction accuracy degradation, especially on complex tasks such as classification on ImageNet dataset. To take a closer look, Rastegari et al. [2016] shows that binarizing weights causes the accuracy of Resnet-18 drops from 69.3% to 60.8% on ImageNet dataset. If further binarize activations, the accuracy drops to 51.2%. Similar phenomenon can also be found in literatures such as [Hubara et al., 2016]. Clearly there is a considerable gap between the accuracy of a full-precision model and a binary model. This paper proposes a novel scheme for binarizing CNNs, which aims to alleviate, or even eliminate the accuracy degradation, while still significantly reducing inference time, resource requirement and power consumption. The paper makes the following major contributions. • We approximate full-precision weights with the linear combination of multiple binary weight bases. The weights values of CNNs are constrained to { 1,+1}, which means convolutions can be implemented by only addition and subtraction (without multiplication), or bitwise operation when activations are binary as well. We demonstrate that 3⇠5 binary weight bases are adequate to well approximate the full-precision weights. • We introduce multiple binary activations. Previous works have shown that the quantization of activations, especially binarization, is more difficult than that of weights [Cai et al., 2017, Courbariaux et al., 2016]. By employing five binary activations, we have been able to reduce the Top-1 and Top-5 accuracy degradation caused by binarization to around 5% on ImageNet compared to the full precision counterpart. It is worth noting that the multiple binary weight bases/activations scheme is preferable to the fixedpoint quantization in previous works. In those fixed-point quantized networks one still needs to employ arithmetic operations, such as multiplication and addition, on fixed-point values. Even though faster than floating point, they still require relatively complex logic and can consume a lot of power. Detailed discussions can be found in Section 5.2. Ideally, combining more binary weight bases and activations always leads to better accuracy and will eventually get very close to that of full-precision networks. We verify this on ImageNet using Resnet network topology. This is the first time a binary neural network achieves prediction accuracy comparable to its full-precision counterpart on ImageNet. 2 Related work Quantized Neural Networks: High precision parameters are not very necessary to reach high performance in deep neural networks. Recent research efforts (e.g., [Hubara et al., 2016]) have considerably reduced a large amounts of memory requirement and computation complexity by using low bitwidth weights and activations. Zhou et al. [2016] further generalized these schemes and proposed to train CNNs with low bitwidth gradients. By performing the quantization after network training or using the “straight-through estimator (STE)" [Bengio et al., 2013], these works avoided the issues of non-differentiable optimization. While some of these methods have produced good results on datasets such as CIFAR-10 and SVHN, none has produced low precision networks competitive with full-precision models on large-scale classification tasks, such as ImageNet. 
In fact, [Zhou et al., 2016] and [Hubara et al., 2016] experiment with different combinations of bitwidth for weights and activations, and show that the performance of their highly quantized networks deteriorates rapidly when the weights and activations are quantized to less than 4-bit numbers. Cai et al. [2017] enhance the performance of a low bitwidth model by addressing the gradient mismatch problem, nevertheless there is still much room for improvement. Binarized Neural Networks: The binary representation for deep models is not a new topic. At the emergence of artificial neural networks, inspired biologically, the unit step function has been used as the activation function [Toms, 1990]. It is known that binary activation can use spiking response for event-based computation and communication (consuming energy only when necessary) and therefore is energy-efficient [Esser et al., 2016]. Recently, Courbariaux et al. [2016] introduce BinarizedNeural-Networks (BNNs), neural networks with binary weights and activations at run-time. Different from their work, Rastegari et al. [2016] introduce simple, efficient, and accurate approximations to CNNs by binarizing the weights and even the intermediate representations in CNNs. All these works drastically reduce memory consumption, and replace most arithmetic operations with bitwise operations, which potentially lead to a substantial increase in power efficiency. In all above mentioned works, binarization significantly reduces accuracy. Our experimental results on ImageNet show that we are close to filling the gap between the accuracy of a binary model and its full-precision counterpart. We relied on the idea of finding the best approximation of full-precision convolution using multiple binary operations, and employing multiple binary activations to allow more information passing through. 3 Binarization methods In this section, we detail our binarization method, which is termed ABC-Net (Accurate-BinaryConvolutional) for convenience. Bear in mind that during training, the real-valued weights are reserved and updated at every epoch, while in test-time only binary weights are used in convolution. 3.1 Weight approximation Consider a L-layer CNN architecture. Without loss of generality, we assume the weights of each convolutional layer are tensors of dimension (w, h, c in , c out ), which represents filter width, filter height, input-channel and output-channel respectively. We propose two variations of binarization method for weights at each layer: 1) approximate weights as a whole and 2) approximate weights channel-wise. 3.1.1 Approximate weights as a whole At each layer, in order to constrain a CNN to have binary weights, we estimate the real-value weight filter W 2 Rw⇥h⇥cin⇥cout using the linear combination of M binary filters B 1 ,B 2 , · · · ,BM 2 { 1,+1}w⇥h⇥cin⇥cout such that W ⇡ ↵ 1 B 1 +↵ 2 B 2 +· · ·+↵MBM . To find an optimal estimation, a straightforward way is to solve the following optimization problem: min ↵,B J(↵,B) = ||w B↵||2, s.t. Bij 2 { 1,+1}, (1) where B = [vec(B 1 ), vec(B 2 ), · · · , vec(BM )], w = vec(W ) and ↵ = [↵1,↵2, · · · ,↵M ]T. Here the notation vec(·) refers to vectorization. Although a local minimum solution to (1) can be obtained by numerical methods, one could not backpropagate through it to update the real-value weight filter W . 
To address this issue, assuming the mean and standard deviation of W are mean(W ) and std(W ) respectively, we fix Bi’s as follows: Bi = Fui(W ) := sign(W̄ + uistd(W )), i = 1, 2, · · · ,M, (2) where W̄ = W mean(W ), and ui is a shift parameter. For example, one can choose ui’s to be ui = 1 + (i 1) 2M 1 , i = 1, 2, · · · ,M, to shift evenly over the range [ std(W ), std(W )], or leave it to be trained by the network. This is based on the observation that the full-precision weights tend to have a symmetric, non-sparse distribution, which is close to Gaussian. To gain more intuition and illustrate the approximation effectiveness, an example is visualized in Section S2 of the supplementary material. With Bi’s chosen, (1) becomes a linear regression problem min ↵ J(↵) = ||w B↵||2, (3) in which Bi’s serve as the bases in the design/dictionary matrix. We can then back-propagate through Bi’s using the “straight-through estimator” (STE) [Bengio et al., 2013]. Assume c as the cost function, A and O as the input and output tensor of a convolution respectively, the forward and backward approach of an approximated convolution during training can be computed as follows: Forward: B 1 ,B 2 , · · · ,BM = Fu 1 (W ), Fu 2 (W ), · · · , FuM (W ), (4) Solve (3) for ↵, (5) O = MX m=1 ↵mConv(Bm,A). (6) Backward: @c @W = @c @O MX m=1 ↵m @O @Bm @Bm @W ! STE = @c @O MX m=1 ↵m @O @Bm ! = MX m=1 ↵m @c @Bm . (7) In test-time, only (6) is required. The block structure of this approximated convolution layer is shown on the left side in Figure 1. With suitable hardwares and appropriate implementations, the convolution can be efficiently computed. For example, since the weight values are binary, we can implement the convolution with additions and subtractions (thus without multiplications). Furthermore, if the input A is binary as well, we can implement the convolution with bitwise operations: xnor and bitcount [Rastegari et al., 2016]. Note that the convolution with each binary filter can be computed in parallel. 3.1.2 Approximate weights channel-wise Alternatively, we can estimate the real-value weight filter Wi 2 Rw⇥h⇥cin of each output channel i 2 {1, 2, · · · , c out } using the linear combination of M binary filters Bi1,Bi2, · · · ,BiM 2 { 1,+1}w⇥h⇥cin such that Wi ⇡ ↵i1Bi1 + ↵i2Bi2 + · · · + ↵iMBiM . Again, to find an optimal estimation, we solve a linear regression problem analogy to (3) for each output channel. After convolution, the results are concatenated together along the output-channel dimension. If M = 1, this approach reduces to the Binary-Weights-Networks (BWN) proposed in [Rastegari et al., 2016]. Compared to weights approximation as a whole, the channel-wise approach approximates weights more elaborately, however no extra cost is needed during inference. Since this approach requires more computational resources during training, we leave it as a future work and focus on the former approximation approach in this paper. 3.2 Multiple binary activations and bitwise convolution As mentioned above, a convolution can be implemented without multiplications when weights are binarized. However, to utilize the bitwise operation, the activations must be binarized as well, as they are the inputs of convolutions. Similar to the activation binarization procedure in [Zhou et al., 2016], we binarize activations after passing it through a bounded activation function h, which ensures h(x) 2 [0, 1]. We choose the bounded rectifier as h. 
Formally, it can be defined as:
$$h_v(x) = \mathrm{clip}(x + v, 0, 1), \tag{8}$$
where $v$ is a shift parameter. If $v = 0$, then $h_v$ is the clip activation function used in [Zhou et al., 2016].

We constrain the binary activations to be either 1 or -1. In order to transform the real-valued activation $R$ into a binary activation, we use the following binarization function:
$$H_v(R) := 2\,\mathbb{I}_{h_v(R) \ge 0.5} - 1, \tag{9}$$
where $\mathbb{I}$ is the indicator function. The conventional forward and backward passes of this activation are given by:
$$\text{Forward: } A = H_v(R). \qquad \text{Backward: } \frac{\partial c}{\partial R} = \frac{\partial c}{\partial A} \odot \mathbb{I}_{0 \le R + v \le 1} \quad \text{(using STE)} \tag{10}$$
Here $\odot$ denotes the Hadamard product. As can be expected, binarizing activations in this way is rather crude and leads to non-trivial losses in accuracy, as shown in Rastegari et al. [2016] and Hubara et al. [2016]. While it is also possible to approximate activations with linear regression, as done for the weights, another critical challenge arises: unlike the weights, the activations vary at test-time. Fortunately, this difficulty can be avoided by exploiting the statistical structure of the activations of deep networks.

Our scheme can be described as follows. First, to keep the distribution of activations relatively stable, we resort to batch normalization [Ioffe and Szegedy, 2015]. This is a widely used normalization technique which forces the responses of each network layer to have zero mean and unit variance; we apply this normalization before the activation. Second, we estimate the real-valued activation $R$ using a linear combination of $N$ binary activations $A_1, A_2, \cdots, A_N$ such that $R \approx \beta_1 A_1 + \beta_2 A_2 + \cdots + \beta_N A_N$, where
$$A_1, A_2, \cdots, A_N = H_{v_1}(R), H_{v_2}(R), \cdots, H_{v_N}(R). \tag{11}$$
Different from the weights, the parameters $\beta_n$ and $v_n$ ($n = 1, \cdots, N$) here are both trainable, just like the scale and shift parameters in batch normalization. Without an explicit linear regression step, the $\beta_n$'s and $v_n$'s are tuned by the network itself during training and fixed at test-time. They are expected to learn and exploit the statistical features of the full-precision activations.

The resulting network architecture outputs multiple binary activations $A_1, A_2, \cdots, A_N$ and their corresponding coefficients $\beta_1, \beta_2, \cdots, \beta_N$, which allows more information to pass through compared to a single binarization. Combining this with the weight approximation, the whole convolution scheme is given by:
$$\mathrm{Conv}(W, R) \approx \mathrm{Conv}\!\left(\sum_{m=1}^{M} \alpha_m B_m, \sum_{n=1}^{N} \beta_n A_n\right) = \sum_{m=1}^{M}\sum_{n=1}^{N} \alpha_m \beta_n \,\mathrm{Conv}(B_m, A_n), \tag{12}$$
which suggests that it can be implemented by computing $M \times N$ bitwise convolutions in parallel. An example of the whole convolution scheme is shown in Figure 1 (a small numerical sketch of (12) follows the next paragraph).

3.3 Training algorithm

A typical block in a CNN contains several different layers, usually in the following order: (1) Convolution, (2) Batch Normalization, (3) Activation and (4) Pooling. The batch normalization layer [Ioffe and Szegedy, 2015] normalizes the input batch by its mean and variance. The activation is an element-wise non-linear function (e.g., Sigmoid, ReLU). The pooling layer applies some type of pooling (e.g., max, min or average) to the input batch. In our experiments, we observe that applying max-pooling to binary inputs returns a tensor most of whose elements are equal to +1, resulting in a noticeable drop in accuracy. A similar phenomenon has been reported in Rastegari et al. [2016] as well. Therefore, we place the max-pooling layer before the batch normalization and activation.
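To make the scheme of Sections 3.1–3.2 concrete, here is a small numerical sketch of the test-time convolution in (2), (3), (6), (9), (11) and (12). This is our own toy illustration, not the authors' released code; it stands in for "Conv" with a 1×1 convolution (a channel-mixing tensordot) so that the bilinear decomposition in (12) is easy to verify, and all shapes as well as the choices $M = 3$, $N = 2$ are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, H, W_, M, N = 4, 6, 5, 5, 3, 2

# 1x1 "convolution": linear in both the weight and the activation arguments
conv = lambda weight, act: np.tensordot(act, weight, axes=([0], [0]))  # -> (H, W, c_out)

# Weight side: fixed shifted-sign bases (2) and a least-squares alpha (3).
Wt = rng.standard_normal((c_in, c_out))
us = np.linspace(-1.0, 1.0, M)                          # evenly spaced shifts
Bs = [np.sign((Wt - Wt.mean()) + u * Wt.std()) for u in us]
Bmat = np.stack([B.reshape(-1) for B in Bs], axis=1)
alpha, *_ = np.linalg.lstsq(Bmat, Wt.reshape(-1), rcond=None)

# Activation side: N shifted binarizations (9)/(11); beta and v are trainable
# in the paper, random stand-ins here.
R = rng.standard_normal((c_in, H, W_))                  # batch-normalized input
vs = np.linspace(-0.1, 0.1, N)
Hv = lambda x, v: 2.0 * (np.clip(x + v, 0.0, 1.0) >= 0.5) - 1.0
As = [Hv(R, v) for v in vs]
beta = rng.standard_normal(N)

# Equation (12): one real-valued conv equals M*N binary convs combined linearly.
lhs = conv(sum(a * B for a, B in zip(alpha, Bs)),
           sum(b * A for b, A in zip(beta, As)))
rhs = sum(alpha[m] * beta[n] * conv(Bs[m], As[n])
          for m in range(M) for n in range(N))
print(np.allclose(lhs, rhs))  # True: the decomposition is exact
```

In a bitwise implementation each of the $M \times N$ inner convolutions operates on $\{-1,+1\}$ tensors and can be carried out with xnor/bitcount, with the scalar coefficients applied afterwards.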
Since our binarization scheme approximates the full-precision weights, using the full-precision pre-trained model serves as a perfect initialization. However, fine-tuning is always required for the weights to adapt to the new network structure. The training procedure of ABC-Net is summarized in Section S1 of the supplementary material. It is worth noting that as $M$ increases, the shift parameters get closer and the bases of the linear combination become more correlated, which sometimes leads to rank deficiency when solving (3). This can be tackled with $\ell_2$ regularization.

4 Experiment results

In this section, the proposed ABC-Net is evaluated on the ILSVRC12 ImageNet classification dataset [Deng et al., 2009], and on the visual perception of forest trails dataset for mobile robots [Giusti et al., 2016] in Section S6 of the supplementary material.

4.1 Experiment results on ImageNet dataset

The ImageNet dataset contains about 1.2 million high-resolution natural images for training, spanning 1000 categories of objects. The validation set contains 50k images. We use Resnet [He et al., 2016] as the network topology. The images are resized to 224×224 before being fed into the network. We report our classification performance using Top-1 and Top-5 accuracies.

4.1.1 Effect of weight approximation

We first evaluate the weight approximation technique by examining the accuracy improvement for a binary model. To eliminate other variables, we leave the activations at full precision in this experiment. Table 1 shows the prediction accuracy of ABC-Net on ImageNet with different choices of $M$. For comparison, we add the results of the Binary-Weights-Network (denoted 'BWN') reported in Rastegari et al. [2016] and the full-precision network (denoted 'FP'). The BWN binarizes weights and leaves the activations at full precision, as we do. All results in this experiment use Resnet-18 as the network topology. It can be observed that as $M$ increases, the accuracy of ABC-Net converges to its full-precision counterpart. The Top-1 gap between them reduces to only 0.9 percentage points when $M = 5$, which suggests that this approach nearly eliminates the accuracy degradation caused by binarizing weights. For interested readers, Figure S4 in Section S5 of the supplementary material shows that the relationship between accuracy and $M$ appears to be linear. Also, in Section S2 of the supplementary material, a visualization of the approximated weights is provided.

4.1.2 Configuration space exploration

We explore the configuration space of combinations of the number of weight bases and activations. Table 2 presents the results of ABC-Net with different configurations. The parameter settings for these experiments are provided in Section S4 of the supplementary material. Since balancing multiple factors such as training time, inference time, model size and accuracy is ultimately a practical trade-off, there is no definite conclusion as to which combination of $(M, N)$ one should choose. In general, Table 2 shows that (1) the prediction accuracy of ABC-Net improves greatly as the number of binary activations increases, which is analogous to the weight approximation approach; (2) larger $M$ or $N$ gives better accuracy; (3) when $M = N = 5$, the Top-1 gap between the accuracy of a full-precision model and a binary one reduces to around 5%. To give a visual understanding and show the possibility of extensions to other tasks such as object detection, we show a sample of feature maps in Section S3 of the supplementary material.
4.1.3 Comparison with the state-of-the-art

Table 3 presents a comparison between ABC-Net and several other state-of-the-art models, i.e., full-precision Resnet-18, BWN and XNOR-Net in [Rastegari et al., 2016], DoReFa-Net in [Zhou et al., 2016] and BNN in [Courbariaux et al., 2016], respectively (Courbariaux et al. [2016] did not report their result on ImageNet; we implemented their method and present the result). All comparative models use Resnet-18 as the network topology. The full-precision Resnet-18 achieves 69.3% Top-1 accuracy. Although Rastegari et al. [2016]'s BWN model and DoReFa-Net perform well, it should be noted that they use full-precision and 4-bit activations respectively. Models (XNOR-Net and BNN) that use both binary weights and activations achieve much less satisfactory accuracy, and are significantly outperformed by ABC-Net with multiple binary weight bases and activations. It can be seen that ABC-Net has achieved state-of-the-art performance as a binary model.

One might argue that a 5-bit quantization scheme could reach accuracy similar to that of ABC-Net with 5 weight bases and 5 binary activations. However, the former is less efficient and requires distinctly more hardware resources. More detailed discussion can be found in Section 5.2.

5 Discussion

5.1 Why does adding a shift parameter work?

Intuitively, the multiple binarized weight bases/activations scheme works because it allows more information to pass through. Consider the case that a real value, say 1.5, is passed to a binarization function $f(x) = \mathrm{sign}(x)$, where sign maps a positive $x$ to 1 and otherwise to -1. In that case, the output $f(1.5)$ is 1, which suggests only that the input value is positive. Now imagine that we have two binarization functions $f_1(x) = \mathrm{sign}(x)$ and $f_2(x) = \mathrm{sign}(x - 2)$. In that case $f_1$ outputs 1 and $f_2$ outputs -1, which tells us that the input value is not only positive, but also must be smaller than 2. Clearly, each function contributes differently to representing the input, and more information is gained from $f_2$ compared to the former case. From another point of view, both the coefficients (the $\alpha$'s and $\beta$'s) and the shift parameters are expected to learn and exploit the statistical features of the full-precision tensors, just like the scale and shift parameters in batch normalization. If we have more binarized weight bases/activations, the network has the capacity to approximate the full-precision one more precisely. Therefore, it can be deduced that when $M$ or $N$ is large enough, the network learns to tune itself so that the combination of $M$ weight bases or $N$ binarized activations can act like the full-precision one.

5.2 Advantage over the fixed-point quantization scheme

It should be noted that there are key differences between the multiple binarization scheme ($M$ binarized weight bases or $N$ binarized activations) proposed in this paper and the fixed-point quantization scheme in previous works such as [Zhou et al., 2016, Hubara et al., 2016], though at first thought $K$-bit quantization seems to share the same memory requirement as $K$ binarizations. Specifically, our scheme with $K$ binarized weight bases/activations is preferable to a fixed $K$-bit scheme for the following reasons: (1) The $K$ binarization scheme preserves binarization for bitwise operations. One or several bitwise operations are known to be more efficient than a fixed-point multiplication, which is a major reason that BNN/XNOR-Net were proposed.
(2) A $K$-bit multiplier consumes more resources than $K$ 1-bit multipliers in a digital chip: it requires more than $K$ bits to store and compute, otherwise it could easily overflow/underflow. For example, if a real number is quantized to a 2-bit number, a possible choice of range is {0, 1, 2, 4}. In this 2-bit multiplication, when both numbers are 4, the output is 4 × 4 = 16, which is not within the range. In [Zhou et al., 2016], the range of activations is constrained within [0, 1], which seems to avoid this situation. However, fractional numbers do not solve this problem: severe precision deterioration will appear during the multiplication if there are no extra resources. The fact that the complexity of a multiplier is proportional to the square of the bit-width can be found in the literature (e.g., Sec. 3.1.1 in [Grabbe et al., 2003]). In contrast, our $K$-binarization scheme does not have this issue: it always outputs within the range {-1, 1}. The saved hardware resources can be further used for parallel computing.

(3) A binary activation can use spiking responses for event-based computation and communication (consuming energy only when necessary) and is therefore energy-efficient [Esser et al., 2016]. This can be exploited in our scheme, but not in a fixed $K$-bit scheme. Also, we have mentioned the fact that a $K$-bit multiplier consumes more resources than $K$ 1-bit multipliers; it is noteworthy that these resources include power. To sum up, $K$-bit multipliers are the most space- and power-hungry components of the digital implementation of DNNs. Our scheme could bring great benefits to specialized DNN hardware.

5.3 Further computation reduction at run-time

On specialized hardware, the following operations in our scheme can be integrated with other operations at run-time to further reduce the computation requirement.

(1) Shift operations. The existence of shift parameters seems to require extra additions/subtractions (see (2) and (8)). However, the binarization operation with a shift parameter can be implemented as a comparator in which the shift parameter determines the comparison threshold, e.g.,
$$H_v(R) = \begin{cases} +1, & R \ge 0.5 - v; \\ -1, & R < 0.5 - v, \end{cases}$$
where $0.5 - v$ is a constant, so no extra additions/subtractions are involved.

(2) Batch normalization. At run-time, a batch normalization is simply an affine function, say $\mathrm{BN}(R) = aR + b$, whose scale and shift parameters $a, b$ are fixed and can be integrated with the $v_n$'s. More specifically, a batch normalization can be integrated into a binarization operation as follows:
$$H_v(\mathrm{BN}(R)) = \begin{cases} +1, & aR + b \ge 0.5 - v; \\ -1, & aR + b < 0.5 - v \end{cases} = \begin{cases} +1, & R \ge (0.5 - v - b)/a; \\ -1, & R < (0.5 - v - b)/a. \end{cases}$$
Therefore, there is no extra cost for the batch normalization.

6 Conclusion and future work

We have introduced a novel binarization scheme for weights and activations during forward and backward propagation called ABC-Net. We have shown that it is possible to train a binary CNN with ABC-Net on ImageNet and achieve accuracy close to its full-precision counterpart. The binarization scheme proposed in this work is parallelizable and hardware friendly, and the impact of such a method on specialized hardware implementations of CNNs could be major, by replacing most multiplications in convolution with bitwise operations. The potential speed-up of test-time inference might be very useful for real-time embedded systems. Future work includes the extension of these results to other tasks such as object detection and other models such as RNNs.
Also, it would be interesting to investigate using FPGA/ASIC or other customized deep learning processors [Liu et al., 2016] to implement ABC-Net at run-time.

7 Acknowledgement

We acknowledge Mr Jingyang Xu for helpful discussions.
1. What are the strengths and weaknesses of the paper regarding its contributions and extensions to prior work?
2. How does the reviewer assess the clarity and quality of the paper's content, particularly in terms of presentation and experimentation?
3. What are the concerns regarding the learned B and v parameters, and how do they impact the results?
4. How does the sparsity of weight matrices affect performance, and what are the implications for choosing the number of binary weight matrices?
5. Are there any questions about the initialization procedure and its impact on the results?
6. Are there any requests for additional experiments or analyses to support the findings or address potential issues?
Review
Review
This paper extends previous work done for binary CNNs by using multiple binary weight/activation matrices. Although it presents incremental additions to prior work, the paper shows strong large-scale results on the standard ImageNet benchmark as well as thorough experimentation, and a clear presentation. This paper would be of wide interest to the NIPS audience.

Comments and suggestions for improvements:
- You stated that the B and v parameters of activations are learned by the NN, but in table S5, the learned Bs are all 1.0 except for one row, while the v values are almost symmetric. Could you elaborate further on this point, and update the text to state the exact procedure you use for learning/setting these parameters?
- Could you provide a histogram of the discovered alpha values for weight matrices, to gain more insight into how sparse they are?
- How bad are the results if you don't initialize from the FP CNN? Do you need to start from a fully converged model, or are just a couple of epochs sufficient?
- Given the strong correlation between the number of binary weight matrices M and final performance (as shown in Table 1 and Figure S4), could you provide further experimental results showing the performance drop when M > 5 (as stated in Section 3.3) and the potential effect of regularization to fix it?
- In Section S1, Algorithm 1: you need to add the gradient and update for the v parameters (based on the text, you learn them by backpropagation).
- In Section S5, please state whether you fix the activations to FP.
NIPS
Title
Superset Technique for Approximate Recovery in One-Bit Compressed Sensing

Abstract
One-bit compressed sensing (1bCS) is a method of signal acquisition under extreme measurement quantization that gives important insights on the limits of signal compression and analog-to-digital conversion. The setting is also equivalent to the problem of learning a sparse hyperplane-classifier. In this paper, we propose a generic approach for signal recovery in nonadaptive 1bCS that leads to improved sample complexity for approximate recovery for a variety of signal models, including nonnegative signals and binary signals. We construct 1bCS matrices that are universal, i.e., work for all signals under a model, and that at the same time recover very general random sparse signals with high probability. In our approach, we divide the set of samples (measurements) into two parts, and use the first part to recover a superset of the support of a sparse vector. The second set of measurements is then used to approximate the signal within the superset. While support recovery in 1bCS is well-studied, recovery of a superset of the support requires fewer samples, which then leads to an overall reduction in sample complexity for approximate recovery.

1 Introduction
Sparsity is a natural property of many real-world signals. For example, image and speech signals are sparse in the Fourier basis, which led to the theory of compressed sensing, and more broadly, sampling theory [12, 7]. In some important multivariate optimization problems with many optimal points, sparsity of the solution is also a measure of 'simplicity' and insisting on sparsity is a common method of regularization [19]. While recovering sparse vectors from linear measurements is a well-studied topic, technological advances and increasing data sizes raise new questions. These include quantized and nonlinear signal acquisition models, such as 1-bit compressed sensing [4]. In 1-bit compressed sensing, linear measurements of a sparse vector are quantized to only 1 bit, e.g. indicating whether the measurement outcome is positive or not, and the task is to recover the vector up to a prescribed Euclidean error with the minimum number of measurements. Like compressed sensing, the overwhelming majority of the literature, including this paper, focuses on the nonadaptive setting for the problem. One of the ways to approximately recover a sparse vector from 1-bit measurements is to use a subset of all the measurements to identify the support of the vector. Next, the remainder of the measurements can be used to approximate the vector within the support. Note that this second set of measurements is also predefined, and therefore the entire scheme is still nonadaptive. Such a method appears in the context of 'universal' matrix designs in [9, 1]. The resulting schemes are the best known, in some sense, but still result in a large gap between the upper and lower bounds for approximate recovery of vectors. In this paper we take steps to close these gaps, by presenting a simple yet powerful idea. Instead of using a subset of the measurements to recover the support of the vector exactly, we propose using a (smaller) set of measurements to recover a superset of the support. The remainder of the measurements can then be used to better approximate the vector within the superset.
It turns out that this idea, which we call the "superset technique", leads to an optimal number of measurements for universal schemes for several important classes of sparse vectors (for example, nonnegative vectors). We also present theoretical results providing a characterization of matrices that would yield universal schemes for all sparse vectors.

Prior Results. While the compressed sensing framework was introduced in [7], it was not until [4] that 1-bit quantization of the measurements was considered as well, to try and combat the fact that taking real-valued measurements to arbitrary precision may not be practical in applications. Initially, the focus was primarily on approximately reconstructing the direction of the signal $x$ (the quantization does not preserve any information about the magnitude of the signal, so all we can hope to reconstruct is the direction). However, in [10] the problem of support recovery, as opposed to approximate vector reconstruction, was first considered, and it was shown that $O(k \log n)$ measurements are sufficient to recover the support of a $k$-sparse signal in $\mathbb{R}^n$ with high probability. This was subsequently shown to be tight with the lower bound proven in [3]. All the above results assume that a new measurement matrix is constructed for each sparse signal, and success is defined as either approximately recovering the signal up to error $\epsilon$ in the $\ell_2$ norm (for the approximate vector recovery problem), or exactly recovering the support of the signal (for the support recovery problem), with high probability. Generating a new matrix for each instance is not practical in all applications, which has led to interest in the "universal" versions of the above two problems, where a single matrix must work for support recovery or approximate recovery of all $k$-sparse real signals, with high probability. Plan and Vershynin showed in [15] that both $O\!\left(\frac{k}{\epsilon^6}\log\frac{n}{k}\right)$ and $O\!\left(\frac{k}{\epsilon^5}\log^2\frac{n}{k}\right)$ measurements suffice for universal approximate recovery. The dependence on $\epsilon$ was then improved significantly to $O\!\left(k^3\log\frac{n}{k} + \frac{k}{\epsilon}\right)$ in [9], who also considered the problem of universal support recovery, and showed that for that problem, $O(k^3\log n)$ measurements are sufficient. They showed as well that if we restrict the entries of the signal to be nonnegative (which is the case for many real-world signals such as images), then $O(k^2\log n)$ is sufficient for universal support recovery. The constructions of their measurement matrices are based primarily on combinatorial objects, specifically expanders and Union Free Families (UFFs). Most recently, [1] showed that a modified version of the UFFs used in [9], called "Robust UFFs" (RUFFs), can be used to improve the upper bound on universal support recovery to $O(k^2\log n)$ for all real-valued signals, matching the previous upper bound for nonnegative signals, and showed this is nearly tight with a lower bound of $\Omega(k^2\log n/\log k)$ for real signals. They also show that $O\!\left(k^2\log n + \frac{k}{\epsilon}\right)$ measurements suffice for universal approximate recovery. In tandem with the development of these theoretical results providing necessary and sufficient numbers of measurements for support recovery and approximate vector recovery, there has been a significant body of work in other directions on 1-bit compressed sensing, such as heuristic algorithms that perform well empirically, and tradeoffs between different parameters.
More specifically, [11] introduced a gradient-descent based algorithm called Binary Iterative Hard Thresholding (BIHT) which performs very well in practice; later, [13] gave another heuristic algorithm which performs comparably well or better, and aims to allow for very efficient decoding after the measurements are taken. Other papers such as [18] have studied the tradeoff between the amount of quantization of the signal and the necessary number of measurements.

Our Results. We focus primarily on upper bounds in the universal setting, aiming to give constructions that work with high probability for all sparse vectors. In [1], 3 major open questions are given regarding Universal 1-bit Compressed Sensing, which, paraphrasing, are as follows:
1. How many measurements are necessary and sufficient for a matrix to be used to exactly recover all $k$-sparse binary vectors?

In this work we make progress towards solutions to all three Open Questions. Our primary contribution is the "superset technique", which relies on ideas from the closely related sparse recovery problem of group testing [8]; in particular, we show in Theorem 6 that for a large class of signals including all nonnegative (and thus all binary) signals, we can improve the upper bound for approximate recovery by first recovering an $O(k)$-sized superset of the support rather than the exact support, then subsequently using Gaussian measurements. The previous best upper bound for binary signals from [11] was $O(k^{3/2}\log n)$, which we improve to $O\!\left(k^{3/2} + k\log\frac{n}{k}\right)$, and for nonnegative signals was $O\!\left(\min\!\left(k^2\log\frac{n}{k} + \frac{k}{\epsilon},\ \frac{k}{\epsilon}\log n\right)\right)$, which we improve to $O\!\left(k\log\frac{n}{k} + \frac{k}{\epsilon}\right)$. Regarding Open Question 3, using results of Porat and Rothschild regarding weakly explicit constructions of Error-Correcting Codes (ECCs) on the Gilbert-Varshamov bound [16], we give a construction of Robust UFFs yielding measurement matrices for support recovery with $O(k^2\log n)$ rows in time that is polynomial in $n$ (though not in $k$) in Theorem 12. Based on a similar idea, we also give a weakly explicit construction for non-universal approximate recovery using only slightly more measurements than is optimal ($O(k\log^2 n)$ as opposed to $O(k\log\frac{n}{k})$) in Section 4.2; to our knowledge, explicit constructions in the non-universal setting have not been studied previously. Furthermore, this result gives a single measurement matrix which works for almost all vectors, as opposed to typical non-universal results which work with high probability for a particular vector and matrix pair. In Appendix C, we give a sufficient condition generalizing the notion of RUFFs for a matrix to be used for universal recovery of a superset of the support for all real signals; while we do not provide constructions, this seems to be a promising direction for resolving Open Question 2. The best known upper and lower bounds for the various compressed sensing problems considered in this work are presented in Table 1.

2 Definitions
We write $M_i$ for the $i$th row of the matrix $M$, and $M_{i,j}$ for the entry of $M$ in the $i$th row and $j$th column. We write vectors $x$ in boldface, and write $x_i$ for the $i$th component of the vector $x$. The set $\{1, 2, \ldots, n\}$ will be denoted by $[n]$, and for any set $S$ we write $\mathcal{P}(S)$ for the power set of $S$ (i.e. the set of all subsets of $S$). We will write $\mathrm{supp}(x) \subseteq [n]$ to mean the set of indices of nonzero components of $x$ (so $\mathrm{supp}(x) = \{i : x_i \neq 0\}$), and $\|x\|_0$ to denote $|\mathrm{supp}(x)|$. For a real number $y$, $\mathrm{sign}(y)$ returns 1 if $y$ is strictly positive, $-1$ if $y$ is strictly negative, and 0 if $y = 0$.
While this technically returns more than one bit of information, if we had instead defined $\mathrm{sign}(y)$ to be 1 when $y \ge 0$ and $-1$ otherwise, we could still determine whether $y = 0$ by looking at $\mathrm{sign}(y)$ and $\mathrm{sign}(-y)$, so this affects the number of measurements by only a constant factor. We will not concern ourselves with the constants involved in any of our results, so we have chosen to instead use the more convenient definition.

We will sometimes refer to constructions from the similar "group testing" problem in our results. To this end, we will use the symbol "$\odot$" to represent the group testing measurement between a measurement vector and a signal vector. Specifically, for a measurement $m$ of length $n$ and signal $x$ of length $n$, $m \odot x$ is equal to 1 if $\mathrm{supp}(m) \cap \mathrm{supp}(x)$ is nonempty, and 0 otherwise. We will also make use of the "list-disjunct" matrices used in some group testing constructions.

Definition 1. An $m \times n$ binary matrix $M$ is $(k, l)$-list disjunct if for any two disjoint sets $S, T \subseteq \mathrm{col}(M)$ with $|S| = k$, $|T| = l$, there exists a row in $M$ in which some column from $T$ has a nonzero entry, but every column from $S$ has a zero.

The primary use of such matrices is that, in the group testing model, they can be used to recover a superset of size at most $k + l$ of the support of any $k$-sparse signal $x$ by applying a simple decoding to the measurement results $M \odot x$. In the following definitions, we write $\mathbb{S}$ for a generic set that is the domain of the signal. In this paper we consider signals with domain $\mathbb{R}$, $\mathbb{R}_{\ge 0}$ (nonnegative reals), and $\{0, 1\}$.

Definition 2. An $m \times n$ measurement matrix $M$ can be used for Universal Support Recovery of $k$-sparse $x \in \mathbb{S}^n$ (in $m$ measurements) if there exists a decoding function $f : \{-1, 0, 1\}^m \to \mathcal{P}([n])$ such that $f(\mathrm{sign}(Mx)) = \mathrm{supp}(x)$ for all $x$ satisfying $\|x\|_0 \le k$.

Definition 3. An $m \times n$ measurement matrix $M$ can be used for Universal $\epsilon$-Approximate Recovery of $k$-sparse $x \in \mathbb{S}^n$ (in $m$ measurements) if there exists a decoding function $f : \{-1, 0, 1\}^m \to \mathbb{S}^n$ such that
$$\left\| \frac{x}{\|x\|_2} - \frac{f(\mathrm{sign}(Mx))}{\|f(\mathrm{sign}(Mx))\|_2} \right\|_2 \le \epsilon,$$
for all $x$ with $\|x\|_0 \le k$.

3 Upper Bounds for Universal Approximate Recovery
Here we present our main result, an upper bound on the number of measurements needed to perform universal $\epsilon$-approximate recovery for a large class of real vectors that includes all binary vectors and all nonnegative vectors. The general technique will be to first use what are known as "list-disjunct" matrices from the group testing literature to recover a superset of the support of the signal, then use Gaussian measurements to approximate the signal within the superset. Because the measurements in the second part are Gaussian, we can perform the recovery within the (initially unknown) superset nonadaptively. When restricting to the class of binary or nonnegative signals, our upper bound improves on existing results and is close to known lower bounds.

First, we need a lemma stating the necessary and sufficient conditions on a signal vector $x$ in order to be able to reconstruct the results of a single group testing measurement $m \odot x$ using sign measurements. To concisely state the condition, we introduce some notation: for a subset $S \subseteq [n]$ and vector $x$ of length $n$, we write $x|_S$ to mean the restriction of $x$ to the indices of $S$.

Lemma 1. Let $m \in \{0, 1\}^n$ and $x \in \mathbb{R}^n$. Define $S = \mathrm{supp}(m) \cap \mathrm{supp}(x)$. If either $S$ is empty, or $S$ is nonempty and $m^T|_S\, x|_S \neq 0$, we can reconstruct the result of the group testing measurement $m \odot x$ from the sign measurement $\mathrm{sign}(m^T x)$.

Proof.
We observe $\mathrm{sign}(m^T x)$ and based on that must determine the value of $m \odot x$, or equivalently whether $S$ is empty or nonempty. If $\mathrm{sign}(m^T x) \neq 0$ then $m^T x \neq 0$, so $S$ is nonempty and $m \odot x = 1$. Otherwise we have $\mathrm{sign}(m^T x) = 0$, in which case we must have $m^T x = 0$. If $S$ were nonempty then we would have $m^T|_S\, x|_S = 0$, contradicting our assumption. Therefore in this case we must have $S$ empty and $m \odot x = 0$, so for $x$ satisfying the above condition we can reconstruct the results of a group testing measurement.

For convenience, we use the following property to mean that a signal $x$ has the necessary property from Lemma 1 with respect to every row of a matrix $M$.

Property 1. Let $M$ be an $m \times n$ matrix, and $x$ a signal of length $n$. Define $S_i = \mathrm{supp}(M_i) \cap \mathrm{supp}(x)$. Then for every row $M_i$ of $M$, either $S_i$ is empty, or $M_i^T|_{S_i}\, x|_{S_i} \neq 0$.

Corollary 2. Let $M$ be a $(k, l)$-list disjunct matrix, and $x \in \mathbb{R}^n$ be a $k$-sparse real signal. If Property 1 holds for $M$ and $x$, then we can use the measurement matrix $M$ to recover a superset of size at most $k + l$ of the support of $x$ using sign measurements.

Combining this corollary with results of [6], there exist matrices with $O(k\log\frac{n}{k})$ rows which we can use to recover an $O(k)$-sized superset of the support of $x$ using sign measurements, provided $x$ satisfies the above condition. Strongly explicit constructions of these matrices exist also, although requiring $O(k^{1+o(1)}\log n)$ rows [5].

The other result we need is one that tells us how many Gaussian measurements are necessary to approximately recover a real signal using maximum likelihood decoding. Similar results have appeared elsewhere, such as [11], but we include the proof for completeness.

Lemma 3. There exists a measurement matrix $A$ for 1-bit compressed sensing such that for every pair of $k$-sparse $x, y \in \mathbb{R}^n$ with $\|x\|_2 = \|y\|_2 = 1$, $\mathrm{sign}(Ax) \neq \mathrm{sign}(Ay)$ whenever $\|x - y\|_2 > \epsilon$, provided that
$$m = O\!\left(\frac{k}{\epsilon}\log\frac{n}{k}\right).$$

We will make use of the following facts in the proof.
Fact 4. For all $x \in \mathbb{R}$, $1 - x \le e^{-x}$.
Fact 5. For all $x \in [0, 1]$, $\cos^{-1}(x) \ge \sqrt{2(1 - x)}$.

Proof of Lemma 3. Let $A \sim \mathcal{N}^{m \times n}(0, 1)$. For a measurement to separate $x$ and $y$, it is necessary that the hyperplane corresponding to some row $a$ of $A$ lies between $x$ and $y$. Thus our goal here is to show that if we take $m$ to be large enough, all pairs of points at distance $> \epsilon$ will be separated with high probability. Since the rows of $A$ are chosen independently and have Gaussian entries, they are spherically symmetric, and thus the probability that the random hyperplane $a$ lies between $x$ and $y$ is proportional to the angle between them. Let $\|x - y\|_2 > \epsilon$; we start out by upper bounding the probability that no measurement separates a particular pair $x$ and $y$. Before beginning, recall that for unit vectors $1 - x^T y = \|x - y\|_2^2 / 2$, so given that $\|x - y\|_2 > \epsilon$, we have $x^T y < 1 - \epsilon^2/2$. Then
$$\Pr[\mathrm{sign}(ax) = \mathrm{sign}(ay)] = 1 - \frac{\cos^{-1}(x^T y)}{\pi} < 1 - \frac{\cos^{-1}(1 - \epsilon^2/2)}{\pi} \le \exp\!\left(-\frac{\cos^{-1}(1 - \epsilon^2/2)}{\pi}\right) \le \exp\!\left(-\frac{\epsilon}{\pi}\right),$$
where the second inequality uses Fact 4 and the last uses Fact 5. As there are $m$ independent measurements, the probability that $x$ and $y$ are not separated by any of the $m$ measurements is at most $\exp\!\left(-\frac{m\epsilon}{\pi}\right)$, so union bounding over all $\binom{n}{k}^2$ pairs of $k$-sparse $x$ and $y$, the total probability of error is strictly less than
$$\binom{n}{k}^2 \exp\!\left(-\frac{m\epsilon}{\pi}\right).$$
This probability becomes less than 1 for $m \ge \frac{\pi}{\epsilon}(2k)\log\frac{n}{k}$, so with this number of measurements there exists a matrix that can perform $\epsilon$-approximate recovery for all pairs of sparse vectors.
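As an aside, the key identity in the proof, that a single Gaussian sign measurement fails to separate two unit vectors with probability exactly $1 - \cos^{-1}(x^T y)/\pi$, is easy to confirm numerically. The following is a small Monte Carlo sketch (our own illustration, not from the paper; all parameter values are arbitrary assumptions).

```python
import numpy as np

# Empirical check: Pr[sign(a.x) == sign(a.y)] = 1 - arccos(x^T y)/pi for Gaussian a.
rng = np.random.default_rng(1)
n, k, trials = 100, 5, 100_000

x = np.zeros(n); x[:k] = rng.standard_normal(k); x /= np.linalg.norm(x)
y = np.zeros(n); y[:k] = rng.standard_normal(k); y /= np.linalg.norm(y)  # k-sparse unit vectors

A = rng.standard_normal((trials, n))                     # each row is one measurement
empirical = np.mean(np.sign(A @ x) == np.sign(A @ y))
predicted = 1 - np.arccos(np.clip(x @ y, -1.0, 1.0)) / np.pi
print(f"empirical: {empirical:.4f}   1 - angle/pi: {predicted:.4f}")
```

By Facts 4 and 5, this non-separation probability is at most $\exp(-\epsilon/\pi)$ once $\|x - y\|_2 > \epsilon$, which is what drives the union bound above.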
Note that in the case that we already have a superset of the support of size $O(k)$, the previous result tells us there exists a matrix with $O\!\left(\frac{k}{\epsilon}\log\frac{O(k)}{k}\right) = O\!\left(\frac{k}{\epsilon}\right)$ rows which can be used to perform $\epsilon$-approximate recovery within the superset. We can do this even nonadaptively, because the rows of the matrix for approximate recovery are Gaussian. Combining this with Corollary 2 and the group testing constructions of [6], we have the following theorem.

Theorem 6. Let $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$, where $M^{(1)}$ is a $(k, O(k))$-list disjunct matrix with $O(k\log\frac{n}{k})$ rows, and $M^{(2)}$ is a matrix with $O(\frac{k}{\epsilon})$ rows that can be used for $\epsilon$-approximate recovery within the superset as in Lemma 3, so $M$ consists of $O(k\log\frac{n}{k} + \frac{k}{\epsilon})$ rows. Let $x \in \mathbb{R}^n$ be a $k$-sparse signal. If Property 1 holds for $M^{(1)}$ and $x$, then $M$ can be used for $\epsilon$-approximate recovery of $x$.

Remark. We note that the class of signal vectors $x$ which satisfy the condition in Theorem 6 is actually quite large, in the sense that there is a natural probability distribution over all sparse signals $x$ for which vectors violating the condition occur with probability 0. The details are laid out in Lemma 14.

As special cases, we have improved upper bounds for nonnegative and binary signals. For ease of comparison with the other results, we assume the binary signal is rescaled to have unit norm, so it has all entries either 0 or equal to $1/\sqrt{\|x\|_0}$.

Corollary 7. Let $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$, where $M^{(1)}$ is a $(k, O(k))$-list disjunct matrix with $O(k\log\frac{n}{k})$ rows, and $M^{(2)}$ is a matrix with $O(\frac{k}{\epsilon})$ rows that can be used for $\epsilon$-approximate recovery within the superset as in Lemma 3, so $M$ consists of $O(k\log\frac{n}{k} + \frac{k}{\epsilon})$ rows. Let $x \in \mathbb{R}^n$ be a $k$-sparse signal. If all entries of $x$ are nonnegative, then $M$ can be used for $\epsilon$-approximate recovery of $x$.

Proof. In light of Theorem 6, we need only note that as all entries of $M^{(1)}$ and $x$ are nonnegative, Property 1 is satisfied for $M^{(1)}$ and $x$.

Corollary 8. Let $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$, where $M^{(1)}$ is a $(k, O(k))$-list disjunct matrix with $O(k\log\frac{n}{k})$ rows, and $M^{(2)}$ is a matrix with $O(k^{3/2})$ rows that can be used for $\epsilon$-approximate recovery (with $\epsilon < 1/\sqrt{k}$) within the superset as in Corollary 2, so $M$ consists of $O(k\log\frac{n}{k} + k^{3/2})$ rows. Let $x \in \mathbb{R}^n$ be the $k$-sparse signal vector. If all nonzero entries of $x$ are equal, then $M$ can be used for exact recovery of $x$.

Proof. Here we use the fact that if we perform $\epsilon$-approximate recovery using $\epsilon < 1/\sqrt{k}$, then as the minimum possible distance between any two $k$-sparse rescaled binary vectors is $1/\sqrt{k}$, we will recover the signal vector exactly.

4 Explicit Constructions
4.1 Explicit Robust UFFs from Error-Correcting Codes
In this section we explain how to combine several existing results in order to explicitly construct Robust UFFs that can be used for support recovery of real vectors. This partially answers Open Problem 3 from [1].

Definition 4. A family of sets $F = \{B_1, B_2, \ldots, B_n\}$ with each $B_i \subseteq [m]$ is an $(n, m, d, k, \alpha)$-Robust-UFF if $|B_i| = d$ for all $i$, and for every distinct $j_0, j_1, \ldots, j_k \in [n]$, $|B_{j_0} \cap (B_{j_1} \cup B_{j_2} \cup \cdots \cup B_{j_k})| < \alpha |B_{j_0}|$.

It is shown in [1] that nonexplicit $(n, m, d, k, 1/2)$-Robust UFFs exist with $m = O(k^2\log n)$, $d = O(k\log n)$, which can be used to exactly recover the support of any $k$-sparse real vector of length $n$ in $m$ measurements. The results we will need are the following, where the $q$-ary entropy function $H_q$ is defined as
$$H_q(x) = x\log_q(q - 1) - x\log_q x - (1 - x)\log_q(1 - x).$$

Theorem 9 ([16] Thm. 2). Let $q$ be a prime power, $m$ and $k$ positive integers, and $\delta \in [0, 1]$.
Then if $k \le (1 - H_q(\delta))m$, we can construct a $q$-ary linear code with rate $\frac{k}{m}$ and relative distance $\delta$ in time $O(mq^k)$.

Theorem 10 ([1] Prop. 17). Given a $q$-ary error correcting code with rate $r$ and relative distance $(1 - \alpha)$, we can construct a $(q^{rd}, qd, d, 1, \alpha)$-Robust-UFF.

Theorem 11 ([1] Prop. 15). If $F$ is an $(n, m, d, 1, \alpha/k)$-Robust-UFF, then $F$ is also an $(n, m, d, k, \alpha)$-Robust-UFF.

By combining the above three results, we have the following.

Theorem 12. We can explicitly construct an $(n, m, d, k, \alpha)$-Robust UFF with $m = O\!\left(\frac{k^2\log n}{\alpha^2}\right)$ and $d = O\!\left(\frac{k\log n}{\alpha}\right)$ in time $O((k/\alpha)^k)$.

Proof. First, we instantiate Theorem 9 to obtain a $q$-ary code $C$ of length $d$ with $q = O(k/\alpha)$, relative distance $\delta = \frac{k - \alpha}{k}$, and rate $r = 1 - H_q(\delta)$ in time $O(q^k)$. Applying Theorem 10 to this code results in an $(n, m, d, 1, 1 - \delta)$-Robust-UFF $F$ where $n = q^{rd}$, $m = qd$. By Theorem 11, $F$ is also an $(n, m, d, k, (1 - \delta)k)$-Robust UFF. Plugging back in the parameters of the original code,
$$m = qd = \frac{q\log n}{r\log q} = \frac{q\log n}{(1 - H_q((k - \alpha)/k))\log q} = O\!\left(\frac{k^2\log n}{\alpha^2}\right),$$
and
$$(1 - \delta)k = \left(1 - \frac{k - \alpha}{k}\right)k = k - (k - \alpha) = \alpha.$$

While the time needed for this construction is not polynomial in $k$ (and therefore the construction is not strongly explicit) as asked for in Open Question 3 of [1], this at least demonstrates that there exist codes with sufficiently good parameters to yield Robust UFFs with $m = O(k^2\log n)$.

4.2 Non-Universal Approximate Recovery
If instead of requiring our measurement matrices to be able to recover all $k$-sparse signals simultaneously (i.e. to be universal), we can instead require only that they are able to recover "most" $k$-sparse signals. Specifically, in this section we will assume that the sparse signal is generated in the following way: first a set of $k$ indices is chosen to be the support of the signal uniformly at random. Then, the signal is chosen to be a uniformly random vector from the unit sphere on those $k$ indices. We relax the requirement that the supports of all $k$-sparse signals can be recovered exactly (by some decoding) to the requirement that we can identify the support of a $k$-sparse signal with probability at least $1 - \delta$, where $\delta \in [0, 1)$. Note that even when $\delta = 0$, this is a weaker condition than universality, as the space of possible $k$-sparse signals is infinite. It is shown in [3] that a random matrix construction using $O(k\log n)$ measurements suffices to recover the support with error probability approaching 0 as $k$ and $n$ approach infinity. The following theorem shows that we can explicitly construct a matrix which works in this setting, at the cost of slightly more measurements (about $O(k\log^2 n)$).

Theorem 13. We can explicitly construct measurement matrices for Support Recovery (of real vectors) with $m = O\!\left(\frac{k\log n}{\log k}\log\frac{n}{\delta}\right)$ rows that can exactly determine the support of a $k$-sparse signal with probability at least $1 - \delta$, where the signals are generated by first choosing the size-$k$ support uniformly at random, then choosing the signal to be a uniformly random vector on the sphere on those $k$ coordinates.

To prove this theorem, we need a lemma which explains how we can use sign measurements to "simulate" group testing measurements with high probability. Both the result and proof are similar to Lemma 1, with the main difference being that, given the distribution described above, the vectors violating the necessary condition in Lemma 1 occur with zero probability and so can be safely ignored. For this lemma, we do not need the further assumption made in Theorem 13 that the distribution over support sets is uniform.
The proof is presented in Appendix A.

Lemma 14. Suppose we have a measurement vector $m \in \{0, 1\}^n$ and a $k$-sparse signal $x \in \mathbb{R}^n$. The signal $x$ is generated randomly by first picking a subset of size $k$ from $[n]$ (using any distribution) to be the support, then taking $x$ to be a uniformly random vector on the sphere on those $k$ coordinates. Then from $\mathrm{sign}(m^T x)$, we can determine the value of $m \odot x$ with probability 1.

As the above argument works with probability 1, we can easily extend it to an entire measurement matrix $M$ with any finite number of rows by a union bound, and recover all the group testing measurement results $M \odot x$ with probability 1 as well. This means we can leverage the following result from [14]:

Theorem 15 ([14] Thm. 5). When $x \in \{0, 1\}^n$ is drawn uniformly at random among all $k$-sparse binary vectors, there exists an explicitly constructible group testing matrix $M$ with $m = O\!\left(\frac{k\log n}{\log k}\log\frac{n}{\delta}\right)$ rows which can exactly identify $x$ from observing the measurement results $M \odot x$ with probability at least $1 - \delta$.

Combining this with the lemma above, we can use the matrix $M$ from Theorem 15 with $m = O\!\left(\frac{k\log n}{\log k}\log\frac{n}{\delta}\right)$ rows (now representing sign measurements) to exactly determine the support of $x$ with probability at least $1 - \delta$; we first use Lemma 14 to recover the results of the group testing tests $M \odot x$ with probability 1, and can then apply the above theorem using the results of the group testing measurements. We can also use this construction for approximate recovery rather than support recovery using Lemma 3, by appending $O(\frac{k}{\epsilon})$ rows of Gaussian measurements to $M$, first recovering the exact support, then doing approximate recovery within that support. This gives a matrix with about $O(k\log^2 n + \frac{k}{\epsilon})$ rows for non-universal approximate recovery of real signals, where the top portion is explicit.

Remark. Above, we have shown that in the non-universal setting we can use constructions from group testing to recover the exact support with high probability, and then subsequently perform approximate recovery within that exact support. If we are interested only in performing approximate recovery, we can apply our superset technique here as well; Lemma 14 implies also that using a $(k, O(k))$-list disjunct matrix we can, with probability 1, recover an $O(k)$-sized superset of the support, and such matrices exist with $O(k\log\frac{n}{k})$ rows. Following this, we can use $O(\frac{k}{\epsilon})$ more Gaussian measurements to recover the signal within the superset. This gives a non-universal matrix with $O(k\log\frac{n}{k} + \frac{k}{\epsilon})$ rows for approximate recovery, the top part of which can be made strongly explicit with only slightly more measurements ($O(k^{1+o(1)}\log\frac{n}{k})$ vs. $O(k\log\frac{n}{k})$).

5 Experiments
In this section, we present some empirical results relating to the use of our superset technique in approximate vector recovery for real-valued signals. To do so, we compare the average error (in $\ell_2$ norm) of the reconstructed vector from using an "all Gaussian" measurement matrix to first using a small number of measurements to recover a superset of the support of the signal, then using the remainder of the measurements to recover the signal within that superset via Gaussian measurements. We have used the well-known BIHT algorithm of [11] for recovery of the vector both using the all-Gaussian matrix and within the superset, but we emphasize that this superset technique is highly general, and could just as easily be applied on top of other decoding algorithms that use only Gaussian measurements, such as the "QCoSaMP" algorithm of [17].
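Before describing the setup in detail, here is a small sketch of the superset stage of this pipeline (our own illustration, not the code used for the reported figures; the BIHT reconstruction within the superset is not reproduced, and all parameter values are assumptions mirroring the description that follows).

```python
import numpy as np

# Superset stage: Bernoulli(1/(k+1)) rows, group-testing outcomes recovered from
# sign measurements (Lemma 14), and the standard decoding that keeps exactly the
# coordinates not covered by any negative test.
rng = np.random.default_rng(2)
n, k = 1000, 10
m1 = int(4 * k * np.log10(n))                      # rows devoted to superset recovery

support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)                             # random k-sparse unit-norm signal

M1 = (rng.random((m1, n)) < 1.0 / (k + 1)).astype(float)
tests = (np.sign(M1 @ x) != 0)                     # group-testing outcomes, w.p. 1 correct

covered_by_negative = M1[~tests].sum(axis=0) > 0   # coordinate appears in a 0-result test
superset = np.where(~covered_by_negative)[0]

print("superset size:", superset.size,
      "| contains true support:", set(support) <= set(superset))
# The remaining (m - m1) Gaussian rows would then be used to run BIHT restricted
# to the columns indexed by `superset`.
```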
To generate random signals $x$, we first choose a size-$k$ support uniformly at random among the $\binom{n}{k}$ possibilities, then for each coordinate in the chosen support, generate a random value from $\mathcal{N}(0, 1)$. The vector is then rescaled so that $\|x\|_2 = 1$.

For the dotted lines in Figure 1 labeled "all Gaussian," for each value of $(n, m, k)$ we performed 500 trials in which we generated an $m \times n$ matrix with all entries drawn from $\mathcal{N}(0, 1)$. We then used BIHT (run either until convergence or for 1000 iterations, as there is no convergence guarantee) to recover the signal from the measurement matrix and measurement outcomes.

For the solid lines in Figure 1 labeled "4k log n Superset," we again performed 500 trials for each value of $(n, m, k)$, where in each trial we generated a measurement matrix $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$ with $m$ rows in total. Each entry of $M^{(1)}$ is a Bernoulli random variable that takes value 1 with probability $\frac{1}{k+1}$ and value 0 with probability $\frac{k}{k+1}$; there is evidence from the group testing literature [3, 2] that this probability is near-optimal in some regimes, and it appears also to perform well in practice; see Appendix B for some empirical evidence. The entries of $M^{(2)}$ are drawn from $\mathcal{N}(0, 1)$. We use a standard group testing decoding (i.e., remove any coordinates that appear in a test with result 0) to determine a superset based on $y_1 = \mathrm{sign}(M^{(1)}x)$, then use BIHT (again run either until convergence or for 1000 iterations) to reconstruct $x$ within the superset using the measurement results $y_2 = \mathrm{sign}(M^{(2)}x)$. The number of rows in $M^{(1)}$ is taken to be $m_1 = 4k\log_{10}(n)$, based on the fact that with high probability $Ck\log n$ rows for some constant $C$ should be sufficient to recover an $O(k)$-sized superset, and the remaining $m_2 = m - m_1$ measurements are used in $M^{(2)}$. We display data only for larger values of $m$, to ensure there are sufficiently many rows in both portions of the measurement matrix.

From Figure 1 one can see that in this regime, using a small number of measurements to first recover a superset of the support provides a modest improvement in reconstruction error compared to the alternative. In the higher-error regime, when there are simply not enough measurements to obtain an accurate reconstruction, as can be seen on the left side of the graph in Figure 1d, the two methods perform about the same. In the empirical setting, our superset-of-support recovery technique can be viewed as a very flexible and low-overhead method of extending other existing 1bCS algorithms which use only Gaussian measurements, which are quite common.

Acknowledgements: This research is supported in part by NSF CCF awards 1618512, 1642658, and 1642550 and the UMass Center for Data Science.
1. How does the reviewer assess the quality and originality of the paper regarding its contribution to compressed sensing?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any concerns or suggestions regarding the experiments conducted in the paper?
4. How does the reviewer evaluate the significance and impact of the paper's findings in the field of compressed sensing?
5. Are there any aspects of the paper that could be improved in terms of clarity or content?
Review
Review
Quality:
- Overall the paper is high-quality.
- Some lingering questions I have:
-- In the experiments, it appears that the algorithms have oracle knowledge of the exact sparsity k. Do these methods work without oracle knowledge of k? It appears that this oracle knowledge of k is used directly in setting measurement parameters, such as the entries of M1, which is unrealistic since the true value of k is rarely known a priori.
-- How does this approach perform in the presence of noise? Can the authors run more experiments with noise on the 1-bit measurements?
-- Why do the authors only compare against "all Gaussian" measurements? Surely there must be other methods that take measurements in two stages. In particular, the authors reference the approaches in (Acharya et al. 2017) and (Gopi et al. 2013) several times but do not run experimental comparisons against these methods.
Originality:
- The idea of taking measurements in two stages in compressed sensing (to recover the support, then approximate the signal values on this support) is not new. However, the idea of recovering a superset of the support rather than the exact support appears to be novel.
- The connections to Group Testing are interesting and appear to be novel as far as I know.
Significance:
- Tighter upper bounds than existing results, along with explicit measurement construction methods, are provided for the important settings of non-negative signals (such as images) and binary signals. Even if the idea of taking compressed sensing measurements in two stages is not new, these improvements are important for pushing forward 1bCS research.
Clarity:
- The paper is mostly clear.
- The authors did a good job of juxtaposing their results with previous open questions in the literature.
- Table 1 clearly highlights the novel contributions.
--------------------------
Update after rebuttal. The authors' comments were satisfactory, as my comments were mostly suggestions for improvement. They didn't mention the points of unclarity in their writing, but I do hope that they will make appropriate corrections on revision.
NIPS
Title Superset Technique for Approximate Recovery in One-Bit Compressed Sensing Abstract One-bit compressed sensing (1bCS) is a method of signal acquisition under extreme measurement quantization that gives important insights on the limits of signal compression and analog-to-digital conversion. The setting is also equivalent to the problem of learning a sparse hyperplane-classifier. In this paper, we propose a generic approach for signal recovery in nonadaptive 1bCS that leads to improved sample complexity for approximate recovery for a variety of signal models, including nonnegative signals and binary signals. We construct 1bCS matrices that are universal i.e. work for all signals under a model and at the same time recover very general random sparse signals with high probability. In our approach, we divide the set of samples (measurements) into two parts, and use the first part to recover the superset of the support of a sparse vector. The second set of measurements is then used to approximate the signal within the superset. While support recovery in 1bCS is well-studied, recovery of superset of the support requires fewer samples, which then leads to an overall reduction in sample complexity for approximate recovery. 1 Introduction Sparsity is a natural property of many real-world signals. For example, image and speech signals are sparse in the Fourier basis, which led to the theory of compressed sensing, and more broadly, sampling theory [12, 7]. In some important multivariate optimization problems with many optimal points, sparsity of the solution is also a measure of ‘simplicity’ and insisting on sparsity is a common method of regularization [19]. While recovering sparse vectors from linear measurements is a well-studied topic, technological advances and increasing data size raises new questions. These include quantized and nonlinear signal acquisition models, such as 1-bit compressed sensing [4]. In 1-bit compressed sensing, linear measurements of a sparse vector are quantized to only 1 bit, e.g. indicating whether the measurement outcome is positive or not, and the task is to recover the vector up to a prescribed Euclidean error with minimum number of measurements. Like compressed sensing, the overwhelming majority of the literature, including this paper, focuses on the nonadaptive setting for the problem. One of the ways to approximately recover a sparse vector from 1-bit measurements is to use a subset of all the measurements to identify the support of the vector. Next, the remainder of the measurements can be used to approximate the vector within the support. Note that this second set of measurements is also predefined, and therefore the entire scheme is still nonadaptive. Such a method appears in the Preprint. Under review. context of ‘universal’ matrix designs in [9, 1]. The resulting schemes are the best known, in some sense, but still result in a large gap between the upper and lower bounds for approximate recovery of vectors. In this paper we take steps to close these gaps, by presenting a simple yet powerful idea. Instead of using a subset of the measurements to recover the support of the vector exactly, we propose using a (smaller) set of measurements to recover a superset of the support. The remainder of the measurements can then be used to better approximate the vector within the superset. 
It turns out this idea which we call the “superset technique” leads to optimal number of measurements for universal schemes for several important classes of sparse vectors (for example, nonnegative vectors). We also present theoretical results providing a characterization of matrices that would yield universal schemes for all sparse vectors. Prior Results. While the compressed sensing framework was introduced in [7], it was not until [4] that 1-bit quantization of the measurements was considered as well, to try and combat the fact that taking real-valued measurements to arbitrary precision may not be practical in applications. Initially, the focus was primarily on approximately reconstructing the direction of the signal x (the quantization does not preserve any information about the magnitude of the signal, so all we can hope to reconstruct is the direction). However, in [10] the problem of support recovery, as opposed to approximate vector reconstruction, was first considered and it was shown that O (k log n) measurements is sufficient to recover the support of a k-sparse signal in Rn with high probability. This was subsequently shown to be tight with the lower bound proven in [3]. All the above results assume that a new measurement matrix is constructed for each sparse signal, and success is defined as either approximately recovering the signal up to error ✏ in the `2 norm (for the approximate vector recovery problem), or exactly recovering the support of the signal (for the support recovery problem), with high probability. Generating a new matrix for each instance is not practical in all applications, which has led to interest in the “universal” versions of the above two problems, where a single matrix must work for support recovery or approximate recovery of all k-sparse real signals, with high probability. Plan and Vershynin showed in [15] that both O k ✏6 log n k and O k ✏5 log 2 n k measurements suffice for universal approximate recovery. The dependence on ✏ was then improved significantly to O k3 log nk + k ✏ in [9], who also considered the problem of universal support recovery, and showed that for that problem, O k3 log n measurements is sufficient. They showed as well that if we restrict the entries of the signal to be nonnegative (which is the case for many real-world signals such as images), then O k2 log n is sufficient for universal support recovery. The constructions of their measurement matrices are based primarily on combinatorial objects, specifically expanders and Union Free Families (UFFs). Most recently, [1] showed that a modified version of the UFFs used in [9] called “Robust UFFs” (RUFFs) can be used to improve the upper bound on universal support recovery to O k2 log n for all real-valued signals, matching the previous upper bound for nonnegative signals, and showed this is nearly tight with a lower bound of ⌦(k2 log n/ log k) for real signals. They also show that O k2 log n+ k✏ measurements suffices for universal approximate recovery. In tandem with the development of these theoretical results providing necessary and sufficient numbers of measurements for support recovery and approximate vector recovery, there has been a significant body of work in other directions on 1-bit compressed sensing, such as heuristic algorithms that perform well empirically, and tradeoffs between different parameters. 
More specifically, [11] introduced a gradient-descent based algorithm called Binary Iterative Hard Thresholding (BIHT) which performs very well in practice; later, [13] gave another heuristic algorithm which performs comparably well or better, and aims to allow for very efficient decoding after the measurements are taken. Other papers such as [18] have studied the tradeoff between the amount of quantization of the signal, and the necessary number of measurements. Our Results. We focus primarily on upper bounds in the universal setting, aiming to give constructions that work with high probability for all sparse vectors. In [1], 3 major open questions are given regarding Universal 1-bit Compressed Sensing, which, paraphrasing, are as follows: 1. How many measurements are necessary and sufficient for a matrix to be used to exactly recover all k-sparse binary vectors? In this work we make progress towards solutions to all three Open Questions. Our primary contribution is the “superset technique” which relies on ideas from the closely related sparse recovery problem of group testing [8]; in particular, we show in Theorem 6 that for a large class of signals including all nonnegative (and thus all binary) signals, we can improve the upper bound for approximate recovery by first recovering an O (k)-sized superset of the support rather than the exact support, then subsequently using Gaussian measurements. The previous best upper bound for binary signals from [11] was O k3/2 log n , which we improve to O k3/2 + k log nk , and for nonnegative signals was O min(k2 log nk + k ✏ , k ✏ log n) , which we improve to O k log nk + k ✏ . Regarding Open Question 3, using results of Porat and Rothschild regarding weakly explicit constructions of Error-Correcting Codes (ECCs) on the Gilbert-Varshamov bound [16], we give a construction of Robust UFFs yielding measurement matrices for support recovery with O k2 log n rows in time that is polynomial in n (though not in k) in Theorem 12. Based on a similar idea, we also give a weakly explicit construction for non-universal approximate recovery using only sightly more measurements than is optimal (O k log2 n as opposed to O k log nk ) in Section 4.2; to our knowledge, explicit constructions in the non-universal setting have not been studied previously. Furthermore, this result gives a single measurement matrix which works for almost all vectors, as opposed to typical non-universal results which work with high probability for a particular vector and matrix pair. In Appendix C, we give a sufficient condition generalizing the notion of RUFFs for a matrix to be used for universal recovery of a superset of the support for all real signals; while we do not provide constructions, this seems to be a promising direction for resolving Open Question 2. The best known upper and lower bounds for the various compressed sensing problems considered in this work are presented in Table 1. 2 Definitions We write Mi for the ith row of the matrix M , and Mi,j for the entry of M in the ith row and jth column. We write vectors x in boldface, and write xi for the ith component of the vector x. The set {1, 2, . . . , n} will be denoted by [n], and for any set S we write P(S) for the power set of S (i.e. the set of all subsets of S). We will write supp(x) ✓ [n] to mean the set of indices of nonzero components of x (so supp(x) = {i : xi 6= 0}), and ||x||0 to denote | supp(x)|. For a real number y, sign(y) returns 1 if y is strictly positive, 1 if y is strictly negative, and 0 if y = 0. 
While this technically returns more than one bit of information, if we had instead defined $\mathrm{sign}(y)$ to be 1 when $y \geq 0$ and $-1$ otherwise, we could still determine whether $y = 0$ by looking at $\mathrm{sign}(y)$ and $\mathrm{sign}(-y)$, so this affects the number of measurements by only a constant factor. We will not concern ourselves with the constants involved in any of our results, so we have chosen to instead use the more convenient definition.

We will sometimes refer to constructions from the similar "group testing" problem in our results. To this end, we will use the symbol "$\odot$" to represent the group testing measurement between a measurement vector and a signal vector. Specifically, for a measurement $\mathbf{m}$ of length $n$ and signal $\mathbf{x}$ of length $n$, $\mathbf{m} \odot \mathbf{x}$ is equal to 1 if $\mathrm{supp}(\mathbf{m}) \cap \mathrm{supp}(\mathbf{x})$ is nonempty, and 0 otherwise. We will also make use of the "list-disjunct" matrices used in some group testing constructions.

Definition 1. An $m \times n$ binary matrix $M$ is $(k, l)$-list disjunct if for any two disjoint sets $S, T \subseteq \mathrm{col}(M)$ with $|S| = k$, $|T| = l$, there exists a row in $M$ in which some column from $T$ has a nonzero entry, but every column from $S$ has a zero.

The primary use of such matrices is that in the group testing model, they can be used to recover a superset of size at most $k + l$ of the support of any $k$-sparse signal $\mathbf{x}$ by applying a simple decoding to the measurement results $M \odot \mathbf{x}$. In the following definitions, we write $\mathcal{S}$ for a generic set that is the domain of the signal. In this paper we consider signals with domain $\mathbb{R}$, $\mathbb{R}_{\geq 0}$ (nonnegative reals), and $\{0, 1\}$.

Definition 2. An $m \times n$ measurement matrix $M$ can be used for Universal Support Recovery of $k$-sparse $\mathbf{x} \in \mathcal{S}^n$ (in $m$ measurements) if there exists a decoding function $f : \{-1, 0, 1\}^m \to \mathcal{P}([n])$ such that $f(\mathrm{sign}(M\mathbf{x})) = \mathrm{supp}(\mathbf{x})$ for all $\mathbf{x}$ satisfying $\|\mathbf{x}\|_0 \leq k$.

Definition 3. An $m \times n$ measurement matrix $M$ can be used for Universal $\epsilon$-Approximate Recovery of $k$-sparse $\mathbf{x} \in \mathcal{S}^n$ (in $m$ measurements) if there exists a decoding function $f : \{-1, 0, 1\}^m \to \mathcal{S}^n$ such that
$$\left\|\frac{\mathbf{x}}{\|\mathbf{x}\|_2} - \frac{f(\mathrm{sign}(M\mathbf{x}))}{\|f(\mathrm{sign}(M\mathbf{x}))\|_2}\right\|_2 \leq \epsilon,$$
for all $\mathbf{x}$ with $\|\mathbf{x}\|_0 \leq k$.

3 Upper Bounds for Universal Approximate Recovery

Here we present our main result, an upper bound on the number of measurements needed to perform universal $\epsilon$-approximate recovery for a large class of real vectors that includes all binary vectors and all nonnegative vectors. The general technique will be to first use what are known as "list-disjunct" matrices from the group testing literature to recover a superset of the support of the signal, then use Gaussian measurements to approximate the signal within the superset. Because the measurements in the second part are Gaussian, we can perform the recovery within the (initially unknown) superset nonadaptively. When restricting to the class of binary or nonnegative signals, our upper bound improves on existing results and is close to known lower bounds.

First, we need a lemma stating the necessary and sufficient conditions on a signal vector $\mathbf{x}$ in order to be able to reconstruct the results of a single group testing measurement $\mathbf{m} \odot \mathbf{x}$ using sign measurements. To concisely state the condition, we introduce some notation: for a subset $S \subseteq [n]$ and vector $\mathbf{x}$ of length $n$, we write $\mathbf{x}|_S$ to mean the restriction of $\mathbf{x}$ to the indices of $S$.

Lemma 1. Let $\mathbf{m} \in \{0,1\}^n$ and $\mathbf{x} \in \mathbb{R}^n$. Define $S = \mathrm{supp}(\mathbf{m}) \cap \mathrm{supp}(\mathbf{x})$. If either $S$ is empty, or $S$ is nonempty and $\mathbf{m}|_S^T \mathbf{x}|_S \neq 0$, we can reconstruct the result of the group testing measurement $\mathbf{m} \odot \mathbf{x}$ from the sign measurement $\mathrm{sign}(\mathbf{m}^T\mathbf{x})$.

Proof.
We observe $\mathrm{sign}(\mathbf{m}^T\mathbf{x})$ and based on that must determine the value of $\mathbf{m} \odot \mathbf{x}$, or equivalently whether $S$ is empty or nonempty. If $\mathrm{sign}(\mathbf{m}^T\mathbf{x}) \neq 0$ then $\mathbf{m}^T\mathbf{x} \neq 0$, so $S$ is nonempty and $\mathbf{m} \odot \mathbf{x} = 1$. Otherwise we have $\mathrm{sign}(\mathbf{m}^T\mathbf{x}) = 0$, in which case we must have $\mathbf{m}^T\mathbf{x} = 0$. If $S$ were nonempty then we would have $\mathbf{m}|_S^T \mathbf{x}|_S = 0$, contradicting our assumption. Therefore in this case we must have $S$ empty and $\mathbf{m} \odot \mathbf{x} = 0$, so for $\mathbf{x}$ satisfying the above condition we can reconstruct the results of a group testing measurement.

For convenience, we use the following property to mean that a signal $\mathbf{x}$ has the necessary property from Lemma 1 with respect to every row of a matrix $M$.

Property 1. Let $M$ be an $m \times n$ matrix, and $\mathbf{x}$ a signal of length $n$. Define $S_i = \mathrm{supp}(M_i) \cap \mathrm{supp}(\mathbf{x})$. Then for every row $M_i$ of $M$, either $S_i$ is empty, or $M_i|_{S_i}^T \mathbf{x}|_{S_i} \neq 0$.

Corollary 2. Let $M$ be a $(k, l)$-list disjunct matrix, and $\mathbf{x} \in \mathbb{R}^n$ be a $k$-sparse real signal. If Property 1 holds for $M$ and $\mathbf{x}$, then we can use the measurement matrix $M$ to recover a superset of size at most $k + l$ of the support of $\mathbf{x}$ using sign measurements.

Combining this corollary with results of [6], there exist matrices with $O\left(k\log\frac{n}{k}\right)$ rows which we can use to recover an $O(k)$-sized superset of the support of $\mathbf{x}$ using sign measurements, provided $\mathbf{x}$ satisfies the above condition. Strongly explicit constructions of these matrices exist also, although requiring $O\left(k^{1+o(1)}\log n\right)$ rows [5].

The other result we need is one that tells us how many Gaussian measurements are necessary to approximately recover a real signal using maximum likelihood decoding. Similar results have appeared elsewhere, such as [11], but we include the proof for completeness.

Lemma 3. There exists a measurement matrix $A$ for 1-bit compressed sensing such that for every pair of $k$-sparse $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ with $\|\mathbf{x}\|_2 = \|\mathbf{y}\|_2 = 1$, $\mathrm{sign}(A\mathbf{x}) \neq \mathrm{sign}(A\mathbf{y})$ whenever $\|\mathbf{x} - \mathbf{y}\|_2 > \epsilon$, provided that $m = O\left(\frac{k}{\epsilon}\log\frac{n}{k}\right)$.

We will make use of the following facts in the proof.

Fact 4. For all $x \in \mathbb{R}$, $1 - x \leq e^{-x}$.

Fact 5. For all $x \in [0, 1]$, $\cos^{-1}(x) \geq \sqrt{2(1 - x)}$.

Proof of Lemma 3. Let $A$ be an $m \times n$ matrix with i.i.d. $\mathcal{N}(0, 1)$ entries. For a measurement to separate $\mathbf{x}$ and $\mathbf{y}$, it is necessary that the hyperplane corresponding to some row $\mathbf{a}$ of $A$ lies between $\mathbf{x}$ and $\mathbf{y}$. Thus our goal here is to show that if we take $m$ to be large enough, then all pairs of points at distance greater than $\epsilon$ will be separated with high probability. Since the rows of $A$ are chosen independently and have Gaussian entries, they are spherically symmetric, and thus the probability that the random hyperplane $\mathbf{a}$ lies between $\mathbf{x}$ and $\mathbf{y}$ is proportional to the angle between them. Let $\|\mathbf{x} - \mathbf{y}\|_2 > \epsilon$; we start by upper bounding the probability that no measurement separates a particular pair $\mathbf{x}$ and $\mathbf{y}$. Before beginning, recall that for unit vectors $1 - \mathbf{x}^T\mathbf{y} = \|\mathbf{x} - \mathbf{y}\|_2^2/2$, so given that $\|\mathbf{x} - \mathbf{y}\|_2 > \epsilon$, we have $\mathbf{x}^T\mathbf{y} < 1 - \epsilon^2/2$. Then
$$\Pr[\mathrm{sign}(\mathbf{a}\mathbf{x}) = \mathrm{sign}(\mathbf{a}\mathbf{y})] = 1 - \frac{\cos^{-1}(\mathbf{x}^T\mathbf{y})}{\pi} < 1 - \frac{\cos^{-1}(1 - \epsilon^2/2)}{\pi} \leq \exp\left(-\frac{\cos^{-1}(1 - \epsilon^2/2)}{\pi}\right) \leq \exp\left(-\frac{\epsilon}{\pi}\right),$$
where the second-to-last inequality follows from Fact 4 and the last from Fact 5. As there are $m$ independent measurements, the probability that $\mathbf{x}$ and $\mathbf{y}$ are not separated by any of the $m$ measurements is at most $\exp\left(-\frac{m\epsilon}{\pi}\right)$, so union bounding over all $\binom{n}{k}^2$ pairs of $k$-sparse $\mathbf{x}$ and $\mathbf{y}$, the total probability of error is strictly less than
$$\binom{n}{k}^2 \exp\left(-\frac{m\epsilon}{\pi}\right).$$
This probability becomes less than 1 for $m \geq \frac{\pi}{\epsilon}(2k)\log\frac{n}{k}$, so with this number of measurements there exists a matrix that can perform $\epsilon$-approximate recovery for all pairs of sparse vectors.
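To make the first stage of this approach concrete, the following is a minimal NumPy sketch of how sign measurements simulate group testing outcomes (Lemma 1) and of the simple elimination decoding behind Corollary 2. The Bernoulli test matrix here is only an illustrative stand-in for a certified $(k, l)$-list-disjunct matrix, so the superset-size guarantee is heuristic in this toy example.

```python
import numpy as np

def gt_outcomes_from_signs(M1, x):
    """Recover group testing outcomes M1 (.) x from 1-bit measurements
    sign(M1 @ x); valid when (M1, x) satisfy Property 1 (Lemma 1)."""
    return (np.sign(M1 @ x) != 0).astype(int)

def superset_decode(M1, outcomes):
    """Standard group testing decoding: discard every coordinate that
    participates in some test with outcome 0; the survivors form a
    superset of supp(x) (Corollary 2)."""
    eliminated = (M1[outcomes == 0] != 0).any(axis=0)
    return np.flatnonzero(~eliminated)

# Toy example with a random Bernoulli test matrix (not a certified
# list-disjunct matrix); Property 1 holds almost surely for the random x below.
rng = np.random.default_rng(0)
n, k, tests = 200, 3, 60
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
M1 = (rng.random((tests, n)) < 1.0 / (k + 1)).astype(float)
S_hat = superset_decode(M1, gt_outcomes_from_signs(M1, x))
assert set(support) <= set(S_hat)   # true support is contained in the superset
```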
Note that in the case that we already have a superset of the support of size $O(k)$, Lemma 3 tells us there exists a matrix with $O\left(\frac{k}{\epsilon}\log\frac{O(k)}{k}\right) = O\left(\frac{k}{\epsilon}\right)$ rows which can be used to perform $\epsilon$-approximate recovery within the superset. We can do this even nonadaptively, because the rows of the matrix for approximate recovery are Gaussian. Combining this with Corollary 2 and the group testing constructions of [6], we have the following theorem.

Theorem 6. Let $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$ where $M^{(1)}$ is a $(k, O(k))$-list disjunct matrix with $O\left(k\log\frac{n}{k}\right)$ rows, and $M^{(2)}$ is a matrix with $O\left(\frac{k}{\epsilon}\right)$ rows that can be used for $\epsilon$-approximate recovery within the superset as in Lemma 3, so $M$ consists of $O\left(k\log\frac{n}{k} + \frac{k}{\epsilon}\right)$ rows. Let $\mathbf{x} \in \mathbb{R}^n$ be a $k$-sparse signal. If Property 1 holds for $M^{(1)}$ and $\mathbf{x}$, then $M$ can be used for $\epsilon$-approximate recovery of $\mathbf{x}$.

Remark. We note that the class of signal vectors $\mathbf{x}$ which satisfy the condition in Theorem 6 is actually quite large, in the sense that there is a natural probability distribution over all sparse signals $\mathbf{x}$ for which vectors violating the condition occur with probability 0. The details are laid out in Lemma 14.

As special cases, we have improved upper bounds for nonnegative and binary signals. For ease of comparison with the other results, we assume the binary signal is rescaled to have unit norm, so it has all entries either 0 or equal to $1/\sqrt{\|\mathbf{x}\|_0}$.

Corollary 7. Let $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$ where $M^{(1)}$ is a $(k, O(k))$-list disjunct matrix with $O\left(k\log\frac{n}{k}\right)$ rows, and $M^{(2)}$ is a matrix with $O\left(\frac{k}{\epsilon}\right)$ rows that can be used for $\epsilon$-approximate recovery within the superset as in Lemma 3, so $M$ consists of $O\left(k\log\frac{n}{k} + \frac{k}{\epsilon}\right)$ rows. Let $\mathbf{x} \in \mathbb{R}^n$ be a $k$-sparse signal. If all entries of $\mathbf{x}$ are nonnegative, then $M$ can be used for $\epsilon$-approximate recovery of $\mathbf{x}$.

Proof. In light of Theorem 6, we need only note that as all entries of $M^{(1)}$ and $\mathbf{x}$ are nonnegative, Property 1 is satisfied for $M^{(1)}$ and $\mathbf{x}$.

Corollary 8. Let $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$ where $M^{(1)}$ is a $(k, O(k))$-list disjunct matrix with $O\left(k\log\frac{n}{k}\right)$ rows, and $M^{(2)}$ is a matrix with $O\left(k^{3/2}\right)$ rows that can be used for $\epsilon$-approximate recovery (with $\epsilon < 1/\sqrt{k}$) within the superset as in Corollary 2, so $M$ consists of $O\left(k\log\frac{n}{k} + k^{3/2}\right)$ rows. Let $\mathbf{x} \in \mathbb{R}^n$ be the $k$-sparse signal vector. If all nonzero entries of $\mathbf{x}$ are equal, then $M$ can be used for exact recovery of $\mathbf{x}$.

Proof. Here we use the fact that if we perform $\epsilon$-approximate recovery using $\epsilon < 1/\sqrt{k}$, then as the minimum possible distance between any two $k$-sparse rescaled binary vectors is $1/\sqrt{k}$, we will recover the signal vector exactly.

4 Explicit Constructions

4.1 Explicit Robust UFFs from Error-Correcting Codes

In this section we explain how to combine several existing results in order to explicitly construct Robust UFFs that can be used for support recovery of real vectors. This partially answers Open Problem 3 from [1].

Definition 4. A family of sets $\mathcal{F} = \{B_1, B_2, \dots, B_n\}$ with each $B_i \subseteq [m]$ is an $(n, m, d, k, \alpha)$-Robust-UFF if $|B_i| = d$ for all $i$, and for every distinct $j_0, j_1, \dots, j_k \in [n]$, $|B_{j_0} \cap (B_{j_1} \cup B_{j_2} \cup \cdots \cup B_{j_k})| < \alpha|B_{j_0}|$.

It is shown in [1] that nonexplicit $(n, m, d, k, 1/2)$-Robust UFFs exist with $m = O(k^2\log n)$, $d = O(k\log n)$ which can be used to exactly recover the support of any $k$-sparse real vector of length $n$ in $m$ measurements.

The results we will need are the following, where the $q$-ary entropy function $H_q$ is defined as $H_q(x) = x\log_q(q-1) - x\log_q x - (1-x)\log_q(1-x)$.

Theorem 9 ([16] Thm. 2). Let $q$ be a prime power, $m$ and $k$ positive integers, and $\delta \in [0, 1]$.
Then if $k \leq (1 - H_q(\delta))m$, we can construct a $q$-ary linear code with rate $\frac{k}{m}$ and relative distance $\delta$ in time $O(mq^k)$.

Theorem 10 ([1] Prop. 17). Given a $q$-ary error correcting code with rate $r$ and relative distance $(1 - \beta)$, we can construct a $(q^{rd}, qd, d, 1, \beta)$-Robust-UFF.

Theorem 11 ([1] Prop. 15). If $\mathcal{F}$ is an $(n, m, d, 1, \alpha/k)$-Robust-UFF, then $\mathcal{F}$ is also an $(n, m, d, k, \alpha)$-Robust-UFF.

By combining the above three results, we have the following.

Theorem 12. We can explicitly construct an $(n, m, d, k, \alpha)$-Robust UFF with $m = O\left(\frac{k^2\log n}{\alpha^2}\right)$ and $d = O\left(\frac{k\log n}{\alpha}\right)$ in time $O\left((k/\alpha)^k\right)$.

Proof. First, we instantiate Theorem 9 to obtain a $q$-ary code $C$ of length $d$ with $q = O(k/\alpha)$, relative distance $\delta = \frac{k - \alpha}{k}$, and rate $r = 1 - H_q(\delta)$ in time $O(q^k)$. Applying Theorem 10 to this code results in an $(n, m, d, 1, \beta)$-Robust-UFF $\mathcal{F}$ where $n = q^{rd}$, $m = qd$, $\beta = 1 - \delta$. By Theorem 11, $\mathcal{F}$ is also an $(n, m, d, k, k\beta)$-Robust UFF. Plugging back in the parameters of the original code,
$$m = qd = \frac{q\log n}{r\log q} = \frac{q\log n}{(1 - H_q((k - \alpha)/k))\log q} = O\left(\frac{k^2\log n}{\alpha^2}\right),$$
$$k\beta = (1 - \delta)k = \left(1 - \frac{k - \alpha}{k}\right)k = k - (k - \alpha) = \alpha.$$
While the time needed for this construction is not polynomial in $k$ (and therefore the construction is not strongly explicit) as asked for in Open Question 3 of [1], this at least demonstrates that there exist codes with sufficiently good parameters to yield Robust UFFs with $m = O(k^2\log n)$.

4.2 Non-Universal Approximate Recovery

Instead of requiring our measurement matrices to be able to recover all $k$-sparse signals simultaneously (i.e., to be universal), we can require only that they are able to recover "most" $k$-sparse signals. Specifically, in this section we will assume that the sparse signal is generated in the following way: first a set of $k$ indices is chosen to be the support of the signal uniformly at random; then, the signal is chosen to be a uniformly random vector from the unit sphere on those $k$ indices. We relax the requirement that the supports of all $k$-sparse signals can be recovered exactly (by some decoding) to the requirement that we can identify the support of a $k$-sparse signal with probability at least $1 - \delta$, where $\delta \in [0, 1)$. Note that even when $\delta = 0$, this is a weaker condition than universality, as the space of possible $k$-sparse signals is infinite.

It is shown in [3] that a random matrix construction using $O(k\log n)$ measurements suffices to recover the support with error probability approaching 0 as $k$ and $n$ approach infinity. The following theorem shows that we can explicitly construct a matrix which works in this setting, at the cost of slightly more measurements (about $O(k\log^2 n)$).

Theorem 13. We can explicitly construct measurement matrices for Support Recovery (of real vectors) with $m = O\left(\frac{k\log n}{\log k}\log\frac{n}{\delta}\right)$ rows that can exactly determine the support of a $k$-sparse signal with probability at least $1 - \delta$, where the signals are generated by first choosing the size-$k$ support uniformly at random, then choosing the signal to be a uniformly random vector on the sphere on those $k$ coordinates.

To prove this theorem, we need a lemma which explains how we can use sign measurements to "simulate" group testing measurements with high probability. Both the result and proof are similar to Lemma 1, with the main difference being that given the distribution described above, the vectors violating the necessary condition in Lemma 1 occur with zero probability and so can be safely ignored. For this lemma, we do not need the further assumption made in Theorem 13 that the distribution over support sets is uniform.
The proof is presented in Appendix A.

Lemma 14. Suppose we have a measurement vector $\mathbf{m} \in \{0, 1\}^n$, and a $k$-sparse signal $\mathbf{x} \in \mathbb{R}^n$. The signal $\mathbf{x}$ is generated randomly by first picking a subset of size $k$ from $[n]$ (using any distribution) to be the support, then taking $\mathbf{x}$ to be a uniformly random vector on the sphere on those $k$ coordinates. Then from $\mathrm{sign}(\mathbf{m}^T\mathbf{x})$, we can determine the value of $\mathbf{m} \odot \mathbf{x}$ with probability 1.

As the above argument works with probability 1, we can easily extend it to an entire measurement matrix $M$ with any finite number of rows by a union bound, and recover all the group testing measurement results $M \odot \mathbf{x}$ with probability 1 as well. This means we can leverage the following result from [14]:

Theorem 15 ([14] Thm. 5). When $\mathbf{x} \in \{0, 1\}^n$ is drawn uniformly at random among all $k$-sparse binary vectors, there exists an explicitly constructible group testing matrix $M$ with $m = O\left(\frac{k\log n}{\log k}\log\frac{n}{\delta}\right)$ rows which can exactly identify $\mathbf{x}$ from observing the measurement results $M \odot \mathbf{x}$ with probability at least $1 - \delta$.

Combining this with the lemma above, we can use the matrix $M$ from Theorem 15 with $m = O\left(\frac{k\log n}{\log k}\log\frac{n}{\delta}\right)$ rows (now representing sign measurements) to exactly determine the support of $\mathbf{x}$ with probability at least $1 - \delta$; we first use Lemma 14 to recover the results of the group testing tests $M \odot \mathbf{x}$ with probability 1, and can then apply the above theorem using the results of the group testing measurements. We can also use this construction for approximate recovery rather than support recovery using Lemma 3, by appending $O\left(\frac{k}{\epsilon}\right)$ rows of Gaussian measurements to $M$, first recovering the exact support, then doing approximate recovery within that support. This gives a matrix with about $O\left(k\log^2 n + \frac{k}{\epsilon}\right)$ rows for non-universal approximate recovery of real signals, where the top portion is explicit.

Remark. Above, we have shown that in the non-universal setting, we can use constructions from group testing to recover the exact support with high probability, and then subsequently perform approximate recovery within that exact support. If we are interested only in performing approximate recovery, we can apply our superset technique here as well; Lemma 14 implies also that using a $(k, O(k))$-list disjunct matrix we can with probability 1 recover an $O(k)$-sized superset of the support, and such matrices exist with $O\left(k\log\frac{n}{k}\right)$ rows. Following this, we can use $O\left(\frac{k}{\epsilon}\right)$ more Gaussian measurements to recover the signal within the superset. This gives a non-universal matrix with $O\left(k\log\frac{n}{k} + \frac{k}{\epsilon}\right)$ rows for approximate recovery, the top part of which can be made strongly explicit with only slightly more measurements ($O\left(k^{1+o(1)}\log\frac{n}{k}\right)$ vs. $O\left(k\log\frac{n}{k}\right)$).

5 Experiments

In this section, we present some empirical results relating to the use of our superset technique in approximate vector recovery for real-valued signals. To do so, we compare the average error (in $\ell_2$ norm) of the reconstructed vector from using an "all Gaussian" measurement matrix to first using a small number of measurements to recover a superset of the support of the signal, then using the remainder of the measurements to recover the signal within that superset via Gaussian measurements. We have used the well-known BIHT algorithm of [11] for recovery of the vector both using the all Gaussian matrix and within the superset, but we emphasize that this superset technique is highly general, and could just as easily be applied on top of other decoding algorithms that use only Gaussian measurements, such as the "QCoSaMP" algorithm of [17].
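Because BIHT is the decoder used throughout these experiments, a generic sketch of its iteration is given below. This is an illustrative, untuned NumPy implementation of the algorithm of [11]; the step size and stopping rule are assumptions, not the exact configuration used in the experiments.

```python
import numpy as np

def biht(A, y, k, iters=1000, tau=None):
    """Binary Iterative Hard Thresholding (sketch).

    A : (m, n) measurement matrix (Gaussian rows)
    y : length-m vector of 1-bit measurements, y = sign(A @ x)
    k : target sparsity
    Returns a unit-norm k-sparse estimate of x / ||x||_2.
    """
    m, n = A.shape
    tau = 1.0 / m if tau is None else tau
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient-like step pushing sign(A @ x) toward the observed signs y.
        a = x + tau * A.T @ (y - np.sign(A @ x))
        # Hard threshold: keep only the k largest-magnitude coordinates.
        x = np.zeros(n)
        top = np.argsort(np.abs(a))[-k:]
        x[top] = a[top]
        if np.array_equal(np.sign(A @ x), y):   # consistent with all measurements
            break
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x
```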
To generate random signals $\mathbf{x}$, we first choose a size-$k$ support uniformly at random among the $\binom{n}{k}$ possibilities, then for each coordinate in the chosen support, generate a random value from $\mathcal{N}(0, 1)$. The vector is then rescaled so that $\|\mathbf{x}\|_2 = 1$.

For the dotted lines in Figure 1 labeled "all Gaussian," for each value of $(n, m, k)$ we performed 500 trials in which we generated an $m \times n$ matrix with all entries drawn from $\mathcal{N}(0, 1)$. We then used BIHT (run either until convergence or 1000 iterations, as there is no convergence guarantee) to recover the signal from the measurement matrix and measurement outcomes.

For the solid lines in Figure 1 labeled "$4k\log n$ Superset," we again performed 500 trials for each value of $(n, m, k)$ where in each trial we generated a measurement matrix $M = \begin{bmatrix} M^{(1)} \\ M^{(2)} \end{bmatrix}$ with $m$ rows in total. Each entry of $M^{(1)}$ is a Bernoulli random variable that takes value 1 with probability $\frac{1}{k+1}$ and value 0 with probability $\frac{k}{k+1}$; there is evidence from the group testing literature [3, 2] that this probability is near-optimal in some regimes, and it appears also to perform well in practice; see Appendix B for some empirical evidence. The entries of $M^{(2)}$ are drawn from $\mathcal{N}(0, 1)$. We use a standard group testing decoding (i.e., remove any coordinates that appear in a test with result 0) to determine a superset based on $\mathbf{y}_1 = \mathrm{sign}(M^{(1)}\mathbf{x})$, then use BIHT (again run either until convergence or 1000 iterations) to reconstruct $\mathbf{x}$ within the superset using the measurement results $\mathbf{y}_2 = \mathrm{sign}(M^{(2)}\mathbf{x})$. The number of rows in $M^{(1)}$ is taken to be $m_1 = 4k\log_{10}(n)$, based on the fact that with high probability $Ck\log n$ rows for some constant $C$ should be sufficient to recover an $O(k)$-sized superset, and the remainder $m_2 = m - m_1$ of the measurements are used in $M^{(2)}$. We display data only for larger values of $m$, to ensure there are sufficiently many rows in both portions of the measurement matrix.

From Figure 1 one can see that in this regime, using a small number of measurements to first recover a superset of the support provides a modest improvement in reconstruction error compared to the alternative. In the higher-error regime when there are simply not enough measurements to obtain an accurate reconstruction, as can be seen in the left side of the graph in Figure 1d, the two methods perform about the same. In the empirical setting, our superset-of-support recovery technique can be viewed as a very flexible and low-overhead method of extending other existing 1bCS algorithms which use only Gaussian measurements, which are quite common.

Acknowledgements: This research is supported in part by NSF CCF awards 1618512, 1642658, and 1642550 and the UMass Center for Data Science.
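For concreteness, the two-stage measurement and decoding pipeline used in the experiments of Section 5 can be sketched as follows, reusing the `superset_decode` and `biht` routines sketched earlier. The sizes and the helper name `superset_then_biht` are illustrative choices, not values or code from the paper.

```python
import numpy as np
# Assumes superset_decode(M1, outcomes) and biht(A, y, k) from the earlier sketches.

def superset_then_biht(x, k, m, rng):
    """Two-stage 1bCS recovery: Bernoulli tests -> superset -> Gaussian + BIHT."""
    n = x.size
    m1 = int(4 * k * np.log10(n))            # rows for the superset stage
    m2 = m - m1                              # remaining rows for approximate recovery
    M1 = (rng.random((m1, n)) < 1.0 / (k + 1)).astype(float)
    M2 = rng.standard_normal((m2, n))
    # Stage 1: superset of the support from 1-bit outcomes of M1.
    S = superset_decode(M1, (np.sign(M1 @ x) != 0).astype(int))
    # Stage 2: BIHT restricted to the columns indexed by the superset.
    y2 = np.sign(M2 @ x)
    z = biht(M2[:, S], y2, k)
    x_hat = np.zeros(n)
    x_hat[S] = z
    return x_hat

rng = np.random.default_rng(1)
n, k, m = 1000, 5, 600
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = rng.standard_normal(k)
x /= np.linalg.norm(x)
err = np.linalg.norm(x - superset_then_biht(x, k, m, rng))   # reconstruction error
```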
1. What are the main concerns regarding the robustness of the proposed approach?
2. How does the method depend on support superset recovery, and what are the implications of misidentification?
3. How does group testing behave in the presence of measurement noise, and how robust is it to practical signals that are not exactly sparse?
4. What is the requirement for knowledge of k in designing the group testing, and how does it compare to conventional algorithms?
5. Are there any issues with the experimental design or results that do not convince the reviewer?
Review
Review Overall an interesting paper. However, there are several issues that I would recommend the authors address. Most of the issues revolve around the robustness of their approach. In particular, their proposed approach critically depends on support superset recovery in order to proceed to the next step of signal recovery. Misidentification of the support will result in incorrect recovery. Furthermore, the larger the identified support, the more measurements will be required to fully reconstruct the signal based on that larger support. It is not clear how group testing behaves in the presence of measurement noise. Furthermore, quite often practical signals are not exactly sparse, i.e., the support might be the whole signal even though the signal is effectively sparse because the coefficients decay very fast. Thus, some signals are approximately, but not exactly, sparse (often known as compressible); images are such an example. Is group testing robust to this? Will the support returned represent the largest coefficients? If the group testing fails, so will the rest of the process. Another issue is that, it seems, the process requires knowledge of k to design the group testing. This might often not be available (admittedly, this is also a problem with some conventional algorithms, such as BIHT). However, conventional algorithms seem to be more robust to lack of knowledge of this parameter than group testing ones. Is that an issue? In addition, the experiments are not as convincing as I would expect. The improvements are modest, if any, to justify the extra complications of the two-step process. Also, I would recommend the authors include a line (i.e., a third experiment) in which the superset method matrix is used whole with BIHT (i.e., without the support recovery step first). It might be that the matrix they designed is actually better than a fully Gaussian matrix, even when used with conventional methods.
===== I've seen and taken into account the author's response, and it does not change my score.
NIPS
Title Superset Technique for Approximate Recovery in One-Bit Compressed Sensing

Abstract One-bit compressed sensing (1bCS) is a method of signal acquisition under extreme measurement quantization that gives important insights on the limits of signal compression and analog-to-digital conversion. The setting is also equivalent to the problem of learning a sparse hyperplane-classifier. In this paper, we propose a generic approach for signal recovery in nonadaptive 1bCS that leads to improved sample complexity for approximate recovery for a variety of signal models, including nonnegative signals and binary signals. We construct 1bCS matrices that are universal, i.e., work for all signals under a model, and at the same time recover very general random sparse signals with high probability. In our approach, we divide the set of samples (measurements) into two parts, and use the first part to recover a superset of the support of a sparse vector. The second set of measurements is then used to approximate the signal within the superset. While support recovery in 1bCS is well-studied, recovery of a superset of the support requires fewer samples, which then leads to an overall reduction in sample complexity for approximate recovery.

1 Introduction

Sparsity is a natural property of many real-world signals. For example, image and speech signals are sparse in the Fourier basis, which led to the theory of compressed sensing, and more broadly, sampling theory [12, 7]. In some important multivariate optimization problems with many optimal points, sparsity of the solution is also a measure of 'simplicity' and insisting on sparsity is a common method of regularization [19]. While recovering sparse vectors from linear measurements is a well-studied topic, technological advances and increasing data size raise new questions. These include quantized and nonlinear signal acquisition models, such as 1-bit compressed sensing [4]. In 1-bit compressed sensing, linear measurements of a sparse vector are quantized to only 1 bit, e.g., indicating whether the measurement outcome is positive or not, and the task is to recover the vector up to a prescribed Euclidean error with a minimum number of measurements. Like compressed sensing, the overwhelming majority of the literature, including this paper, focuses on the nonadaptive setting for the problem. One of the ways to approximately recover a sparse vector from 1-bit measurements is to use a subset of all the measurements to identify the support of the vector. Next, the remainder of the measurements can be used to approximate the vector within the support. Note that this second set of measurements is also predefined, and therefore the entire scheme is still nonadaptive. Such a method appears in the context of 'universal' matrix designs in [9, 1]. The resulting schemes are the best known, in some sense, but still result in a large gap between the upper and lower bounds for approximate recovery of vectors. In this paper we take steps to close these gaps, by presenting a simple yet powerful idea. Instead of using a subset of the measurements to recover the support of the vector exactly, we propose using a (smaller) set of measurements to recover a superset of the support. The remainder of the measurements can then be used to better approximate the vector within the superset.
1. What is the main contribution and strength of the paper regarding its results and simplicity?
2. Are there any concerns about the experiments section, and would it have been better as a purely theory paper?
3. What are some minor comments regarding the paper's content, such as justifying the claim of "optimal number of measurements," defining epsilon and "approximate recovery," avoiding uses of the word "necessary," rephrasing the proof of Lemma 1, providing brief explanations for the steps in the proof of Theorem 10, emphasizing that m is known but x is not in Lemma 12, and giving citations for certain statements?
4. Are there any recent refined bounds on the "for-each" setting that could be mentioned, even though the paper's emphasis is on the "for-all" setting?
5. Are there any other very minor comments, such as removing the word "typical" from "the typical group testing measurement," making it clear that "\cdot" denotes an inner product, renaming delta to beta to avoid inconsistency with delta in Theorem 7, reiterating that the constructions for [1] were non-explicit, changing "very low probability" to "zero probability," adding a citation for Pr[sign = sign] = (… cos^-1 formula …), and giving citations for certain statements?
Review
Review This is a generally well-written paper with a nice set of results. A potential weakness is that some of the main results come across as rather simple combinations of existing ideas/results, but on the other hand the simplicity can also be viewed as a strength. I don’t find the Experiments section essential, and would have been equally happy to have this as a purely theory paper. But the experiments don’t hurt either. My remaining comments are mostly quite minor – I will put a * next to those where I prefer a response, and any other responses are optional:

[*] p2: Please justify the claim “optimal number of measurements” - in particular highlighting the k*log(n/k) + 1/eps lower bound from [1] and adding it to Table 1. As far as I know, it is an open problem as to whether the k^{3/2} term is unavoidable in the binary setting - is this correct? (If not, again please include a citation and add to Table 1)
- p2: epsilon is used without being defined (and also the phrase “approximate recovery”)
- p4: Avoid the uses of the word “necessary”, since these are only sufficient conditions. Similarly, in Lemma 3 the statement “provided that” is strictly speaking incorrect (e.g., m = 0 satisfies the statement given).
- The proof of Lemma 1 is a bit confusing, and could be re-worded.
- p6: The terminology “rate”, “relative distance”, and notation H_q(delta) should not be assumed familiar for a NeurIPS audience.
- I think the proof of Theorem 10 should be revised. Please give brief explanations for the steps (e.g., the step after qd = (…) follows by re-arranging the choice of n, etc.)
[*] In fact, I couldn’t quite follow the last step – substituting q=O(k/alpha) is clear, but why is the denominator also proportional to alpha/k? (A definition of H_q would have helped here)
- Lemma 12: Please emphasize that m is known but x is not – this seems crucial.
- For the authors’ interest, there are some more recent refined bounds on the “for-each” setting such as “Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework” and “Sparse Classification: A Scalable Discrete Optimization Perspective”, though since the emphasis of this paper is on the “for-all” setting, mentioning these is not essential.

Very minor comments:
- No need for capitalization in “Group Testing”
- Give a citation when group testing first mentioned on p3
- p3: Remove the word “typical” from “the typical group testing measurement”, I think it only increases ambiguity/confusion.
- Lemma 1: Is “\cdot” an inner product? Please make it clear. Also, should it be mx or m^T x inside the sign(.)?
- Theorem 8: Rename delta to beta to avoid inconsistency with delta in Theorem 7. Also, is a “for all d” statement needed?
- Just before Section 4.2, perhaps re-iterate that the constructions for [1] were non-explicit (hence highlighting the value of Theorem 10).
- p7: “very low probability” -> “zero probability”
- p7: “This connection was known previously” -> Add citation
- p10: Please give a citation for Pr[sign = sign] = (… cos^-1 formula …).

=== POST-REVIEW COMMENTS: The responses were all as I had assumed them to be when stating my previous score, so naturally my score is unchanged. Overall a good paper, with the main limitation probably being the level of novelty.
NIPS
Title Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness Abstract We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful information for output prediction while reducing redundant information. We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training. Our code and adversarially robust models are publicly available.2 1 Introduction Adversarial attacks [8, 17, 18, 3, 5] on deep neural networks (DNNs) have received considerable attention recently. Such attacks are intentionally crafted to change prediction outcomes, e.g., by adding visually imperceptible perturbations to the original, natural examples [25]. Adversarial robustness, i.e., the ability of a trained model to maintain its predictive power under such attacks, is an important property for many safety-critical applications [4, 6, 26]. The most common approach to construct adversarially robust models is via adversarial training [34, 36, 30], i.e., training the model over adversarially constructed samples. Alemi et al. [1] propose using the so-called Information Bottleneck (IB) [27, 28] to enhance adversarial robustness. Proposed by Tishby and Zaslavsky [28], the information bottleneck expresses a tradeoff between (a) the mutual information of the input and latent layers vs. (b) the mutual information between latent layers and the output. Alemi et al. show empirically that using IB as a learning objective for DNNs indeed leads to better adversarial robustness. Intuitively, the IB objective increases the entropy between input and latent layers; in turn, this also increases the model’s robustness, as it makes latent layers less sensitive to input perturbations. Nevertheless, mutual information is notoriously expensive to compute. The Hilbert-Schmidt independence criterion (HSIC) has been used as a tractable, efficient substitute in a variety of machine learning tasks [31, 32, 33]. Recently, Ma et al. [16] also exploited this relationship to propose an HSIC bottleneck (HB), as a variant of the more classic (mutual-information based) information bottleneck, though not in the context of adversarial robustness. We revisit the HSIC bottleneck, studying its adversarial robustness properties. In contrast to both Alemi et al. [1] and Ma et al. [16], we use the HSIC bottleneck as a regularizer in addition to commonly used losses for DNNs (e.g., cross-entropy). Our proposed approach, HSIC-Bottleneck-as-Regularizer (HBaR), can be used in conjunction with adversarial examples; even without adversarial training, it is able to improve a classifier’s robustness. It also significantly outperforms previous IB-based methods for robustness, as well as the method proposed by Ma et al. (∗Equal contribution. 2 https://github.com/neu-spiral/HBaR)
Overall, we make the following contributions: 1. We apply the HSIC bottleneck as a regularizer for the purpose of adversarial robustness. 2. We provide a theoretical motivation for the constituent terms of the HBaR penalty, proving that it indeed constrains the output perturbation produced by adversarial attacks. 3. We show that HBaR can be naturally combined with a broad array of state-of-the-art adversarial training methods, consistently improving their robustness. 4. We empirically show that this phenomenon persists even for weaker methods. In particular, HBaR can even enhance the adversarial robustness of plain SGD, without access to adversarial examples. The remainder of this paper is structured as follows. We review related work in Sec. 2. In Sec. 3, we discuss the standard setting of adversarial robustness and HSIC. In Sec. 4, we provide a theoretical justification that HBaR reduces the sensitivity of the classifier to adversarial examples. Sec. 5 includes our experiments; we conclude in Sec. 6. 2 Related Work Adversarial Attacks. Adversarial attacks often add a constrained perturbation to natural inputs with the goal of maximizing classification loss. Szegedy et al. [25] learn a perturbation via box-constrained L-BFGS that misleads the classifier but minimally distorts the input. FGSM, proposed by Goodfellow et al. [8], is a one-step adversarial attack perturbing the input based on the sign of the gradient of the loss. PGD [13, 17] generates adversarial examples through multi-step projected gradient descent optimization. DeepFool [18] is an iterative attack strategy, which perturbs the input towards the direction of the decision boundaries. CW [3] applies a rectifier function regularizer to generate adversarial examples near the original input. AutoAttack (AA) [5] is an ensemble of parameter-free attacks that also deals with common issues like gradient masking [19] and fixed step sizes [17]. Adversarial Robustness. A common approach to obtaining robust models is adversarial training, i.e., training models over adversarial examples generated via the aforementioned attacks. For example, Madry et al. [17] show that training with adversarial examples generated by PGD achieves good robustness under different attacks. DeepDefense [34] penalizes the norm of adversarial perturbations. TRADES [36] minimizes the difference between the predictions of natural and adversarial examples to get a smooth decision boundary. MART [30] pays more attention to adversarial examples from misclassified natural examples and adds a KL-divergence term between natural and adversarial samples to the cross-entropy loss. We show that our proposed method HBaR can be combined with several such state-of-the-art defense methods and boost their performance. Information Bottleneck. The information bottleneck (IB) [27, 28] expresses a tradeoff in latent representations between information useful for output prediction and information retained about the input. IB has been employed to explore the training dynamics in deep learning models [23, 22], as well as used as a learning objective [1, 2]. Fischer [7] proposes a conditional entropy bottleneck (CEB) based on IB and observes its robust generalization ability empirically. Closer to us, Alemi et al. [1] propose a variational information bottleneck (VIB) for supervised learning. They empirically show that training VIB on natural examples provides good generalization and adversarial robustness.
We show that HBaR can be combined with various adversarial defense methods, enhancing their robustness, but also outperforms VIB [1] when given access only to natural samples. Moreover, we provide theoretical guarantees on how HBaR bounds the output perturbation induced by adversarial attacks. Mutual Information vs. HSIC. Mutual information is difficult to compute in practice. To address this, Alemi et al. [1] estimate IB via variational inference. Ma et al. [16] replaced mutual information by the Hilbert-Schmidt Independence Criterion (HSIC) and named this the HSIC Bottleneck (HB). Like Ma et al. [16], we utilize HSIC to estimate IB. However, our method differs from Ma et al. [16] in several aspects. First, they use HB to train the neural network stage-wise, layer-by-layer, without backpropagation, while we use the HSIC bottleneck as a regularizer in addition to cross-entropy and optimize the parameters jointly by backpropagation. Second, they only evaluate the model performance on classification accuracy, while we demonstrate adversarial robustness. Finally, we show that HBaR further enhances robustness to adversarial examples both theoretically and experimentally. Greenfeld et al. [9] use HSIC between the residual of the prediction and the input data as a learning objective for model robustness on covariate distribution shifts. Their focus is on robustness to distribution shifts, whereas our work focuses on robustness to adversarial examples, on which HBaR outperforms their proposed objective. 3 Background 3.1 Adversarial Robustness In standard k-ary classification, we are given a dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^{d_X}$, $y_i \in \{0,1\}^k$ are i.i.d. samples drawn from a joint distribution $P_{XY}$. A learner trains a neural network $h_\theta : \mathbb{R}^{d_X} \to \mathbb{R}^k$ parameterized by weights $\theta \in \mathbb{R}^{d_\theta}$ to predict $Y$ from $X$ by minimizing $L(\theta) = \mathbb{E}_{XY}[\ell(h_\theta(X), Y)] \approx \frac{1}{n}\sum_{i=1}^{n} \ell(h_\theta(x_i), y_i)$, (1) where $\ell : \mathbb{R}^k \times \mathbb{R}^k \to \mathbb{R}$ is a loss function, e.g., cross-entropy. We aim to find a model $h_\theta$ that has high prediction accuracy but is also adversarially robust: the model should maintain high prediction accuracy against a constrained adversary that can perturb input samples in a restricted fashion. Formally, prior to submitting a sample $x \in \mathbb{R}^{d_X}$ to the classifier, an adversary may perturb $x$ by an arbitrary $\delta \in S_r$, where $S_r \subseteq \mathbb{R}^{d_X}$ is the $\ell_\infty$-ball of radius $r$, i.e., $S_r = B(0, r) = \{\delta \in \mathbb{R}^{d_X} : \|\delta\|_\infty \le r\}$. (2) The adversarial robustness [17] of a model $h_\theta$ is measured by the expected loss attained by such adversarial examples, i.e., $L_r(\theta) = \mathbb{E}_{XY}\big[\max_{\delta \in S_r} \ell(h_\theta(X + \delta), Y)\big] \approx \frac{1}{n}\sum_{i=1}^{n} \max_{\delta \in S_r} \ell(h_\theta(x_i + \delta), y_i)$. (3) An adversarially robust neural network $h_\theta$ can be obtained via adversarial training, i.e., by minimizing the adversarial robustness loss in (3) empirically over the training set $D$. In practice, this amounts to training via stochastic gradient descent (SGD) over adversarial examples $x_i + \delta$ (see, e.g., [17]). In each epoch, $\delta$ is generated on a per-sample basis via an inner optimization over $S_r$, e.g., via projected gradient descent (PGD) on $-L$. 3.2 Hilbert-Schmidt Independence Criterion (HSIC) The Hilbert-Schmidt Independence Criterion (HSIC) is a statistical dependency measure introduced by Gretton et al. [10]. HSIC is the Hilbert-Schmidt norm of the cross-covariance operator between the distributions in Reproducing Kernel Hilbert Space (RKHS). Similar to Mutual Information (MI), HSIC captures non-linear dependencies between random variables.
HSIC(X,Y) is defined as: $\mathrm{HSIC}(X,Y) = \mathbb{E}_{XYX'Y'}[k_X(X,X')\,k_Y(Y,Y')] + \mathbb{E}_{XX'}[k_X(X,X')]\,\mathbb{E}_{YY'}[k_Y(Y,Y')] - 2\,\mathbb{E}_{XY}\big[\mathbb{E}_{X'}[k_X(X,X')]\,\mathbb{E}_{Y'}[k_Y(Y,Y')]\big]$, (4) where $X'$, $Y'$ are independent copies of $X$, $Y$, respectively, and $k_X$, $k_Y$ are kernels. In practice, we often approximate HSIC empirically. Given $n$ i.i.d. samples $\{(x_i, y_i)\}_{i=1}^{n}$ drawn from $P_{XY}$, we estimate HSIC via: $\widehat{\mathrm{HSIC}}(X,Y) = (n-1)^{-2}\,\mathrm{tr}(K_X H K_Y H)$, (5) where $K_X$ and $K_Y$ are kernel matrices with entries $K_{X_{ij}} = k_X(x_i, x_j)$ and $K_{Y_{ij}} = k_Y(y_i, y_j)$, respectively, and $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$ is a centering matrix. 4 Methodology In this section, we present our method, HSIC bottleneck as regularizer (HBaR), as a means to enhance a classifier’s robustness. The effect of HBaR for adversarial robustness is illustrated in Figure 1; the HSIC bottleneck penalty reduces the sensitivity of the classifier to adversarial examples. We provide a theoretical justification for this below, in Theorems 1 and 2, but also validate the efficacy of the HSIC bottleneck extensively with experiments in Section 5. 4.1 HSIC Bottleneck as Regularizer for Robustness Given a feedforward neural network $h_\theta : \mathbb{R}^{d_X} \to \mathbb{R}^k$ parameterized by $\theta$ with $M$ layers, and an input r.v. $X$, we denote by $Z_j \in \mathbb{R}^{d_{Z_j}}$, $j \in \{1, \ldots, M\}$, the output of the $j$-th layer under input $X$ (i.e., the $j$-th latent representation). We define our HBaR learning objective as follows: $\tilde{L}(\theta) = L(\theta) + \lambda_x \sum_{j=1}^{M} \mathrm{HSIC}(X, Z_j) - \lambda_y \sum_{j=1}^{M} \mathrm{HSIC}(Y, Z_j)$, (6) where $L$ is the standard loss given by Eq. (1) and $\lambda_x, \lambda_y \in \mathbb{R}_+$ are balancing hyperparameters. Together, the second and third terms in Eq. (6) form the HSIC bottleneck penalty. As HSIC measures dependence between two random variables, minimizing $\mathrm{HSIC}(X, Z_j)$ corresponds to removing redundant or noisy information contained in $X$. Hence, this term also naturally reduces the influence of an adversarial attack, i.e., a perturbation added to the input data. This is intuitive, but we also provide theoretical justification in the next subsection. Meanwhile, maximizing $\mathrm{HSIC}(Y, Z_j)$ encourages this lack of sensitivity to the input to happen while retaining the discriminative nature of the classifier, captured by dependence on useful information w.r.t. the output label $Y$. Note that minimizing $\mathrm{HSIC}(X, Z_j)$ alone would also lead to the loss of useful information, so it is necessary to keep the $\mathrm{HSIC}(Y, Z_j)$ term to make sure $Z_j$ is informative enough of $Y$. The overall algorithm is described in Alg. 1. In practice, we perform Stochastic Gradient Descent (SGD) over $\tilde{L}$: both $L$ and HSIC can be evaluated empirically over batches. For the latter, we use the estimator (5), restricted to the current batch. As we have $m$ samples in a mini-batch, the complexity of calculating the empirical HSIC (5) is $O(m^2 d_{\bar{Z}})$ [24] for a single layer, where $d_{\bar{Z}} = \max_j d_{Z_j}$. Thus, the overall complexity for (6) is $O(M m^2 d_{\bar{Z}})$. This computation is highly parallelizable; thus, the additional computation time of HBaR is small when compared to training a neural network via cross-entropy only.
Algorithm 1: Robust Learning with HBaR
Input: input sample tuples $\{(x_i, y_i)\}_{i=1}^{n}$, kernel functions $k_x, k_y, k_z$, a neural network $h_\theta$ parameterized by $\theta$, mini-batch size $m$, learning rate $\alpha$.
Output: parameters of the classifier $\theta$
while $\theta$ has not converged do
    Sample a mini-batch of size $m$ from the input samples.
    Forward propagation: calculate $z_i$ and $h_\theta(x)$.
    Compute kernel matrices for $X$, $Y$ and $Z_i$ using $k_x, k_y, k_z$, respectively, inside the mini-batch.
    Compute $\tilde{L}(\theta)$ via (6), where HSIC is evaluated empirically via (5).
    Backward propagation: $\theta \leftarrow \theta - \alpha \nabla \tilde{L}(\theta)$.
end
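To make Eq. (5) and the objective of Alg. 1 concrete, the following is a minimal PyTorch-style sketch of the biased empirical HSIC estimator and the HBaR loss of Eq. (6). It assumes Gaussian kernels for all variables and a model that returns both the logits and the list of per-layer latent representations; the helper names (gaussian_kernel, hsic, hbar_loss), the kernel bandwidth, the default λ values, and the model interface are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the empirical HSIC estimator (Eq. 5) and the HBaR objective (Eq. 6).
# All names, defaults, and the (logits, latents) model interface are illustrative assumptions.
import torch
import torch.nn.functional as F

def gaussian_kernel(a, sigma=5.0):
    # a: (m, d) batch of vectors; returns the (m, m) Gaussian kernel matrix.
    sq_dists = torch.cdist(a, a, p=2) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def hsic(a, b, sigma=5.0):
    # Biased estimator (m-1)^{-2} tr(K_A H K_B H), computed on one mini-batch.
    m = a.shape[0]
    K_a = gaussian_kernel(a.flatten(1), sigma)
    K_b = gaussian_kernel(b.flatten(1), sigma)
    H = torch.eye(m, device=a.device) - torch.ones(m, m, device=a.device) / m
    return torch.trace(K_a @ H @ K_b @ H) / (m - 1) ** 2

def hbar_loss(model, x, y_onehot, lambda_x=0.006, lambda_y=0.05):
    # Eq. (6): cross-entropy plus the HSIC bottleneck penalty over all M layers.
    logits, latents = model(x)  # assumed to return (logits, [z_1, ..., z_M])
    loss = F.cross_entropy(logits, y_onehot.argmax(dim=1))
    for z in latents:
        loss = loss + lambda_x * hsic(x, z) - lambda_y * hsic(y_onehot.float(), z)
    return loss
```

One SGD step of Alg. 1 then amounts to calling hbar_loss on a mini-batch, backpropagating, and taking an optimizer step; the per-layer $O(m^2 d_{\bar{Z}})$ cost mentioned above comes from building and multiplying the $m \times m$ kernel matrices.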
4.2 Combining HBaR with Adversarial Examples HBaR can also be naturally applied in combination with adversarial training. For a perturbation magnitude $r > 0$ of the adversarial examples, one can optimize the following objective instead of $\tilde{L}(\theta)$ in Eq. (6): $\tilde{L}_r(\theta) = L_r(\theta) + \lambda_x \sum_{j=1}^{M} \mathrm{HSIC}(X, Z_j) - \lambda_y \sum_{j=1}^{M} \mathrm{HSIC}(Y, Z_j)$, (7) where $L_r$ is the adversarial loss given by Eq. (3). This can be used instead of $L$ in Alg. 1. Adversarial examples need to be used in the computation of the gradient of the loss $L_r$ in each mini-batch; these need to be computed on a per-sample basis, e.g., via PGD over $S_r$, at additional computational cost. Note that the natural samples $(x_i, y_i)$ in a batch are used to compute the HSIC bottleneck regularizer. The HBaR penalty can similarly be combined with other adversarial learning methods and/or used with different means for selecting adversarial examples, other than PGD. We illustrate this in Section 5, where we combine HBaR with the state-of-the-art adversarial learning methods TRADES [36] and MART [30]. 4.3 HBaR Robustness Guarantees We provide here a formal justification for the use of HBaR to enhance robustness: we prove that the regularization terms $\mathrm{HSIC}(X, Z_j)$, $j = 1, \ldots, M$, lead to classifiers which are less sensitive to input perturbations. For simplicity, we focus on the case where $k = 1$ (i.e., binary classification). Let $Z \in \mathbb{R}^{d_Z}$ be the latent representation at some arbitrary intermediate layer of the network. That is, $Z = Z_j$, for some $j \in \{1, \ldots, M\}$; we omit the subscript $j$ to further reduce notation clutter. Then $h_\theta = (g \circ f)$, where $f : \mathbb{R}^{d_X} \to \mathbb{R}^{d_Z}$ maps the inputs to this intermediate layer, and $g : \mathbb{R}^{d_Z} \to \mathbb{R}$ maps the intermediate layer to the final layer. Then, $Z = f(X)$ and $g(Z) = h_\theta(X) \in \mathbb{R}$ are the latent and final outputs, respectively. Recall that, in HBaR, $\mathrm{HSIC}(X, Z)$ is associated with kernels $k_X$, $k_Z$. We make the following technical assumptions: Assumption 1. Let $\mathcal{X} \subseteq \mathbb{R}^{d_X}$, $\mathcal{Z} \subseteq \mathbb{R}^{d_Z}$ be the supports of random variables $X$, $Z$, respectively. We assume that both $h_\theta$ and $g$ are continuous and bounded functions on $\mathcal{X}$, $\mathcal{Z}$, respectively, i.e.: $h_\theta \in C(\mathcal{X})$, $g \in C(\mathcal{Z})$. (8) Moreover, we assume that all functions $h_\theta$ and $g$ we consider are uniformly bounded, i.e., there exist $0 < M_X, M_Z < \infty$ such that: $M_X = \max_{h_\theta \in C(\mathcal{X})} \|h_\theta\|_\infty$ and $M_Z = \max_{g \in C(\mathcal{Z})} \|g\|_\infty$. (9) The continuity stated in Assumption 1 is natural if all activation functions are continuous. Boundedness follows if, e.g., $\mathcal{X}$, $\mathcal{Z}$ are closed and bounded (i.e., compact), or if activation functions are bounded (e.g., softmax, sigmoid, etc.). Assumption 2. We assume kernels $k_X$, $k_Z$ are universal with respect to functions $h_\theta$ and $g$ that satisfy Assumption 1, i.e., if $\mathcal{F}$ and $\mathcal{G}$ are the induced RKHSs for kernels $k_X$ and $k_Z$, respectively, then for any $h_\theta$, $g$ that satisfy Assumption 1 and any $\varepsilon > 0$ there exist functions $h' \in \mathcal{F}$ and $g' \in \mathcal{G}$ such that $\|h_\theta - h'\|_\infty \le \varepsilon$ and $\|g - g'\|_\infty \le \varepsilon$. Moreover, functions in $\mathcal{F}$ and $\mathcal{G}$ are uniformly bounded, i.e., there exist $0 < M_\mathcal{F}, M_\mathcal{G} < \infty$ such that for all $h' \in \mathcal{F}$ and all $g' \in \mathcal{G}$: $M_\mathcal{F} = \max_{h' \in \mathcal{F}} \|h'\|_\infty$ and $M_\mathcal{G} = \max_{g' \in \mathcal{G}} \|g'\|_\infty$. (10) We note that several kernels used in practice are universal, including, e.g., the Gaussian and Laplace kernels. Moreover, given that functions that satisfy Assumption 1 are uniformly bounded by (9), such kernels can indeed remain universal while satisfying (10) via an appropriate rescaling. Our first result shows that HSIC(X,Z) at any intermediate layer Z bounds the output variance: Theorem 1. Under Assumptions 1 and 2, we have: $\mathrm{HSIC}(X, Z) \ge \frac{M_\mathcal{F} M_\mathcal{G}}{M_X M_Z} \sup_\theta \mathrm{Var}(h_\theta(X))$. (11)
The proof of Theorem 1 is in Appendix B in the supplement. We use a result by Greenfeld and Shalit [9] that links HSIC(X,Z) to the supremum of the covariance of bounded continuous functionals over $\mathcal{X}$ and $\mathcal{Z}$. Theorem 1 indicates that the regularizer HSIC(X,Z) at any intermediate layer naturally suppresses the variability of the output, i.e., the classifier prediction $h_\theta(X)$. To see this, observe that by Chebyshev’s inequality [20] the distribution of $h_\theta(X)$ concentrates around its mean when $\mathrm{Var}(h_\theta(X))$ approaches 0. As a result, bounding HSIC(X,Z) inherently also bounds the (global) variability of the classifier (across all parameters $\theta$). This observation motivates us to also maximize HSIC(Y,Z) to recover essential information useful for classification: if we want to achieve good adversarial robustness as well as good predictive accuracy, we have to strike a balance between HSIC(X,Z) and HSIC(Y,Z). This perfectly aligns with the intuition behind the information bottleneck [28] and the well-known accuracy-robustness trade-off [17, 36, 29, 21]. We also confirm this experimentally: we observe that both additional terms (the standard loss and HSIC(Y,Z)) are necessary for ensuring good prediction performance in practice (see Table 3). Most importantly, by further assuming that features are normal, we can show that HSIC bounds the power of an arbitrary adversary, as defined in Eq. (3): Theorem 2. Assume that $X \sim \mathcal{N}(0, \sigma^2 I)$. Then, under Assumptions 1 and 2, we have:3 $\frac{r \sqrt{-2 \log o(1)}\, d_X M_Z}{\sigma M_\mathcal{F} M_\mathcal{G}} \mathrm{HSIC}(X, Z) + o(r) \ge \mathbb{E}[|h_\theta(X + \delta) - h_\theta(X)|]$, for all $\delta \in S_r$. (12) (3Recall that for functions $f, g : \mathbb{R} \to \mathbb{R}$ we have $f = o(g)$ if $\lim_{r \to 0} f(r)/g(r) = 0$.) The proof of Theorem 2 can also be found in Appendix C in the supplement. We again use the result by Greenfeld and Shalit [9] along with Stein’s Lemma [15], which relates covariances of Gaussian r.v.s and their functions to expected gradients. In particular, we apply Stein’s Lemma to the bounded functionals considered by Greenfeld and Shalit by using a truncation argument. Theorem 2 implies that HSIC(X,Z) indeed bounds the output perturbation produced by an arbitrary adversary: suppressing HSIC sufficiently can ensure that the adversary cannot alter the output significantly, in expectation. In particular, if $\mathrm{HSIC}(X, Z) = o\big(\frac{\sigma M_\mathcal{F} M_\mathcal{G}}{\sqrt{-2 \log o(1)}\, d_X M_Z}\big)$, then $\lim_{r \to 0} \sup_{\delta \in S_r} \mathbb{E}[|h_\theta(X + \delta) - h_\theta(X)|]/r = 0$, i.e., the output is almost constant under small input perturbations. 5 Experiments 5.1 Experimental Setting We experiment with three standard datasets, MNIST [14], CIFAR-10 [12] and CIFAR-100 [12]. We use a 4-layer LeNet [17] for MNIST, ResNet-18 [11] and WideResNet-28-10 [35] for CIFAR-10, and WideResNet-28-10 [35] for CIFAR-100. We use cross-entropy as the loss $L(\theta)$. Licensing information for all existing assets can be found in Appendix D in the supplement. Algorithms. We compare HBaR to the following non-adversarial learning algorithms: Cross-Entropy (CE), Stage-Wise HSIC Bottleneck (SWHB) [16], XIC [9], and Variational Information Bottleneck (VIB) [1]. We also incorporate HBaR into several adversarial learning algorithms, as described in Section 4.2, and compare against the original methods, without the HBaR penalty. The adversarial methods we use are: Projected Gradient Descent (PGD) [17], TRADES [36], and MART [30]. Further details and parameters can be found in Appendix E in the supplement.
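As an illustration of how the per-sample adversarial examples required by $L_r$ in Eq. (3) and Eq. (7) — and by the PGD-based baselines above — are typically generated, here is a hedged $\ell_\infty$ PGD sketch. The helper name pgd_attack, the random start, and the default step size and step count are illustrative choices, not necessarily the exact settings used in the paper; the (logits, latents) model interface is the same assumption as in the earlier sketch.

```python
# Hedged sketch of L-infinity PGD for the inner maximization in Eq. (3).
# Names and defaults are illustrative, not the paper's exact configuration.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, r=8 / 255, step_size=2 / 255, steps=20):
    # y holds class indices; start from a random point in the L-inf ball around x.
    delta = torch.empty_like(x).uniform_(-r, r)
    for _ in range(steps):
        delta.requires_grad_(True)
        logits, _ = model(x + delta)  # assumed (logits, latents) interface
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back onto the ball and the valid pixel range.
        delta = (delta.detach() + step_size * grad.sign()).clamp(-r, r)
        delta = (x + delta).clamp(0.0, 1.0) - x
    return (x + delta).detach()
```

Training with Eq. (7) would then evaluate the adversarial loss term on pgd_attack(model, x, y) while still computing the HSIC terms on the natural batch, consistent with the note in Section 4.2 that natural samples are used for the bottleneck penalty.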
Performance Metrics. For all methods, we evaluate the obtained model $h_\theta$ via the following metrics: (a) Natural (i.e., clean test data) accuracy, and adversarial robustness via test accuracy under (b) FGSM, the fast gradient sign attack [8], (c) PGDm, the PGD attack with m steps used for the internal PGD optimization [17], (d) CW, the CW-loss within the PGD framework [3], and (e) AA, AutoAttack [5]. All five metrics are reported in percent (%) accuracy. Following prior literature, we set the step size to 0.01 and radius r = 0.3 for MNIST, and the step size to 2/255 and r = 8/255 for CIFAR-10 and CIFAR-100. All attacks happen during the test phase and have full access to model parameters (i.e., are white-box attacks). All experiments are carried out on a Tesla V100 GPU with 32 GB memory and 5120 cores. 5.2 Results Combining HBaR with Adversarial Examples. We show how HBaR can be used to improve robustness when used as a regularizer, as described in Section 4.2, along with state-of-the-art adversarial learning methods. We run each experiment five times and report the mean natural test accuracy and adversarial robustness of all models on the MNIST, CIFAR-10, and CIFAR-100 datasets across four architectures in Table 1 and Table 2. Combined with all adversarial training baselines, HBaR consistently improves adversarial robustness against all types of attacks on all datasets. The resulting improvements are larger than 2 standard deviations (which range between 0.05 and 0.2) in most cases; we report the results with standard deviations in Appendix G in the supplement. Although natural accuracy is generally restricted by the trade-off between robustness and accuracy [36], we observe that incorporating HBaR comes with an actual improvement in natural accuracy in most cases. [Figure 3 plots omitted; legend: MNIST by LeNet — CE only, HBaR-high (λx=1, λy=50), HBaR-low (λx=0.001, λy=0.005); CIFAR-10 by ResNet-18 — CE only, HBaR-high (λx=0.006, λy=0.05), HBaR-low (λx=0.001, λy=0.05); panels (a) HSIC(X,ZM), (b) HSIC(Y,ZM), (c) Natural Accuracy, (d) Adv. Robustness.] Figure 3: Visualization of the HBaR quantities (a) HSIC(X,ZM), (b) HSIC(Y,ZM), (c) natural test accuracy, and (d) adversarial robustness against PGD attack (PGD40 and PGD20 on MNIST and CIFAR-10, respectively) as a function of training epochs, on MNIST by LeNet (top) and CIFAR-10 by ResNet (bottom). Different colored lines correspond to CE, HBaR-high (HBaR with high weights λ), and HBaR-low (HBaR with small weights λ). HBaR-low parameters are selected so that the values of the loss L and each of the HSIC terms are close after the first epoch. Adversarial Robustness Analysis without Adversarial Training. Next, we show that HBaR can achieve modest robustness even without adversarial examples during training.
We evaluate the robustness of HBaR on CIFAR-10 with ResNet-18 against various adversarial attacks, and compare HBaR with other information bottleneck penalties without adversarial training in Figure 2. Specifically, we compare the robustness of HBaR with other IB-based methods under various attacks and hyperparameters. Our proposed HBaR achieves the best overall robustness against all three types of attacks while attaining competitive natural test accuracy. Interestingly, HBaR achieves natural accuracy (95.27%) comparable to CE (95.32%), which is much higher than VIB (92.35%), XIC (92.93%) and SWHB (59.18%). We observe that SWHB underperforms HBaR on CIFAR-10 for both natural accuracy and robustness. One possible explanation may be that when the model is deep, minimizing HSIC without backpropagation, as in SWHB, does not suffice to transmit the learned information across layers. Compared to SWHB, HBaR backpropagates over the HSIC objective through each intermediate layer and computes gradients only once in each batch, improving accuracy and robustness while reducing computational cost significantly. Synergy between HSIC Terms. Focusing on ZM, the last latent layer, Figure 3 shows the evolution per epoch of: (a) HSIC(X,ZM), (b) HSIC(Y,ZM), (c) natural accuracy (in %), and (d) adversarial robustness (in %) under PGD attack on MNIST and CIFAR-10. Different lines correspond to CE, HBaR-high (HBaR with high weights λ), and HBaR-low (HBaR with small weights λ). HBaR-low parameters are selected so that the values of the loss L and each of the HSIC terms are close after the first epoch. Figure 3(c) illustrates that all three settings achieve good natural accuracy on both datasets. However, in Figure 3(d), only HBaR-high, which puts sufficient weight on the HSIC terms, attains relatively high adversarial robustness. In Figure 3(a), we see that CE leads to high HSIC(X,ZM) for the shallow LeNet, but low in the (much deeper) ResNet-18, even lower than HBaR-low. Moreover, we also see that the best performer in terms of adversarial robustness, HBaR-high, lies in between the other two w.r.t. HSIC(X,ZM). Both of these observations indicate the importance of the HSIC(Y,ZM) penalty: minimizing HSIC(X,ZM) appropriately leads to good adversarial robustness, but coupling learning to labels via the third term is integral to maintaining useful label-related information in latent layers, thus resulting in good adversarial robustness. Figure 3(b) confirms this, as HBaR-high achieves relatively high HSIC(Y,ZM) on both datasets. Figure 4 provides another perspective of the same experiments via the learning dynamics on the HSIC plane. We again observe that the best performer in terms of robustness, HBaR-high, lies in between the other two methods, crucially attaining a much higher HSIC(Y,ZM) than HBaR-low. Moreover, for both HBaR methods, we clearly observe the two distinct optimization phases first observed by Shwartz-Ziv and Tishby [23] in the context of the mutual information bottleneck: the fast empirical risk minimization phase, where the neural network tries to learn a meaningful representation by increasing HSIC(Y,ZM) regardless of information redundancy (HSIC(X,ZM) increasing), and the representation compression phase, where the neural network turns its focus onto compressing the latent representation by minimizing HSIC(X,ZM), while maintaining highly label-related information.
Interestingly, the HBaR penalty produces the two-phase behavior even though our networks use ReLU activation functions; Shwartz et al. [23] only observed these two optimization phases on neural networks with tanh activation functions, a phenomenon further confirmed by Saxe et al. [22]. Ablation Study. Motivated by the above observations, we turn our attention to how the three terms in the loss function in Eq. (6) affect HBaR. As illustrated in Table 3, removing any part leads to either a significant natural accuracy or robustness degradation. Specifically, using L(θ) only (row [i]) lacks adversarial robustness; removing L(θ) (row [ii]) or the penalty on Y (row [iii]) degrades natural accuracy significantly (a similar result was also observed in [2]); finally, removing the penalty on X improves the natural accuracy while degrading adversarial robustness. The three terms combined together by proper hyperparameters λx and λy (row [v]) achieve both high natural accuracy and adversarial robustness. We provide a comprehensive ablation study on the sensitivity of λx and λy and draw conclusions in Appendix F in the supplement (Tables 7 and 8). 6 Conclusions We investigate the HSIC bottleneck as regularizer (HBaR) as a means to enhance adversarial robustness. We theoretically prove that HBaR suppresses the sensitivity of the classifier to adversarial examples while retaining its discriminative nature. One limitation of our method is that the robustness gain is modest when training with only natural examples. Moreover, a possible negative societal impact is overconfidence in adversarial robustness: over-confidence in the adversarially-robust models produced by HBaR as well as other defense methods may lead to overlooking their potential failure on newly-invented attack methods; this should be taken into account in safety-critical applications like healthcare [6] or security [26]. We extend the discussion on the limitations and potential negative societal impacts of our work in Appendix H and I, respectively, in the supplement. 7 Acknowledgements The authors gratefully acknowledge support by the National Science Foundation under grants CCF-1937500 and CNS-2112471, and the National Institutes of Health under grant NHLBI U01HL089856.
1. What is the focus and contribution of the paper regarding adversarial robustness? 2. What are the strengths of the proposed regularization method, particularly in terms of theoretical analysis? 3. Do you have any concerns or suggestions regarding the experimental results? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a novel regularization for adversarial robustness based on an information bottleneck criterion. The method is well-motivated and supported theoretically and empirically. Review Strengths: The proposed regularization is well-motivated (Fig. 1, Lines 127-135). It is theoretically shown to upper bound the variance in the network output (Theorem 1 with continuity and bounded assumptions) and the expected absolute difference of the output (Theorem 2 with Gaussian input assumption). Theorem 2 is a more direct bound for the adversarial robustness. The experiments support the effectiveness of the idea in increasing the robustness of ResNet and WideResNet models on CIFAR-10 (Table 2) and CIFAR-100 (Table 1 right) when combined with other robustness methods as measured by various attacks such as AutoAttack. In particular, the improvements achieved combined with TRADES are significant (Table 1-right, CIFAR-100 ~0.5% against AA, Table 2-right, CIFAR-10 ~0.8% against AA). The improvements are shown to be statistically significant. Minor: In Table 3 and Appendix E, the accuracy against PGD is low. Is this ablation study done without adversarial training? It would be better to see an ablation study on the best performing methods. Also, lambda_y is only set to 0.05 in Table 6 and a wider range needs to be studied.
NIPS
Title Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness Abstract We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful information for output prediction while reducing redundant information. We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training. Our code and adversarially robust models are publicly available.2 1 Introduction Adversarial attacks [8, 17, 18, 3, 5] to deep neural networks (DNNs) have received considerable attention recently. Such attacks are intentionally crafted to change prediction outcomes, e.g, by adding visually imperceptible perturbations to the original, natural examples [25]. Adversarial robustness, i.e., the ability of a trained model to maintain its predictive power under such attacks, is an important property for many safety-critical applications [4, 6, 26]. The most common approach to construct adversarially robust models is via adversarial training [34, 36, 30], i.e., training the model over adversarially constructed samples. Alemi et al. [1] propose using the so-called Information Bottleneck (IB) [27, 28] to ehnance adversarial robustness. Proposed by Tishby and Zaslavsky [28], the information bottleneck expresses a tradeoff between (a) the mutual information of the input and latent layers vs. (b) the mutual information between latent layers and the output. Alemi et al. show empirically that using IB as a learning objective for DNNs indeed leads to better adversarial robustness. Intuitively, the IB objective increases the entropy between input and latent layers; in turn, this also increases the model’s robustness, as it makes latent layers less sensitive to input perturbations. Nevertheless, mutual information is notoriously expensive to compute. The Hilbert-Schmidt independence criterion (HSIC) has been used as a tractable, efficient substitute in a variety of machine ∗Equal contribution. 2https://github.com/neu-spiral/HBaR 35th Conference on Neural Information Processing Systems (NeurIPS 2021). learning tasks [31, 32, 33]. Recently, Ma et al. [16] also exploited this relationship to propose an HSIC bottleneck (HB), as a variant to the more classic (mutual-information based) information bottleneck, though not in the context of adversarial robustness. We revisit the HSIC bottleneck, studying its adversarial robustness properties. In contrast to both Alemi et al. [1] and Ma et al. [16], we use the HSIC bottleneck as a regularizer in addition to commonly used losses for DNNs (e.g., cross-entropy). Our proposed approach, HSIC-Bottleneck-asRegularizer (HBaR) can be used in conjunction with adversarial examples; even without adversarial training, it is able to improve a classifier’s robustness. It also significantly outperforms previous IB-based methods for robustness, as well as the method proposed by Ma et al. 
Overall, we make the following contributions: 1. We apply the HSIC bottleneck as a regularizer for the purpose of adversarial robustness. 2. We provide a theoretical motivation for the constituent terms of the HBaR penalty, proving that it indeed constrains the output perturbation produced by adversarial attacks. 3. We show that HBaR can be naturally combined with a broad array of state of the art adversarial training methods, consistently improving their robustness. 4. We empirically show that this phenomenon persists even for weaker methods. In particular, HBaR can even enhance the adversarial robustness of plain SGD, without access to adversarial examples. The remainder of this paper is structured as follows. We review related work in Sec. 2. In Sec. 3, we discuss the standard setting of adversarial robustness and HSIC. In Sec. 4, we provide a theoretical justification that HBaR reduces the sensitivity of the classifier to adversarial examples. Sec. 5 includes our experiments; we conclude in Sec. 6. 2 Related Work Adversarial Attacks. Adversarial attacks often add a constrained perturbation to natural inputs with the goal of maximizing classification loss. Szegedy et al. [25] learn a perturbation via boxconstrained L-BFGS that misleads the classifier but minimally distort the input. FGSM, proposed by Goodfellow et al. [8], is a one step adversarial attack perturbing the input based on the sign of the gradient of the loss. PGD [13, 17] generates adversarial examples through multi-step projected gradient descent optimization. DeepFool [18] is an iterative attack strategy, which perturbs the input towards the direction of the decision boundaries. CW [3] applies a rectifier function regularizer to generate adversarial examples near the original input. AutoAttack (AA) [5] is an ensemble of parameter-free attacks, that also deals with common issues like gradient masking [19] and fixed step sizes [17]. Adversarial Robustness. A common approach to obtaining robust models is adversarial training, i.e., training models over adversarial examples generated via the aforementioned attacks. For example, Madry et al. [17] show that training with adversarial examples generated by PGD achieves good robustness under different attacks. DeepDefense [34] penalizes the norm of adversarial perturbations. TRADES [36] minimizes the difference between the predictions of natural and adversarial examples to get a smooth decision boundary. MART [30] pays more attention to adversarial examples from misclassified natural examples and adds a KL-divergence term between natural and adversarial samples to the cross-entropy loss. We show that our proposed method HBaR can be combined with several such state-of-the-art defense methods and boost their performance. Information Bottleneck. The information bottleneck (IB) [27, 28] expresses a tradeoff in latent representations between information useful for output prediction and information retained about the input. IB has been employed to explore the training dynamics in deep learning models [23, 22] as well as a learning objective [1, 2]. Fischer [7] proposes a conditional entropy bottleneck (CEB) based on IB and observes its robust generalization ability empirically. Closer to us, Alemi et al. [1] propose a variational information bottleneck (VIB) for supervised learning. They empirically show that training VIB on natural examples provides good generalization and adversarial robustness. 
We show that HBaR can be combined with various adversarial defense methods enhancing their robustness, but also outperforms VIB [1] when given access only to natural samples. Moreover, we provide theoretical guarantees on how HBaR bounds the output perturbation induced by adversarial attacks. Mutual Information vs. HSIC. Mutual information is difficult to compute in practice. To address this, Alemi et al. [1] estimate IB via variational inference. Ma et al. [16] replaced mutual information by the Hilbert Schmidt Independence Criterion (HSIC) and named this the HSIC Bottleneck (HB). Like Ma et al. [16], we utilize HSIC to estimate IB. However, our method is different from Ma et al. [16] in several aspects. First, they use HB to train the neural network stage-wise, layer-bylayer, without backpropagation, while we use HSIC bottleneck as a regularization in addition to cross-entropy and optimize the parameters jointly by backpropagation. Second, they only evaluate the model performance on classification accuracy, while we demonstrate adversarial robustness. Finally, we show that HBaR further enhances robustness to adversarial examples both theoretically and experimentally. Greenfeld et al. [9] use HSIC between the residual of the prediction and the input data as a learning objective for model robustness on covariate distribution shifts. Their focus is on robustness to distribution shifts, whereas our work focuses on robustness to adversarial examples, on which HBaR outperforms their proposed objective. 3 Background 3.1 Adversarial Robustness In standard k-ary classification, we are given a dataset D = {(xi, yi)}ni=1, where xi ∈ RdX , yi ∈ {0, 1}k are i.i.d. samples drawn from joint distribution PXY . A learner trains a neural network hθ : RdX → Rk parameterized by weights θ ∈ Rdθ to predict Y from X by minimizing L(θ) = EXY [`(hθ(X), Y )] ≈ 1 n n∑ i=1 `(hθ(xi), yi), (1) where ` : Rk × Rk → R is a loss function, e.g., cross-entropy. We aim to find a model hθ that has high prediction accuracy but is also adversarially robust: the model should maintain high prediction accuracy against a constrained adversary, that can perturb input samples in a restricted fashion. Formally, prior to submitting a sample x ∈ RdX to the classifier, an adversary may perturb x by an arbitrary δ ∈ Sr, where Sr ⊆ RdX is the `∞-ball of radius r, i.e., Sr = B(0, r) = {δ ∈ RdX : ‖δ‖∞ ≤ r}. (2) The adversarial robustness [17] of a model hθ is measured by the expected loss attained by such adversarial examples, i.e., Lr(θ) = EXY [ max δ∈Sr ` (hθ(X + δ), Y ) ] ≈ 1 n n∑ i=1 max δ∈Sr `(hθ(xi + δ), yi). (3) An adversarially robust neural network hθ can be obtained via adversarial training, i.e., by minimizing the adversarial robustness loss in (3) empirically over the training set D. In practice, this amounts to training via stochastic gradient descent (SGD) over adversarial examples xi + δ (see, e.g., [17]). In each epoch, δ is generated on a per sample basis via an inner optimization over Sr, e.g., via projected gradient descent (PGD) on −L. 3.2 Hilbert-Schmidt Independence Criterion (HSIC) The Hilbert-Schmidt Independence Criterion (HSIC) is a statistical dependency measure introduced by Gretton et al. [10]. HSIC is the Hilbert-Schmidt norm of the cross-covariance operator between the distributions in Reproducing Kernel Hilbert Space (RKHS). Similar to Mutual Information (MI), HSIC captures non-linear dependencies between random variables. 
HSIC(X,Y ) is defined as: HSIC(X,Y ) = EXYX′Y ′ [kX (X,X ′) kY ′ (Y, Y ′)] + EXX′ [kX (X,X ′)]EY Y ′ [kY (Y, Y ′)] − 2EXY [EX′ [kX (X,X ′)]EY ′ [kY (Y, Y ′)]] , (4) where X ′, Y ′ are independent copies of X , Y , respectively, and kX , kY are kernels. In practice, we often approximate HSIC empirically. Given n i.i.d. samples {(xi, yi)}ni=1 drawn from PXY , we estimate HSIC via: ĤSIC(X,Y ) = (n− 1)−2 tr (KXHKYH) , (5) where KX and KY are kernel matrices with entries KXij = kX(xi, xj) and KYij = kY (yi, yj), respectively, and H = I− 1n11 > is a centering matrix. 4 Methodology In this section, we present our method, HSIC bottleneck as regularizer (HBaR) as a means to enhance a classifier’s robustness. The effect of HBaR for adversarial robustness is illustrated in Figure 1; the HSIC bottleneck penalty reduces the sensitivity of the classifier to adversarial examples. We provide a theoretical justification for this below, in Theorems 1 and 2, but also validate the efficacy of the HSIC bottleneck extensively with experiments in Section 5. 4.1 HSIC Bottleneck as Regularizer for Robustness Given a feedforward neural network hθ : RdX → Rk parameterized by θ with M layers, and an input r.v. X , we denote by Zj ∈ RdZj , j ∈ {1, . . . ,M}, the output of the j-th layer under input X (i.e., the j-th latent representation). We define our HBaR learning objective as follows: L̃(θ) = L(θ) + λx M∑ j=1 HSIC(X,Zj)− λy M∑ j=1 HSIC(Y,Zj), (6) where L is the standard loss given by Eq. (1) and λx, λy ∈ R+ are balancing hyperparameters. Together, the second and third terms in Eq. (6) form the HSIC bottleneck penalty. As HSIC measures dependence between two random variables, minimizing HSIC(X,Zi) corresponds to removing redundant or noisy information contained in X . Hence, this term also naturally reduces the influence of an adversarial attack, i.e., a perturbation added on the input data. This is intuitive, but we also provide theoretical justification in the next subsection. Meanwhile, maximizing HSIC(Y,Zi) encourages this lack of sensitivity to the input to happen while retaining the discriminative nature of the classifier, captured by dependence to useful information w.r.t. the output label Y . Note that minimizing HSIC(X,Zi) alone would also lead to the loss of useful information, so it is necessary to keep the HSIC(Y,Zi) term to make sure Zi is informative enough of Y . The overall algorithm is described in Alg. 1. In practice, we perform Stochastic Gradient Descent (SGD) over L̃: both L and HSIC can be evaluated empirically over batches. For the latter, we use the estimator (5), restricted over the current batch. As we have m samples in a mini-batch, the complexity of calculating the empirical HSIC (5) is O(m2dZ̄) [24] for a single layer, where dZ̄ = maxj dZj . Thus, the overall complexity for (6) is O(Mm 2dZ̄). This computation is highly parallelizable, thus, the additional computation time of HBaR is small when compared to training a neural network via cross-entropy only. Algorithm 1: Robust Learning with HBaR Input: input sample tuples {(xi, yi)}ni=1, kernel function kx, ky, kz , a neural network hθ parameterized by θ, mini-batch size m, learning rate α. Output: parameter of classifier θ while θ has not converged do Sample a mini-batch of size m from input samples. Forward Propagation: calculate zi and hθ(x). Compute kernel matrices for X , Y and Zi using kx, ky, kz respectively inside mini-batch. Compute L̃(θ) via (6), where HSIC is evaluated empirically via (5). Backward Propagation: θ ← θ − α∇L̃(θ). 
end 4.2 Combining HBaR with Adversarial Examples HBaR can also be naturally applied in combination with adversarial training. For r > 0 the magnitude of the perturbations introduced in adversarial examples, one can optimize the following objective instead of L̃(θ) in Eq. (6): L̃r(θ) = Lr(θ) + λx M∑ j=1 HSIC(X,Zj)− λy M∑ j=1 HSIC(Y, Zj), (7) whereLr is the adversarial loss given by Eq. (3). This can be used instead ofL in Alg. 1. Adversarial examples need to be used in the computation of the gradient of the loss Lr in each minibatch; these need to be computed on a per sample basis, e.g., via PGD over Sr, at additional computational cost. Note that the natural samples (xi, yi) in a batch are used to compute the HSIC bottleneck regularizer. The HBaR penalty can similarly be combined with other adversarial learning methods and/or used with different means for selecting adversarial examples, other than PGD. We illustrate this in Section 5, where we combine HBaR with state-of-the-art adversarial learning methods TRADES [36] and MART [30]. 4.3 HBaR Robustness Guarantees We provide here a formal justification for the use of HBaR to enhance robustness: we prove that regularization terms HSIC(X,Zj), j = 1, . . . ,M lead to classifiers which are less sensitive to input perturbations. For simplicity, we focus on the case where k = 1 (i.e., binary classification). Let Z ∈ RdZ be the latent representation at some arbitrary intermediate layer of the network. That is, Z = Zj , for some j ∈ {1, . . . ,M}; we omit the subscript j to further reduce notation clutter. Then hθ = (g ◦ f), where f : RdX → RdZ maps the inputs to this intermediate layer, and g : RdZ → R maps the intermediate layer to the final layer. Then, Z = f(X) and g(Z) = hθ(X) ∈ R are the latent and final outputs, respectively. Recall that, in HBaR, HSIC(X,Z) is associated with kernels kX , kZ . We make the following technical assumptions: Assumption 1. Let X ⊆ RdX , Z ⊆ RdZ be the supports of random variables X , Z, respectively. We assume that both hθ and g are continuous and bounded functions in X , Z , respectively, i.e.: hθ ∈ C(X ), g ∈ C(Z). (8) Moreover, we assume that all functions hθ and g we consider are uniformly bounded, i.e., there exist 0 < MX ,MZ <∞ such that: MX = max hθ∈C(X ) ‖hθ‖∞ and MZ = max g∈C(Z) ‖g‖∞. (9) The continuity stated in Assumption 1 is natural, if all activation functions are continuous. Boundedness follows if, e.g., X , Z are closed and bounded (i.e., compact), or if activation functions are bounded (e.g., softmax, sigmoid, etc.). Assumption 2. We assume kernels kX , kZ are universal with respect to functions hθ and g that satisfy Assumption 1, i.e., if F and G are the induced RKHSs for kernels kX and kZ , respectively, then for any hθ, g that satisfy Assumption 1 and any ε > 0 there exist functions h′ ∈ F and g′ ∈ G such that ||hθ − h′||∞ ≤ ε and ||g − g′||∞ ≤ ε. Moreover, functions in F and G are uniformly bounded, i.e., there exist 0 < MF ,MG <∞ such that for all h′ ∈ F and all g′ ∈ G: MF = max f ′∈F ‖f ′‖∞ and MG = max g′∈G ‖g′‖∞. (10) We note that several kernels used in practice are universal, including, e.g., the Gaussian and Laplace kernels. Moreover, given that functions that satisfy Assumption 1 are uniformly bounded by (9), such kernels can indeed remain universal while satisfying (10) via an appropriate rescaling. Our first result shows that HSIC(X,Z) at any intermediate layer Z bounds the output variance: Theorem 1. Under Assumptions 1 and 2, we have: HSIC(X,Z) ≥ MFMG MXMZ sup θ Var(hθ(X)). 
(11) The proof of Theorem 1 is in Appendix B in the supplement. We use a result by Greenfeld and Shalit [9] that links HSIC(X,Z) to the supremum of the covariance of bounded continuous functionals over X and Z . Theorem 1 indicates that the regularizer HSIC(X,Z) at any intermediate layer naturally suppresses the variability of the output, i.e., the classifier prediction hθ(X). To see this, observe that by Chebyshev’s inequality [20] the distribution of hθ(X) concentrates around its mean when Var(hθ(X)) approaches 0. As a result, bounding HSIC(X,Z) inherently also bounds the (global) variability of the classifier (across all parameters θ). This observation motivates us to also maximize HSIC(Y, Z) to recover essential information useful for classification: if we want to achieve good adversarial robustness as well as good predictive accuracy, we have to strike a balance between HSIC(X,Z) and HSIC(Y,Z). This perfectly aligns with the intuition behind the information bottleneck [28] and the well-known accuracy-robustness trade off [17, 36, 29, 21]. We also confirm this experimentally: we observe that both additional terms (the standard loss and HSIC(Y,Z)) are necessary for ensuring good prediction performance in practice (see Table 3). Most importantly, by further assuming that features are normal, we can show that HSIC bounds the power of an arbitrary adversary, as defined in Eq. (3): Theorem 2. Assume that X ∼ N (0, σ2I). Then, under Assumptions 1 and 2, we have:3 r √ −2 log o(1)dXMZ σMFMG HSIC(X,Z) + o(r) ≥ E[|hθ(X + δ)− hθ(X)|], for all δ ∈ Sr. (12) The proof of Theorem 2 can also be found in Appendix C in the supplement. We again use the result by Greenfeld and Shalit [9] along with Stein’s Lemma [15], that relates covariances of Gaussian r.v.s and their functions to expected gradients. In particular, we apply Stein’s Lemma to the bounded functionals considered by Greenfeld and Shalit by using a truncation argument. Theorem 2 implies that HSIC(X,Z) indeed bounds the output perturbation produced by an arbitrary adversary: suppressing HSIC sufficiently can ensure that the adversary cannot alter the out- put significantly, in expectation. In particular, if HSIC(X,Z) = o ( σMFMG√ −2log o(1)dXMZ ) , then limr→0 supδ∈Sr E[|hθ(X + δ)− hθ(X)|]/r = 0, i.e., the output is almost constant under small input perturbations. 5 Experiments 5.1 Experimental Setting We experiment with three standard datasets, MNIST [14], CIFAR-10 [12] and CIFAR-100 [12]. We use a 4-layer LeNet [17] for MNIST, ResNet-18 [11] and WideResNet-28-10 [35] for CIFAR-10, 3Recall that for functions f, g : R→ R we have f = o(g) if limr→0 f(r)g(r) = 0. and WideResNet-28-10 [35] for CIFAR-100. We use cross-entropy as loss L(θ). Licensing information for all existing assets can be found in Appendix D in the supplement. Algorithms. We compare HBaR to the following non-adversarial learning algorithms: CrossEntropy (CE), Stage-Wise HSIC Bottleneck (SWHB) [16], XIC [9], and Variational Information Bottleneck (VIB) [1]. We also incorporate HBaR to several adversarial learning algorithms, as described in Section 4.2, and compare against the original methods, without the HBaR penalty. The adversarial methods we use are: Projected Gradient Descent (PGD) [17], TRADES [36], and MART [30]. Further details and parameters can be found in Appendix E in the supplement. Performance Metrics. 
For all methods, we evaluate the obtained model hθ via the following metrics: (a) Natural (i.e., clean test data) accuracy, and adversarial robustness via test accuracy under (b) FGSM, the fast gradient sign attack [8], (c) PGDm, the PGD attack with m steps used for the internal PGD optimization [17], (d) CW, the CW-loss within the PGD framework [3], and (e) AA, AutoAttack [5]. All five metrics are reported in percent (%) accuracy. Following prior literature, we set step size to 0.01 and radius r = 0.3 for MNIST, and step size as 2/255 and r = 8/255 for CIFAR-10 and CIFAR-100. All attacks happen during the test phase and have full access to model parameters (i.e., are white-box attacks). All experiments are carried out on a Tesla V100 GPU with 32 GB memory and 5120 cores. 5.2 Results Combining HBaR with Adversarial Examples. We show how HBaR can be used to improve robustness when used as a regularizer, as described in Section 4.2, along with state-of-the-art adversarial learning methods. We run each experiment by five times and report the mean natural test accuracy and adversarial robustness of all models on MNIST, CIFAR-10, and CIFAR-100 datasets by four architectures in Table 1 and Table 2. Combined with all adversarial training baselines, HBaR consistently improves adversarial robustness against all types of attacks on all datasets. The resulting improvements are larger than 2 standard deviations (that range between 0.05-0.2) in most cases; we report the results with standard deviations in Appendix G in the supplement. Although natural accuracy is generally restricted by the trade-off between robustness and accuracy [36], we observe that incorporating HBaR comes with an actual improvement over natural accuracy in most cases. 0 20 40 60 80 100 Epoch 0 2 4 6 8 10 Ad ve sa ria l R ob us tn es s ( % ) MNIST by LeNet: CE only HBaR-high (λx=1, λy=50) HBaR-low (λx=0.001, λy=0.005) 0 20 40 60 80 100 Epoch 0 10 20 30 40 50 HS IC (X , Z _M ) 0 20 40 60 80 100 Epoch 6.5 7.0 7.5 8.0 8.5 9.0 HS IC (Y , Z _M ) 0 20 40 60 80 100 Epoch 93 94 95 96 97 98 99 100 Na tu ra l A cc ur ac y (% ) 0 20 40 60 80 100 Epoch 0 2 4 6 8 10 Ad ve sa ria l R ob us tn es s ( % ) 0 50 100 150 200 250 300 Epoch 0 10 20 30 40 50 Ad ve sa ria l R ob us tn es s ( % ) CIFAR-10 by ResNet-18: CE only HBaR-high (λx=0.006, λy=0.05) HBaR-low (λx=0.001, λy=0.05) 0 50 100 150 200 250 300 Epoch 0 10 20 30 40 50 HS IC (X , Z _M ) 0 50 100 150 200 250 300 Epoch 1 2 3 4 5 6 7 8 9 HS IC (Y , Z _M ) 0 50 100 150 200 250 300 Epoch 80 82 84 86 88 90 92 94 96 Na tu ra l A cc ur ac y (% ) 0 50 100 150 200 250 300 Epoch 0 10 20 30 40 50 Ad ve sa ria l R ob us tn es s ( % ) (a) HSIC(X,ZM ) (b) HSIC(Y, ZM ) (c) Natural Accuracy (d) Adv. Robustness Figure 3: Visualization of the HBaR quantities (a) HSIC(X,ZM ), (b) HSIC(X,ZM ), (c) natural test accuracy, and (d) adversarial robustness against PGD attack (PGD40 and PGD20 on MNIST and CIFAR-10, respectively) as a function of training epochs, on MNIST by LeNet (top) and CIFAR-10 by ResNet (bottom). Different colored lines correspond to CE, HBaR-high (HBaR with high weights λ), and HBaR-low (HBaR method small weighs λ). HBaR-low parameters are selected so that the values of the loss L and each of the HSIC terms are close after the first epoch. Adversarial Robustness Analysis without Adversarial Training. Next, we show that HBaR can achieve modest robustness even without adversarial examples during training. 
We evaluate the robustness of HBaR on CIFAR-10 by ResNet-18 against various adversarial attacks, and compare HBaR with other information bottleneck penalties without adversarial training in Figure 2. Specifically, we compare the robustness of HBaR with other IB-based methods under various attacks and hyperparameters. Our proposed HBaR achieves the best overall robustness against all three types of attacks while attaining competitive natural test accuracy. Interestingly, HBaR achieves natural accuracy (95.27%) comparable to CE (95.32%) which is much higher than VIB (92.35%), XIC (92.93%) and SWHB (59.18%). We observe SWHB underperforms HBaR on CIFAR-10 for both natural accuracy and robustness. One possible explanation may be that when the model is deep, minimizing HSIC without backpropagation, as in SWHB, does not suffice to transmit the learned information across layers. Compared to SWHB, HBaR backpropagates over the HSIC objective through each intermediate layer and computes gradients only once in each batch, improving accuracy and robustness while reducing computational cost significantly. Synergy between HSIC Terms. Focusing on ZM , the last latent layer, Figure 3 shows the evolution per epoch of: (a) HSIC(X,ZM ), (b) HSIC(Y,ZM ), (c) natural accuracy (in %), and (d) adversarial robustness (in %) under PGD attack on MNIST and CIFAR-10. Different lines correspond to CE, HBaR-high (HBaR with high weights λ), and HBaR-low (HBaR method small weighs λ). HBaR-low parameters are selected so that the values of the loss L and each of the HSIC terms are close after the first epoch. Figure 3(c) illustrates that all three settings achieve good natural accuracy on both datasets. However, in Figure 3(d), only HBaR-high, that puts sufficient weight on HSIC terms, attains relatively high adversarial robustness. In Figure 3(a), we see that CE leads to high HSIC(X,ZM ) for the shallow LeNet, but low in the (much deeper) ResNet-18, even lower than HBaR-low. Moreover, we also see that the best performer in terms of adversarial robustness, HBaR-high, lies in between the other two w.r.t. HSIC(X,ZM ). Both of these observations indi- cate the importance of the HSIC(Y,ZM ) penalty: minimizing HSIC(X,ZM ) appropriately leads to good adversarial robustness, but coupling learning to labels via the third term is integral to maintaining useful label-related information in latent layers, thus resulting in good adversarial robustness. Figure 3(b) confirms this, as HBaR-high achieves relatively high HSIC(Y,ZM ) on both datasets. Figure 4 provides another perspective of the same experiments via the learning dynamics on the HSIC plane. We again observe that the best performer in terms of robustness HBaR-high lies in between the other two methods, crucially attaining a much higher HSIC(Y,ZM ) than HBaR-low. Moreover, for both HBaR methods, we clearly observe the two distinct optimization phases first observed by Shwartz-Ziv and Tishby [23] in the context of the mutual information bottleneck: the fast empirical risk minimization phase, where the neural network tries to learn a meaningful representation by increasing HSIC(Y,ZM ) regardless of information redundancy (HSIC(X,ZM ) increasing), and the representation compression phase, where the neural network turns its focus onto compressing the latent representation by minimizing HSIC(X,ZM ), while maintaining highly label-related information. 
Ablation Study. Motivated by the above observations, we turn our attention to how the three terms in the loss function in Eq. (6) affect HBaR. As illustrated in Table 3, removing any part leads to a significant degradation in either natural accuracy or robustness. Specifically, using L(θ) only (row [i]) lacks adversarial robustness; removing L(θ) (row [ii]) or the penalty on Y (row [iii]) degrades natural accuracy significantly (a similar result was also observed in [2]); finally, removing the penalty on X improves the natural accuracy while degrading adversarial robustness. The three terms combined with proper hyperparameters λx and λy (row [v]) achieve both high natural accuracy and adversarial robustness. We provide a comprehensive ablation study on the sensitivity of λx and λy and draw conclusions in Appendix F in the supplement (Tables 7 and 8). 6 Conclusions We investigate the HSIC bottleneck as regularizer (HBaR) as a means to enhance adversarial robustness. We theoretically prove that HBaR suppresses the sensitivity of the classifier to adversarial examples while retaining its discriminative nature. One limitation of our method is that the robustness gain is modest when training with only natural examples. Moreover, a possible negative societal impact is overconfidence in adversarial robustness: over-confidence in the adversarially-robust models produced by HBaR, as well as other defense methods, may lead to overlooking their potential failure on newly-invented attack methods; this should be taken into account in safety-critical applications like healthcare [6] or security [26]. We extend the discussion on the limitations and potential negative societal impacts of our work in Appendix H and I, respectively, in the supplement. 7 Acknowledgements The authors gratefully acknowledge support by the National Science Foundation under grants CCF-1937500 and CNS-2112471, and the National Institutes of Health under grant NHLBI U01HL089856.
1. What is the focus and contribution of the paper regarding adversarial robustness? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and theoretical analysis? 3. Do you have any concerns or questions regarding the experimental results or limitations of the paper? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper To enhance the adversarial robustness of deep models, this paper proposes to use the HSIC bottleneck as a regularizer. Unlike previous work trained with adversarial samples, the proposed learning objective drops redundant information from the input to the latent representations. A theoretical analysis showing that the HSIC regularizer can reduce the output variance is provided. Review This paper is valuable in the following aspects: 1. The idea of improving robustness without adversarial samples is interesting. Although the HSIC bottleneck is used in previous work, it is novel to use HSIC as a regularization term for adversarial robustness. 2. The theoretical analysis ensures the soundness of the claim that HSIC can reduce the influence of an adversarial attack. 3. The effectiveness is well studied with ablations, and the experiment details are clear. 4. The limitations and potential negative impacts are clearly discussed. I only find some minor issues: 1. It would be more readable if the abstract provided more information on the proposed method and theoretical conclusions. 2. It is said that SWHB fails on ResNet-18 due to the way SWHB updates parameters. However, Ma et al. have conducted experiments on ResNet and show comparable performance with backprop. Could you please provide more explanation? 3. Error bars in figure 5 are unclear. It might be better to show the standard deviations in a table.
NIPS
Title Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness Abstract We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful information for output prediction while reducing redundant information. We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training. Our code and adversarially robust models are publicly available at https://github.com/neu-spiral/HBaR. 1 Introduction Adversarial attacks [8, 17, 18, 3, 5] on deep neural networks (DNNs) have received considerable attention recently. Such attacks are intentionally crafted to change prediction outcomes, e.g., by adding visually imperceptible perturbations to the original, natural examples [25]. Adversarial robustness, i.e., the ability of a trained model to maintain its predictive power under such attacks, is an important property for many safety-critical applications [4, 6, 26]. The most common approach to construct adversarially robust models is via adversarial training [34, 36, 30], i.e., training the model over adversarially constructed samples. Alemi et al. [1] propose using the so-called Information Bottleneck (IB) [27, 28] to enhance adversarial robustness. Proposed by Tishby and Zaslavsky [28], the information bottleneck expresses a tradeoff between (a) the mutual information of the input and latent layers vs. (b) the mutual information between latent layers and the output. Alemi et al. show empirically that using IB as a learning objective for DNNs indeed leads to better adversarial robustness. Intuitively, the IB objective increases the entropy between input and latent layers; in turn, this also increases the model’s robustness, as it makes latent layers less sensitive to input perturbations. Nevertheless, mutual information is notoriously expensive to compute. The Hilbert-Schmidt independence criterion (HSIC) has been used as a tractable, efficient substitute in a variety of machine learning tasks [31, 32, 33]. Recently, Ma et al. [16] also exploited this relationship to propose an HSIC bottleneck (HB), as a variant of the more classic (mutual-information based) information bottleneck, though not in the context of adversarial robustness. We revisit the HSIC bottleneck, studying its adversarial robustness properties. In contrast to both Alemi et al. [1] and Ma et al. [16], we use the HSIC bottleneck as a regularizer in addition to commonly used losses for DNNs (e.g., cross-entropy). Our proposed approach, HSIC-Bottleneck-as-Regularizer (HBaR), can be used in conjunction with adversarial examples; even without adversarial training, it is able to improve a classifier’s robustness. It also significantly outperforms previous IB-based methods for robustness, as well as the method proposed by Ma et al.
Overall, we make the following contributions: 1. We apply the HSIC bottleneck as a regularizer for the purpose of adversarial robustness. 2. We provide a theoretical motivation for the constituent terms of the HBaR penalty, proving that it indeed constrains the output perturbation produced by adversarial attacks. 3. We show that HBaR can be naturally combined with a broad array of state of the art adversarial training methods, consistently improving their robustness. 4. We empirically show that this phenomenon persists even for weaker methods. In particular, HBaR can even enhance the adversarial robustness of plain SGD, without access to adversarial examples. The remainder of this paper is structured as follows. We review related work in Sec. 2. In Sec. 3, we discuss the standard setting of adversarial robustness and HSIC. In Sec. 4, we provide a theoretical justification that HBaR reduces the sensitivity of the classifier to adversarial examples. Sec. 5 includes our experiments; we conclude in Sec. 6. 2 Related Work Adversarial Attacks. Adversarial attacks often add a constrained perturbation to natural inputs with the goal of maximizing classification loss. Szegedy et al. [25] learn a perturbation via boxconstrained L-BFGS that misleads the classifier but minimally distort the input. FGSM, proposed by Goodfellow et al. [8], is a one step adversarial attack perturbing the input based on the sign of the gradient of the loss. PGD [13, 17] generates adversarial examples through multi-step projected gradient descent optimization. DeepFool [18] is an iterative attack strategy, which perturbs the input towards the direction of the decision boundaries. CW [3] applies a rectifier function regularizer to generate adversarial examples near the original input. AutoAttack (AA) [5] is an ensemble of parameter-free attacks, that also deals with common issues like gradient masking [19] and fixed step sizes [17]. Adversarial Robustness. A common approach to obtaining robust models is adversarial training, i.e., training models over adversarial examples generated via the aforementioned attacks. For example, Madry et al. [17] show that training with adversarial examples generated by PGD achieves good robustness under different attacks. DeepDefense [34] penalizes the norm of adversarial perturbations. TRADES [36] minimizes the difference between the predictions of natural and adversarial examples to get a smooth decision boundary. MART [30] pays more attention to adversarial examples from misclassified natural examples and adds a KL-divergence term between natural and adversarial samples to the cross-entropy loss. We show that our proposed method HBaR can be combined with several such state-of-the-art defense methods and boost their performance. Information Bottleneck. The information bottleneck (IB) [27, 28] expresses a tradeoff in latent representations between information useful for output prediction and information retained about the input. IB has been employed to explore the training dynamics in deep learning models [23, 22] as well as a learning objective [1, 2]. Fischer [7] proposes a conditional entropy bottleneck (CEB) based on IB and observes its robust generalization ability empirically. Closer to us, Alemi et al. [1] propose a variational information bottleneck (VIB) for supervised learning. They empirically show that training VIB on natural examples provides good generalization and adversarial robustness. 
We show that HBaR can be combined with various adversarial defense methods enhancing their robustness, but also outperforms VIB [1] when given access only to natural samples. Moreover, we provide theoretical guarantees on how HBaR bounds the output perturbation induced by adversarial attacks. Mutual Information vs. HSIC. Mutual information is difficult to compute in practice. To address this, Alemi et al. [1] estimate IB via variational inference. Ma et al. [16] replaced mutual information by the Hilbert Schmidt Independence Criterion (HSIC) and named this the HSIC Bottleneck (HB). Like Ma et al. [16], we utilize HSIC to estimate IB. However, our method is different from Ma et al. [16] in several aspects. First, they use HB to train the neural network stage-wise, layer-bylayer, without backpropagation, while we use HSIC bottleneck as a regularization in addition to cross-entropy and optimize the parameters jointly by backpropagation. Second, they only evaluate the model performance on classification accuracy, while we demonstrate adversarial robustness. Finally, we show that HBaR further enhances robustness to adversarial examples both theoretically and experimentally. Greenfeld et al. [9] use HSIC between the residual of the prediction and the input data as a learning objective for model robustness on covariate distribution shifts. Their focus is on robustness to distribution shifts, whereas our work focuses on robustness to adversarial examples, on which HBaR outperforms their proposed objective. 3 Background 3.1 Adversarial Robustness In standard k-ary classification, we are given a dataset D = {(xi, yi)}ni=1, where xi ∈ RdX , yi ∈ {0, 1}k are i.i.d. samples drawn from joint distribution PXY . A learner trains a neural network hθ : RdX → Rk parameterized by weights θ ∈ Rdθ to predict Y from X by minimizing L(θ) = EXY [`(hθ(X), Y )] ≈ 1 n n∑ i=1 `(hθ(xi), yi), (1) where ` : Rk × Rk → R is a loss function, e.g., cross-entropy. We aim to find a model hθ that has high prediction accuracy but is also adversarially robust: the model should maintain high prediction accuracy against a constrained adversary, that can perturb input samples in a restricted fashion. Formally, prior to submitting a sample x ∈ RdX to the classifier, an adversary may perturb x by an arbitrary δ ∈ Sr, where Sr ⊆ RdX is the `∞-ball of radius r, i.e., Sr = B(0, r) = {δ ∈ RdX : ‖δ‖∞ ≤ r}. (2) The adversarial robustness [17] of a model hθ is measured by the expected loss attained by such adversarial examples, i.e., Lr(θ) = EXY [ max δ∈Sr ` (hθ(X + δ), Y ) ] ≈ 1 n n∑ i=1 max δ∈Sr `(hθ(xi + δ), yi). (3) An adversarially robust neural network hθ can be obtained via adversarial training, i.e., by minimizing the adversarial robustness loss in (3) empirically over the training set D. In practice, this amounts to training via stochastic gradient descent (SGD) over adversarial examples xi + δ (see, e.g., [17]). In each epoch, δ is generated on a per sample basis via an inner optimization over Sr, e.g., via projected gradient descent (PGD) on −L. 3.2 Hilbert-Schmidt Independence Criterion (HSIC) The Hilbert-Schmidt Independence Criterion (HSIC) is a statistical dependency measure introduced by Gretton et al. [10]. HSIC is the Hilbert-Schmidt norm of the cross-covariance operator between the distributions in Reproducing Kernel Hilbert Space (RKHS). Similar to Mutual Information (MI), HSIC captures non-linear dependencies between random variables. 
HSIC(X,Y ) is defined as: HSIC(X,Y ) = EXYX′Y ′ [kX (X,X ′) kY ′ (Y, Y ′)] + EXX′ [kX (X,X ′)]EY Y ′ [kY (Y, Y ′)] − 2EXY [EX′ [kX (X,X ′)]EY ′ [kY (Y, Y ′)]] , (4) where X ′, Y ′ are independent copies of X , Y , respectively, and kX , kY are kernels. In practice, we often approximate HSIC empirically. Given n i.i.d. samples {(xi, yi)}ni=1 drawn from PXY , we estimate HSIC via: ĤSIC(X,Y ) = (n− 1)−2 tr (KXHKYH) , (5) where KX and KY are kernel matrices with entries KXij = kX(xi, xj) and KYij = kY (yi, yj), respectively, and H = I− 1n11 > is a centering matrix. 4 Methodology In this section, we present our method, HSIC bottleneck as regularizer (HBaR) as a means to enhance a classifier’s robustness. The effect of HBaR for adversarial robustness is illustrated in Figure 1; the HSIC bottleneck penalty reduces the sensitivity of the classifier to adversarial examples. We provide a theoretical justification for this below, in Theorems 1 and 2, but also validate the efficacy of the HSIC bottleneck extensively with experiments in Section 5. 4.1 HSIC Bottleneck as Regularizer for Robustness Given a feedforward neural network hθ : RdX → Rk parameterized by θ with M layers, and an input r.v. X , we denote by Zj ∈ RdZj , j ∈ {1, . . . ,M}, the output of the j-th layer under input X (i.e., the j-th latent representation). We define our HBaR learning objective as follows: L̃(θ) = L(θ) + λx M∑ j=1 HSIC(X,Zj)− λy M∑ j=1 HSIC(Y,Zj), (6) where L is the standard loss given by Eq. (1) and λx, λy ∈ R+ are balancing hyperparameters. Together, the second and third terms in Eq. (6) form the HSIC bottleneck penalty. As HSIC measures dependence between two random variables, minimizing HSIC(X,Zi) corresponds to removing redundant or noisy information contained in X . Hence, this term also naturally reduces the influence of an adversarial attack, i.e., a perturbation added on the input data. This is intuitive, but we also provide theoretical justification in the next subsection. Meanwhile, maximizing HSIC(Y,Zi) encourages this lack of sensitivity to the input to happen while retaining the discriminative nature of the classifier, captured by dependence to useful information w.r.t. the output label Y . Note that minimizing HSIC(X,Zi) alone would also lead to the loss of useful information, so it is necessary to keep the HSIC(Y,Zi) term to make sure Zi is informative enough of Y . The overall algorithm is described in Alg. 1. In practice, we perform Stochastic Gradient Descent (SGD) over L̃: both L and HSIC can be evaluated empirically over batches. For the latter, we use the estimator (5), restricted over the current batch. As we have m samples in a mini-batch, the complexity of calculating the empirical HSIC (5) is O(m2dZ̄) [24] for a single layer, where dZ̄ = maxj dZj . Thus, the overall complexity for (6) is O(Mm 2dZ̄). This computation is highly parallelizable, thus, the additional computation time of HBaR is small when compared to training a neural network via cross-entropy only. Algorithm 1: Robust Learning with HBaR Input: input sample tuples {(xi, yi)}ni=1, kernel function kx, ky, kz , a neural network hθ parameterized by θ, mini-batch size m, learning rate α. Output: parameter of classifier θ while θ has not converged do Sample a mini-batch of size m from input samples. Forward Propagation: calculate zi and hθ(x). Compute kernel matrices for X , Y and Zi using kx, ky, kz respectively inside mini-batch. Compute L̃(θ) via (6), where HSIC is evaluated empirically via (5). Backward Propagation: θ ← θ − α∇L̃(θ). 
end 4.2 Combining HBaR with Adversarial Examples HBaR can also be naturally applied in combination with adversarial training. For r > 0 the magnitude of the perturbations introduced in adversarial examples, one can optimize the following objective instead of L̃(θ) in Eq. (6): L̃r(θ) = Lr(θ) + λx M∑ j=1 HSIC(X,Zj)− λy M∑ j=1 HSIC(Y, Zj), (7) whereLr is the adversarial loss given by Eq. (3). This can be used instead ofL in Alg. 1. Adversarial examples need to be used in the computation of the gradient of the loss Lr in each minibatch; these need to be computed on a per sample basis, e.g., via PGD over Sr, at additional computational cost. Note that the natural samples (xi, yi) in a batch are used to compute the HSIC bottleneck regularizer. The HBaR penalty can similarly be combined with other adversarial learning methods and/or used with different means for selecting adversarial examples, other than PGD. We illustrate this in Section 5, where we combine HBaR with state-of-the-art adversarial learning methods TRADES [36] and MART [30]. 4.3 HBaR Robustness Guarantees We provide here a formal justification for the use of HBaR to enhance robustness: we prove that regularization terms HSIC(X,Zj), j = 1, . . . ,M lead to classifiers which are less sensitive to input perturbations. For simplicity, we focus on the case where k = 1 (i.e., binary classification). Let Z ∈ RdZ be the latent representation at some arbitrary intermediate layer of the network. That is, Z = Zj , for some j ∈ {1, . . . ,M}; we omit the subscript j to further reduce notation clutter. Then hθ = (g ◦ f), where f : RdX → RdZ maps the inputs to this intermediate layer, and g : RdZ → R maps the intermediate layer to the final layer. Then, Z = f(X) and g(Z) = hθ(X) ∈ R are the latent and final outputs, respectively. Recall that, in HBaR, HSIC(X,Z) is associated with kernels kX , kZ . We make the following technical assumptions: Assumption 1. Let X ⊆ RdX , Z ⊆ RdZ be the supports of random variables X , Z, respectively. We assume that both hθ and g are continuous and bounded functions in X , Z , respectively, i.e.: hθ ∈ C(X ), g ∈ C(Z). (8) Moreover, we assume that all functions hθ and g we consider are uniformly bounded, i.e., there exist 0 < MX ,MZ <∞ such that: MX = max hθ∈C(X ) ‖hθ‖∞ and MZ = max g∈C(Z) ‖g‖∞. (9) The continuity stated in Assumption 1 is natural, if all activation functions are continuous. Boundedness follows if, e.g., X , Z are closed and bounded (i.e., compact), or if activation functions are bounded (e.g., softmax, sigmoid, etc.). Assumption 2. We assume kernels kX , kZ are universal with respect to functions hθ and g that satisfy Assumption 1, i.e., if F and G are the induced RKHSs for kernels kX and kZ , respectively, then for any hθ, g that satisfy Assumption 1 and any ε > 0 there exist functions h′ ∈ F and g′ ∈ G such that ||hθ − h′||∞ ≤ ε and ||g − g′||∞ ≤ ε. Moreover, functions in F and G are uniformly bounded, i.e., there exist 0 < MF ,MG <∞ such that for all h′ ∈ F and all g′ ∈ G: MF = max f ′∈F ‖f ′‖∞ and MG = max g′∈G ‖g′‖∞. (10) We note that several kernels used in practice are universal, including, e.g., the Gaussian and Laplace kernels. Moreover, given that functions that satisfy Assumption 1 are uniformly bounded by (9), such kernels can indeed remain universal while satisfying (10) via an appropriate rescaling. Our first result shows that HSIC(X,Z) at any intermediate layer Z bounds the output variance: Theorem 1. Under Assumptions 1 and 2, we have: HSIC(X,Z) ≥ MFMG MXMZ sup θ Var(hθ(X)). 
(11) The proof of Theorem 1 is in Appendix B in the supplement. We use a result by Greenfeld and Shalit [9] that links HSIC(X,Z) to the supremum of the covariance of bounded continuous functionals over X and Z . Theorem 1 indicates that the regularizer HSIC(X,Z) at any intermediate layer naturally suppresses the variability of the output, i.e., the classifier prediction hθ(X). To see this, observe that by Chebyshev’s inequality [20] the distribution of hθ(X) concentrates around its mean when Var(hθ(X)) approaches 0. As a result, bounding HSIC(X,Z) inherently also bounds the (global) variability of the classifier (across all parameters θ). This observation motivates us to also maximize HSIC(Y, Z) to recover essential information useful for classification: if we want to achieve good adversarial robustness as well as good predictive accuracy, we have to strike a balance between HSIC(X,Z) and HSIC(Y,Z). This perfectly aligns with the intuition behind the information bottleneck [28] and the well-known accuracy-robustness trade-off [17, 36, 29, 21]. We also confirm this experimentally: we observe that both additional terms (the standard loss and HSIC(Y,Z)) are necessary for ensuring good prediction performance in practice (see Table 3). Most importantly, by further assuming that features are normal, we can show that HSIC bounds the power of an arbitrary adversary, as defined in Eq. (3): Theorem 2. Assume that X ∼ N(0, σ²I). Then, under Assumptions 1 and 2, we have: (r · √(−2 log o(1)) · dX · MZ / (σ · MF · MG)) · HSIC(X,Z) + o(r) ≥ E[|hθ(X + δ) − hθ(X)|], for all δ ∈ Sr. (12) (Here, for functions f, g : R → R, we write f = o(g) if limr→0 f(r)/g(r) = 0.) The proof of Theorem 2 can also be found in Appendix C in the supplement. We again use the result by Greenfeld and Shalit [9] along with Stein’s Lemma [15], that relates covariances of Gaussian r.v.s and their functions to expected gradients. In particular, we apply Stein’s Lemma to the bounded functionals considered by Greenfeld and Shalit by using a truncation argument. Theorem 2 implies that HSIC(X,Z) indeed bounds the output perturbation produced by an arbitrary adversary: suppressing HSIC sufficiently can ensure that the adversary cannot alter the output significantly, in expectation. In particular, if HSIC(X,Z) = o(σMFMG / (√(−2 log o(1)) · dX · MZ)), then limr→0 supδ∈Sr E[|hθ(X + δ) − hθ(X)|]/r = 0, i.e., the output is almost constant under small input perturbations. 5 Experiments 5.1 Experimental Setting We experiment with three standard datasets, MNIST [14], CIFAR-10 [12] and CIFAR-100 [12]. We use a 4-layer LeNet [17] for MNIST, ResNet-18 [11] and WideResNet-28-10 [35] for CIFAR-10, and WideResNet-28-10 [35] for CIFAR-100. We use cross-entropy as the loss L(θ). Licensing information for all existing assets can be found in Appendix D in the supplement. Algorithms. We compare HBaR to the following non-adversarial learning algorithms: Cross-Entropy (CE), Stage-Wise HSIC Bottleneck (SWHB) [16], XIC [9], and Variational Information Bottleneck (VIB) [1]. We also incorporate HBaR into several adversarial learning algorithms, as described in Section 4.2, and compare against the original methods, without the HBaR penalty. The adversarial methods we use are: Projected Gradient Descent (PGD) [17], TRADES [36], and MART [30]. Further details and parameters can be found in Appendix E in the supplement.
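To make the combination of HBaR with adversarial training concrete, the following sketch shows one minibatch update of the objective in Eq. (7) with PGD-generated adversarial examples. The model interface (a forward pass returning the list of latent representations followed by the logits), the attack hyperparameters (borrowed from the MNIST setting below), and the compact hsic_batch estimator are illustrative assumptions rather than a verbatim excerpt of the released implementation.

```python
import torch
import torch.nn.functional as F

def hsic_batch(a, b, sigma=5.0):
    # Biased empirical HSIC estimator of Eq. (5) with Gaussian kernels.
    a, b = a.flatten(1), b.flatten(1)
    n = a.shape[0]
    ka = torch.exp(-torch.cdist(a, a) ** 2 / (2 * sigma ** 2))
    kb = torch.exp(-torch.cdist(b, b) ** 2 / (2 * sigma ** 2))
    h = torch.eye(n, device=a.device) - 1.0 / n
    return torch.trace(ka @ h @ kb @ h) / (n - 1) ** 2

def pgd_examples(model, x, y, radius=0.3, step=0.01, steps=40):
    # Inner maximization of Eq. (3): l_inf-bounded PGD on the cross-entropy loss.
    # (Simplified: no random start and no clipping to the valid pixel range.)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta)[-1], y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-radius, radius)
        delta.grad.zero_()
    return (x + delta).detach()

def hbar_adversarial_step(model, optimizer, x, y, lambda_x, lambda_y, num_classes):
    # One SGD step on Eq. (7): adversarial cross-entropy plus the HSIC bottleneck
    # penalty, which is computed on the natural samples as described in Section 4.2.
    x_adv = pgd_examples(model, x, y)
    loss = F.cross_entropy(model(x_adv)[-1], y)
    latents = model(x)[:-1]                      # intermediate representations Z_1, ..., Z_M
    y_onehot = F.one_hot(y, num_classes).float()
    for z in latents:
        loss = loss + lambda_x * hsic_batch(x, z) - lambda_y * hsic_batch(y_onehot, z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Replacing the first term with the TRADES or MART objectives yields, under the same assumptions, the HBaR-augmented variants evaluated in the experiments.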
Performance Metrics. For all methods, we evaluate the obtained model hθ via the following metrics: (a) Natural (i.e., clean test data) accuracy, and adversarial robustness via test accuracy under (b) FGSM, the fast gradient sign attack [8], (c) PGDm, the PGD attack with m steps used for the internal PGD optimization [17], (d) CW, the CW-loss within the PGD framework [3], and (e) AA, AutoAttack [5]. All five metrics are reported in percent (%) accuracy. Following prior literature, we set the step size to 0.01 and radius r = 0.3 for MNIST, and the step size to 2/255 and r = 8/255 for CIFAR-10 and CIFAR-100. All attacks happen during the test phase and have full access to model parameters (i.e., are white-box attacks). All experiments are carried out on a Tesla V100 GPU with 32 GB memory and 5120 cores. 5.2 Results Combining HBaR with Adversarial Examples. We show how HBaR can be used to improve robustness when used as a regularizer, as described in Section 4.2, along with state-of-the-art adversarial learning methods. We run each experiment five times and report the mean natural test accuracy and adversarial robustness of all models on the MNIST, CIFAR-10, and CIFAR-100 datasets across four architectures in Table 1 and Table 2. Combined with all adversarial training baselines, HBaR consistently improves adversarial robustness against all types of attacks on all datasets. The resulting improvements are larger than 2 standard deviations (which range between 0.05-0.2) in most cases; we report the results with standard deviations in Appendix G in the supplement. Although natural accuracy is generally restricted by the trade-off between robustness and accuracy [36], we observe that incorporating HBaR comes with an actual improvement over natural accuracy in most cases. [Figure 3 panels (a)-(d); legends: MNIST by LeNet — CE only, HBaR-high (λx=1, λy=50), HBaR-low (λx=0.001, λy=0.005); CIFAR-10 by ResNet-18 — CE only, HBaR-high (λx=0.006, λy=0.05), HBaR-low (λx=0.001, λy=0.05).] Figure 3: Visualization of the HBaR quantities (a) HSIC(X,ZM ), (b) HSIC(Y,ZM ), (c) natural test accuracy, and (d) adversarial robustness against PGD attack (PGD40 and PGD20 on MNIST and CIFAR-10, respectively) as a function of training epochs, on MNIST by LeNet (top) and CIFAR-10 by ResNet (bottom). Different colored lines correspond to CE, HBaR-high (HBaR with high weights λ), and HBaR-low (HBaR with small weights λ). HBaR-low parameters are selected so that the values of the loss L and each of the HSIC terms are close after the first epoch. Adversarial Robustness Analysis without Adversarial Training. Next, we show that HBaR can achieve modest robustness even without adversarial examples during training.
We evaluate the robustness of HBaR on CIFAR-10 by ResNet-18 against various adversarial attacks, and compare HBaR with other information bottleneck penalties without adversarial training in Figure 2. Specifically, we compare the robustness of HBaR with other IB-based methods under various attacks and hyperparameters. Our proposed HBaR achieves the best overall robustness against all three types of attacks while attaining competitive natural test accuracy. Interestingly, HBaR achieves natural accuracy (95.27%) comparable to CE (95.32%), which is much higher than VIB (92.35%), XIC (92.93%) and SWHB (59.18%). We observe that SWHB underperforms HBaR on CIFAR-10 for both natural accuracy and robustness. One possible explanation may be that when the model is deep, minimizing HSIC without backpropagation, as in SWHB, does not suffice to transmit the learned information across layers. Compared to SWHB, HBaR backpropagates over the HSIC objective through each intermediate layer and computes gradients only once in each batch, improving accuracy and robustness while reducing computational cost significantly. Synergy between HSIC Terms. Focusing on ZM , the last latent layer, Figure 3 shows the evolution per epoch of: (a) HSIC(X,ZM ), (b) HSIC(Y,ZM ), (c) natural accuracy (in %), and (d) adversarial robustness (in %) under PGD attack on MNIST and CIFAR-10. Different lines correspond to CE, HBaR-high (HBaR with high weights λ), and HBaR-low (HBaR with small weights λ). HBaR-low parameters are selected so that the values of the loss L and each of the HSIC terms are close after the first epoch. Figure 3(c) illustrates that all three settings achieve good natural accuracy on both datasets. However, in Figure 3(d), only HBaR-high, which puts sufficient weight on the HSIC terms, attains relatively high adversarial robustness. In Figure 3(a), we see that CE leads to high HSIC(X,ZM ) for the shallow LeNet, but low HSIC(X,ZM ) in the (much deeper) ResNet-18, even lower than HBaR-low. Moreover, we also see that the best performer in terms of adversarial robustness, HBaR-high, lies in between the other two w.r.t. HSIC(X,ZM ). Both of these observations indicate the importance of the HSIC(Y,ZM ) penalty: minimizing HSIC(X,ZM ) appropriately leads to good adversarial robustness, but coupling learning to labels via the third term is integral to maintaining useful label-related information in latent layers, thus resulting in good adversarial robustness. Figure 3(b) confirms this, as HBaR-high achieves relatively high HSIC(Y,ZM ) on both datasets. Figure 4 provides another perspective of the same experiments via the learning dynamics on the HSIC plane. We again observe that the best performer in terms of robustness, HBaR-high, lies in between the other two methods, crucially attaining a much higher HSIC(Y,ZM ) than HBaR-low. Moreover, for both HBaR methods, we clearly observe the two distinct optimization phases first observed by Shwartz-Ziv and Tishby [23] in the context of the mutual information bottleneck: the fast empirical risk minimization phase, where the neural network tries to learn a meaningful representation by increasing HSIC(Y,ZM ) regardless of information redundancy (HSIC(X,ZM ) increasing), and the representation compression phase, where the neural network turns its focus onto compressing the latent representation by minimizing HSIC(X,ZM ), while maintaining highly label-related information.
Interestingly, the HBaR penalty produces the two-phase behavior even though our networks use ReLU activation functions; Shwartz-Ziv and Tishby [23] only observed these two optimization phases on neural networks with tanh activation functions, a phenomenon further confirmed by Saxe et al. [22]. Ablation Study. Motivated by the above observations, we turn our attention to how the three terms in the loss function in Eq. (6) affect HBaR. As illustrated in Table 3, removing any part leads to a significant degradation in either natural accuracy or robustness. Specifically, using L(θ) only (row [i]) lacks adversarial robustness; removing L(θ) (row [ii]) or the penalty on Y (row [iii]) degrades natural accuracy significantly (a similar result was also observed in [2]); finally, removing the penalty on X improves the natural accuracy while degrading adversarial robustness. The three terms combined with proper hyperparameters λx and λy (row [v]) achieve both high natural accuracy and adversarial robustness. We provide a comprehensive ablation study on the sensitivity of λx and λy and draw conclusions in Appendix F in the supplement (Tables 7 and 8). 6 Conclusions We investigate the HSIC bottleneck as regularizer (HBaR) as a means to enhance adversarial robustness. We theoretically prove that HBaR suppresses the sensitivity of the classifier to adversarial examples while retaining its discriminative nature. One limitation of our method is that the robustness gain is modest when training with only natural examples. Moreover, a possible negative societal impact is overconfidence in adversarial robustness: over-confidence in the adversarially-robust models produced by HBaR, as well as other defense methods, may lead to overlooking their potential failure on newly-invented attack methods; this should be taken into account in safety-critical applications like healthcare [6] or security [26]. We extend the discussion on the limitations and potential negative societal impacts of our work in Appendix H and I, respectively, in the supplement. 7 Acknowledgements The authors gratefully acknowledge support by the National Science Foundation under grants CCF-1937500 and CNS-2112471, and the National Institutes of Health under grant NHLBI U01HL089856.
1. What is the main contribution of the paper regarding adversarial robustness? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. Do you have any concerns or suggestions regarding the theoretical analysis and results? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The authors propose to use an information bottleneck regularization -- with Hilbert-Schmidt information rather than mutual information -- to improve the adversarial robustness of the trained neural networks. The authors furthermore show that minimizing HSIC between the input and the latent representation limits the variance the network output can achieve. Review The main idea of the paper is very nice, especially in the light of the fact that for deterministic networks the classical IB objective is infinite (cf. [2] in the paper). The HSIC functionals appear not to suffer from this limitation. However, I have noticed several small to medium issues that need to be addressed: -) First of all, it is not exactly clear why the HSIC(Y;Z) term is required. Essentially, the classical or adversarial loss in (5) should suffice to ensure that Z contains sufficient information about Y, while HSIC(Y;Z) goes in the same direction. Moreover, in the classical IB setting, the term I(Y;Z) is usually replaced (not complemented) by the classical cross entropy loss. I have the strong feeling that it should be possible to remove this term altogether by appropriately adjusting the hyperparameter lambda_x. This is also supported by Table 3: In case HSIC(Y;Z) is missing during training, both HSIC terms turn out to be zero, indicating that the regularization on HSIC(X;Z) was too strong. This is further supported by Table 5, in which lambda_y=0 and for small lambda_x still good natural accuracy is achieved. Thus, HSIC(Y;Z) is not necessary, but a careful setting of lambda_x is required. -) Theorem 1 is extremely strong and interesting; however, I agree with previous reviewers that it is not very useful for making the intended point. In fact, for a purely Gaussian setting, I expect that similar statements can be made about I(X;Z): If Z contains only little information about X (in the sense of a conditional variance, which appears to be the case for Gaussian settings and/or HSIC measured with distance-based kernels), then also the network output can only vary little with X. Theorem 2 goes in the same direction, essentially replacing variance with the first absolute moment under a specific perturbation model. Both theorems are interesting, but they do not explicitly point at adversarial robustness, other than just pointing at general invariance to input variations. -) This brings me to a recent publication that may actually help relieve the previous issue. The conditional entropy bottleneck by Fischer replaces the I(X;Z) part in the IB functional by I(X;Z|Y). This term should indeed become zero, and setting this term to zero does not harm classification performance. If the same Theorems 1 and 2 could be formulated for HSIC(X;Z|Y) (appropriately defined), then this reduction in variation with the input would actually help make strong claims regarding generalization and adversarial robustness. -) Some of the papers in the references are still referred to via their arXiv version, despite being already published. I suggest updating the reference list. Aside from that, I noticed some minor issues that can be addressed quite easily. I list them here for completeness; they do not influence the review score though: [1] did not study learning dynamics, but [1] used the IB functional as a training objective. [2] did not suggest the IB functional for training, but analyzed it. [1] has a focus on supervised learning, not on auto-encoders.
A reference to Fischer's conditional entropy bottleneck may be added. in line 92, while y in {0,1}^k is correct, this is not standard notation; y is usually one-hot encoded. after line 110, w.r.t. which distribution are the expectations taken? in lines 138-140, the computational complexity also depends on d_Z; this should be acknowledged. in line 171, why must the sets be closed? Is it not sufficient that they are bounded? in Table 1, the caption claims that training times are shown, but I think they are missing. line 244-245 contains a typo: "with and hyperparameters" line 273 lists Shwartz et al, but I guess it should be Shwartz-Ziv and Tishby. in Table 1 [ii] it is not surprising that missing the cross entropy loss yields low accuracy. Even if Z contains much information about Y, this information may not be easy to access for the remaining part of the network. This was discussed also in [2]. edit: I acknowledge the authors' responses and change my score. Thank you!
NIPS
Title Scalable methods for 8-bit training of neural networks Abstract Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range Batch-Normalization (BN), which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors’ knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradient stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset. 1 Introduction Deep Neural Networks (DNNs) have achieved remarkable results in many fields, making them the most common off-the-shelf approach for a wide variety of machine learning applications. However, as networks get deeper, using neural network (NN) algorithms and training them on conventional general-purpose digital hardware is highly inefficient. The main computational effort is due to massive amounts of multiply-accumulate operations (MACs) required to compute the weighted sums of the neurons’ inputs and the parameters’ gradients. Much work has been done to reduce the size of networks. The conventional approach is to compress a trained (full precision) network [4, 19, 12] using weight sharing, low rank approximation, quantization, pruning or some combination thereof. For example, Han et al., 2015 [7] successfully pruned several state-of-the-art large-scale networks and showed that the number of parameters can be reduced by an order of magnitude. Since training neural networks requires approximately three times more computation power than just evaluating them, quantizing the gradients is a critical step towards faster training machines. Previous work demonstrated that by quantizing network parameters and intermediate activations during the training phase more computationally efficient DNNs could be constructed. Researchers [6, 5] have shown that 16-bit is sufficient precision for most network training but further quantization (i.e., 8-bit) results in severe degradation. Our work is the first to almost exclusively train at 8-bit without harming classification accuracy. This is addressed by overcoming two main obstacles known to hamper numerical stability: batch normalization and gradient computations. The traditional batch normalization [11] implementation requires the computation of the sum of squares, square-root and reciprocal operations; these require high precision (to avoid zero variance) and a large dynamic range.
It should come as no surprise that previous attempts to use low precision networks did not use batch normalization layers [21] or kept them in full precision [24]. This work replaces the batch norm operation with range batch-norm (range BN) that normalizes inputs by the range of the input distribution (i.e., max(x) −min(x)). This measure is more suitable for low-precision implementations. Range BN is shown analytically to approximate the original batch normalization by multiplying this range with a scale adjustment that depends on the size of the batch and equals to (2 · ln(n))−0.5. Experiments on ImageNet with Res18 and Res50 showed no distinguishable difference between accuracy of Range BN and traditional BN. The second obstacle is related to the gradients quantization. Given an upstream gradient gl from layer l, layer l − 1 needs to apply two different matrix multiplications: one for the layer gradient gl−1 and the other for the weight gradient gW which are needed for the update rule. Our analysis indicates that the statistics of the gradient gl violates the assumptions at the crux of common quantization schemes. As such, quantizing these gradients constitutes the main cause of degradation in performance through training. Accordingly, we suggest to use two versions of layer gradients gl, one with low-precision (8-bit) and another with higher-precision (16-bit). The idea is to keep all calculations with gl that does not involve a performance bottleneck at 16 bits, while the rest at 8 bits. As the gradients gW are required only for the weight update, they are computed using the 16 bits copy of gl. On the other hand, the gradient gl−1 is required for the entire backwards stream and as such it is computed using the corresponding 8-bit version of gl. In most layers of the DNN these computations can be performed in parallel. Hence gW can be computed at high precision in parallel with gl−1, without interrupting the propagation of gl to lower layers. We denote the use of two different arithmetic precision operations in the differentiation process as "Gradients Bifurcation". 2 Previous Work While several works [6, 5] have shown that training at 16-bit is sufficient for most networks, more aggressive quantization schemes were also suggested [24, 16, 14, 10]. In the extreme case, the quantization process used only one bit which resulted in binarized neural networks (BNNs) [9] where both weights and activations were constrained to -1 and 1. However, for more complex models and challenging datasets, the extreme compression rate resulted in a loss of accuracy. Recently, Mishra et al. [15] showed that this accuracy loss can be prevented by merely increasing the number of filter maps in each layer, thus suggesting that quantized neural networks (QNNs) do not possess an inherent convergence problem. Nevertheless, increasing the number of filter maps enlarge quadratically the number of parameters, which raises questions about the efficiency of this approach. In addition to the quantization of the forward pass, a growing interest is directed towards the quantization of the gradient propagation in neural networks. A fully quantized method, allowing both forward and backward low-precision operations will enable the use of dedicated hardware, with considerable computational, memory, and power benefits. 
Previous attempts to discretize the gradients managed to either reduce them to 16-bit without loss of accuracy [5] or apply a more aggressive approach and reduce the precision to 6-8 bit [24, 10] with a noticeable degradation. Batch normalization is mentioned by [21] as a bottleneck for network quantization and is either replaced by a constant scaling layer kept in full precision, or avoided altogether; this clearly has some impact on performance (e.g., AlexNet trained over ImageNet resulted in a top-1 error of 51.6%, where the state of the art is near 42%) and better ways to quantize normalization are explicitly called for. Recently L1 batch norm with only linear operations in both forward and backward propagation was suggested by [22, 8] with improved numerical stability. Yet, our experiments show that with 8-bit training even L1 batch norm is prone to overflows when summing over many large positive values. Finally, Wen et al. [20] focused on quantizing the gradient updates to ternary values to reduce the communication bandwidth in distributed systems. We claim that although more aggressive quantization methods exist, 8-bit precision may prove to have a "sweet-spot" quality to it, by enabling training with no loss of accuracy and without modifying the original architecture. Moreover, we note that 8-bit quantization is better suited for future and even current hardware, many of which can already benefit from 8-bit operations [17]. So far, to the best of our knowledge, no work has succeeded in quantizing the activations, weights, and gradients of all layers (including batch normalization) to 8-bit without any degradation. 3 Range Batch-Normalization For a layer with n×d-dimensional input x = (x(1), x(2), ..., x(d)), traditional batch norm normalizes each dimension as x̂(d) = (x(d) − µd) / √(Var[x(d)]), (1) where µd is the expectation over x(d), n is the batch size and Var[x(d)] = (1/n) ||x(d) − µd||²₂. The term √(Var[x(d)]) involves sums of squares that can lead to numerical instability as well as to arithmetic overflows when dealing with large values. The Range BN method replaces the above term by normalizing according to the range of the input distribution (i.e., max(·) − min(·)), making it more tolerant to quantization. For a layer with d-dimensional input x = (x(1), x(2), ..., x(d)), Range BN normalizes each dimension as x̂(d) = (x(d) − µd) / (C(n) · range(x(d) − µd)), (2) where µd is the expectation over x(d), n is the batch size, C(n) = 1/√(2·ln(n)) is a scale adjustment term, and range(x) = max(x) − min(x). The main idea behind Range BN is to use the scale adjustment C(n) to approximate the standard deviation σ (traditionally used in vanilla batch norm) by multiplying it with the range of the input values. Assuming the input follows a Gaussian distribution, the range (spread) of the input is highly correlated with the standard deviation magnitude. Therefore, by normalizing the range by C(n) we can estimate σ. Note that the Gaussian assumption is a common approximation (e.g., Soudry et al. [18]), based on the fact that the neural input x(d) is a sum of many inputs, so we expect it to be approximately Gaussian from the central limit theorem. We now turn to derive the normalization term C(n). The expectation of the maximum of Gaussian random variables is bounded as follows [13]: 0.23σ · √ln(n) ≤ E[max(x(d) − µd)] ≤ √2 · σ · √ln(n). (3)
Since x(d) − µd is symmetrical with respect to zero (centered at zero and assumed Gaussian), it holds that E[max(·)] = −E[min(·)]; hence, 0.23σ · √ln(n) ≤ −E[min(x(d) − µd)] ≤ √2 · σ · √ln(n). (4) Therefore, by summing Equations 3 and 4 and multiplying the three parts of the inequality by the normalization term C(n), Range BN in Eq. 2 approximates the original standard deviation measure σ as follows: 0.325σ ≤ C(n) · range(x(d) − µd) ≤ 2 · σ. Importantly, the scale adjustment term C(n) plays a major role in the success of Range BN. The performance was degraded in simulations when C(n) was not used or was modified to nearby values. 4 Quantized Back-Propagation Quantization methods: Following [23] we used the GEMMLOWP quantization scheme as described in Google’s open source library [1]. A detailed explanation of this approach is given in the Appendix. While GEMMLOWP is widely used for deployment, to the best of the authors’ knowledge this is the first time GEMMLOWP quantization is applied for training. Note that the maximum and minimum values of the activations are already computed by the Range BN operator; thus, finding the normalization scale (see Appendix) does not require additional O(n) operations. Finally, we note that good convergence was achieved only by using stochastic rounding [6] for the gradient quantization. This behaviour is not surprising, as the gradients eventually serve for the weight update; thus, an unbiased quantization scheme is required to avoid noise accumulation. Gradients Bifurcation: In the back-propagation algorithm we recursively calculate the gradients of the loss function L with respect to Iℓ, the input of neural layer ℓ, gℓ = ∂L/∂Iℓ, (5) starting from the last layer. Each layer needs to derive two sets of gradients to perform the recursive update. The layer activation gradients, gℓ−1 = gℓWℓ^T, (6) serve the Back-Propagation (BP) phase and are thus passed to the next layer, and the weight gradients, gWℓ = gℓIℓ−1^T, (7) are used to update the weights in layer ℓ. Since the backward pass requires twice the amount of multiplications compared to the forward pass, quantizing the gradients is a crucial step towards faster training machines. Since gℓ, the gradients streaming from layer ℓ, are required to compute gℓ−1, it is important to expedite the matrix multiplication described in Eq. 6. The second set of gradients, derived in Eq. 7, is not required for this sequential process and thus we choose to keep this matrix multiplication in full precision. We argue that the extra time required for this matrix multiplication is small compared to the time required to communicate the gradients gℓ. Thus, in this work the gradients used for the weight gradients derivation are still in float. In Section 6, we show empirically that bifurcation of the gradients is crucial for high accuracy results. Straight-Through Estimator: Similar to previous work [9, 15], we used the straight-through estimator (STE) approach to approximate differentiation through discrete variables. This is the simplest and most hardware-friendly approach to deal with the fact that the exact derivative of discrete variables is zero almost everywhere.
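As a rough illustration of the two building blocks described above, the sketch below implements the Range BN normalization of Eq. (2) and a simplified uniform quantizer with optional stochastic rounding. Per-channel handling, the learnable scale and shift of batch norm, and the exact GEMMLOWP scale/offset bookkeeping are omitted; all names and defaults are illustrative rather than taken from the released code.

```python
import math
import torch

def range_bn(x, eps=1e-5):
    # Range BN (Eq. 2): normalize each feature by C(n) * range(x - mu)
    # instead of the standard deviation; x has shape (n, d).
    n = x.shape[0]
    c_n = 1.0 / math.sqrt(2.0 * math.log(n))
    centered = x - x.mean(dim=0, keepdim=True)
    rng = centered.max(dim=0, keepdim=True).values - centered.min(dim=0, keepdim=True).values
    return centered / (c_n * rng + eps)

def quantize(x, num_bits=8, stochastic=False):
    # Simplified symmetric uniform quantizer; stochastic rounding gives the
    # unbiased scheme used for gradients, round-to-nearest is used elsewhere.
    scale = x.abs().max() / (2 ** (num_bits - 1) - 1)
    y = x / scale
    y = torch.floor(y + torch.rand_like(y)) if stochastic else torch.round(y)
    return y.clamp(-(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1) * scale
```

In the bifurcation scheme described above, the layer gradients gℓ would be quantized with stochastic rounding to 8-bit before computing gℓ−1, while a higher-precision copy of gℓ is kept for the weight gradient gWℓ.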
5 When is quantization of neural networks possible? This section provides some of the foundations needed for understanding the internal representation of quantized neural networks. It is well known that when batch norm is applied after a convolution layer, the output is invariant to the norm of the weights of the preceding layer [11], i.e., BN(C ·W · x) = BN(W · x) for any given constant C. This quantity is often described geometrically as the norm of the weight tensor, and in the presence of this invariance, the only measure that needs to be preserved upon quantization is the directionality of the weight tensor. In the following we show that quantization preserves the direction (angle) of high-dimensional vectors when W follows a Gaussian distribution. More specifically, for networks with M-bit fixed point representation, the angle is preserved when the number of quantization levels 2^M is much larger than √(2 ln(N)), where N is the size of the quantized vector. This shows that significant quantization is possible in practical settings. Taking for example the dimensionality of the joint product in a batch with 1024 examples corresponding to the last layer of ResNet-50, we need no more than 8 bits of precision to preserve the angle well (i.e., √(2 ln(3 · 3 · 2048 · 1024)) = 5.7 << 2^8). We stress that this result relies heavily on values being distributed according to a Gaussian distribution, and suggests why some vectors are robust to quantization (e.g., weights and activations) while others are more fragile (e.g., gradients). 5.1 Problem Statement Given a vector of weights W = (w0, w1, ..., wN−1), where the weights follow a Gaussian distribution W ∼ N(0, σ), we would like to measure the cosine similarity (i.e., cosine of the angle) between W and Q(W ), where Q(·) is a quantization function. More formally, we are interested in estimating the following geometric measure: cos(θ) = W · Q(W ) / (||W ||2 · ||Q(W )||2). (8) We next define the quantization function Q(·) using a fixed quantization step between adjacent quantized levels as follows: Q(x) = ∆ · (⌊x/∆⌋ + 1/2), where ∆ = max(|W |)/2^M. (9) We consider the case where the quantization step ∆ is much smaller than mean(|W |). Under this assumption, the correlation between W and the quantization noise ε = W − Q(W ) = (ε0, ε1, ..., εN−1) is negligible, and the noise can be approximated as additive. Our model assumes an additive quantization noise ε with a uniform distribution, i.e., εi ∼ U [−∆/2, ∆/2] for each index i. Our goal is to estimate the angle between W and W + ε for high dimensions (i.e., N → ∞). 5.2 Angle preservation during quantization In order to estimate the angle between W and W + ε, we first estimate the angle between W and ε. It is well known that if ε and W are independent, then at high dimension the angle between W and ε tends to π/2 [2], i.e., we get a right-angle triangle with W and ε as the legs, while W + ε is the hypotenuse, as illustrated in Figure 1-right. The cosine of the angle θ in that triangle can be approximated as follows: cos(θ) = ||W || / ||W + ε|| ≥ ||W || / (||W || + ||ε||). (10) Since W is Gaussian, we have that E(||W ||) ∼= √N · σ in high dimensions [3]. Additionally, in Appendix ?? we show that E(||ε||) ≤ √(N/12) · ∆. Moreover, at high dimensions, the relative error made by considering E(||X||) instead of the random variable ||X|| becomes asymptotically negligible [2]. Therefore, the following holds in high dimensions: cos(θ) ≥ σ / (σ + E(∆)/√12) = 2^M · σ / (2^M · σ + E(max(|W |))/√12). (11) Finally, E(max(|W |)) ≤ √2 · σ · √ln(N) when W follows a Gaussian distribution [13], establishing the following: cos(θ) ≥ 2^M / (2^M + √(lnN)/√6). (12) Eq. 12 establishes that when 2^M >> √ln(N) the angle is preserved during quantization. It is easy to see that in most practical settings this condition holds even for challenging quantizations. Moreover, this result depends heavily on the assumption made about the Gaussian distribution of W (the transition from Equation 11 to Equation 12).
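The bound in Eq. 12 is easy to check numerically. The short script below draws a Gaussian weight vector of the dimensionality quoted above, applies the fixed-step quantizer of Eq. (9), and compares the measured cosine with the lower bound; the seed and bit-widths are arbitrary choices for illustration.

```python
import math
import torch

def quantize(w, num_bits):
    # Fixed-step quantizer of Eq. (9): Delta = max|w| / 2^M.
    delta = w.abs().max() / (2 ** num_bits)
    return delta * (torch.floor(w / delta) + 0.5)

torch.manual_seed(0)
N = 3 * 3 * 2048 * 1024              # joint-product size quoted in the text
w = torch.randn(N)                   # Gaussian weights, sigma = 1
for m in (4, 6, 8):
    q = quantize(w, m)
    cos = (torch.dot(w, q) / (w.norm() * q.norm())).item()
    bound = 2 ** m / (2 ** m + math.sqrt(math.log(N)) / math.sqrt(6))
    print(f"{m}-bit: cos(theta) = {cos:.5f}, Eq. 12 lower bound = {bound:.5f}")
```

With 8 bits the measured cosine should be very close to 1, consistent with the claim above that the angle is well preserved in practical settings.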
6 Experiments We evaluated the ideas of Range Batch-Norm and Quantized Back-Propagation on multiple different models and datasets. The code to replicate all of our experiments is available online at https://github.com/eladhoffer/quantized.pytorch. 6.1 Experiment results on the CIFAR-10 dataset To validate our assumption that the cosine similarity is a good measure for the quality of the quantization, we ran a set of experiments on the CIFAR-10 dataset, each with a different number of bits, and then plotted the average angle and the final accuracy. As can be seen in Figure 2, there is a high correlation between the two. Taking a closer look, the following additional observations can be made: (1) During quantization, the direction of vectors is better preserved in the forward pass compared to the backward pass; (2) validation accuracy tightly follows the cosine of the angle in the backward pass, indicating gradient quantization as the primary bottleneck; (3) as expected, the bound on E(cos(θ)) in Eq. 12 holds in the forward pass, but less so in the backward pass, where the Gaussian assumption tends to break. The histograms in Figure 2 further confirm that the layer gradients gl do not follow a Gaussian distribution. These are the values that are bifurcated into low and high precision copies to reduce noise accumulation. 6.2 Experiment results on ImageNet dataset: Range Batch-Normalization We ran experiments with ResNet-50 on the ImageNet dataset showing the equivalence between the standard batch-norm and Range BN in terms of accuracy. The only difference between the experiments was the use of Range BN instead of the traditional batch-norm. Figure 3 compares the two and shows equivalence when models are trained at high precision. We also ran simulations on other datasets and models. When examining the final results, both were equivalent, i.e., 32.5% vs 32.4% for ResNet-18 on ImageNet and 10.5% vs 10.7% for ResNet-56 on CIFAR-10. To conclude, these simulations prove that we can replace standard batch-norm with Range BN while keeping accuracy unchanged. Replacing the sum of squares and square root operations in standard batch-norm by a few maximum and minimum operations has a major benefit in low-precision implementations. 6.3 Experiment results on ImageNet dataset: Putting it all together We conducted experiments using Range BN together with Quantized Back-Propagation. To validate this low precision scheme, we quantized the vast majority of operations to 8-bit. The only operations left at higher precision were the updates (float32) needed to accumulate small changes from stochastic gradient descent, and a copy of the layer gradients at 16 bits needed to compute gW. Note that the float32 updates are done once per minibatch while the propagations are done for each example (e.g., for a minibatch of 256 examples the updates constitute less than 0.4% of the training effort). Figure 4 presents the result of this experiment on the ImageNet dataset using ResNet-18 and ResNet-50. We provide additional results using more aggressive quantizations in Appendix F. 7 Discussion In this study, we investigate the internal representation of low precision neural networks and present guidelines for their quantization. Considering the preservation of direction during quantization, we analytically show that significant quantization is possible for vectors with a Gaussian distribution.
7 Discussion

In this study, we investigate the internal representation of low-precision neural networks and present guidelines for their quantization. Considering the preservation of direction during quantization, we analytically show that significant quantization is possible for vectors with a Gaussian distribution. On the forward pass, the inputs to each layer are known to be distributed according to a Gaussian distribution, but on the backward pass we observe that the layer gradients gl do not follow this distribution. Our experiments further show that the angle is not well preserved on the backward pass, and that the final validation accuracy tightly follows that angle. Accordingly, we bifurcate the layer gradients gl: a 16-bit copy is used for the computation of the weight gradients gW, while the computation of the next layer's gradients gl−1 is kept at 8 bits. This enables the (slower) 16-bit computation of gW to be done in parallel with that of gl−1, without interrupting the propagation of the layer gradients.

We further show that Range BN is comparable to the traditional batch norm in terms of accuracy and convergence rate, which makes it a viable alternative for low-precision training. During the forward-propagation phase, the square and square-root operations are avoided and replaced by max(·) and min(·) operations. During the back-propagation phase, the derivative of max(·) or min(·) is set to one at the coordinates where the maximal or minimal values are attained, and to zero otherwise.

Finally, we combine the two novelties into a single training scheme and demonstrate, for the first time, that 8-bit training on a large-scale dataset does not harm accuracy. Our quantization approach has major performance benefits in terms of speed, memory, and energy. By replacing float32 with int8, multiplications become 16 times faster and at least 15 times more energy efficient [10]. This impact is attained for 2/3 of all the multiplications, namely the forward pass and the calculation of the layer gradients gl. The weight gradients gW are computed as a product of an 8-bit operand (the layer input) with a 16-bit operand (the higher-precision copy of gl), resulting in an 8x speedup for the remaining multiplications and at least 2x power savings. Although previous works considered even lower-precision quantization (down to 1 bit), we claim that 8-bit quantization may prove to be of greater practical interest. Furthermore, 8-bit matrix multiplication is available as an off-the-shelf operation in existing hardware and can be easily adopted and used with our methods.

Acknowledgments

This research was supported by the Israel Science Foundation (grant No. 31/1031), and by the Taub foundation. A Titan Xp used for this research was donated by the NVIDIA Corporation. The authors are pleased to acknowledge that the work reported in this paper was substantially performed at Intel - Artificial Intelligence Products Group (AIPG).
1. What is the focus of the paper regarding training and quantization?
2. What are the strengths of the proposed approach, particularly in addressing numerical instability?
3. Do you have any concerns or suggestions regarding the theoretical analysis?
4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
5. Are there any specific points or sections in the paper that the reviewer would like to see improved or expanded upon?
Review
Review

# Summary of the paper
The goal of this paper is to train and quantize a model at 8 bits. This is interesting given that most existing works are based on 16 bits and people have had difficulty training 8-bit models. The paper identifies that the training difficulty comes from batch norm, and it proposes a variant of batch norm called Range Batch-Norm that alleviates the numerical instability the original batch norm suffers from in quantized models. With this simple modification, the paper shows that an 8-bit model can be easily trained using GEMMLOWP, an existing framework. The paper also tries to analyze and understand the proposed approach theoretically. Experiments support the paper's arguments well.

# General comments
I am on the positive side because I found the paper has a clear goal (training an 8-bit model), identified the right problem (batch norm), and proposed a solution to address the problem (Range Batch-Norm). The paper is technically sound and I appreciate the authors' effort in understanding the problem, which lays a foundation for the proposed solution.

# Quality
The proposed method is technically sound, simple, and effective. I roughly checked all the equations, which look good in general.
1- Given this is a model quantization paper, I would be interested in an evaluation and comparison of model size and speed.
2- The analysis in Section 3 is good. However, the assumption that x^{(d)} is Gaussian distributed is probably not true in real scenarios. The input data could be Gaussian; however, the inputs to the subsequent layers often are not. But I don't think this is a severe problem for this paper, given that properly analyzing neural networks is still a challenging theoretical problem.
3- Section 5 derives a lower bound on the expectation of the cosine. But what about the variance of the cosine? I think the variance could also be an important metric for better understanding such a performance guarantee.

# Clarity
The paper is well written and easy to follow. A few comments:
1- Appendix E is an important technical detail and should be included in the main body (Section 4) of the paper. If you feel the paper is too long, I would suggest shortening Section 5 a little; e.g., Figure 1-right does not seem to add information while taking a lot of space.
2- Fix typos, e.g., in Figure 1-left the x label "treshold" -> "threshold"; Line 233 "Res50" -> "ResNet-50". Please be consistent with the terminologies and short forms. In the caption of Figure 2, "with respect the" -> "with respect to the".
3- All equations should be properly punctuated.

# Originality
I believe the Range Batch-Norm and a systematic method to quantize models to 8 bits are novel.

# Significance
I think the results presented in this paper could be interesting to researchers in theory and quantization. Quantizing a model to 8 bits is interesting and might inspire more interesting future work in this area.
NIPS
Title Scalable methods for 8-bit training of neural networks
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. ∗Equal contribution

Abstract
Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch normalization to be trained at high precision, we introduce Range Batch-Normalization (Range BN), which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset.

1 Introduction

Deep Neural Networks (DNNs) have achieved remarkable results in many fields, making them the most common off-the-shelf approach for a wide variety of machine learning applications. However, as networks get deeper, using neural network (NN) algorithms and training them on conventional general-purpose digital hardware is highly inefficient. The main computational effort is due to massive amounts of multiply-accumulate operations (MACs) required to compute the weighted sums of the neurons' inputs and the parameters' gradients. Much work has been done to reduce the size of networks. The conventional approach is to compress a trained (full-precision) network [4, 19, 12] using weight sharing, low-rank approximation, quantization, pruning, or some combination thereof. For example, Han et al., 2015 [7] successfully pruned several state-of-the-art large-scale networks and showed that the number of parameters can be reduced by an order of magnitude.

Since training neural networks requires approximately three times more computation power than just evaluating them, quantizing the gradients is a critical step towards faster training machines. Previous work demonstrated that by quantizing network parameters and intermediate activations during the training phase, more computationally efficient DNNs could be constructed. Researchers [6, 5] have shown that 16-bit is sufficient precision for most network training, but further quantization (i.e., 8-bit) results in severe degradation. Our work is the first to almost exclusively train at 8-bit without harming classification accuracy. This is addressed by overcoming two main obstacles known to hamper numerical stability: batch normalization and gradient computations. The traditional batch normalization [11] implementation requires the computation of the sum of squares, square-root and reciprocal operations; these require high precision (to avoid zero variance) and a large dynamic range.
It should come as no surprise that previous attempts to use low-precision networks did not use batch normalization layers [21] or kept them in full precision [24]. This work replaces the batch norm operation with range batch-norm (Range BN), which normalizes inputs by the range of the input distribution (i.e., max(x) - min(x)). This measure is more suitable for low-precision implementations. Range BN is shown analytically to approximate the original batch normalization by multiplying this range with a scale adjustment that depends on the size of the batch and equals (2 · ln(n))^{-0.5}. Experiments on ImageNet with ResNet-18 and ResNet-50 showed no distinguishable difference between the accuracy of Range BN and traditional BN.

The second obstacle is related to gradient quantization. Given an upstream gradient gl from layer l, layer l - 1 needs to apply two different matrix multiplications: one for the layer gradient gl−1 and the other for the weight gradient gW, which is needed for the update rule. Our analysis indicates that the statistics of the gradient gl violate the assumptions at the crux of common quantization schemes. As such, quantizing these gradients constitutes the main cause of degradation in performance through training. Accordingly, we suggest using two versions of the layer gradients gl, one with low precision (8-bit) and another with higher precision (16-bit). The idea is to keep at 16 bits all calculations with gl that do not constitute a performance bottleneck, while performing the rest at 8 bits. As the gradients gW are required only for the weight update, they are computed using the 16-bit copy of gl. On the other hand, the gradient gl−1 is required for the entire backward stream and as such is computed using the corresponding 8-bit version of gl. In most layers of the DNN these computations can be performed in parallel. Hence gW can be computed at high precision in parallel with gl−1, without interrupting the propagation of gl to lower layers. We denote the use of two different arithmetic precisions in the differentiation process as "Gradients Bifurcation".

2 Previous Work

While several works [6, 5] have shown that training at 16-bit is sufficient for most networks, more aggressive quantization schemes have also been suggested [24, 16, 14, 10]. In the extreme case, the quantization process used only one bit, which resulted in binarized neural networks (BNNs) [9], where both weights and activations were constrained to -1 and 1. However, for more complex models and challenging datasets, the extreme compression rate resulted in a loss of accuracy. Recently, Mishra et al. [15] showed that this accuracy loss can be prevented by merely increasing the number of filter maps in each layer, thus suggesting that quantized neural networks (QNNs) do not possess an inherent convergence problem. Nevertheless, increasing the number of filter maps enlarges the number of parameters quadratically, which raises questions about the efficiency of this approach.

In addition to the quantization of the forward pass, a growing interest is directed towards the quantization of the gradient propagation in neural networks. A fully quantized method, allowing both forward and backward low-precision operations, will enable the use of dedicated hardware, with considerable computational, memory, and power benefits.
Previous attempts to discretize the gradients managed either to reduce them to 16-bit without loss of accuracy [5] or to apply a more aggressive approach and reduce the precision to 6-8 bits [24, 10] with a noticeable degradation. Batch normalization is mentioned by [21] as a bottleneck for network quantization and is either replaced by a constant scaling layer kept in full precision, or avoided altogether; this clearly has some impact on performance (e.g., AlexNet trained over ImageNet resulted in a top-1 error of 51.6%, where the state of the art is near 42%), and better ways to quantize normalization are explicitly called for. Recently, an L1 batch norm with only linear operations in both forward and backward propagation was suggested by [22, 8], with improved numerical stability. Yet, our experiments show that with 8-bit training even L1 batch norm is prone to overflows when summing over many large positive values. Finally, Wen et al. [20] focused on quantizing the gradient updates to ternary values to reduce the communication bandwidth in distributed systems.

We claim that although more aggressive quantization methods exist, 8-bit precision may prove to have a "sweet-spot" quality to it, by enabling training with no loss of accuracy and without modifying the original architecture. Moreover, we note that 8-bit quantization is better suited for future and even current hardware, much of which can already benefit from 8-bit operations [17]. So far, to the best of our knowledge, no work has succeeded in quantizing the activations, weights, and gradients of all layers (including batch normalization) to 8-bit without any degradation.

3 Range Batch-Normalization

For a layer with n x d-dimensional input x = (x^{(1)}, x^{(2)}, ..., x^{(d)}), traditional batch norm normalizes each dimension

\hat{x}^{(d)} = \frac{x^{(d)} - \mu_d}{\sqrt{\mathrm{Var}[x^{(d)}]}}, \qquad (1)

where \mu_d is the expectation over x^{(d)}, n is the batch size, and \mathrm{Var}[x^{(d)}] = \frac{1}{n}\|x^{(d)} - \mu_d\|_2^2. The term \sqrt{\mathrm{Var}[x^{(d)}]} involves sums of squares that can lead to numerical instability as well as to arithmetic overflows when dealing with large values. The Range BN method replaces this term by normalizing according to the range of the input distribution (i.e., max(·) - min(·)), making it more tolerant to quantization. For a layer with d-dimensional input x = (x^{(1)}, x^{(2)}, ..., x^{(d)}), Range BN normalizes each dimension

\hat{x}^{(d)} = \frac{x^{(d)} - \mu_d}{C(n)\cdot \mathrm{range}(x^{(d)} - \mu_d)}, \qquad (2)

where \mu_d is the expectation over x^{(d)}, n is the batch size, C(n) = \frac{1}{\sqrt{2\ln n}} is a scale adjustment term, and \mathrm{range}(x) = \max(x) - \min(x).

The main idea behind Range BN is to use the scale adjustment C(n) to approximate the standard deviation σ (traditionally used in vanilla batch norm) by multiplying it by the range of the input values. Assuming the input follows a Gaussian distribution, the range (spread) of the input is highly correlated with the standard deviation magnitude; therefore, by scaling the range by C(n) we can estimate σ. Note that the Gaussian assumption is a common approximation (e.g., Soudry et al. [18]), based on the fact that the neural input x^{(d)} is a sum of many inputs, so we expect it to be approximately Gaussian by the central limit theorem.

We now turn to derive the normalization term C(n). The expectation of the maximum of n Gaussian random variables is bounded as follows [13]:

0.23\sigma\sqrt{\ln n} \le E[\max(x^{(d)} - \mu_d)] \le \sqrt{2}\,\sigma\sqrt{\ln n}. \qquad (3)

Since x^{(d)} - \mu_d is symmetric about zero (centred at zero and assumed Gaussian), it holds that E[\max(\cdot)] = -E[\min(\cdot)]; hence,

0.23\sigma\sqrt{\ln n} \le -E[\min(x^{(d)} - \mu_d)] \le \sqrt{2}\,\sigma\sqrt{\ln n}. \qquad (4)

Therefore, by summing Equations 3 and 4 and multiplying the three parts of the inequality by the normalization term C(n), Range BN in Eq. 2 approximates the original standard deviation measure σ as follows:

0.325\sigma \le C(n)\cdot \mathrm{range}(x^{(d)} - \mu_d) \le 2\sigma.

Importantly, the scale adjustment term C(n) plays a major role in the success of Range BN. In simulations, performance degraded when C(n) was omitted or changed to nearby values.
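As an illustration of Eq. 2, here is a minimal PyTorch sketch of the Range BN normalization (our own simplified version, not the authors' implementation: it handles (n, d) inputs only, adds a small epsilon for numerical safety, and omits running statistics and any quantization of the statistics themselves):

```python
import math
import torch
import torch.nn as nn

class RangeBN(nn.Module):
    """Normalizes each feature by C(n) * range(x - mu) instead of the standard deviation,
    so the forward pass needs no sum of squares or square root (Eq. 2)."""

    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features))   # learnable scale (gamma)
        self.bias = nn.Parameter(torch.zeros(num_features))    # learnable shift (beta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, d); statistics are taken over the batch dimension.
        n = x.size(0)
        centred = x - x.mean(dim=0, keepdim=True)
        c_n = 1.0 / math.sqrt(2.0 * math.log(n))                # scale adjustment C(n)
        value_range = centred.max(dim=0).values - centred.min(dim=0).values
        x_hat = centred / (c_n * value_range + self.eps)
        return self.weight * x_hat + self.bias

# Quick sanity check of the approximation: C(n) * range should stay near sigma,
# and Eq. 3-4 guarantee its expected value lies in [0.325*sigma, 2*sigma].
x = torch.randn(256, 64) * 3.0                                  # sigma = 3
centred = x - x.mean(dim=0)
approx_sigma = (centred.max(dim=0).values - centred.min(dim=0).values) / math.sqrt(2 * math.log(256))
print(approx_sigma.mean().item())   # roughly 5, inside the [0.975, 6.0] band for sigma = 3
```

In a low-precision implementation the max and min are cheap to track, and, as noted in Section 4, they also provide the scale needed by the quantizer, so no extra O(n) pass over the activations is required.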
4 Quantized Back-Propagation

Quantization methods: Following [23] we used the GEMMLOWP quantization scheme as described in Google's open-source library [1]. A detailed explanation of this approach is given in the Appendix. While GEMMLOWP is widely used for deployment, to the best of the authors' knowledge this is the first time GEMMLOWP quantization is applied to training. Note that the activations' maximum and minimum values are already computed by the Range BN operator, so finding the normalization scale (see Appendix) does not require additional O(n) operations. Finally, we note that good convergence was achieved only when using stochastic rounding [6] for the gradient quantization. This behaviour is not surprising, as the gradients eventually serve for the weight update, so an unbiased quantization scheme is required to avoid noise accumulation.

Gradients Bifurcation: In the back-propagation algorithm we recursively calculate the gradients of the loss function L with respect to I_\ell, the input of the \ell-th neural layer,

g_\ell = \frac{\partial L}{\partial I_\ell}, \qquad (5)

starting from the last layer. Each layer needs to derive two sets of gradients to perform the recursive update. The layer activation gradients,

g_{\ell-1} = g_\ell W_\ell^T, \qquad (6)

serve the back-propagation (BP) phase and are passed to the next layer, and the weight gradients,

g_{W_\ell} = g_\ell I_{\ell-1}^T, \qquad (7)

are used to update the weights in layer \ell. Since the backward pass requires twice the amount of multiplications compared to the forward pass, quantizing the gradients is a crucial step towards faster training machines. Since g_\ell, the gradients streaming from layer \ell, are required to compute g_{\ell-1}, it is important to expedite the matrix multiplication described in Eq. 6. The second set of gradients, derived in Eq. 7, is not required for this sequential process, and we therefore choose to keep this matrix multiplication at full precision. We argue that the extra time required for this matrix multiplication is small compared to the time required to communicate the gradients g_\ell. Thus, in this work the gradients used for the weight-gradient derivation are kept in floating point. In Section 6, we show empirically that bifurcation of the gradients is crucial for high-accuracy results.

Straight-Through Estimator: Similar to previous work [9, 15], we used the straight-through estimator (STE) approach to approximate differentiation through discrete variables. This is the simplest and most hardware-friendly approach to dealing with the fact that the exact derivative of discrete variables is zero almost everywhere.
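A minimal sketch of the Gradients Bifurcation idea follows (our own illustration in PyTorch, not the authors' code: a symmetric per-tensor quantizer with stochastic rounding stands in for the GEMMLOWP scheme, the weight-gradient path uses a 16-bit copy of g_\ell rather than full float, and all helper names are ours):

```python
import torch

def stochastic_round(x: torch.Tensor) -> torch.Tensor:
    # Unbiased rounding: round up with probability equal to the fractional part.
    low = torch.floor(x)
    return low + (torch.rand_like(x) < (x - low)).float()

def quantize(t: torch.Tensor, num_bits: int, stochastic: bool = False) -> torch.Tensor:
    # Symmetric per-tensor quantizer: scale by max|t|, round, clamp, rescale.
    qmax = 2 ** (num_bits - 1) - 1
    scale = t.abs().max() / qmax
    q = stochastic_round(t / scale) if stochastic else torch.round(t / scale)
    return q.clamp(-qmax, qmax) * scale

class BifurcatedLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp, weight):
        ctx.save_for_backward(inp, weight)
        # Forward pass with 8-bit operands; the backward uses the saved full-precision
        # tensors, i.e. a straight-through estimator for the quantization itself.
        return quantize(inp, 8) @ quantize(weight, 8).t()

    @staticmethod
    def backward(ctx, grad_out):                          # grad_out plays the role of g_l
        inp, weight = ctx.saved_tensors
        g_low = quantize(grad_out, 8, stochastic=True)    # 8-bit copy, propagated onward
        g_high = quantize(grad_out, 16, stochastic=True)  # 16-bit copy, for the update
        grad_input = g_low @ quantize(weight, 8)          # layer gradient (Eq. 6), critical path
        grad_weight = g_high.t() @ inp                    # weight gradient (Eq. 7), update only
        return grad_input, grad_weight

x = torch.randn(32, 64, requires_grad=True)
w = torch.randn(128, 64, requires_grad=True)
BifurcatedLinear.apply(x, w).pow(2).mean().backward()
print(x.grad.shape, w.grad.shape)                         # (32, 64) and (128, 64)
```

Because the weight gradient is needed only for the optimizer step, its higher-precision matrix multiplication can run in parallel with the 8-bit computation of the layer gradient, without delaying the propagation to earlier layers.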
5 When is quantization of neural networks possible?

This section provides some of the foundations needed for understanding the internal representation of quantized neural networks. It is well known that when batch norm is applied after a convolution layer, the output is invariant to the norm of the weights of the preceding layer [11], i.e., BN(C · W · x) = BN(W · x) for any given constant C. This quantity is often described geometrically as the norm of the weight tensor, and in the presence of this invariance, the only measure that needs to be preserved upon quantization is the direction of the weight tensor. In the following we show that quantization preserves the direction (angle) of high-dimensional vectors when W follows a Gaussian distribution. More specifically, for networks with M-bit fixed-point representation, the angle is preserved when the number of quantization levels 2^M is much larger than \sqrt{2\ln N}, where N is the size of the quantized vector. This shows that significant quantization is possible in practical settings. Taking for example the dimensionality of the joint product in a batch of 1024 examples corresponding to the last layer of ResNet-50, we need no more than 8 bits of precision to preserve the angle well (i.e., \sqrt{2\ln(3\cdot 3\cdot 2048\cdot 1024)} = 5.7 \ll 2^8). We stress that this result relies heavily on the values being distributed according to a Gaussian distribution, and it suggests why some vectors are robust to quantization (e.g., weights and activations) while others are more fragile (e.g., gradients).

5.1 Problem Statement

Given a vector of weights W = (w_0, w_1, ..., w_{N-1}), where the weights follow a Gaussian distribution W ~ N(0, σ), we would like to measure the cosine similarity (i.e., the cosine of the angle) between W and Q(W), where Q(·) is a quantization function. More formally, we are interested in estimating the following geometric measure:

\cos(\theta) = \frac{W \cdot Q(W)}{\|W\|_2 \cdot \|Q(W)\|_2}. \qquad (8)

We next define the quantization function Q(·) using a fixed quantization step \Delta between adjacent quantization levels as follows:

Q(x) = \Delta\cdot\left(\left\lfloor \frac{x}{\Delta}\right\rfloor + \frac{1}{2}\right), \quad \text{where } \Delta = \frac{\max(|W|)}{2^M}. \qquad (9)

We consider the case where the quantization step \Delta is much smaller than \mathrm{mean}(|W|). Under this assumption the correlation between W and the quantization noise W - Q(W) = (\epsilon_0, \epsilon_1, ..., \epsilon_{N-1}) is negligible, and the noise can be approximated as additive. Our model therefore assumes an additive quantization noise \bar{\epsilon} with a uniform distribution, i.e., \epsilon_i \sim U[-\Delta/2, \Delta/2] for each index i. Our goal is to estimate the angle between W and W + \bar{\epsilon} in high dimensions (i.e., N \to \infty).

5.2 Angle preservation during quantization

In order to estimate the angle between W and W + \bar{\epsilon}, we first estimate the angle between W and \bar{\epsilon}. It is well known that if \bar{\epsilon} and W are independent, then in high dimensions the angle between them tends to \pi/2 [2]; i.e., we get a right-angle triangle with W and \bar{\epsilon} as the legs and W + \bar{\epsilon} as the hypotenuse, as illustrated in Figure 1-right. The cosine of the angle θ in that triangle can be approximated as follows:

\cos(\theta) = \frac{\|W\|}{\|W + \bar{\epsilon}\|} \ge \frac{\|W\|}{\|W\| + \|\bar{\epsilon}\|}. \qquad (10)

Since W is Gaussian, we have E(\|W\|) \cong \sqrt{N}\sigma in high dimensions [3]. Additionally, in the appendix we show that E(\|\bar{\epsilon}\|) \le \sqrt{N/12}\cdot\Delta. Moreover, in high dimensions, the relative error made by considering E\|X\| instead of the random variable \|X\| becomes asymptotically negligible [2]. Therefore, the following holds in high dimensions:

\cos(\theta) \ge \frac{\sigma}{\sigma + E(\Delta)/\sqrt{12}} = \frac{2^M\cdot\sigma}{2^M\cdot\sigma + E(\max(|W|))/\sqrt{12}}. \qquad (11)

Finally, E(\max(|W|)) \le \sqrt{2}\,\sigma\sqrt{\ln N} when W follows a Gaussian distribution [13], establishing the following:

\cos(\theta) \ge \frac{2^M}{2^M + \sqrt{\ln N}/\sqrt{6}}. \qquad (12)

Eq. 12 establishes that when 2^M \gg \sqrt{\ln N}, the angle is preserved during quantization. It is easy to see that in most practical settings this condition holds even for challenging quantizations. Moreover, this result depends heavily on the assumption that W follows a Gaussian distribution (used in the transition from Equation 11 to Equation 12).
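The bound in Eq. 12 is easy to check numerically. Below is a small sketch (our own illustration in PyTorch, not part of the paper) that quantizes a synthetic Gaussian vector with the quantizer of Eq. 9 and compares the measured cosine against the analytic lower bound:

```python
import math
import torch

def quantize(w: torch.Tensor, num_bits: int) -> torch.Tensor:
    # Eq. 9: Q(x) = delta * (floor(x / delta) + 1/2), with delta = max|W| / 2^M.
    delta = w.abs().max() / (2 ** num_bits)
    return delta * (torch.floor(w / delta) + 0.5)

n = 3 * 3 * 2048 * 1024                      # the Section 5 example dimensionality
w = torch.randn(n)                           # Gaussian weights, sigma = 1
for m in (2, 4, 8):
    q = quantize(w, m)
    measured = torch.nn.functional.cosine_similarity(w, q, dim=0).item()
    bound = 2 ** m / (2 ** m + math.sqrt(math.log(n)) / math.sqrt(6))   # Eq. 12
    print(f"M={m}: measured cos(theta)={measured:.5f}, lower bound={bound:.5f}")
```

For M = 8 both numbers are essentially 1, which is the headroom observation made above; for backward-pass gradients, whose distribution is far from Gaussian, the paper reports that this bound no longer describes the measured angle well.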
6 Experiments

We evaluated the ideas of Range Batch-Norm and Quantized Back-Propagation on multiple models and datasets. The code to replicate all of our experiments is available online at https://github.com/eladhoffer/quantized.pytorch.

6.1 Experiment results on the CIFAR-10 dataset

To validate our assumption that the cosine similarity is a good measure of the quality of the quantization, we ran a set of experiments on the CIFAR-10 dataset, each with a different number of bits, and then plotted the average angle and the final accuracy. As can be seen in Figure 2, there is a high correlation between the two. Taking a closer look, the following additional observations can be made: (1) during quantization, the direction of vectors is better preserved in the forward pass than in the backward pass; (2) validation accuracy tightly follows the cosine of the angle in the backward pass, indicating gradient quantization as the primary bottleneck; (3) as expected, the bound on E(cos(θ)) in Eq. 12 holds in the forward pass, but less so in the backward pass, where the Gaussian assumption tends to break. The histograms in Figure 2 further confirm that the layer gradients gl do not follow a Gaussian distribution. These are the values that are bifurcated into low- and high-precision copies to reduce noise accumulation.

6.2 Experiment results on the ImageNet dataset: Range Batch-Normalization

We ran experiments with ResNet-50 on the ImageNet dataset showing the equivalence between standard batch-norm and Range BN in terms of accuracy. The only difference between the experiments was the use of Range BN instead of the traditional batch-norm. Figure 3 compares the two and shows equivalence when models are trained at high precision. We also ran simulations on other datasets and models. The final results were equivalent, i.e., 32.5% vs. 32.4% for ResNet-18 on ImageNet and 10.5% vs. 10.7% for ResNet-56 on CIFAR-10. To conclude, these simulations show that we can replace standard batch-norm with Range BN while keeping accuracy unchanged. Replacing the sum-of-squares and square-root operations of standard batch-norm with a few maximum and minimum operations is a major benefit in low-precision implementations.

6.3 Experiment results on the ImageNet dataset: Putting it all together

We conducted experiments using Range BN together with Quantized Back-Propagation. To validate this low-precision scheme, we quantized the vast majority of operations to 8-bit. The only operations left at higher precision were the updates (float32) needed to accumulate small changes from stochastic gradient descent, and a copy of the layer gradients kept at 16 bits to compute gW. Note that the float32 updates are done once per minibatch, while the propagations are done for each example (e.g., for a minibatch of 256 examples the updates constitute less than 0.4% of the training effort). Figure 4 presents the results of this experiment on the ImageNet dataset using ResNet-18 and ResNet-50. We provide additional results using more aggressive quantizations in Appendix F.
7 Discussion

In this study, we investigate the internal representation of low-precision neural networks and present guidelines for their quantization. Considering the preservation of direction during quantization, we analytically show that significant quantization is possible for vectors with a Gaussian distribution. On the forward pass, the inputs to each layer are known to be distributed according to a Gaussian distribution, but on the backward pass we observe that the layer gradients gl do not follow this distribution. Our experiments further show that the angle is not well preserved on the backward pass, and that the final validation accuracy tightly follows that angle. Accordingly, we bifurcate the layer gradients gl: a 16-bit copy is used for the computation of the weight gradients gW, while the computation of the next layer's gradients gl−1 is kept at 8 bits. This enables the (slower) 16-bit computation of gW to be done in parallel with that of gl−1, without interrupting the propagation of the layer gradients.

We further show that Range BN is comparable to the traditional batch norm in terms of accuracy and convergence rate, which makes it a viable alternative for low-precision training. During the forward-propagation phase, the square and square-root operations are avoided and replaced by max(·) and min(·) operations. During the back-propagation phase, the derivative of max(·) or min(·) is set to one at the coordinates where the maximal or minimal values are attained, and to zero otherwise.

Finally, we combine the two novelties into a single training scheme and demonstrate, for the first time, that 8-bit training on a large-scale dataset does not harm accuracy. Our quantization approach has major performance benefits in terms of speed, memory, and energy. By replacing float32 with int8, multiplications become 16 times faster and at least 15 times more energy efficient [10]. This impact is attained for 2/3 of all the multiplications, namely the forward pass and the calculation of the layer gradients gl. The weight gradients gW are computed as a product of an 8-bit operand (the layer input) with a 16-bit operand (the higher-precision copy of gl), resulting in an 8x speedup for the remaining multiplications and at least 2x power savings. Although previous works considered even lower-precision quantization (down to 1 bit), we claim that 8-bit quantization may prove to be of greater practical interest. Furthermore, 8-bit matrix multiplication is available as an off-the-shelf operation in existing hardware and can be easily adopted and used with our methods.

Acknowledgments

This research was supported by the Israel Science Foundation (grant No. 31/1031), and by the Taub foundation. A Titan Xp used for this research was donated by the NVIDIA Corporation. The authors are pleased to acknowledge that the work reported in this paper was substantially performed at Intel - Artificial Intelligence Products Group (AIPG).
1. What is the main contribution of the paper regarding DNN quantization?
2. What are the strengths and weaknesses of the proposed method in terms of experimental results and theoretical analysis?
3. How does the Range Batch-Normalization (Range BN) operator contribute to the performance of the method?
4. Do you have any concerns or suggestions regarding the application of Jensen's inequality in the proof?
5. Are there any limitations or areas for improvement in the theoretical part of the paper?
Review
Review

The paper focuses on the very important problem of DNN quantization. The authors propose a method for quantizing gradients, activations, and weights to 8 bits without a drop in test accuracy. The authors noticed that layer gradients do not follow a Gaussian distribution and connected this observation with the poor performance of low-precision training. Based on this observation, the authors suggest replacing several 8-bit matrix multiplications with 16-bit operations during the backward pass. It is necessary to note that 16-bit operations are applied only to multiplications that do not constitute a performance bottleneck. Another important component that leads to good performance is the Range Batch-Normalization (Range BN) operator; in other words, the authors introduce a more robust version of the BN layer.

Overall, it is a very interesting and well-written paper and the result is pretty strong. However, more experimental results on low-precision training without a drop in accuracy are required, since this is the main contribution of the paper. The authors showed that their method has the same accuracy as a full-precision model only for ResNet-18 on ImageNet. The supplementary material contains more experiments on more aggressive and, as a result, lossy quantization.

The theoretical results in Section 5 also contain several shortcomings:
1. In subsection 5.4 the authors' reasoning is based on the fact that the vectors W and eps are independent. However, the components of eps are drawn from a uniform distribution whose parameters depend on max_i |W|_i.
2. In (13) the inequality should be replaced with an approximate inequality, since the authors consider an approximation. In (12) the equality should be replaced with an approximate equality for the same reason.
3. To prove (9) the authors use Jensen's inequality. The classic Jensen's inequality has the form f(\mathbb{E} X) \leq \mathbb{E}f(X), where f is convex. In this work, the authors apply this inequality to get the inequality (9) of the form \mathbb{E} Y f(\mathbb{E}X) \leq \mathbb{E}(f(X) Y), where X and Y are not independent variables and f is convex. In other words, could you please elaborate on how exactly you apply Jensen's inequality in (9)? Under the expectation in (9) there are two dependent variables (X = \|w\|_2 and Y = \|w\|_1), and f = 1/x takes as its argument only one of these variables (f(X) = 1/\|w\|_2).

Update: I would like to thank the authors for their feedback. Since the authors provided new experimental results, I will change my score. However, I think that the theoretical part should be improved. 'We note that unlike [1] which established that this angle converges to 37 degrees only at the limit when the dimension of the vector goes to infinity, our proof shows that this is a fundamental property valid also for the more practical case of finite dimensions.' In the feedback, the authors state that Jensen's inequality is a good approximation. However, it is a good approximation only when the dimensionality is large enough (in other words, as it goes to infinity). Therefore, this statement significantly resembles previous results. Moreover, since the authors use an approximation, they should not use the equality sign, because it confuses readers.
NIPS
1. What are the three contributions of the paper, and how do they relate to each other?
2. What is Range BN, and how does it improve upon traditional Batch Normalization?
3. What is the training scheme proposed in the paper, and how does it use low-precision weights and activations?
4. What is the purpose of Section 5, and how does it relate to the rest of the paper?
5. How does the paper analyze the relationship between cosine similarity and performance, and what are the limitations of this analysis?
6. What is the significance of the condition derived in the paper regarding the number of bits required for operations?
7. How could the clarity of the paper be improved, especially regarding the description of data formats and computational methods?
8. What are some minor concerns regarding the paper's content, such as typos and unclear statements?
9. How have the authors addressed the reviewer's concerns in their rebuttal, and how has this affected the overall assessment of the paper?
Review
Review At the highest level, the manuscript contains three contributions: 1. Range BN. This is a clever way to make BN more low precision friendly. The authors motivate it thoroughly and back it up with convincing experiments. A good contribution. 2. A largely heuristically derived training scheme that uses 8-bit weights and activations, a low (8-bit) and high (16-bit) copy of the deltas and some other tricks such as stochastic rounding. They show ImageNet ResNet18 to 65% accuracy matching full precision, which is a solid experimental validation of the method. 3. A lengthy section that tries to predict if a certain quantization scheme will be successful based on the cosine similarity between the original and the quantized weights. Overall, it appears to be a well-written, thought-out paper. Using INT8 for training is very novel and useful. For me the weak part of the paper is clearly Section 5. The hypothesis that angles are predictive of performance seems quite disconnected from the rest of the paper. I don't see evidence in the rest of the paper that this analysis provides anything predictive that could guide an experimenter. At the same time, it's not a theoretical result that is impactful on its own. It seems like there might be possible ways the authors could have tried to make more of this analysis, for example injecting angle noise into a network and measuring how it affects performance. But there is neither intuition nor data on what a certain angle means. What does it tell us if the angle for binary is 37°, which is almost right in the middle between no noise and orthogonal 90°? It's obvious that some error (in angle or by any other measure) will degrade performance, but to make this useful, there needs to be a functional relationship between the two. The relationship in Fig 2a) is tenuous at best, with a gradual fall-off in cosine and a sharp knee in accuracy. The derivation of the 2^M >> sqrt(ln(N)) condition is nicely done and promising, but the authors do not show that it's useful, which they could do e.g. by predicting the required number of bits for various operations and then showing convergence at this number of bits. Instead, they just observe that using 8 bits leaves a huge amount of headroom, rather than probing how tight this error bound is. My second, minor concern is that clarity should be improved. It's not exactly clear to me what data formats are used for which computations. It's laudable that the authors provide code to check, but this should be clear from the paper. Is the 8-bit format an integer format or fixed point, and how are exponents / scaling factors determined? What's the 16-bit format used for bifurcated gradients, is it fp16 or int16? Am I understanding correctly that weights and activations are in 8-bit and the gradients have 8- and 16-bit copies, or is there also a higher precision copy of the weights (common in many other low precision schemes)? What exactly do you mean by fp32 updates, does this imply there is a 32 bit copy of the weights, and if yes, where does stochastic rounding come in? This is partially addressed in the last paragraph, but should come earlier, and more clearly. Consider explaining it in the form of a table, or better yet a diagram showing the various tensors that go into computing and updating a layer, and what type they are in. Misc comments: - The section around lines 79 is incorrectly claiming that BatchNorm has not successfully been applied in 16 bit. Köster et al. 
2017 (Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks) have shown that a ResNet-110 with BN can be trained completely in 16 bit, with no high precision variance parameters. - The statement in line 80 is not clear: is 42% supposed to be the overall SotA or for AlexNet? It doesn't seem to be the right number for either. - Can you elaborate on the serial dependence in line 140? Why does this matter? Does this assume specialized hardware where serial dependencies are more costly than raw FLOPS? - Figure 1: Caption should mention that this figure is for the ternary case - Figure 2a: Needs axis labels. bits on the horizontal and angle / accuracy on the vertical? - Fig2 caption: typo, "with respect" should be "with respect to". Middle histogram in log scale -> all 3 are in log scale. - line 238, 10.5% for Cifar10 ResNet50. This model should train to about 93% accuracy according to He 2016. Please provide details on the exact model used and why it's so much worse. - 245 typo "precising" In conclusion, the paper is solidly borderline, by which I mean that a rebuttal that addresses these concerns would make me reconsider the rating. ----------- Response to author rebuttal: In response to the rebuttal and discussion with other reviewers I'd like to update my assessment: the paper should be accepted. The new experimental results make the paper much stronger, and the authors were able to clear up a lot of misunderstandings, which I hope will make it through to the final version of the paper (e.g. a figure showing the elements of a layer, indicating the data type for each operation)
NIPS
Title Geometry Based Data Generation Abstract We propose a new type of generative model for high-dimensional data that learns a manifold geometry of the data, rather than density, and can generate points evenly along this manifold. This is in contrast to existing generative models that represent data density, and are strongly affected by noise and other artifacts of data collection. We demonstrate how this approach corrects sampling biases and artifacts, thus improves several downstream data analysis tasks, such as clustering and classification. Finally, we demonstrate that this approach is especially useful in biology where, despite the advent of single-cell technologies, rare subpopulations and gene-interaction relationships are affected by biased sampling. We show that SUGAR can generate hypothetical populations, and it is able to reveal intrinsic patterns and mutual-information relationships between genes on a single-cell RNA sequencing dataset of hematopoiesis. 1 Introduction Manifold learning methods in general, and diffusion geometry ones in particular (Coifman & Lafon, 2006), are traditionally used to infer latent representations that capture intrinsic geometry in data, but they do not relate them to original data features. Here, we propose a novel data synthesis method, which we call SUGAR (Synthesis Using Geometrically Aligned Random-walks), for generating data in its original feature space while following its intrinsic geometry. This geometry is inferred by a diffusion kernel that captures a data-driven manifold and reveals underlying structure in the full range of the data space – including undersampled regions that can be augmented by new synthesized data. Geometry-based data generation with SUGAR is motivated by numerous uses in data exploration. For instance, in biology, despite the advent of single-cell technologies such as single-cell RNA sequencing and mass cytometry, sampling biases and artifacts often make it difficult to evenly sample the data space. Rare populations of relevance to disease and development are often left out (Grün et al., 2015). By learning the data geometry rather than density, SUGAR is able to generate hypothetical cell types for exploration, and uncover patterns and interactions in the data. Further, imbalanced data is problematic for many machine learning applications. In classification, for example, class density can strongly bias some classifiers (He & Garcia, 2009; López et al., ∗These authors contributed equally †These authors contributed equally; Corresponding author 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. 2013; Hensman & Masko, 2015). In clustering, imbalanced sampling of ground truth clusters can lead to distortions in learned clusters (Xuan et al., 2013; Wu, 2012). Sampling biases can also corrupt regression tasks; relationship measures such as mutual information are heavily weighted by density estimates and thus may mis-quantify the strength of dependencies with data whose density is concentrated in a particular region of the relationship (Krishnaswamy et al., 2014). SUGAR can aid such machine learning algorithms by generating data that is balanced along its manifold. There are several advantages of our approach over contemporary generative models. Most other generative models attempt to learn and replicate the density of the data; this approach is intractable in high dimensions. 
Distribution-based generative models typically require vast simplifications such as parametric forms or restriction to marginals in order to become tractable. Examples for such methods include Gaussian Mixture Models (GMM, Rasmussen, 2000), variational Bayesian methods (Beal & Ghahramani, 2003), and kernel density estimates (Scott, 2008). In contrast to these methods, SUGAR does not rely on high dimensional probability distributions or parametric forms. SUGAR selectively generates points to equalize density; as such, the method can be used generally to compensate for sparsity and heavily biased sampling in data in a way that is agnostic to downstream application. In other words, whereas more specialized methods may use prior information (e.g., labels) to correct class imbalances for classifier training (Chawla et al., 2002), SUGAR does not require such information and can apply even in cases such as clustering or regression, where such information does not exist. Here, we construct SUGAR from diffusion methods and theoretically justify its density equalization properties. We then demonstrate SUGAR on imbalanced artificial data. Subsequently, we use SUGAR to improve classification accuracy on 61 imbalanced datasets. We then provide an illustrative synthetic example of clustering with SUGAR and show the clustering performance of the method on 115 imbalanced datasets obtained from the KEEL-dataset repository (Alcalá-Fdez et al., 2009). Finally, we use SUGAR for exploratory analysis of a biological dataset, recovering imbalanced cell types and restoring canonical gene-gene relationships. 2 Related Work Most existing methods for data generation assume a probabilistic data model. Parametric density estimation methods, such as Rasmussen (2000) or Varanasi & Aazhang (1989), find a best fitting parametric model for the data using maximum likelihood, which is then used to generate new data. Nonparametric density estimators (e.g., Seaman & Powell, 1996; Scott, 1985; Giné & Guillou, 2002) use a histogram or a kernel (Scott, 2008) to estimate the generating distribution. Recently, Variational Auto-Encoders (VAE, Kingma & Welling, 2014; Doersch, 2016) and Generative Adversarial Networks (GAN, Goodfellow et al., 2014) have been demonstrated for generating new points from complex high dimensional distributions. A family of manifold based Parzen window estimators are presented in Vincent & Bengio (2003); Bengio & Monperrus (2005); Bengio et al. (2006). These methods exploit manifold structures to improve density estimation of high dimensional data. Markov-Chain Monte Carlo (MCMC) on implicitly defined manifolds was presented in Girolami & Calderhead (2011); Brubaker et al. (2012). There, the authors use implicit constraints to generate new points that follow a manifold structure. Another scheme by Öztireli et al. (2010) defines a spectral measure to resample existing points such that manifold structure is preserved and the density of points is uniform. These methods differ from the proposed approach as they either require implicit constraints or they change the values of existing points in the resampling process. 3 Background 3.1 Diffusion Geometry Coifman & Lafon (2006) proposed the nonlinear dimensionality reduction framework called Diffusion Maps (DM). This popular method robustly captures an intrinsic manifold geometry using a rowstochastic Markov matrix associated with a graph of the data. 
This graph is commonly constructed using a Gaussian kernel K(xi,xj) , Ki,j = exp ( −‖xi − xj‖ 2 2σ2 ) , i, j = 1, ..., N (1) where x1, . . . , xN are data points, and σ is a bandwidth parameter that controls neighborhood sizes. Then, a diffusion operator is defined as the row-stochastic matrix Pi,j = P(xi,xj) = [D−1K]i,j , i, j = 1, ..., N , where D is a diagonal matrix with values corresponding to the degree of the kernel Di,i = d̂(i) = ∑ j K(xi,xj). The degree d̂(i) of each point xi encodes the total connectivity the point has to its neighbors. The Markov matrix P defines an imposed diffusion process, shown by Coifman & Lafon (2006) to efficiently capture the diffusion geometry of a manifoldM. The DM framework may be used for dimensionality reduction to embed data using the eigendecomposition of the diffusion operator. However, in this paper, we do not directly use the DM embedding, but rather a variant of the operator P that captures diffusion geometry. In Sec. 4, we explain how this operator allows us to ensure the data we generate follows diffusion geometry and the manifold structure it represents. 3.2 Measure Based Gaussian Correlation Bermanis et al. (2016a,b) suggest the Measure-based Gaussian Correlation (MGC) kernel as an alternative to the Gaussian kernel (Eq. 1) for constructing diffusion geometry based on a measure µ. The measure could be provided in advance or approximated based on the data samples. The MGC kernel with a measure µ(r), r ∈X , defined over a set X of reference points, is K̂(xi,xj) = ∑ r∈X K(xi, r)K(r,xj)µ(r) , i, j = 1, ..., N , where the kernel K is some decaying symmetric function. Here, we use a Gaussian kernel for K and a sparsity-based measure for µ. 3.3 Kernel Bandwidth Selection The choice of kernel bandwidth σ in Eq. 1 is crucial for the performance of Gaussian-kernel methods. For small values of σ, the resulting kernel K converges to the identity matrix; inversely, large values of σ yield the all-ones matrix. Many methods have been proposed for tuning σ. A range of values is suggested in Singer et al. (2009) based on an analysis of the sum of values in K. Lindenbaum et al. (2017) presented a kernel scaling method that is well suited for classification and manifold learning tasks. We describe here two methods for setting the bandwidth: a global scale suggested in Keller et al. (2010) and an adaptive local scale based on Zelnik-Manor & Perona (2005). For degree estimation we use the max-min bandwidth (Keller et al., 2010) as it is simple and effective. The max-min bandwidth is defined by σ2MaxMin = C ·max j [min i,i 6=j (‖xi − xj‖2)] , where C ∈ [2, 3]. This approach attempts to force each point to be connected to at least one other point. This method is simple, but highly sensitive to outliers. Zelnik-Manor & Perona (2005) propose adaptive bandwidth selection. At each point xi, the scale σi is chosen as the L1 distance of xi from its r-th nearest neighbor. This adaptive bandwidth guarantees that at least half of the points are connected to r neighbors. Since an adaptive bandwidth obscures density biases, it is more suitable for applying the resulting diffusion process to the data than for degree estimation. 4 Data Generation 4.1 Problem Formulation LetM be a d dimensional manifold that lies in a higher dimensional space RD, with d < D, and let X ⊆M be a dataset of N = |X| data points, denoted x1, . . . ,xN , sampled from the manifold. 
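As a concrete illustration of the constructions in Sec. 3, the sketch below builds the Gaussian kernel of Eq. 1 with the max-min bandwidth of Keller et al. (2010), the degrees d̂(i), and the row-stochastic diffusion operator P = D⁻¹K. It is a minimal NumPy sketch written directly from the formulas above, not the authors' released code; the default C = 2.5 is simply a value inside the suggested interval [2, 3].

```python
import numpy as np

def diffusion_operator(X, C=2.5):
    """Gaussian kernel (Eq. 1), degrees d_hat, and row-stochastic P = D^{-1} K.
    X: (N, D) data matrix. C in [2, 3] sets the max-min bandwidth:
    sigma^2 = C * max_j min_{i != j} ||x_i - x_j||^2."""
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T     # pairwise squared distances
    D2 = np.maximum(D2, 0.0)                           # guard tiny negatives
    np.fill_diagonal(D2, np.inf)                       # exclude i == j from the min
    sigma2 = C * np.max(np.min(D2, axis=0))            # max-min bandwidth
    np.fill_diagonal(D2, 0.0)

    K = np.exp(-D2 / (2.0 * sigma2))                   # Gaussian kernel (Eq. 1)
    degrees = K.sum(axis=1)                            # d_hat(i) = sum_j K(x_i, x_j)
    P = K / degrees[:, None]                           # row-stochastic diffusion operator
    return K, P, degrees, sigma2

# toy usage: 200 noisy points on a circle
theta = np.random.rand(200) * 2 * np.pi
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(200, 2)
K, P, d_hat, sigma2 = diffusion_operator(X)
assert np.allclose(P.sum(axis=1), 1.0)
```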
In this paper, we propose an approach that uses the samples in X in order to capture the manifold Algorithm 1 SUGAR: Synthesis Using Geometrically Aligned Random-walks Input: Dataset X = {x1,x2, . . . ,xN},xi ∈ RD. Output: Generated set of points Y = {y1,y2, . . . , yM},yi ∈ RD. 1: Compute the diffusion geometry operators K, P , and degrees d̂(i), i = 1, ..., N (see Sec. 3) 2: Define a sparsity measure ŝ(i), i = 1, ..., N (Eq. 2). 3: Estimate a local covariance Σi, i = 1, ..., N , using k nearest neighbors around each xi. 4: For each point i = 1, ..., N draw ˆ̀(i) vectors (see Sec. 4.3) from a Gaussian distribution N (xi,Σi). Let Ŷ 0 be a matrix with these M = ∑N i=1 ˆ̀(i) generated vectors as its rows. 5: Compute the sparsity based diffusion operator P̂ (see Sec 4.2). 6: Apply the operator P̂ at time instant t to the new generated points in Ŷ 0 to get diffused points as rows of Y t = P̂ t · Y 0. 7: Rescale Y t to get the output Y [·, j] = Y t[·, j] · percentile(X [·,j],.99)maxY t[·,j] , j = 1, . . . , D, in order to fit the original range of feature values in the data. geometry and generate new data points from the manifold. In particular, we focus on the case where the points in X are unevenly sampled fromM, and aim to generate a set of M new data points Y = {y1, ...,yM} ⊆ RD such that 1. the new points Y approximately lie on the manifoldM, and 2. the distribution of points in the combined dataset Z , X ∪ Y is uniform. Our proposed approach is based on using an intrinsic diffusion process to robustly capture a manifold geometry from X (see Sec. 3). Then, we use this diffusion process to generate new data points along the manifold geometry while adjusting their intrinsic distribution, as explained in the following sections. 4.2 SUGAR: Synthesis Using Geometrically Aligned Random-walks SUGAR initializes by forming a Gaussian kernel GX (see Eq. 1) over the input data X in order to estimate the degree d̂(i) of each xi ∈ X . Because the space in which the degree is estimated impacts the output of SUGAR, X may consist of the full data dimensions or learned dimensions from manifold learning algorithms. We then define the sparsity of each point ŝ(i) via ŝ(i) , [d̂(i)]−1, i = 1, ..., N. (2) Subsequently, we sample ˆ̀(i) points hj ∈ Hi, j = 1, ..., ˆ̀(i) around each xi ∈ X from a set of localized Gaussian distributions Gi = N (xi,Σi) ∈ G. The choice of ˆ̀(i) based on the density (or sparsity) around xi is discussed in Sec. 4.3. This construction elaborates local manifold structure in meaningful directions by 1. compensating for data sparsity according to ŝ(i), and 2. centering each Gi on an existing point xi with local covariance Σi based on the k nearest neighbors of xi. The set of all M = ∑ i ˆ̀(i) new points, Y 0 = {y1, ...,yM}, is then given by the union of all local point sets Y 0 = H1 ∪H2 ∪ ... ∪HN . Next, we construct a sparsity-based MGC kernel (see Sec. 3.2) K̂(yi,yj) = ∑ r K(yi,xr)K(xr,yj)ŝ(r) using the affinities in the sampled set X and the generated set Y 0. We use this kernel to pull the new points Y 0 toward the sparse regions of the manifoldM using the row-stochastic diffusion operator P̂ (see Sec. 3.1). We then apply the powered operator P̂ t to Y 0, which averages points in Y 0 according to their neighbors in X . The powered operator P̂ t controls the diffusion distance over which points are averaged; higher values of t lead to wider local averages over the manifold. The operator may be modeled as a low pass filter in which higher powers decrease the cutoff frequency. 
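The generation and diffusion steps of Algorithm 1 can be sketched as follows. This is a simplified NumPy rendering of steps 3–7 (local covariances, Gaussian sampling, sparsity-weighted MGC kernel, diffusion, rescaling) written from the description above, not the released toolbox; the generation levels `l_hat` are taken as given here (Prop. 4.1 below says how to set them), and details such as the bandwidth used for the MGC kernel are assumptions.

```python
import numpy as np

def sugar_generate(X, l_hat, s_hat, k=10, t=1):
    """Sketch of Algorithm 1, steps 3-7 (simplified, illustrative only).
    X: (N, D) data; l_hat: (N,) generation level per point; s_hat: (N,) sparsity = 1/degree."""
    N, D = X.shape
    new_points = []
    for i in range(N):
        # step 3: local covariance from the k nearest neighbours of x_i
        nn = np.argsort(((X - X[i]) ** 2).sum(axis=1))[1:k + 1]
        Sigma_i = np.cov(X[nn].T) + 1e-6 * np.eye(D)
        # step 4: draw l_hat(i) points around x_i from N(x_i, Sigma_i)
        if l_hat[i] > 0:
            new_points.append(np.random.multivariate_normal(X[i], Sigma_i, int(l_hat[i])))
    Y0 = np.vstack(new_points)                           # (M, D) generated points

    # step 5: sparsity-based MGC kernel, K_hat(y_i, y_j) = sum_r K(y_i, x_r) K(x_r, y_j) s_hat(r)
    def gauss(A, B, sigma2):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma2))
    sigma2 = np.median(((X[:, None] - X[None, :]) ** 2).sum(-1))   # assumed bandwidth heuristic
    Kyx = gauss(Y0, X, sigma2)                           # affinities Y0 -> X
    K_hat = (Kyx * s_hat) @ Kyx.T                        # MGC kernel on the generated points
    P_hat = K_hat / K_hat.sum(axis=1, keepdims=True)     # row-stochastic operator

    # step 6: diffuse the new points, pulling them toward sparse regions of the manifold
    Yt = np.linalg.matrix_power(P_hat, t) @ Y0

    # step 7: rescale each feature to the original data range
    scale = np.percentile(X, 99, axis=0) / (Yt.max(axis=0) + 1e-12)
    return Yt * scale
```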
Because Y 0 is inherently noisy in the ambient space of the data, Y t = P̂ t · Y 0 is a denoised version of Y 0 along M. The number of steps required can be set manually or using the Von Neumann Entropy as suggested by Moon et al. (2017b). Because the filter P̂ t · Y 0 is not power preserving, Y t is rescaled to fit the range of original values of X. A full description of the approach is given in Alg. 1. 4.3 Manifold Density Equalization The generation level ˆ̀(i) in Alg. 1 (step 4), i.e., the number of points generated around each xi, determines the distribution of points in Y 1. Given a biased dataset X, we wish to generate points in sparse regions such that the resulting density over M becomes uniform. To do this we have proposed to draw ˆ̀(i) points around each point xi, i = 1, ..., N, from N(xi, Σi) (as described in Alg. 1). The following proposition provides bounds on the “correct” number of points ˆ̀(i), i = 1, ..., N, required to balance the density over the manifold by equalizing the degrees d̂(i). Proposition 4.1. The generation level ˆ̀(i) required to equalize the degree d̂(i) is bounded by det(I + Σi/(2σ²))^{1/2} · (max(d̂(·)) − d̂(i))/(d̂(i) + 1) − 1 ≤ ˆ̀(i) ≤ det(I + Σi/(2σ²))^{1/2} · [max(d̂(·)) − d̂(i)], where d̂(i) is the degree value at point xi, σ² is the bandwidth of the kernel K (Eq. 1), and Σi is the covariance of the Gaussian designed for generating new points (as described in Algorithm 1). In practice we suggest using the mean of the upper and lower bound to set the number of generated points ˆ̀(i). In Sec. 5.2 we demonstrate how the proposed scheme enables density equalization using few iterations of SUGAR. The proof of Prop. 4.1 is presented in the supplemental material. 5 Experimental Results 5.1 MNIST Manifold In the following experiment we empirically demonstrate the ability of SUGAR to fill in missing samples and compare it to two generative neural networks: a Variational Autoencoder (VAE, Kingma & Welling, 2014), which has an implicit probabilistic model of the data, and a Generative Adversarial Network (GAN, Goodfellow et al., 2014), which learns to mimic input data distributions. Note that we are not able to use other density estimates in general due to the high dimensionality of datasets and the inability of density estimates to scale to high dimensions. To begin, we rotated an example of a handwritten ‘6’ from the MNIST dataset by N = 320 different angles non-uniformly sampled over the range [0, 2π]. This circular construction was recovered by the diffusion maps embedding of the data, with points towards the undersampled regions having a lower degree than other regions of the embedding (Fig. 1, left, colored by degree). We then generated new points around each sample in the rotated data according to Alg. 1. We show the results of SUGAR before and after diffusion in Fig. 1 (top and bottom right, respectively). Next, we compared our results to a two-layer VAE and a GAN trained over the original data (Fig. 1, (b) and (c)). Training a GAN on a dataset with a number of samples of the same order as the dimension was not a simple task. In our experience, adding the gradient penalty suggested in Gulrajani et al. (2017) helps prevent mode collapse. The GAN was injected with uniform noise. Both SUGAR (t = 1) and the VAE generated points along the circular structure of the original manifold. For the output of the GAN, we had to filter out around 5% of the points, which fall far from the original manifold and look very noisy. 
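Returning to Prop. 4.1 above, the bounds on the generation level are straightforward to evaluate once the degrees, the kernel bandwidth, and the local covariances are known. The helper below is a small, assumed sketch (not the released implementation) that computes the lower and upper bounds and returns their mean, as the text suggests.

```python
import numpy as np

def generation_levels(degrees, Sigmas, sigma2):
    """Generation levels l_hat(i) from the Prop. 4.1 bounds (illustrative sketch).
    degrees: (N,) kernel degrees d_hat(i); Sigmas: (N, D, D) local covariances;
    sigma2: bandwidth of the Gaussian kernel K (Eq. 1).
    Returns the mean of the lower and upper bound, rounded to an integer."""
    d_max = degrees.max()
    D = Sigmas.shape[1]
    l_hat = np.zeros_like(degrees, dtype=float)
    for i, (d_i, Sigma_i) in enumerate(zip(degrees, Sigmas)):
        # det(I + Sigma_i / (2 sigma^2))^(1/2)
        vol = np.sqrt(np.linalg.det(np.eye(D) + Sigma_i / (2.0 * sigma2)))
        lower = vol * (d_max - d_i) / (d_i + 1.0) - 1.0
        upper = vol * (d_max - d_i)
        l_hat[i] = max(0.0, 0.5 * (lower + upper))
    return np.round(l_hat).astype(int)
```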
Examples of images from both techniques are presented in Fig. 1. Notably, the VAE generated images similar to the original angle distribution, such that sparse regions of the manifold were not filled. In contrast, points generated by SUGAR occupied new angles not present in the original data but clearly present along the circular manifold. This example illustrates the ability of SUGAR to recover sparse areas of a data manifold. 5.2 Density Equalization Given the circular manifold recovered in Sec. 5.1, we next sought to evaluate the density equalization properties proposed in Sec. 4.3. We begin by sampling one hundred points from a circle such that the highest density is at the origin (θ = 0) and the density decreases away from it (Fig. 2(a), colored by degree d̂(i)). SUGAR was then used to generate new points based on ˆ̀(i) around each original point (Fig. 2(b), before diffusion, 2(c), after diffusion). We repeat this process for different initial densities and evaluate the resulting distribution of points against the number of SUGAR iterations. We perform a Kolmogorov-Smirnov (K-S) test to determine if the points came from a uniform distribution. The resulting p-values are presented in Fig. 2(d). 5.3 Classification of Imbalanced Data The loss functions of many standard classification algorithms are global; these algorithms are thus easily biased when trained on imbalanced datasets. Imbalanced training typically manifests in poor classification of rare samples. These rare samples are often important (Weiss, 2004). For example, the preponderance of healthy individuals in medical data can obscure the diagnosis of rare diseases. Resampling and boosting strategies have been used to combat data imbalance. Removing points (undersampling) is a simple solution, but this strategy leads to information loss and can decrease generalization performance. RUSBoost (Seiffert et al., 2010) combines this approach with boosting, a technique that resamples the data using a set of weights learned by iterative training. Oversampling methods remove class imbalance by generating synthetic data alongside the original data. Synthetic Minority Over-sampling Technique (SMOTE, Chawla et al., 2002) oversamples by generating points along lines between existing points of minority classes. We compared SUGAR, RUSBoost, and SMOTE for improving k-NN and kernel SVM classification of 61 imbalanced datasets of varying size (from hundreds to thousands) and imbalance ratio (1.8–130), obtained from Alcalá-Fdez et al. (2009). To quantify classification performance we used Precision, Recall, and the Matthews correlation coefficient (MCC), which capture classification accuracy in light of data imbalance. For binary classification, precision measures the fraction of predicted positives that are true positives, recall measures the fraction of actual positives that are identified, and MCC is a discrete version of the Pearson correlation between the observed and predicted class labels. Formally, they are defined as Precision = TP/(TP + FP), Recall = TP/(TP + FN), and MCC = (TP · TN − FP · FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). For handling multiple classes, the first two are extended via average class precision and recall (ACP and ACR), defined as ACP = (1/C) Σ_{c=1}^{C} Precision(class = c) and ACR = (1/C) Σ_{c=1}^{C} Recall(class = c), while MCC is extended to multiclass settings as defined in Gorodkin (2004). These metrics ignore class population biases by equally weighting classes. This experiment is summarized in Table 1 (see supplement for full details). 
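A direct rendering of these metrics in code may help. The sketch below computes precision, recall, and MCC from binary labels, plus the macro-averaged ACP/ACR used for the multiclass case (the multiclass MCC of Gorodkin (2004) is omitted); it is an illustrative sketch, not the evaluation code used in the paper.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Precision, recall and Matthews correlation coefficient for labels in {0, 1}."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return precision, recall, mcc

def macro_precision_recall(y_true, y_pred, classes):
    """ACP / ACR: average class precision and recall, weighting every class equally."""
    prec, rec = [], []
    for c in classes:
        p, r, _ = binary_metrics((y_true == c).astype(int), (y_pred == c).astype(int))
        prec.append(p)
        rec.append(r)
    return np.mean(prec), np.mean(rec)
```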
5.4 Clustering of Imbalanced Data In order to examine the effect of SUGAR on clustering, we performed spectral clustering on a set of Gaussians in the shape of the word “SUGAR” (top panel, Fig. 3(a)). Next, we altered the mixtures to sample heavily towards points on the edges of the word (middle panel, Fig. 3(a)). This perturbation disrupted letter clusters. Finally, we performed SUGAR on the biased data to recreate data along the manifold. The combined data and its resultant clustering is shown in the bottom panel of Fig. 3(a), revealing that the letter clustering was restored after SUGAR. The effect of sample density on spectral clustering is evident in the eigendecomposition of the graph Laplacian, which describes the connectivity of the graph and is the basis for spectral clustering. We shall focus on the multiplicity of the zero eigenvalue, which corresponds to the number of connected components of a graph. In our example, we see that the zero eigenvalue for the ground truth and SUGAR graphs has a multiplicity of 5 whereas the corrupted graph only has a multiplicity of 4 (see Fig. 3(b)). This connectivity difference arises from the k-neighborhoods of points in each ground truth cluster. We note that variation in sample density disrupts the k-neighborhood of points in the downsampled region to include points outside of their ground truth cluster. These connections across the letters of “SUGAR” thus lead to a lower multiplicity of the zero eigenvalue, which negatively affects the spectral clustering. Augmenting the biased data via SUGAR equalizes the sampling density, restoring ground-truth neighborhood structure to the graph built on the data. Next, we explored the effects of SUGAR on traditional k-means across 115 datasets obtained from Alcalá-Fdez et al. (2009). K-means was performed using the ground truth number of clusters, and the Rand Index (RI, Hubert & Arabie, 1985) between the ground truth clustering and the empirical clustering was taken (Fig. 3(c), x-axis). Subsequently, SUGAR was used to generate new points for clustering together with the original data. The RI over the original data was again computed, this time using the SUGAR clusters (Fig. 3(c), y-axis). Our results indicate that SUGAR can be used to improve the cluster quality of k-means. 5.5 Biological Manifolds Next, we used SUGAR for exploratory analysis of a biological dataset. In Velten et al. (2017), a high dimensional yet small (X ∈ R^{1029×12553}) single-cell RNA sequencing (scRNA-seq) dataset was collected to elucidate the development of human blood cells, which is posited to form a continuum of development trajectories from a central reservoir of immature cells. This dataset thus represents an ideal substrate to explore manifold learning (Moon et al., 2017a). However, the data presents two distinct challenges due to 1. undersampling of cell types, and 2. dropout and artifacts associated with scRNA-seq (Kim et al., 2015). These challenges stand at odds with a central task of computational biology; namely, the characterization of gene-gene interactions that foment phenotypes. We first sought to enrich rare phenotypes in the Velten data by generating Y ∈ R^{4116×12553} new data points with SUGAR. A useful tool for this analysis is the ‘gene module’, a pair or set of genes that are expressed together to drive phenotype development. K-means clustering of the augmented data over fourteen principal gene modules (32 dimensions) revealed six cell types described in Velten et al. 
(2017) and a seventh cluster consisting of mature B-cells (Fig. 4(a)). Analysis of population prevalence before and after SUGAR revealed a dramatic enrichment of mature B and pre-B cells, eosinophil/basophil/mast cells (EBM), and neutrophils (N), while previously dominant megakaryocytes (MK) became a more equal portion of the post-SUGAR population (Fig. 4(b)). These results demonstrate the ability of SUGAR to balance population prevalence along a data manifold. In Fig. 4(c), we examine the effect of SUGAR on intra-module relationships. Because expression of genes in a module are molecularly linked, intra-module relationships should be strong in the absence of sampling biases and experimental noise. After SUGAR, we note an improvement in linear regression (r2) and scaled mutual information coefficients. We note that in some cases the change in mutual information was stronger than linear regression, likely due to nonlinearities in the module relationship. Because this experiment was based on putative intra-module relationships we next sought to identify strong improvements in regression coefficients de novo. To this end, we compared the relationship of the B cell maturation marker CD19 with the entire dataset before and after SUGAR. In Fig. 4(d) we show three relationships with marked improvement from the original data (top panel) to the augmented data (bottom panel). The markers uncovered by this search, HOXA3, CASP1, and EAF2, each have disparate relationships with CD19. HOXA3 marks stem cell immaturity, and is negatively correlated with CD19. In contrast, CASP1 is known to mark commitment to the B cell lineage (Velten et al., 2017). After SUGAR, both of these relationships were enhanced. EAF2 is a part of a module that is expressed during early development of neutrophils and monocytes; we observe that its correlation and mutual information with B cell maturation are also increased after SUGAR. We note that in light of the early development discussed by Velten et al. (2017), this new relationship seems problematic. In fact, Li et al. (2016) showed that EAF2 is upregulated in mature B cells as a mechanism against autoimmunity. Taken together, our analyses show that SUGAR is effective for bolstering relationships between dimensions in the absence of prior knowledge for exploratory data analysis. 6 Conclusion SUGAR presents a new type of generative model, based on data geometry rather than density. This enables us to compensate for sparsity and heavily biased sampling in many data types of interest, especially biomedical data. We assume that the training data lies on a low-dimensional manifold. The manifold assumption is usually valid in many datasets (e.g., single cell RNA sequencing (Moon et al., 2017a)) as they are globally high-dimensional but locally generated by a small number of factors. We use a diffusion kernel to capture the manifold structure. Then, we randomly generate new points along the incomplete manifold, with emphasis on its sparse areas. Finally, we use a weighted transition kernel to pull the new points towards the structure of the manifold. The presented method demonstrated promising results on synthetic data, MNIST images, and high dimensional biological datasets in applications such as clustering, classification, and mutual information relationship analysis. 
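The intra-module analysis of Fig. 4(c) above compares linear regression (r²) and scaled mutual information between gene pairs before and after augmentation. The paper does not spell out the estimator used, so the sketch below is an assumed rendering: a histogram-based mutual information normalized by the smaller marginal entropy, alongside the Pearson r²; the bin count and normalization choice are illustrative assumptions.

```python
import numpy as np

def r2_and_scaled_mi(x, y, bins=32):
    """Pearson r^2 and a scaled (normalized) mutual information between two
    1-D expression vectors (illustrative sketch; estimator choices assumed)."""
    r = np.corrcoef(x, y)[0, 1]
    # histogram-based mutual information estimate
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    # scale MI to [0, 1] by the smaller marginal entropy
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    scaled_mi = mi / min(hx, hy) if min(hx, hy) > 0 else 0.0
    return r**2, scaled_mi
```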
We note that a toolbox implementing the presented algorithm is available via GitHub (github.com/KrishnaswamyLab/SUGAR) for free academic use (see supplement for details), and we expect future work to apply SUGAR to study extremely biased biological datasets and improve classification and regression performance on them. Acknowledgments This research was partially funded by a grant from the Chan-Zuckerberg Initiative (ID: 182702).
1. What is the focus and contribution of the paper on generative models? 2. What are the strengths of the proposed approach, particularly in dealing with sparsity and bias in sampling? 3. Are there any concerns or questions regarding the novelty of the individual components of the pipeline? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? 5. What are the suggestions for enhancing the manuscript's value, such as discussing relations to other models like stochastic block models, UMAP, or tSNE, and using different performance metrics like MCC?
Review
Review The authors introduce here an innovative generative model for data, based on learning the manifold geometry rather than the density of the data. They claim that this technique can deal w/ sparsity and heavily biased sampling. In practice, this is realised through a diffusion kernel which generates new points along the incomplete manifold by randomly generating points at sparse areas. The paper is well written, with a grounded theoretical frameworks and based on an interesting novel idea. Although no ingredient of the pipeline could be considered really novel, the overall solution can be deemed original because of the proposed smart combination of the components. The experimental section is very rich and all the results are consistently supporting all the authors' claims; in particular, the results obtained on biological datasets are biologically plausible and meaningful, making the proposed method particularly suitable for this kind of data. A number of points can be raised whose discussion might enhance the manuscript value: - what is the relation with the stochastic block model in clustering? - what is the relation with UMAP and/or tSNE in dimensionality reduction and manifold approximation? - I would strongly suggest using MCC as the performance metric ## After reading the authors' rebuttal, I confirm my rating: the observations included in the rebuttal are convincing and further supporting my evaluation of the manuscript as a good submission.
NIPS
Title Geometry Based Data Generation Abstract We propose a new type of generative model for high-dimensional data that learns a manifold geometry of the data, rather than density, and can generate points evenly along this manifold. This is in contrast to existing generative models that represent data density, and are strongly affected by noise and other artifacts of data collection. We demonstrate how this approach corrects sampling biases and artifacts, thus improves several downstream data analysis tasks, such as clustering and classification. Finally, we demonstrate that this approach is especially useful in biology where, despite the advent of single-cell technologies, rare subpopulations and gene-interaction relationships are affected by biased sampling. We show that SUGAR can generate hypothetical populations, and it is able to reveal intrinsic patterns and mutual-information relationships between genes on a single-cell RNA sequencing dataset of hematopoiesis. 1 Introduction Manifold learning methods in general, and diffusion geometry ones in particular (Coifman & Lafon, 2006), are traditionally used to infer latent representations that capture intrinsic geometry in data, but they do not relate them to original data features. Here, we propose a novel data synthesis method, which we call SUGAR (Synthesis Using Geometrically Aligned Random-walks), for generating data in its original feature space while following its intrinsic geometry. This geometry is inferred by a diffusion kernel that captures a data-driven manifold and reveals underlying structure in the full range of the data space – including undersampled regions that can be augmented by new synthesized data. Geometry-based data generation with SUGAR is motivated by numerous uses in data exploration. For instance, in biology, despite the advent of single-cell technologies such as single-cell RNA sequencing and mass cytometry, sampling biases and artifacts often make it difficult to evenly sample the data space. Rare populations of relevance to disease and development are often left out (Grün et al., 2015). By learning the data geometry rather than density, SUGAR is able to generate hypothetical cell types for exploration, and uncover patterns and interactions in the data. Further, imbalanced data is problematic for many machine learning applications. In classification, for example, class density can strongly bias some classifiers (He & Garcia, 2009; López et al., ∗These authors contributed equally †These authors contributed equally; Corresponding author 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. 2013; Hensman & Masko, 2015). In clustering, imbalanced sampling of ground truth clusters can lead to distortions in learned clusters (Xuan et al., 2013; Wu, 2012). Sampling biases can also corrupt regression tasks; relationship measures such as mutual information are heavily weighted by density estimates and thus may mis-quantify the strength of dependencies with data whose density is concentrated in a particular region of the relationship (Krishnaswamy et al., 2014). SUGAR can aid such machine learning algorithms by generating data that is balanced along its manifold. There are several advantages of our approach over contemporary generative models. Most other generative models attempt to learn and replicate the density of the data; this approach is intractable in high dimensions. 
Distribution-based generative models typically require vast simplifications such as parametric forms or restriction to marginals in order to become tractable. Examples for such methods include Gaussian Mixture Models (GMM, Rasmussen, 2000), variational Bayesian methods (Beal & Ghahramani, 2003), and kernel density estimates (Scott, 2008). In contrast to these methods, SUGAR does not rely on high dimensional probability distributions or parametric forms. SUGAR selectively generates points to equalize density; as such, the method can be used generally to compensate for sparsity and heavily biased sampling in data in a way that is agnostic to downstream application. In other words, whereas more specialized methods may use prior information (e.g., labels) to correct class imbalances for classifier training (Chawla et al., 2002), SUGAR does not require such information and can apply even in cases such as clustering or regression, where such information does not exist. Here, we construct SUGAR from diffusion methods and theoretically justify its density equalization properties. We then demonstrate SUGAR on imbalanced artificial data. Subsequently, we use SUGAR to improve classification accuracy on 61 imbalanced datasets. We then provide an illustrative synthetic example of clustering with SUGAR and show the clustering performance of the method on 115 imbalanced datasets obtained from the KEEL-dataset repository (Alcalá-Fdez et al., 2009). Finally, we use SUGAR for exploratory analysis of a biological dataset, recovering imbalanced cell types and restoring canonical gene-gene relationships. 2 Related Work Most existing methods for data generation assume a probabilistic data model. Parametric density estimation methods, such as Rasmussen (2000) or Varanasi & Aazhang (1989), find a best fitting parametric model for the data using maximum likelihood, which is then used to generate new data. Nonparametric density estimators (e.g., Seaman & Powell, 1996; Scott, 1985; Giné & Guillou, 2002) use a histogram or a kernel (Scott, 2008) to estimate the generating distribution. Recently, Variational Auto-Encoders (VAE, Kingma & Welling, 2014; Doersch, 2016) and Generative Adversarial Networks (GAN, Goodfellow et al., 2014) have been demonstrated for generating new points from complex high dimensional distributions. A family of manifold based Parzen window estimators are presented in Vincent & Bengio (2003); Bengio & Monperrus (2005); Bengio et al. (2006). These methods exploit manifold structures to improve density estimation of high dimensional data. Markov-Chain Monte Carlo (MCMC) on implicitly defined manifolds was presented in Girolami & Calderhead (2011); Brubaker et al. (2012). There, the authors use implicit constraints to generate new points that follow a manifold structure. Another scheme by Öztireli et al. (2010) defines a spectral measure to resample existing points such that manifold structure is preserved and the density of points is uniform. These methods differ from the proposed approach as they either require implicit constraints or they change the values of existing points in the resampling process. 3 Background 3.1 Diffusion Geometry Coifman & Lafon (2006) proposed the nonlinear dimensionality reduction framework called Diffusion Maps (DM). This popular method robustly captures an intrinsic manifold geometry using a rowstochastic Markov matrix associated with a graph of the data. 
This graph is commonly constructed using a Gaussian kernel K(xi,xj) , Ki,j = exp ( −‖xi − xj‖ 2 2σ2 ) , i, j = 1, ..., N (1) where x1, . . . , xN are data points, and σ is a bandwidth parameter that controls neighborhood sizes. Then, a diffusion operator is defined as the row-stochastic matrix Pi,j = P(xi,xj) = [D−1K]i,j , i, j = 1, ..., N , where D is a diagonal matrix with values corresponding to the degree of the kernel Di,i = d̂(i) = ∑ j K(xi,xj). The degree d̂(i) of each point xi encodes the total connectivity the point has to its neighbors. The Markov matrix P defines an imposed diffusion process, shown by Coifman & Lafon (2006) to efficiently capture the diffusion geometry of a manifoldM. The DM framework may be used for dimensionality reduction to embed data using the eigendecomposition of the diffusion operator. However, in this paper, we do not directly use the DM embedding, but rather a variant of the operator P that captures diffusion geometry. In Sec. 4, we explain how this operator allows us to ensure the data we generate follows diffusion geometry and the manifold structure it represents. 3.2 Measure Based Gaussian Correlation Bermanis et al. (2016a,b) suggest the Measure-based Gaussian Correlation (MGC) kernel as an alternative to the Gaussian kernel (Eq. 1) for constructing diffusion geometry based on a measure µ. The measure could be provided in advance or approximated based on the data samples. The MGC kernel with a measure µ(r), r ∈X , defined over a set X of reference points, is K̂(xi,xj) = ∑ r∈X K(xi, r)K(r,xj)µ(r) , i, j = 1, ..., N , where the kernel K is some decaying symmetric function. Here, we use a Gaussian kernel for K and a sparsity-based measure for µ. 3.3 Kernel Bandwidth Selection The choice of kernel bandwidth σ in Eq. 1 is crucial for the performance of Gaussian-kernel methods. For small values of σ, the resulting kernel K converges to the identity matrix; inversely, large values of σ yield the all-ones matrix. Many methods have been proposed for tuning σ. A range of values is suggested in Singer et al. (2009) based on an analysis of the sum of values in K. Lindenbaum et al. (2017) presented a kernel scaling method that is well suited for classification and manifold learning tasks. We describe here two methods for setting the bandwidth: a global scale suggested in Keller et al. (2010) and an adaptive local scale based on Zelnik-Manor & Perona (2005). For degree estimation we use the max-min bandwidth (Keller et al., 2010) as it is simple and effective. The max-min bandwidth is defined by σ2MaxMin = C ·max j [min i,i 6=j (‖xi − xj‖2)] , where C ∈ [2, 3]. This approach attempts to force each point to be connected to at least one other point. This method is simple, but highly sensitive to outliers. Zelnik-Manor & Perona (2005) propose adaptive bandwidth selection. At each point xi, the scale σi is chosen as the L1 distance of xi from its r-th nearest neighbor. This adaptive bandwidth guarantees that at least half of the points are connected to r neighbors. Since an adaptive bandwidth obscures density biases, it is more suitable for applying the resulting diffusion process to the data than for degree estimation. 4 Data Generation 4.1 Problem Formulation LetM be a d dimensional manifold that lies in a higher dimensional space RD, with d < D, and let X ⊆M be a dataset of N = |X| data points, denoted x1, . . . ,xN , sampled from the manifold. 
In this paper, we propose an approach that uses the samples in X in order to capture the manifold Algorithm 1 SUGAR: Synthesis Using Geometrically Aligned Random-walks Input: Dataset X = {x1,x2, . . . ,xN},xi ∈ RD. Output: Generated set of points Y = {y1,y2, . . . , yM},yi ∈ RD. 1: Compute the diffusion geometry operators K, P , and degrees d̂(i), i = 1, ..., N (see Sec. 3) 2: Define a sparsity measure ŝ(i), i = 1, ..., N (Eq. 2). 3: Estimate a local covariance Σi, i = 1, ..., N , using k nearest neighbors around each xi. 4: For each point i = 1, ..., N draw ˆ̀(i) vectors (see Sec. 4.3) from a Gaussian distribution N (xi,Σi). Let Ŷ 0 be a matrix with these M = ∑N i=1 ˆ̀(i) generated vectors as its rows. 5: Compute the sparsity based diffusion operator P̂ (see Sec 4.2). 6: Apply the operator P̂ at time instant t to the new generated points in Ŷ 0 to get diffused points as rows of Y t = P̂ t · Y 0. 7: Rescale Y t to get the output Y [·, j] = Y t[·, j] · percentile(X [·,j],.99)maxY t[·,j] , j = 1, . . . , D, in order to fit the original range of feature values in the data. geometry and generate new data points from the manifold. In particular, we focus on the case where the points in X are unevenly sampled fromM, and aim to generate a set of M new data points Y = {y1, ...,yM} ⊆ RD such that 1. the new points Y approximately lie on the manifoldM, and 2. the distribution of points in the combined dataset Z , X ∪ Y is uniform. Our proposed approach is based on using an intrinsic diffusion process to robustly capture a manifold geometry from X (see Sec. 3). Then, we use this diffusion process to generate new data points along the manifold geometry while adjusting their intrinsic distribution, as explained in the following sections. 4.2 SUGAR: Synthesis Using Geometrically Aligned Random-walks SUGAR initializes by forming a Gaussian kernel GX (see Eq. 1) over the input data X in order to estimate the degree d̂(i) of each xi ∈ X . Because the space in which the degree is estimated impacts the output of SUGAR, X may consist of the full data dimensions or learned dimensions from manifold learning algorithms. We then define the sparsity of each point ŝ(i) via ŝ(i) , [d̂(i)]−1, i = 1, ..., N. (2) Subsequently, we sample ˆ̀(i) points hj ∈ Hi, j = 1, ..., ˆ̀(i) around each xi ∈ X from a set of localized Gaussian distributions Gi = N (xi,Σi) ∈ G. The choice of ˆ̀(i) based on the density (or sparsity) around xi is discussed in Sec. 4.3. This construction elaborates local manifold structure in meaningful directions by 1. compensating for data sparsity according to ŝ(i), and 2. centering each Gi on an existing point xi with local covariance Σi based on the k nearest neighbors of xi. The set of all M = ∑ i ˆ̀(i) new points, Y 0 = {y1, ...,yM}, is then given by the union of all local point sets Y 0 = H1 ∪H2 ∪ ... ∪HN . Next, we construct a sparsity-based MGC kernel (see Sec. 3.2) K̂(yi,yj) = ∑ r K(yi,xr)K(xr,yj)ŝ(r) using the affinities in the sampled set X and the generated set Y 0. We use this kernel to pull the new points Y 0 toward the sparse regions of the manifoldM using the row-stochastic diffusion operator P̂ (see Sec. 3.1). We then apply the powered operator P̂ t to Y 0, which averages points in Y 0 according to their neighbors in X . The powered operator P̂ t controls the diffusion distance over which points are averaged; higher values of t lead to wider local averages over the manifold. The operator may be modeled as a low pass filter in which higher powers decrease the cutoff frequency. 
Because Y 0 is inherently noisy in the ambient space of the data, Y t = P̂ t · Y 0 is a denoised version of Y 0 alongM. The number of steps required can be set manually or using the Von Neumann Entropy as suggested by Moon et al. (2017b). Because the filter P̂ t ·Y 0 is not power preserving, Y t is rescaled to fit the range of original values of X . A full description of the approach is given in Alg. 1. 4.3 Manifold Density Equalization The generation level ˆ̀(i) in Alg. 1 (step 4), i.e., the amount of points generated around each xi, determines the distribution of points in Y 1. Given a biased dataset X , we wish to generate points in sparse regions such that the resulting density overM becomes uniform. To do this we have proposed to draw ˆ̀(i) points around each point xi, i = 1, ..., N , from N (xi,Σi) (as described in Alg. 1). The following proposition provides bounds on the “correct” number of points ˆ̀(i), i = 1, ..., N , required to balance the density over the manifold by equalizing the degrees d̂(i). Proposition 4.1. The generation level ˆ̀(i) required to equalize the degree d̂(i), is bounded by det ( I + Σi 2σ2 ) 1 2 max(d̂(·))− d̂(i) d̂(i) + 1 − 1 ≤ ˆ̀(i) ≤ det ( I + Σi 2σ2 ) 1 2 [max(d̂(·))− d̂(i)] , where d̂(i) is the degree value at point xi, σ2 is the bandwidth of the kernel K (Eq. 1) and Σi is the covariance of the Gaussian designed for generating new points (as described in Algorithm 1). In practice we suggest to use the mean of the upper and lower bound to set the number of generated points ˆ̀(i). In Sec. 5.2 we demonstrate how the proposed scheme enables density equalization using few iterations of SUGAR. The proof of Prop. 4.1 is presented in the supplemental material. 5 Experimental Results 5.1 MNIST Manifold In the following experiment we empirically demonstrate the ability of SUGAR to fill in missing samples and compare it to two generative Neural Networks: a Variational Autoencoder (VAE, Kingma & Welling, 2014), which has an implicit probabilistic model of the data, and a Generative Adversarial Network (GAN, Goodfellow et al., 2014), which learns to mimic input data distributions. Note that we are not able to use other density estimates in general due to the high dimensionality of datasets and the inability of density estimates to scale to high dimensions. To begin, we rotated an example of a handwritten ‘6’ from the MNIST dataset in N = 320 different angles non-uniformly sampled over the range [0, 2π]. This circular construction was recovered by the diffusion maps embedding of the data, with points towards the undersampled regions having a lower degree than other regions of the embedding (Fig. 1, left, colored by degree). We then generated new points around each sample in the rotated data according to Alg. 1. We show the results of SUGAR before and after diffusion in Fig. 1 (top and bottom right, respectively). Next, we compared our results to a two-layer VAE and a GAN trained over the original data (Fig. 1, (b) and (c)). Training a GAN on a dataset with number of samples of the same order as the dimension was not a simple task. Based on our experience adding the gradient penalty as suggested in Gulrajani et al. (2017) helps prevent mode collapse. The GAN was injected with uniform noise. Both SUGAR (t = 1) and the VAE generated points along the circular structure of the original manifold. For the output of the GAN, we had to filter out around 5% of the points, which fall far from the original manifold and look very noisy. 
Examples of images from both techniques are presented in Fig. 1. Notably, the VAE generated images similar to the original angle distribution, such that sparse regions of the manifold were not filled. In contrast, points generated by SUGAR occupied new angles not present in the original data but clearly present along the circular manifold. This example illustrates the ability of SUGAR to recover sparse areas of a data manifold. 5.2 Density Equalization Given the circular manifold recovered in Sec. 5.1, we next sought to evaluate the density equalization properties proposed in Sec. 4.3. We begin by sampling one hundred points from a circle such that the highest density is at the origin (θ = 0) and the density decreases away from it (Fig. 2(a), colored by degree d̂(i)). SUGAR was then used to generate new points based on ˆ̀(i) around each original point (Fig. 2(b), before diffusion, 2(c), after diffusion). We repeat this process for different initial densities and evaluate the resulting distribution of point against the amount of iteration of SUGAR. We perform a Kolmogorov-Smirnov (K-S) test to determine if the points came from a uniform distribution. The resulting p-values are presented in Fig. 2(d). 5.3 Classification of Imbalanced Data The loss functions of many standard classification algorithms are global; these algorithms are thus easily biased when trained on imbalanced datasets. Imbalanced training typically manifests in poor classification of rare samples. These rare samples are often important (Weiss, 2004). For example, the preponderance of healthy individuals in medical data can obscure the diagnosis of rare diseases. Resampling and boosting strategies have been used to combat data imbalance. Removing points (undersampling) is a simple solution, but this strategy leads to information loss and can decrease generalization performance. RUSBoost (Seiffert et al., 2010) combines this approach with boosting, a technique that resamples the data using a set of weights learned by iterative training. Oversampling methods remove class imbalance by generating synthetic data alongside the original data. Synthetic Minority Over-sampling Technique (SMOTE, Chawla et al., 2002) oversamples by generating points along lines between existing points of minority classes. We compared SUGAR, RUSBoost, and SMOTE for improving k-NN and kernel SVM classification of 61 imbalanced datasets of varying size (from hundreds to thousands) and imbalance ratio (1.8–130), obtained from Alcalá-Fdez et al. (2009). To quantify classification performance we used Precision, Recall, and the Mathews correlation coefficient (MCC), which capture classification accuracy in light of data imbalance. For binary classification, precision measures the fraction of true positives to false positives, recall measures the fraction of true positives identified, and MCC is a discrete version of Pearson correlation between the observed and predicted class labels. Formally, they are defined as Precision = TP TP + FP Recall = TP TP + FN MCC = TP · TN − FP · FN√ (TP + FP )(TP + FN)(TN + FP )(TN + FN) . For handling multiple classes, the first two are extended via average class precision and recall (ACP and ACR), which are defined as ACP = 1 C C∑ c=1 Precision(class = c) ACR = 1 C C∑ c=1 Recall(class = c) , while MCC is extended to multiclass settings as defined in Gorodkin (2004). These metrics ignore class population biases by equally weighting classes. This experiment is summarized in Table 1 (see supplement for full details). 
5.4 Clustering of Imbalanced Data In order to examine the effect of SUGAR on clustering, we performed spectral clustering on a set of Gaussians in the shape of the word “SUGAR” (top panel, Fig. 3(a)). Next, we altered the mixtures to sample heavily towards points on the edges of the word (middle panel, Fig. 3(a)). This perturbation disrupted letter clusters. Finally, we performed SUGAR on the biased data to recreate data along the manifold. The combined data and its resultant clustering is shown in the bottom panel of Fig. 3(a) revealing that the letter clustering was restored after SUGAR. The effect of sample density on spectral clustering is evident in the eigendecomposition of the graph Laplacian, which describes the connectivity of the graph and is the basis for spectral clustering. We shall focus on the multiplicity of the zero eigenvalue, which corresponds to the number of connected components of a graph. In our example, we see that the zero eigenvalue for the ground truth and SUGAR graphs has a multiplicity of 5 whereas the corrupted graph only has a multiplicity of 4 (see Fig. 3(b)). This connectivity difference arises from the k-neighborhoods of points in each ground truth cluster. We note that variation in sample density disrupts the k-neighborhood of points in the downsampled region to include points outside of their ground truth cluster. These connections across the letters of “SUGAR” thus lead to a lower multiplicity of the zero eigenvalue, which negatively affects the spectral clustering. Augmenting the biased data via SUGAR equalizes the sampling density, restoring ground-truth neighborhood structure to the graph built on the data. Next, we explored the effects of SUGAR on traditional k-means across 115 datasets obtained from Alcalá-Fdez et al. (2009). K-means was performed using the ground truth number of clusters, and the Rand Index (RI, Hubert & Arabie, 1985) between the ground truth clustering and the empirical clustering was taken (Fig. 3(c), x-axis). Subsequently, SUGAR was used to generate new points for clustering together with the original data. The RI over the original data was again computed, this time using the SUGAR clusters (Fig. 3(c), y-axis). Our results indicate the SUGAR can be used to improve the cluster quality of k-means. 5.5 Biological Manifolds Next, we used SUGAR for exploratory analysis of a biological dataset. In Velten et al. (2017), a high dimensional yet small (X ∈ R1029×12553) single-cell RNA sequencing (scRNA-seq) dataset was collected to elucidate the development of human blood cells, which is posited to form a continuum of development trajectories from a central reservoir of immature cells. This dataset thus represents an ideal substrate to explore manifold learning (Moon et al., 2017a). However, the data presents two distinct challenge due to 1. undersampling of cell types, and 2. dropout and artifacts associated with scRNA-seq (Kim et al., 2015). These challenges stand at odds with a central task of computational biology; namely, the characterization of gene-gene interactions that foment phenotypes. We first sought to enrich rare phenotypes in the Velten data by generating Y ∈ R4116×12553 new data points with SUGAR. A useful tool for this analysis is the ‘gene module’, a pair or set of genes that are expressed together to drive phenotype development. K-means clustering of the augmented data over fourteen principal gene modules (32 dimensions) revealed six cell types described in Velten et al. 
(2017) and a seventh cluster consisting of mature B-cells (Fig. 4(a)). Analysis of population prevalence before and after SUGAR revealed a dramatic enrichment of mature B and pre-B cells, eosinophil/basophil/mast cells (EBM), and neutrophils (N), while previously dominant megakaryocytes (MK) became a more equal portion of the post-SUGAR population (Fig. 4(b)). These results demonstrate the ability of SUGAR to balance population prevalence along a data manifold. In Fig. 4(c), we examine the effect of SUGAR on intra-module relationships. Because expression of genes in a module are molecularly linked, intra-module relationships should be strong in the absence of sampling biases and experimental noise. After SUGAR, we note an improvement in linear regression (r2) and scaled mutual information coefficients. We note that in some cases the change in mutual information was stronger than linear regression, likely due to nonlinearities in the module relationship. Because this experiment was based on putative intra-module relationships we next sought to identify strong improvements in regression coefficients de novo. To this end, we compared the relationship of the B cell maturation marker CD19 with the entire dataset before and after SUGAR. In Fig. 4(d) we show three relationships with marked improvement from the original data (top panel) to the augmented data (bottom panel). The markers uncovered by this search, HOXA3, CASP1, and EAF2, each have disparate relationships with CD19. HOXA3 marks stem cell immaturity, and is negatively correlated with CD19. In contrast, CASP1 is known to mark commitment to the B cell lineage (Velten et al., 2017). After SUGAR, both of these relationships were enhanced. EAF2 is a part of a module that is expressed during early development of neutrophils and monocytes; we observe that its correlation and mutual information with B cell maturation are also increased after SUGAR. We note that in light of the early development discussed by Velten et al. (2017), this new relationship seems problematic. In fact, Li et al. (2016) showed that EAF2 is upregulated in mature B cells as a mechanism against autoimmunity. Taken together, our analyses show that SUGAR is effective for bolstering relationships between dimensions in the absence of prior knowledge for exploratory data analysis. 6 Conclusion SUGAR presents a new type of generative model, based on data geometry rather than density. This enables us to compensate for sparsity and heavily biased sampling in many data types of interest, especially biomedical data. We assume that the training data lies on a low-dimensional manifold. The manifold assumption is usually valid in many datasets (e.g., single cell RNA sequencing (Moon et al., 2017a)) as they are globally high-dimensional but locally generated by a small number of factors. We use a diffusion kernel to capture the manifold structure. Then, we randomly generate new points along the incomplete manifold, with emphasis on its sparse areas. Finally, we use a weighted transition kernel to pull the new points towards the structure of the manifold. The presented method demonstrated promising results on synthetic data, MNIST images, and high dimensional biological datasets in applications such as clustering, classification, and mutual information relationship analysis. 
We note that a toolbox implementing the presented algorithm is available on GitHub (github.com/KrishnaswamyLab/SUGAR) for free academic use (see supplement for details), and we expect future work to apply SUGAR to study extremely biased biological datasets and improve classification and regression performance on them.

Acknowledgments This research was partially funded by a grant from the Chan-Zuckerberg Initiative (ID: 182702).
1. What is the main contribution of the paper in terms of clustering and classification tasks? 2. What are the strengths of the proposed approach, particularly in dealing with imbalanced datasets? 3. Do you have any concerns or questions regarding the experimental results, such as the choice of parameters or the simplicity of the MNIST experiment? 4. How does the method address the issue of fake new clusters, which may arise from imputing new data points? 5. What is the significance of the proof provided in the paper regarding setting l(i)? 6. Can you provide more details about the initialization procedure and guarantees for the fast random initialization mentioned in Assumption B1? 7. How does the SUGAR method handle permutation invariance, as mentioned in the review? 8. Are there any simulation results verifying the convergence rate of the algorithm, as asked in the review?
Review
Review It is difficult to discover small clusters from imbalanced datasets in classification/clustering tasks. The authors try to tackle this problem by imputing new data points into the original dataset using a diffusion map based approach. The diffusion map is a Gaussian kernel normalized by a diagonal matrix D, whose diagonal elements are the total connectivity of the corresponding data point. This kernel can be further extended to the measure-based Gaussian Correlation (MGC) kernel, which only considers a set of reference points. The main idea is to generate new data points using the MGC kernel in the neighbourhood of each original data point and combine them for downstream classification/clustering tasks. The authors then discuss how to select the bandwidth of the Gaussian kernel, how many new data points to generate for each original data point, and how to set the number of multiplication steps of the Gaussian kernel. After that the authors provide a proof about setting l(i) which I did not go through. The MNIST experiment shows the method is able to interpolate data points in low density regions, while a variational autoencoder is not. After that the authors check that the generated data points are uniformly distributed on the manifold. Then they show SUGAR achieves better performance compared with other data sampling methods on classification and clustering of imbalanced datasets. In the final single cell dataset, they show SUGAR helps to separate different cell types from the data. In general I feel the key strength of SUGAR comes from the random walk near the original data points, which may help to connect the low density regions. The exploration property comes from the diffusion distance (line 141). The larger the t used in the diffusion distance, the closer the newly generated data points are to the original. While solving the minor cluster problem, I have a gut feeling that imputing new data points may bring up new problems such as fake new clusters. I have some questions regarding the experiments: 1. MNIST: this experiment may be too simple. Only one "6" is rotated to generate the whole dataset, so there is no noise in principle. A comparison with a GAN instead of a VAE may be more realistic. In Figure 1, how is t determined? 2. Clustering of imbalanced data: Figure 3a shows the contribution of SUGAR is merging the two clusters in "S" but not imputing the low density regions in "G" (bottom part); what is the t used here? How is K chosen in k-means for the newly generated dataset? 3. Biological data: Figure 4d, what are the x-axis and y-axis? 4. What is "f" in line 133? Although it has some flaws, I feel this is a nice paper to be accepted. Updates: Based on the authors' feedback, it seems to be a nice paper.
NIPS
Title FourierFormer: Transformer Meets Generalized Fourier Integral Theorem Abstract Multi-head attention empowers the recent success of transformers, the state-of-theart models that have achieved remarkable success in sequence modeling and beyond. These attention mechanisms compute the pairwise dot products between the queries and keys, which results from the use of unnormalized Gaussian kernels with the assumption that the queries follow a mixture of Gaussian distribution. There is no guarantee that this assumption is valid in practice. In response, we first interpret attention in transformers as a nonparametric kernel regression. We then propose the FourierFormer, a new class of transformers in which the dot-product kernels are replaced by the novel generalized Fourier integral kernels. Different from the dot-product kernels, where we need to choose a good covariance matrix to capture the dependency of the features of data, the generalized Fourier integral kernels can automatically capture such dependency and remove the need to tune the covariance matrix. We theoretically prove that our proposed Fourier integral kernels can efficiently approximate any key and query distributions. Compared to the conventional transformers with dot-product attention, FourierFormers attain better accuracy and reduce the redundancy between attention heads. We empirically corroborate the advantages of FourierFormers over the baseline transformers in a variety of practical applications including language modeling and image classification. 1 Introduction Transformers [83] are powerful neural networks that have achieved tremendous success in many areas of machine learning [40, 76, 36] and become the state-of-the-art model on a wide range of applications across different data modalities, from language [23, 1, 18, 13, 62, 4, 8, 21] to images [24, 43, 78, 63, 59, 27], videos [3, 44], point clouds [97, 31], and protein sequence [65, 34]. In addition to their excellent performance on supervised learning tasks, transformers can also effectively transfer the learned knowledge from a pretraining task to new tasks with limited or no supervision [60, 61, 23, 94, 42]. At the core of transformers is the dot-product self-attention, which ⇤ Co-first authors. ⇤⇤ Co-last authors. Please correspond to: tanmnguyen89@ucla.edu 36th Conference on Neural Information Processing Systems (NeurIPS 2022). mainly accounts for the success of transformer models [14, 56, 41]. This dot-product self-attention learn self-alignment between tokens in an input sequence by estimating the relative importance of a given token with respect to all other tokens. It then transform each token into a weighted average of the feature representations of other tokens where the weight is proportional to a importance score between each pair of tokens. The importance scores in self-attention enable a token to attend to other tokens in the sequence, thus capturing the contextual representation [6, 83, 38]. 1.1 Self-Attention Given an input sequence X := [x1, · · · ,xN ]> 2 RN⇥Dx of N feature vectors, self-attention computes the output sequence H from X as follows: Step 1: Projecting the input sequence into different subspaces. The input sequence X is transformed into the query matrix Q, the key matrix K, and the value matrix V via three linear transformations Q = XW>Q;K = XW > K ;V = XW > V , where WQ,WK 2 RD⇥Dx , and WV 2 RDv⇥Dx are the weight matrices. 
We denote Q := [q1, · · · , qN ]>,K := [k1, · · · ,kN ]>, and V := [v1, · · · ,vN ]>, where the vectors qi,ki,vi for i = 1, · · · , N are the query, key, and value vectors, respectively. Step 2: Computing the output as a weighted average. The output sequence H := [h1, · · · ,hN ]> is then given by H = softmax ⇣ QK > / p D ⌘ V := AV, (1) where the softmax function is applied to each row of the matrix (QK>)/ p D. For each query vector qi, i = 1, · · · , N , Eqn. (1) can be written in the vector form to compute the output vector hi as follows hi = NX j=1 softmax ⇣ q>i kj/ p D ⌘ vj := NX j=1 aijvj . (2) The matrix A 2 RN⇥N and its component aij for i, j = 1, · · · , N are the attention matrix and attention scores, respectively. The self-attention computed by equations (1) and (2) is called the dotproduct attention or softmax attention. In our paper, we refer a transformer that uses this attention as the baseline transformer with the dot-product attention or the dot-product transformer. The structure of the attention matrix A after training governs the ability of the self-attention to capture contextual representation for each token. Multi-head Attention Each output sequence H forms an attention head. Multi-head attention concatenates multiple heads to compute the final output. Let H be the number of heads and W O 2 RHDv⇥HDv be the projection matrix for the output. The multi-head attention is defined as MultiHead({Q,K,V}Hi=1) = Concat(H1, . . . ,HH)W O . The capacity of the attention mechanism and its ability to learn diverse syntactic and semantic relationships determine the success of transformers [77, 84, 17, 85, 32]. However, equations (1) and (2) implies that the dot-product attention assumes the features (qi1, . . . , qiD) in qi, as well as the features (kj1, . . . , qjD) in kj , are independent. Thus, the dot-product attention fail to capture the correlations between these features, limiting its representation capacity and inhibit the performance of transformers on practical tasks where there is no guarantee that independent features can learned from complex data. One solution to capture correlations between features qi and kj is to introduce covariance matrices into the formulation of the dot-product attention with the cost of significantly increasing of the computational complexity. Also, choosing good covariance matrices is difficult. 1.2 Contribution In this paper, we first establish a correspondence between self-attention and nonparametric kernel regression. Under this new perspective of self-attention, we explain the limitation of the dot-product self-attention that it may fail to capture correlations between the features in the query and key vectors. We then leverage the generalized Fourier integral theorems, which can automatically capture these correlations, and derive the generalized Fourier integral estimators for the nonparametric regression problem. Using this new density estimator, we propose the FourierFormer, a novel class of transformers that can capture correlations between features in the query and key vectors of self-attention. In summary, our contribution is three-fold: 1. We derive the formula of self-attention from solving a nonparametric kernel regression problem, thus providing a nonparametric regression interpretation to study and further develop self-attention. 2. We develop the generalized Fourier integral estimators for the nonparametric regression problem and provide theoretical guarantees for these estimator. 3. 
We propose the FourierFormer whose attentions use the generalized Fourier integral estimators to capture more efficiently correlations between features in the query and key vectors. Finally, we empirically show that the FourierFormer attains significantly better accuracy than the baseline transformer with the dot-product attention on a variety of tasks including the WikiText language modeling and ImageNet image classsification. We also demonstrate in our experiments that FourierFormer helps reduce the redundancy between attention heads. Organization We structure this paper as follows: In Section 2, we present the correspondence between self-attention and nonparametric kernel regression. In Section 3, we discuss the generalized Fourier integral estimators and define the FourierFormer. We validate and empirically analyze the advantages of FourierFormer in Section 4. We discuss related works in Section 5. The paper ends with concluding remarks. Technical proofs and more experimental details are provided in the Appendix. Notation For any N 2 N, we denote [N ] = {1, 2, . . . , N}. For any D 1, L1(RD) denotes the space of real-valued functions on RD that are integrable. For any two sequences {aN}N 1, {bN}N 1, we denote aN = O(bN ) to mean that aN CbN for all N 1 where C is some universal constant. 2 A Nonparametric Regression Interpretation of Self-attention In this section, we establish the connection between self-attention and nonparametric kernel regression. In particular, we derive the self-attention in equation (2) as a nonparametric kernel regression in which the key vectors kj and value vectors vj are training inputs and training targets, respectively, while the query vectors qi and the output vectors hi form a set of new inputs and their corresponding targets that need to be estimated, respectively, for i, j = 1, · · · , N . In general, we can view the training set {kj ,vj} for j 2 [N ] to come from the following nonparametric regression model: vj = f(kj) + "j , (3) where "1, . . . , "N are independent noises such that E("j) = 0. Furthermore, we consider a random design setting where the key vectors k1,k2, . . . ,kN are i.i.d. samples from the distribution that admits p as density function. By an abuse of notation, we also denote p as the joint density where the key and value vectors (v1,k1), . . . , (vN ,kN ) are i.i.d. samples from. Here, f is a true but unknown function and we would like to estimate it. Nadaraya–Watson estimator Our approach to estimate the function f is based on Nadaraya–Watson’s nonparametric kernel regression approach [50]. In particular, from the nonparametric regression model (3), we have E [vj |kj ] = f(kj) for all j 2 [N ]. Therefore, it is sufficient to estimate the conditional distribution of the value vectors given the key vectors. Given the density function p of the key vectors and the joint density p of the key and value vectors, for any pair of vectors (v,k) generate from model (3) we have E [v|k] = Z RD v · p(v|k)dv = Z v · p(v,k) p(k) dv. (4) The formulation (4) of the conditional expectation indicates that as long as we can estimate the joint density function p(v,k) and the marginal density function p(v), we are able to obtain an estimation for the conditional expectation and thus for the function f . This approach is widely known as Nadaraya–Watson’s nonparametric kernel regression approach. Kernel density estimator To estimate p(v,k) and p(k), we employ the kernel density estimation approach [66, 57]. 
In particular, by using the isotropic Gaussian kernel with bandwidth $\sigma$, we have the following estimators of $p(v,k)$ and $p(k)$:

$$\hat{p}_\sigma(v,k) = \frac{1}{N}\sum_{j=1}^{N}\varphi_\sigma(v - v_j)\,\varphi_\sigma(k - k_j), \qquad \hat{p}_\sigma(k) = \frac{1}{N}\sum_{j=1}^{N}\varphi_\sigma(k - k_j), \qquad (5)$$

where $\varphi_\sigma(\cdot)$ is the isotropic multivariate Gaussian density function with diagonal covariance matrix $\sigma^2 I_D$. Given the kernel density estimators (5), we obtain the following estimation of the function $f$:

$$\hat{f}_\sigma(k) = \int_{\mathbb{R}^{D}} v \cdot \frac{\hat{p}_\sigma(v,k)}{\hat{p}_\sigma(k)}\,dv = \int_{\mathbb{R}^{D}} v \cdot \frac{\sum_{j=1}^{N}\varphi_\sigma(v - v_j)\,\varphi_\sigma(k - k_j)}{\sum_{j=1}^{N}\varphi_\sigma(k - k_j)}\,dv = \frac{\sum_{j=1}^{N}\varphi_\sigma(k - k_j)\int v\,\varphi_\sigma(v - v_j)\,dv}{\sum_{j=1}^{N}\varphi_\sigma(k - k_j)} = \frac{\sum_{j=1}^{N} v_j\,\varphi_\sigma(k - k_j)}{\sum_{j=1}^{N}\varphi_\sigma(k - k_j)}. \qquad (6)$$

Connection between Self-Attention and nonparametric regression By plugging the query vectors $q_i$ into the function $\hat{f}_\sigma$ in equation (6), we obtain that

$$\hat{f}_\sigma(q_i) = \frac{\sum_{j} v_j \exp\!\left(-\|q_i - k_j\|^2/2\sigma^2\right)}{\sum_{j} \exp\!\left(-\|q_i - k_j\|^2/2\sigma^2\right)} = \frac{\sum_{j} v_j \exp\!\left[-\left(\|q_i\|^2 + \|k_j\|^2\right)/2\sigma^2\right]\exp\!\left(q_i^\top k_j/\sigma^2\right)}{\sum_{j'} \exp\!\left[-\left(\|q_i\|^2 + \|k_{j'}\|^2\right)/2\sigma^2\right]\exp\!\left(q_i^\top k_{j'}/\sigma^2\right)}. \qquad (7)$$

If we further assume that the keys $k_j$ are normalized, which is usually done in practice to stabilize the training of transformers [71], the value of $\hat{f}_\sigma(q_i)$ in equation (6) then becomes

$$\hat{f}_\sigma(q_i) = \frac{\sum_{j} v_j \exp\!\left(q_i^\top k_j/\sigma^2\right)}{\sum_{j} \exp\!\left(q_i^\top k_j/\sigma^2\right)} = \sum_{j=1}^{N}\mathrm{softmax}\!\left(q_i^\top k_j/\sigma^2\right) v_j. \qquad (8)$$

When we choose $\sigma^2 = \sqrt{D}$, where $D$ is the dimension of $q_i$ and $k_j$, equation (8) matches equation (2) of self-attention, namely, $\hat{f}_\sigma(q_i) = h_i$. Thus, we have shown that self-attention performs nonparametric regression using isotropic Gaussian kernels.

Remark 1 The assumption that $k_j$ is normalized is made only to recover the pairwise dot-product attention in transformers. In general, this assumption is not necessary. In fact, the isotropic Gaussian kernel in equation (7) is more desirable than the dot-product kernel in equation (8) of the pairwise dot-product attention, since the former is Lipschitz while the latter is not [37]. The Lipschitz constraint helps improve the robustness of the model [16, 81, 2] and stabilize the model training [48].

Limitation of Self-Attention From our nonparametric regression interpretation, self-attention is derived from the use of isotropic Gaussian kernels for kernel density estimation and nonparametric regression estimation, which may fail to capture the complex correlations between the $D$ features in $q_i$ and $k_j$ [88, 33]. Using multivariate Gaussian kernels with dense covariance matrices can help capture such correlations; however, choosing good covariance matrices is challenging and inefficient [87, 73, 11]. In the following section, we discuss the Fourier integral estimator and its use as a kernel for computing self-attention in order to overcome these limitations.

3 FourierFormer: Transformer via Generalized Fourier Integral Theorem

In the following, we introduce generalized Fourier integral theorems that are able to capture the complex interactions among the features of the queries and keys. We then apply these theorems to density estimation and nonparametric regression problems. We also establish the convergence rates of these estimators. Given these density estimators, we introduce a novel family of transformers, named FourierFormer, that integrates the generalized Fourier integral theorem into the dot-product attention step of the standard transformer.

3.1 Generalized Fourier Integral Theorems and Their Applications

The Fourier integral theorem is a beautiful result in mathematics [92, 7] and has been recently used in nonparametric mode clustering, the deconvolution problem, and generative modeling [33]. It is a combination of the Fourier transform and the inverse Fourier transform.
In particular, for any function p 2 L1(RD), the Fourier integral theorem is given by p(k) = 1 (2⇡)D Z RD Z RD cos(s>(k y))p(y)dyds = 1 ⇡D lim R!1 Z RD DY j=1 sin(R(kj yj)) (kj yj) p(y)dy, (9) where k = (k1, . . . , kD),y = (y1, . . . , yD), s = (s1, . . . , sD), and R is the radius. The detailed derivation of Equation (9) is in Appendix B.3. Equation (9) suggests that pR(k) := 1 ⇡D R RD QD j=1 sin(R(yj kj)) (yj kj) p(y)dy can be used as an estimator of the function p. Benefits of the Fourier integral over Gaussian kernel There are two important benefits of the estimator pR: (i) it can automatically preserve the correlated structure lying within p even when p is very complex and high dimensional function. It is in stark contrast to the standard kernel estimator built based on multivariate Gaussian kernel where we need to choose good covariance matrix in the multivariate Gaussian kernel to guarantee such estimator to work well. We note that as the standard soft-max Transformer is constructed based on the multivariate Gaussian kernel, the issue of choosing good covariance matrix in dot-product transformer is inevitable; (ii) The product of sinc kernels in the estimator pR does not decay to a point mass when R ! 1. It is in stark difference from the multivariate Gaussian kernel estimator, which converges to a point mass when the covariance matrix goes to 0. It indicates that pR is a non-trivial estimator of the function p. Finally, detailed illustrations of these benefits of the Fourier integral over Gaussian kernel in density estimation and nonparametric regression problems, which we have just shown to have connection to the self-attention in transformer, can be found in Section 8 in [33]. Generalized Fourier integral estimator Borrowing the above benefits of Fourier integral estimator pR, in the paper we would like to consider a generalization of that estimator, named generalized Fourier integral estimator, which is given by: p R(k) := R D AD Z RD DY j=1 ✓ sin(R(yj kj)) R(yj kj) ◆ p(y)dy, (10) where A := R R ⇣ sin(z) z ⌘ dz and : R ! R is a given function. When (k) = k for all k 2 RD, the generalized Fourier integral estimator p R becomes the Fourier integral estimator pR. Under appropriate conditions on the function (see Theorem 1 in Section 3.1.1 and Theorem 3 in Appendix C.1), the estimator p R converges to the true function p, namely, p(k) = lim R!1 p R(k) = limR!1 R D AD Z RD DY j=1 ✓ sin(R(yj kj)) R(yj kj) ◆ p(y)dy. (11) We name the above limit as generalized Fourier integral theorem. Furthermore, the estimator p R also inherits similar aforementioned benefits of the Fourier integral estimator pR. Therefore, we will use the generalized Fourier integral theorem as a building block for constructing density estimators and nonparametric regression estimators, which are crucial to develop the FourierFormer in Section 3.2. 3.1.1 Density Estimation via Generalized Fourier Integral Theorems We first apply the generalized Fourier integral theorem to the density estimation problem. To ease the presentation, we assume that k1,k2, . . . ,kN 2 RD are i.i.d. samples from a distribution admitting density function p where D 1 is the dimension. Inspired by the generalized Fourier integral theorem, we obtain the following generalized Fourier density estimator p N,R of p as follows: p N,R(k) := R D NAD NX i=1 DY j=1 ✓ sin(R(kj kij)) R(kj kij) ◆ , (12) where A = R R ⇣ sin(z) z ⌘ dz and ki = (ki1, . . . , kiD) for all i 2 [N ]. 
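As a concrete illustration (not code from the paper), the following is a minimal NumPy sketch of the generalized Fourier density estimator in equation (12), using φ(z) = z^4 as one admissible nonnegative choice; the radius R, the sample data, and the numerical computation of the constant A are illustrative assumptions only.

```python
import numpy as np

def phi(z):
    # one admissible choice; phi(z) = z recovers the original Fourier integral estimator
    return z ** 4

def sinc_ratio(x):
    # sin(x)/x with the removable singularity at 0 handled explicitly
    safe = np.where(x == 0, 1.0, x)
    return np.where(np.abs(x) < 1e-12, 1.0, np.sin(safe) / safe)

def fourier_density(samples, queries, R):
    """Generalized Fourier density estimator of equation (12).

    samples: (N, D) i.i.d. draws from the unknown density p
    queries: (M, D) points at which the estimator is evaluated
    """
    N, D = samples.shape
    # normalizing constant A = integral of phi(sin z / z) dz, here by a crude Riemann sum
    z = np.linspace(-200.0, 200.0, 400001)
    A = np.sum(phi(sinc_ratio(z))) * (z[1] - z[0])
    diffs = R * (queries[:, None, :] - samples[None, :, :])   # (M, N, D)
    kernel = np.prod(phi(sinc_ratio(diffs)), axis=-1)         # (M, N)
    return (R ** D) / (N * A ** D) * kernel.sum(axis=1)

rng = np.random.default_rng(0)
samples = rng.normal(size=(2000, 2))             # placeholder data from a standard 2-D Gaussian
queries = np.array([[0.0, 0.0], [1.0, 1.0]])
print(fourier_density(samples, queries, R=3.0))  # should roughly track the N(0, I) density values
```

Note that, unlike a Gaussian kernel density estimator, no covariance matrix is chosen here: the product of sinc-type factors is applied coordinate-wise, yet the resulting estimator can still recover correlated structure in p as R grows.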
To quantify the error between the generalized Fourier density estimator p n,R and the true density p, we utilize mean integrated squared errors (MISE) [91], which is given by: MISE(p N,R, p) := Z RD (p N,R(k) p(k)) 2 dk. (13) We start with the following bound on the MISE between p n,R and p. Theorem 1 Assume that R R (sin(z)/z)z j dz = 0 for all j 2 [m] and R R | (sin(z)/z)||z| m+1 dz < 1 for some m 2 N. Then, there exist universal constants C and C 0 depending on d and A such that MISE(p N,R, p) C Rm+1 + C 0 R D N . Proof of Theorem 1 is in Appendix D.1. A few comments are in order. First, by choosing R to balance the bias and variance in the bound of MISE in Theorem 1, we have the optimal R as R = O(N1/(D+m+1)). With that choice of R, the MISE rate of p N,R is O(N (m+1)/(D+m+1)). Second, when (z) = zl for l 4 and z 2 R, the assumptions in Theorem 1 are satisfied when m = 1. Under this case, the MISE rate of p N,R is O(N 2/(D+2)). However, these assumptions do not satisfy when (z) = zl and l 2 {1, 2, 3}, which is due to the limitation of the current proof technique of Theorem 1 that is based on Taylor expansion of the estimator p n,R. To address the limitation of the Taylor expansion technique, we utilize the Plancherel theorem in Fourier analysis to establish the MISE rate of p N,R when (z) = z l and l 2 {1, 2, 3}. The details of the theoretical analyses for such setting are in Appendix C. 3.2 FourierFormer: Transformers with Fourier Attentions Motivated by the preservation of the correlated structure of the function from the generalized Fourier integral theorem as well as the theoretical guarantees of density estimators, in this section we adapt the nonparametric regression interpretation of self-attention in Section 2 and propose the generalized Fourier nonparametric regression estimator in Section 3.2.1. We also establish the convergence properties of that estimator. Then, based on generalized Fourier nonparametric regression estimator, we develop the Fourier Attention and its corresponding FourierFormer in Section 3.2.2. 3.2.1 Nonparametric Regression via Generalized Fourier Integral Theorem We now discuss an application of the generalized Fourier integral theorems to the nonparametric regression setting (3), namely, we assume that (v1,k1), . . . , (vN ,kN ) are i.i.d. samples from the following nonparametric regression model: vj = f(kj) + "j , where "1, . . . , "N are independent noises such that E("j) = 0 and the key vectors k1,k2, . . . ,kN are i.i.d. samples from p. Given the generalized Fourier density estimator (12), following the argument in Section 2, the Nadaraya–Watson estimator of the function f based on the generalized Fourier density estimator is given by: fN,R(k) := PN i=1 vi QD j=1 ⇣ sin(R(kj kij)) R(kj kij) ⌘ PN i=1 QD j=1 ⇣ sin(R(kj kij)) R(kj kij) ⌘ . (14) The main difference between the generalized Fourier nonparametric regression estimator fN,R in equation (14) and the estimator bf in equation (6) is that the estimator fN,R utilizes the generalized Fourier density estimator to estimate the conditional distribution of the value vectors given the key vectors instead of the isotropic Gaussian kernel density estimator as in bf . As we highlighted in Section 3, an important benefit of the generalized Fourier density estimator is that it can capture the complex dependencies of the features of the value vectors and the key vectors while the Gaussian kernel needs to have good covariance matrix to do that, which is computationally expensive in practice. 
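To make the contrast with the Gaussian estimator in equation (6) concrete, here is a small NumPy sketch (an illustration under assumed shapes and parameter values, not the paper's implementation) that computes the attention-style weights of both Nadaraya–Watson estimators: the Gaussian-kernel weights, which reduce to softmax dot-product weights once the keys are normalized, and the generalized Fourier weights of equation (14) with φ(z) = z^4.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gaussian_nw(queries, keys, values, sigma2):
    """Nadaraya-Watson estimate with the isotropic Gaussian kernel of equation (6)."""
    sq_dists = ((queries[:, None, :] - keys[None, :, :]) ** 2).sum(-1)   # ||q_i - k_j||^2
    weights = softmax(-sq_dists / (2.0 * sigma2), axis=-1)               # kernel normalization cancels
    return weights @ values

def fourier_nw(queries, keys, values, R, phi=lambda z: z ** 4):
    """Nadaraya-Watson estimate with the generalized Fourier kernel of equation (14)."""
    x = R * (queries[:, None, :] - keys[None, :, :])
    safe = np.where(x == 0, 1.0, x)
    ratio = np.where(np.abs(x) < 1e-12, 1.0, np.sin(safe) / safe)
    kernel = np.prod(phi(ratio), axis=-1)                 # unnormalized weights over the keys
    weights = kernel / kernel.sum(axis=-1, keepdims=True)
    return weights @ values

rng = np.random.default_rng(0)
N, D, Dv = 8, 4, 3                                        # placeholder sizes
keys, values = rng.normal(size=(N, D)), rng.normal(size=(N, Dv))
queries = rng.normal(size=(N, D))
print(gaussian_nw(queries, keys, values, sigma2=np.sqrt(D)))
print(fourier_nw(queries, keys, values, R=2.0))
```

The two functions differ only in how the pairwise weights are formed; this is exactly the substitution that turns dot-product attention into the Fourier attention defined next.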
We now have the following result establishing the mean square error (MSE) of fN,R when Dv = 1. Theorem 2 Assume that R R ⇣ sin(z) z ⌘ z j dz = 0 for all 1 j m and R R ⇣ sin(z) z ⌘ |z|jdz < 1 for any m + 1 j 2m + 2 for some m 2 N. Then, for any k 2 RD, when Dv = 1 there exist universal constants C1, C2, C3, C4 such that the following holds: E ⇥ (fN,R(k) f(k)) 2 ⇤ ✓ C1 R2(m+1) + (f(k) + C2)RD N ◆ p 2(k)J(R) , where J(R) = 1 1p2(k) ⇣ C3 R2(m+1) + C4R d log(NR) N ⌘ . Here, the outer expectation is taken with respect to the key vectors k1, . . . ,kN and the noises "1, . . . , "N . Proof of Theorem 2 is in Appendix D.3. A few comments with Theorem 2 are in order. First, by choosing R to balance the bias and variance in the bound of the MSE of the nonparametric generalized Fourier estimator fN,R, we have the optimal radius R as R = O(N 1 2(m+1)+D ). With that choice of the optimal radius R, the rate of fN,R is O(N 2(m+1)D+2(m+1) ). Second, when (z) = zl for l 6, the assumption on the function of Theorem 2 is satisfied with m = 1. Under this case, the rate of fN,R becomes O(N 4 D+4 ). In Appendix C, we also provide the rate of fN,R when (z) = zl for some l 5, which includes the original Fourier integral theorem. 3.2.2 FourierFormer Given the generalized Fourier nonparametric regression estimator fN,R in equation (14), by plugging the query values q1, . . . , qN into that function, we obtain the following definition of the Fourier attention: Definition 1 (Fourier Attention) A Fourier attention is a multi-head attention that does nonparametric regression using the generalized Fourier nonparametric regression estimator fN,R. The output ĥi of the Fourier attention is then computed as ĥi := fN,R(qi) = PN i=1 vi QD j=1 ⇣ sin(R(qij kij)) R(qij kij) ⌘ PN i=1 QD j=1 ⇣ sin(R(qij kij)) R(qij kij) ⌘ 8 i 2 [N ]. (15) Given the Fourier Attention in Definition 1, we then give the definition of FourierFormer as follows. Definition 2 (FourierFormer) A FourierFormer is a transformer that uses Fourier attention to capture dependency between tokens in the input sequence and the correlation between features in each token. Remark 2 (The Nonnegativity of the Fourier Kernel) The density estimation via generalized Fourier integral theorem in Section 3.1.1 does not require the generalized Fourier density estimator to be nonnegative. However, empirically, we observe that negative density estimator can cause instability in training the FourierFormer. Thus, in FourierFormer, we choose the function to be a nonnegative function to enforce the density estimator to be nonnegative. In particular, we choose to be power functions of the form (x) = x2m, where m is an positive integer. Note that when m = 1 and m = 2, the kernels in our generalized Fourier integral estimators are the well-known Fejer-de la Vallee Poussin and Jackson-de la Vallee Poussin kernels [20]. 3.3 An Efficient Implementation of the Fourier Attention The Fourier kernel is implemented efficiently in the C++/CUDA extension developed by Pytorch [58]. The idea is similar to the function cdist [58], which computes the p-norm distance between each pair of the two collections of row vectors. In our case, we aim to compute kernel functions that represent a Fourier attention in Definition 1. The core of this implementation is the following Fourier metric function df : df (qi,kj) = DY d=1 ✓ sin(R(qid kjd)) R(qid kjd) ◆ . 
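Before describing that optimized kernel, the following is a plain PyTorch sketch of df and of the resulting Fourier attention output in equation (15). It is an unoptimized reference only: function names and sizes are illustrative, φ(z) = z^4 is assumed, and gradients with respect to q, k, and R are left to autograd rather than computed by a hand-written backward pass as in the paper's C++/CUDA implementation.

```python
import torch

def fourier_metric(q, k, R, phi_power=4):
    """Naive all-pairs d_f(q_i, k_j) = prod_d phi(sin(R(q_id - k_jd)) / (R(q_id - k_jd)))."""
    x = R * (q[:, None, :] - k[None, :, :])                   # (N, N, D)
    # guard the removable singularity at 0; exact zeros are not expected for continuous inputs
    ratio = torch.where(x.abs() < 1e-6, torch.ones_like(x), torch.sin(x) / x)
    return ratio.pow(phi_power).prod(dim=-1)                  # (N, N), nonnegative for even powers

def fourier_attention(q, k, v, R):
    """Fourier attention output of equation (15): normalized d_f weights applied to the values."""
    d = fourier_metric(q, k, R)
    weights = d / d.sum(dim=-1, keepdim=True)
    return weights @ v                                        # (N, Dv)

N, D, Dv = 16, 8, 8                                           # placeholder sizes
q, k, v = torch.randn(N, D), torch.randn(N, D), torch.randn(N, Dv)
R = torch.nn.Parameter(torch.tensor(1.0))                     # R treated as a learnable scalar, as in Section 4
out = fourier_attention(q, k, v, R)
out.sum().backward()                                          # autograd supplies the gradient w.r.t. R
print(out.shape, R.grad)
```

This naive version materializes the full (N, N, D) difference tensor, which is why the fused implementation described below is needed to keep the cost comparable to standard dot-product attention.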
We directly implement df as a torch.autograd.Function [58] in which we provide an efficient way to compute forward and backward function (df and gradient of df ). While the implementation of the forward function is straight forward, the backward function is more tricky since we need to optimize the code to compute the gradient of df w.r.t to variables q, k, and R all at once. We can develop the backward function with highly parallel computation by exploiting GPU architecture and utilizing the reduction technique. The computational time is comparable to function cdist; thus, our FourierFormer implementation is as computationally time-efficient. 4 Experimental Results In this section, we numerically justify the advantage of FourierFormer over the baseline dot-product transformer on two large-scale tasks: language modeling on WikiText-103 [46] (Section 4.1) and image classification on ImageNet [22, 67] (Section 4.2), time series classification on the UEA benchmark [5] (Section 4.3), and reinforcement learning on the D4RL Benchmark [29] (Section 4.4), and the machine translation on the IWSLT’ 14 De-En [10] (Section 4.5). We aim to show that: (i) FourierFormer achieves better accuracy than the baseline transformer on a variety of practical tasks with different data modalities, and (ii) FourierFormer helps reduce head redundancy compared to the baseline transformer (Section 4.6). Throughout the section, we compare FourierFormers with the baseline dot-product transformers of the same configuration. In all experiments, we made the constant R in Fourier attention (see equation (16)) to be a learnable scalar and set choose the function (x) = x4 (see Remark 2). All of our results are averaged over 5 runs with different seeds. The details on the models and training are provided in Appendix A. Moreover, additional experiments results are provided in Appendix E. Our PyTorch code with documentation can be found at https://github.com/minhtannguyen/FourierFormer_NeurIPS. 4.1 Language Modeling on WikiText-103 We report the validation and test perplexity (PPL) of FourierFormer versus the baseline transformer with the dot-product attention in Table 1. FourierFormers attain much better PPL than the baselines in both small and medium configurations. For the small configuration, the improvements of FourierFormer over the baseline are 1.29 PPL in validation and 1.44 PPL in test. For the medium configuration, these improvements are 1.39 PPL in validation and 1.59 PPL in test. These results suggest that the advantage of FourierFormer over the baseline dot-product transformer grows with the model’s size. This meets our expectation because larger models has larger query and key dimensions, e.g. the language model with medium configuration in this experiment has the query and key dimension of 256 versus 128 as in the language model with small configuration. Since the advantage of FourierFormer results from the property that FourierFormer can capture correlation between features in query and key vectors, the larger the query and key dimensions are, the more advantage FourierFormer has. 4.2 Image Classification on ImageNet In the Imagenet classification task, we illustrates the benefits of Fourierformers in different data modalities. We summarize our models’ results in Table 2. Same as in the language modeling experiment, for this image classification task, the Deit model equipped with FourierFormer significantly outperforms the baseline Deit dot-product transformer [79] in both top-1 and top-5 accuracy. 
This result suggests that the advantage of FourierFormer over the baseline dot-product transformer holds across different data modalities. 4.3 UEA Time Series Classification To evaluate Fourierformers on temporal sequences, we compare the accuracy of the our models and the baseline softmax transformers trained on 10 datasets in the the UEA Time Series Classification Archive benchmark [5]. We summarize our results in Table 3. We observe show that Fourierformers outperforms softmax baselines in 7 out of 10 tasks and yields significantly better accuracy than the softmax transformer on average, showing the our models benefits when trained on temporal data. 4.4 Reinforcement learning on the D4RL benchmark We also examine the performance of our Fourierformers in reinforcement learning. In Table 4, we verify the advantage of decision FourierFormer over the baseline decision transformer [12] on the continuous control tasks from the D4RL benchmark [29]. The decision FourierFormer is the decision transformer with the Fourier attention instead of the softmax attention. On this benchmark, our decision FourierFormer significantly outperforms the baseline decision transformer on 8 out of 9 tasks and on average across tasks. Each experiment result averaged over 5 runs with different random seeds. We follow the architecture and training configuration from [93]. 4.5 Machine Translation on IWSLT’ 14 De-En We demonstrate the performance of Fourierformer on the IWSLT’ 14 De-En [10] neural machine translation task, which has different inputs’ the sequence lengths. Table 5 shows that the FourierFormer achieves better BLUE scores than the softmax baseline. 4.6 FourierFormer Helps Reducing Head Redundancy To study the diversity between attention heads, given the model trained for the WikiText-103 language modeling task, we compute the average L2 distance between heads in each layer. We show the layer-average mean and variance of distances between heads in Table 6. Results in Table 6 shows that FourierFormer obtains greater L2 distance between attention heads than the baseline transformer with the dot-product attention and thus helps reduce the head redundancy. Note that we use the small configuration as specified in Section 4.1 for both models. 5 Related Work Interpretation of Attention Mechanism in Transformers Recent works have tried to gain an understanding of transformer’s attention from different perspectives. [80] considers attention as applying kernel smoother over the inputs. Extending this kernel approach, [35, 15, 52, 89, 54] linearize the softmax kernel in dot-product attention and propose a family of efficient transformers with linear computational and memory complexity. [9] then shows that these linear transformers are comparable to a Petrov-Galerkin projection [64], suggesting that the softmax normalization in the dot-product attention is sufficient but not necessary. Other works provide an understanding of attention in transformers via ordinary/partial differential equation include [45, 69]. In addition, [51, 75, 30, 96, 53] relate attentions in transformers to a Gaussian mixture models. Several works also connect the attention mechanism to graph-structured learning and message passing in graphical models [90, 72, 39]. Our work focuses on deriving the connection between self-attention and nonparametric kernel regression and exploring better regression estimator, such as the generalized Fourier nonparametric regression estimator, to improve the performance of transformers. 
Redundancy in Transformers [19, 47, 25] show that neurons and attention heads in the pre-trained transformer are redundant and can be removed when applied on a downstream task. By studying the contextualized embeddings in pre-trained networks, it has been demonstrated that the learned representations from these redundant models are highly anisotropic [49, 26]. Furthermore, [70, 74, 86, 68] employ knowledge distillation and sparse approximation to enhance the efficiency of transformers. Our FourierFormer is complementary to these methods and can be combined with them. 6 Concluding Remarks In this paper, we establish the correspondence between the nonparametric kernel regression and the self-attention in transformer. We then develop the generalized Fourier integral estimators and propose the FourierFormer, a novel class of transformers that use the generalized Fourier integral estimators to construct their attentions for efficiently capturing the correlations between features in the query and key vectors. We theoretically prove the approximation guarantees of the generalized Fourier integral estimators and empirically validate the advantage of FourierFormer over the baseline transformer with the dot-product attention in terms of accuracy and head redundancy reduction. It is interesting to incorporate robust kernels into the nonparametric regression framework of FourierFormer to enhance the robustness of the model under data perturbation and adversarial attacks. A limitation of FourierFormer is that it still has the same quadratic computational and memory complexity as the baseline transformer with the dot-product attention. We leave the development of the linear version of FourierFormer that achieves linear computational and memory complexity as future work. It is worth noting that there is no potential negative societal impacts of FourierFormer. Acknowledgements This material is based on research sponsored by the AFOSR MURI FA9550-18-1-0502, the ONR grant N00014-20-1-2093, the MURI N00014-20-1-2787, and the NSF under Grant# 2030859 to the Computing Research Association for the CIFellows Project (CIF2020-UCLA-38). NH acknowledges support from the NSF IFML 2019844 and the NSF AI Institute for Foundations of Machine Learning.
1. What is the focus and contribution of the paper on attention mechanisms? 2. What are the strengths of the proposed approach, particularly in terms of its connection to kernel density estimation? 3. What are the weaknesses of the paper regarding experimentation and the determination of a specific value? 4. Do you have any questions or concerns about the replacement of k with q in the derivation? 5. What is the difference between the proposed method and Fnet, which uses Fourier transforms instead of softmax attention? 6. Have the authors adequately discussed the limitations of their proposed approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes an alternative to softmax attention using the sinc function. The authors were well-motivated using the sinc function from the Fourier integral estimator and provided theoretical support for approximation error. They have done experiments on two datasets showing improvement over softmax attention. Strengths And Weaknesses Strengths: The paper is a pleasant read and clear to understand. The established connection between non-parametric kernel density estimator and self-attention. The authors have provided well-motivated intuition for their proposed approach. Related work has been moderately covered. The evaluation is convincing to show the benefit of sinc-based attention. Weaknesses: The authors only experimented with one choice of ϕ . It would be great to see what other suitable candidate of ϕ is possible. How do they determine the value of R ? Is it dataset-specific? There are no quantitative results on the runtime of the proposed attention mechanism. Questions What is the connection and differences between the proposed sinc function-based attention and the Fnet [1], which uses Fourier transform instead of softmax attention? Eq. 6. is derived upon considering only v and k . How is it justified to replace k with q in line 119? What was the rationale and practical reason to consider k j s are i.i.d? In practice, when they went through an MLP, the channels became dependent on each other. [1] Lee-Thorp, James, et al. "Fnet: Mixing tokens with Fourier transforms." arXiv preprint arXiv:2105.03824 (2021). Limitations Authors have not adequately commented on the known limitations.
Title FourierFormer: Transformer Meets Generalized Fourier Integral Theorem Abstract Multi-head attention empowers the recent success of transformers, the state-of-theart models that have achieved remarkable success in sequence modeling and beyond. These attention mechanisms compute the pairwise dot products between the queries and keys, which results from the use of unnormalized Gaussian kernels with the assumption that the queries follow a mixture of Gaussian distribution. There is no guarantee that this assumption is valid in practice. In response, we first interpret attention in transformers as a nonparametric kernel regression. We then propose the FourierFormer, a new class of transformers in which the dot-product kernels are replaced by the novel generalized Fourier integral kernels. Different from the dot-product kernels, where we need to choose a good covariance matrix to capture the dependency of the features of data, the generalized Fourier integral kernels can automatically capture such dependency and remove the need to tune the covariance matrix. We theoretically prove that our proposed Fourier integral kernels can efficiently approximate any key and query distributions. Compared to the conventional transformers with dot-product attention, FourierFormers attain better accuracy and reduce the redundancy between attention heads. We empirically corroborate the advantages of FourierFormers over the baseline transformers in a variety of practical applications including language modeling and image classification. 1 Introduction Transformers [83] are powerful neural networks that have achieved tremendous success in many areas of machine learning [40, 76, 36] and become the state-of-the-art model on a wide range of applications across different data modalities, from language [23, 1, 18, 13, 62, 4, 8, 21] to images [24, 43, 78, 63, 59, 27], videos [3, 44], point clouds [97, 31], and protein sequence [65, 34]. In addition to their excellent performance on supervised learning tasks, transformers can also effectively transfer the learned knowledge from a pretraining task to new tasks with limited or no supervision [60, 61, 23, 94, 42]. At the core of transformers is the dot-product self-attention, which ⇤ Co-first authors. ⇤⇤ Co-last authors. Please correspond to: tanmnguyen89@ucla.edu 36th Conference on Neural Information Processing Systems (NeurIPS 2022). mainly accounts for the success of transformer models [14, 56, 41]. This dot-product self-attention learn self-alignment between tokens in an input sequence by estimating the relative importance of a given token with respect to all other tokens. It then transform each token into a weighted average of the feature representations of other tokens where the weight is proportional to a importance score between each pair of tokens. The importance scores in self-attention enable a token to attend to other tokens in the sequence, thus capturing the contextual representation [6, 83, 38]. 1.1 Self-Attention Given an input sequence X := [x1, · · · ,xN ]> 2 RN⇥Dx of N feature vectors, self-attention computes the output sequence H from X as follows: Step 1: Projecting the input sequence into different subspaces. The input sequence X is transformed into the query matrix Q, the key matrix K, and the value matrix V via three linear transformations Q = XW>Q;K = XW > K ;V = XW > V , where WQ,WK 2 RD⇥Dx , and WV 2 RDv⇥Dx are the weight matrices. 
We denote Q := [q1, · · · , qN ]>,K := [k1, · · · ,kN ]>, and V := [v1, · · · ,vN ]>, where the vectors qi,ki,vi for i = 1, · · · , N are the query, key, and value vectors, respectively. Step 2: Computing the output as a weighted average. The output sequence H := [h1, · · · ,hN ]> is then given by H = softmax ⇣ QK > / p D ⌘ V := AV, (1) where the softmax function is applied to each row of the matrix (QK>)/ p D. For each query vector qi, i = 1, · · · , N , Eqn. (1) can be written in the vector form to compute the output vector hi as follows hi = NX j=1 softmax ⇣ q>i kj/ p D ⌘ vj := NX j=1 aijvj . (2) The matrix A 2 RN⇥N and its component aij for i, j = 1, · · · , N are the attention matrix and attention scores, respectively. The self-attention computed by equations (1) and (2) is called the dotproduct attention or softmax attention. In our paper, we refer a transformer that uses this attention as the baseline transformer with the dot-product attention or the dot-product transformer. The structure of the attention matrix A after training governs the ability of the self-attention to capture contextual representation for each token. Multi-head Attention Each output sequence H forms an attention head. Multi-head attention concatenates multiple heads to compute the final output. Let H be the number of heads and W O 2 RHDv⇥HDv be the projection matrix for the output. The multi-head attention is defined as MultiHead({Q,K,V}Hi=1) = Concat(H1, . . . ,HH)W O . The capacity of the attention mechanism and its ability to learn diverse syntactic and semantic relationships determine the success of transformers [77, 84, 17, 85, 32]. However, equations (1) and (2) implies that the dot-product attention assumes the features (qi1, . . . , qiD) in qi, as well as the features (kj1, . . . , qjD) in kj , are independent. Thus, the dot-product attention fail to capture the correlations between these features, limiting its representation capacity and inhibit the performance of transformers on practical tasks where there is no guarantee that independent features can learned from complex data. One solution to capture correlations between features qi and kj is to introduce covariance matrices into the formulation of the dot-product attention with the cost of significantly increasing of the computational complexity. Also, choosing good covariance matrices is difficult. 1.2 Contribution In this paper, we first establish a correspondence between self-attention and nonparametric kernel regression. Under this new perspective of self-attention, we explain the limitation of the dot-product self-attention that it may fail to capture correlations between the features in the query and key vectors. We then leverage the generalized Fourier integral theorems, which can automatically capture these correlations, and derive the generalized Fourier integral estimators for the nonparametric regression problem. Using this new density estimator, we propose the FourierFormer, a novel class of transformers that can capture correlations between features in the query and key vectors of self-attention. In summary, our contribution is three-fold: 1. We derive the formula of self-attention from solving a nonparametric kernel regression problem, thus providing a nonparametric regression interpretation to study and further develop self-attention. 2. We develop the generalized Fourier integral estimators for the nonparametric regression problem and provide theoretical guarantees for these estimator. 3. 
We propose the FourierFormer whose attentions use the generalized Fourier integral estimators to capture more efficiently correlations between features in the query and key vectors. Finally, we empirically show that the FourierFormer attains significantly better accuracy than the baseline transformer with the dot-product attention on a variety of tasks including the WikiText language modeling and ImageNet image classsification. We also demonstrate in our experiments that FourierFormer helps reduce the redundancy between attention heads. Organization We structure this paper as follows: In Section 2, we present the correspondence between self-attention and nonparametric kernel regression. In Section 3, we discuss the generalized Fourier integral estimators and define the FourierFormer. We validate and empirically analyze the advantages of FourierFormer in Section 4. We discuss related works in Section 5. The paper ends with concluding remarks. Technical proofs and more experimental details are provided in the Appendix. Notation For any N 2 N, we denote [N ] = {1, 2, . . . , N}. For any D 1, L1(RD) denotes the space of real-valued functions on RD that are integrable. For any two sequences {aN}N 1, {bN}N 1, we denote aN = O(bN ) to mean that aN CbN for all N 1 where C is some universal constant. 2 A Nonparametric Regression Interpretation of Self-attention In this section, we establish the connection between self-attention and nonparametric kernel regression. In particular, we derive the self-attention in equation (2) as a nonparametric kernel regression in which the key vectors kj and value vectors vj are training inputs and training targets, respectively, while the query vectors qi and the output vectors hi form a set of new inputs and their corresponding targets that need to be estimated, respectively, for i, j = 1, · · · , N . In general, we can view the training set {kj ,vj} for j 2 [N ] to come from the following nonparametric regression model: vj = f(kj) + "j , (3) where "1, . . . , "N are independent noises such that E("j) = 0. Furthermore, we consider a random design setting where the key vectors k1,k2, . . . ,kN are i.i.d. samples from the distribution that admits p as density function. By an abuse of notation, we also denote p as the joint density where the key and value vectors (v1,k1), . . . , (vN ,kN ) are i.i.d. samples from. Here, f is a true but unknown function and we would like to estimate it. Nadaraya–Watson estimator Our approach to estimate the function f is based on Nadaraya–Watson’s nonparametric kernel regression approach [50]. In particular, from the nonparametric regression model (3), we have E [vj |kj ] = f(kj) for all j 2 [N ]. Therefore, it is sufficient to estimate the conditional distribution of the value vectors given the key vectors. Given the density function p of the key vectors and the joint density p of the key and value vectors, for any pair of vectors (v,k) generate from model (3) we have E [v|k] = Z RD v · p(v|k)dv = Z v · p(v,k) p(k) dv. (4) The formulation (4) of the conditional expectation indicates that as long as we can estimate the joint density function p(v,k) and the marginal density function p(v), we are able to obtain an estimation for the conditional expectation and thus for the function f . This approach is widely known as Nadaraya–Watson’s nonparametric kernel regression approach. Kernel density estimator To estimate p(v,k) and p(k), we employ the kernel density estimation approach [66, 57]. 
In particular, by using the isotropic Gaussian kernel with bandwidth , we have the following estimators of p(v,k) and p(k): p̂ (v,k) = 1 N NX j=1 ' (v vj)' (k kj), p̂ (k) = 1 N NX j=1 ' (k kj), (5) where ' (.) is the isotropic multivariate Gaussian density function with diagonal covariance matrix 2 ID. Given the kernel density estimators (5), we obtain the following estimation of the function f : bf (k) = Z RD v · p̂ (v,k) p̂ (k) dv = Z RD v · PN j=1 ' (v vj)' (k kj)PN j=1 ' (k kj) dv = PN j=1 ' (k kj) R v · ' (v vj)dv PN j=1 ' (k kj) = PN j=1 vj' (k kj)PN j=1 ' (k kj) . (6) Connection between Self-Attention and nonparametric regression By plugging the query vectors qi into the function bf in equation (6), we obtain that bf (qi) = PN j vj exp kqi kjk2/2 2 PN j exp ( kqi kjk 2/2 2) = PN j vj exp ⇥ kqik2 + kkjk2 /2 2 ⇤ exp qik>j / 2 PN j exp [ (kqik 2 + kkj0k2) /2 2] exp qik>j / 2 . (7) If we further assume that the keys kj are normalized, which is usually done in practice to stabilize the training of transformers [71], the value of bf (qi) in equation (6) then becomes bf (qi) = PN j vj exp qik>j / 2 PN j exp qik>j / 2 = NX j=1 softmax ⇣ q>i kj/ 2 ⌘ vj . (8) When we choose 2 = p D where D is the dimension of qi and kj , equation (8) matches equation (2) of self-attention, namely, bf (qi) = hi. Thus, we have shown that self-attention performs nonparametric regression using isotropic Gaussian kernels. Remark 1 The assumption that kj is normalized is to recover the pairwise dot-product attention in transformers. In general, this assumption is not necessary. In fact, the isotropic Gaussian kernel in equation (7) is more desirable than the dot-product kernel in equation (8) of the pairwise dot-product attention since the former is Lipschitz while the later is not Lipschitz [37]. The Lipschitz constraint helps improve the robustness of the model [16, 81, 2] and stabilize the model training [48]. Limitation of Self-Attention From our nonparametric regression interpretation, self-attention is derived from the use of isotropic Gaussian kernels for kernel density estimation and nonparametric regression estimation, which may fail to capture the complex correlations between D features in qi and kj [88, 33]. Using multivariate Gaussian kernels with dense covariance matrices can help capture such correlations; however, choosing good covariance matrices is challenging and inefficient [87, 73, 11]. In the following section, we discuss the Fourier integral estimator and its use as a kernel for computing self-attention in order to overcome these limitations. 3 FourierFormer: Transformer via Generalized Fourier Integral Theorem In the following, we introduce generalized integral theorems that are able to capture the complex interactions among the features of the queries and keys. We then apply these theorems to density estimation and nonparametric regression problems. We also establish the convergence rates of these estimators. Given these density estimators, we introduce a novel family of transformers, named FourierFormer, that integrates the generalized Fourier integral theorem into the dot-product attention step of the standard transformer. 3.1 Generalized Fourier Integral Theorems and Their Applications The Fourier integral theorem is a beautiful result in mathematics [92, 7] and has been recently used in nonparametric mode clustering, deconvolution problem, and generative modeling [33]. It is a combination of Fourier transform and Fourier inverse transform. 
In particular, for any function $p \in L^1(\mathbb{R}^D)$, the Fourier integral theorem is given by
$$p(k) = \frac{1}{(2\pi)^D} \int_{\mathbb{R}^D} \int_{\mathbb{R}^D} \cos\!\left(s^\top (k - y)\right) p(y)\, dy\, ds = \frac{1}{\pi^D} \lim_{R \to \infty} \int_{\mathbb{R}^D} \prod_{j=1}^{D} \frac{\sin\!\left(R(k_j - y_j)\right)}{k_j - y_j}\, p(y)\, dy, \qquad (9)$$
where $k = (k_1, \ldots, k_D)$, $y = (y_1, \ldots, y_D)$, $s = (s_1, \ldots, s_D)$, and $R$ is the radius. The detailed derivation of equation (9) is in Appendix B.3. Equation (9) suggests that
$$p_R(k) := \frac{1}{\pi^D} \int_{\mathbb{R}^D} \prod_{j=1}^{D} \frac{\sin\!\left(R(y_j - k_j)\right)}{y_j - k_j}\, p(y)\, dy$$
can be used as an estimator of the function $p$.

Benefits of the Fourier integral over the Gaussian kernel: There are two important benefits of the estimator $p_R$: (i) it automatically preserves the correlated structure within $p$, even when $p$ is a very complex and high-dimensional function. This is in stark contrast to the standard kernel estimator built from a multivariate Gaussian kernel, where we need to choose a good covariance matrix for the estimator to work well. We note that, since the standard softmax transformer is constructed from the multivariate Gaussian kernel, the issue of choosing a good covariance matrix in the dot-product transformer is inevitable; (ii) the product of sinc kernels in the estimator $p_R$ does not decay to a point mass as $R \to \infty$. This is in stark contrast to the multivariate Gaussian kernel estimator, which converges to a point mass as the covariance matrix goes to 0. It indicates that $p_R$ is a non-trivial estimator of the function $p$. Detailed illustrations of these benefits of the Fourier integral over the Gaussian kernel in density estimation and nonparametric regression, which we have just shown to be connected to self-attention in transformers, can be found in Section 8 of [33].

Generalized Fourier integral estimator: Building on these benefits of the Fourier integral estimator $p_R$, in this paper we consider a generalization of that estimator, named the generalized Fourier integral estimator, which is given by
$$p_R^{\phi}(k) := \frac{R^D}{A^D} \int_{\mathbb{R}^D} \prod_{j=1}^{D} \phi\!\left(\frac{\sin\!\left(R(y_j - k_j)\right)}{R(y_j - k_j)}\right) p(y)\, dy, \qquad (10)$$
where $A := \int_{\mathbb{R}} \phi\!\left(\frac{\sin z}{z}\right) dz$ and $\phi: \mathbb{R} \to \mathbb{R}$ is a given function. When $\phi(k) = k$ for all $k \in \mathbb{R}$, the generalized Fourier integral estimator $p_R^{\phi}$ reduces to the Fourier integral estimator $p_R$. Under appropriate conditions on the function $\phi$ (see Theorem 1 in Section 3.1.1 and Theorem 3 in Appendix C.1), the estimator $p_R^{\phi}$ converges to the true function $p$, namely,
$$p(k) = \lim_{R \to \infty} p_R^{\phi}(k) = \lim_{R \to \infty} \frac{R^D}{A^D} \int_{\mathbb{R}^D} \prod_{j=1}^{D} \phi\!\left(\frac{\sin\!\left(R(y_j - k_j)\right)}{R(y_j - k_j)}\right) p(y)\, dy. \qquad (11)$$
We call this limit the generalized Fourier integral theorem. Furthermore, the estimator $p_R^{\phi}$ inherits the aforementioned benefits of the Fourier integral estimator $p_R$. Therefore, we will use the generalized Fourier integral theorem as a building block for constructing density estimators and nonparametric regression estimators, which are crucial to developing the FourierFormer in Section 3.2.

3.1.1 Density Estimation via Generalized Fourier Integral Theorems

We first apply the generalized Fourier integral theorem to the density estimation problem. To ease the presentation, we assume that $k_1, k_2, \ldots, k_N \in \mathbb{R}^D$ are i.i.d. samples from a distribution admitting density function $p$, where $D \geq 1$ is the dimension. Inspired by the generalized Fourier integral theorem, we obtain the following generalized Fourier density estimator $p_{N,R}^{\phi}$ of $p$:
$$p_{N,R}^{\phi}(k) := \frac{R^D}{N A^D} \sum_{i=1}^{N} \prod_{j=1}^{D} \phi\!\left(\frac{\sin\!\left(R(k_j - k_{ij})\right)}{R(k_j - k_{ij})}\right), \qquad (12)$$
where $A = \int_{\mathbb{R}} \phi\!\left(\frac{\sin z}{z}\right) dz$ and $k_i = (k_{i1}, \ldots, k_{iD})$ for all $i \in [N]$.
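For illustration, the following NumPy sketch implements the estimator in equation (12) on assumed toy two-dimensional data, with the choice $\phi(z) = z^4$ (one admissible, nonnegative choice discussed later in Remark 2). The function names, sample sizes, and Monte Carlo setup are our own illustrative assumptions, not the paper's released code.

```python
# Sketch of the generalized Fourier density estimator of equation (12):
#   p^phi_{N,R}(k) = R^D / (N A^D) * sum_i prod_j phi( sin(R(k_j - k_ij)) / (R(k_j - k_ij)) ),
# with A = \int phi(sin(z)/z) dz.  Here phi(z) = z**4 (an illustrative, nonnegative choice).
import numpy as np

def phi(z):
    return z ** 4

def sinc_ratio(x):
    """sin(x)/x with the removable singularity at x = 0 filled in."""
    near_zero = np.abs(x) < 1e-12
    safe_x = np.where(near_zero, 1.0, x)
    return np.where(near_zero, 1.0, np.sin(x) / safe_x)

# Normalizing constant A, computed numerically once for this choice of phi.
z = np.linspace(-200.0, 200.0, 400001)
A = np.trapz(phi(sinc_ratio(z)), z)

def fourier_density(k, samples, R):
    """k: (D,) evaluation point; samples: (N, D) i.i.d. draws; returns p^phi_{N,R}(k)."""
    N, D = samples.shape
    u = R * (k[None, :] - samples)                      # (N, D)
    weights = np.prod(phi(sinc_ratio(u)), axis=1)       # (N,)
    return (R ** D) / (N * A ** D) * weights.sum()

rng = np.random.default_rng(0)
samples = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=5000)
print(fourier_density(np.array([0.0, 0.0]), samples, R=3.0))
# Compare with the true correlated-Gaussian density at the origin,
# 1 / (2 * pi * sqrt(det Sigma)) = 1 / (2 * pi * 0.6) ~ 0.265.
```

Note that no covariance matrix is tuned anywhere in this sketch; the correlation between the two coordinates is handled by the estimator itself, which is the point of the construction.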
To quantify the error between the generalized Fourier density estimator $p_{N,R}^{\phi}$ and the true density $p$, we use the mean integrated squared error (MISE) [91], given by
$$\mathrm{MISE}(p_{N,R}^{\phi}, p) := \int_{\mathbb{R}^D} \left(p_{N,R}^{\phi}(k) - p(k)\right)^2 dk. \qquad (13)$$
We start with the following bound on the MISE between $p_{N,R}^{\phi}$ and $p$.

Theorem 1 Assume that $\int_{\mathbb{R}} \phi(\sin(z)/z)\, z^j\, dz = 0$ for all $j \in [m]$ and $\int_{\mathbb{R}} |\phi(\sin(z)/z)|\, |z|^{m+1}\, dz < \infty$ for some $m \in \mathbb{N}$. Then, there exist universal constants $C$ and $C'$ depending on $D$ and $A$ such that
$$\mathrm{MISE}(p_{N,R}^{\phi}, p) \leq \frac{C}{R^{m+1}} + \frac{C' R^D}{N}.$$

The proof of Theorem 1 is in Appendix D.1. A few comments are in order. First, by choosing $R$ to balance the bias and variance in the MISE bound of Theorem 1, the optimal choice is $R = O(N^{1/(D+m+1)})$. With that choice of $R$, the MISE rate of $p_{N,R}^{\phi}$ is $O(N^{-(m+1)/(D+m+1)})$. Second, when $\phi(z) = z^l$ for $l \geq 4$ and $z \in \mathbb{R}$, the assumptions in Theorem 1 are satisfied with $m = 1$. In this case, the MISE rate of $p_{N,R}^{\phi}$ is $O(N^{-2/(D+2)})$. However, these assumptions are not satisfied when $\phi(z) = z^l$ with $l \in \{1, 2, 3\}$, which is due to a limitation of the current proof technique of Theorem 1, based on a Taylor expansion of the estimator $p_{N,R}^{\phi}$. To address this limitation of the Taylor expansion technique, we use the Plancherel theorem from Fourier analysis to establish the MISE rate of $p_{N,R}^{\phi}$ when $\phi(z) = z^l$ and $l \in \{1, 2, 3\}$. The details of the theoretical analysis for this setting are in Appendix C.

3.2 FourierFormer: Transformers with Fourier Attentions

Motivated by the preservation of correlated structure afforded by the generalized Fourier integral theorem, as well as the theoretical guarantees of its density estimators, in this section we adapt the nonparametric regression interpretation of self-attention from Section 2 and propose the generalized Fourier nonparametric regression estimator in Section 3.2.1. We also establish the convergence properties of that estimator. Then, based on the generalized Fourier nonparametric regression estimator, we develop the Fourier attention and its corresponding FourierFormer in Section 3.2.2.

3.2.1 Nonparametric Regression via Generalized Fourier Integral Theorem

We now discuss an application of the generalized Fourier integral theorems to the nonparametric regression setting (3); namely, we assume that $(v_1, k_1), \ldots, (v_N, k_N)$ are i.i.d. samples from the following nonparametric regression model: $v_j = f(k_j) + \varepsilon_j$, where $\varepsilon_1, \ldots, \varepsilon_N$ are independent noise terms with $\mathbb{E}(\varepsilon_j) = 0$ and the key vectors $k_1, k_2, \ldots, k_N$ are i.i.d. samples from $p$. Given the generalized Fourier density estimator (12) and following the argument in Section 2, the Nadaraya–Watson estimator of the function $f$ based on the generalized Fourier density estimator is given by
$$f_{N,R}(k) := \frac{\sum_{i=1}^{N} v_i \prod_{j=1}^{D} \phi\!\left(\frac{\sin\left(R(k_j - k_{ij})\right)}{R(k_j - k_{ij})}\right)}{\sum_{i=1}^{N} \prod_{j=1}^{D} \phi\!\left(\frac{\sin\left(R(k_j - k_{ij})\right)}{R(k_j - k_{ij})}\right)}. \qquad (14)$$
The main difference between the generalized Fourier nonparametric regression estimator $f_{N,R}$ in equation (14) and the estimator $\hat{f}_\sigma$ in equation (6) is that $f_{N,R}$ uses the generalized Fourier density estimator, rather than the isotropic Gaussian kernel density estimator, to estimate the conditional distribution of the value vectors given the key vectors. As we highlighted in Section 3, an important benefit of the generalized Fourier density estimator is that it can capture the complex dependencies among the features of the value vectors and the key vectors, whereas the Gaussian kernel needs a good covariance matrix to do so, which is computationally expensive in practice.
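To make the comparison concrete, the sketch below places the generalized Fourier regression estimator (14) next to the Gaussian-kernel estimator (6) on data simulated from model (3). The target function, noise level, sample size, radius, and $\phi(z) = z^4$ are illustrative assumptions of ours rather than settings taken from the paper.

```python
# Nadaraya-Watson regression on data from model (3), v_j = f(k_j) + eps_j, with two kernels:
# the Gaussian kernel of equation (6) and the generalized Fourier kernel of equation (14).
import numpy as np

def sinc_ratio(x):
    near_zero = np.abs(x) < 1e-12
    safe_x = np.where(near_zero, 1.0, x)
    return np.where(near_zero, 1.0, np.sin(x) / safe_x)

def fourier_nw(query, keys, values, R, phi=lambda z: z ** 4):
    # weights prod_j phi( sin(R(q_j - k_ij)) / (R(q_j - k_ij)) ), normalized over the N samples
    w = np.prod(phi(sinc_ratio(R * (query[None, :] - keys))), axis=1)
    return float((w * values).sum() / w.sum())

def gaussian_nw(query, keys, values, sigma2):
    w = np.exp(-np.sum((query[None, :] - keys) ** 2, axis=1) / (2.0 * sigma2))
    return float((w * values).sum() / w.sum())

rng = np.random.default_rng(1)
D, N = 4, 2000
f = lambda k: np.sin(k.sum(axis=-1))                 # assumed toy target f
keys = rng.normal(size=(N, D))
values = f(keys) + 0.1 * rng.normal(size=N)          # v_j = f(k_j) + eps_j

q = np.full(D, 0.3)
print("true f(q)       :", float(f(q[None, :])[0]))
print("Fourier NW (14) :", fourier_nw(q, keys, values, R=2.0))
print("Gaussian NW (6) :", gaussian_nw(q, keys, values, sigma2=np.sqrt(D)))
# With normalized keys, the Gaussian weights above reduce (up to factors that cancel in
# the normalization) to softmax(q^T k / sigma^2), i.e. the dot-product attention of (8).
```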
We now have the following result establishing the mean squared error (MSE) of $f_{N,R}$ when $D_v = 1$.

Theorem 2 Assume that $\int_{\mathbb{R}} \phi\!\left(\frac{\sin z}{z}\right) z^j\, dz = 0$ for all $1 \leq j \leq m$ and $\int_{\mathbb{R}} \left|\phi\!\left(\frac{\sin z}{z}\right)\right| |z|^j\, dz < \infty$ for all $m + 1 \leq j \leq 2m + 2$, for some $m \in \mathbb{N}$. Then, for any $k \in \mathbb{R}^D$, when $D_v = 1$ there exist universal constants $C_1, C_2, C_3, C_4$ such that
$$\mathbb{E}\left[\left(f_{N,R}(k) - f(k)\right)^2\right] \leq \left(\frac{C_1}{R^{2(m+1)}} + \frac{(f(k) + C_2)\, R^D}{N}\right) \Big/ \left(p^2(k)\, J(R)\right),$$
where $J(R) = 1 - \frac{1}{p^2(k)} \left(\frac{C_3}{R^{2(m+1)}} + \frac{C_4 R^D \log(NR)}{N}\right)$. Here, the outer expectation is taken with respect to the key vectors $k_1, \ldots, k_N$ and the noise terms $\varepsilon_1, \ldots, \varepsilon_N$.

The proof of Theorem 2 is in Appendix D.3. A few comments on Theorem 2 are in order. First, by choosing $R$ to balance the bias and variance in the MSE bound of the nonparametric generalized Fourier estimator $f_{N,R}$, the optimal radius is $R = O(N^{1/(2(m+1)+D)})$. With that choice of the radius $R$, the rate of $f_{N,R}$ is $O(N^{-2(m+1)/(D+2(m+1))})$. Second, when $\phi(z) = z^l$ for $l \geq 6$, the assumptions on the function $\phi$ in Theorem 2 are satisfied with $m = 1$. In this case, the rate of $f_{N,R}$ becomes $O(N^{-4/(D+4)})$. In Appendix C, we also provide the rate of $f_{N,R}$ when $\phi(z) = z^l$ for $l \leq 5$, which includes the original Fourier integral theorem.

3.2.2 FourierFormer

Given the generalized Fourier nonparametric regression estimator $f_{N,R}$ in equation (14), by plugging the query vectors $q_1, \ldots, q_N$ into that function, we obtain the following definition of the Fourier attention:

Definition 1 (Fourier Attention) A Fourier attention is a multi-head attention that performs nonparametric regression using the generalized Fourier nonparametric regression estimator $f_{N,R}$. The output $\hat{h}_i$ of the Fourier attention is computed as
$$\hat{h}_i := f_{N,R}(q_i) = \frac{\sum_{i'=1}^{N} v_{i'} \prod_{j=1}^{D} \phi\!\left(\frac{\sin\left(R(q_{ij} - k_{i'j})\right)}{R(q_{ij} - k_{i'j})}\right)}{\sum_{i'=1}^{N} \prod_{j=1}^{D} \phi\!\left(\frac{\sin\left(R(q_{ij} - k_{i'j})\right)}{R(q_{ij} - k_{i'j})}\right)} \quad \text{for all } i \in [N]. \qquad (15)$$

Given the Fourier attention in Definition 1, we define the FourierFormer as follows.

Definition 2 (FourierFormer) A FourierFormer is a transformer that uses Fourier attention to capture the dependency between tokens in the input sequence and the correlation between features in each token.

Remark 2 (The Nonnegativity of the Fourier Kernel) Density estimation via the generalized Fourier integral theorem in Section 3.1.1 does not require the generalized Fourier density estimator to be nonnegative. However, we empirically observe that a negative density estimator can cause instability when training the FourierFormer. Thus, in FourierFormer, we choose $\phi$ to be a nonnegative function so that the density estimator is nonnegative. In particular, we choose $\phi$ to be a power function of the form $\phi(x) = x^{2m}$, where $m$ is a positive integer. Note that when $m = 1$ and $m = 2$, the kernels in our generalized Fourier integral estimators are the well-known Fejér–de la Vallée Poussin and Jackson–de la Vallée Poussin kernels [20].

3.3 An Efficient Implementation of the Fourier Attention

The Fourier kernel is implemented efficiently as a C++/CUDA extension for PyTorch [58]. The idea is similar to the function cdist [58], which computes the p-norm distance between each pair of row vectors from two collections. In our case, we aim to compute the kernel functions that constitute the Fourier attention in Definition 1. The core of this implementation is the following Fourier metric function $d_f$:
$$d_f(q_i, k_j) = \prod_{d=1}^{D} \phi\!\left(\frac{\sin\left(R(q_{id} - k_{jd})\right)}{R(q_{id} - k_{jd})}\right).$$
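The attention of Definition 1 can also be written directly with standard tensor operations. The sketch below is a pure-PyTorch reference for equation (15) (single head, no query/key/value projections, $\phi(x) = x^4$ and a learnable scalar $R$ as in Remark 2 and Section 4); it is our own illustrative rendering, not the fused C++/CUDA kernel described next, and the small constant added to the normalizer is only for numerical safety.

```python
# Pure-PyTorch reference for the Fourier attention of equation (15) (Definition 1).
# Single head, no input projections; the paper's implementation instead fuses the
# metric d_f in a custom C++/CUDA extension for speed.
import torch
import torch.nn as nn

class FourierAttention(nn.Module):
    def __init__(self, r_init: float = 1.0, eps: float = 1e-6):
        super().__init__()
        self.R = nn.Parameter(torch.tensor(float(r_init)))  # learnable scalar radius R
        self.eps = eps

    @staticmethod
    def _phi(z: torch.Tensor) -> torch.Tensor:
        # phi(x) = x**4, the nonnegative power function of Remark 2 with m = 2
        return z ** 4

    def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # q, k: (B, N, D); v: (B, N, Dv)
        diff = self.R * (q.unsqueeze(2) - k.unsqueeze(1))        # (B, N, N, D)
        near_zero = diff.abs() < 1e-8
        safe = torch.where(near_zero, torch.ones_like(diff), diff)
        sinc = torch.where(near_zero, torch.ones_like(diff), torch.sin(diff) / safe)
        weights = self._phi(sinc).prod(dim=-1)                   # (B, N, N): d_f(q_i, k_j)
        weights = weights / (weights.sum(dim=-1, keepdim=True) + self.eps)
        return weights @ v                                       # (B, N, Dv)

# Usage: same input/output shapes as the softmax attention it replaces.
B, N, D = 2, 8, 16
q, k, v = torch.randn(B, N, D), torch.randn(B, N, D), torch.randn(B, N, D)
fourier_out = FourierAttention()(q, k, v)
softmax_out = torch.softmax(q @ k.transpose(-1, -2) / D ** 0.5, dim=-1) @ v
print(fourier_out.shape, softmax_out.shape)   # torch.Size([2, 8, 16]) for both
```

This reference materializes a (B, N, N, D) tensor, so it keeps the same quadratic cost in the sequence length as the dot-product attention; the custom kernel described below targets speed rather than asymptotic complexity.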
We directly implement $d_f$ as a torch.autograd.Function [58], in which we provide efficient forward and backward functions (computing $d_f$ and the gradient of $d_f$). While the implementation of the forward function is straightforward, the backward function is trickier, since we need to optimize the code to compute the gradient of $d_f$ with respect to the variables $q$, $k$, and $R$ all at once. We develop a highly parallel backward function by exploiting the GPU architecture and using the reduction technique. The computation time is comparable to that of cdist; thus, our FourierFormer implementation is comparably time-efficient.

4 Experimental Results

In this section, we numerically justify the advantage of FourierFormer over the baseline dot-product transformer on five tasks: language modeling on WikiText-103 [46] (Section 4.1), image classification on ImageNet [22, 67] (Section 4.2), time series classification on the UEA benchmark [5] (Section 4.3), reinforcement learning on the D4RL benchmark [29] (Section 4.4), and machine translation on IWSLT'14 De-En [10] (Section 4.5). We aim to show that: (i) FourierFormer achieves better accuracy than the baseline transformer on a variety of practical tasks with different data modalities, and (ii) FourierFormer helps reduce head redundancy compared to the baseline transformer (Section 4.6). Throughout the section, we compare FourierFormers with baseline dot-product transformers of the same configuration. In all experiments, we make the constant $R$ in the Fourier attention (see equation (15)) a learnable scalar and choose the function $\phi(x) = x^4$ (see Remark 2). All of our results are averaged over 5 runs with different seeds. Details on the models and training are provided in Appendix A, and additional experimental results are provided in Appendix E. Our PyTorch code with documentation can be found at https://github.com/minhtannguyen/FourierFormer_NeurIPS.

4.1 Language Modeling on WikiText-103

We report the validation and test perplexity (PPL) of FourierFormer versus the baseline transformer with the dot-product attention in Table 1. FourierFormer attains much better PPL than the baseline in both the small and medium configurations. For the small configuration, the improvements of FourierFormer over the baseline are 1.29 PPL in validation and 1.44 PPL in test. For the medium configuration, these improvements are 1.39 PPL in validation and 1.59 PPL in test. These results suggest that the advantage of FourierFormer over the baseline dot-product transformer grows with the model's size. This meets our expectation because larger models have larger query and key dimensions; e.g., the medium-configuration language model in this experiment has a query and key dimension of 256, versus 128 for the small configuration. Since the advantage of FourierFormer results from its ability to capture correlations between features in the query and key vectors, the larger the query and key dimensions, the greater the advantage of FourierFormer.

4.2 Image Classification on ImageNet

The ImageNet classification task illustrates the benefits of FourierFormer on a different data modality. We summarize our models' results in Table 2. As in the language modeling experiment, the DeiT model equipped with FourierFormer significantly outperforms the baseline DeiT dot-product transformer [79] in both top-1 and top-5 accuracy.
This result suggests that the advantage of FourierFormer over the baseline dot-product transformer holds across different data modalities.

4.3 UEA Time Series Classification

To evaluate FourierFormer on temporal sequences, we compare the accuracy of our models and the baseline softmax transformers trained on 10 datasets from the UEA Time Series Classification Archive benchmark [5]. We summarize our results in Table 3. We observe that FourierFormer outperforms the softmax baseline on 7 out of 10 tasks and yields significantly better accuracy than the softmax transformer on average, showing the benefit of our models when trained on temporal data.

4.4 Reinforcement Learning on the D4RL Benchmark

We also examine the performance of FourierFormer in reinforcement learning. In Table 4, we verify the advantage of the decision FourierFormer over the baseline decision transformer [12] on the continuous control tasks from the D4RL benchmark [29]. The decision FourierFormer is the decision transformer with the Fourier attention in place of the softmax attention. On this benchmark, our decision FourierFormer significantly outperforms the baseline decision transformer on 8 out of 9 tasks and on average across tasks. Each result is averaged over 5 runs with different random seeds. We follow the architecture and training configuration from [93].

4.5 Machine Translation on IWSLT'14 De-En

We demonstrate the performance of FourierFormer on the IWSLT'14 De-En [10] neural machine translation task, in which the input sequence lengths vary. Table 5 shows that FourierFormer achieves better BLEU scores than the softmax baseline.

4.6 FourierFormer Helps Reduce Head Redundancy

To study the diversity between attention heads, given the model trained for the WikiText-103 language modeling task, we compute the average L2 distance between heads in each layer. We report the layer-averaged mean and variance of the distances between heads in Table 6. The results in Table 6 show that FourierFormer obtains a greater L2 distance between attention heads than the baseline transformer with the dot-product attention and thus helps reduce head redundancy. Note that we use the small configuration specified in Section 4.1 for both models.

5 Related Work

Interpretation of the Attention Mechanism in Transformers: Recent works have tried to understand transformer attention from different perspectives. [80] considers attention as applying a kernel smoother over the inputs. Extending this kernel approach, [35, 15, 52, 89, 54] linearize the softmax kernel in dot-product attention and propose a family of efficient transformers with linear computational and memory complexity. [9] then shows that these linear transformers are comparable to a Petrov–Galerkin projection [64], suggesting that the softmax normalization in the dot-product attention is sufficient but not necessary. Other works that provide an understanding of attention in transformers via ordinary/partial differential equations include [45, 69]. In addition, [51, 75, 30, 96, 53] relate attention in transformers to Gaussian mixture models. Several works also connect the attention mechanism to graph-structured learning and message passing in graphical models [90, 72, 39]. Our work focuses on deriving the connection between self-attention and nonparametric kernel regression and on exploring better regression estimators, such as the generalized Fourier nonparametric regression estimator, to improve the performance of transformers.
Redundancy in Transformers: [19, 47, 25] show that neurons and attention heads in pre-trained transformers are redundant and can be removed when the model is applied to a downstream task. By studying the contextualized embeddings in pre-trained networks, it has been demonstrated that the learned representations from these redundant models are highly anisotropic [49, 26]. Furthermore, [70, 74, 86, 68] employ knowledge distillation and sparse approximation to enhance the efficiency of transformers. Our FourierFormer is complementary to these methods and can be combined with them.

6 Concluding Remarks

In this paper, we establish the correspondence between nonparametric kernel regression and self-attention in transformers. We then develop the generalized Fourier integral estimators and propose the FourierFormer, a novel class of transformers that use the generalized Fourier integral estimators to construct their attention, efficiently capturing the correlations between features in the query and key vectors. We theoretically prove the approximation guarantees of the generalized Fourier integral estimators and empirically validate the advantage of FourierFormer over the baseline transformer with the dot-product attention in terms of accuracy and head-redundancy reduction. It would be interesting to incorporate robust kernels into the nonparametric regression framework of FourierFormer to enhance the robustness of the model under data perturbation and adversarial attacks. A limitation of FourierFormer is that it still has the same quadratic computational and memory complexity as the baseline transformer with the dot-product attention. We leave the development of a linear version of FourierFormer that achieves linear computational and memory complexity for future work. We do not foresee potential negative societal impacts of FourierFormer.

Acknowledgements: This material is based on research sponsored by the AFOSR MURI FA9550-18-1-0502, the ONR grant N00014-20-1-2093, the MURI N00014-20-1-2787, and the NSF under Grant# 2030859 to the Computing Research Association for the CIFellows Project (CIF2020-UCLA-38). NH acknowledges support from the NSF IFML 2019844 and the NSF AI Institute for Foundations of Machine Learning.
1. What is the focus and contribution of the paper on transformers? 2. What are the strengths of the proposed approach, particularly in terms of capturing correlations? 3. What are the weaknesses of the paper, especially regarding experimentation? 4. Do you have any concerns about the statistical relationships between query, key, and value features in transformers? 5. What are the limitations of the paper in convincing readers of its effectiveness?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors present the FourierFormer, a new class of transformers in which novel generalized Fourier integral kernels replace the dot-product kernels. The FourierFormer can capture correlations between the features of the query and key vectors in self-attention. The authors empirically corroborate the advantages of FourierFormers over the baseline transformers in various practical applications, including language modeling and image classification.
Strengths And Weaknesses
Strengths: The ideas that the authors put forward are novel, and the mathematical arguments are complete and ingenious.
Weaknesses: The experiments in this paper are insufficient and, therefore, not convincing enough to demonstrate the effectiveness of the FourierFormer. The experiments only involve two basic tasks based on WikiText-103 and ImageNet.
Questions
In the paper, the authors say "However, equations (1) and (2) imply that the dot-product attention assumes the features (qi1, . . . , qiD) in qi, as well as the features (kj1, . . . , qjD) in kj, are independent". This description also implies that v is not included in the consideration of the correlation. However, even if it is straightforward to consider q and k as independent variables and v as a dependent variable, in my opinion q, k, and v do not have a clear statistical relationship in the Transformer, partially due to its poor interpretability. Hence, following the authors' idea, v may also need to be considered when modeling the correlation.
Limitations
Although the authors have given a detailed mathematical proof, due to the poor interpretability of the Transformer itself, I still need to see more experimental results to agree with their point of view. The current experimental results are insufficient and not persuasive.
NIPS
1. What is the focus and contribution of the paper regarding the self-attention mechanism in Transformers? 2. What are the strengths of the proposed approach, particularly in its interpretation and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its background introduction and experimental evaluations? 4. Do you have any questions or suggestions regarding the paper's content? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors provide a new perspective for interpreting the self-attention mechanism in Transformers. In particular, under the assumption that the query and key vectors are normalized, the self-attention mechanism coincides with the well-known nonparametric kernel regression with kernel density estimation. Motivated by this, the authors instead use the generalized Fourier integral theorem to build more powerful estimators for capturing the interaction between features in different dimensions. Experiments on some benchmarks are conducted.
Strengths And Weaknesses
Strengths: The interpretation of the self-attention mechanism as using isotropic Gaussian kernels for kernel density estimation and nonparametric regression estimation seems to be novel, and it provides the community with a new perspective for understanding the behavior of self-attention. The motivation to use the generalized Fourier integral theorem to capture the feature interaction, instead of using multivariate Gaussian kernels with proper covariance matrices, seems reasonable. The theoretical analysis is thorough, including the approximation error of the generalized Fourier density estimator (Theorem 1) and the generalized Fourier nonparametric regression estimator (Theorem 2).
Weaknesses: Regarding the background: the authors should consider adding a preliminary section to introduce the background knowledge on nonparametric kernel regression, kernel density estimation, and the generalized Fourier integral theorem, which could help readers easily follow the derivation in Section 2 and understand the motivation to use the Fourier integral theorem as a guide to developing a new self-attention mechanism. Regarding the experimental evaluation: the issues are three-fold. 1) Since the authors provide an analysis of the approximation error between the estimators and the true functions (Theorems 1 and 2), it would be informative to provide an empirical evaluation of these quantities on real data as further verification. 2) The experiments should be more comprehensive and general. For both the language modeling task and the image classification task, the model size is limited and the baselines are restrictive. 3) Since the FourierFormer needs customized operators for its implementation, the authors should also provide memory/time cost profiling compared to popular Transformer architectures. Based on these issues, the efficiency and effectiveness of the FourierFormer are doubtful.
-------After Rebuttal------- I thank the authors for the detailed response. Most of my concerns have been addressed. I have updated my score to 6.
Questions
See the issues in the Strengths And Weaknesses. If the authors could address these issues, I would like to increase my scores accordingly.
Limitations
No negative societal impact
NIPS
Title FourierFormer: Transformer Meets Generalized Fourier Integral Theorem Abstract Multi-head attention empowers the recent success of transformers, the state-of-theart models that have achieved remarkable success in sequence modeling and beyond. These attention mechanisms compute the pairwise dot products between the queries and keys, which results from the use of unnormalized Gaussian kernels with the assumption that the queries follow a mixture of Gaussian distribution. There is no guarantee that this assumption is valid in practice. In response, we first interpret attention in transformers as a nonparametric kernel regression. We then propose the FourierFormer, a new class of transformers in which the dot-product kernels are replaced by the novel generalized Fourier integral kernels. Different from the dot-product kernels, where we need to choose a good covariance matrix to capture the dependency of the features of data, the generalized Fourier integral kernels can automatically capture such dependency and remove the need to tune the covariance matrix. We theoretically prove that our proposed Fourier integral kernels can efficiently approximate any key and query distributions. Compared to the conventional transformers with dot-product attention, FourierFormers attain better accuracy and reduce the redundancy between attention heads. We empirically corroborate the advantages of FourierFormers over the baseline transformers in a variety of practical applications including language modeling and image classification. 1 Introduction Transformers [83] are powerful neural networks that have achieved tremendous success in many areas of machine learning [40, 76, 36] and become the state-of-the-art model on a wide range of applications across different data modalities, from language [23, 1, 18, 13, 62, 4, 8, 21] to images [24, 43, 78, 63, 59, 27], videos [3, 44], point clouds [97, 31], and protein sequence [65, 34]. In addition to their excellent performance on supervised learning tasks, transformers can also effectively transfer the learned knowledge from a pretraining task to new tasks with limited or no supervision [60, 61, 23, 94, 42]. At the core of transformers is the dot-product self-attention, which ⇤ Co-first authors. ⇤⇤ Co-last authors. Please correspond to: tanmnguyen89@ucla.edu 36th Conference on Neural Information Processing Systems (NeurIPS 2022). mainly accounts for the success of transformer models [14, 56, 41]. This dot-product self-attention learn self-alignment between tokens in an input sequence by estimating the relative importance of a given token with respect to all other tokens. It then transform each token into a weighted average of the feature representations of other tokens where the weight is proportional to a importance score between each pair of tokens. The importance scores in self-attention enable a token to attend to other tokens in the sequence, thus capturing the contextual representation [6, 83, 38]. 1.1 Self-Attention Given an input sequence X := [x1, · · · ,xN ]> 2 RN⇥Dx of N feature vectors, self-attention computes the output sequence H from X as follows: Step 1: Projecting the input sequence into different subspaces. The input sequence X is transformed into the query matrix Q, the key matrix K, and the value matrix V via three linear transformations Q = XW>Q;K = XW > K ;V = XW > V , where WQ,WK 2 RD⇥Dx , and WV 2 RDv⇥Dx are the weight matrices. 
We denote Q := [q1, · · · , qN ]>,K := [k1, · · · ,kN ]>, and V := [v1, · · · ,vN ]>, where the vectors qi,ki,vi for i = 1, · · · , N are the query, key, and value vectors, respectively. Step 2: Computing the output as a weighted average. The output sequence H := [h1, · · · ,hN ]> is then given by H = softmax ⇣ QK > / p D ⌘ V := AV, (1) where the softmax function is applied to each row of the matrix (QK>)/ p D. For each query vector qi, i = 1, · · · , N , Eqn. (1) can be written in the vector form to compute the output vector hi as follows hi = NX j=1 softmax ⇣ q>i kj/ p D ⌘ vj := NX j=1 aijvj . (2) The matrix A 2 RN⇥N and its component aij for i, j = 1, · · · , N are the attention matrix and attention scores, respectively. The self-attention computed by equations (1) and (2) is called the dotproduct attention or softmax attention. In our paper, we refer a transformer that uses this attention as the baseline transformer with the dot-product attention or the dot-product transformer. The structure of the attention matrix A after training governs the ability of the self-attention to capture contextual representation for each token. Multi-head Attention Each output sequence H forms an attention head. Multi-head attention concatenates multiple heads to compute the final output. Let H be the number of heads and W O 2 RHDv⇥HDv be the projection matrix for the output. The multi-head attention is defined as MultiHead({Q,K,V}Hi=1) = Concat(H1, . . . ,HH)W O . The capacity of the attention mechanism and its ability to learn diverse syntactic and semantic relationships determine the success of transformers [77, 84, 17, 85, 32]. However, equations (1) and (2) implies that the dot-product attention assumes the features (qi1, . . . , qiD) in qi, as well as the features (kj1, . . . , qjD) in kj , are independent. Thus, the dot-product attention fail to capture the correlations between these features, limiting its representation capacity and inhibit the performance of transformers on practical tasks where there is no guarantee that independent features can learned from complex data. One solution to capture correlations between features qi and kj is to introduce covariance matrices into the formulation of the dot-product attention with the cost of significantly increasing of the computational complexity. Also, choosing good covariance matrices is difficult. 1.2 Contribution In this paper, we first establish a correspondence between self-attention and nonparametric kernel regression. Under this new perspective of self-attention, we explain the limitation of the dot-product self-attention that it may fail to capture correlations between the features in the query and key vectors. We then leverage the generalized Fourier integral theorems, which can automatically capture these correlations, and derive the generalized Fourier integral estimators for the nonparametric regression problem. Using this new density estimator, we propose the FourierFormer, a novel class of transformers that can capture correlations between features in the query and key vectors of self-attention. In summary, our contribution is three-fold: 1. We derive the formula of self-attention from solving a nonparametric kernel regression problem, thus providing a nonparametric regression interpretation to study and further develop self-attention. 2. We develop the generalized Fourier integral estimators for the nonparametric regression problem and provide theoretical guarantees for these estimator. 3. 
We propose the FourierFormer whose attentions use the generalized Fourier integral estimators to capture more efficiently correlations between features in the query and key vectors. Finally, we empirically show that the FourierFormer attains significantly better accuracy than the baseline transformer with the dot-product attention on a variety of tasks including the WikiText language modeling and ImageNet image classsification. We also demonstrate in our experiments that FourierFormer helps reduce the redundancy between attention heads. Organization We structure this paper as follows: In Section 2, we present the correspondence between self-attention and nonparametric kernel regression. In Section 3, we discuss the generalized Fourier integral estimators and define the FourierFormer. We validate and empirically analyze the advantages of FourierFormer in Section 4. We discuss related works in Section 5. The paper ends with concluding remarks. Technical proofs and more experimental details are provided in the Appendix. Notation For any N 2 N, we denote [N ] = {1, 2, . . . , N}. For any D 1, L1(RD) denotes the space of real-valued functions on RD that are integrable. For any two sequences {aN}N 1, {bN}N 1, we denote aN = O(bN ) to mean that aN CbN for all N 1 where C is some universal constant. 2 A Nonparametric Regression Interpretation of Self-attention In this section, we establish the connection between self-attention and nonparametric kernel regression. In particular, we derive the self-attention in equation (2) as a nonparametric kernel regression in which the key vectors kj and value vectors vj are training inputs and training targets, respectively, while the query vectors qi and the output vectors hi form a set of new inputs and their corresponding targets that need to be estimated, respectively, for i, j = 1, · · · , N . In general, we can view the training set {kj ,vj} for j 2 [N ] to come from the following nonparametric regression model: vj = f(kj) + "j , (3) where "1, . . . , "N are independent noises such that E("j) = 0. Furthermore, we consider a random design setting where the key vectors k1,k2, . . . ,kN are i.i.d. samples from the distribution that admits p as density function. By an abuse of notation, we also denote p as the joint density where the key and value vectors (v1,k1), . . . , (vN ,kN ) are i.i.d. samples from. Here, f is a true but unknown function and we would like to estimate it. Nadaraya–Watson estimator Our approach to estimate the function f is based on Nadaraya–Watson’s nonparametric kernel regression approach [50]. In particular, from the nonparametric regression model (3), we have E [vj |kj ] = f(kj) for all j 2 [N ]. Therefore, it is sufficient to estimate the conditional distribution of the value vectors given the key vectors. Given the density function p of the key vectors and the joint density p of the key and value vectors, for any pair of vectors (v,k) generate from model (3) we have E [v|k] = Z RD v · p(v|k)dv = Z v · p(v,k) p(k) dv. (4) The formulation (4) of the conditional expectation indicates that as long as we can estimate the joint density function p(v,k) and the marginal density function p(v), we are able to obtain an estimation for the conditional expectation and thus for the function f . This approach is widely known as Nadaraya–Watson’s nonparametric kernel regression approach. Kernel density estimator To estimate p(v,k) and p(k), we employ the kernel density estimation approach [66, 57]. 
In particular, by using the isotropic Gaussian kernel with bandwidth , we have the following estimators of p(v,k) and p(k): p̂ (v,k) = 1 N NX j=1 ' (v vj)' (k kj), p̂ (k) = 1 N NX j=1 ' (k kj), (5) where ' (.) is the isotropic multivariate Gaussian density function with diagonal covariance matrix 2 ID. Given the kernel density estimators (5), we obtain the following estimation of the function f : bf (k) = Z RD v · p̂ (v,k) p̂ (k) dv = Z RD v · PN j=1 ' (v vj)' (k kj)PN j=1 ' (k kj) dv = PN j=1 ' (k kj) R v · ' (v vj)dv PN j=1 ' (k kj) = PN j=1 vj' (k kj)PN j=1 ' (k kj) . (6) Connection between Self-Attention and nonparametric regression By plugging the query vectors qi into the function bf in equation (6), we obtain that bf (qi) = PN j vj exp kqi kjk2/2 2 PN j exp ( kqi kjk 2/2 2) = PN j vj exp ⇥ kqik2 + kkjk2 /2 2 ⇤ exp qik>j / 2 PN j exp [ (kqik 2 + kkj0k2) /2 2] exp qik>j / 2 . (7) If we further assume that the keys kj are normalized, which is usually done in practice to stabilize the training of transformers [71], the value of bf (qi) in equation (6) then becomes bf (qi) = PN j vj exp qik>j / 2 PN j exp qik>j / 2 = NX j=1 softmax ⇣ q>i kj/ 2 ⌘ vj . (8) When we choose 2 = p D where D is the dimension of qi and kj , equation (8) matches equation (2) of self-attention, namely, bf (qi) = hi. Thus, we have shown that self-attention performs nonparametric regression using isotropic Gaussian kernels. Remark 1 The assumption that kj is normalized is to recover the pairwise dot-product attention in transformers. In general, this assumption is not necessary. In fact, the isotropic Gaussian kernel in equation (7) is more desirable than the dot-product kernel in equation (8) of the pairwise dot-product attention since the former is Lipschitz while the later is not Lipschitz [37]. The Lipschitz constraint helps improve the robustness of the model [16, 81, 2] and stabilize the model training [48]. Limitation of Self-Attention From our nonparametric regression interpretation, self-attention is derived from the use of isotropic Gaussian kernels for kernel density estimation and nonparametric regression estimation, which may fail to capture the complex correlations between D features in qi and kj [88, 33]. Using multivariate Gaussian kernels with dense covariance matrices can help capture such correlations; however, choosing good covariance matrices is challenging and inefficient [87, 73, 11]. In the following section, we discuss the Fourier integral estimator and its use as a kernel for computing self-attention in order to overcome these limitations. 3 FourierFormer: Transformer via Generalized Fourier Integral Theorem In the following, we introduce generalized integral theorems that are able to capture the complex interactions among the features of the queries and keys. We then apply these theorems to density estimation and nonparametric regression problems. We also establish the convergence rates of these estimators. Given these density estimators, we introduce a novel family of transformers, named FourierFormer, that integrates the generalized Fourier integral theorem into the dot-product attention step of the standard transformer. 3.1 Generalized Fourier Integral Theorems and Their Applications The Fourier integral theorem is a beautiful result in mathematics [92, 7] and has been recently used in nonparametric mode clustering, deconvolution problem, and generative modeling [33]. It is a combination of Fourier transform and Fourier inverse transform. 
In particular, for any function p 2 L1(RD), the Fourier integral theorem is given by p(k) = 1 (2⇡)D Z RD Z RD cos(s>(k y))p(y)dyds = 1 ⇡D lim R!1 Z RD DY j=1 sin(R(kj yj)) (kj yj) p(y)dy, (9) where k = (k1, . . . , kD),y = (y1, . . . , yD), s = (s1, . . . , sD), and R is the radius. The detailed derivation of Equation (9) is in Appendix B.3. Equation (9) suggests that pR(k) := 1 ⇡D R RD QD j=1 sin(R(yj kj)) (yj kj) p(y)dy can be used as an estimator of the function p. Benefits of the Fourier integral over Gaussian kernel There are two important benefits of the estimator pR: (i) it can automatically preserve the correlated structure lying within p even when p is very complex and high dimensional function. It is in stark contrast to the standard kernel estimator built based on multivariate Gaussian kernel where we need to choose good covariance matrix in the multivariate Gaussian kernel to guarantee such estimator to work well. We note that as the standard soft-max Transformer is constructed based on the multivariate Gaussian kernel, the issue of choosing good covariance matrix in dot-product transformer is inevitable; (ii) The product of sinc kernels in the estimator pR does not decay to a point mass when R ! 1. It is in stark difference from the multivariate Gaussian kernel estimator, which converges to a point mass when the covariance matrix goes to 0. It indicates that pR is a non-trivial estimator of the function p. Finally, detailed illustrations of these benefits of the Fourier integral over Gaussian kernel in density estimation and nonparametric regression problems, which we have just shown to have connection to the self-attention in transformer, can be found in Section 8 in [33]. Generalized Fourier integral estimator Borrowing the above benefits of Fourier integral estimator pR, in the paper we would like to consider a generalization of that estimator, named generalized Fourier integral estimator, which is given by: p R(k) := R D AD Z RD DY j=1 ✓ sin(R(yj kj)) R(yj kj) ◆ p(y)dy, (10) where A := R R ⇣ sin(z) z ⌘ dz and : R ! R is a given function. When (k) = k for all k 2 RD, the generalized Fourier integral estimator p R becomes the Fourier integral estimator pR. Under appropriate conditions on the function (see Theorem 1 in Section 3.1.1 and Theorem 3 in Appendix C.1), the estimator p R converges to the true function p, namely, p(k) = lim R!1 p R(k) = limR!1 R D AD Z RD DY j=1 ✓ sin(R(yj kj)) R(yj kj) ◆ p(y)dy. (11) We name the above limit as generalized Fourier integral theorem. Furthermore, the estimator p R also inherits similar aforementioned benefits of the Fourier integral estimator pR. Therefore, we will use the generalized Fourier integral theorem as a building block for constructing density estimators and nonparametric regression estimators, which are crucial to develop the FourierFormer in Section 3.2. 3.1.1 Density Estimation via Generalized Fourier Integral Theorems We first apply the generalized Fourier integral theorem to the density estimation problem. To ease the presentation, we assume that k1,k2, . . . ,kN 2 RD are i.i.d. samples from a distribution admitting density function p where D 1 is the dimension. Inspired by the generalized Fourier integral theorem, we obtain the following generalized Fourier density estimator p N,R of p as follows: p N,R(k) := R D NAD NX i=1 DY j=1 ✓ sin(R(kj kij)) R(kj kij) ◆ , (12) where A = R R ⇣ sin(z) z ⌘ dz and ki = (ki1, . . . , kiD) for all i 2 [N ]. 
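To make equation (12) concrete, the following is a minimal NumPy sketch of the generalized Fourier density estimator. The choice ϕ(x) = x^4, the radius R, the toy sample, and the numerical approximation of the constant A are illustrative assumptions for this example, not the paper's implementation.

```python
# Minimal sketch of the generalized Fourier density estimator of equation (12),
# assuming phi(x) = x**4 (one of the nonnegative choices discussed later in Remark 2).
import numpy as np

def phi(x):
    return x ** 4

def fourier_density_estimate(k, samples, R):
    """k: (D,) query point, samples: (N, D) i.i.d. draws from p, R: radius."""
    N, D = samples.shape
    # A = \int phi(sin(z)/z) dz, approximated by a Riemann sum (the integrand decays like z**-4).
    z = np.linspace(-200.0, 200.0, 400_001)
    A = np.sum(phi(np.sinc(z / np.pi))) * (z[1] - z[0])
    # np.sinc(t) = sin(pi*t)/(pi*t), so np.sinc(R*u/pi) = sin(R*u)/(R*u) with the value 1 at u = 0.
    kern = phi(np.sinc(R * (k[None, :] - samples) / np.pi))   # (N, D)
    return R ** D / (N * A ** D) * np.prod(kern, axis=1).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))                  # samples from a 2-d standard normal
est = fourier_density_estimate(np.zeros(2), X, R=3.0)
print(est)                                          # compare with the true value 1/(2*pi) ~ 0.159
```

Since the integrand ϕ(sin(z)/z) decays rapidly for this choice of ϕ, truncating the integral that defines A introduces negligible error.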
To quantify the error between the generalized Fourier density estimator p n,R and the true density p, we utilize mean integrated squared errors (MISE) [91], which is given by: MISE(p N,R, p) := Z RD (p N,R(k) p(k)) 2 dk. (13) We start with the following bound on the MISE between p n,R and p. Theorem 1 Assume that R R (sin(z)/z)z j dz = 0 for all j 2 [m] and R R | (sin(z)/z)||z| m+1 dz < 1 for some m 2 N. Then, there exist universal constants C and C 0 depending on d and A such that MISE(p N,R, p) C Rm+1 + C 0 R D N . Proof of Theorem 1 is in Appendix D.1. A few comments are in order. First, by choosing R to balance the bias and variance in the bound of MISE in Theorem 1, we have the optimal R as R = O(N1/(D+m+1)). With that choice of R, the MISE rate of p N,R is O(N (m+1)/(D+m+1)). Second, when (z) = zl for l 4 and z 2 R, the assumptions in Theorem 1 are satisfied when m = 1. Under this case, the MISE rate of p N,R is O(N 2/(D+2)). However, these assumptions do not satisfy when (z) = zl and l 2 {1, 2, 3}, which is due to the limitation of the current proof technique of Theorem 1 that is based on Taylor expansion of the estimator p n,R. To address the limitation of the Taylor expansion technique, we utilize the Plancherel theorem in Fourier analysis to establish the MISE rate of p N,R when (z) = z l and l 2 {1, 2, 3}. The details of the theoretical analyses for such setting are in Appendix C. 3.2 FourierFormer: Transformers with Fourier Attentions Motivated by the preservation of the correlated structure of the function from the generalized Fourier integral theorem as well as the theoretical guarantees of density estimators, in this section we adapt the nonparametric regression interpretation of self-attention in Section 2 and propose the generalized Fourier nonparametric regression estimator in Section 3.2.1. We also establish the convergence properties of that estimator. Then, based on generalized Fourier nonparametric regression estimator, we develop the Fourier Attention and its corresponding FourierFormer in Section 3.2.2. 3.2.1 Nonparametric Regression via Generalized Fourier Integral Theorem We now discuss an application of the generalized Fourier integral theorems to the nonparametric regression setting (3), namely, we assume that (v1,k1), . . . , (vN ,kN ) are i.i.d. samples from the following nonparametric regression model: vj = f(kj) + "j , where "1, . . . , "N are independent noises such that E("j) = 0 and the key vectors k1,k2, . . . ,kN are i.i.d. samples from p. Given the generalized Fourier density estimator (12), following the argument in Section 2, the Nadaraya–Watson estimator of the function f based on the generalized Fourier density estimator is given by: fN,R(k) := PN i=1 vi QD j=1 ⇣ sin(R(kj kij)) R(kj kij) ⌘ PN i=1 QD j=1 ⇣ sin(R(kj kij)) R(kj kij) ⌘ . (14) The main difference between the generalized Fourier nonparametric regression estimator fN,R in equation (14) and the estimator bf in equation (6) is that the estimator fN,R utilizes the generalized Fourier density estimator to estimate the conditional distribution of the value vectors given the key vectors instead of the isotropic Gaussian kernel density estimator as in bf . As we highlighted in Section 3, an important benefit of the generalized Fourier density estimator is that it can capture the complex dependencies of the features of the value vectors and the key vectors while the Gaussian kernel needs to have good covariance matrix to do that, which is computationally expensive in practice. 
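As a small illustration of the regression estimator f_{N,R} in equation (14), the sketch below fits a noisy one-dimensional function (the scalar-output case Dv = 1). The toy data, the radius R, and the optional ϕ are assumptions made only for this example; with the default ϕ(x) = x, the weights reduce to the plain product of sinc terms displayed in equation (14), while ϕ(x) = x^4 gives nonnegative weights.

```python
# Illustrative 1-d sketch of the generalized Fourier nonparametric regression estimator (eq. 14).
import numpy as np

def fourier_regression_estimate(k_query, keys, values, R, phi=lambda x: x):
    """Nadaraya-Watson estimate f_{N,R}(k_query); keys: (N, D), values: (N,), k_query: (D,)."""
    u = R * (k_query[None, :] - keys)          # (N, D)
    sinc = np.sinc(u / np.pi)                  # sin(u)/u, with the value 1 at u = 0
    w = np.prod(phi(sinc), axis=1)             # (N,) kernel weights
    return np.sum(w * values) / np.sum(w)

rng = np.random.default_rng(1)
keys = rng.uniform(-2.0, 2.0, size=(500, 1))
values = np.sin(3.0 * keys[:, 0]) + 0.1 * rng.standard_normal(500)

grid = np.linspace(-2.0, 2.0, 9)
fit = [fourier_regression_estimate(np.array([g]), keys, values, R=8.0,
                                   phi=lambda x: x ** 4) for g in grid]
print(np.round(fit, 2))                        # should roughly follow sin(3 * grid)
```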
We now have the following result establishing the mean square error (MSE) of fN,R when Dv = 1. Theorem 2 Assume that R R ⇣ sin(z) z ⌘ z j dz = 0 for all 1 j m and R R ⇣ sin(z) z ⌘ |z|jdz < 1 for any m + 1 j 2m + 2 for some m 2 N. Then, for any k 2 RD, when Dv = 1 there exist universal constants C1, C2, C3, C4 such that the following holds: E ⇥ (fN,R(k) f(k)) 2 ⇤ ✓ C1 R2(m+1) + (f(k) + C2)RD N ◆ p 2(k)J(R) , where J(R) = 1 1p2(k) ⇣ C3 R2(m+1) + C4R d log(NR) N ⌘ . Here, the outer expectation is taken with respect to the key vectors k1, . . . ,kN and the noises "1, . . . , "N . Proof of Theorem 2 is in Appendix D.3. A few comments with Theorem 2 are in order. First, by choosing R to balance the bias and variance in the bound of the MSE of the nonparametric generalized Fourier estimator fN,R, we have the optimal radius R as R = O(N 1 2(m+1)+D ). With that choice of the optimal radius R, the rate of fN,R is O(N 2(m+1)D+2(m+1) ). Second, when (z) = zl for l 6, the assumption on the function of Theorem 2 is satisfied with m = 1. Under this case, the rate of fN,R becomes O(N 4 D+4 ). In Appendix C, we also provide the rate of fN,R when (z) = zl for some l 5, which includes the original Fourier integral theorem. 3.2.2 FourierFormer Given the generalized Fourier nonparametric regression estimator fN,R in equation (14), by plugging the query values q1, . . . , qN into that function, we obtain the following definition of the Fourier attention: Definition 1 (Fourier Attention) A Fourier attention is a multi-head attention that does nonparametric regression using the generalized Fourier nonparametric regression estimator fN,R. The output ĥi of the Fourier attention is then computed as ĥi := fN,R(qi) = PN i=1 vi QD j=1 ⇣ sin(R(qij kij)) R(qij kij) ⌘ PN i=1 QD j=1 ⇣ sin(R(qij kij)) R(qij kij) ⌘ 8 i 2 [N ]. (15) Given the Fourier Attention in Definition 1, we then give the definition of FourierFormer as follows. Definition 2 (FourierFormer) A FourierFormer is a transformer that uses Fourier attention to capture dependency between tokens in the input sequence and the correlation between features in each token. Remark 2 (The Nonnegativity of the Fourier Kernel) The density estimation via generalized Fourier integral theorem in Section 3.1.1 does not require the generalized Fourier density estimator to be nonnegative. However, empirically, we observe that negative density estimator can cause instability in training the FourierFormer. Thus, in FourierFormer, we choose the function to be a nonnegative function to enforce the density estimator to be nonnegative. In particular, we choose to be power functions of the form (x) = x2m, where m is an positive integer. Note that when m = 1 and m = 2, the kernels in our generalized Fourier integral estimators are the well-known Fejer-de la Vallee Poussin and Jackson-de la Vallee Poussin kernels [20]. 3.3 An Efficient Implementation of the Fourier Attention The Fourier kernel is implemented efficiently in the C++/CUDA extension developed by Pytorch [58]. The idea is similar to the function cdist [58], which computes the p-norm distance between each pair of the two collections of row vectors. In our case, we aim to compute kernel functions that represent a Fourier attention in Definition 1. The core of this implementation is the following Fourier metric function df : df (qi,kj) = DY d=1 ✓ sin(R(qid kjd)) R(qid kjd) ◆ . 
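Before turning to the optimized implementation, a naive pure-PyTorch sketch of the Fourier attention in Definition 1 may be helpful; it relies on automatic differentiation rather than a hand-written backward pass, uses the nonnegative choice ϕ(x) = x^4 from Remark 2, and treats R as a learnable scalar. The tensor shapes, the random inputs, and the small constant added to the normaliser are illustrative assumptions.

```python
# Naive, autograd-based sketch of Fourier attention (Definition 1 / equation (15)),
# not the paper's custom C++/CUDA torch.autograd.Function described below.
import math
import torch

def fourier_attention(q, k, v, R, eps=1e-9):
    """q, k: (N, D), v: (N, Dv). Returns the Fourier-attention output of shape (N, Dv)."""
    diff = R * (q.unsqueeze(1) - k.unsqueeze(0))      # (N, N, D) pairwise differences
    sinc = torch.sinc(diff / math.pi)                 # sin(x)/x with sinc(0) = 1
    weights = (sinc ** 4).prod(dim=-1)                # phi(x) = x**4 keeps the weights nonnegative
    weights = weights / (weights.sum(dim=-1, keepdim=True) + eps)   # eps guards an empty normaliser
    return weights @ v

N, D, Dv = 8, 8, 8
q = torch.randn(N, D, requires_grad=True)
k = torch.randn(N, D, requires_grad=True)
v = torch.randn(N, Dv, requires_grad=True)
R = torch.nn.Parameter(torch.tensor(0.5))             # learnable radius, as in the experiments
out = fourier_attention(q, k, v, R)
out.sum().backward()                                   # autograd gives gradients w.r.t. q, k, v and R
print(out.shape, float(R.grad))
```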
We directly implement df as a torch.autograd.Function [58] in which we provide an efficient way to compute the forward and backward functions (df and the gradient of df). While the implementation of the forward function is straightforward, the backward function is trickier since we need to optimize the code to compute the gradient of df w.r.t. the variables q, k, and R all at once. We can develop the backward function with highly parallel computation by exploiting the GPU architecture and utilizing the reduction technique. The computational time is comparable to the function cdist; thus, our FourierFormer implementation is just as time-efficient. 4 Experimental Results In this section, we numerically justify the advantage of FourierFormer over the baseline dot-product transformer on a range of large-scale tasks: language modeling on WikiText-103 [46] (Section 4.1), image classification on ImageNet [22, 67] (Section 4.2), time series classification on the UEA benchmark [5] (Section 4.3), reinforcement learning on the D4RL benchmark [29] (Section 4.4), and machine translation on IWSLT'14 De-En [10] (Section 4.5). We aim to show that: (i) FourierFormer achieves better accuracy than the baseline transformer on a variety of practical tasks with different data modalities, and (ii) FourierFormer helps reduce head redundancy compared to the baseline transformer (Section 4.6). Throughout the section, we compare FourierFormers with baseline dot-product transformers of the same configuration. In all experiments, we make the constant R in the Fourier attention (see equation (16)) a learnable scalar and choose the function ϕ(x) = x^4 (see Remark 2). All of our results are averaged over 5 runs with different seeds. The details on the models and training are provided in Appendix A. Moreover, additional experimental results are provided in Appendix E. Our PyTorch code with documentation can be found at https://github.com/minhtannguyen/FourierFormer_NeurIPS. 4.1 Language Modeling on WikiText-103 We report the validation and test perplexity (PPL) of FourierFormer versus the baseline transformer with the dot-product attention in Table 1. FourierFormers attain much better PPL than the baselines in both the small and medium configurations. For the small configuration, the improvements of FourierFormer over the baseline are 1.29 PPL in validation and 1.44 PPL in test. For the medium configuration, these improvements are 1.39 PPL in validation and 1.59 PPL in test. These results suggest that the advantage of FourierFormer over the baseline dot-product transformer grows with the model's size. This meets our expectation because larger models have larger query and key dimensions; e.g., the language model with the medium configuration in this experiment has query and key dimension 256, versus 128 for the small configuration. Since the advantage of FourierFormer results from its ability to capture correlations between features in the query and key vectors, the larger the query and key dimensions are, the greater the advantage FourierFormer has. 4.2 Image Classification on ImageNet In the ImageNet classification task, we illustrate the benefits of FourierFormers on a different data modality. We summarize our models' results in Table 2. As in the language modeling experiment, for this image classification task the DeiT model equipped with FourierFormer significantly outperforms the baseline DeiT dot-product transformer [79] in both top-1 and top-5 accuracy. 
This result suggests that the advantage of FourierFormer over the baseline dot-product transformer holds across different data modalities. 4.3 UEA Time Series Classification To evaluate FourierFormers on temporal sequences, we compare the accuracy of our models and the baseline softmax transformers trained on 10 datasets from the UEA Time Series Classification Archive benchmark [5]. We summarize our results in Table 3. We observe that FourierFormers outperform the softmax baselines on 7 out of 10 tasks and yield significantly better accuracy than the softmax transformer on average, showing that our models are beneficial when trained on temporal data. 4.4 Reinforcement Learning on the D4RL Benchmark We also examine the performance of our FourierFormers in reinforcement learning. In Table 4, we verify the advantage of the decision FourierFormer over the baseline decision transformer [12] on the continuous control tasks from the D4RL benchmark [29]. The decision FourierFormer is the decision transformer with the Fourier attention instead of the softmax attention. On this benchmark, our decision FourierFormer significantly outperforms the baseline decision transformer on 8 out of 9 tasks and on average across tasks. Each experimental result is averaged over 5 runs with different random seeds. We follow the architecture and training configuration from [93]. 4.5 Machine Translation on IWSLT'14 De-En We demonstrate the performance of FourierFormer on the IWSLT'14 De-En [10] neural machine translation task, in which the input sequences have varying lengths. Table 5 shows that the FourierFormer achieves better BLEU scores than the softmax baseline. 4.6 FourierFormer Helps Reduce Head Redundancy To study the diversity between attention heads, given the model trained for the WikiText-103 language modeling task, we compute the average L2 distance between heads in each layer. We show the layer-averaged mean and variance of the distances between heads in Table 6. The results in Table 6 show that FourierFormer obtains greater L2 distances between attention heads than the baseline transformer with the dot-product attention and thus helps reduce head redundancy. Note that we use the small configuration as specified in Section 4.1 for both models. 
Redundancy in Transformers [19, 47, 25] show that neurons and attention heads in the pre-trained transformer are redundant and can be removed when applied on a downstream task. By studying the contextualized embeddings in pre-trained networks, it has been demonstrated that the learned representations from these redundant models are highly anisotropic [49, 26]. Furthermore, [70, 74, 86, 68] employ knowledge distillation and sparse approximation to enhance the efficiency of transformers. Our FourierFormer is complementary to these methods and can be combined with them. 6 Concluding Remarks In this paper, we establish the correspondence between the nonparametric kernel regression and the self-attention in transformer. We then develop the generalized Fourier integral estimators and propose the FourierFormer, a novel class of transformers that use the generalized Fourier integral estimators to construct their attentions for efficiently capturing the correlations between features in the query and key vectors. We theoretically prove the approximation guarantees of the generalized Fourier integral estimators and empirically validate the advantage of FourierFormer over the baseline transformer with the dot-product attention in terms of accuracy and head redundancy reduction. It is interesting to incorporate robust kernels into the nonparametric regression framework of FourierFormer to enhance the robustness of the model under data perturbation and adversarial attacks. A limitation of FourierFormer is that it still has the same quadratic computational and memory complexity as the baseline transformer with the dot-product attention. We leave the development of the linear version of FourierFormer that achieves linear computational and memory complexity as future work. It is worth noting that there is no potential negative societal impacts of FourierFormer. Acknowledgements This material is based on research sponsored by the AFOSR MURI FA9550-18-1-0502, the ONR grant N00014-20-1-2093, the MURI N00014-20-1-2787, and the NSF under Grant# 2030859 to the Computing Research Association for the CIFellows Project (CIF2020-UCLA-38). NH acknowledges support from the NSF IFML 2019844 and the NSF AI Institute for Foundations of Machine Learning.
1. What is the focus and contribution of the paper on transformers? 2. What are the strengths of the proposed approach, particularly in terms of interpreting self-attention? 3. What are the weaknesses of the paper regarding its presentation and technical aspects? 4. Do you have any questions regarding the paper's content or presentation? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes the FourierFormer, in which the dot-product kernels are replaced by the generalized Fourier integral kernels. Unlike the dot-product kernels, where we need to choose a good covariance matrix to capture the dependency between the features of the data, the generalized Fourier integral kernels can automatically capture such dependency and remove the need to tune the covariance matrix. This paper theoretically proves that the proposed Fourier integral kernels can efficiently approximate key and query distributions and verifies this point through experiments on two transformer-based tasks. Strengths And Weaknesses Strengths 1. This paper introduces a new angle for interpreting the transformer and its key module. This work provides a nonparametric regression interpretation to study self-attention in transformers and formulates self-attention from the viewpoint of kernel regression. 2. This work adopts the generalized Fourier integral estimators to replace the traditional dot-product self-attention and provides theoretical guarantees for the estimator. 3. Overall, the paper is well organized and technically sound. The experimental results on multiple transformer-based tasks verify the efficiency of the proposed FourierFormer. Weaknesses 1. The derivation process and the presentation need to be improved. Some important symbol annotations or explanations are missing in the algorithm description, which makes it hard for readers to follow the derivation. For example, in equation (9), some important symbol annotations are missing, e.g. 's', 'R'. It is difficult for readers to follow the derivation, and the derivation of p(k) is crucial to the following interpretation. 2. Some minor issues in the paper: a) in line 100, "are i.i.d samples from."; b) in equation (6) one of the ψ is written as φ; c) in line 185, the C' in the text is in the wrong format. Questions In line 144, the subsection title is "(Generalized) Fourier Integral Theorems". Why do the authors bracket the word "generalized"? Limitations The paper did not address the limitations and potential negative societal impacts of the work.
NIPS
Title Bezier Gaussian Processes for Tall and Wide Data Abstract Modern approximations to Gaussian processes are suitable for “tall data”, with a cost that scales well in the number of observations, but under-performs on “wide data”, scaling poorly in the number of input features. That is, as the number of input features grows, good predictive performance requires the number of summarising variables, and their associated cost, to grow rapidly. We introduce a kernel that allows the number of summarising variables to grow exponentially with the number of input features, but requires only linear cost in both number of observations and input features. This scaling is achieved through our introduction of the Bézier buttress, which allows approximate inference without computing matrix inverses or determinants. We show that our kernel has close similarities to some of the most used kernels in Gaussian process regression, and empirically demonstrate the kernel’s ability to scale to both tall and wide datasets. Gaussian processes (GPs) are a probabilistic approach to modelling functions that permit tractable Bayesian inference. They are, however, notorious for their poor scalability. In recent decades, this criticism has been challenged. Several approximate methods now allow GPs to scale to millions of data points. Yet, scalability in the number of data points is merely one challenge of big data. There are still problems associated with the input dimensionality – one aspect of the famed curse of dimensionality. Burt et al. [2020] analysed the most studied approximation, the so-called sparse inducing points methods, and showed it to be accurate for low dimensional inputs. Alarmingly, exponentially many inducing points are still needed in high-dimensional input spaces, that is, for problems with a large number of features. As such, despite modern GP approximations scaling to tall data, they are still discounted when concerning wide data. In response to this, there exist GP approximations built on simplices or grid-structures in the input space [Wilson and Nickisch, 2015, Gardner et al., 2018, Kapoor et al., 2021]. These take advantage of attractive fast linear algebra, but are often limited by memory in higher dimensions. Their advantage is the ability to fill the input space with structured points, so all observations have a close neighbour. We propose a new kernel for GP regression that requires neither matrix inversion nor determinant calculation – GPs’ two core computational sinners. Additionally, we cover the input space1 with exponentially many points, but introduce an approximation that grows only linearly in computational complexity. That is, our method scales linearly in both the number of data points and the number of input dimensions, whilst being space-filling in the input domain. GPs are indispensable to fields where uncertainty is a driver in decision-making mechanisms. Such fields include Bayesian optimisation, active learning and reinforcement learning. The critical decision mechanism is the exploration-exploitation trade-off. One ability useful in such fields is to assign high uncertainty to unexplored regions, just as does an exact GP. We show that our proposed model also assigns high uncertainty to unexplored regions, suggesting our model as well-suited to decisionmaking problems. A limiting assumption of our kernel is its restriction to a box-bounded domain in the input space. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
1 Background Bézier curves and surfaces are parametrised geometric objects that have found great usage in computer-aided design and robotics [Prautzsch et al., 2002]. The simplest Bézier curve is the linear interpolation of two points $p_0$ and $p_1$ in $\mathbb{R}^D$; the Bézier curve of order 1 is
$$c(t) = (1-t)\,p_0 + t\,p_1, \qquad t \in [0, 1]. \tag{1}$$
Higher-order curves are generalised in the following way: the order-$\nu$ Bézier curve is defined as
$$c(t) = \sum_{i=0}^{\nu} B^{\nu}_i(t)\, p_i, \qquad t \in [0, 1]. \tag{2}$$
In Bézier terms, the $p_i$ are referred to as control points. Notice that an order-$\nu$ Bézier curve has $\nu + 1$ control points. $B^{\nu}_i$ denotes the $i$th Bernstein polynomial of order $\nu$. They are defined as
$$B^{\nu}_i(t) = \frac{\nu!}{i!(\nu - i)!}\, t^{i}(1-t)^{\nu - i}. \tag{3}$$
By going from curves to surfaces we wish to extend from the scalar $t$ to a spatial input $x \in [0,1]^d$, for $d > 1$. Here, we can define Bézier $d$-surfaces as
$$c_d(x) = \sum_{i_1=0}^{\nu_1} \sum_{i_2=0}^{\nu_2} \cdots \sum_{i_d=0}^{\nu_d} B^{\nu_1}_{i_1}(x_1)\cdots B^{\nu_d}_{i_d}(x_d)\, p_{i_1,\ldots,i_d}, \qquad x = (x_1, x_2, \ldots, x_d) \in [0,1]^d. \tag{4}$$
Figure 1 gives a visual illustration of a 2-dimensional surface embedded in $\mathbb{R}^3$. 
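To fix ideas, a minimal NumPy sketch of equations (2)–(4) is given below: it evaluates the Bernstein basis of a given order and contracts a tensor of control points against the per-dimension bases. The orders and the random control points are illustrative.

```python
# Sketch of evaluating Bernstein polynomials and a Bezier d-surface (equations (2)-(4)).
import numpy as np
from math import comb

def bernstein(nu, t):
    """All Bernstein polynomials B^nu_i(t), i = 0..nu, as a vector of length nu + 1."""
    i = np.arange(nu + 1)
    return np.array([comb(nu, j) for j in i]) * t ** i * (1.0 - t) ** (nu - i)

def bezier_surface(x, control):
    """x in [0,1]^d, control of shape (nu_1+1, ..., nu_d+1): returns c_d(x)."""
    out = control
    for xg in x:
        basis = bernstein(out.shape[0] - 1, xg)
        out = np.tensordot(basis, out, axes=(0, 0))   # contract the current dimension
    return out

rng = np.random.default_rng(0)
curve_pts = rng.standard_normal(4)                    # order-3 curve with scalar outputs
surf_pts = rng.standard_normal((5, 6))                # order (4, 5) surface with scalar outputs
print(bezier_surface(np.array([0.3]), curve_pts))
print(bezier_surface(np.array([0.3, 0.7]), surf_pts))
```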
In the literature, it is difficult to find any studies of d-surfaces for d > 2. This paper targets especially this high-dimensional input case. We restrict our output dimension to 1, the regression problem, but the methods naturally extend to multidimensional outputs. The red points in Figure 1 show how each control points has an associated location in the input space. They are placed on a grid-like structure, and the order of each dimension determines how fine the mesh-grid of the hypercube is; i.e. how dense the input-space is filled. Gaussian processes (GPs) are meticulously studied in probability and statistics [Williams and Rasmussen, 2006]. They provide a way to define a probability distribution over functions. This makes them useful, as priors, to build structures for quantifying uncertainty in prediction. They are defined as a probability measure over functions f : X → R, such that any collection of elements (x1,x2, . . . ,xn) in X have their associated output (f(x1), . . . , f(xn)) following a joint Gaussian distribution. This distribution is fully determined by a mean function m : X → R and a positive semi-definite kernel function k : X ×X → R. They admit exact Bayesian inference. However, exact inference comes at a prohibitive worst-case computational cost of O(n3), where n is the number of training points, due to computing inverse and determinant of the kernel matrix. Sparse Gaussian processes [Snelson and Ghahramani, 2005] overcome this burden by conditioning on m inducing points, reducing complexity to O(nm2), where usually m << n. The inducing points, denoted u, are then marginalised to obtain an approximate posterior of f . The variational posterior mean and variance, at a location x∗ are then given by E[f(x∗)] = k(x∗,Z)k(Z,Z)−1µu, (5) Var(f(x∗)) = k(x∗,x∗)− k(x∗,Z)k(Z,Z)−1 (k(Z,Z)− Σu) k(Z,Z)−1k(Z,x∗), (6) under the assumption of a constant zero prior mean function. This assumption is easily relaxed if needed. Here Z denotes the inducing locations in the input space, i.e. f(Z) = u ∼ N (µu,Σu). Under further assumption of Gaussian observation noise , i.e. y∗ = f(x∗) + , then Titsias [2009] showed the optimal µu and Σu are known analytically. In a sought analogy to Figure 1, Z would be the red points and u would be the orange. 2 Bézier Gaussian Processes Inspired by Bézier surfaces, we construct a Gaussian process f : [0, 1]d → R as f(x) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)P i1,...,id , (7) where P i1,i2,...,id ∼ N (ϑi1,i2,...,id ,Σi1,i2,...,id) are Gaussian variables and x = (x1, x2, . . . , xd). Here, xγ ∈ [0, 1] for γ = 1, . . . , d. We write P , with capital letter to emphasise that it is a random variable now. Further, we write it in boldface though we here only consider the scalar case, i.e. regression, but the multi-output case is not fundamentally different. It is easy to verify that f satisfies the definition of a GP since it, for any x, is a scaled sum of Gaussians. We assume that all P i1,...,id are fully independent. With that assumption, we can make the following observation for the mean and kernel function µ(x) := ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)ϑi1,...,id , and (8) k(x, z) := Cov (f(x), f(z)) = ν1∑ i1=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)Σi1,...,idB ν1 i1 (z1)· · ·Bνdid (zd). (9) The Bernstein polynomials can approximate any continuous function given the order is large enough, thus they make a good basis for GP regression [Hildebrandt and Schoenberg, 1933]. 
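With independent Gaussian control points, the prior mean and covariance in equations (8) and (9) are weighted sums over the Bernstein tensor-product basis, and can be computed directly as in the sketch below. The orders, the zero prior mean, and the unit prior variances are illustrative assumptions; the bernstein helper repeats the one in the previous sketch.

```python
# Sketch of the Bezier GP prior moments of equations (8)-(9) under independent control points.
import numpy as np
from math import comb

def bernstein(nu, t):
    i = np.arange(nu + 1)
    return np.array([comb(nu, j) for j in i]) * t ** i * (1.0 - t) ** (nu - i)

def basis_tensor(x, orders):
    """Outer product of per-dimension Bernstein bases: shape (nu_1+1, ..., nu_d+1)."""
    out = np.array(1.0)
    for nu, xg in zip(orders, x):
        out = np.multiply.outer(out, bernstein(nu, xg))
    return out

def prior_moments(x, z, theta, sigma2):
    """theta, sigma2: mean and variance of each control point (same tensor shape)."""
    orders = [s - 1 for s in theta.shape]
    bx, bz = basis_tensor(x, orders), basis_tensor(z, orders)
    mean_x = np.sum(bx * theta)                 # equation (8)
    cov_xz = np.sum(bx * sigma2 * bz)           # equation (9); the variance is cov(x, x)
    return mean_x, cov_xz

orders = (4, 4)                                  # two input dimensions, order 4 each
theta = np.zeros([nu + 1 for nu in orders])      # zero prior mean
sigma2 = np.ones_like(theta)                     # unadjusted unit prior variances
x = np.array([0.5, 0.5])
print(prior_moments(x, x, theta, sigma2))        # prior mean and variance of f at the centre
```

Running this at the centre of the domain gives a prior variance well below one under unit control-point variances, which is what the prior adjustment discussed next is designed to counteract.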
Naturally, selecting a prior over f comes down to selecting a prior over the random control points P . The most common prior mean in GP regression, the constant zero function, is then easily obtained by ϑi1,...,id = 0 for all i. By construction, the choice of Σi1,...,id needs consideration to yield a convenient prior over f . Mindlessly setting Σi1,...,id = 1 would make Var (f(x)) collapse to zero quickly in the central region of the domain, especially as dimensions d grow. This, of course, gives a much too narrow prior over f . Figure 2 (middle) shows that in the central region the standard deviation of f is smaller due to the nature of the Bernstein polynomials. If we consider instead a two-dimensional input the standard deviation would collapse even more, as we would then see the shrinking effect for both dimension and multiply them. We can, however, adjust for this. We define the inverse squared Bernstein adjusted prior to counter this effect. In all dimensions γ = 1, . . . , d, let ςγ = A −1 γ 1νγ+1, where Ai,j = ( B νγ j (i/νγ) )2 , (10) and νγ denotes the order of the dimension γ. Then setting Σi1,...,id = ∏d γ=1 ςγ(iγ) ensures that Var (f(x)) ≈ 1 over the entire domain [0, 1]d. Eq. 10 solves a linear system, such that Var (f(i/νγ)) = 1, for i = 0, . . . , νγ . This means a prior hardly distinguishable from standard stationary ones such as the RBF kernel. Visual representation of this prior is shown in Figure 2 (right). This adjustment works up to νγ = 25, after which negative values occur. Summarising, we introduced a kernel based on Bézier surfaces. An alternative viewpoint is that f is a polynomial GP, but with Bernstein basis rather than the canonical basis. We remark that f is defined outside the domain [0, 1]d; any intuition about the prior there is not considered, and we will not pursue investigating data points outside this domain. Of course, for practical purposes this domain generalises without loss of generality to any rectangular domain [a1, b1]×. . .×[ad, bd]. For presentation purposes we keep writing [0, 1]d. Next, we show how we infer an approximate posterior given data. 2.1 Variational inference Let P denote the set of all P i1,...,id . As the prior of f is fully determined by the random control points P , the posterior of f is determined by the posterior of these. As per above, we set the prior p(P ) = ν1∏ i1=0 ν2∏ i2=0 · · · νd∏ id=0 p(P i1,...,id) := ν1∏ i1=0 ν2∏ i2=0 · · · νd∏ id=0 N (0,Σi1,...,id), (11) where Σi1,...,id = ∏d γ=1 ςγ(iγ). We utilise variational inference to approximate the posterior of the control points, and hence f . This means we introduce variational control points. We assume they are fully independent (usually called the mean-field assumption), and have free parameters for the mean and variance, such that P i1,...,id ∼ N (ϑ̂i1,i2,...,id , Σ̂i1,i2,...,id). Assume we have observed data D = {xj , yj}nj=1. The key quantity in variational inference is the Kullback-Leibler divergence between the true posterior p(P |y) and the variational approximation – which we denote q(P ). The smaller divergence, the better approximation of the true posterior. Without access to the true posterior, the quantity is not computable. However, it has been shown this divergence is equal to the slack in Jensen’s inequality used of the log-marginal likelihood: log p(y). log p(y) = log ∫ p(y|P )p(P )dP ≥ ∫ log ( p(y|P )p(P ) q(P ) ) q(P )dP (12) = Eq(P ) [log p(y|P )]− KL (q(P )‖p(P )) . (13) Knowing this, we can approximate the true posterior with q(P ) by maximising Eq. 
(13). This is the evidence lower bound, and it is maximised with respect to the variational parameters ϑ̂i1,i2,...,id and Σ̂i1,i2,...,id . This is fully analytical when the variational parameters and a Gaussian likelihood is assumed. We assume our observation model is disturbed with additive Gaussian noise, which in other words means our likelihood is Gaussian p(yj |P ) := N ( yj |f(xj), σ2 ) , σ2 > 0, (14) for each j = 1, . . . , n and we assume they are independent conditioned on P . With these assumption the first term in Eq. (13) becomes Eq(P ) [log p(y|P )] = − 1 2 n∑ j=1 log(2π) + log(σ2) + ( yj − Eq(P )[f(xj)] )2 + Varq(P ) (f(xj)) σ2 , (15) where Eq(P )[f(xj)] and Varq(P )(f(xj)) are given as Eq. (8) and (9) respectively, but with the variational parameters ϑ̂i1,i2,...,id and Σ̂i1,i2,...,id used. The second term in Eq. (13) enjoys the independence of control points to split into sums KL (q(P )‖p(P )) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 KL (q(P i1,...,id)‖p(P i1,...,id)) (16) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Σ̂i1,i2,...,idΣi1,i2,...,id − 1 + ϑ̂ 2 i1,i2,...,id Σi1,i2,...,id + log Σi1,i2,...,id Σ̂i1,i2,...,id . (17) When inspecting the evidence lower bound, Eq. (13), we see it has a data term forcing control points to fit the data, and a KL-term to make control points revert to the prior. Knowing how control points are allocated in the input domain, we expect control points in regions of no data revert to the prior. This is similar to what stationary kernels do in said regions. We verify visually in Section 4. All together, Bézier GPs can be adjusted to have priors similar to stationary GPs, and have analogous posterior behaviour, which is favourable to many practitioners. But Bézier GPs scale. None of the terms in the evidence lower bound require matrix inversions or determinants. It is simple to mini-batch over the data points, utilising stochastic variational inference [Hoffman et al., 2013], to scale it to large n. However, nearly all terms require evaluations of huge sums if the input dimension is high. The next section is aimed at this problem. 2.2 Scalability with the Bézier buttress Until this point, we have omitted addressing the number of random control points needed for Bézier GPs. Let us denote this number τ . It can quickly be checked that τ = ∏d γ=1(νγ + 1). This implies that to evaluate f we must sum over τ summands, which as d increases, quickly becomes computationally cumbersome. τ increases exponentially with d. It is justifiable to view the random control points as inducing points; after all, they are Gaussian variables in the output space. Thus, it would be extremely valuable to manage exponentially many of them. To overcome this, we introduce the Bézier buttress2. We assume parameters of the random control points, say ϑ, can parametrise ϑi1,i2,...,id = ∏d γ=1 wiγ−1,iγ ,γ , where w0,i1,1 := wi1,1. This assumption is the key of the Bézier buttress. Figure 3 provides visualisation. It visualises a source-sink graph, where each unique path from source to sink represents one unique control point with above parametrisation. The cyan highlighted path represents the ϑ1,2,3 = w1,1w1,2,2w2,3,3, where we multiply the values along the path from source to sink. Notice last edges have value 1. In the Bézier buttress there are d layers, one for each input dimension, and νγ + 1 nodes in each layer γ = 1, . . . , d. 
Borrowing from neural network terminology, a forward-pass is a sequential series of matrix multiplications which are element-wise warped with non-linearities, such as tanh or ReLU. If 2A buttress is an architectural structure that provides support to a building. we let our sequence of matrices be w1,w2, . . . ,wd, where wγ is the matrix with entries {wi,k,γ}i,k. Let 1ν denote the vector of size ν with 1 in all entries, then fixing ’the input’ to 1>ν1+1 and the last matrix to 1νd+1, a forward pass is 1>ν1+1w1w2 · · ·wd1νd+1 = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 ϑi1,i2,...,id . (18) We see this forward pass computes a sum over all control points means. Naturally, we construct a Bézier buttress for summing over variances too, which must be restricted to positive weights. It was these sums that formed our bottleneck, in this parametrisation it is just a sequence of matrix products. What about computing f? It comes down to a use of ‘non-linearities’ in the buttress. Multiplying element-wise the Bernstein polynomials as seen in Figure 3 (visualised only on 3rd layer), a forward pass computes either E[f(x)], or Var(f(x)) if using squared Bernstein polynomials. Each control point is then exactly multiplied by the correct polynomials from its way from source to sink. Notice, the ‘input’ is fixed to 1 in the source, but the observed x is appearing via the Bernstein polynomials along the way. We can write this too as a sequence of matrix products E[f(x)] = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)ϑi1,...,id = 1 > ν1+1w1Bx1 · · ·wdBxd1νd+1, (19) where Bxγ is a diagonal matrix with the νγ + 1 Bernstein polynomials on its diagonal. For the variance of f(x) there exists a similar expression, of course with squared polynomials, and with the positive weights associated with the variance Bézier buttress. On this inspection, all terms needed to compute Eq. (15) are available. All terms for the KL divergences are algebraic manipulations of Eq. (18) – these are explicit in the supplementary material. The key takeaway for Bézier buttress is that we parametrise each random control point as a product of weights. This can be seen as an amortisation, and we do inference on these weights rather than the control points themselves. Hence, no matrix inversions are needed to backpropagate through our objective, and a forward pass is only a sequence of matrix products. 2.3 Marginalising matrix commutativity Matrix multiplication is not commutative. This implies the ordering of the matrices in Eq. (18) matter, which again implies how we order the input dimensions in the Bézier buttress is of importance. This is the price for the computational benefit this parametrisation gives. An ordering of the input dimensions is somewhat unnatural for spatial regression, so we present a way to overcome this in approximate Bayesian manner. Define f = f1 + f2 + · · ·+ fr, where each of these individual fk, k ∈ {1, . . . , r}, are Bézier GPs with a random permutation of the ordering in the associated Bézier buttress. In other words, we let f be an ensemble of Bézier GPs to (approximately) marginalise over all possible orderings. The number of all possible orderings quickly becomes too large for which to account; in practice, we set r in a feasible region, say 20, which gives satisfactory empirical performance. Another lens on this is that each control point is a sum of r control points – each control point’s standard deviation is scaled by r−1, to obtain the same prior as so far discussed. 
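To make the forward passes of equations (18)–(19) and the ordering ensemble of Section 2.3 concrete, the sketch below computes the mean of f(x) as a chain of matrix products, with one buttress per random permutation of the input dimensions; a second buttress with positive weights and squared Bernstein polynomials would be needed for the variance. The orders, the weight initialisation, and the choice r = 4 are illustrative assumptions.

```python
# Sketch of the Bezier-buttress mean forward pass (equation (19)) with an ordering ensemble.
import numpy as np
from math import comb

def bernstein(nu, t):
    i = np.arange(nu + 1)
    return np.array([comb(nu, j) for j in i]) * t ** i * (1.0 - t) ** (nu - i)

def buttress_mean(x, layer_weights, perm):
    """Chain of matrix products; perm[l] is the input dimension feeding layer l."""
    vec = np.ones((1, 1))
    for l, dim in enumerate(perm):
        nu = layer_weights[l].shape[1] - 1
        vec = vec @ layer_weights[l] @ np.diag(bernstein(nu, x[dim]))
    return float(vec.sum())                      # multiply by the ones vector at the sink

d, nu, r = 10, 5, 4
rng = np.random.default_rng(0)
buttresses = []                                   # one buttress per random ordering (Section 2.3)
for _ in range(r):
    w = [rng.standard_normal((1 if l == 0 else nu + 1, nu + 1)) * 0.1 for l in range(d)]
    buttresses.append((rng.permutation(d), w))

x = rng.uniform(0.0, 1.0, size=d)
mean_f = sum(buttress_mean(x, w, perm) for perm, w in buttresses)
print(mean_f)                                     # mean of f(x) for these weights; cost O(r d nu^2)
```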
Oddly, we have then circumvented the problem of too many control points by introducing even more control points. That is, each control point mean parametrise ϑi1,i2,...,id = ∑r k=1 ∏d γ=1 witk(γ)−1,itk(γ),tk(γ), where tk denotes a random permutation of (1, . . . , d). We remind again that similar expression exist for control point variances, restricted to positive weights. As remarked, inference comes down to a forward pass in the Bézier buttress, a sequence of d matrix multiplications. Assume all dimension are of order ν, then the computational complexity of one forward pass is O ( d(ν + 1)2 ) . Now we need r forward passes to marginalise the ordering, and n forward passes, one for each observation, leaving final complexity of O(nrdν2). Linear in n and d. 3 Related Work Variational inference in GPs was initially considered by Csató et al. [1999] and Gibbs and MacKay [2000]. In recent times, the focus has shifted focus to scalable solutions to accommodate the big data era. In this respect, Titsias [2009] took a variational approach to the inducing points methods [Quinonero-Candela and Rasmussen, 2005]; later Hensman et al. [2013] further enhanced scalability to allow for mini-batching and a wider class of likelihoods. Still, the need for more inducing points is of importance, especially as the number of input features grows. The response to this has mostly revolved around exploiting some structure of the kernel. Wilson and Nickisch [2015] and Wu et al. [2021] exploit specific structure that allow for fast linear algebra methods; similar to our method inducing locations tend to lie on grid. These grids expand fast as the input dimension increases, as also pointed out earlier in the article. Kapoor et al. [2021] remedy this by instead of a rectangular grid, they consider the permutohedral lattice, such that each observation only embeds to d+ 1 neighbours instead of 2d, as in [Wilson and Nickisch, 2015]. Another approach to allowing for more inducing points is incorporating nearest neighbour search in the approximation [Kim et al., 2005, Nguyen-Tuong et al., 2008]. Tran et al. [2021] introduced sparse-within-sparse where they have many inducing points in memory, but each time search for the k-nearest ones to any observation. They discard the remaining ones as they have little to no influence on the observation. Wu et al. [2022] made a variational pendant to this method. Lastly, when dealing with high-dimensional inputs it is worth mentioning feature learning. That is, learning more low-dimensional features where GPs have better performance. The success of deep models has been transported to GPs by Damianou and Lawrence [2013] and later scaled to large datasets in [Salimbeni and Deisenroth, 2017]. Another approach is Deep Kernel Learning [Wilson et al., 2016, Bradshaw et al., 2017], where feature extraction happens inside the kernel function; lately Ober et al. [2021] has investigated the limitations and benefits of these models. We have treated structured control points as our version of inducing points; and by parametrising them with a Bézier buttress, we limit the expansion of grids to linear growth in parameters. We are not the first to consider the Bernstein polynomials as a basis for learning functions. Petrone [1999b] used it to model kernel estimate probability density functions, and several follow up works [Petrone, 1999a, Petrone and Wasserman, 2002]. Hug et al. [2020] recently introduced Bézier GPs, but with a focus on time series [Hug et al., 2022]. 
Our emphasis has been on spatial input; even the Bézier surface literature contains close to nothing on more than 2-dimensional surfaces.

4 Evaluation

We split our evaluation into four parts. First, we visually inspect the posterior on a one-dimensional toy dataset to show how the control points behave, and to indicate that there indeed is a stationary-like behaviour on the domain of the hypercube. Next, we test empirically on some standard UCI benchmark datasets, to give insight into when Bézier GPs are applicable. After that, we switch to tall and wide data – large both in the input dimension and in the number of data points. These experiments give certainty that the method delivers on its key promise: scalability. Lastly, we turn our eyes to the method itself and investigate how performance is influenced by the ordering of dimensions.

Care is needed in optimising a Bézier GP – not all parameters are born equal. We split optimisation into two phases. First, we optimise all variational parameters, keeping the likelihood variance σ² fixed at τ⁻¹, with τ being the number of control points. After this initial phase, we optimise σ² with all variational parameters fixed. We let both phases run for 10000 iterations with a mini-batch size of 500, for all datasets. Both phases use the Adam optimiser [Kingma and Ba, 2015], the first phase with learning rate 0.001, and the second with learning rate 0.01. If not following such a bespoke training scheme, we see a tendency for the posterior to revert to the prior, because the KL term becomes too dominant initially. This training scheme is designed for the Gaussian likelihood, but we wish to emphasise that, in principle, the loss function is valid for any choice of likelihood.

4.1 One dimensional visual inspection

We hypothesised that the objective function, Eq. (16), would ensure, within the hypercube domain, that f reverts to its prior in regions where data is scarce. To verify this we construct a small dataset to inspect. We generate one-dimensional inputs uniformly in the regions [0, 0.33] and [0.66, 1]; we sample 20 observations in each region. The response variable is generated as y(x) = 3 sin(16x). According to the hypothesis, f should, in the region [0.33, 0.66], tend towards zero in mean and increase its variation. We use a BézierGP of order 20 to model the observations, since they are highly non-linear. Figure 4 shows the posterior distribution of f on the left; we observe that f tends towards the prior in the middle region. The middle plot illustrates the distribution, both prior and posterior, of the 21 control points. There is a clear tendency for the central-most points to align the prior and posterior, enforcing this behaviour in f. The non-equal priors are due to the inverse-squared Bernstein adjusted prior, which ensures a uniform variation in f over the domain, see Figure 2. The plot to the right in Figure 4 shows the behaviour foundational to practitioners of Bayesian optimisation, active learning, etc.: the variance increases away from data regions.

4.2 UCI Benchmark

We evaluate on eight small to mid-size real-world datasets commonly used to benchmark regression [Hernandez-Lobato and Adams, 2015]. We split each dataset into a train/test split with the ratio 90/10. We do this over 20 random splits and report test-set RMSE and log-likelihood averages and standard deviations over splits. We choose SGPR as a baseline, following the method from Titsias [2009]; we do so both for 100 and 500 inducing variables.
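Returning briefly to the toy example of Section 4.1, the dataset is straightforward to generate. The paper does not state whether observation noise was added, so the sketch below is noise-free by assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 inputs sampled uniformly in each of the regions [0, 0.33] and [0.66, 1].
x = np.concatenate([
    rng.uniform(0.0, 0.33, size=20),
    rng.uniform(0.66, 1.0, size=20),
])
y = 3.0 * np.sin(16.0 * x)  # response variable y(x) = 3 sin(16x)
```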
SimplexGP is another baseline, they suggest their approximation is beneficial for semi-high dimensional inputs (between 3 and 20) [Kapoor et al., 2021], hence they are an obvious baseline. SimplexGP usually use a validation set to choose the final model. This is due to a highly noisy optimisation scheme using a high error-tolerance (1.0) for conjugate gradients. We remedy this by setting the error-tolerance to (0.01), which harms scalability, but we can omit using a validation set for better comparability. Wang et al. [2019] recommend this error-tolerance, but remark it is more stable for RMSE than for log-likelihood. For BézierGP, we fix the number of permutations to r = 20, and vary the order in ν = 5, 10, 20. The order is identical over input dimensions. The inputs are pre-processed such that the training set is contained in [0, 1]d. Table 1 contains the results of this experiment. We make the following observations about our presented BézierGP. On keggdirected and elevators there are test points outside the defined domain on some splits, which cause the RMSE to be extreme, but the likelihood is more forgiving. This highlights the constraint of our model: it needs a box-bounded domain to be a priori known. Had we standardised such that both test and train data were in the hypercube, BézierGP (ν = 20) would have a average test RMSE of 0.0937. We could not reproduce results from Kapoor et al. [2021] on keggdirected, the optimisation was too noisy and with no use of validation. On concrete, boston and energy we see overfitting tendencies. Even though BézierGP is the optimal choice on energy there is a mismatch between train and test error. On concrete this shows in better test RMSE, than the baselines, but the variance is overfitted yielding non-optimal likelihood. We conjecture this happens because the n/d-ratio is low; which makes it more likely to overfit the control points – especially for higher orders ν. Knowing these model fallacies, we observe that BézierGP outperforms on baselines on multiple datasets, most notably the 17-dimensional bike dataset. 4.3 Large scale regression Figure 5 shows the results of regression tasks in regimes of high dimensions and one in high number of observations. Here, we follow exactly the experimental setup of either Salimbeni and Deisenroth [2017] or Wang et al. [2019]. If the latter, we use the validation-split they use as training data. Our optimisation scheme for BézierGP is consistent with above, except for slice, where the first training phase runs for 30000 iterations. We discard test points that are not in the [0, 1]d domain – in no situation did this remove more than 0.001% of the test set. The number after DGP, denotes the number of hidden layers in a Deep GP [Salimbeni and Deisenroth, 2017], after SGPR and SVGP it denotes the number of inducing points. SVGP refers to the method from Hensman et al. [2013]. After B, it denotes the order used in BézierGP. On year, we observe our (non-deep) model is on-par with 2-layered Deep GPs, and closer to 3 in RMSE. The highest dimensional dataset, slice, sees us in the low n/d-ratio again, and we are again faced with a too flexible model. This is why we report results for orders 3 and 5, rather than 20, since the overfitting kicks in. Even for these small orders the test loglikelihood has high variance and under-performs compared to RMSE. With respect to RMSE it is top-performer signalling again it is overfitting the variance. On the remaining two datasets BézierGP is best-performing among baselines. 
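Sections 4.2 and 4.3 pre-process the inputs so that the training set lies in [0, 1]^d and discard the few test points that fall outside it. One possible implementation is a per-feature min-max scaling fitted on the training data; the helper below is an illustrative sketch, not code from the paper.

```python
import numpy as np

def scale_to_unit_cube(X_train, X_test):
    """Min-max scale features so the training inputs lie in [0, 1]^d.

    Returns the scaled train/test inputs and a boolean mask marking the test
    points that remain inside [0, 1]^d (points outside can be discarded, as
    done in Section 4.3).
    """
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)  # guard against constant features
    X_train_s = (X_train - lo) / scale
    X_test_s = (X_test - lo) / scale
    inside = np.all((X_test_s >= 0.0) & (X_test_s <= 1.0), axis=1)
    return X_train_s, X_test_s, inside
```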
4.4 Influence of number of permutations All experiments so far used r = 20; that is, 20 random permutations of the ordering of dimension used in the Bézier buttress. For a problem with input dimension d, there exist d! possible permutations. Table 2 shows results with varying r; for each dataset, the results are over the same train/test split (0.9/0.1). We fixed ν = 20. Up to some noise in the optimisation phase, we see for the two highest dimensional datasets, protein and bike, performance improves with higher r. Bike has over 50% reduction in RMSE from r = 1 to r = 50. Table 2 emphasises the results we have presented are not optimised over hyperparameter r and ν. They also illustrate an interesting direction of future research: optimising these hyperparameters. We chose permutations by random sampling, but choosing them in a principled deliberate manner could yield good performance with a computationally manageable r. This result indicates, at least, protein and bike would see increased performance in Table 1 from better (or just more) permutations. 5 Discussion We introduced the Bézier Gaussian Process – a GP, with a polynomial kernel in the Bernstein basis, that scales to a large number of observations, and remains space-filling for high number of input features (limited to a box-bounded domain). We illustrated that, with slight adjustments, the prior and posterior have similar behaviour to ‘usual’ stationary kernels. We presented the Bézier buttress, a weighted graph to which we amortise the inference, rather than inferring the control points themselves. The Bézier buttress allows GP inference without any matrix inversion. Using the Bézier buttress, we inferred 6385 control points for the high-dimensional slice dataset. We highlighted weaknesses of the proposed model: most crucially the tendency of overfitting when the n/d-ratio is low. The results demonstrate scalability in both n and d, but does not solve the short, but wide problem. The paper did not optimise over the hyperparameters of the proposed kernel, namely ν and r, but it showcased briefly that doing so might enhance BézierGPs empirically; especially smart selection of the permutations is an interesting direction for future research. We speculate that optimising over orders, on a validation set, would alleviate some of the overfitting issues. Acknowledgments and Disclosure of Funding MJ is supported by the Carlsberg Foundation.
1. What is the focus and contribution of the paper on Gaussian processes?
2. What are the strengths of the proposed approach, particularly in handling high-dimensional data?
3. What are the weaknesses of the paper, especially regarding the concern about Bezier surfaces and the lack of theoretical justification?
4. How does the reviewer suggest improving the paper, such as providing more explanation or references for inverse squared Bernstein and Bernstein polynomials?
5. Does the reviewer have any questions regarding the construction of f(x) and its resemblance to deep structures?
6. What are the limitations of the paper, and how can they be addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper presents a Gaussian process method focusing on data in which the number of samples and the number of dimensions are both high. The GP prior is built upon Bezier surfaces and is formulated as a combination of Gaussian random variables and coefficients taken from Bernstein polynomials. The paper suggested adjusted priors and scalable construction via Bezier buttress. Experiments are performed on various data sets. Strengths And Weaknesses Strengths: The problem of handling high-dim GP is interesting , the approach is novel to tackle the problem in GP research. The paper gives a good analysis for the method and provides a scalable solution for GP. Weakness: I have one concern of how well Bezier d-surfaces can approximate data. What is the error as the number of dimensions increases? If there is any reference for this, it would be nice to add the clarification. It might be out of scope in this paper, there is no theoretical justification for that the proposed approach can address effectively high-dim data .This may be related to the previous point and the way the paper suggests constructing Bezier buttress. The empirical results (first plot of Fig 5.) may. The main concern is that we do not know whether the ability to handle high-dim data comes from deep networks or from the proposed approach. Questions How to perform the matrix inverse of k ( Z , Z ) ? Is it done analytically by using inverse squared Bernstein. If so, I would suggest to put more explanation or reference for inverse squared Bernstein and Bernstein polynomials which I feel is missing in this paper. (I’m not familiar with the literature here). The construction of f ( x ) somewhat resembles deep structures (of course with no activations), the comparison can be done extensively with other models like deep Gaussian process or deep kernel learning. However, the paper reports just one result with DGP for the “year” data set. Are the results for the remaining data sets (buzz, house electric, slice) available? Minor points: -The optimization is somehow pragmatic. Since the paper suggests KL terms may be big at initial steps, I think KL annealing techniques can be used here. Limitations Please see above.
NIPS
Title Bezier Gaussian Processes for Tall and Wide Data Abstract Modern approximations to Gaussian processes are suitable for “tall data”, with a cost that scales well in the number of observations, but under-performs on “wide data”, scaling poorly in the number of input features. That is, as the number of input features grows, good predictive performance requires the number of summarising variables, and their associated cost, to grow rapidly. We introduce a kernel that allows the number of summarising variables to grow exponentially with the number of input features, but requires only linear cost in both number of observations and input features. This scaling is achieved through our introduction of the Bézier buttress, which allows approximate inference without computing matrix inverses or determinants. We show that our kernel has close similarities to some of the most used kernels in Gaussian process regression, and empirically demonstrate the kernel’s ability to scale to both tall and wide datasets. Gaussian processes (GPs) are a probabilistic approach to modelling functions that permit tractable Bayesian inference. They are, however, notorious for their poor scalability. In recent decades, this criticism has been challenged. Several approximate methods now allow GPs to scale to millions of data points. Yet, scalability in the number of data points is merely one challenge of big data. There are still problems associated with the input dimensionality – one aspect of the famed curse of dimensionality. Burt et al. [2020] analysed the most studied approximation, the so-called sparse inducing points methods, and showed it to be accurate for low dimensional inputs. Alarmingly, exponentially many inducing points are still needed in high-dimensional input spaces, that is, for problems with a large number of features. As such, despite modern GP approximations scaling to tall data, they are still discounted when concerning wide data. In response to this, there exist GP approximations built on simplices or grid-structures in the input space [Wilson and Nickisch, 2015, Gardner et al., 2018, Kapoor et al., 2021]. These take advantage of attractive fast linear algebra, but are often limited by memory in higher dimensions. Their advantage is the ability to fill the input space with structured points, so all observations have a close neighbour. We propose a new kernel for GP regression that requires neither matrix inversion nor determinant calculation – GPs’ two core computational sinners. Additionally, we cover the input space1 with exponentially many points, but introduce an approximation that grows only linearly in computational complexity. That is, our method scales linearly in both the number of data points and the number of input dimensions, whilst being space-filling in the input domain. GPs are indispensable to fields where uncertainty is a driver in decision-making mechanisms. Such fields include Bayesian optimisation, active learning and reinforcement learning. The critical decision mechanism is the exploration-exploitation trade-off. One ability useful in such fields is to assign high uncertainty to unexplored regions, just as does an exact GP. We show that our proposed model also assigns high uncertainty to unexplored regions, suggesting our model as well-suited to decisionmaking problems. A limiting assumption of our kernel is its restriction to a box-bounded domain in the input space. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
1 Background

Bézier curves and surfaces are parametrised geometric objects that have found great usage in computer-aided design and robotics [Prautzsch et al., 2002]. The simplest Bézier curve is the linear interpolation of two points $\mathbf{p}_0$ and $\mathbf{p}_1$ in $\mathbb{R}^D$; the Bézier curve of order 1 is

$$\mathbf{c}(t) = (1-t)\,\mathbf{p}_0 + t\,\mathbf{p}_1, \qquad t \in [0, 1]. \tag{1}$$

Higher-order curves are generalised in the following way: the order-ν Bézier curve is defined as

$$\mathbf{c}(t) = \sum_{i=0}^{\nu} B^{\nu}_{i}(t)\,\mathbf{p}_i, \qquad t \in [0, 1]. \tag{2}$$

In Bézier terms, the $\mathbf{p}_i$ are referred to as control points. Notice an order-ν Bézier curve has ν + 1 control points. $B^{\nu}_{i}$ denotes the ith Bernstein polynomial of order ν. They are defined as

$$B^{\nu}_{i}(t) = \frac{\nu!}{i!\,(\nu - i)!}\, t^{i}(1-t)^{\nu - i}. \tag{3}$$

By going from curves to surfaces we wish to extend from the scalar t to a spatial input $\mathbf{x} \in [0,1]^d$, for d > 1. Here, we can define Bézier d-surfaces as

$$\mathbf{c}_d(\mathbf{x}) = \sum_{i_1=0}^{\nu_1}\sum_{i_2=0}^{\nu_2}\cdots\sum_{i_d=0}^{\nu_d} B^{\nu_1}_{i_1}(x_1)\cdots B^{\nu_d}_{i_d}(x_d)\,\mathbf{p}_{i_1,\ldots,i_d}, \qquad \mathbf{x} = (x_1, x_2, \ldots, x_d) \in [0,1]^d. \tag{4}$$

Figure 1 gives a visual illustration of a 2-dimensional surface embedded in $\mathbb{R}^3$.
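To make the Bernstein basis concrete, the short sketch below evaluates the Bernstein polynomials of Eq. (3) and an order-ν Bézier curve as in Eq. (2). It is only a minimal NumPy illustration; the function names are ours and not taken from any released code.

```python
import numpy as np
from math import comb

def bernstein(nu, i, t):
    """B^nu_i(t) = C(nu, i) * t^i * (1 - t)^(nu - i), Eq. (3)."""
    return comb(nu, i) * t**i * (1.0 - t)**(nu - i)

def bezier_curve(control_points, t):
    """Order-nu Bezier curve c(t) = sum_i B^nu_i(t) p_i, Eq. (2).

    control_points: array of shape (nu + 1, D); t: scalar in [0, 1].
    """
    nu = control_points.shape[0] - 1
    weights = np.array([bernstein(nu, i, t) for i in range(nu + 1)])
    return weights @ control_points  # shape (D,)

# Order-1 curve reduces to linear interpolation of p0 and p1, Eq. (1).
p = np.array([[0.0, 0.0], [1.0, 2.0]])
assert np.allclose(bezier_curve(p, 0.25), 0.75 * p[0] + 0.25 * p[1])
```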
In the literature, it is difficult to find any studies of d-surfaces for d > 2. This paper targets especially this high-dimensional input case. We restrict our output dimension to 1, the regression problem, but the methods naturally extend to multidimensional outputs. The red points in Figure 1 show how each control point has an associated location in the input space. They are placed on a grid-like structure, and the order of each dimension determines how fine the mesh-grid of the hypercube is, i.e. how densely the input space is filled.

Gaussian processes (GPs) are meticulously studied in probability and statistics [Williams and Rasmussen, 2006]. They provide a way to define a probability distribution over functions. This makes them useful, as priors, to build structures for quantifying uncertainty in prediction. They are defined as a probability measure over functions $f: \mathcal{X} \to \mathbb{R}$, such that any collection of elements $(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$ in $\mathcal{X}$ has its associated outputs $(f(\mathbf{x}_1), \ldots, f(\mathbf{x}_n))$ following a joint Gaussian distribution. This distribution is fully determined by a mean function $m: \mathcal{X} \to \mathbb{R}$ and a positive semi-definite kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. They admit exact Bayesian inference. However, exact inference comes at a prohibitive worst-case computational cost of $\mathcal{O}(n^3)$, where n is the number of training points, due to computing the inverse and determinant of the kernel matrix. Sparse Gaussian processes [Snelson and Ghahramani, 2005] overcome this burden by conditioning on m inducing points, reducing the complexity to $\mathcal{O}(nm^2)$, where usually $m \ll n$. The inducing points, denoted $\mathbf{u}$, are then marginalised to obtain an approximate posterior of f. The variational posterior mean and variance at a location $\mathbf{x}_*$ are then given by

$$\mathbb{E}[f(\mathbf{x}_*)] = k(\mathbf{x}_*, \mathbf{Z})\,k(\mathbf{Z}, \mathbf{Z})^{-1}\boldsymbol{\mu}_u, \tag{5}$$

$$\operatorname{Var}(f(\mathbf{x}_*)) = k(\mathbf{x}_*, \mathbf{x}_*) - k(\mathbf{x}_*, \mathbf{Z})\,k(\mathbf{Z}, \mathbf{Z})^{-1}\left(k(\mathbf{Z}, \mathbf{Z}) - \Sigma_u\right)k(\mathbf{Z}, \mathbf{Z})^{-1}k(\mathbf{Z}, \mathbf{x}_*), \tag{6}$$

under the assumption of a constant zero prior mean function. This assumption is easily relaxed if needed. Here $\mathbf{Z}$ denotes the inducing locations in the input space, i.e. $f(\mathbf{Z}) = \mathbf{u} \sim \mathcal{N}(\boldsymbol{\mu}_u, \Sigma_u)$. Under the further assumption of Gaussian observation noise $\epsilon$, i.e. $y_* = f(\mathbf{x}_*) + \epsilon$, Titsias [2009] showed the optimal $\boldsymbol{\mu}_u$ and $\Sigma_u$ are known analytically. In a sought analogy to Figure 1, $\mathbf{Z}$ would be the red points and $\mathbf{u}$ would be the orange.

2 Bézier Gaussian Processes

Inspired by Bézier surfaces, we construct a Gaussian process $f: [0,1]^d \to \mathbb{R}$ as

$$f(\mathbf{x}) = \sum_{i_1=0}^{\nu_1}\sum_{i_2=0}^{\nu_2}\cdots\sum_{i_d=0}^{\nu_d} B^{\nu_1}_{i_1}(x_1)\cdots B^{\nu_d}_{i_d}(x_d)\,\boldsymbol{P}_{i_1,\ldots,i_d}, \tag{7}$$

where $\boldsymbol{P}_{i_1,i_2,\ldots,i_d} \sim \mathcal{N}(\vartheta_{i_1,i_2,\ldots,i_d}, \Sigma_{i_1,i_2,\ldots,i_d})$ are Gaussian variables and $\mathbf{x} = (x_1, x_2, \ldots, x_d)$. Here, $x_\gamma \in [0,1]$ for $\gamma = 1, \ldots, d$. We write $\boldsymbol{P}$ with a capital letter to emphasise that it is now a random variable. Further, we write it in boldface even though we here only consider the scalar case, i.e. regression; the multi-output case is not fundamentally different. It is easy to verify that f satisfies the definition of a GP since it, for any $\mathbf{x}$, is a scaled sum of Gaussians. We assume that all $\boldsymbol{P}_{i_1,\ldots,i_d}$ are fully independent. With that assumption, we can make the following observation for the mean and kernel function:

$$\mu(\mathbf{x}) := \sum_{i_1=0}^{\nu_1}\sum_{i_2=0}^{\nu_2}\cdots\sum_{i_d=0}^{\nu_d} B^{\nu_1}_{i_1}(x_1)\cdots B^{\nu_d}_{i_d}(x_d)\,\vartheta_{i_1,\ldots,i_d}, \quad \text{and} \tag{8}$$

$$k(\mathbf{x}, \mathbf{z}) := \operatorname{Cov}(f(\mathbf{x}), f(\mathbf{z})) = \sum_{i_1=0}^{\nu_1}\cdots\sum_{i_d=0}^{\nu_d} B^{\nu_1}_{i_1}(x_1)\cdots B^{\nu_d}_{i_d}(x_d)\,\Sigma_{i_1,\ldots,i_d}\,B^{\nu_1}_{i_1}(z_1)\cdots B^{\nu_d}_{i_d}(z_d). \tag{9}$$

The Bernstein polynomials can approximate any continuous function given the order is large enough, thus they make a good basis for GP regression [Hildebrandt and Schoenberg, 1933].
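As an illustration of Eqs. (8) and (9), the sketch below evaluates the prior mean and kernel of a Bézier GP by brute force, summing over the full grid of control points. This is feasible only for small d and ν; removing this bottleneck is exactly the role of the Bézier buttress introduced in Section 2.2. The names and data structures are illustrative assumptions, not the paper's code.

```python
import numpy as np
from math import comb
from itertools import product

def bernstein(nu, i, t):
    return comb(nu, i) * t**i * (1.0 - t)**(nu - i)

def bezier_gp_mean_and_kernel(x, z, theta, Sigma, nus):
    """Naive evaluation of mu(x) (Eq. 8) and k(x, z) (Eq. 9).

    theta, Sigma: dicts mapping a multi-index (i_1, ..., i_d) to the prior
    mean and variance of the corresponding control point.
    nus: per-dimension orders (nu_1, ..., nu_d).
    """
    mu, k = 0.0, 0.0
    for idx in product(*(range(nu + 1) for nu in nus)):
        bx = np.prod([bernstein(nu, i, xi) for nu, i, xi in zip(nus, idx, x)])
        bz = np.prod([bernstein(nu, i, zi) for nu, i, zi in zip(nus, idx, z)])
        mu += bx * theta[idx]
        k += bx * Sigma[idx] * bz
    return mu, k

# Tiny example: d = 2, orders (2, 3), zero prior mean, unit prior variances.
nus = (2, 3)
idxs = list(product(range(nus[0] + 1), range(nus[1] + 1)))
theta = {i: 0.0 for i in idxs}
Sigma = {i: 1.0 for i in idxs}
mu, k = bezier_gp_mean_and_kernel([0.2, 0.7], [0.4, 0.1], theta, Sigma, nus)
```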
Naturally, selecting a prior over f comes down to selecting a prior over the random control points $\boldsymbol{P}$. The most common prior mean in GP regression, the constant zero function, is then easily obtained by $\vartheta_{i_1,\ldots,i_d} = 0$ for all i. By construction, the choice of $\Sigma_{i_1,\ldots,i_d}$ needs consideration to yield a convenient prior over f. Mindlessly setting $\Sigma_{i_1,\ldots,i_d} = 1$ would make $\operatorname{Var}(f(\mathbf{x}))$ collapse to zero quickly in the central region of the domain, especially as the dimension d grows. This, of course, gives a much too narrow prior over f. Figure 2 (middle) shows that in the central region the standard deviation of f is smaller due to the nature of the Bernstein polynomials. If we instead consider a two-dimensional input, the standard deviation would collapse even more, as we would then see the shrinking effect for both dimensions and multiply them. We can, however, adjust for this. We define the inverse squared Bernstein adjusted prior to counter this effect. In all dimensions $\gamma = 1, \ldots, d$, let

$$\boldsymbol{\varsigma}_\gamma = A_\gamma^{-1}\mathbf{1}_{\nu_\gamma+1}, \quad \text{where} \quad A_{i,j} = \left(B^{\nu_\gamma}_{j}(i/\nu_\gamma)\right)^2, \tag{10}$$

and $\nu_\gamma$ denotes the order of dimension $\gamma$. Then setting $\Sigma_{i_1,\ldots,i_d} = \prod_{\gamma=1}^{d}\varsigma_\gamma(i_\gamma)$ ensures that $\operatorname{Var}(f(\mathbf{x})) \approx 1$ over the entire domain $[0,1]^d$. Eq. (10) solves a linear system such that $\operatorname{Var}(f(i/\nu_\gamma)) = 1$ for $i = 0, \ldots, \nu_\gamma$. This means a prior hardly distinguishable from standard stationary ones such as the RBF kernel. A visual representation of this prior is shown in Figure 2 (right). This adjustment works up to $\nu_\gamma = 25$, after which negative values occur. Summarising, we have introduced a kernel based on Bézier surfaces. An alternative viewpoint is that f is a polynomial GP, but with the Bernstein basis rather than the canonical basis. We remark that f is defined outside the domain $[0,1]^d$; any intuition about the prior there is not considered, and we will not pursue investigating data points outside this domain. Of course, for practical purposes this domain generalises without loss of generality to any rectangular domain $[a_1,b_1] \times \ldots \times [a_d,b_d]$. For presentation purposes we keep writing $[0,1]^d$. Next, we show how we infer an approximate posterior given data.

2.1 Variational inference

Let $\boldsymbol{P}$ denote the set of all $\boldsymbol{P}_{i_1,\ldots,i_d}$. As the prior of f is fully determined by the random control points $\boldsymbol{P}$, the posterior of f is determined by the posterior of these. As per above, we set the prior

$$p(\boldsymbol{P}) = \prod_{i_1=0}^{\nu_1}\prod_{i_2=0}^{\nu_2}\cdots\prod_{i_d=0}^{\nu_d} p(\boldsymbol{P}_{i_1,\ldots,i_d}) := \prod_{i_1=0}^{\nu_1}\prod_{i_2=0}^{\nu_2}\cdots\prod_{i_d=0}^{\nu_d} \mathcal{N}(0, \Sigma_{i_1,\ldots,i_d}), \tag{11}$$

where $\Sigma_{i_1,\ldots,i_d} = \prod_{\gamma=1}^{d}\varsigma_\gamma(i_\gamma)$. We utilise variational inference to approximate the posterior of the control points, and hence of f. This means we introduce variational control points. We assume they are fully independent (usually called the mean-field assumption) and have free parameters for the mean and variance, such that $\boldsymbol{P}_{i_1,\ldots,i_d} \sim \mathcal{N}(\hat{\vartheta}_{i_1,i_2,\ldots,i_d}, \hat{\Sigma}_{i_1,i_2,\ldots,i_d})$. Assume we have observed data $\mathcal{D} = \{\mathbf{x}_j, y_j\}_{j=1}^{n}$. The key quantity in variational inference is the Kullback-Leibler divergence between the true posterior $p(\boldsymbol{P} \mid \mathbf{y})$ and the variational approximation, which we denote $q(\boldsymbol{P})$. The smaller the divergence, the better the approximation of the true posterior. Without access to the true posterior, this quantity is not computable. However, it has been shown that this divergence equals the slack in Jensen's inequality applied to the log-marginal likelihood $\log p(\mathbf{y})$:

$$\log p(\mathbf{y}) = \log \int p(\mathbf{y} \mid \boldsymbol{P})\,p(\boldsymbol{P})\,\mathrm{d}\boldsymbol{P} \geq \int \log\left(\frac{p(\mathbf{y} \mid \boldsymbol{P})\,p(\boldsymbol{P})}{q(\boldsymbol{P})}\right) q(\boldsymbol{P})\,\mathrm{d}\boldsymbol{P} \tag{12}$$

$$= \mathbb{E}_{q(\boldsymbol{P})}\left[\log p(\mathbf{y} \mid \boldsymbol{P})\right] - \mathrm{KL}\left(q(\boldsymbol{P})\,\|\,p(\boldsymbol{P})\right). \tag{13}$$

Knowing this, we can approximate the true posterior with $q(\boldsymbol{P})$ by maximising Eq.
(13). This is the evidence lower bound, and it is maximised with respect to the variational parameters ϑ̂i1,i2,...,id and Σ̂i1,i2,...,id . This is fully analytical when the variational parameters and a Gaussian likelihood is assumed. We assume our observation model is disturbed with additive Gaussian noise, which in other words means our likelihood is Gaussian p(yj |P ) := N ( yj |f(xj), σ2 ) , σ2 > 0, (14) for each j = 1, . . . , n and we assume they are independent conditioned on P . With these assumption the first term in Eq. (13) becomes Eq(P ) [log p(y|P )] = − 1 2 n∑ j=1 log(2π) + log(σ2) + ( yj − Eq(P )[f(xj)] )2 + Varq(P ) (f(xj)) σ2 , (15) where Eq(P )[f(xj)] and Varq(P )(f(xj)) are given as Eq. (8) and (9) respectively, but with the variational parameters ϑ̂i1,i2,...,id and Σ̂i1,i2,...,id used. The second term in Eq. (13) enjoys the independence of control points to split into sums KL (q(P )‖p(P )) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 KL (q(P i1,...,id)‖p(P i1,...,id)) (16) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Σ̂i1,i2,...,idΣi1,i2,...,id − 1 + ϑ̂ 2 i1,i2,...,id Σi1,i2,...,id + log Σi1,i2,...,id Σ̂i1,i2,...,id . (17) When inspecting the evidence lower bound, Eq. (13), we see it has a data term forcing control points to fit the data, and a KL-term to make control points revert to the prior. Knowing how control points are allocated in the input domain, we expect control points in regions of no data revert to the prior. This is similar to what stationary kernels do in said regions. We verify visually in Section 4. All together, Bézier GPs can be adjusted to have priors similar to stationary GPs, and have analogous posterior behaviour, which is favourable to many practitioners. But Bézier GPs scale. None of the terms in the evidence lower bound require matrix inversions or determinants. It is simple to mini-batch over the data points, utilising stochastic variational inference [Hoffman et al., 2013], to scale it to large n. However, nearly all terms require evaluations of huge sums if the input dimension is high. The next section is aimed at this problem. 2.2 Scalability with the Bézier buttress Until this point, we have omitted addressing the number of random control points needed for Bézier GPs. Let us denote this number τ . It can quickly be checked that τ = ∏d γ=1(νγ + 1). This implies that to evaluate f we must sum over τ summands, which as d increases, quickly becomes computationally cumbersome. τ increases exponentially with d. It is justifiable to view the random control points as inducing points; after all, they are Gaussian variables in the output space. Thus, it would be extremely valuable to manage exponentially many of them. To overcome this, we introduce the Bézier buttress2. We assume parameters of the random control points, say ϑ, can parametrise ϑi1,i2,...,id = ∏d γ=1 wiγ−1,iγ ,γ , where w0,i1,1 := wi1,1. This assumption is the key of the Bézier buttress. Figure 3 provides visualisation. It visualises a source-sink graph, where each unique path from source to sink represents one unique control point with above parametrisation. The cyan highlighted path represents the ϑ1,2,3 = w1,1w1,2,2w2,3,3, where we multiply the values along the path from source to sink. Notice last edges have value 1. In the Bézier buttress there are d layers, one for each input dimension, and νγ + 1 nodes in each layer γ = 1, . . . , d. 
Borrowing from neural network terminology, a forward-pass is a sequential series of matrix multiplications which are element-wise warped with non-linearities, such as tanh or ReLU. If 2A buttress is an architectural structure that provides support to a building. we let our sequence of matrices be w1,w2, . . . ,wd, where wγ is the matrix with entries {wi,k,γ}i,k. Let 1ν denote the vector of size ν with 1 in all entries, then fixing ’the input’ to 1>ν1+1 and the last matrix to 1νd+1, a forward pass is 1>ν1+1w1w2 · · ·wd1νd+1 = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 ϑi1,i2,...,id . (18) We see this forward pass computes a sum over all control points means. Naturally, we construct a Bézier buttress for summing over variances too, which must be restricted to positive weights. It was these sums that formed our bottleneck, in this parametrisation it is just a sequence of matrix products. What about computing f? It comes down to a use of ‘non-linearities’ in the buttress. Multiplying element-wise the Bernstein polynomials as seen in Figure 3 (visualised only on 3rd layer), a forward pass computes either E[f(x)], or Var(f(x)) if using squared Bernstein polynomials. Each control point is then exactly multiplied by the correct polynomials from its way from source to sink. Notice, the ‘input’ is fixed to 1 in the source, but the observed x is appearing via the Bernstein polynomials along the way. We can write this too as a sequence of matrix products E[f(x)] = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)ϑi1,...,id = 1 > ν1+1w1Bx1 · · ·wdBxd1νd+1, (19) where Bxγ is a diagonal matrix with the νγ + 1 Bernstein polynomials on its diagonal. For the variance of f(x) there exists a similar expression, of course with squared polynomials, and with the positive weights associated with the variance Bézier buttress. On this inspection, all terms needed to compute Eq. (15) are available. All terms for the KL divergences are algebraic manipulations of Eq. (18) – these are explicit in the supplementary material. The key takeaway for Bézier buttress is that we parametrise each random control point as a product of weights. This can be seen as an amortisation, and we do inference on these weights rather than the control points themselves. Hence, no matrix inversions are needed to backpropagate through our objective, and a forward pass is only a sequence of matrix products. 2.3 Marginalising matrix commutativity Matrix multiplication is not commutative. This implies the ordering of the matrices in Eq. (18) matter, which again implies how we order the input dimensions in the Bézier buttress is of importance. This is the price for the computational benefit this parametrisation gives. An ordering of the input dimensions is somewhat unnatural for spatial regression, so we present a way to overcome this in approximate Bayesian manner. Define f = f1 + f2 + · · ·+ fr, where each of these individual fk, k ∈ {1, . . . , r}, are Bézier GPs with a random permutation of the ordering in the associated Bézier buttress. In other words, we let f be an ensemble of Bézier GPs to (approximately) marginalise over all possible orderings. The number of all possible orderings quickly becomes too large for which to account; in practice, we set r in a feasible region, say 20, which gives satisfactory empirical performance. Another lens on this is that each control point is a sum of r control points – each control point’s standard deviation is scaled by r−1, to obtain the same prior as so far discussed. 
Oddly, we have then circumvented the problem of too many control points by introducing even more control points. That is, each control point mean parametrise ϑi1,i2,...,id = ∑r k=1 ∏d γ=1 witk(γ)−1,itk(γ),tk(γ), where tk denotes a random permutation of (1, . . . , d). We remind again that similar expression exist for control point variances, restricted to positive weights. As remarked, inference comes down to a forward pass in the Bézier buttress, a sequence of d matrix multiplications. Assume all dimension are of order ν, then the computational complexity of one forward pass is O ( d(ν + 1)2 ) . Now we need r forward passes to marginalise the ordering, and n forward passes, one for each observation, leaving final complexity of O(nrdν2). Linear in n and d. 3 Related Work Variational inference in GPs was initially considered by Csató et al. [1999] and Gibbs and MacKay [2000]. In recent times, the focus has shifted focus to scalable solutions to accommodate the big data era. In this respect, Titsias [2009] took a variational approach to the inducing points methods [Quinonero-Candela and Rasmussen, 2005]; later Hensman et al. [2013] further enhanced scalability to allow for mini-batching and a wider class of likelihoods. Still, the need for more inducing points is of importance, especially as the number of input features grows. The response to this has mostly revolved around exploiting some structure of the kernel. Wilson and Nickisch [2015] and Wu et al. [2021] exploit specific structure that allow for fast linear algebra methods; similar to our method inducing locations tend to lie on grid. These grids expand fast as the input dimension increases, as also pointed out earlier in the article. Kapoor et al. [2021] remedy this by instead of a rectangular grid, they consider the permutohedral lattice, such that each observation only embeds to d+ 1 neighbours instead of 2d, as in [Wilson and Nickisch, 2015]. Another approach to allowing for more inducing points is incorporating nearest neighbour search in the approximation [Kim et al., 2005, Nguyen-Tuong et al., 2008]. Tran et al. [2021] introduced sparse-within-sparse where they have many inducing points in memory, but each time search for the k-nearest ones to any observation. They discard the remaining ones as they have little to no influence on the observation. Wu et al. [2022] made a variational pendant to this method. Lastly, when dealing with high-dimensional inputs it is worth mentioning feature learning. That is, learning more low-dimensional features where GPs have better performance. The success of deep models has been transported to GPs by Damianou and Lawrence [2013] and later scaled to large datasets in [Salimbeni and Deisenroth, 2017]. Another approach is Deep Kernel Learning [Wilson et al., 2016, Bradshaw et al., 2017], where feature extraction happens inside the kernel function; lately Ober et al. [2021] has investigated the limitations and benefits of these models. We have treated structured control points as our version of inducing points; and by parametrising them with a Bézier buttress, we limit the expansion of grids to linear growth in parameters. We are not the first to consider the Bernstein polynomials as a basis for learning functions. Petrone [1999b] used it to model kernel estimate probability density functions, and several follow up works [Petrone, 1999a, Petrone and Wasserman, 2002]. Hug et al. [2020] recently introduced Bézier GPs, but with a focus on time series [Hug et al., 2022]. 
Our emphasis has been on spatial input; even the Bézier surface literature contain close to nothing on more than 2-dimensional surfaces. 4 Evaluation We split our evaluation into four parts. First, we visually inspect the posterior on a one dimensional toy dataset to show how the control points behave, and indicate that there indeed is a stationarylike behaviour on the domain of the hypercube. Next, we test empirically on some standard UCI Benchmark datasets, to gives insight into when Bézier GPs are applicable. After that, we switch to tall and wide data – large both in the input dimension and in number of data points. These experiments give certainty that the method delivers on its key promise: scalability. Lastly, we turn our eyes to the method itself and investigate how performance is influenced by the ordering of dimensions. Care is needed in optimising a Bézier GP – not all parameters are born equal. We split optimisation into two phases. First, we optimise all variational parameters, keeping the likelihood variance σ2 fixed as τ−1, with τ being the number of control points. After this initial phase, we optimise σ2 with all variational parameters fixed. We let both phases run for 10000 iterations with a mini-batch size of 500, for all datasets. Both phases use the Adam optimiser [Kingma and Ba, 2015], the first phase with learning rate 0.001, and the second with learning rate 0.01. If not following a such a bespoke training scheme, we see a tendency for the posterior to revert to the prior, because the KL-term becomes too dominating initially. This training scheme is designed for the Gaussian likelihood, but we wish to emphasise that, in principle, the loss function is accurate for any choice of likelihood. 4.1 One dimensional visual inspection We hypothesised the objective function, Eq. 16, would ensure, within the hypercube domain, that f reverts to its prior in regions where data is scarce. To verify this we construct a small dataset to inspect. We generate one-dimensional inputs uniformly in the regions [0, 0.33] and [0.66, 1]; we sample 20 observation in each region. The responsive variable is generated as y(x) = 3 sin(16x). According to the hypothesis, f should in the region [0.33, 0.66], tend towards zero in mean, and increase its variation here. We use a BézierGP of order 20 to model the observations, since they are highly non-linear. Figure 4 shows the posterior distribution of f to the left; we observe f tends towards the prior in the middle region. The middle plot illustrates the distribution, both prior and posterior, of the 21 control points. There is a clear tendency for the central-most points to align the posterior and posterior, enforcing this behaviour in f . The non-equal priors are due to the inverse-squared Bernstein adjusted prior which ensures a uniform variation in f over the domain, see Figure 2. The plot to the right in Figure 4 shows the behaviour foundational to practitioners of Bayesian optimisation and active learning etc., the variance increase away from data regions. 4.2 UCI Benchmark We evaluate on eight small to mid-size real world datasets commonly used to benchmark regression [Hernandez-Lobato and Adams, 2015]. We split each dataset into train/test-split with the ratio 90/10. We do this over 20 random splits and report test set RMSE and log-likelihood average and standard deviation over splits. We choose baselines to be SGPR, following the method from Titsias [2009]; we do both for 100 and 500 inducing variables. 
SimplexGP is another baseline, they suggest their approximation is beneficial for semi-high dimensional inputs (between 3 and 20) [Kapoor et al., 2021], hence they are an obvious baseline. SimplexGP usually use a validation set to choose the final model. This is due to a highly noisy optimisation scheme using a high error-tolerance (1.0) for conjugate gradients. We remedy this by setting the error-tolerance to (0.01), which harms scalability, but we can omit using a validation set for better comparability. Wang et al. [2019] recommend this error-tolerance, but remark it is more stable for RMSE than for log-likelihood. For BézierGP, we fix the number of permutations to r = 20, and vary the order in ν = 5, 10, 20. The order is identical over input dimensions. The inputs are pre-processed such that the training set is contained in [0, 1]d. Table 1 contains the results of this experiment. We make the following observations about our presented BézierGP. On keggdirected and elevators there are test points outside the defined domain on some splits, which cause the RMSE to be extreme, but the likelihood is more forgiving. This highlights the constraint of our model: it needs a box-bounded domain to be a priori known. Had we standardised such that both test and train data were in the hypercube, BézierGP (ν = 20) would have a average test RMSE of 0.0937. We could not reproduce results from Kapoor et al. [2021] on keggdirected, the optimisation was too noisy and with no use of validation. On concrete, boston and energy we see overfitting tendencies. Even though BézierGP is the optimal choice on energy there is a mismatch between train and test error. On concrete this shows in better test RMSE, than the baselines, but the variance is overfitted yielding non-optimal likelihood. We conjecture this happens because the n/d-ratio is low; which makes it more likely to overfit the control points – especially for higher orders ν. Knowing these model fallacies, we observe that BézierGP outperforms on baselines on multiple datasets, most notably the 17-dimensional bike dataset. 4.3 Large scale regression Figure 5 shows the results of regression tasks in regimes of high dimensions and one in high number of observations. Here, we follow exactly the experimental setup of either Salimbeni and Deisenroth [2017] or Wang et al. [2019]. If the latter, we use the validation-split they use as training data. Our optimisation scheme for BézierGP is consistent with above, except for slice, where the first training phase runs for 30000 iterations. We discard test points that are not in the [0, 1]d domain – in no situation did this remove more than 0.001% of the test set. The number after DGP, denotes the number of hidden layers in a Deep GP [Salimbeni and Deisenroth, 2017], after SGPR and SVGP it denotes the number of inducing points. SVGP refers to the method from Hensman et al. [2013]. After B, it denotes the order used in BézierGP. On year, we observe our (non-deep) model is on-par with 2-layered Deep GPs, and closer to 3 in RMSE. The highest dimensional dataset, slice, sees us in the low n/d-ratio again, and we are again faced with a too flexible model. This is why we report results for orders 3 and 5, rather than 20, since the overfitting kicks in. Even for these small orders the test loglikelihood has high variance and under-performs compared to RMSE. With respect to RMSE it is top-performer signalling again it is overfitting the variance. On the remaining two datasets BézierGP is best-performing among baselines. 
4.4 Influence of number of permutations All experiments so far used r = 20; that is, 20 random permutations of the ordering of dimension used in the Bézier buttress. For a problem with input dimension d, there exist d! possible permutations. Table 2 shows results with varying r; for each dataset, the results are over the same train/test split (0.9/0.1). We fixed ν = 20. Up to some noise in the optimisation phase, we see for the two highest dimensional datasets, protein and bike, performance improves with higher r. Bike has over 50% reduction in RMSE from r = 1 to r = 50. Table 2 emphasises the results we have presented are not optimised over hyperparameter r and ν. They also illustrate an interesting direction of future research: optimising these hyperparameters. We chose permutations by random sampling, but choosing them in a principled deliberate manner could yield good performance with a computationally manageable r. This result indicates, at least, protein and bike would see increased performance in Table 1 from better (or just more) permutations. 5 Discussion We introduced the Bézier Gaussian Process – a GP, with a polynomial kernel in the Bernstein basis, that scales to a large number of observations, and remains space-filling for high number of input features (limited to a box-bounded domain). We illustrated that, with slight adjustments, the prior and posterior have similar behaviour to ‘usual’ stationary kernels. We presented the Bézier buttress, a weighted graph to which we amortise the inference, rather than inferring the control points themselves. The Bézier buttress allows GP inference without any matrix inversion. Using the Bézier buttress, we inferred 6385 control points for the high-dimensional slice dataset. We highlighted weaknesses of the proposed model: most crucially the tendency of overfitting when the n/d-ratio is low. The results demonstrate scalability in both n and d, but does not solve the short, but wide problem. The paper did not optimise over the hyperparameters of the proposed kernel, namely ν and r, but it showcased briefly that doing so might enhance BézierGPs empirically; especially smart selection of the permutations is an interesting direction for future research. We speculate that optimising over orders, on a validation set, would alleviate some of the overfitting issues. Acknowledgments and Disclosure of Funding MJ is supported by the Carlsberg Foundation.
1. What is the focus and contribution of the paper on Gaussian process frameworks?
2. What are the strengths of the proposed methodology, particularly in its originality and significance?
3. What are the weaknesses of the paper regarding its reproducibility and empirical analysis?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns regarding numerical stability when working with Bernstein polynomials?
6. How do the approximations made to render the model scalable affect the marginal likelihood bound and the experimental results?
7. Can the proposed methodology be applied to non-Gaussian likelihoods?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The manuscript proposes a methodology to extend the Gaussian process framework towards datasets that are both high-dimensional (wide) and have a large sample size (tall) using multivariate Bézier curves. In particular, the authors propose to perform variational inference over a multivariate linear model over Bernstein polynomials having exponentially many control points as parameters. To render the model scalable, the authors propose two approximations: They use a factorisation assumption to reduce the number of control point parameters and they neglect the order of the summation over the dimensions. Experiments on several UCI benchmarks conclude the paper. Strengths And Weaknesses Clarity The paper is mostly well written and the technical content is rather well accessible. Originality The proposed polynomial covariance structure using Bernstein polynomials seems to not have been explored before. Significance The method is (currently) specific to large scale regression and could be useful for GP regression. However, in the current state, the underlying approximation assumption need to be better analysed before the model can be used as a black box. Reproducibility Unfortunately, only some vague outline of the code is contained in the appendix suggesting that there might be an implementation that might be shared at some point. The relevant classes i.e. BezierButtress are not contained hence this cannot be judged. Assuming that the code will be shared, the experiments should be well reproducible. If not, this should be rather difficult. Empirical analysis The manuscript contains experiments on several standard regression datasets regarding marginal likelihood and prediction error. a large number of numerical results gathered in a big table regarding marginal likelihood, prediction error and runtime. Some more results on more subtle aspects of the covariance function itsel would have been much appreciated. Questions Computations with Bernstein polynomials usually require special care regarding numerical stability i.e. De Casteljau's algorithm. Do you encounter issues in that respect? The "adjustment" of the Bernstein polynomials might be numerically challenging as shown by Figure 2 right Panel. Is this correct? What is the condition of the linear system in Equation (10)? How would experimental results change without that correction? I'm missing an analysis and more detailed evaluation on the effects of the two approximations made to render the model scalable: the factorisation structure and the order of summation. In particular, how is the marginal likelihood bound affected? Is it still a bound or a mere approximation that cannot be trusted as a comparative measure in the experiments? Columns 2,4,6 of Table 2 suggests exactly that behavior. Then, one could only compare RMSE and not probabilistic model properties. More specifically, the comparisons in Figure 5, in the left panels and Table upper part would be questionnable in their meaning. What would need to change if the likelihood was non-Gaussian? I'm assuming that the same construction can be used. Limitations Yes.
NIPS
Title Bezier Gaussian Processes for Tall and Wide Data Abstract Modern approximations to Gaussian processes are suitable for “tall data”, with a cost that scales well in the number of observations, but under-performs on “wide data”, scaling poorly in the number of input features. That is, as the number of input features grows, good predictive performance requires the number of summarising variables, and their associated cost, to grow rapidly. We introduce a kernel that allows the number of summarising variables to grow exponentially with the number of input features, but requires only linear cost in both number of observations and input features. This scaling is achieved through our introduction of the Bézier buttress, which allows approximate inference without computing matrix inverses or determinants. We show that our kernel has close similarities to some of the most used kernels in Gaussian process regression, and empirically demonstrate the kernel’s ability to scale to both tall and wide datasets. Gaussian processes (GPs) are a probabilistic approach to modelling functions that permit tractable Bayesian inference. They are, however, notorious for their poor scalability. In recent decades, this criticism has been challenged. Several approximate methods now allow GPs to scale to millions of data points. Yet, scalability in the number of data points is merely one challenge of big data. There are still problems associated with the input dimensionality – one aspect of the famed curse of dimensionality. Burt et al. [2020] analysed the most studied approximation, the so-called sparse inducing points methods, and showed it to be accurate for low dimensional inputs. Alarmingly, exponentially many inducing points are still needed in high-dimensional input spaces, that is, for problems with a large number of features. As such, despite modern GP approximations scaling to tall data, they are still discounted when concerning wide data. In response to this, there exist GP approximations built on simplices or grid-structures in the input space [Wilson and Nickisch, 2015, Gardner et al., 2018, Kapoor et al., 2021]. These take advantage of attractive fast linear algebra, but are often limited by memory in higher dimensions. Their advantage is the ability to fill the input space with structured points, so all observations have a close neighbour. We propose a new kernel for GP regression that requires neither matrix inversion nor determinant calculation – GPs’ two core computational sinners. Additionally, we cover the input space1 with exponentially many points, but introduce an approximation that grows only linearly in computational complexity. That is, our method scales linearly in both the number of data points and the number of input dimensions, whilst being space-filling in the input domain. GPs are indispensable to fields where uncertainty is a driver in decision-making mechanisms. Such fields include Bayesian optimisation, active learning and reinforcement learning. The critical decision mechanism is the exploration-exploitation trade-off. One ability useful in such fields is to assign high uncertainty to unexplored regions, just as does an exact GP. We show that our proposed model also assigns high uncertainty to unexplored regions, suggesting our model as well-suited to decisionmaking problems. A limiting assumption of our kernel is its restriction to a box-bounded domain in the input space. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
N/A Gaussian processes (GPs) are a probabilistic approach to modelling functions that permit tractable Bayesian inference. They are, however, notorious for their poor scalability. In recent decades, this criticism has been challenged. Several approximate methods now allow GPs to scale to millions of data points. Yet, scalability in the number of data points is merely one challenge of big data. There are still problems associated with the input dimensionality – one aspect of the famed curse of dimensionality. Burt et al. [2020] analysed the most studied approximation, the so-called sparse inducing points methods, and showed it to be accurate for low dimensional inputs. Alarmingly, exponentially many inducing points are still needed in high-dimensional input spaces, that is, for problems with a large number of features. As such, despite modern GP approximations scaling to tall data, they are still discounted when concerning wide data. In response to this, there exist GP approximations built on simplices or grid-structures in the input space [Wilson and Nickisch, 2015, Gardner et al., 2018, Kapoor et al., 2021]. These take advantage of attractive fast linear algebra, but are often limited by memory in higher dimensions. Their advantage is the ability to fill the input space with structured points, so all observations have a close neighbour. We propose a new kernel for GP regression that requires neither matrix inversion nor determinant calculation – GPs’ two core computational sinners. Additionally, we cover the input space1 with exponentially many points, but introduce an approximation that grows only linearly in computational complexity. That is, our method scales linearly in both the number of data points and the number of input dimensions, whilst being space-filling in the input domain. GPs are indispensable to fields where uncertainty is a driver in decision-making mechanisms. Such fields include Bayesian optimisation, active learning and reinforcement learning. The critical decision mechanism is the exploration-exploitation trade-off. One ability useful in such fields is to assign high uncertainty to unexplored regions, just as does an exact GP. We show that our proposed model also assigns high uncertainty to unexplored regions, suggesting our model as well-suited to decisionmaking problems. 1A limiting assumption of our kernel is its restriction to a box-bounded domain in the input space. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 Background Bézier curves and surfaces are parametrised geometric objects that have found great usage in computer-aided design and robotics [Prautzsch et al., 2002]. The simplest Bézier curve is the linear interpolation of two points p0 and p1 in RD; the Bézier curve of order 1 c(t) = (1− t)p0 + tp1, t ∈ [0, 1]. (1) Higher order curves are generalised in the following way: the order-ν Bézier curve is defined as c(t) = ν∑ i=0 Bνi (t)pi, t ∈ [0, 1]. (2) In Bézier terms, pi are referred to as control points. Notice an order-ν Bézier curve has ν + 1 control points. Bνi denotes the ith Bernstein polynomial of order ν. They are defined as Bνi (t) = ν! i!(ν − i)! ti(1− t)ν−i. (3) By going from curves to surfaces we wish to extend from the scalar t to a spatial input x ∈ [0, 1]d, for d > 1. Here, we can define Bézier d-surfaces as cd(x) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)pi1,...,id , x = (x1, x2, . . . , xd) ∈ [0, 1] d. (4) Figure 1 gives a visual illustration of a 2-dimensional surface embedded in R3. 
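For readers who want to make Eqs. (2)-(4) concrete, the following is a minimal NumPy sketch of the Bernstein polynomials and of brute-force Bézier curve and surface evaluation. This is not the authors' released code; all function names are ours, and the explicit tensor contraction is only practical for small d.

```python
import numpy as np
from math import comb

def bernstein(nu, i, t):
    # B^nu_i(t) = C(nu, i) * t^i * (1 - t)^(nu - i), cf. Eq. (3)
    return comb(nu, i) * t**i * (1.0 - t)**(nu - i)

def bezier_curve(control_points, t):
    # Order-nu Bezier curve, Eq. (2); control_points has shape (nu + 1, D)
    nu = control_points.shape[0] - 1
    weights = np.array([bernstein(nu, i, t) for i in range(nu + 1)])
    return weights @ control_points

def bezier_surface(control_grid, x):
    # Bezier d-surface with scalar control points, Eq. (4);
    # control_grid has shape (nu_1 + 1, ..., nu_d + 1) and x lies in [0, 1]^d.
    out = control_grid
    for x_gamma in x:
        nu = out.shape[0] - 1
        w = np.array([bernstein(nu, i, x_gamma) for i in range(nu + 1)])
        out = np.tensordot(w, out, axes=(0, 0))  # contract one input dimension
    return out

# A cubic curve in R^2 and a 2-d surface of orders (2, 3):
p = np.array([[0.0, 0.0], [0.3, 1.0], [0.7, 1.0], [1.0, 0.0]])
print(bezier_curve(p, 0.5))
print(bezier_surface(np.random.randn(3, 4), np.array([0.2, 0.8])))
```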
In the literature, it is difficult to find any studies of d-surfaces for d > 2. This paper targets especially this high-dimensional input case. We restrict our output dimension to 1, the regression problem, but the methods naturally extend to multidimensional outputs. The red points in Figure 1 show how each control points has an associated location in the input space. They are placed on a grid-like structure, and the order of each dimension determines how fine the mesh-grid of the hypercube is; i.e. how dense the input-space is filled. Gaussian processes (GPs) are meticulously studied in probability and statistics [Williams and Rasmussen, 2006]. They provide a way to define a probability distribution over functions. This makes them useful, as priors, to build structures for quantifying uncertainty in prediction. They are defined as a probability measure over functions f : X → R, such that any collection of elements (x1,x2, . . . ,xn) in X have their associated output (f(x1), . . . , f(xn)) following a joint Gaussian distribution. This distribution is fully determined by a mean function m : X → R and a positive semi-definite kernel function k : X ×X → R. They admit exact Bayesian inference. However, exact inference comes at a prohibitive worst-case computational cost of O(n3), where n is the number of training points, due to computing inverse and determinant of the kernel matrix. Sparse Gaussian processes [Snelson and Ghahramani, 2005] overcome this burden by conditioning on m inducing points, reducing complexity to O(nm2), where usually m << n. The inducing points, denoted u, are then marginalised to obtain an approximate posterior of f . The variational posterior mean and variance, at a location x∗ are then given by E[f(x∗)] = k(x∗,Z)k(Z,Z)−1µu, (5) Var(f(x∗)) = k(x∗,x∗)− k(x∗,Z)k(Z,Z)−1 (k(Z,Z)− Σu) k(Z,Z)−1k(Z,x∗), (6) under the assumption of a constant zero prior mean function. This assumption is easily relaxed if needed. Here Z denotes the inducing locations in the input space, i.e. f(Z) = u ∼ N (µu,Σu). Under further assumption of Gaussian observation noise , i.e. y∗ = f(x∗) + , then Titsias [2009] showed the optimal µu and Σu are known analytically. In a sought analogy to Figure 1, Z would be the red points and u would be the orange. 2 Bézier Gaussian Processes Inspired by Bézier surfaces, we construct a Gaussian process f : [0, 1]d → R as f(x) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)P i1,...,id , (7) where P i1,i2,...,id ∼ N (ϑi1,i2,...,id ,Σi1,i2,...,id) are Gaussian variables and x = (x1, x2, . . . , xd). Here, xγ ∈ [0, 1] for γ = 1, . . . , d. We write P , with capital letter to emphasise that it is a random variable now. Further, we write it in boldface though we here only consider the scalar case, i.e. regression, but the multi-output case is not fundamentally different. It is easy to verify that f satisfies the definition of a GP since it, for any x, is a scaled sum of Gaussians. We assume that all P i1,...,id are fully independent. With that assumption, we can make the following observation for the mean and kernel function µ(x) := ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)ϑi1,...,id , and (8) k(x, z) := Cov (f(x), f(z)) = ν1∑ i1=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)Σi1,...,idB ν1 i1 (z1)· · ·Bνdid (zd). (9) The Bernstein polynomials can approximate any continuous function given the order is large enough, thus they make a good basis for GP regression [Hildebrandt and Schoenberg, 1933]. 
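As a sanity check on Eqs. (8) and (9), the sketch below computes the prior mean and covariance of f by brute force over all control points (again our code, with our own names). Its cost is exponential in d, which is exactly what the Bézier buttress of Section 2.2 is designed to avoid.

```python
import numpy as np
from math import comb
from functools import reduce

def bernstein_vector(nu, t):
    # All nu + 1 Bernstein polynomials of order nu, evaluated at t.
    return np.array([comb(nu, i) * t**i * (1 - t)**(nu - i) for i in range(nu + 1)])

def control_weights(shape, x):
    # Outer product of per-dimension Bernstein vectors: the weight each
    # control point receives at input x.
    vecs = [bernstein_vector(n - 1, xi) for n, xi in zip(shape, x)]
    return reduce(np.multiply.outer, vecs)

def bezier_gp_moments(theta, Sigma, x, z):
    # theta, Sigma: control-point means and variances, shape (nu_1+1, ..., nu_d+1).
    # Returns mu(x) of Eq. (8) and k(x, z) of Eq. (9) under independent control points.
    wx, wz = control_weights(theta.shape, x), control_weights(theta.shape, z)
    return np.sum(wx * theta), np.sum(wx * Sigma * wz)

theta = np.zeros((6, 6))   # zero prior mean, d = 2, orders (5, 5)
Sigma = np.ones((6, 6))    # naive unit control-point variances
print(bezier_gp_moments(theta, Sigma, np.array([0.5, 0.5]), np.array([0.5, 0.5])))
# The printed variance is well below 1 in the centre of the domain, which is the
# shrinkage that the inverse squared Bernstein adjusted prior below corrects.
```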
Naturally, selecting a prior over f comes down to selecting a prior over the random control points P . The most common prior mean in GP regression, the constant zero function, is then easily obtained by ϑi1,...,id = 0 for all i. By construction, the choice of Σi1,...,id needs consideration to yield a convenient prior over f . Mindlessly setting Σi1,...,id = 1 would make Var (f(x)) collapse to zero quickly in the central region of the domain, especially as dimensions d grow. This, of course, gives a much too narrow prior over f . Figure 2 (middle) shows that in the central region the standard deviation of f is smaller due to the nature of the Bernstein polynomials. If we consider instead a two-dimensional input the standard deviation would collapse even more, as we would then see the shrinking effect for both dimension and multiply them. We can, however, adjust for this. We define the inverse squared Bernstein adjusted prior to counter this effect. In all dimensions γ = 1, . . . , d, let ςγ = A −1 γ 1νγ+1, where Ai,j = ( B νγ j (i/νγ) )2 , (10) and νγ denotes the order of the dimension γ. Then setting Σi1,...,id = ∏d γ=1 ςγ(iγ) ensures that Var (f(x)) ≈ 1 over the entire domain [0, 1]d. Eq. 10 solves a linear system, such that Var (f(i/νγ)) = 1, for i = 0, . . . , νγ . This means a prior hardly distinguishable from standard stationary ones such as the RBF kernel. Visual representation of this prior is shown in Figure 2 (right). This adjustment works up to νγ = 25, after which negative values occur. Summarising, we introduced a kernel based on Bézier surfaces. An alternative viewpoint is that f is a polynomial GP, but with Bernstein basis rather than the canonical basis. We remark that f is defined outside the domain [0, 1]d; any intuition about the prior there is not considered, and we will not pursue investigating data points outside this domain. Of course, for practical purposes this domain generalises without loss of generality to any rectangular domain [a1, b1]×. . .×[ad, bd]. For presentation purposes we keep writing [0, 1]d. Next, we show how we infer an approximate posterior given data. 2.1 Variational inference Let P denote the set of all P i1,...,id . As the prior of f is fully determined by the random control points P , the posterior of f is determined by the posterior of these. As per above, we set the prior p(P ) = ν1∏ i1=0 ν2∏ i2=0 · · · νd∏ id=0 p(P i1,...,id) := ν1∏ i1=0 ν2∏ i2=0 · · · νd∏ id=0 N (0,Σi1,...,id), (11) where Σi1,...,id = ∏d γ=1 ςγ(iγ). We utilise variational inference to approximate the posterior of the control points, and hence f . This means we introduce variational control points. We assume they are fully independent (usually called the mean-field assumption), and have free parameters for the mean and variance, such that P i1,...,id ∼ N (ϑ̂i1,i2,...,id , Σ̂i1,i2,...,id). Assume we have observed data D = {xj , yj}nj=1. The key quantity in variational inference is the Kullback-Leibler divergence between the true posterior p(P |y) and the variational approximation – which we denote q(P ). The smaller divergence, the better approximation of the true posterior. Without access to the true posterior, the quantity is not computable. However, it has been shown this divergence is equal to the slack in Jensen’s inequality used of the log-marginal likelihood: log p(y). log p(y) = log ∫ p(y|P )p(P )dP ≥ ∫ log ( p(y|P )p(P ) q(P ) ) q(P )dP (12) = Eq(P ) [log p(y|P )]− KL (q(P )‖p(P )) . (13) Knowing this, we can approximate the true posterior with q(P ) by maximising Eq. 
(13). This is the evidence lower bound, and it is maximised with respect to the variational parameters ϑ̂i1,i2,...,id and Σ̂i1,i2,...,id . This is fully analytical when the variational parameters and a Gaussian likelihood is assumed. We assume our observation model is disturbed with additive Gaussian noise, which in other words means our likelihood is Gaussian p(yj |P ) := N ( yj |f(xj), σ2 ) , σ2 > 0, (14) for each j = 1, . . . , n and we assume they are independent conditioned on P . With these assumption the first term in Eq. (13) becomes Eq(P ) [log p(y|P )] = − 1 2 n∑ j=1 log(2π) + log(σ2) + ( yj − Eq(P )[f(xj)] )2 + Varq(P ) (f(xj)) σ2 , (15) where Eq(P )[f(xj)] and Varq(P )(f(xj)) are given as Eq. (8) and (9) respectively, but with the variational parameters ϑ̂i1,i2,...,id and Σ̂i1,i2,...,id used. The second term in Eq. (13) enjoys the independence of control points to split into sums KL (q(P )‖p(P )) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 KL (q(P i1,...,id)‖p(P i1,...,id)) (16) = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Σ̂i1,i2,...,idΣi1,i2,...,id − 1 + ϑ̂ 2 i1,i2,...,id Σi1,i2,...,id + log Σi1,i2,...,id Σ̂i1,i2,...,id . (17) When inspecting the evidence lower bound, Eq. (13), we see it has a data term forcing control points to fit the data, and a KL-term to make control points revert to the prior. Knowing how control points are allocated in the input domain, we expect control points in regions of no data revert to the prior. This is similar to what stationary kernels do in said regions. We verify visually in Section 4. All together, Bézier GPs can be adjusted to have priors similar to stationary GPs, and have analogous posterior behaviour, which is favourable to many practitioners. But Bézier GPs scale. None of the terms in the evidence lower bound require matrix inversions or determinants. It is simple to mini-batch over the data points, utilising stochastic variational inference [Hoffman et al., 2013], to scale it to large n. However, nearly all terms require evaluations of huge sums if the input dimension is high. The next section is aimed at this problem. 2.2 Scalability with the Bézier buttress Until this point, we have omitted addressing the number of random control points needed for Bézier GPs. Let us denote this number τ . It can quickly be checked that τ = ∏d γ=1(νγ + 1). This implies that to evaluate f we must sum over τ summands, which as d increases, quickly becomes computationally cumbersome. τ increases exponentially with d. It is justifiable to view the random control points as inducing points; after all, they are Gaussian variables in the output space. Thus, it would be extremely valuable to manage exponentially many of them. To overcome this, we introduce the Bézier buttress2. We assume parameters of the random control points, say ϑ, can parametrise ϑi1,i2,...,id = ∏d γ=1 wiγ−1,iγ ,γ , where w0,i1,1 := wi1,1. This assumption is the key of the Bézier buttress. Figure 3 provides visualisation. It visualises a source-sink graph, where each unique path from source to sink represents one unique control point with above parametrisation. The cyan highlighted path represents the ϑ1,2,3 = w1,1w1,2,2w2,3,3, where we multiply the values along the path from source to sink. Notice last edges have value 1. In the Bézier buttress there are d layers, one for each input dimension, and νγ + 1 nodes in each layer γ = 1, . . . , d. 
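To make two of the ingredients above concrete, here is a small NumPy sketch (ours, not the authors' code) of the inverse squared Bernstein adjustment of Eq. (10) and of the ELBO terms of Eqs. (15)-(17); the KL is written in the standard closed form for fully factorised Gaussians with a zero prior mean.

```python
import numpy as np
from math import comb

def bernstein(nu, j, t):
    return comb(nu, j) * t**j * (1 - t)**(nu - j)

def adjusted_prior_variance(nu):
    # Eq. (10): solve A varsigma = 1 with A[i, j] = (B^nu_j(i / nu))^2 so that
    # Var(f(i / nu)) = 1 at the grid points i = 0, ..., nu (one input dimension).
    A = np.array([[bernstein(nu, j, i / nu) ** 2 for j in range(nu + 1)]
                  for i in range(nu + 1)])
    return np.linalg.solve(A, np.ones(nu + 1))

def elbo(y, f_mean, f_var, noise_var, q_mean, q_var, p_var):
    # Expected log-likelihood of Eq. (15) plus the KL penalty of Eqs. (16)-(17),
    # with q_mean, q_var, p_var flattened over all control points.
    ell = -0.5 * np.sum(np.log(2 * np.pi) + np.log(noise_var)
                        + ((y - f_mean) ** 2 + f_var) / noise_var)
    kl = 0.5 * np.sum(q_var / p_var + q_mean ** 2 / p_var - 1.0
                      + np.log(p_var / q_var))
    return ell - kl

print(adjusted_prior_variance(10))        # all positive for moderate orders
print(adjusted_prior_variance(30).min())  # negative entries appear beyond order ~25
```

The conditioning of the linear system in Eq. (10), raised in the review above, can be inspected directly with np.linalg.cond(A).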
Borrowing from neural network terminology, a forward-pass is a sequential series of matrix multiplications which are element-wise warped with non-linearities, such as tanh or ReLU. If 2A buttress is an architectural structure that provides support to a building. we let our sequence of matrices be w1,w2, . . . ,wd, where wγ is the matrix with entries {wi,k,γ}i,k. Let 1ν denote the vector of size ν with 1 in all entries, then fixing ’the input’ to 1>ν1+1 and the last matrix to 1νd+1, a forward pass is 1>ν1+1w1w2 · · ·wd1νd+1 = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 ϑi1,i2,...,id . (18) We see this forward pass computes a sum over all control points means. Naturally, we construct a Bézier buttress for summing over variances too, which must be restricted to positive weights. It was these sums that formed our bottleneck, in this parametrisation it is just a sequence of matrix products. What about computing f? It comes down to a use of ‘non-linearities’ in the buttress. Multiplying element-wise the Bernstein polynomials as seen in Figure 3 (visualised only on 3rd layer), a forward pass computes either E[f(x)], or Var(f(x)) if using squared Bernstein polynomials. Each control point is then exactly multiplied by the correct polynomials from its way from source to sink. Notice, the ‘input’ is fixed to 1 in the source, but the observed x is appearing via the Bernstein polynomials along the way. We can write this too as a sequence of matrix products E[f(x)] = ν1∑ i1=0 ν2∑ i2=0 · · · νd∑ id=0 Bν1i1 (x1)· · ·B νd id (xd)ϑi1,...,id = 1 > ν1+1w1Bx1 · · ·wdBxd1νd+1, (19) where Bxγ is a diagonal matrix with the νγ + 1 Bernstein polynomials on its diagonal. For the variance of f(x) there exists a similar expression, of course with squared polynomials, and with the positive weights associated with the variance Bézier buttress. On this inspection, all terms needed to compute Eq. (15) are available. All terms for the KL divergences are algebraic manipulations of Eq. (18) – these are explicit in the supplementary material. The key takeaway for Bézier buttress is that we parametrise each random control point as a product of weights. This can be seen as an amortisation, and we do inference on these weights rather than the control points themselves. Hence, no matrix inversions are needed to backpropagate through our objective, and a forward pass is only a sequence of matrix products. 2.3 Marginalising matrix commutativity Matrix multiplication is not commutative. This implies the ordering of the matrices in Eq. (18) matter, which again implies how we order the input dimensions in the Bézier buttress is of importance. This is the price for the computational benefit this parametrisation gives. An ordering of the input dimensions is somewhat unnatural for spatial regression, so we present a way to overcome this in approximate Bayesian manner. Define f = f1 + f2 + · · ·+ fr, where each of these individual fk, k ∈ {1, . . . , r}, are Bézier GPs with a random permutation of the ordering in the associated Bézier buttress. In other words, we let f be an ensemble of Bézier GPs to (approximately) marginalise over all possible orderings. The number of all possible orderings quickly becomes too large for which to account; in practice, we set r in a feasible region, say 20, which gives satisfactory empirical performance. Another lens on this is that each control point is a sum of r control points – each control point’s standard deviation is scaled by r−1, to obtain the same prior as so far discussed. 
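The following sketch implements the forward pass of Eqs. (18) and (19) for the mean buttress and checks it against the brute-force sums (our code; the variance buttress works the same way with its own positive weights and squared polynomials).

```python
import numpy as np
from math import comb

def bernstein_vector(nu, t):
    return np.array([comb(nu, i) * t**i * (1 - t)**(nu - i) for i in range(nu + 1)])

def buttress_forward(weights, x=None, square=False):
    # weights = [w_1, ..., w_d]; w_1 has shape (1, nu_1 + 1) and w_gamma has shape
    # (nu_{gamma-1} + 1, nu_gamma + 1).  With x=None this is Eq. (18), the sum over
    # all control-point means; with x in [0, 1]^d the diagonal Bernstein matrices
    # are interleaved as in Eq. (19), giving E[f(x)].  square=True is for the
    # variance buttress.
    v = np.ones((1, 1))
    for gamma, w in enumerate(weights):
        v = v @ w
        if x is not None:
            b = bernstein_vector(w.shape[1] - 1, x[gamma])
            v = v * (b ** 2 if square else b)   # element-wise Bernstein 'non-linearity'
    return (v @ np.ones(weights[-1].shape[1])).item()

# d = 2, orders (2, 3): compare against the explicit control-point tensor.
w1, w2 = np.random.randn(1, 3), np.random.randn(3, 4)
theta = w1[0][:, None] * w2                      # theta_{i1,i2} = w_{i1,1} * w_{i1,i2,2}
x = np.array([0.3, 0.7])
brute = sum(bernstein_vector(2, x[0])[i] * bernstein_vector(3, x[1])[j] * theta[i, j]
            for i in range(3) for j in range(4))
print(theta.sum(), buttress_forward([w1, w2]))   # Eq. (18): the two sums agree
print(brute, buttress_forward([w1, w2], x))      # Eq. (19): E[f(x)] agrees as well
```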
Oddly, we have then circumvented the problem of too many control points by introducing even more control points. That is, each control point mean parametrise ϑi1,i2,...,id = ∑r k=1 ∏d γ=1 witk(γ)−1,itk(γ),tk(γ), where tk denotes a random permutation of (1, . . . , d). We remind again that similar expression exist for control point variances, restricted to positive weights. As remarked, inference comes down to a forward pass in the Bézier buttress, a sequence of d matrix multiplications. Assume all dimension are of order ν, then the computational complexity of one forward pass is O ( d(ν + 1)2 ) . Now we need r forward passes to marginalise the ordering, and n forward passes, one for each observation, leaving final complexity of O(nrdν2). Linear in n and d. 3 Related Work Variational inference in GPs was initially considered by Csató et al. [1999] and Gibbs and MacKay [2000]. In recent times, the focus has shifted focus to scalable solutions to accommodate the big data era. In this respect, Titsias [2009] took a variational approach to the inducing points methods [Quinonero-Candela and Rasmussen, 2005]; later Hensman et al. [2013] further enhanced scalability to allow for mini-batching and a wider class of likelihoods. Still, the need for more inducing points is of importance, especially as the number of input features grows. The response to this has mostly revolved around exploiting some structure of the kernel. Wilson and Nickisch [2015] and Wu et al. [2021] exploit specific structure that allow for fast linear algebra methods; similar to our method inducing locations tend to lie on grid. These grids expand fast as the input dimension increases, as also pointed out earlier in the article. Kapoor et al. [2021] remedy this by instead of a rectangular grid, they consider the permutohedral lattice, such that each observation only embeds to d+ 1 neighbours instead of 2d, as in [Wilson and Nickisch, 2015]. Another approach to allowing for more inducing points is incorporating nearest neighbour search in the approximation [Kim et al., 2005, Nguyen-Tuong et al., 2008]. Tran et al. [2021] introduced sparse-within-sparse where they have many inducing points in memory, but each time search for the k-nearest ones to any observation. They discard the remaining ones as they have little to no influence on the observation. Wu et al. [2022] made a variational pendant to this method. Lastly, when dealing with high-dimensional inputs it is worth mentioning feature learning. That is, learning more low-dimensional features where GPs have better performance. The success of deep models has been transported to GPs by Damianou and Lawrence [2013] and later scaled to large datasets in [Salimbeni and Deisenroth, 2017]. Another approach is Deep Kernel Learning [Wilson et al., 2016, Bradshaw et al., 2017], where feature extraction happens inside the kernel function; lately Ober et al. [2021] has investigated the limitations and benefits of these models. We have treated structured control points as our version of inducing points; and by parametrising them with a Bézier buttress, we limit the expansion of grids to linear growth in parameters. We are not the first to consider the Bernstein polynomials as a basis for learning functions. Petrone [1999b] used it to model kernel estimate probability density functions, and several follow up works [Petrone, 1999a, Petrone and Wasserman, 2002]. Hug et al. [2020] recently introduced Bézier GPs, but with a focus on time series [Hug et al., 2022]. 
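A short sketch of the ordering ensemble of Section 2.3, reusing buttress_forward from the previous listing. Here all dimensions share the same order ν, so every permuted buttress has identically sized layers; the names and the random initialisation are ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, nu = 4, 20, 5

# One buttress (a list of weight matrices) and one dimension ordering per member.
buttresses = [[rng.standard_normal((1, nu + 1))] +
              [rng.standard_normal((nu + 1, nu + 1)) for _ in range(d - 1)]
              for _ in range(r)]
perms = [rng.permutation(d) for _ in range(r)]

def ensemble_mean(x):
    # f = f_1 + ... + f_r: each member sees the inputs in its own fixed ordering.
    return sum(buttress_forward(w_k, x[perm]) for w_k, perm in zip(buttresses, perms))

print(ensemble_mean(rng.uniform(size=d)))
# One forward pass costs O(d * (nu + 1)^2), so evaluating the ensemble on n points
# costs O(n * r * d * nu^2), matching the complexity stated above.
```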
Our emphasis has been on spatial input; even the Bézier surface literature contain close to nothing on more than 2-dimensional surfaces. 4 Evaluation We split our evaluation into four parts. First, we visually inspect the posterior on a one dimensional toy dataset to show how the control points behave, and indicate that there indeed is a stationarylike behaviour on the domain of the hypercube. Next, we test empirically on some standard UCI Benchmark datasets, to gives insight into when Bézier GPs are applicable. After that, we switch to tall and wide data – large both in the input dimension and in number of data points. These experiments give certainty that the method delivers on its key promise: scalability. Lastly, we turn our eyes to the method itself and investigate how performance is influenced by the ordering of dimensions. Care is needed in optimising a Bézier GP – not all parameters are born equal. We split optimisation into two phases. First, we optimise all variational parameters, keeping the likelihood variance σ2 fixed as τ−1, with τ being the number of control points. After this initial phase, we optimise σ2 with all variational parameters fixed. We let both phases run for 10000 iterations with a mini-batch size of 500, for all datasets. Both phases use the Adam optimiser [Kingma and Ba, 2015], the first phase with learning rate 0.001, and the second with learning rate 0.01. If not following a such a bespoke training scheme, we see a tendency for the posterior to revert to the prior, because the KL-term becomes too dominating initially. This training scheme is designed for the Gaussian likelihood, but we wish to emphasise that, in principle, the loss function is accurate for any choice of likelihood. 4.1 One dimensional visual inspection We hypothesised the objective function, Eq. 16, would ensure, within the hypercube domain, that f reverts to its prior in regions where data is scarce. To verify this we construct a small dataset to inspect. We generate one-dimensional inputs uniformly in the regions [0, 0.33] and [0.66, 1]; we sample 20 observation in each region. The responsive variable is generated as y(x) = 3 sin(16x). According to the hypothesis, f should in the region [0.33, 0.66], tend towards zero in mean, and increase its variation here. We use a BézierGP of order 20 to model the observations, since they are highly non-linear. Figure 4 shows the posterior distribution of f to the left; we observe f tends towards the prior in the middle region. The middle plot illustrates the distribution, both prior and posterior, of the 21 control points. There is a clear tendency for the central-most points to align the posterior and posterior, enforcing this behaviour in f . The non-equal priors are due to the inverse-squared Bernstein adjusted prior which ensures a uniform variation in f over the domain, see Figure 2. The plot to the right in Figure 4 shows the behaviour foundational to practitioners of Bayesian optimisation and active learning etc., the variance increase away from data regions. 4.2 UCI Benchmark We evaluate on eight small to mid-size real world datasets commonly used to benchmark regression [Hernandez-Lobato and Adams, 2015]. We split each dataset into train/test-split with the ratio 90/10. We do this over 20 random splits and report test set RMSE and log-likelihood average and standard deviation over splits. We choose baselines to be SGPR, following the method from Titsias [2009]; we do both for 100 and 500 inducing variables. 
SimplexGP is another baseline, they suggest their approximation is beneficial for semi-high dimensional inputs (between 3 and 20) [Kapoor et al., 2021], hence they are an obvious baseline. SimplexGP usually use a validation set to choose the final model. This is due to a highly noisy optimisation scheme using a high error-tolerance (1.0) for conjugate gradients. We remedy this by setting the error-tolerance to (0.01), which harms scalability, but we can omit using a validation set for better comparability. Wang et al. [2019] recommend this error-tolerance, but remark it is more stable for RMSE than for log-likelihood. For BézierGP, we fix the number of permutations to r = 20, and vary the order in ν = 5, 10, 20. The order is identical over input dimensions. The inputs are pre-processed such that the training set is contained in [0, 1]d. Table 1 contains the results of this experiment. We make the following observations about our presented BézierGP. On keggdirected and elevators there are test points outside the defined domain on some splits, which cause the RMSE to be extreme, but the likelihood is more forgiving. This highlights the constraint of our model: it needs a box-bounded domain to be a priori known. Had we standardised such that both test and train data were in the hypercube, BézierGP (ν = 20) would have a average test RMSE of 0.0937. We could not reproduce results from Kapoor et al. [2021] on keggdirected, the optimisation was too noisy and with no use of validation. On concrete, boston and energy we see overfitting tendencies. Even though BézierGP is the optimal choice on energy there is a mismatch between train and test error. On concrete this shows in better test RMSE, than the baselines, but the variance is overfitted yielding non-optimal likelihood. We conjecture this happens because the n/d-ratio is low; which makes it more likely to overfit the control points – especially for higher orders ν. Knowing these model fallacies, we observe that BézierGP outperforms on baselines on multiple datasets, most notably the 17-dimensional bike dataset. 4.3 Large scale regression Figure 5 shows the results of regression tasks in regimes of high dimensions and one in high number of observations. Here, we follow exactly the experimental setup of either Salimbeni and Deisenroth [2017] or Wang et al. [2019]. If the latter, we use the validation-split they use as training data. Our optimisation scheme for BézierGP is consistent with above, except for slice, where the first training phase runs for 30000 iterations. We discard test points that are not in the [0, 1]d domain – in no situation did this remove more than 0.001% of the test set. The number after DGP, denotes the number of hidden layers in a Deep GP [Salimbeni and Deisenroth, 2017], after SGPR and SVGP it denotes the number of inducing points. SVGP refers to the method from Hensman et al. [2013]. After B, it denotes the order used in BézierGP. On year, we observe our (non-deep) model is on-par with 2-layered Deep GPs, and closer to 3 in RMSE. The highest dimensional dataset, slice, sees us in the low n/d-ratio again, and we are again faced with a too flexible model. This is why we report results for orders 3 and 5, rather than 20, since the overfitting kicks in. Even for these small orders the test loglikelihood has high variance and under-performs compared to RMSE. With respect to RMSE it is top-performer signalling again it is overfitting the variance. On the remaining two datasets BézierGP is best-performing among baselines. 
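For completeness, here is a hedged PyTorch sketch of the two-phase optimisation schedule described at the start of Section 4 and used in the experiments above. The model interface (variational_parameters(), log_noise, elbo(x, y)) and the batch iterator are hypothetical stand-ins of ours, not part of the paper's released code.

```python
import torch

def fit_two_phase(model, batches, iters=10_000):
    # Phase 1: optimise the variational parameters with the noise variance held fixed.
    opt = torch.optim.Adam(model.variational_parameters(), lr=1e-3)
    for _ in range(iters):
        x, y = next(batches)            # mini-batches of size 500 in the paper
        loss = -model.elbo(x, y)        # negative evidence lower bound
        opt.zero_grad(); loss.backward(); opt.step()
    # Phase 2: optimise the likelihood noise with the variational parameters fixed.
    opt = torch.optim.Adam([model.log_noise], lr=1e-2)
    for _ in range(iters):
        x, y = next(batches)
        loss = -model.elbo(x, y)
        opt.zero_grad(); loss.backward(); opt.step()
```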
4.4 Influence of number of permutations All experiments so far used r = 20; that is, 20 random permutations of the ordering of dimension used in the Bézier buttress. For a problem with input dimension d, there exist d! possible permutations. Table 2 shows results with varying r; for each dataset, the results are over the same train/test split (0.9/0.1). We fixed ν = 20. Up to some noise in the optimisation phase, we see for the two highest dimensional datasets, protein and bike, performance improves with higher r. Bike has over 50% reduction in RMSE from r = 1 to r = 50. Table 2 emphasises the results we have presented are not optimised over hyperparameter r and ν. They also illustrate an interesting direction of future research: optimising these hyperparameters. We chose permutations by random sampling, but choosing them in a principled deliberate manner could yield good performance with a computationally manageable r. This result indicates, at least, protein and bike would see increased performance in Table 1 from better (or just more) permutations. 5 Discussion We introduced the Bézier Gaussian Process – a GP, with a polynomial kernel in the Bernstein basis, that scales to a large number of observations, and remains space-filling for high number of input features (limited to a box-bounded domain). We illustrated that, with slight adjustments, the prior and posterior have similar behaviour to ‘usual’ stationary kernels. We presented the Bézier buttress, a weighted graph to which we amortise the inference, rather than inferring the control points themselves. The Bézier buttress allows GP inference without any matrix inversion. Using the Bézier buttress, we inferred 6385 control points for the high-dimensional slice dataset. We highlighted weaknesses of the proposed model: most crucially the tendency of overfitting when the n/d-ratio is low. The results demonstrate scalability in both n and d, but does not solve the short, but wide problem. The paper did not optimise over the hyperparameters of the proposed kernel, namely ν and r, but it showcased briefly that doing so might enhance BézierGPs empirically; especially smart selection of the permutations is an interesting direction for future research. We speculate that optimising over orders, on a validation set, would alleviate some of the overfitting issues. Acknowledgments and Disclosure of Funding MJ is supported by the Carlsberg Foundation.
1. What is the focus and contribution of the paper regarding Gaussian process machine learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of scalability and theoretical soundness? 3. Are there any concerns or questions regarding the implementation and performance of the method, such as computing full covariance or larger orders of the Bezier? 4. How does the reviewer assess the clarity, quality, significance, and novelty of the paper's content? 5. Are there any suggestions for further research or improvements to the proposed method, such as introducing non-Gaussian likelihood or varying the order of each dimension?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a new Bézier buttress-based kernel that allows for scaling GP regression both in the number of observations (tall data) and in the number of input features (wide data). The scalability is shown both theoretically and empirically through evaluation on popular datasets.

Strengths And Weaknesses
Originality. The work is original. While there are other papers leveraging Bézier curves for Gaussian processes, the paper provides a way to parametrize Bézier control points that allows for scalable training and inference (termed the Bézier buttress).
Quality. The research is sound. The statements the authors make are supported by evidence. The authors make a detailed analysis of empirical results, addressing both advantages and limitations of the suggested method. They provide a review of the related work, and highlight areas of possible further research. That said, a more detailed overview of performance against the order of the Bézier would be nice. Another thing beneficial for the paper would be reports on the wall time consumed and computing power used in the experiments.
Clarity. The work is written in clear language, and the presentation is on-point and laid out in a way that facilitates reading. The notation might be cumbersome sometimes. One thing to pay attention to is in the supplementary material, line 5: the definition of the variance buttress weights. I think the dependence of ν on γ has been forgotten, and ζ_γ i does not make sense; I guess it should be ζ_γ(i_γ). There is also a typo on line 142: "can be parametrized".
Significance. Since applying Gaussian processes to high-dimensional inputs is known to be hard, the proposed method that deals with both "wide" (high-dimensional inputs) and "tall" (many inputs) data is a significant contribution to the world of Gaussian process machine learning.

Questions
I have several questions for the authors.
Am I correct in understanding that computing the full covariance would just be rendered as another type of "non-linearity" in the Bézier buttress, that is, multiplying by a cross-product of Bernstein polynomials instead of their squares?
Have you tried to make even larger orders ν work? Does it make sense to try larger orders of the Bézier?
Have you tried to faithfully compute the true posterior (on a small example, perhaps) and see how good the proposed variational approximation is?
How easy is it to introduce a non-Gaussian likelihood?
In your opinion, does it make sense to vary the order of each dimension (that is, non-equal ν_γ)? It would make sense, as not all dimensions are equal. That would just render a Bézier buttress with different layer widths.
A major point is the assumption on line 142 which makes the Bézier buttress possible. This introduces inter-relations between variational parameters. Do you think this is not too restrictive an assumption? Again, comparing to the true posterior would probably be good enough of an answer.
What happens when jointly optimizing the likelihood noise and the variational parameters? Have you observed a worsening of the performance?

Limitations
The authors address the limitations and constraints of their proposed method. The distribution of the data must be supported on an a priori known compact cube. Another constraint is a tendency to overfit when the n/d ratio is small. Orders ν larger than 25 do not work.
NIPS
Title Compositional Visual Generation with Energy Based Models Abstract A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. In this paper we show that energy-based models can exhibit this ability by directly combining probability distributions. Samples from the combined distribution correspond to compositions of concepts. For example, given one distribution for smiling face images, and another for male faces, we can combine them to generate smiling male faces. This allows us to generate natural images that simultaneously satisfy conjunctions, disjunctions, and negations of concepts. We evaluate compositional generation abilities of our model on the CelebA dataset of natural faces and synthetic 3D scene images. We showcase the breadth of unique capabilities of our model, such as the ability to continually learn and incorporate new concepts, or infer compositions of concept properties underlying an image. 1 Introduction Humans are able to rapidly learn new concepts and continuously integrate them among prior knowledge. The core component in enabling this is the ability to compose increasingly complex concepts out of simpler ones as well as recombining and reusing concepts in novel ways [5]. By combining a finite number of primitive components, humans can create an exponential number of new concepts, and use them to rapidly explain current and past experiences [16]. We are interested in enabling such capabilities in machine learning systems, particularly in the context of generative modeling. Past efforts have attempted to enable compositionality in several ways. One approach decomposes data into disentangled factors of variation and situate each datapoint in the resulting - typically continuous - factor vector space [29, 9]. The factors can either be explicitly provided or learned in an unsupervised manner. In both cases, however, the dimensionality of the factor vector space is fixed and defined prior to training. This makes it difficult to introduce new factors of variation, which may be necessary to explain new data, or to taxonomize past data in new ways. Another approach to incorporate the compositionality is to spatially decompose an image into a collection of objects, each object slot occupying some pixels of the image defined by a segmentation mask [28, 6]. Such approaches can generate visual scenes with multiple objects, but may have difficulty in generating interactions between objects. These two incorporations of compositionality are considered distinct, with very different underlying implementations. In this work∗, we propose to implement the compositionality via energy based models (EBMs). Instead of an explicit vector of factors that is input to a generator function, or object slots that are blended to form an image, our unified treatment defines factors of variation and object slots via energy functions. Each factor is represented by an individual scalar energy function that takes as input an image and outputs a low energy value if the factor is exhibited in the image. Images that exhibit the ∗Code and data available at https://energy-based-model.github.io/ compositional-generation-inference/ 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. factor can then be generated implicitly through an Markov Chain Monte Carlo (MCMC) sampling process that minimizes the energy. 
Importantly, it is also possible to run MCMC process on some combination of energy functions to generate images that exhibit multiple factors or multiple objects, in a globally coherent manner. There are several ways to combine energy functions. One can add or multiply distributions as in mixtures [25, 6] or products [11] of experts. We view these as probabilistic instances of logical operators over concepts. Instead of using only one, we consider three operators: logical conjunction, disjunction, and negation (illustrated in Figure 1). We can then flexibly and recursively combine multiple energy functions via these operators. More complex operators (such as implication) can be formed out of our base operators. EBMs with such composition operations enable a breadth of new capabilities - among them is a unique approach to continual learning. Our formulation defines concepts or factors implicitly via examples, rather than pre-declaring an explicit latent space ahead of time. For example, we can create an EBM for concept "black hair" from a dataset of face images that share this concept. New concepts (or factors), such as hair color can be learned by simply adding a new energy function and can then be combined with energies for previously trained concepts. This process can repeat continually. This view of few-shot concept learning and generation is similar to work of [23], with the distinction that instead of learning to generate holistic images from few examples, we learn factors from examples, which can be composed with other factors. A related advantage is that finely controllable image generation can be achieved by specifying the desired image via a collection of logical clauses, with applications to neural scene rendering [4]. Our contributions are as follows: first, while composition of energy-based models has been proposed in abstract settings before [11], we show that it can be used to generate plausible natural images. Second, we propose a principled approach to combine independent trained energy models based on logical operators which can be chained recursively, allowing controllable generation based on a collection of logical clauses at test time. Third, by being able to recursively combine independent models, we show our approach allows us to extrapolate to new concept combinations, continually incorporate new visual concepts for generation, and infer concept properties compositionally. 2 Related Work Our work draws on results in energy based models - see [17] for a comprehensive review. A number of methods have been used for inference and sampling in EBMs, from Gibbs Sampling [12], Langevin Dynamics [31, 3], Path Integral methods [2] and learned samplers [13, 26]. In this work, we apply EBMs to the task of compositional generation. Compositionality has been incorporated in representation learning (see [1] for a summary) and generative modeling. One approach to compositionality has focused on learning disentangled factors of variation [8, 15, 29]. Such an approach allows for the combination of existing factors, but does not allow the addition of new factors. A different approach to compositionality includes learning various different pixel/segmentation masks for each concept [6, 7]. However such a factorization may have difficulty capturing the global structure of an image, and in many cases different concepts cannot be explicitly factored using attention masks. 
In contrast, our approach towards compositionality focuses on composing separate learned probability distribution of concepts. Such an approach allows viewing factors of variation as constraints [19]. In prior work, [10] show that products of EBMs can be used to decompose complex generative modeling problems to simpler ones. [29] further apply products of distributions over the latent space of VAE to define compositions. [9] show that additional compositions in VAE latent space. Both of them rely on joint training to learn compositions of a fixed number of concepts. In contrast, in this work, we show how we can realize concept compositions using completely independently trained probability distributions. Furthermore, we introduce three compositional logical operators of conjunction, disjunction and negation can be realized and nested together through manipulation of independent probability distributions of each concept. Our compositional approach is inspired by the goal of continual lifelong learning - see [20] for a thorough review. New concepts can be composed with past concepts by combining new independent probability distributions. Many methods in continual learning are focused on how to overcome catashtophic forgetting [14, 18], but do not support dynamically growing capacity. Progressive growing of the models [24] has been considered, but is implemented at the level of the model architecture, whereas our method composes independent models together. 3 Method In this section, we first give an overview of the Energy-Based Model formulation we use and introduce three logical operators over these models. We then discuss the unique properties such a form of compositionality enables. 3.1 Energy Based Models EBMs represent data by learning an unnormalized probability distribution across the data. For each data point x, an energy function Eθ(x), parameterized by a neural network, outputs a scalar real energy such that the model distribution pθ(x) ∝ e−Eθ(x). (1) To train an EBM on a data distribution pD, we use contrastive divergence [10]. In particular we use the methodology defined in [3], where a Monte Carlo estimate (Equation 2) of maximum likelihood L is minimized with the following gradient ∇θL = Ex+∼pD∇θEθ(x +)− Ex−∼pθ∇θEθ(x −). (2) To sample x− from pθ for both training and generation, we use MCMC based off Langevin dynamics [30]. Samples are initialized from uniform random noise and are iteratively refined using x̃k = x̃k−1 − λ 2 ∇xEθ(x̃k−1) + ωk, ωk ∼ N (0, λ), (3) where k is the kth iteration step and λ is the step size. We refer to each iteration of Langevin dynamics as a negative sampling step. We note that this form of sampling allows us to use the gradient of the combined distribution to generate samples from distributions composed of pθ and the other distributions. We use this ability to generate from multiple different compositions of distributions. 3.2 Composition of Energy-Based Models We next present different ways that EBMs can compose. We consider a set of independently trained EBMs, E(x|c1), E(x|c2), . . . , E(x|cn), which are learned conditional distributions on underlying concept codes ci. Latent codes we consider include position, size, color, gender, hair style, and age, which we also refer to as concepts. Figure 2 shows three concepts and their combinations on the CelebA face dataset and attributes. 
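A self-contained sketch of the Langevin sampler of Eq. (3); this is our code, with a toy analytic energy standing in for the paper's neural-network energy.

```python
import numpy as np

def langevin_sample(grad_energy, x0, steps=100, lam=0.1):
    # Eq. (3): x_k = x_{k-1} - (lam / 2) * grad_x E(x_{k-1}) + omega_k,
    # with omega_k ~ N(0, lam); grad_energy returns the gradient of the
    # (possibly composed) energy at x.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        noise = np.random.normal(0.0, np.sqrt(lam), size=x.shape)
        x = x - 0.5 * lam * grad_energy(x) + noise
    return x

# Toy energy E(x) = ||x - mu||^2 / 2, whose gradient is x - mu.
mu = np.array([1.0, -2.0])
print(langevin_sample(lambda x: x - mu, np.zeros(2), steps=500))
```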
Concept Conjunction In concept conjunction, given separate independent concepts (such as a particular gender, hair style, or facial expression), we wish to construct an output with the specified gender, hair style, and facial expression – the combination of each concept. Since the likelihood of an output given a set of specific concepts is equal to the product of the likelihood of each individual concept, we have Equation 4, which is also known as the product of experts [11]: p(x|c1 and c2, . . . , and ci) = ∏ i p(x|ci) ∝ e− ∑ i E(x|ci). (4) We can thus apply Equation 3 to the distribution that is the sum of the energies of each concept. We sample from this distribution using Equation 5 to sample from the joint concept space with ωk ∼ N (0, λ). x̃k = x̃k−1 − λ 2 ∇x ∑ i Eθ(x̃ k−1|ci) + ωk. (5) Concept Disjunction In concept disjunction, given separate concepts such as the colors red and blue, we wish to construct an output that is either red or blue. This requires a distribution that has probability mass when any chosen concept is true. A natural choice of such a distribution is the sum of the likelihood of each concept: p(x|c1 or c2, . . . or ci) ∝ ∑ i p(x|ci)/Z(ci). (6) where Z(ci) denotes the partition function for each concept. A tractable simplification becomes available if we assume all partition functions Z(ci) to be equal∑ i p(x|ci) ∝ ∑ i e−E(x|ci) = elogsumexp(−E(x|c1),−E(x|c2),...,−E(x|ci)), (7) where logsumexp(f1, . . . , fN ) = log ∑ i exp(fi). We can thus apply Equation 3 to the distribution that is a negative smooth minimum of the energies of each concept to obtain Equation 8 to sample from the disjunction concept space: x̃k = x̃k−1 − λ 2 ∇xlogsumexp(−E(x|c1),−E(x|c2), . . . ,−E(x|ci)) + ωk, (8) where ωk ∼ N (0, λ). While the assumption that leads to Equation 7 is not guaranteed to hold in general, in our experiments we empirically found the partition function Z(ci) estimates to be similar across partition functions (see Appendix) and also analyze cases in which partitions functions are different in the Appendix. Furthermore, the resulting generation results do exhibit equal distribution across disjunction constituents in practice as seen in Table 1. Concept Negation In concept negation, we wish to generate an output that does not contain the concept. Given a color red, we want an output that is of a different color, such as blue. Thus, we want to construct a distribution that places high likelihood to data that is outside a given concept. One choice is a distribution inversely proportional to the concept. Importantly, negation must be defined with respect to another concept to be useful. The opposite of alive may be dead, but not inanimate. Negation without a data distribution is not integrable and leads to a generation of chaotic textures which, while satisfying absence of a concept, is not desirable. Thus in our experiments with negation we combine it with another concept to ground the negation and obtain an integrable distribution: p(x|not(c1), c2) ∝ p(x|c2) p(x|c1)α ∝ eαE(x|c1)−E(x|c2). (9) We found the smoothing parameter α to be a useful regularizer (when α = 0 we arrive at uniform distribution) and we use α = 0.01 in our experiments. The above equation allows us to apply Langevin dynamics to obtain Equation 10 to sample concept negations. x̃k = x̃k−1 − λ 2 ∇x(αE(x|c1)− E(x|c2)) + ωk, (10) where ωk ∼ N (0, λ). Recursive Concept Combinations We have defined the three classical symbolic operators for concept combinations. 
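Before the discussion of recursive combinations below, here is a compact sketch of how the three operators translate into gradients of a composed energy that can be dropped into the Langevin sampler above. The code and names are ours; the softmin weighting for disjunction follows from differentiating -logsumexp(-E_1, ..., -E_n).

```python
import numpy as np

def conjunction_grad(grads):
    # Eqs. (4)-(5): the composed energy is the sum of the component energies.
    return lambda x: sum(g(x) for g in grads)

def disjunction_grad(energies, grads):
    # Eqs. (7)-(8): the composed energy is -logsumexp(-E_1, ..., -E_n); its gradient
    # is a softmin-weighted combination of the component gradients.
    def grad(x):
        e = np.array([E(x) for E in energies])
        w = np.exp(-(e - e.min()))
        w = w / w.sum()
        return sum(wi * g(x) for wi, g in zip(w, grads))
    return grad

def negation_grad(grad_c1, grad_c2, alpha=0.01):
    # Eqs. (9)-(10): negate c1 while grounding in c2, energy alpha * E(x|c1) - E(x|c2).
    return lambda x: alpha * grad_c1(x) - grad_c2(x)
```

For example, langevin_sample(conjunction_grad([grad_position, grad_color]), x0) would draw samples from the product of the two concept distributions, assuming grad_position and grad_color return the energy gradients of the two conditional EBMs.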
These symbolic operators can further be recursively chained on top of each to specify more complex logical operators at test time. To our knowledge, our approach is the only approach enabling such compositionality across independently trained models. 4 Experiments We perform empirical studies to answer the following questions: (1) Can EBMs exhibit concept compositionality (such as concept negation, conjunction, and disjunction) in generating images? (2) Can we take advantage of concept combinations to learn new concepts in a continual manner? (3) Does explicit factor decomposition enable generalization to novel combinations of factors? (4) Can we perform concept inference across multiple inputs? In the appendix, we further show that approach enables better generalization to novel combinations of factors by learning explicit factor decompositions. 4.1 Setup We perform experiments on 64x64 object scenes rendered in MuJoCo [27] (MuJoCo Scenes) and the 128x128 CelebA dataset. For MuJoCo Scene images, we generate a central object of shape either sphere, cylinder, or box of varying size and color at different positions, with some number of (specified) additional background objects. Images are generated with varying lighting and objects. We use the ImageNet32x32 architecture and ImageNet128x128 architecture from [3] with the Swish activation [22] on MuJoCo and CelebA datasets. Models are trained on MuJoCo datasets for up to 1 day on 1 GPU and for 1 day on 8 GPUs for CelebA. More training details and model architecture can be found in the appendix. 4.2 Compositional Generation Quantitative evaluation. We first evaluate compositionality operations of EBMs in Section 3.2. To quantitatively evaluate generation, we use the MuJoCo Scenes dataset. We train a supervised classifier to predict the object position and color on the MuJoCo Scenes dataset. Our classifier obtains 99.3% accuracy for position and 99.9% for color on the test set. We also train seperate conditional EBMs on the concepts of position and color. For a given positional generation then, if the predicted position (obtained from a supervised classifier on generated images) and original conditioned generation position is smaller than 0.4, then a generation is consider correct. A color generation is correct if the predicted color is the same as the conditioned generation color. In Table 1, we quantitatively evaluate the quality of generated images given combinations of conjunction, disjunction, and negation on the color and position concepts. When using either Color or Position EBMs, the respective accuracy is high. Conjunction(Position, Color) has high position and color accuracies which demonstrates that an EBM can combine different concepts. Under Conjunction(Position, Negation(Color)), the color accuracy drops to below that of Color EBM. This means negating a concept reduces the likelihood of the concept. The same conclusion for Conjunction(Negation(Position), Color). We compare with the approach in [29], using the author’s online github repo, and find it produces blurrier and worse results. To evaluate disjunction, we set Position 1 to be a random point in the bottom left corner of a grid and Position 2 to be a random point in the top right corner of a grid. The average results over 1000 generated images are reported in Table 1. Position 1 EBM or Position 2 EBM can obtain high accuracy in predicting their own positions. 
Disjunction(Position 1, Position 2) EBM generate images that are roughly evenly distributed between Position 1 and Position 2, indicating the disjunction can combine concepts additively. This trend further holds with conjunction, with Disjunction(Conjunction(Position 1, Color 1),Conjunction(Position 2, Color 2)) also being evenly distributed. We further investigate implication using a composition of conjunctions and negations in EBMs. We consider the term (Position 1 AND (NOT Color 1)) AND ... AND (Position 1 AND (NOT Color 4)), which implicates Color 5. We find that are generations obtain 0.982 accuracy for Color 5. Qualitative evaluation. We further provide qualitative visualizations of conjunction, disjunction, and negation operations on both MuJoCo Scenes and CelebA datasets. Concept Conjunction: In Figure 3, we show the conjunction of EBMs is able to combine multiple independent concepts, such as age, gender, smile, and wavy hair, and get more precise generations with each energy models. Our composed generations obtain a FID of 45.3, compared to an FID of 64.5 of an SNGAN model trained on data conditioned on all four attributes. Our generations are also significantly more diverse than that of GAN model (average pixel MSE of 64.5 compared to 55.4 of the GAN model). Similarily, EBMs can combine independent concepts of shape, position, size, and color to get more precise generations in Figure 4. We also show results of conjunction with other logical operators in Figure 5. Concept Negation: In Figure 5, row 4 shows images that are opposite to the trained concept using negation operation. Since concept negation operation should accompany with another concept as described in Section 3.2, we use “smiling“ as the second concept. The images in row 4 shows the negation of male AND smiling is smiling female. This can further be combined with disjunction in the row 5 to make either “non-smiling male” or “smiling female”. Concept Disjunction: The last row of Figure 5 shows EBMs can combine concepts additively (generate images that are concept A or concept B). By constructing sampling using logsumexp, EBMs can sample an image that is “not smiling male” or “smiling female”, where both “not smiling male” and “smiling female” are specified through the conjunction of energy models of the two concepts. Multiple object combination: We show that our composition operations not only combine object concepts or attributes, but also on the object level. To verify this, we constructed a dataset with one green cube and a large amount background clutter objects (which are not green) in the scene. We train a conditional EBM (conditioned on position) on the dataset. Figure 7 “cube 1” and “cube 2” are the generated images conditioned on different positions. We perform the conjunction operation on the EBMs of “cube 1” and “cube 2” and use the combined energy model to generate images (row 3). We find that adding two conditional EBMs allows us to selectively generate two different cubes. Furthermore, such generation satisfies the constraints of the dataset. For example, when two conditional cubes are too close, the conditionals EBMs are able to default and just generate one cube like the last image in row 3. 4.3 Continual Learning We evaluate to what extent compositionality in EBMs enables continual learning of new concepts and their combination with previously learned concepts. If we create an EBM for a novel concept, can it be combined with previous EBMs that have never observed this concept in their training data? 
And can we continually repeat this process? To evaluate this, we use the following methodology on MuJoCo dataset: 1) We first train a position EBM on a dataset of varying positions, but a fixed color and a fixed shape. In experiment, we use shape “cube” and color “purple”. The position EBM allows us generate a purple cube at various positions. (Figure 8 row 1). 2) Next we train a shape EBM by training the model in combination with the position EBM to generate images of different shapes at different positions, but without training position EBM. As shown in Figure 8 row 2, after combining the position and shape EBMs, the “sphere” is placed in the same position as “cubes” in row 1 even these “sphere” positions never be seen during training. 3) Finally, we train a color EBM in combination with both position and shape EBMs to generate images of different shapes at different positions and colors. Again we fix both position and shape EBMs, and only train the color model. In Figure 8 row 3, the objects with different color have the same position as row 1 and same shape as row 2 which shows the EBM can continually learn different concepts and extrapolate new concepts in combination with previously learned concepts to generate new images. In Table 2, we quantitatively evaluate the continuous learning ability of our EBM and GAN [21]. Similar to the quantitative evaluation in Section 3.2, we a train three classifiers for position, shape, color respectively. For fair comparison, the GAN model is also trained sequentially on the position, shape, and color datasets (with the corresponding position, shape, color and other random attributes set to match the training in EBMs). The position accuracy of EBM does not drop significantly when continually learning new concepts (shape and color) which shows our EBM is able to extrapolate earlier learned concepts by combining them with newly learned concepts. In contrast, while the GAN model is able to learn the attributes of position, shape and color models given the corresponding dataset. We find the accuracies of position and shape drops significantly after learning color. The bad performance shows that GANs cannot com- bine the newly learned attributes with the previous attributes. 4.4 Cross Product Extrapolation Humans are endowed with the ability to extrapolate novel concept combinations when only a limited number of combinations were originally observed. For example, despite never having seen a “purple cube”, a human can compose what it looks like based on the previously observation of “red cube” and “purple sphere”. To evaluate the extrapolation ability of EBMs, we construct a dataset of MuJoCo scene images with spheres of all possible sizes appearing only in the top right corner of the scene and spheres of only large size appearing in the remaining positions. The left figure in Figure 9 shows a qualitative illustration. For the spheres only in the top right corner of the scene, we design different settings. For example, 1% meaning only 1% of positions (starting from the top right corner) that contain all sphere sizes are used for training. At test time, we evaluate the generation of spheres of all sizes at positions that are not seen during the training time. Similar to 1%, 10% and 100% mean the spheres of all sizes appears only in the top right 10% and 100% of the scene. The task is to test the quality of generated objects with unseen size and position combinations. 
4.4 Cross Product Extrapolation

Humans are endowed with the ability to extrapolate to novel concept combinations when only a limited number of combinations has been observed. For example, despite never having seen a “purple cube”, a human can compose what it looks like based on previous observations of a “red cube” and a “purple sphere”. To evaluate the extrapolation ability of EBMs, we construct a dataset of MuJoCo scene images in which spheres of all possible sizes appear only in the top right corner of the scene and only large spheres appear in the remaining positions. The left panel of Figure 9 gives a qualitative illustration. For the spheres in the top right corner of the scene, we design different settings: for example, the 1% setting means that only 1% of positions (starting from the top right corner) contain all sphere sizes during training. At test time, we evaluate the generation of spheres of all sizes at positions that were not seen during training. Analogously, the 10% and 100% settings mean that spheres of all sizes appear in the top right 10% or 100% of the scene. The task tests the quality of generated objects with unseen size and position combinations; this requires the model to extrapolate the learned position and size concepts to novel combinations.

We train two EBMs on this dataset. One is conditioned on the position latent and trained only on large sizes, and the other is conditioned on the size latent and trained on the aforementioned percentage of positions. The conjunction of the two EBMs is fine-tuned for generation through gradient descent. We compare this composed model with a baseline holistic model conditioned jointly on both position and size. The baseline is trained on the same position and size combinations and optimized directly with the Mean Squared Error between the generated image and the real image. Both models use the same architecture and number of parameters, as described in the appendix.

We qualitatively compare the EBM and the baseline in Figure 9. When spheres of all sizes appear in only 1% of possible locations, both the EBM and the baseline perform poorly, because the very small number of size and position combinations makes both models fail to extrapolate. For the 10% setting, our EBM is better than the baseline: by learning an independent model for each concept factor, the EBM is able to combine concepts to form images from few combination examples. Both the EBM and the baseline generate accurate images when given examples of all combinations (the 100% setting), but our EBM is closer to the ground truth than the baseline.

In Figure 10, we quantitatively evaluate the extrapolation ability of the EBM and the baseline. We train a regression model that outputs both the position and size of a generated sphere image. We compute the error between the predicted size and the ground truth size and report it in the first panel of Figure 10; we report the position error in the second panel. EBMs are able to extrapolate both position and size better than the baseline model, with smaller errors. The size error goes down with more examples of all sphere sizes. For the position error, both the EBM and the baseline have smaller errors at 1% data than at 5% or 10% data. This result is due to the make-up of the data: with 1% data, only 1% of the rightmost sphere positions have different size annotations, so the models generate large spheres at the conditioned position, which are closer to the ground truth position since most positions (99%) contain large spheres.

4.5 Concept Inference

Our formulation also allows us to infer concept parameters given a compositional relationship among inputs. For example, given a set of images, each generated by the same underlying concept (a conjunction), the likelihood of a concept is given by:

$p(x_1, x_2, \ldots, x_n \mid c) \propto e^{-\sum_i E(x_i \mid c)}$. (11)

We can then obtain maximum a posteriori (MAP) estimates of the concept parameters by minimizing the negative logarithm of the above expression, i.e., the summed energy. We evaluate inference with an EBM trained on object position, which takes an image and an object position (x, y in 2D) as input and outputs an energy. We analyze the accuracy of such inference in the appendix and find that EBMs exhibit both high accuracy and robustness, performing better than a ResNet.
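As an illustration of Eq. (11), the snippet below sketches MAP inference of a continuous concept parameter (e.g., a 2D position) shared by one or more observed images, by minimizing the summed energy with gradient descent. The signature ebm(x, c), the optimizer, and the hyperparameters are assumptions made for this sketch, not the authors' implementation.

```python
import torch

def infer_concept(ebm, images, c_init, steps=200, lr=0.05):
    """MAP estimate of a concept parameter c shared by all observed images.

    ebm(x, c) is assumed to return one energy per image in the batch x.
    Following Eq. (11), p(x_1..x_n | c) is proportional to
    exp(-sum_i E(x_i | c)), so the MAP estimate minimizes the energies
    summed over the observations.
    """
    c = c_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(steps):
        # sum of per-image energies under the shared concept parameter
        loss = sum(ebm(x.unsqueeze(0), c).sum() for x in images)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return c.detach()
```

With a single image this reduces to the single-observation inference described above; passing several views corresponds to the conjunction over observations discussed next.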
Concept Inference from Multiple Observations: The composition rules in Section 3.2 apply directly to inference. When given several different views of an object at a particular position, with varying size, shape, camera viewpoint, and lighting conditions, we can formulate concept inference as inference over a conjunction of multiple positional EBMs. Each positional EBM takes a different view as input, and we minimize the energy over positions across the sum of the energies. Using the same metric as above, i.e., the Mean Absolute Error of position inference, we find that the error in regressing positions goes down as more images are successively provided (Figure 11).

Concept Inference of Unseen Scenes with Multiple Objects: We also investigate the inherent compositionality that emerges from inference with a single EBM generalizing to multiple objects. Given EBMs trained on images of a single object, we test on images with multiple objects (not seen in training). In Figure 12, we plot the input RGB image and the generated energy maps over all positions in the scene. The “Two Cubes” scenes are never seen during training, but the output energy map still makes sense, exhibiting a bimodal energy distribution. The generated energy map of “Two Cubes” is also close to the sum of the energy maps of “Cube 1” and “Cube 2”, which shows that the EBM is able to infer concepts, such as position, in unseen scenes with multiple objects.

5 Conclusion

In this paper, we demonstrate the potential of EBMs for both compositional generation and inference. We show that EBMs support composition at both the factor and the object level, unifying different perspectives on compositionality, and that composed models can recursively be combined with each other. We further showcase how this composition can be applied to both continually learn and compositionally infer underlying concepts. We hope our results inspire future work in this direction.

6 Acknowledgement

We would like to thank Jiayuan Mao for reading the paper and providing feedback, and both Josh Tenenbaum and Jiayuan Mao for helpful comments.

7 Broader Impacts

We believe that compositionality is a crucial component of next-generation AI systems. Compositionality enables a system to synthesize and combine knowledge from different domains to tackle the problem at hand. Our proposed method is a step towards more composable deep learning models. A truly compositional system has many positive societal benefits, potentially enabling intelligent and flexible robots that can selectively recruit different learned skills for the task at hand, or super-human synthesis of scientific knowledge that can further the progress of scientific discovery. At the same time, there remain unanswered ethical questions about any such next-generation AI system.
1. What is the focus and contribution of the paper on machine learning? 2. What are the strengths of the proposed approach, particularly in terms of its ability to generalize disentanglement? 3. What are the weaknesses of the paper, especially regarding the quality of the results and clarity of details?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper presents an approach that can potentially deliver a long-sought capability in machine learning - posterior concept compositionality. The authors propose a framework in which a "concept" (or a feature, such as position, color, hair color, gender, geometric shape, etc.) is learned from a set of images where all other "concepts" are kept fixed. After this is done for each concept separately, an image that portrays a requested logical combination of the concepts is produced, without the system ever seeing any combined result. This is, in a sense, a generalization of disentanglement, where typically a single factor is learned to be extracted away from all others. Strengths The demonstrated capability is an interesting one, which most probably carries merit for the community. Weaknesses - The quality of the results is underwhelming, but this is to be expected since the system has never seen composed examples. - Some of the details are still somewhat unclear.
NIPS
1. What is the focus and contribution of the paper on compositionality in generative models? 2. What are the strengths of the proposed energy-based model, particularly in controlled generation and compositional generalization? 3. What are the weaknesses of the paper regarding visual quality, efficiency, and convergence? 4. How does the reviewer assess the novelty and similarity of the proposed approach compared to prior works, specifically VAEs? 5. Do you have any concerns about the limited variability in generated images, and how can it be addressed?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper explores compositionality in generative models and discusses how to use energy models in order to achieve compositionality of objects or properties. It explores the model in the contexts of faces (CelebA) and artificial block images with a couple of objects and multiple properties that can be controlled, such as shape, position and color. Update: I thank the authors for responding to the review. I was satisfied with the responses to my questions and potential concerns and am happy to improve my score. It would be great to update the final version of the paper accordingly based on the authors' response. Best of luck! Strengths * Developing a new energy based model for generating natural images. * Ability to perform controlled generation that combines different properties through logical operations. * Compositional generalization to new unseen combinations of concepts. Weaknesses * The visual quality/fidelity of the generated images is quite low. Making sure that the visual fidelity on common metrics such as FID matches or is at least close to that of GAN models will be useful to validate that the approach supports high fidelity (as otherwise it may be the case that it achieves compositionality at the expense of lower potential for fine details or high fidelity, as is the case in e.g. VAEs). Given that there have been many works that explore combinations of properties for CelebA images with GANs, showing that the proposed approach can compete with them is especially important. * It is unclear to me whether MCMC is efficient in terms of training and convergence. Showing learning plots, as well as comparisons to other types of generative models, would be useful. * The use of energy models for image generation is much less explored than GANs and VAEs, so exploring it further is great. However, note that the motivation and goals of the model -- to achieve compositional generation through logical combination of concepts learned from data subsets -- are similar to a prior VAE paper. See further details in the related work part of the review. * Given the visual samples in the paper, it looks as if the model may have limited variability in generated images: the face images in Figure 3 show that in both the second and fourth rows the model tends to generate images that feature unspecified but correlated properties, such as the blonde hair or the very similar bottom three faces. That’s also the case in Figure 5, rows 2-4. Consequently, it gives the sense that the model or sampling may not allow for large variation in the generated images, but rather tends to produce typical, likely examples, as happened in earlier GAN models. A quantitative comparison of the variance in the images compared to other types of generative models will be useful to either refute or validate this.
NIPS
Title Compositional Visual Generation with Energy Based Models Abstract A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. In this paper we show that energy-based models can exhibit this ability by directly combining probability distributions. Samples from the combined distribution correspond to compositions of concepts. For example, given one distribution for smiling face images, and another for male faces, we can combine them to generate smiling male faces. This allows us to generate natural images that simultaneously satisfy conjunctions, disjunctions, and negations of concepts. We evaluate compositional generation abilities of our model on the CelebA dataset of natural faces and synthetic 3D scene images. We showcase the breadth of unique capabilities of our model, such as the ability to continually learn and incorporate new concepts, or infer compositions of concept properties underlying an image. 1 Introduction Humans are able to rapidly learn new concepts and continuously integrate them among prior knowledge. The core component in enabling this is the ability to compose increasingly complex concepts out of simpler ones as well as recombining and reusing concepts in novel ways [5]. By combining a finite number of primitive components, humans can create an exponential number of new concepts, and use them to rapidly explain current and past experiences [16]. We are interested in enabling such capabilities in machine learning systems, particularly in the context of generative modeling. Past efforts have attempted to enable compositionality in several ways. One approach decomposes data into disentangled factors of variation and situate each datapoint in the resulting - typically continuous - factor vector space [29, 9]. The factors can either be explicitly provided or learned in an unsupervised manner. In both cases, however, the dimensionality of the factor vector space is fixed and defined prior to training. This makes it difficult to introduce new factors of variation, which may be necessary to explain new data, or to taxonomize past data in new ways. Another approach to incorporate the compositionality is to spatially decompose an image into a collection of objects, each object slot occupying some pixels of the image defined by a segmentation mask [28, 6]. Such approaches can generate visual scenes with multiple objects, but may have difficulty in generating interactions between objects. These two incorporations of compositionality are considered distinct, with very different underlying implementations. In this work∗, we propose to implement the compositionality via energy based models (EBMs). Instead of an explicit vector of factors that is input to a generator function, or object slots that are blended to form an image, our unified treatment defines factors of variation and object slots via energy functions. Each factor is represented by an individual scalar energy function that takes as input an image and outputs a low energy value if the factor is exhibited in the image. Images that exhibit the ∗Code and data available at https://energy-based-model.github.io/ compositional-generation-inference/ 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. factor can then be generated implicitly through an Markov Chain Monte Carlo (MCMC) sampling process that minimizes the energy. 
Importantly, it is also possible to run MCMC process on some combination of energy functions to generate images that exhibit multiple factors or multiple objects, in a globally coherent manner. There are several ways to combine energy functions. One can add or multiply distributions as in mixtures [25, 6] or products [11] of experts. We view these as probabilistic instances of logical operators over concepts. Instead of using only one, we consider three operators: logical conjunction, disjunction, and negation (illustrated in Figure 1). We can then flexibly and recursively combine multiple energy functions via these operators. More complex operators (such as implication) can be formed out of our base operators. EBMs with such composition operations enable a breadth of new capabilities - among them is a unique approach to continual learning. Our formulation defines concepts or factors implicitly via examples, rather than pre-declaring an explicit latent space ahead of time. For example, we can create an EBM for concept "black hair" from a dataset of face images that share this concept. New concepts (or factors), such as hair color can be learned by simply adding a new energy function and can then be combined with energies for previously trained concepts. This process can repeat continually. This view of few-shot concept learning and generation is similar to work of [23], with the distinction that instead of learning to generate holistic images from few examples, we learn factors from examples, which can be composed with other factors. A related advantage is that finely controllable image generation can be achieved by specifying the desired image via a collection of logical clauses, with applications to neural scene rendering [4]. Our contributions are as follows: first, while composition of energy-based models has been proposed in abstract settings before [11], we show that it can be used to generate plausible natural images. Second, we propose a principled approach to combine independent trained energy models based on logical operators which can be chained recursively, allowing controllable generation based on a collection of logical clauses at test time. Third, by being able to recursively combine independent models, we show our approach allows us to extrapolate to new concept combinations, continually incorporate new visual concepts for generation, and infer concept properties compositionally. 2 Related Work Our work draws on results in energy based models - see [17] for a comprehensive review. A number of methods have been used for inference and sampling in EBMs, from Gibbs Sampling [12], Langevin Dynamics [31, 3], Path Integral methods [2] and learned samplers [13, 26]. In this work, we apply EBMs to the task of compositional generation. Compositionality has been incorporated in representation learning (see [1] for a summary) and generative modeling. One approach to compositionality has focused on learning disentangled factors of variation [8, 15, 29]. Such an approach allows for the combination of existing factors, but does not allow the addition of new factors. A different approach to compositionality includes learning various different pixel/segmentation masks for each concept [6, 7]. However such a factorization may have difficulty capturing the global structure of an image, and in many cases different concepts cannot be explicitly factored using attention masks. 
In contrast, our approach towards compositionality focuses on composing separate learned probability distribution of concepts. Such an approach allows viewing factors of variation as constraints [19]. In prior work, [10] show that products of EBMs can be used to decompose complex generative modeling problems to simpler ones. [29] further apply products of distributions over the latent space of VAE to define compositions. [9] show that additional compositions in VAE latent space. Both of them rely on joint training to learn compositions of a fixed number of concepts. In contrast, in this work, we show how we can realize concept compositions using completely independently trained probability distributions. Furthermore, we introduce three compositional logical operators of conjunction, disjunction and negation can be realized and nested together through manipulation of independent probability distributions of each concept. Our compositional approach is inspired by the goal of continual lifelong learning - see [20] for a thorough review. New concepts can be composed with past concepts by combining new independent probability distributions. Many methods in continual learning are focused on how to overcome catashtophic forgetting [14, 18], but do not support dynamically growing capacity. Progressive growing of the models [24] has been considered, but is implemented at the level of the model architecture, whereas our method composes independent models together. 3 Method In this section, we first give an overview of the Energy-Based Model formulation we use and introduce three logical operators over these models. We then discuss the unique properties such a form of compositionality enables. 3.1 Energy Based Models EBMs represent data by learning an unnormalized probability distribution across the data. For each data point x, an energy function Eθ(x), parameterized by a neural network, outputs a scalar real energy such that the model distribution pθ(x) ∝ e−Eθ(x). (1) To train an EBM on a data distribution pD, we use contrastive divergence [10]. In particular we use the methodology defined in [3], where a Monte Carlo estimate (Equation 2) of maximum likelihood L is minimized with the following gradient ∇θL = Ex+∼pD∇θEθ(x +)− Ex−∼pθ∇θEθ(x −). (2) To sample x− from pθ for both training and generation, we use MCMC based off Langevin dynamics [30]. Samples are initialized from uniform random noise and are iteratively refined using x̃k = x̃k−1 − λ 2 ∇xEθ(x̃k−1) + ωk, ωk ∼ N (0, λ), (3) where k is the kth iteration step and λ is the step size. We refer to each iteration of Langevin dynamics as a negative sampling step. We note that this form of sampling allows us to use the gradient of the combined distribution to generate samples from distributions composed of pθ and the other distributions. We use this ability to generate from multiple different compositions of distributions. 3.2 Composition of Energy-Based Models We next present different ways that EBMs can compose. We consider a set of independently trained EBMs, E(x|c1), E(x|c2), . . . , E(x|cn), which are learned conditional distributions on underlying concept codes ci. Latent codes we consider include position, size, color, gender, hair style, and age, which we also refer to as concepts. Figure 2 shows three concepts and their combinations on the CelebA face dataset and attributes. 
Concept Conjunction In concept conjunction, given separate independent concepts (such as a particular gender, hair style, or facial expression), we wish to construct an output with the specified gender, hair style, and facial expression – the combination of each concept. Since the likelihood of an output given a set of specific concepts is equal to the product of the likelihood of each individual concept, we have Equation 4, which is also known as the product of experts [11]: p(x|c1 and c2, . . . , and ci) = ∏ i p(x|ci) ∝ e− ∑ i E(x|ci). (4) We can thus apply Equation 3 to the distribution that is the sum of the energies of each concept. We sample from this distribution using Equation 5 to sample from the joint concept space with ωk ∼ N (0, λ). x̃k = x̃k−1 − λ 2 ∇x ∑ i Eθ(x̃ k−1|ci) + ωk. (5) Concept Disjunction In concept disjunction, given separate concepts such as the colors red and blue, we wish to construct an output that is either red or blue. This requires a distribution that has probability mass when any chosen concept is true. A natural choice of such a distribution is the sum of the likelihood of each concept: p(x|c1 or c2, . . . or ci) ∝ ∑ i p(x|ci)/Z(ci). (6) where Z(ci) denotes the partition function for each concept. A tractable simplification becomes available if we assume all partition functions Z(ci) to be equal∑ i p(x|ci) ∝ ∑ i e−E(x|ci) = elogsumexp(−E(x|c1),−E(x|c2),...,−E(x|ci)), (7) where logsumexp(f1, . . . , fN ) = log ∑ i exp(fi). We can thus apply Equation 3 to the distribution that is a negative smooth minimum of the energies of each concept to obtain Equation 8 to sample from the disjunction concept space: x̃k = x̃k−1 − λ 2 ∇xlogsumexp(−E(x|c1),−E(x|c2), . . . ,−E(x|ci)) + ωk, (8) where ωk ∼ N (0, λ). While the assumption that leads to Equation 7 is not guaranteed to hold in general, in our experiments we empirically found the partition function Z(ci) estimates to be similar across partition functions (see Appendix) and also analyze cases in which partitions functions are different in the Appendix. Furthermore, the resulting generation results do exhibit equal distribution across disjunction constituents in practice as seen in Table 1. Concept Negation In concept negation, we wish to generate an output that does not contain the concept. Given a color red, we want an output that is of a different color, such as blue. Thus, we want to construct a distribution that places high likelihood to data that is outside a given concept. One choice is a distribution inversely proportional to the concept. Importantly, negation must be defined with respect to another concept to be useful. The opposite of alive may be dead, but not inanimate. Negation without a data distribution is not integrable and leads to a generation of chaotic textures which, while satisfying absence of a concept, is not desirable. Thus in our experiments with negation we combine it with another concept to ground the negation and obtain an integrable distribution: p(x|not(c1), c2) ∝ p(x|c2) p(x|c1)α ∝ eαE(x|c1)−E(x|c2). (9) We found the smoothing parameter α to be a useful regularizer (when α = 0 we arrive at uniform distribution) and we use α = 0.01 in our experiments. The above equation allows us to apply Langevin dynamics to obtain Equation 10 to sample concept negations. x̃k = x̃k−1 − λ 2 ∇x(αE(x|c1)− E(x|c2)) + ωk, (10) where ωk ∼ N (0, λ). Recursive Concept Combinations We have defined the three classical symbolic operators for concept combinations. 
These symbolic operators can further be recursively chained on top of each to specify more complex logical operators at test time. To our knowledge, our approach is the only approach enabling such compositionality across independently trained models. 4 Experiments We perform empirical studies to answer the following questions: (1) Can EBMs exhibit concept compositionality (such as concept negation, conjunction, and disjunction) in generating images? (2) Can we take advantage of concept combinations to learn new concepts in a continual manner? (3) Does explicit factor decomposition enable generalization to novel combinations of factors? (4) Can we perform concept inference across multiple inputs? In the appendix, we further show that approach enables better generalization to novel combinations of factors by learning explicit factor decompositions. 4.1 Setup We perform experiments on 64x64 object scenes rendered in MuJoCo [27] (MuJoCo Scenes) and the 128x128 CelebA dataset. For MuJoCo Scene images, we generate a central object of shape either sphere, cylinder, or box of varying size and color at different positions, with some number of (specified) additional background objects. Images are generated with varying lighting and objects. We use the ImageNet32x32 architecture and ImageNet128x128 architecture from [3] with the Swish activation [22] on MuJoCo and CelebA datasets. Models are trained on MuJoCo datasets for up to 1 day on 1 GPU and for 1 day on 8 GPUs for CelebA. More training details and model architecture can be found in the appendix. 4.2 Compositional Generation Quantitative evaluation. We first evaluate compositionality operations of EBMs in Section 3.2. To quantitatively evaluate generation, we use the MuJoCo Scenes dataset. We train a supervised classifier to predict the object position and color on the MuJoCo Scenes dataset. Our classifier obtains 99.3% accuracy for position and 99.9% for color on the test set. We also train seperate conditional EBMs on the concepts of position and color. For a given positional generation then, if the predicted position (obtained from a supervised classifier on generated images) and original conditioned generation position is smaller than 0.4, then a generation is consider correct. A color generation is correct if the predicted color is the same as the conditioned generation color. In Table 1, we quantitatively evaluate the quality of generated images given combinations of conjunction, disjunction, and negation on the color and position concepts. When using either Color or Position EBMs, the respective accuracy is high. Conjunction(Position, Color) has high position and color accuracies which demonstrates that an EBM can combine different concepts. Under Conjunction(Position, Negation(Color)), the color accuracy drops to below that of Color EBM. This means negating a concept reduces the likelihood of the concept. The same conclusion for Conjunction(Negation(Position), Color). We compare with the approach in [29], using the author’s online github repo, and find it produces blurrier and worse results. To evaluate disjunction, we set Position 1 to be a random point in the bottom left corner of a grid and Position 2 to be a random point in the top right corner of a grid. The average results over 1000 generated images are reported in Table 1. Position 1 EBM or Position 2 EBM can obtain high accuracy in predicting their own positions. 
The Disjunction(Position 1, Position 2) EBM generates images that are roughly evenly distributed between Position 1 and Position 2, indicating that disjunction can combine concepts additively. This trend further holds with conjunction, with Disjunction(Conjunction(Position 1, Color 1), Conjunction(Position 2, Color 2)) also being evenly distributed. We further investigate implication using a composition of conjunctions and negations in EBMs. We consider the term (Position 1 AND (NOT Color 1)) AND ... AND (Position 1 AND (NOT Color 4)), which implies Color 5. We find that our generations obtain 0.982 accuracy for Color 5.

Qualitative evaluation. We further provide qualitative visualizations of the conjunction, disjunction, and negation operations on both the MuJoCo Scenes and CelebA datasets.

Concept Conjunction: In Figure 3, we show that the conjunction of EBMs is able to combine multiple independent concepts, such as age, gender, smile, and wavy hair, and obtain more precise generations with each added energy model. Our composed generations obtain an FID of 45.3, compared to an FID of 64.5 for an SNGAN model trained on data conditioned on all four attributes. Our generations are also significantly more diverse than those of the GAN model (average pixel MSE of 64.5 compared to 55.4 for the GAN model). Similarly, EBMs can combine the independent concepts of shape, position, size, and color to obtain more precise generations in Figure 4. We also show results of conjunction with other logical operators in Figure 5.

Concept Negation: In Figure 5, row 4 shows images that are opposite to the trained concept using the negation operation. Since the concept negation operation must be accompanied by another concept, as described in Section 3.2, we use “smiling” as the second concept. The images in row 4 show that the negation of male AND smiling is a smiling female. This can further be combined with disjunction in row 5 to make either a “non-smiling male” or a “smiling female”.

Concept Disjunction: The last row of Figure 5 shows that EBMs can combine concepts additively (generate images that are concept A or concept B). By constructing the sampling procedure using logsumexp, EBMs can sample an image that is a “not smiling male” or a “smiling female”, where both “not smiling male” and “smiling female” are specified through the conjunction of the energy models of the two concepts.

Multiple object combination: We show that our composition operations combine not only object concepts or attributes, but also operate at the object level. To verify this, we constructed a dataset with one green cube and a large number of background clutter objects (which are not green) in the scene. We train a conditional EBM (conditioned on position) on this dataset. In Figure 7, “cube 1” and “cube 2” are the generated images conditioned on different positions. We perform the conjunction operation on the EBMs of “cube 1” and “cube 2” and use the combined energy model to generate images (row 3). We find that adding two conditional EBMs allows us to selectively generate two different cubes. Furthermore, such generation satisfies the constraints of the dataset. For example, when two conditioned cubes are too close, the conditional EBMs default to generating just one cube, as in the last image of row 3.

4.3 Continual Learning

We evaluate to what extent compositionality in EBMs enables continual learning of new concepts and their combination with previously learned concepts. If we create an EBM for a novel concept, can it be combined with previous EBMs that have never observed this concept in their training data?
And can we continually repeat this process? To evaluate this, we use the following methodology on the MuJoCo dataset: 1) We first train a position EBM on a dataset of varying positions, but with a fixed color and a fixed shape. In our experiments, we use the shape “cube” and the color “purple”. The position EBM allows us to generate a purple cube at various positions (Figure 8, row 1). 2) Next, we train a shape EBM in combination with the position EBM to generate images of different shapes at different positions, without updating the position EBM. As shown in Figure 8, row 2, after combining the position and shape EBMs, the “sphere” is placed in the same positions as the “cubes” in row 1, even though these sphere positions were never seen during training. 3) Finally, we train a color EBM in combination with both the position and shape EBMs to generate images of different shapes at different positions and colors. Again, we fix both the position and shape EBMs and only train the color model. In Figure 8, row 3, the objects with different colors have the same positions as row 1 and the same shapes as row 2, which shows that the EBM can continually learn different concepts and combine newly learned concepts with previously learned ones to generate new images.

In Table 2, we quantitatively evaluate the continual learning ability of our EBM and a GAN [21]. Similar to the quantitative evaluation in Section 3.2, we train three classifiers for position, shape, and color, respectively. For a fair comparison, the GAN model is also trained sequentially on the position, shape, and color datasets (with the corresponding position, shape, color, and other random attributes set to match the training of the EBMs). The position accuracy of the EBM does not drop significantly when continually learning new concepts (shape and color), which shows that our EBM is able to extrapolate earlier learned concepts by combining them with newly learned concepts. In contrast, while the GAN model is able to learn the position, shape, and color attributes given the corresponding datasets, we find that the accuracies for position and shape drop significantly after learning color. This poor performance shows that GANs cannot combine the newly learned attributes with the previous attributes.

4.4 Cross Product Extrapolation

Humans are endowed with the ability to extrapolate novel concept combinations when only a limited number of combinations were originally observed. For example, despite never having seen a “purple cube”, a human can compose what it looks like based on previous observations of a “red cube” and a “purple sphere”. To evaluate the extrapolation ability of EBMs, we construct a dataset of MuJoCo scene images with spheres of all possible sizes appearing only in the top-right corner of the scene and spheres of only large size appearing in the remaining positions. The left figure in Figure 9 shows a qualitative illustration. For the spheres in the top-right corner of the scene, we design different settings. For example, 1% means that only 1% of positions (starting from the top-right corner) contain all sphere sizes used for training. At test time, we evaluate the generation of spheres of all sizes at positions that are not seen during training. Similarly to 1%, the 10% and 100% settings mean that spheres of all sizes appear only in the top-right 10% and 100% of the scene, respectively. The task is to test the quality of generated objects with unseen size and position combinations.
This requires the model to extrapolate the learned position and size concepts to novel combinations. We train two EBMs on this dataset. One is conditioned on the position latent and trained only on large sizes, and the other is conditioned on the size latent and trained on the aforementioned percentage of positions. The conjunction of the two EBMs is fine-tuned for generation through gradient descent. We compare this composed model with a baseline holistic model conditioned on both position and size jointly. The baseline is trained on the same position and size combinations and optimized directly with the Mean Squared Error between the generated image and the real image. Both models use the same architecture and number of parameters, as described in the appendix.

We qualitatively compare the EBM and the baseline in Figure 9. When spheres of all sizes are distributed in only 1% of the possible locations, both the EBM and the baseline perform poorly. This is because the very few combinations of sizes and positions make both models fail to extrapolate. For the 10% setting, our EBM is better than the baseline. The EBM is able to combine concepts to form images from few combination examples by learning an independent model for each concept factor. Both the EBM and the baseline generate accurate images when given examples of all combinations (the 100% setting), but our EBM is closer to the ground truth than the baseline.

In Figure 10, we quantitatively evaluate the extrapolation ability of the EBM and the baseline. We train a regression model that outputs both the position and the size of a generated sphere image. We compute the error between the predicted size and the ground-truth size and report it in the first image of Figure 10. Similarly, we report the position error in the second image. EBMs are able to extrapolate both position and size better than the baseline model, with smaller errors. The size error goes down with more examples of all sphere sizes. For position error, both the EBM and the baseline model have smaller errors at 1% data than at 5% or 10% data. This result is due to the make-up of the data: with 1% data, only 1% of the rightmost sphere positions have different size annotations, so the models generate large spheres at the conditioned position, which are closer to the ground-truth position since most positions (99%) contain large spheres.

4.5 Concept Inference

Our formulation also allows us to infer concept parameters given a compositional relationship among inputs. For example, given a generated set of images, each generated by the same underlying concept (conjunction), the likelihood of a concept is given by:

p(x_1, x_2, \ldots, x_n \mid c) \propto e^{-\sum_i E(x_i \mid c)}. \quad (11)

We can then obtain maximum a posteriori (MAP) estimates of the concept parameters by minimizing the negative logarithm of the above expression. We evaluate inference on an EBM trained on object position, which takes an image and an object position (x, y in 2D) as input and outputs an energy. We analyze the accuracy of such inference in the appendix and find that EBMs exhibit both high accuracy and robustness, performing better than a ResNet.

Concept Inference from Multiple Observations The composition rules in Section 3.2 apply directly to inference. When given several different views of an object at a particular position with different sizes, shapes, camera viewpoints, and lighting conditions, we can formulate concept inference as inference over a conjunction of multiple positional EBMs.
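A minimal sketch of this MAP inference over a conjunction of observations (Equation 11) is given below. It assumes the positional EBM is exposed as a callable `energy(image, position)` returning a scalar; the optimizer, learning rate, and step count are illustrative choices, not the paper's configuration.

```python
import torch

def infer_shared_concept(energy, images, init_position, lr=0.01, n_steps=200):
    """MAP estimate of a concept (here, a 2D position) shared by several observations.

    energy: assumed callable mapping (image (C, H, W), position (2,)) -> scalar energy,
            mirroring the positional EBM described in the text.
    images: observations generated under (or depicting) the same underlying concept.
    Minimizing the summed energy over all observations corresponds to maximizing Eq. 11.
    """
    position = init_position.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([position], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        total_energy = sum(energy(img, position) for img in images)  # conjunction of views
        total_energy.backward()
        optimizer.step()
    return position.detach()
```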
Each positional EBM takes a different view as input, and we minimize the energy over positions across the sum of the energies. We use the same metric as above, i.e., Mean Absolute Error, for position inference and find that the error in regressing positions goes down as more images are successively given (Figure 11).

Concept Inference on Unseen Scenes with Multiple Objects We also investigate the inherent compositionality that emerges when inference with a single EBM generalizes to multiple objects. Given EBMs trained on images of a single object, we test on images with multiple objects (not seen in training). In Figure 12, we plot the input RGB image and the generated energy maps over all positions in the scene. The “Two Cubes” scenes are never seen during training, but the output energy map still makes sense, with a bimodal energy distribution. The generated energy map of “Two Cubes” is also close to the summation of the energy maps of “Cube 1” and “Cube 2”, which shows that the EBM is able to infer concepts, such as position, in unseen scenes with multiple objects.

5 Conclusion

In this paper, we demonstrate the potential of EBMs for both compositional generation and inference. We show that EBMs support composition on both the factor and the object level, unifying different perspectives of compositionality, and that composed models can recursively combine with each other. We further showcase how this composition can be applied to both continually learn and compositionally infer underlying concepts. We hope our results inspire future work in this direction.

6 Acknowledgement

We would like to thank Jiayuan Mao for reading and providing feedback on the paper, and both Josh Tenenbaum and Jiayuan Mao for helpful feedback on the paper.

7 Broader Impacts

We believe that compositionality is a crucial component of next-generation AI systems. Compositionality enables a system to synthesize and combine knowledge from different domains to tackle the problem at hand. Our proposed method is a step towards more composable deep learning models. A truly compositional system has many positive societal benefits, potentially enabling intelligent and flexible robots that can selectively recruit different skills learned for the task at hand, or super-human synthesis of scientific knowledge that can further the progress of scientific discovery. At the same time, there remain unanswered ethical problems about any such next-generation AI system.
1. What is the focus and contribution of the paper regarding compositionality in energy-based models? 2. What are the strengths of the proposed approach, particularly in its ability to enable continual learning and concept inference? 3. Do you have any concerns or questions regarding the paper's claims about L2 normalization and spectral normalization? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper considers training a set of independent energy-based models (EBMs) for different concepts and compose them based on three different logical operators over concepts. This is a new approach to compositionality with separate EBMs compared to previous methods cited in the paper. The proposed method enables interesting applications such as continual learning with reusing past EBMs for image generation so that different compositions of concepts can be present in the generated images as the continual learning progresses. This paper provides both qualitative and quantitative evaluations for compositional concept image generation tasks in Mujoco Scene domain and CelebA domain to justify their three methods of combining energy functions. Concepts extrapolation, continual learning and concept inference are also evaluated in Mujoco Scene domain. Strengths The simple derivations on the equivalence between different ways of combining energy functions and different logical operators is neat and interesting. (though product of mixture is proposed before and the simplified expression for disjunction assumes equal partition function for different models) This is the first work to actually generate realistic images with composite concepts using Langevin dynamics. The performance on continual learning is also promising with better quantitative results than GANs. Weaknesses The paper claims L2 normalization and spectral normalization can make partition function behave similarly for different models trained on same (but with different concepts) or different datasets but I am not sure whether this conjecture is true. Is there any ablation studies (e.g. by removing spectral normalization) to support this claim? It might just make training unstable by removing it but this can still be checked and mentioned in the paper. I believe you could also make some simple synthetic toy data to expand on the conditions when this argument holds.
NIPS
Title Compositional Visual Generation with Energy Based Models Abstract A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. In this paper we show that energy-based models can exhibit this ability by directly combining probability distributions. Samples from the combined distribution correspond to compositions of concepts. For example, given one distribution for smiling face images, and another for male faces, we can combine them to generate smiling male faces. This allows us to generate natural images that simultaneously satisfy conjunctions, disjunctions, and negations of concepts. We evaluate compositional generation abilities of our model on the CelebA dataset of natural faces and synthetic 3D scene images. We showcase the breadth of unique capabilities of our model, such as the ability to continually learn and incorporate new concepts, or infer compositions of concept properties underlying an image. 1 Introduction Humans are able to rapidly learn new concepts and continuously integrate them among prior knowledge. The core component in enabling this is the ability to compose increasingly complex concepts out of simpler ones as well as recombining and reusing concepts in novel ways [5]. By combining a finite number of primitive components, humans can create an exponential number of new concepts, and use them to rapidly explain current and past experiences [16]. We are interested in enabling such capabilities in machine learning systems, particularly in the context of generative modeling. Past efforts have attempted to enable compositionality in several ways. One approach decomposes data into disentangled factors of variation and situate each datapoint in the resulting - typically continuous - factor vector space [29, 9]. The factors can either be explicitly provided or learned in an unsupervised manner. In both cases, however, the dimensionality of the factor vector space is fixed and defined prior to training. This makes it difficult to introduce new factors of variation, which may be necessary to explain new data, or to taxonomize past data in new ways. Another approach to incorporate the compositionality is to spatially decompose an image into a collection of objects, each object slot occupying some pixels of the image defined by a segmentation mask [28, 6]. Such approaches can generate visual scenes with multiple objects, but may have difficulty in generating interactions between objects. These two incorporations of compositionality are considered distinct, with very different underlying implementations. In this work∗, we propose to implement the compositionality via energy based models (EBMs). Instead of an explicit vector of factors that is input to a generator function, or object slots that are blended to form an image, our unified treatment defines factors of variation and object slots via energy functions. Each factor is represented by an individual scalar energy function that takes as input an image and outputs a low energy value if the factor is exhibited in the image. Images that exhibit the ∗Code and data available at https://energy-based-model.github.io/ compositional-generation-inference/ 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. factor can then be generated implicitly through an Markov Chain Monte Carlo (MCMC) sampling process that minimizes the energy. 
Importantly, it is also possible to run MCMC process on some combination of energy functions to generate images that exhibit multiple factors or multiple objects, in a globally coherent manner. There are several ways to combine energy functions. One can add or multiply distributions as in mixtures [25, 6] or products [11] of experts. We view these as probabilistic instances of logical operators over concepts. Instead of using only one, we consider three operators: logical conjunction, disjunction, and negation (illustrated in Figure 1). We can then flexibly and recursively combine multiple energy functions via these operators. More complex operators (such as implication) can be formed out of our base operators. EBMs with such composition operations enable a breadth of new capabilities - among them is a unique approach to continual learning. Our formulation defines concepts or factors implicitly via examples, rather than pre-declaring an explicit latent space ahead of time. For example, we can create an EBM for concept "black hair" from a dataset of face images that share this concept. New concepts (or factors), such as hair color can be learned by simply adding a new energy function and can then be combined with energies for previously trained concepts. This process can repeat continually. This view of few-shot concept learning and generation is similar to work of [23], with the distinction that instead of learning to generate holistic images from few examples, we learn factors from examples, which can be composed with other factors. A related advantage is that finely controllable image generation can be achieved by specifying the desired image via a collection of logical clauses, with applications to neural scene rendering [4]. Our contributions are as follows: first, while composition of energy-based models has been proposed in abstract settings before [11], we show that it can be used to generate plausible natural images. Second, we propose a principled approach to combine independent trained energy models based on logical operators which can be chained recursively, allowing controllable generation based on a collection of logical clauses at test time. Third, by being able to recursively combine independent models, we show our approach allows us to extrapolate to new concept combinations, continually incorporate new visual concepts for generation, and infer concept properties compositionally. 2 Related Work Our work draws on results in energy based models - see [17] for a comprehensive review. A number of methods have been used for inference and sampling in EBMs, from Gibbs Sampling [12], Langevin Dynamics [31, 3], Path Integral methods [2] and learned samplers [13, 26]. In this work, we apply EBMs to the task of compositional generation. Compositionality has been incorporated in representation learning (see [1] for a summary) and generative modeling. One approach to compositionality has focused on learning disentangled factors of variation [8, 15, 29]. Such an approach allows for the combination of existing factors, but does not allow the addition of new factors. A different approach to compositionality includes learning various different pixel/segmentation masks for each concept [6, 7]. However such a factorization may have difficulty capturing the global structure of an image, and in many cases different concepts cannot be explicitly factored using attention masks. 
In contrast, our approach towards compositionality focuses on composing separate learned probability distributions of concepts. Such an approach allows viewing factors of variation as constraints [19]. In prior work, [10] show that products of EBMs can be used to decompose complex generative modeling problems into simpler ones. [29] further apply products of distributions over the latent space of a VAE to define compositions, and [9] show additional compositions in the VAE latent space. Both of them rely on joint training to learn compositions of a fixed number of concepts. In contrast, in this work, we show how we can realize concept compositions using completely independently trained probability distributions. Furthermore, we show how three compositional logical operators, conjunction, disjunction, and negation, can be realized and nested together through manipulation of the independent probability distributions of each concept.

Our compositional approach is inspired by the goal of continual lifelong learning - see [20] for a thorough review. New concepts can be composed with past concepts by combining new independent probability distributions. Many methods in continual learning focus on how to overcome catastrophic forgetting [14, 18], but do not support dynamically growing capacity. Progressive growing of models [24] has been considered, but it is implemented at the level of the model architecture, whereas our method composes independent models together.

3 Method

In this section, we first give an overview of the Energy-Based Model formulation we use and introduce three logical operators over these models. We then discuss the unique properties such a form of compositionality enables.

3.1 Energy Based Models

EBMs represent data by learning an unnormalized probability distribution across the data. For each data point x, an energy function E_\theta(x), parameterized by a neural network, outputs a scalar real energy such that the model distribution is

p_\theta(x) \propto e^{-E_\theta(x)}. \quad (1)

To train an EBM on a data distribution p_D, we use contrastive divergence [10]. In particular, we use the methodology defined in [3], where a Monte Carlo estimate (Equation 2) of the maximum likelihood objective \mathcal{L} is minimized with the following gradient:

\nabla_\theta \mathcal{L} = \mathbb{E}_{x^+ \sim p_D} \nabla_\theta E_\theta(x^+) - \mathbb{E}_{x^- \sim p_\theta} \nabla_\theta E_\theta(x^-). \quad (2)

To sample x^- from p_\theta for both training and generation, we use MCMC based on Langevin dynamics [30]. Samples are initialized from uniform random noise and are iteratively refined using

\tilde{x}^k = \tilde{x}^{k-1} - \frac{\lambda}{2} \nabla_x E_\theta(\tilde{x}^{k-1}) + \omega^k, \quad \omega^k \sim \mathcal{N}(0, \lambda), \quad (3)

where k is the k-th iteration step and \lambda is the step size. We refer to each iteration of Langevin dynamics as a negative sampling step. We note that this form of sampling allows us to use the gradient of the combined distribution to generate samples from distributions composed of p_\theta and other distributions. We use this ability to generate from multiple different compositions of distributions.

3.2 Composition of Energy-Based Models

We next present different ways in which EBMs can be composed. We consider a set of independently trained EBMs, E(x|c_1), E(x|c_2), \ldots, E(x|c_n), which are learned conditional distributions on underlying concept codes c_i. The latent codes we consider include position, size, color, gender, hair style, and age, which we also refer to as concepts. Figure 2 shows three concepts and their combinations on the CelebA face dataset and attributes.
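To make the training procedure in Equations 2 and 3 concrete, here is a minimal, self-contained PyTorch-style sketch of one contrastive-divergence update with Langevin negative sampling. The toy network, flattened image representation, step size, and number of Langevin steps are illustrative assumptions rather than the paper's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class TinyEBM(nn.Module):
    """Illustrative stand-in for the conditional energy network E_theta(x | c)."""
    def __init__(self, image_dim, concept_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + concept_dim, 256), nn.SiLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=-1)).squeeze(-1)   # (B,) energies

def langevin_negatives(ebm, c, image_dim, step=10.0, n_steps=60):
    """Eq. 3: start from uniform noise and refine with noisy gradient steps."""
    x = torch.rand(c.shape[0], image_dim, requires_grad=True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(ebm(x, c).sum(), x)[0]
        x = (x - 0.5 * step * grad
             + (step ** 0.5) * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

def cd_update(ebm, optimizer, x_pos, c, image_dim):
    """Eq. 2: push down energies of data, push up energies of negative samples."""
    x_neg = langevin_negatives(ebm, c, image_dim)
    loss = ebm(x_pos, c).mean() - ebm(x_neg, c).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```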
Concept Conjunction In concept conjunction, given separate independent concepts (such as a particular gender, hair style, or facial expression), we wish to construct an output with the specified gender, hair style, and facial expression – the combination of each concept. Since the likelihood of an output given a set of specific concepts is equal to the product of the likelihood of each individual concept, we have Equation 4, which is also known as the product of experts [11]: p(x|c1 and c2, . . . , and ci) = ∏ i p(x|ci) ∝ e− ∑ i E(x|ci). (4) We can thus apply Equation 3 to the distribution that is the sum of the energies of each concept. We sample from this distribution using Equation 5 to sample from the joint concept space with ωk ∼ N (0, λ). x̃k = x̃k−1 − λ 2 ∇x ∑ i Eθ(x̃ k−1|ci) + ωk. (5) Concept Disjunction In concept disjunction, given separate concepts such as the colors red and blue, we wish to construct an output that is either red or blue. This requires a distribution that has probability mass when any chosen concept is true. A natural choice of such a distribution is the sum of the likelihood of each concept: p(x|c1 or c2, . . . or ci) ∝ ∑ i p(x|ci)/Z(ci). (6) where Z(ci) denotes the partition function for each concept. A tractable simplification becomes available if we assume all partition functions Z(ci) to be equal∑ i p(x|ci) ∝ ∑ i e−E(x|ci) = elogsumexp(−E(x|c1),−E(x|c2),...,−E(x|ci)), (7) where logsumexp(f1, . . . , fN ) = log ∑ i exp(fi). We can thus apply Equation 3 to the distribution that is a negative smooth minimum of the energies of each concept to obtain Equation 8 to sample from the disjunction concept space: x̃k = x̃k−1 − λ 2 ∇xlogsumexp(−E(x|c1),−E(x|c2), . . . ,−E(x|ci)) + ωk, (8) where ωk ∼ N (0, λ). While the assumption that leads to Equation 7 is not guaranteed to hold in general, in our experiments we empirically found the partition function Z(ci) estimates to be similar across partition functions (see Appendix) and also analyze cases in which partitions functions are different in the Appendix. Furthermore, the resulting generation results do exhibit equal distribution across disjunction constituents in practice as seen in Table 1. Concept Negation In concept negation, we wish to generate an output that does not contain the concept. Given a color red, we want an output that is of a different color, such as blue. Thus, we want to construct a distribution that places high likelihood to data that is outside a given concept. One choice is a distribution inversely proportional to the concept. Importantly, negation must be defined with respect to another concept to be useful. The opposite of alive may be dead, but not inanimate. Negation without a data distribution is not integrable and leads to a generation of chaotic textures which, while satisfying absence of a concept, is not desirable. Thus in our experiments with negation we combine it with another concept to ground the negation and obtain an integrable distribution: p(x|not(c1), c2) ∝ p(x|c2) p(x|c1)α ∝ eαE(x|c1)−E(x|c2). (9) We found the smoothing parameter α to be a useful regularizer (when α = 0 we arrive at uniform distribution) and we use α = 0.01 in our experiments. The above equation allows us to apply Langevin dynamics to obtain Equation 10 to sample concept negations. x̃k = x̃k−1 − λ 2 ∇x(αE(x|c1)− E(x|c2)) + ωk, (10) where ωk ∼ N (0, λ). Recursive Concept Combinations We have defined the three classical symbolic operators for concept combinations. 
These symbolic operators can further be recursively chained on top of each to specify more complex logical operators at test time. To our knowledge, our approach is the only approach enabling such compositionality across independently trained models. 4 Experiments We perform empirical studies to answer the following questions: (1) Can EBMs exhibit concept compositionality (such as concept negation, conjunction, and disjunction) in generating images? (2) Can we take advantage of concept combinations to learn new concepts in a continual manner? (3) Does explicit factor decomposition enable generalization to novel combinations of factors? (4) Can we perform concept inference across multiple inputs? In the appendix, we further show that approach enables better generalization to novel combinations of factors by learning explicit factor decompositions. 4.1 Setup We perform experiments on 64x64 object scenes rendered in MuJoCo [27] (MuJoCo Scenes) and the 128x128 CelebA dataset. For MuJoCo Scene images, we generate a central object of shape either sphere, cylinder, or box of varying size and color at different positions, with some number of (specified) additional background objects. Images are generated with varying lighting and objects. We use the ImageNet32x32 architecture and ImageNet128x128 architecture from [3] with the Swish activation [22] on MuJoCo and CelebA datasets. Models are trained on MuJoCo datasets for up to 1 day on 1 GPU and for 1 day on 8 GPUs for CelebA. More training details and model architecture can be found in the appendix. 4.2 Compositional Generation Quantitative evaluation. We first evaluate compositionality operations of EBMs in Section 3.2. To quantitatively evaluate generation, we use the MuJoCo Scenes dataset. We train a supervised classifier to predict the object position and color on the MuJoCo Scenes dataset. Our classifier obtains 99.3% accuracy for position and 99.9% for color on the test set. We also train seperate conditional EBMs on the concepts of position and color. For a given positional generation then, if the predicted position (obtained from a supervised classifier on generated images) and original conditioned generation position is smaller than 0.4, then a generation is consider correct. A color generation is correct if the predicted color is the same as the conditioned generation color. In Table 1, we quantitatively evaluate the quality of generated images given combinations of conjunction, disjunction, and negation on the color and position concepts. When using either Color or Position EBMs, the respective accuracy is high. Conjunction(Position, Color) has high position and color accuracies which demonstrates that an EBM can combine different concepts. Under Conjunction(Position, Negation(Color)), the color accuracy drops to below that of Color EBM. This means negating a concept reduces the likelihood of the concept. The same conclusion for Conjunction(Negation(Position), Color). We compare with the approach in [29], using the author’s online github repo, and find it produces blurrier and worse results. To evaluate disjunction, we set Position 1 to be a random point in the bottom left corner of a grid and Position 2 to be a random point in the top right corner of a grid. The average results over 1000 generated images are reported in Table 1. Position 1 EBM or Position 2 EBM can obtain high accuracy in predicting their own positions. 
Disjunction(Position 1, Position 2) EBM generate images that are roughly evenly distributed between Position 1 and Position 2, indicating the disjunction can combine concepts additively. This trend further holds with conjunction, with Disjunction(Conjunction(Position 1, Color 1),Conjunction(Position 2, Color 2)) also being evenly distributed. We further investigate implication using a composition of conjunctions and negations in EBMs. We consider the term (Position 1 AND (NOT Color 1)) AND ... AND (Position 1 AND (NOT Color 4)), which implicates Color 5. We find that are generations obtain 0.982 accuracy for Color 5. Qualitative evaluation. We further provide qualitative visualizations of conjunction, disjunction, and negation operations on both MuJoCo Scenes and CelebA datasets. Concept Conjunction: In Figure 3, we show the conjunction of EBMs is able to combine multiple independent concepts, such as age, gender, smile, and wavy hair, and get more precise generations with each energy models. Our composed generations obtain a FID of 45.3, compared to an FID of 64.5 of an SNGAN model trained on data conditioned on all four attributes. Our generations are also significantly more diverse than that of GAN model (average pixel MSE of 64.5 compared to 55.4 of the GAN model). Similarily, EBMs can combine independent concepts of shape, position, size, and color to get more precise generations in Figure 4. We also show results of conjunction with other logical operators in Figure 5. Concept Negation: In Figure 5, row 4 shows images that are opposite to the trained concept using negation operation. Since concept negation operation should accompany with another concept as described in Section 3.2, we use “smiling“ as the second concept. The images in row 4 shows the negation of male AND smiling is smiling female. This can further be combined with disjunction in the row 5 to make either “non-smiling male” or “smiling female”. Concept Disjunction: The last row of Figure 5 shows EBMs can combine concepts additively (generate images that are concept A or concept B). By constructing sampling using logsumexp, EBMs can sample an image that is “not smiling male” or “smiling female”, where both “not smiling male” and “smiling female” are specified through the conjunction of energy models of the two concepts. Multiple object combination: We show that our composition operations not only combine object concepts or attributes, but also on the object level. To verify this, we constructed a dataset with one green cube and a large amount background clutter objects (which are not green) in the scene. We train a conditional EBM (conditioned on position) on the dataset. Figure 7 “cube 1” and “cube 2” are the generated images conditioned on different positions. We perform the conjunction operation on the EBMs of “cube 1” and “cube 2” and use the combined energy model to generate images (row 3). We find that adding two conditional EBMs allows us to selectively generate two different cubes. Furthermore, such generation satisfies the constraints of the dataset. For example, when two conditional cubes are too close, the conditionals EBMs are able to default and just generate one cube like the last image in row 3. 4.3 Continual Learning We evaluate to what extent compositionality in EBMs enables continual learning of new concepts and their combination with previously learned concepts. If we create an EBM for a novel concept, can it be combined with previous EBMs that have never observed this concept in their training data? 
And can we continually repeat this process? To evaluate this, we use the following methodology on MuJoCo dataset: 1) We first train a position EBM on a dataset of varying positions, but a fixed color and a fixed shape. In experiment, we use shape “cube” and color “purple”. The position EBM allows us generate a purple cube at various positions. (Figure 8 row 1). 2) Next we train a shape EBM by training the model in combination with the position EBM to generate images of different shapes at different positions, but without training position EBM. As shown in Figure 8 row 2, after combining the position and shape EBMs, the “sphere” is placed in the same position as “cubes” in row 1 even these “sphere” positions never be seen during training. 3) Finally, we train a color EBM in combination with both position and shape EBMs to generate images of different shapes at different positions and colors. Again we fix both position and shape EBMs, and only train the color model. In Figure 8 row 3, the objects with different color have the same position as row 1 and same shape as row 2 which shows the EBM can continually learn different concepts and extrapolate new concepts in combination with previously learned concepts to generate new images. In Table 2, we quantitatively evaluate the continuous learning ability of our EBM and GAN [21]. Similar to the quantitative evaluation in Section 3.2, we a train three classifiers for position, shape, color respectively. For fair comparison, the GAN model is also trained sequentially on the position, shape, and color datasets (with the corresponding position, shape, color and other random attributes set to match the training in EBMs). The position accuracy of EBM does not drop significantly when continually learning new concepts (shape and color) which shows our EBM is able to extrapolate earlier learned concepts by combining them with newly learned concepts. In contrast, while the GAN model is able to learn the attributes of position, shape and color models given the corresponding dataset. We find the accuracies of position and shape drops significantly after learning color. The bad performance shows that GANs cannot com- bine the newly learned attributes with the previous attributes. 4.4 Cross Product Extrapolation Humans are endowed with the ability to extrapolate novel concept combinations when only a limited number of combinations were originally observed. For example, despite never having seen a “purple cube”, a human can compose what it looks like based on the previously observation of “red cube” and “purple sphere”. To evaluate the extrapolation ability of EBMs, we construct a dataset of MuJoCo scene images with spheres of all possible sizes appearing only in the top right corner of the scene and spheres of only large size appearing in the remaining positions. The left figure in Figure 9 shows a qualitative illustration. For the spheres only in the top right corner of the scene, we design different settings. For example, 1% meaning only 1% of positions (starting from the top right corner) that contain all sphere sizes are used for training. At test time, we evaluate the generation of spheres of all sizes at positions that are not seen during the training time. Similar to 1%, 10% and 100% mean the spheres of all sizes appears only in the top right 10% and 100% of the scene. The task is to test the quality of generated objects with unseen size and position combinations. 
This requires the model to extrapolate the learned position and size concepts in novel combinations. We train two EBMs on this dataset. One is conditioned on the position latent and trained only on large sizes and another is conditioned on the size latent and trained at the aforementioned percentage of positions. Conjunction of the two EBMs is fine-tuned for generation through gradient descent. We compare this composed model with a baseline holistic model conditioned on both position and size jointly. The baseline is trained on the same position and size combinations and optimized directly from the Mean Squared Error between the generated image and real image. Both models use the same architecture and number of parameters are described in the appendix. We qualitatively compare the EBM and baseline in Figure 9. When sphere of all sizes are only distributed in the 1% of possible locations, both the EBM and baseline have bad performance. This is because the very few combinations of sizes and positions make both models fail in extrapolation. For the 10% setting, our EBM is better than baseline. EBM is able to combine concepts to form images from few combination examples by learning an independent model for each concept factor. Both EBM and baseline models generate accurate images when given examples of all combinations (100% setting), but our EBM is closer to ground truth than the baseline. In Figure 10, we quantitatively evaluate the extrapolation ability of EBM and the baseline. We train a regression model that outputs both the position and size of a generated sphere image. We compute the error between the predicted size and ground truth size and report it in the first image of Figure 10. Similarly, we report the position error in the second image. EBMs are able to extrapolate both position and size better than the baseline model with smaller errors. The size errors goes down with more examples of all sphere sizes. For position error, both EBM and the baseline model have smaller errors at 1% data than 5% or 10% data. This result is due to the make-up of the data – with 1% data, only 1% of the rightmost sphere positions have different size annotations, so the models generate large spheres at the conditioned position which are closer to the ground truth position since most positions (99%) are large spheres. 4.5 Concept Inference Our formulation also allows us to infer concept parameters given a compositional relationship in inputs. For example, given a generated set of of images, each generated by the same underlying concept (conjunction), the likelihood of a concept is given by: p(x1, x2, . . . , xn|c) ∝ e− ∑ i E(xi|c). (11) We can then obtain maximum a posteriori (MAP) estimates of concept parameters by minimizing the logarithm of the above expression. We evaluate inference on an EBM trained on object position, which takes an image and an object position (x,y in 2D) as input and outputs an energy. We analyze the accuracy of such inference in the appendix and find EBMs exhibit both high accuracy and robustness, performing before than a ResNet. Concept Inference from Multiple Observations The composition rules in Section 3.2 apply directly to inference. When given several different views of an object at a particular position with different size, shape, camera view points, and lighting conditions, we can formulate concept inference as inference over a conjunction of multiple positional EBMs. 
Each positional EBM takes a different view as input we minimize energy value over positions across the sum of the energies. We use the same metric used above, i.e. Mean Absolute Error, in position inference and find the error in regressing positions goes down when successively giving more images in Figure 11. Concept Inference of Unseen Scene with Multiple Objects We also investigate the inherent compositionality that emerges from inference on a single EBM generalizing to multiple objects. Given EBMs trained on images of a single object, we test on images with multiple objects (not seen in training). In Figure 12, we plot the input RGB image and the generated energy maps over all positions in the scene. The “Two Cubes” scenes are never seen during training, but the output energy map is still make scene with the bimodality energy distribution. The generated energy map of “Two Cubes” is also close to the summation of energy maps of “Cube 1” and “Cube 2” which shows the EBM is able to infer concepts, such as position, on unseen scene with multiple objects. 5 Conclusion In this paper, we demonstrate the potential of EBMs for both compositional generation and inference. We show that EBMs support composition on both the factor and object level, unifying different perspectives of compositionality and can recursively combine with each other. We further showcase how this composition can be applied to both continually learn and compositionally infer underlying concepts. We hope our results inspire future work in this direction. 6 Acknowledgement We should like to thank Jiayuan Mao for reading and providing feedback on the paper and both Josh Tenenbaum and Jiayuan Mao for helpful feedback on the paper. 7 Broader Impacts We believe that compositionality is a crucial component of next generation AI systems. Compositionality enables system to synthesize and combine knowledge from different domains to tackle the problem in hand. Our proposed method is step towards more composable deep learning models. A truly compositional system has many positive societal benefits, potentially enabling a intelligent and flexible robots that can selectively recruit different skills learned for the task on hand, or super-human synthesis of scientific knowledge that can further progress of scientific discovery. At the same time, there remain unanswered ethical problems about any such next generation AI system.
1. What is the main contribution of the paper regarding compositionality in energy-based models? 2. What are the strengths of the proposed approach, particularly in terms of its ability to generate natural images and conduct continual learning? 3. Do you have any concerns about the novelty of the proposed method compared to prior works? 4. How does the reviewer assess the clarity and sufficiency of the explanation of energy-based models in the paper? 5. What are the weaknesses of the paper regarding its qualitative comparisons with other generative models and the quality of the generated images?
Summary and Contributions Strengths Weaknesses
Summary and Contributions In this paper, the authors firstly address human capabilities that humans can combine a finite number of primitive components and compose increasingly complex concepts using those components by recombining and reusing in novel ways. They are interested in enabling such capabilities in machine learning systems, particularly in the context of generative modeling. Thus, in this paper, the authors propose to implement the compositionality via energy based models (EBMs). Instead of an explicit vector of factors that is input to a generator function or object slots that are blended to form an image, the proposed method defines factors of variation and object slots via energy functions. The contribution of this paper is threefold: (1) the authors show that composition of energy-based model can be used to generate plausible natural images, while previous works have shown in abstract setting, (2) they propose a principled approach to combine independent trained energy models based on logical operators (i.e., conjunction, disjunction, and negation) which can be chained recursively, allowing controllable generation, and (3) this paper allows to extrapolate to new concept combinations, continually incorporate new visual concepts for generation, and infer concept properties compositionally. Strengths 1) To the best of my knowledge, this is the first work in demonstrating the potential of EBMs for both compositional generation and inference. While composition of energy-based models has been proposed in abstract settings before, this paper newly shows that it can be used to generate natural images. 2) Thorough experiments regarding contributions are well conducted. This paper clearly provides both quantitative results and qualitative visualizations in compositional generation, continual learning, and concept inference. Weaknesses 1) While the proposed idea is novel, the overall proposed method seems to be the adaptation from the previous works [1,2], thus seemingly incremental. In addition,thorough explanation of energy based models (EBMs) is not sufficient. Preliminary explanation of EBMs is necessary. 2) There is no qualitative comparison between this work and previous works, such as generative adversarial network. It is better to shown qualitative comparison between EBMs and other generative models related to your work. Moreover, in looking at the qualitative result on the proposed method, the generated images seem to be not promising, showing blurry and not detailed facial images. [1] G. E. Hinton. Products of experts. International Conference on Artificial Neural Networks, 1999. [2] Y. Du and I. Mordatch. Implicit generation and generalization in energy-based models. arXiv preprint 318 arXiv:1903.08689, 2019. [3] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of 374 the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
NIPS
Title DaDA: Distortion-aware Domain Adaptation for Unsupervised Semantic Segmentation Abstract Distributional shifts in photometry and texture have been extensively studied for unsupervised domain adaptation, but their counterparts in optical distortion have been largely neglected. In this work, we tackle the task of unsupervised domain adaptation for semantic image segmentation where unknown optical distortion exists between source and target images. To this end, we propose a distortion-aware domain adaptation (DaDA) framework that boosts the unsupervised segmentation performance. We first present a relative distortion learning (RDL) approach that is capable of modeling domain shifts in fine-grained geometric deformation based on diffeomorphic transformation. Then, we demonstrate that applying additional global affine transformations to the diffeomorphically transformed source images can further improve the segmentation adaptation. Besides, we find that our distortion-aware adaptation method helps to enhance self-supervised learning by providing higher-quality initial models and pseudo labels. To evaluate, we propose new distortion adaptation benchmarks, where rectilinear source images and fisheye target images are used for unsupervised domain adaptation. Extensive experimental results highlight the effectiveness of our approach over state-of-the-art methods under unknown relative distortion across domains. Datasets and more information are available at https://sait-fdd.github.io/. 1 Introduction Recent years have witnessed dramatic improvements in semantic segmentation performance based on a massive amount of pixel-level annotations. However, applying a segmentation model learned from the training dataset (source domain) directly to unseen test scenarios (target domain) often leads to significant performance degradation, which is caused by the divergence of data distribution between source and target domains. In addition, manually annotating target domain images with pixel-wise labels is extremely expensive and taxing. To address such issues, many studies have proposed unsupervised domain adaptation methods to transfer the knowledge learned from the labeled source domain to the unlabeled target domain (e.g., [33, 34, 27, 26, 40]). Existing works on unsupervised segmentation adaptation methods have predominantly investigated domain shift problems mainly due to photometric and textural discrepancies between rectilinear image domains (e.g., adapting synthetically generated images [29, 30] to real-world imagery [8]; or adapting images from/to different cities [7]). In contrast, domain shifts in geometric and optical distortion have not been well explored, despite commonly appearing in many practical applications. For example, wide-angle cameras (e.g., fisheye cameras) have been extensively used for complex 36th Conference on Neural Information Processing Systems (NeurIPS 2022). vision systems like autonomous vehicles [38] and surveillance [4] to obtain more information about the surrounding environment. Fisheye images (e.g., target image IT in Fig.1) have quite different geometric deformations (e.g., radial distortion) compared to regular rectilinear images (e.g., source image IS in Fig.1). Such distortion variants pose even more challenging domain adaptation tasks. 
For example, a state-of-the-art semantic segmentation model [6] only trained on rectilinear images fails to correctly predict pixels of vehicles and roads under optical distortion and its overall prediction accuracy drops from 67.02% (trained on fisheye images) to 32.39% (trained only on rectilinear images, see Tab.1). To alleviate this, we may rectify distortion at test time. However, such a workaround inevitably leads to reduced field-of-view (e.g., over 30% loss for fisheye images [38]), resampling distortion artifacts at the image periphery, calibration errors in practice, and additional computing resources for the rectification step at test time [19], which is against the original purpose of using wide-angle cameras. This urges us to use native fisheye images without needing rectification. Another motivation for our work is the scarcity and difficulty of constructing annotations for distorted fisheye images (e.g., Woodscape [38]), while we already have larger amounts of annotations for rectilinear images (e.g., Cityscapes [8], GTAV [29]). Remarkably, we have relatively fewer public datasets providing large-scale and finely annotated fisheye images. Woodscape [38] is currently the only real-world public dataset with segmentation annotations. Such a lack of annotations for distorted images necessitates introducing optical distortion into unsupervised domain adaptation. With these insights, we first formulate an important but challenging unsupervised domain adaptation task for semantic segmentation where unknown optical distortion exists between the source and target domains. To this end, we propose a novel distortion-aware domain adaptation (DaDA) framework, which provides a new perspective to minimizing domain gaps in geometric and optical distortion. We first present relative distortion learning (RDL), which is capable of modeling relative deformation between the source and target domain. In particular, we build a deformation field generator to transform the source image to a new image sharing similar distortion features of the target image. To enable such challenging unsupervised and unpaired distortion learning, we exploit the properties of diffeomorphism, that is differentiable and has a differentiable inverse. We directly integrate such properties into our distortion-aware losses to enforce the semantic quality of relative deformation fields at the image- and the prediction level. We also observe that applying additional global affine transformations (e.g., rotation, shearing) to the diffeomorphically transformed source images can further improve the segmentation adaptation performance in most cases. Owing to fine-grained diffeomorphic and global affine transformations, our framework provides distortionaware segmentation adaptation models and reliable pseudo labels, and thus ultimately improves the performance of self-supervised learning methods. To validate, we propose new domain adaptation benchmarks, where a segmentation model trained on rectilinear images (Cityscapes [8] or GTAV [29]) is transferred to fisheye images (Woodscape [38] or our in-house fisheye driving scene dataset (FDD)). With mean Intersection-over-Union (mIoU) as the evaluation metric, our framework achieves significantly improved prediction performance compared to existing segmentation adaptation methods. 
In essence, our key contributions can be summarized as: (1) a distortion-aware domain adaptation framework for boosting unsupervised semantic segmentation performance in the presence of unknown relative distortion; (2) distortion-aware losses that effectively enforce the unpaired and unsupervised relative distortion learning to transfer distortion style from the target to the source domain; (3) new unsupervised domain adaptation benchmarks posing challenging tasks, where source and target images have additional domain gaps in optical distortion.

2 Related Work

2.1 Domain Adaptive Semantic Segmentation

Recent advances in image-to-image translation and style transfer [42] have promoted the translation of source-domain images to mimic the texture and appearance of target-domain images. CyCADA [24] applied CycleGAN [42] to preserve semantic consistency before and after image-level adaptation. BDL [22] proposed a bidirectional learning method where image-to-image translation and segmentation adaptation are learned alternately. Similar methods have been proposed to adapt source images [35, 17] or to reconstruct the source image from the output-level representation [36]. Departing from these adversarial approaches, FDA [37] proposed a method to align the low-level frequencies of source images and their counterparts in target images. Ma et al. [25] applied a photometric alignment method to minimize domain shifts in image- and category-level feature distributions. More recently, self-supervised learning (SSL) approaches have emerged in domain adaptation tasks to further improve segmentation adaptation performance. SSL approaches try to rectify noisy pseudo labels with respect to uncertainties or confidence scores [22, 17, 28, 27, 26, 13, 41, 40]. These methods inherently require good initial models, which are commonly trained on the source domain or adapted to the target domain beforehand. To obviate the need for multi-stage and adversarial training, Araslanov and Roth [2] introduced a photometric augmentation method for SSL. However, none of the prior works directly considers distortion-oriented domain shifts in unsupervised segmentation adaptation tasks. Moreover, it remains unknown how well existing adaptation methods address unknown relative distortion between domains. As a pioneering work, we first aim to evaluate prior art on the adaptation task in the presence of unknown optical distortion across domains. For this, we noted that many current state-of-the-art methods rely on SSL (e.g., [17, 27, 26, 41, 40]) and have adopted AdaptSeg [33] or AdvEnt [34] as their base segmentation adaptation method. Thus, we take these adversarial adaptation approaches as our baseline methods and further evaluate SSL approaches [26, 27, 40] in the presence of optical distortion shifts across domains. To evaluate, we newly formulate segmentation adaptation benchmarks, i.e., transferring from real or synthetic rectilinear images to real fisheye images.

2.2 Diffeomorphic Deformation Networks

Spatial transformation networks (STN [15]) have been used in various contexts since they are capable of learning differentiable deformation fields. STNs primarily allow simple linear transformations (e.g., affine, translations, scaling, and rotations) and can be extended to more flexible mappings such as thin plate spline transformations [23]. However, optical distortion involves complex nonlinear transformations, which STNs are limited in their ability to support. Instead, Detlefsen et al.
[12] proposed to exploit diffeomorphisms in spatial transformation, which can model complex nonlinear deformation and address optimization divergence issues in learning spatial transformations. A diffeomorphism is a globally one-to-one, continuous, and smooth mapping which has a differentiable inverse. Diffeomorphic transformations have been widely used for image registration [16, 10, 32] and shape analysis [39]. However, traditional diffeomorphic deformation methods demand high computational cost and are difficult to implement, which makes them hard to incorporate into deep neural networks. To address this issue, Dalca et al. [11] proposed a probabilistic generative model that produces diffeomorphic deformation fields for a medical image registration task. However, their work depends on a paired set of 3D brain images depicting the same contexts. Inspired by [11], we propose a new relative distortion generator, which takes a pair of unpaired source- and target-domain images and transforms the source image into a new image sharing a similar distortion style with the target image.
2.3 Radial Distortion Rectification
Traditional distortion rectification approaches require images captured with specific camera calibration parameters [20] and thus are not flexible. Other work exploits the principle that a straight line should be projected to a straight line in a calibrated image [1], but such methods mainly depend on the accuracy of line detection. More recently, deep neural networks have been used for robust and efficient rectification [5, 21]. However, these works primarily aim to rectify distorted images and do not support distortion style transfer for unpaired and unsupervised domain adaptation tasks. In contrast, we propose to transfer distortion style between unpaired source and target images.
3 Method
Let S be the source-domain data with segmentation labels YS and T be the target-domain data with no labels. We aim to train a scene segmentation network M performing satisfactory pixel-wise prediction on T by minimizing domain gaps between S and T. Here we formulate more challenging adaptation tasks where distributional shifts include not only visual domain gaps (e.g., texture, lighting, contrast) but also geometric and optical distortions (e.g., radial distortion). Such domain gaps make it difficult for M to learn transferable knowledge in both the visual and geometric domains.
3.1 Relative Distortion Learning
To minimize the discrepancy between the source and target domains, prior works proposed pixel-level image translation methods based on cycle-consistency [14, 17, 35, 42] or photometric alignment [2, 25]. However, these methods do not consider domain gaps in geometric distortion and thus fail to transfer distortion style from IT to IS (see Fig.4). To address this, we propose a relative distortion learning (RDL) method, which predicts a relative deformation field to transfer the distortion style between domains. Given a source-domain image IS and a target-domain image IT, we aim to transform IS into a new image IS→T based on a relative deformation field ΦS→T, where IS→T shares a similar distortion style with IT. We achieve this transformation through a grid-based sampling operation IS ◦ ΦS→T = IS→T (a minimal sketch of this sampling step is given below). Ultimately, the transformed source image IS→T aims to mitigate the domain shift in optical distortion at a fine-grained level.
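For concreteness, the warping operation I ◦ Φ used throughout this section can be realized with a differentiable grid-sampling step. The following PyTorch sketch is illustrative only: the convention that the deformation field is a displacement map in normalized [-1, 1] coordinates, and all names, are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def identity_grid(h, w, device):
    # Normalized (x, y) sampling coordinates in [-1, 1], shape (1, h, w, 2).
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=device),
        torch.linspace(-1.0, 1.0, w, device=device),
        indexing="ij",
    )
    return torch.stack((xs, ys), dim=-1).unsqueeze(0)

def warp(image, field):
    # image: (n, c, h, w); field: (n, 2, h, w) displacement in normalized units.
    # Returns image ◦ field, i.e., the image resampled at the displaced grid.
    n, _, h, w = image.shape
    grid = identity_grid(h, w, image.device) + field.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)
```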
However, it is not trivial to predict relative deformation fields since the input images are not paired (i.e., they have dissimilar contents), and the relative distortion involves nonlinear geometric deformation that is hard to parameterize without knowing the optical properties (e.g., lens distortion model, focal length). To learn such a challenging geometric relationship, we exploit diffeomorphic transformation in the unpaired and unsupervised distortion translation task.
Diffeomorphic Transformation. Let u ∈ R^(2×w×h) be a flow field which is constant over time. Then, we describe the evolution of the deformation φ^(t) by the differential equation
∂φ^(t)/∂t = u(φ^(t)),  (1)
where t is time and φ^(0) is the identity transformation. By integrating u(φ^(t)) over t ∈ [0, 1], we get a diffeomorphic deformation field Φ := φ^(1), which maps the coordinates from one image to another. For fast and differentiable integration, we use an exponentiated integration technique known as scaling-and-squaring [3]. This method starts from φ^(1/2^T) = φ^(0) + u/2^T and iteratively computes the deformation fields of the next time steps via φ^(1/2^(t−1)) = φ^(1/2^t) ◦ φ^(1/2^t) over T steps, where ◦ indicates a grid-based resampling operation. Then, we obtain an approximate deformation field Φ. The inverse deformation field Φ^(−1) can be obtained by integrating the negated field, starting from φ^(1/2^T) = φ^(0) − u/2^T, via the same integration method.
To generate the field ΦS→T, which defines the relative distortion between IS and IT, we propose a deformation field generator G, which takes both IS and IT, together with their first-order gradients (i.e., Sobel responses) ∇IS and ∇IT, as input. Here we use the image gradients to provide rich geometric features, inspired by the distortion rectification method [1]. Then, G generates the deformation field ΦS→T and its inverse field ΦT→S. For example, in Fig.2, G generates a flow field u to construct ΦS→T (red-colored grid in (e)) from a pair of IT (a) and IS (b). Finally, a new transformed image IS→T (e) is generated via IS ◦ ΦS→T. Similarly, negative integration of the flow field u yields an inverse deformation field ΦT→S, which generates a reconstructed image (d) via I′S = IS→T ◦ ΦT→S.
To supervise the generator G for learning the relative distortion in unpaired and unsupervised settings, we exploit the properties of diffeomorphic transformations: they are topology preserving, invertible, and differentiable. Such diffeomorphic constraints are directly built into relative distortion learning to enforce desirable deformation field outputs. In particular, we propose two distortion-aware losses: a distortion reconstruction loss and a semantic distortion-consistent loss. These losses evaluate the cycle-consistency of the relative deformation field at the image and prediction levels, while enforcing the semantic quality of the relative deformation field. Distortion reconstruction loss (Lrecon) ensures that G generates convincing flow fields to reconstruct the source image IS from the transformed image IS→T via the inverse deformation field ΦT→S, and vice versa:
Lrecon = ‖IS − I′S‖1 + ‖IT − I′T‖1, where I′S = IS→T ◦ ΦT→S, I′T = IT→S ◦ ΦS→T.  (2)
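To make the integration step concrete, the sketch below shows how a scaling-and-squaring loop could be implemented on top of the `warp` helper above, and how the reconstruction loss of Eq. (2) would then be computed. The step count T, the displacement-field representation, and all names are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def integrate(u, T=6):
    # Scaling-and-squaring: turn a stationary flow u (n, 2, h, w) into an
    # (approximate) diffeomorphic displacement field Φ. Using -u gives Φ^(-1).
    phi = u / (2 ** T)              # φ^(1/2^T) ≈ identity + u / 2^T (displacement form)
    for _ in range(T):
        phi = phi + warp(phi, phi)  # φ^(1/2^(t-1)) = φ^(1/2^t) ◦ φ^(1/2^t)
    return phi                      # Φ := φ^(1)

def reconstruction_loss(I_s, I_t, u_st):
    # Eq. (2): warp forward with Φ_{S→T}, back with Φ_{T→S}, and compare with L1.
    phi_st, phi_ts = integrate(u_st), integrate(-u_st)
    I_st, I_ts = warp(I_s, phi_st), warp(I_t, phi_ts)
    return F.l1_loss(warp(I_st, phi_ts), I_s) + F.l1_loss(warp(I_ts, phi_st), I_t)
```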
Semantic distortion-consistent loss (Lsem) enforces semantic consistency between the source image IS and its transformed pair IS→T at the pixel-wise prediction level. The same constraint can be applied between IT and IT→S. That is, an ideal deformation field generator G should be able to generate ΦS→T and ΦT→S that reconstruct structural information from the pixel-wise prediction outputs of the distorted input images. With these semantic constraints, the relative distortion learning can be further improved by the segmentation model M, and in turn the segmentation adaptation benefits from the improved quality of the relative deformation fields. The loss function can be written as:
Lsem = ‖M(IS) ◦ ΦS→T − M(IS→T)‖1 + ‖M(IT) ◦ ΦT→S − M(IT→S)‖1.  (3)
To minimize domain gaps in optical distortion between the transformed source images IS→T and the target-domain images IT, we introduce a distortion-aware discriminator DG, which aims to discriminate distortion style between IT and IS→T. Similar to G, we again use the first-order image gradients, ∇IT and ∇IS→T, as input to DG along with the images. The loss function is defined as:
LDG = E_{IS∼S, IT∼T}[1 − DG(IS ◦ ΦS→T, ∇(IS ◦ ΦS→T))] + E_{IT∼T}[DG(IT, ∇IT)].  (4)
We also calculate the adversarial loss using DG:
Ladv_G = E_{IS∼S, IT∼T}[DG(IS ◦ ΦS→T, ∇(IS ◦ ΦS→T))],  (5)
where the deformation generator G tries to produce the relative deformation field that fools the discriminator DG in distinguishing distortion style. Therefore, the total loss function for relative distortion learning is defined as:
Lrdl = β1 Lrecon + β2 Lsem + β3 Ladv_G,  (6)
where β1, β2, and β3 are constants controlling the effect of the corresponding losses.
3.2 Distortion-aware Adversarial Adaptation
Relative distortion learning aims to adapt to optical distortion shifts at both the pixel and feature level. Based on the adapted representations, we further apply adversarial segmentation adaptation. As our baseline adaptation methods, we take AdaptSeg [33] and AdvEnt [34], which have been extensively used in prior work [26–28, 40] as base segmentation adaptation methods. Typically, the segmentation loss uses the cross-entropy for the transformed source image IS→T and its corresponding label YS→T:
Lseg = − Σ_{h,w} Σ_{c∈C} Y_{S→T}^{(h,w,c)} log(M(IS→T)^{(h,w,c)}),  (7)
where YS→T is obtained by transforming the ground-truth annotations from the source domain via YS ◦ ΦS→T. An entropy minimization loss is further introduced by AdvEnt [34]:
Lent = −(1/log C) Σ_{h,w} Σ_{c∈C} M(IT)^{(h,w,c)} log M(IT)^{(h,w,c)},  (8)
which tries to directly minimize pixel-wise entropies to enhance prediction certainty in the target domain. As a common practice, a domain discriminator DM is used to minimize the difference between the transformed-source and target prediction probabilities:
LDM = E_{IS∼S, IT∼T}[1 − DM(M(IS ◦ ΦS→T))] + E_{IT∼T}[DM(M(IT))].  (9)
The adversarial loss function for segmentation adaptation can be written as:
Ladv_M = E_{IT∼T}[1 − DM(M(IT))],  (10)
where the segmentation model M is trained to fool the discriminator DM. Equipped with the relative distortion learning and the adversarial adaptation, we train the adaptive segmentation network with the following total loss:
Lall = Lrdl + Ladv_M + Lseg + γ Lent,  (11)
where γ is 0 for AdaptSeg [33] and 1 for AdvEnt [34].
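To summarize how the pieces fit together at training time, the sketch below assembles the relative-distortion losses of Eq. (6) with the adaptation losses of Eq. (11). It reuses the `warp`, `identity_grid`, and `integrate` helpers sketched above; gradient inputs to DG and the discriminator updates of Eqs. (4) and (9) are omitted for brevity, and the ignore index and other details are assumptions, so this is a hedged outline rather than the authors' training loop.

```python
import torch
import torch.nn.functional as F

def training_objective(I_s, Y_s, I_t, G, M, D_g, D_m,
                       betas=(100.0, 10.0, 10.0), gamma=1.0):
    # G predicts a flow u, which is integrated into Φ_{S→T}; -u gives Φ_{T→S}.
    u = G(I_s, I_t)
    phi_st, phi_ts = integrate(u), integrate(-u)
    I_st, I_ts = warp(I_s, phi_st), warp(I_t, phi_ts)

    # Eq. (2) and Eq. (3): image- and prediction-level cycle consistency.
    pred_s, pred_t, pred_st, pred_ts = M(I_s), M(I_t), M(I_st), M(I_ts)
    l_recon = F.l1_loss(warp(I_st, phi_ts), I_s) + F.l1_loss(warp(I_ts, phi_st), I_t)
    l_sem = F.l1_loss(warp(pred_s, phi_st), pred_st) + F.l1_loss(warp(pred_t, phi_ts), pred_ts)

    # Eq. (5) and Eq. (10): adversarial terms for G and M.
    l_adv_g = D_g(I_st).mean()
    l_adv_m = (1.0 - D_m(pred_t.softmax(dim=1))).mean()

    # Eq. (7): cross-entropy on I_{S→T} with labels warped by Φ_{S→T}
    # (nearest-neighbour sampling so that class indices stay valid).
    h, w = Y_s.shape[-2:]
    grid = identity_grid(h, w, Y_s.device) + phi_st.permute(0, 2, 3, 1)
    y_st = F.grid_sample(Y_s.unsqueeze(1).float(), grid, mode="nearest",
                         align_corners=True).squeeze(1).long()
    l_seg = F.cross_entropy(pred_st, y_st, ignore_index=255)

    # Eq. (8): normalized entropy of target predictions (AdvEnt-style), γ ∈ {0, 1};
    # averaged over pixels here for readability.
    p_t = pred_t.softmax(dim=1)
    l_ent = (-(p_t * (p_t + 1e-8).log()).sum(dim=1)).mean() / torch.log(torch.tensor(float(p_t.size(1))))

    b1, b2, b3 = betas
    return b1 * l_recon + b2 * l_sem + b3 * l_adv_g + l_adv_m + l_seg + gamma * l_ent
```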
4 Experimental Results
We present extensive experimental results to validate our distortion-aware domain adaptation (DaDA) framework for semantic segmentation in the presence of both visual and geometric domain shifts. For this, we formulate new domain adaptive segmentation benchmarks, i.e., transferring from real-world or synthetic rectilinear images to real fisheye images.
4.1 Datasets
In the experiments, we first evaluate the model trained on real-world rectilinear images from Cityscapes [8] and test the adapted model on real fisheye images from Woodscape [38] or our in-house fisheye driving dataset (FDD). Then, we introduce a more challenging adaptation task, where the model trained on the synthetic dataset GTAV [29] is transferred to real fisheye images (Woodscape or FDD) without annotations. The Cityscapes dataset contains 2,975 high-quality driving-scene training images with a resolution of 2048×1024. The GTAV dataset contains 24,966 synthesized images with a resolution of 1914×1052. The Woodscape dataset consists of 8,234 fisheye images with a resolution of 1280×966, captured by fisheye cameras (190° FOV) looking in four different directions from the vehicle. We use front and rear camera scenes containing 4,023 images in our experiments. The images are randomly split into a training set with 3,023 images and a validation set with 1,000 images. We report results on 17 classes, aligning mismatched classes between Cityscapes/GTAV and Woodscape. Our in-house fisheye driving dataset (FDD) includes 3,897 fully annotated images with a resolution of 1920×1080, captured by fisheye cameras (200° FOV) at the front and rear of the vehicle. We randomly selected 974 validation images; the remaining 2,923 images are used for training. For FDD, we use 12 classes, where classes incompatible with Cityscapes and GTAV are merged or excluded, similarly to Woodscape (see Appendix B.4 for detailed class information on Woodscape and FDD).
4.2 Experimental Details
Diffeomorphic and Affine Transformation. Fig.2 depicts an example of diffeomorphic and affine transformations applied to a source- and a target-domain image. First, both original target and source images are randomly cropped and resized to (a) and (b), followed by randomized horizontal flipping and photometric jittering. For the fine-grained diffeomorphic transformation, both the target (IT in (a)) and source image (IS in (b)) are used to generate a transformed image IS→T in (e) via the relative deformation field generator G. An additional global affine transformation is applied to IS→T to generate the image in (f), which includes both fine-grained diffeomorphic and global affine deformations. For the affine transformation, we adopted RandAugment (RA) [9], excluding the photometric jittering already applied beforehand. RA is a readily applicable augmentation method that includes a series of affine transformations (i.e., rotation, shear-x, shear-y, trans-x, and trans-y, each applied with probability 0.5). For example, a rotation transformation is applied to the source image IS in (c). In our experiments, we tested transformed images generated by either affine-only (c), diffeomorphic-only (e), or both transformations (f).
Implementation Details. We trained all networks with the Adam [18] optimizer with a batch size of 4. The learning rate is 0.2 × 10^-5 for M and DM and 0.1 × 10^-6 for G and DG. We set the weight factors of the losses in Eq.(6) as β1 = 100.0, β2 = 10.0, β3 = 10.0 for Cityscapes→Woodscape (or FDD), and β1 = 100.0, β2 = 1.0, β3 = 100.0 for GTAV→Woodscape (or FDD). Further details on the implementation as well as our hyperparameter selections are provided in Appendix A and B.
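For reference, the reported training setup can be captured in a small configuration block; the key names below are ours, and details not stated in the paper (e.g., Adam's momentum terms, schedulers) are intentionally left out.

```python
# Hypothetical configuration mirroring Section 4.2; names are illustrative.
TRAIN_CONFIG = {
    "optimizer": "adam",
    "batch_size": 4,
    "learning_rate": {
        "segmentation_M_and_discriminator_DM": 0.2e-5,
        "generator_G_and_discriminator_DG": 0.1e-6,
    },
    # (β1, β2, β3) of Eq. (6), chosen per source domain.
    "loss_weights": {
        "cityscapes_to_woodscape_or_fdd": (100.0, 10.0, 10.0),
        "gtav_to_woodscape_or_fdd": (100.0, 1.0, 100.0),
    },
    "gamma_entropy": {"adaptseg": 0.0, "advent": 1.0},  # γ of Eq. (11)
}
```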
4.3 Comparisons with State-of-the-Art Methods
We first evaluate the effect of our distortion-aware adaptation framework when applied to the base adaptation methods AdaptSeg [33] and AdvEnt [34]. We use DeepLab-V2 [6] with a ResNet-101 backbone as the base semantic segmentation architecture M. We test different spatial transformations (+RDL: applying a diffeomorphic transformation learned by RDL, +RA: applying an affine transformation via RandAugment [9], +RA+RDL: applying both diffeomorphic and affine transformations) on source images in the adaptation tasks.
Woodscape as target domain (Tab.1). Compared to the base adaptation methods, applying either the fine-grained diffeomorphic (+RDL) or the global affine transformation (+RA) achieves clear improvements on all adaptation tasks. Remarkably, applying both fine-grained diffeomorphic and global affine transformations (+RA+RDL) achieves significant improvements of +7.38% and +2.92% on the Cityscapes→Woodscape and GTAV→Woodscape tasks, respectively. Note that the performance improvement of the distortion-aware adaptation comes from object classes such as person (+20.07%), car (+14.49%), bus (+14.61%), and truck (+12.61%), as well as background classes such as sidewalk (+13.09%) (see the class-wise IoU (%) in Appendix B.4). Optical distortion gradually increases as objects and background appear closer to the image periphery. We hypothesize that our distortion adaptation method rectifies erroneous predictions in such distorted regions. These improvements are also confirmed in Fig.1-(b), Fig.3, and Fig.5 (see Section 4.4).
FDD as target domain (Tab.1). Here, the results are consistent with the previous adaptation tasks. Again, +RA+RDL clearly shows prediction performance superior to the base methods, by up to +3.45% and +3.62% on Cityscapes→FDD and GTAV→FDD, respectively. Notably, +RDL contributes up to a +3.56% improvement (AdvEnt+RDL on Cityscapes→FDD), consistently achieving a higher performance gain than +RA (+2.71%). Such results are also echoed in the Cityscapes→CityscapesFishEye task in Appendix B.2. Note that +RDL always leads to improvements in segmentation adaptation, regardless of domain shift, throughout our experiments (e.g., base methods vs. +RDL, and +RA vs. +RA+RDL), as presented in Tab.1 and Tab.7 in Appendix B.2. In contrast, the randomized affine augmentation (RA) can lead to degraded segmentation adaptation results under the geometric distributional shifts between source and target domains (e.g., +RDL vs. +RDL+RA in Cityscapes→CityscapesFisheye and Cityscapes→FDD). Thus, we may conclude that our learnable diffeomorphic transformation (RDL) plays an important role in aligning the domain gap of geometric deformation.
Relationship to SSL (Tab.2). We evaluate the effect of our distortion-aware domain adaptation (DaDA), used as a “warm-up” phase, for state-of-the-art adaptation methods using self-supervised learning: IAST [26], IntraDA [27], and ProDA [40]. In particular, we train the initial segmentation model with +RA+RDL distortion adaptation, and use AdaptSeg-based adaptation models for IAST [26] and ProDA [40] and AdvEnt-based models for IntraDA [27]. Tab.2 shows the effectiveness of DaDA, which further improves self-supervised learning by providing higher-quality initial models and pseudo labels. DaDA attains up to a +6.82% improvement over the baseline SSL methods. This implies that satisfactory distortion-aware adaptation cannot be achieved by relying on SSL alone.
Comparisons with Image-to-Image Translation.
We compare our relative distortion learning (RDL) with an existing image-to-image translation method, CycleGAN [42]. Unpaired rectilinear source images (Cityscapes and GTAV) and fisheye target images (Woodscape) are given to both RDL and CycleGAN. The input images are randomly cropped and resized to 512×512. Fig.4 shows that CycleGAN fails to generate transformed images from source images that mimic the distortion style of target images. This is expected, since existing CycleGAN-like translation approaches have no mechanism (e.g., a diffeomorphic transformer) to geometrically transform source images. In contrast, RDL enables modeling relative distortion across domains and generates transformed source images resembling the distortion style of target images. In Fig.4, buildings and vehicles are distorted to replicate their counterparts in target images via RDL, while CycleGAN focuses on translating the texture and color of images. Fig.4 also demonstrates that RDL is able to reconstruct source images (IS) from the distortion-translated images (IS→T) via an inverse relative deformation field ΦT→S. Overall, these results imply that our distortion-aware losses (Lrecon, Lsem) are effective in guiding the generator G to produce convincing relative deformation fields.
4.4 Ablation Studies
To better understand the effect of our distortion-aware adaptation approach, we conduct ablation studies on the distortion-aware losses and on the ability to predict distorted image regions.
Effect of Distortion-aware Losses (Tab.3). We evaluate the effect of the distortion-aware losses on segmentation performance. Tab.3 depicts the improvement of prediction performance compared to the base methods when adding individual or all of the proposed distortion-aware losses in Eq.(11). Adding only the adversarial loss (+Ladv_G) contributes up to +3.28% of improvement to the adaptation performance. Progressive introduction of the distortion-aware losses consistently improves prediction accuracy (+Lsem, +Lrecon). Ultimately, utilizing all losses together with segmentation adaptation achieves improvements of +5.68% and +1.69% for Cityscapes→Woodscape and GTAV→Woodscape, respectively. Here we observe a relatively smaller improvement on the GTAV→Woodscape task, which exhibits both severe visual and geometric distributional misalignment. We believe that introducing additional texture-aware translation methods along with our distortion adaptation approach might lead to further improvement in such a synthetic-to-real adaptation task. Overall, the relative distortion learning supervised by the distortion-aware losses effectively reduces domain shifts in optical distortion, and thus improves the prediction performance.
Distortion-aware mIoU. To quantitatively demonstrate the effectiveness of DaDA in predicting distorted image areas, we propose a distortion-aware mIoU (%) metric. The image coordinates are normalized to [-1.0, 1.0], and we gradually mask out the label at pixels (i, j) whose radius √(i² + j²) is smaller than a certain distance threshold (dist). In Fig.5 (bottom row), dist = 0.0 means that the original label is used for the mIoU (%) calculation, while dist = 0.8 indicates that large undistorted regions are masked out in the label so that we evaluate the models on distorted regions. Plots in Fig.5 (top row) show that the performance gain (∆mIoU) achieved by adding DaDA increases as dist increases.
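The radial masking behind this metric is straightforward; a possible implementation is sketched below (the ignore index and the use of NumPy are our assumptions, not the paper's code).

```python
import numpy as np

def mask_undistorted(label, dist, ignore_index=255):
    # label: (h, w) integer class map; dist: radius threshold in [0, sqrt(2)].
    # Pixels with normalized radius < dist are ignored, so the mIoU is computed
    # only on the (more distorted) periphery of the image.
    h, w = label.shape
    jj, ii = np.meshgrid(np.linspace(-1.0, 1.0, w), np.linspace(-1.0, 1.0, h))
    masked = label.copy()
    masked[np.sqrt(ii ** 2 + jj ** 2) < dist] = ignore_index
    return masked
```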
These plots indicate that DaDA effectively addresses domain shifts in distortion and improves the prediction performance for distorted image regions.
5 Discussion and Conclusion
In this paper, we proposed a novel distortion-aware domain adaptation (DaDA) framework that is capable of modeling domain shifts in geometric deformation based on a relative distortion learning (RDL) method. In addition, we demonstrated that our distortion adaptation approach further improves self-supervised learning by providing higher-quality initial models and pseudo labels. Extensive experimental results showed that our method minimizes domain shifts in optical distortion, and thus significantly improves the segmentation adaptation performance under unknown relative distortion across domains. In the future, we will investigate the interplay between texture-oriented and distortion-oriented domain shifts to further improve unsupervised domain adaptation. While we first tackle adapting existing semantic segmentation models trained on rectilinear images to unlabeled fisheye images, various setups of domain adaptation tasks among distorted and rectilinear images can be considered further. For example, we may use distorted images as the source and rectilinear images as the target, or both the source and target domains may include distorted images. Applying relative distortion learning to such extended adaptation tasks could be another interesting direction for future work. We hope our work provides a solid baseline and new perspectives on distortion-aware domain adaptation.
6 Disclosure of Funding
We received no third-party funding for this work.
1. What is the primary contribution of the paper regarding domain adaptation for semantic segmentation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to handle geometric distortion?
3. How effective is the proposed method compared to other methods that address appearance changes and output space discrepancies?
4. Is the proposed method capable of closing the gap under only geometric distortion, or does it rely on other factors such as spatial augmentation?
5. How does the proposed method compare to optical networks and camera calibration in terms of their ability to address domain gaps caused by geometric distortion?
6. Are there any limitations or potential negative societal impacts associated with the proposed approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper concerns the geometric distortion that causes domain gaps during unsupervised domain adaptation for semantic segmentation, which is motivated by the differences between rectilinear images and fisheye images. It is practically well-grounded. The authors propose to train a deformation generator that consumes two images chosen from the source and target domains. The output of the deformation generator is used to map the source image to the target style in terms of geometric distortion. The training or adaptation then happens by learning from the distorted source image and its semantic segmentation. The authors also propose a few benchmarks to evaluate the performance of the method and show good domain adaptation gains compared to baselines that concern global appearance change of the images. An ablation study on each term in the proposed training loss is also performed.
Strengths And Weaknesses
Strengths:
- the motivation is practically meaningful and the introduced setting is interesting.
- the proposed method is simple yet shows good domain adaptation gains. Also, the proposed benchmarks could help develop more sophisticated methods that deal with both geometric distortion and appearance change.
- the constraints to impose semantic consistency of the generated geometric distortion are interesting.
- the paper is well-written and the idea is clearly conveyed.
Weaknesses:
Even though the motivation is well-grounded, the study of the validity of the problem, or of how the motivating factor affects the adaptation performance, is not crystal clear. For example, the authors should isolate geometric distortion from other factors like appearance and output space discrepancies. A simple experiment to do is to create a distorted target dataset from rectilinear images, and check how the other methods and the proposed method perform. The current message in this paper does not support a judgment on this aspect. In theory, an ideal method should close the gap given only geometric distortion. But there is concern about whether the proposed method can do it under only geometric distortion, since the proposed method randomly picks two images even though they are not in correspondence. This may be okay, since we can think of the proposed method as a kind of spatial augmentation method, but we need to check the results.
The above concerns are critical for us to analyze the proposed method and the underlying problems that the proposed method is trying to solve. For example, affine augmentation already works well in some of the directions in Table 1; does this mean that spatial augmentation/transformation is the key here? If so, would the proposed method be a more sophisticated version of such an augmentation method? Also, how does the proposed method compare to spatial transformations generated by optical flow networks? The proposed method does not really care about finding the correct underlying dense matching, which makes optical flow (OF) look like a good candidate to try: say, use OF to output some flow, then warp the images to the target domain (or vice versa), and then adapt? What is the difference then?
If camera distortion is the only factor that causes the domain gap and is the focus of the current paper, then it seems that camera calibration would be a nice tool and can be performed quite well using existing tools. Then what does the paper tell us beyond that?
Questions
The main concerns are listed in the above section; please provide more information, since they are related to the significance of the proposed problem and method.
Some comments and questions on writing:
- the title "DaDA: Distortion-aware Domain Adaptation for Unsupervised Semantic Segmentation" may not be appropriate, as semantic segmentation is not unsupervised.
- ln 58 "to enforce the semantic quality of relative deformation fields at the image- and the prediction level." how do you define the semantic quality of deformation fields? also, what do you mean by "image- and prediction level"?
- ln 142, the equation is confusing: phi_s2t is a mapping from the source domain to the target domain, but it takes pixel coordinates from the source and maps them to the source.
Limitations
Did not observe much negative societal impact. On the limitation side, the authors indicate that other methods that deal with texture differences between domains can be combined with the proposed one to further improve performance. However, this is also an indicator that a paper with a clean motivation is evaluating on datasets with multiple causal factors, which does not convey a clean message.
NIPS
Title DaDA: Distortion-aware Domain Adaptation for Unsupervised Semantic Segmentation
Abstract Distributional shifts in photometry and texture have been extensively studied for unsupervised domain adaptation, but their counterparts in optical distortion have been largely neglected. In this work, we tackle the task of unsupervised domain adaptation for semantic image segmentation where unknown optical distortion exists between source and target images. To this end, we propose a distortion-aware domain adaptation (DaDA) framework that boosts the unsupervised segmentation performance. We first present a relative distortion learning (RDL) approach that is capable of modeling domain shifts in fine-grained geometric deformation based on diffeomorphic transformation. Then, we demonstrate that applying additional global affine transformations to the diffeomorphically transformed source images can further improve the segmentation adaptation. In addition, we find that our distortion-aware adaptation method helps to enhance self-supervised learning by providing higher-quality initial models and pseudo labels. To evaluate, we propose new distortion adaptation benchmarks, where rectilinear source images and fisheye target images are used for unsupervised domain adaptation. Extensive experimental results highlight the effectiveness of our approach over state-of-the-art methods under unknown relative distortion across domains. Datasets and more information are available at https://sait-fdd.github.io/.
1 Introduction
Recent years have witnessed dramatic improvements in semantic segmentation performance based on massive amounts of pixel-level annotations. However, applying a segmentation model learned from the training dataset (source domain) directly to unseen test scenarios (target domain) often leads to significant performance degradation, which is caused by the divergence of data distributions between the source and target domains. In addition, manually annotating target-domain images with pixel-wise labels is extremely expensive and taxing. To address such issues, many studies have proposed unsupervised domain adaptation methods to transfer the knowledge learned from the labeled source domain to the unlabeled target domain (e.g., [33, 34, 27, 26, 40]). Existing works on unsupervised segmentation adaptation have predominantly investigated domain shift problems arising mainly from photometric and textural discrepancies between rectilinear image domains (e.g., adapting synthetically generated images [29, 30] to real-world imagery [8], or adapting images from/to different cities [7]). In contrast, domain shifts in geometric and optical distortion have not been well explored, despite commonly appearing in many practical applications. For example, wide-angle cameras (e.g., fisheye cameras) have been extensively used for complex vision systems like autonomous vehicles [38] and surveillance [4] to obtain more information about the surrounding environment. Fisheye images (e.g., target image IT in Fig.1) have quite different geometric deformations (e.g., radial distortion) compared to regular rectilinear images (e.g., source image IS in Fig.1). Such distortion variants pose even more challenging domain adaptation tasks.
For example, a state-of-the-art semantic segmentation model [6] only trained on rectilinear images fails to correctly predict pixels of vehicles and roads under optical distortion and its overall prediction accuracy drops from 67.02% (trained on fisheye images) to 32.39% (trained only on rectilinear images, see Tab.1). To alleviate this, we may rectify distortion at test time. However, such a workaround inevitably leads to reduced field-of-view (e.g., over 30% loss for fisheye images [38]), resampling distortion artifacts at the image periphery, calibration errors in practice, and additional computing resources for the rectification step at test time [19], which is against the original purpose of using wide-angle cameras. This urges us to use native fisheye images without needing rectification. Another motivation for our work is the scarcity and difficulty of constructing annotations for distorted fisheye images (e.g., Woodscape [38]), while we already have larger amounts of annotations for rectilinear images (e.g., Cityscapes [8], GTAV [29]). Remarkably, we have relatively fewer public datasets providing large-scale and finely annotated fisheye images. Woodscape [38] is currently the only real-world public dataset with segmentation annotations. Such a lack of annotations for distorted images necessitates introducing optical distortion into unsupervised domain adaptation. With these insights, we first formulate an important but challenging unsupervised domain adaptation task for semantic segmentation where unknown optical distortion exists between the source and target domains. To this end, we propose a novel distortion-aware domain adaptation (DaDA) framework, which provides a new perspective to minimizing domain gaps in geometric and optical distortion. We first present relative distortion learning (RDL), which is capable of modeling relative deformation between the source and target domain. In particular, we build a deformation field generator to transform the source image to a new image sharing similar distortion features of the target image. To enable such challenging unsupervised and unpaired distortion learning, we exploit the properties of diffeomorphism, that is differentiable and has a differentiable inverse. We directly integrate such properties into our distortion-aware losses to enforce the semantic quality of relative deformation fields at the image- and the prediction level. We also observe that applying additional global affine transformations (e.g., rotation, shearing) to the diffeomorphically transformed source images can further improve the segmentation adaptation performance in most cases. Owing to fine-grained diffeomorphic and global affine transformations, our framework provides distortionaware segmentation adaptation models and reliable pseudo labels, and thus ultimately improves the performance of self-supervised learning methods. To validate, we propose new domain adaptation benchmarks, where a segmentation model trained on rectilinear images (Cityscapes [8] or GTAV [29]) is transferred to fisheye images (Woodscape [38] or our in-house fisheye driving scene dataset (FDD)). With mean Intersection-over-Union (mIoU) as the evaluation metric, our framework achieves significantly improved prediction performance compared to existing segmentation adaptation methods. 
In essence, our key contributions can be summarized as: (1) a distortion-aware domain adaptation framework for boosting unsupervised semantic segmentation performance in the presence of unknown relative distortion; (2) distortion-aware losses that effectively enforce the unpaired and unsupervised relative distortion learning to transfer distortion style from target to source domain; (3) new unsupervised domain adaptation benchmarks posing challenging tasks, where source and target images have additional domain gaps in optical distortion. 2 Related Work 2.1 Domain Adaptive Semantic Segmentation Recent advances in image-to-image translation and style transfer [42] have promoted the translation of source-domain images to mimic texture and appearance of the target-domain images. CyCADA [24] applied CycleGAN [42] to preserve semantic consistency before and after image-level adaptation. BDL [22] proposed a bidirectional learning method where image-to-image translation and segmentation adaptation is learned alternatively. Similar methods have been proposed to adapt source images [35, 17] or to reconstruct the source image from the output-level representation [36]. Deviated from these adversarial approaches, FDA [37] proposed a method to align the low-level frequencies of source images and their counterparts in target images. Ma et al. [25] applied a photometric alignment method to minimize domain shifts in image- and category-level feature distributions. More recently, self-supervised learning (SSL) approach has emerged in domain adaption tasks to further improve the segmentation adaptation performance. SSL approaches try to rectify noisy pseudo labels with respect to uncertainties or confidence scores [22, 17, 28, 27, 26, 13, 41, 40]. These methods inherently require good initial models, that are commonly trained on source or adapted on target beforehand. To obviate the need of multi-stage and adversarial training, Araslanov and Roth [2] introduced a photometric augmentation method to SSL. However, none of the prior works directly considers distortion-oriented domain shifts in the unsupervised segmentation adaptation tasks. Moreover, it remains unknown how well existing adaptation methods address unknown relative distortion between domains. As a pioneering work, we first aim to evaluate prior arts on the adaption task in the presence of unknown optical distortion across domains. For this, we noted that many of current state-of-the-art methods rely on SSL (e.g., [17, 27, 26, 41, 40]) and have adopted AdaptSeg [33] or AdvEnt [34] as for their based segmentation adaptation. Thus, we take these adversarial adaptation approaches as our baseline methods and further evaluate SSL approaches [26, 27, 40] in the presence of optical distortion shifts across domains. To evaluate, we newly formulate segmentation adaptation benchmarks, i.e., transferring from real or synthetic rectilinear images to real fisheye images. 2.2 Diffeomorphic Deformation Networks Spatial transformation networks (STN [15]) have been used in various contexts since it is capable of learning differentiable deformation field. STN primarily allows simple linear transformations (e.g., affine, translations, scaling, and rotations) and can be extended to more flexible mappings such as thin plate spline transformations [23]. However, optical distortion involves complex nonlinear transformation and STNs are limited to support such transformations. Instead Detlefsen et al. 
[12] proposed to exploit diffeomorphisms in spatial transformation, which can model complex nonlinear deformation and address optimization divergence issues in learning spatial transformation. A diffeomorphism is a globally one-to-one continuous and smooth mapping, which has a differentiable inverse. Diffeomorphic transformations have been largely used for image registration [16, 10, 32] and shape analysis [39]. However, traditional diffeomorphic deformation methods demand high computational costs and difficult to implement; and thus make it difficult to be incorporated into deep neural networks. To address such issue, Dalca et al. [11] proposed a probabilistic generative model to generate diffeomorphic deformation field for medical image registration task. However, their work depends on a paired set of 3D brain images depicting the same contexts. Inspired by [11], we propose a new relative distortion generator, which takes a set of unpaired source- and target-domain image, to transform the source image to a new image sharing similar distortion style of the target image. 2.3 Radial Distortion Rectification Traditional distortion rectification approaches require images captured with specific camera calibration parameters [20] and thus are not flexible. The other work exploits the principle that a straight line should be projected into a straight line in a calibrated image [1], but such method mainly depends on the accuracy of line detection. More recently, deep neural networks have been used for robust and efficient rectification performance [5, 21]. However, these works primarily aim to rectify distorted images and do not support distortion style transfer for unpaired and unsupervised domain adaptation tasks. In contrast, we propose to transfer distortion style between unpaired source and target images. 3 Method Let S be the source-domain data with segmentation labels YS and T be the target-domain data with no labels. We aim to train a scene segmentation network M performing satisfactory pixel-wise prediction on T by minimizing domain gaps between S and T . Here we formulate more challenging adaptation tasks where distributional shifts include not only visual domain gaps (e.g., texture, lighting, contrast) but also geometric and optical distortions (e.g., radial distortion). Such domain gaps make M difficult to learn transferable knowledge in both visual and geometric domains. 3.1 Relative Distortion Learning To minimize the discrepancy between source and target domain, prior works proposed pixel-level image translation methods based on cycle-consistency [14, 17, 35, 42] or photometric alignment [2, 25]. However, these methods do not consider domain gaps in geometric distortion and thus fail to transfer distortion style from IT to IS (see Fig.4). To address this, we propose a relative distortion learning (RDL) method, which predicts a relative deformation field to transfer the distortion style between domains. Given a source-domain image IS and a target-domain image IT , we aim to transform IS to a new image IS→T based on a relative deformation field ΦS→T , where IS→T shares a similar distortion style of IT . We achieve this transformation through a grid-based sampling operation IS ◦ ΦS→T = IS→T . Ultimately, the transformed source image IS→T aim to mitigate the domain shift in optical distortion at a fine-grained level. 
However, it is not trivial to predict relative deformation fields since input images are not paired (i.e., dissimilar image contents); and the relative distortion involves nonlinear geometric deformation, which is hard to be parameterized without knowing optical features (e.g., lens distortion model, focal length). To learn such a challenging geometric relationship, we exploit diffeomorphic transformation in the unpaired and unsupervised distortion translation task. Diffeomorphic Transformation. Let u ∈ R2×w×h be a flow field which is constant over time. Then, we describe a differential equation of evolution of deformation φ(t) by ∂φ(t) ∂t = u(φ(t)), (1) where t is time and φ(0) the identity transformation. By integrating u(φ(t)) over t ∈ [0, 1], we get a diffeomorphic deformation field Φ := φ(1), which maps the coordinates from one image to another image. For fast and differentiable integration, we use an exponentiated integration technique, so-called scaling-and-squaring [3]. This integration method starts from φ(1/2 T ) = φ(0) + u/2T and iteratively computes deformation fields of next time steps via φ(1/2 t−1) = φ(1/2 t) ◦ φ(1/2t) over T times, where ◦ indicates a grid-based resampling operation. Then, we obtain an approximate deformation field Φ. The inverse deformation field Φ− can be achieved by integrating the negative field φ(1/2 T ) = φ(0) − u/2T via the same integration method. To generate a field of ΦS→T , which defines relative distortion between IS and IT , we propose a deformation field generator G, which takes both IS and IT ; and their first-order gradient (i.e., Sobel filter) ∇IS and ∇IT as input. Here we use the image gradients to provide rich geometric features inspired by the distortion rectification method [1]. Then, G generates deformation field ΦS→T and its inverse field ΦT→S . For example, in Fig.2, G generates a flow field u to construct a ΦS→T (red-colored grid in (e)) from a pair of IT (a) and IS (b). Finally, a new transformed image IS→T (e) is generated via IS ◦ ΦS→T . Similarly, negative integration of the flow field u yields an inverse deformation field ΦT→S , which generate a reconstructed image (d) via I ′S = IS→T ◦ ΦT→S . To supervise the generator G for learning the relative distortion in unpaired and unsupervised settings, we exploit the properties of diffeomorphic transformations: topology preserving, invertible, and differentiable. Such diffeomorphic constraints are directly built into relative distortion learning to enforce desirable deformation field outputs. In particular, we propose distortion-aware losses: distortion reconstruction loss and semantic distortion-consistent loss. These losses evaluate the cycleconsistency of the relative deformation field at the image- and the prediction-level, while enforcing the semantic quality of the relative deformation field. Distortion reconstruction loss (Lrecon) ensures that G generates convincing flow fields to reconstruct the source image IS from the transformed image IS→T by the inverse deformation field ΦT→S , and vice versa: Lrecon = ‖IS − I ′S‖1 + ‖IT − I ′T ‖1, where I ′S = IS→T ◦ ΦT→S , I ′T = IT→S ◦ ΦS→T (2) Semantic distortion-consistent loss (Lsem) enforces semantic consistency between the source image IS and its transformed pair IS→T at the pixel-wise prediction level. The same constraint can be applied to between IT and IT→S . 
That is, an ideal deformation field generator G should be able to generate ΦS→T and ΦT→S , that reconstruct structural information from the pixel-wise prediction outputs of the distorted input images. With these semantic constraints, the relative distortion learning can be further improved by the segmentation model M ; and also the segmentation adaptation can benefit from the improved quality of relative deformation fields. The loss function can be written as: Lsem = ‖M(IS) ◦ ΦS→T −M(IS→T )‖1 + ‖M(IT ) ◦ ΦT→S −M(IT→S)‖1. (3) To minimize domain gaps in optical distortion between the transformed source images IS→T and the target-domain images IT , we introduce a distortion-aware discriminator DG, which aims to discriminate distortion style between IT and IS→T . Similar to G, we again use the first-order image gradient, ∇IT and ∇IS→T , as input to DG along with the images. The loss function is defined as: LDG = EIS∼S,IT∼T [1−DG(IS ◦ ΦS→T ,∇(IS ◦ ΦS→T ))] + EIT∼T [DG(IT ,∇IT )]. (4) We also calculate the adversarial loss using DG: Ladv_G = EIS∼S,IT∼T [DG(IS ◦ ΦS→T ,∇(IS ◦ ΦS→T ))], (5) where the deformation generator G tries to produce the relative deformation field to fool the discriminator DG in distinguishing distortion style. Therefore, the total loss function for the relative distortion learning is defined as: Lrdl = β1Lrecon + β2Lsem + β3Ladv_G, (6) where β1, β2, and β3 are constants controlling the effect of corresponding losses. 3.2 Distortion-aware Adversarial Adaptation Relative distortion learning aims to adapt optical distortion shifts at both pixel- and feature-level. Based on the adapted representations, we further apply adversarial segmentation adaptation. As our baseline adaptation methods, we take AdaptSeg [33] and AdvEnt [34], that have been extensively used in many prior work [26–28, 40] as for their based segmentation adaptation. Typically, the segmentation loss uses the cross-entropy for the transformed source image IS→T and its corresponding label YS→T : Lseg = − ∑ h,w ∑ c∈C Y (h,w,c) S→T log(M(IS→T ) (h,w,c)), (7) where YS→T is obtained by transforming ground-truth annotations from the source domain by YS ◦ ΦS→T . Entropy minimization loss is further introduced by AdvEnt [34]: Lent = −1 log(C) ∑ h,w ∑ c∈C M(IT ) (h,w,c) logM(IT ) (h,w,c), (8) which tries to directly minimize pixel-wise entropies to enhance prediction certainty in the target domain. As a common practice, a domain discriminator DM is used to minimize the difference between the transformed source and target prediction probabilities: LDM = EIS∼S,IT∼T [1−DM (M(IS ◦ ΦS→T ))] + EIT∼T [DM (M(IT ))]. (9) The adversarial loss function for segmentation adaptation can be written as: Ladv_M = EIT∼T [DM (1−M(IT ))], (10) where the segmentation model M is trained to fool the discriminator DM . Equipped with the relative distortion learning and the adversarial adaptation, we train the adaptive segmentation network with the following total loss: Lall = Lrdl + Ladv_M + Lseg + γLent, (11) where γ is 0 for AdaptSeg [33] and 1 for AdvEnt [34]. 4 Experimental Results We present extensive experimental results to validate our distortion-aware domain adaptation (DaDA) framework for semantic segmentation in the presence of both visual and geometric domain shifts. For this, we formulate new domain adaptive segmentation benchmarks, i.e., transferring from real-world or synthetic rectilinear images to real fisheye images. 
4.1 Datasets In the experiments, we first show evaluations of the model trained on real-world rectilinear images from the Cityscapes [8] and test the adapted model on real fisheye images from Woodscape [38] or our in-house fisheye driving dataset (FDD). Then, we introduce a more challenging adaptation task, where the model trained on synthetic dataset GTAV [29] is transferred to real fisheye images (Woodscape or FDD) without annotations. The Cityscapes dataset contains 2, 975 training images of high-quality driving scene with the resolution of 2048× 1024. The GTAV dataset contains 24, 966 synthesized images with the resolution of 1914× 1052. The Woodscape dataset consists of 8, 234 fisheye images with the resolution of 1280× 966, where the images are captured by fisheye cameras (190◦ F.O.V) looking at four different directions of the vehicle. We use front and rear camera scenes containing 4, 023 images in our experiments. The images are randomly split into a training set with 3, 023 images and a validation set with 1, 000 images. We report the results on 17 classes aligning mismatched classes between Cityscapes and GTAV; and Woodscape. Our in-house fisheye driving dataset (FDD) includes 3, 897 of fully annotated images with the resolution of 1920× 1080 captured by fisheye cameras (200◦ F.O.V) at front- and rear-side of the vehicle. We randomly pulled 974 of validation images and remaining 2, 923 images are used for the training. For FDD, we use 12 classes, where incompatible classes in Cityscapes and GTAV are merged or excluded similar to Woodscape (see Appendix B.4 for detailed class information of Woodscape and FDD). 4.2 Experimental Details Diffeomorphic and Affine Transformation Fig.2 depicts an example of diffeomorphic and affine transformations applied to a source- and a target-domain image. First, both original target and source images are randomly cropped and resized to (a) and (b) followed by randomized horizontal flipping and photometric jittering. For the fine-grained diffeomorphic transformation, both target (IT in (a)) and source image (IS in (b)) are used to generate a transformed image IS→T in (e) via the relative deformation field generator G. Additional global affine transformation is applied to IS→T to generate an image in (f), which includes both fine-grained diffeomorphic and global affine deformations. For the affine transformation, we adopted RandAugment (RA) [9] excluding preceded photometric jittering. RA is one of the immediately probable and applicable augmentation methods including a series of affine transformations (i.e., rotation, shear-x, shear-y, trans-x, and trans-y with 0.5 probability of application). For example, a rotation transformation is applied to the source image IS in (c). In our experiments, we tested transformed images generated by either one of affine-only (c), diffeomorphic-only (e), or both transformations (f). Implementation Details. We trained all networks with the Adam [18] solver with a batch size of 4. The learning rate is 0.2× 10−5 for M and DM and 0.1× 10−6 for G and DG. We set the weight factors of losses in Eq.(6) as: β1 = 100.0, β2 = 10.0, β3 = 10.0 for Cityscapes→Woodscape (or FDD); and β1 = 100.0, β2 = 1.0, β3 = 100.0 for GTAV→Woodscape (or FDD). Further details on the implementation as well as our hyperparameter selections are provided in Appendix A and B. 
4.3 Comparisons with State-of-the-Art Methods We first evaluate the effect of our distortion-aware adaptation framework when applied to the based adaptation methods: AdaptSeg [33] and AdvEnt [34]. We use DeepLab-V2 [6] with ResNet101 backbone as the base semantic segmentation architecture M . We test the different spatial transformations (+RDL: applying a diffeomorphic transformation learned by RDL, +RA: applying an affine transformation via RandAugment [9], +RA+RDL: applying both diffeomorphic and affine transformations) on source images in the adaptation tasks. Woodscape as target domain (Tab.1). Compared to the based adaptation methods, applying either one of fine-grained diffeomorphic (+RDL) or global affine transformations (+RA) achieves clear improvements on the all adaptation tasks. Remarkably, applying both fine-grained diffeomorphic and global affine transformations (+RA+RDL) achieves significant improvements of +7.38% and +2.92% on Cityscapes→Woodscape and GTAV→Woodscape tasks, respectively. Note that the performance improvement of the distortion-aware adaptation comes from object classes such as person (+20.07%), car (+14.49%), bus (+14.61%), and truck (+12.61%) as well as background such as sidewalk (+13.09%) (see class-wise iou(%) in Appendix B.4). Optical distortion gradually increases when objects and backgrounds appear closer to the image periphery. We hypothesize that our distortion adaptation method rectifies erroneous predictions in such distorted regions. These improvements are also confirmed in Fig.1-(b), Fig.3, and Fig.5 (See Section 4.4). FDD as target domain (Tab.1). Here, the results are consistent with the previous adaptation tasks. Again, +RA+RDL clearly shows superior prediction performance to the based methods by up to +3.45% and +3.62% on Cityscapes→FDD and GTAV→FDD, respectively. Notably, +RDL contributes up to 3.56% (AdvEnt+RDL on Cityscapes→ FDD) of improvements, which also consistently achieves higher performance gain than +RA (+2.71%). Such results are also echoed in the Cityscapes → CityscapesFishEye task in Appendix B.2. Note that +RDL always leads to improvements in segmentation adaptation, regardless of domain shift, throughout our experiments (e.g., based methods vs. +RDL, and +RA vs. +RA+RDL) as presented in Tab.1 and Tab.7 in Appendix B.2. In contrast, the randomized affine augmentation (RA) leads to degraded segmentation adaptation results upon the geometric distributional shifts between source and target domains (e.g., +RDL vs. +RDL+RA in Cityscapes→ CityscapesFisheye and Cityscapes→ FDD). Thus, we may conclude that our learnable diffeomorphic transformation (RDL) plays an important role in aligning the domain gap of geometric deformation. Relationship to SSL (Tab.2). We evaluate the effect of our distortion-aware domain adaptation (DaDA), as a “warm-up” phase, for state-of-the-art adaptation methods using self-supervised learning: IAST [26], IntraDA [27], and ProDA [40]. In particular, we train the initial segmentation model with +RA+RDL distortion adaptation; and use AdaptSeg-based adaptation models for IAST [26] and ProDA [40]; and AdvEnt-based models for IntraDA [27]. Tab.2 shows the effectiveness of DaDA, where it further improves the self-supervised learning by providing higher-quality initial models and pseudo labels. DaDA attains up to +6.82% improvement against the baseline SSL methods. This implies that satisfactory distortion-aware adaptation cannot be achieved only by relying on SSL. Comparisons with Image-to-Image Translation. 
We compare our relative distortion learning (RDL) with an existing image-to-image translation method CycleGAN [42]. Unpaired rectilinear source images (Cityscapes and GTAV) and fisheye target images (Woodscape) are given to both RDL and CycleGAN. The input images are randomly cropped and resized to 512× 512. Fig.4 shows that CycleGAN fails to generate transformed images from source images mimicking the distortion style of target images. This is obvious to observe since CycleGAN-like existing translation approaches do not have devices (e.g., diffeomorphic transformer) to geometrically transform source images. In contrast, RDL enables modeling relative distortion across domains and generates transformed source images alike the distortion style of target images. In Fig.4, buildings and vehicles are distorted replicating counterparts in target images via RDL, while CycleGAN focuses on translating texture and color of images. Fig.4 also demonstrates that RDL is able to reconstruct source images (IS) from the distortion-translated images (IS→T ) via an inverse relative deformation field ΦT→S . Overall, these results imply that our distortion-aware losses (Lrecon, Lsem) are effective in guiding the generator G to produce convincing relative deformation fields. 4.4 Ablation Studies To better understand the effect of our distortion-aware adaptation approach, we conduct ablation studies on the distortion-aware losses and the competence in predicting distorted image regions. Effect of Distortion-aware Losses (Tab.3). We evaluate the effect of the distortion-aware losses on segmentation performance. Tab.3 depicts the improvement of prediction performance compared to the based methods, by adding individual or all of the proposed distortion-aware losses in Eq.(11). Only adding the adversarial loss (+Ladv_G) contributes to the adaptation performance up to +3.28% of improvements. Progressive introduction of the distortion-aware losses consistently improves prediction accuracy (+Lsem, +Lrecon). Ultimately, utilizing all losses together with segmentation adaptation achieves +5.68% and +1.69% of improvements for Cityscapes → Woodscape and GTAV→ Woodscape, respectively. Here we observe a relatively smaller improvement in GTAV →Woodscape task, which exhibits both severe visual and geometric distributional misalignment. We believe that introducing additional texture-aware translation methods along with our distortion adaptation approach might lead to further improvement in such a synthetic-to-real adaptation task. Overall, the relative distortion learning supervised by the distortion-aware losses effectively reduces domain shifts in optical distortion, and thus improves the prediction performance. Distortion-aware mIoU. To quantitatively demonstrate the effectiveness of DaDA on predicting distorted image areas, we propose a distortion-aware mIoU(%) metric. The image coordinates are normalized to [-1.0,1.0] and we gradually mask label where √ i2 + j2 of pixel at (i, j) is smaller than a certain distance threshold (dist). In Fig.5 (bottom row), dist = 0.0 shows that the original label is used for mIoU(%) calculation, while dist = 0.8 indicates that large areas of undistorted regions are masked out in the label so that we evaluate the mod- els on distorted regions. Plots in Fig.5 (top row) show that the performance gain (∆mIoU) achieved by adding DaDA increases as dist increases. 
5 Discussion and Conclusion In this paper, we proposed a novel distortion-aware domain adaptation (DaDA) framework that is capable of modeling domain shifts in geometric deformation based on a relative distortion learning (RDL) method. In addition, we demonstrated that our distortion adaptation approach further improves self-supervised learning by providing higher-quality initial models and pseudo labels. Extensive experimental results demonstrate that our method reduces domain shifts in optical distortion, and thus significantly improves segmentation adaptation performance under unknown relative distortion across domains. In the future, we will investigate the interplay between texture-oriented and distortion-oriented domain shifts to further improve unsupervised domain adaptation. While this work focuses on adapting existing semantic segmentation models trained on rectilinear images to unlabeled fisheye images, various other set-ups of domain adaptation between distorted and rectilinear images can be considered. For example, distorted images may serve as the source and rectilinear images as the target, or both the source and target domains may contain distorted images. Applying relative distortion learning to such extended adaptation tasks could be another interesting direction for future work. We hope our work provides a solid baseline and new perspectives on distortion-aware domain adaptation. 6 Disclosure of Funding We received no third-party funding for this work.
1. What is the focus and contribution of the paper on domain adaptation for semantic image segmentation? 2. What are the strengths of the proposed approach, particularly in combining geometric and optical distortion? 3. What are the weaknesses of the paper, especially regarding the experimental results and the neatness of the reconstructed images? 4. Do you have any questions regarding the references cited in the paper? 5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors propose a distortion-aware domain adaptation (DaDA) framework that is capable of modeling domain shifts in geometric deformation based on a relative distortion learning (RDL) method. The proposed method tackles the task of unsupervised domain adaptation for semantic image segmentation where unknown optical distortion exists between the source and target images. Adequate experimental results also support the validity of the proposed method.

Strengths And Weaknesses 1. Strengths: This paper proposes a novel domain adaptation method for unsupervised semantic segmentation, combining geometric and optical distortion in the domain shift. The extensive experimental results highlight the effectiveness of the proposed approach over state-of-the-art methods under unknown relative distortion across domains. 2. Weaknesses: See the "Questions" part.

Questions 1. The figures are placed far from where they are referenced in the body text. The authors are encouraged to move them closer to facilitate comparative reading. 2. Why are the edges of the reconstructed source domain images not clean? For example, part (d) in Figure 2. 3. In the "Experimental Details" section, the performance of both the "AdaptSeg+RA+RDL" and "AdvEnt+RA+RDL" methods decreases compared to the corresponding variants without RA when Cityscapes is the source domain dataset and FDD is the target domain dataset. The authors should explain this experimental result. 4. Are references [17] and [18] duplicates? 5. The format of the references is not uniform. For example, the format of reference [28] differs from the rest of the bibliography. 6. Most of the references are from before 2020. It is recommended to cite more recent papers (2021 onward) in related fields.

Limitations The authors adequately address the limitations of their work and the potential negative social impact.
NIPS
Title Fast and Flexible Multi-Task Classification using Conditional Neural Adaptive Processes Abstract The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature. The resulting approach, called CNAPS, comprises a classifier whose parameters are modulated by an adaptation network that takes the current task’s dataset as input. We demonstrate that CNAPS achieves state-of-the-art results on the challenging META-DATASET benchmark indicating high-quality transfer-learning. We show that the approach is robust, avoiding both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPS is computationally efficient at test-time as it does not involve gradient based adaptation. Finally, we show that trained models are immediately deployable to continual learning and active learning where they can outperform existing approaches that do not leverage transfer learning.

1 Introduction We consider the development of general purpose image classification systems that can handle tasks from a broad range of data distributions, in both the low and high data regimes, without the need for costly retraining when new tasks are encountered. We argue that such systems require mechanisms that adapt to each task, and that these mechanisms should themselves be learned from a diversity of datasets and tasks at training time. This general approach relates to methods for meta-learning [1, 2] and few-shot learning [3]. However, existing work in this area typically considers homogeneous task distributions at train and test-time that therefore require only minimal adaptation. To handle the more challenging case of different task distributions we design a fully adaptive system, requiring specific design choices in the model and training procedure. Current approaches to meta-learning and few-shot learning for classification are characterized by two fundamental trade-offs. (i) The number of parameters that are adapted to each task. One approach adapts only the top, or head, of the classifier leaving the feature extractor fixed [4, 5]. While useful in simple settings, this approach is prone to under-fitting when the task distribution is heterogeneous [6]. Alternatively, we can adapt all parameters in the feature extractor [7, 8], thereby increasing fitting capacity, but incurring a computation cost and opening the door to over-fitting in the low-shot regime. What is needed is a middle ground which strikes a balance between model capacity and reliability of the adaptation. (ii) The adaptation mechanism. Many approaches use gradient-based adaptation [7, 9]. While this approach can incorporate training data in a very flexible way, it is computationally inefficient at test-time, may require expertise to tune the optimization procedure, and is again prone to over-fitting. Conversely, function approximators can be used to directly map training data to the desired parameters (we refer to this as amortization) [5, 10]. This yields fixed-cost adaptation mechanisms, and enables greater sharing across training tasks.
However, it may under-fit if the function approximation is not sufficiently flexible. On the other hand, high-capacity function approximators require a large number of training tasks to be learned. We introduce a modelling class that is well-positioned with respect to these two trade-offs for the multi-task classification setting, called Conditional Neural Adaptive Processes (CNAPS). CNAPS directly model the desired predictive distribution [11, 12], thereby introducing a conditional neural processes (CNPs) [13] approach to the multi-task classification setting. CNAPS handles varying-way classification tasks and introduces a parametrization and training procedure enabling the model to learn to adapt the feature representation for classification of diverse tasks at test time. CNAPS utilize i) a classification model with shared global parameters and a small number of task-specific parameters. We demonstrate that by identifying a small set of key parameters, the model can balance the trade-off between flexibility and robustness. ii) A rich adaptation neural network with a novel auto-regressive parameterization that avoids under-fitting while proving easy to train in practice with existing datasets [6]. In Section 5 we evaluate CNAPS. Recently, Triantafillou et al. [6] proposed META-DATASET, a few-shot classification benchmark that addresses the issue of homogeneous train and test-time tasks and more closely resembles real-world few-shot multi-task learning. Many of the approaches that achieved excellent performance on simple benchmarks struggle with this collection of diverse tasks. In contrast, we show that CNAPS achieve state-of-the-art performance on the META-DATASET benchmark, often by comfortable margins and at a fraction of the time required by competing methods. Finally, we showcase the versatility of the model class by demonstrating that CNAPS can be applied “out of the box” to continual learning and active learning.

2 Model Design We consider a setup where a large number of training tasks are available, each composed of a set of inputs x and labels y. The data for task τ includes a context set $D^\tau = \{(\mathbf{x}^\tau_n, \mathbf{y}^\tau_n)\}_{n=1}^{N_\tau}$, with inputs and outputs observed, and a target set $\{(\mathbf{x}^{\tau*}_m, \mathbf{y}^{\tau*}_m)\}_{m=1}^{M_\tau}$ for which we wish to make predictions ($\mathbf{y}^{\tau*}$ are only observed during training). CNPs [13] construct predictive distributions given $\mathbf{x}^*$ as: $p(\mathbf{y}^* \mid \mathbf{x}^*, \boldsymbol{\theta}, D^\tau) = p(\mathbf{y}^* \mid \mathbf{x}^*, \boldsymbol{\theta}, \boldsymbol{\psi}^\tau = \boldsymbol{\psi}_\phi(D^\tau))$. (1) Here θ are global classifier parameters shared across tasks. ψτ are local task-specific parameters, produced by a function ψφ(·) that acts on Dτ. ψφ(·) has another set of global parameters φ called adaptation network parameters. θ and φ are the learnable parameters in the model (see Figure 1a). (Source code is available at https://github.com/cambridge-mlg/cnaps.)
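To make the structure of Eq. (1) concrete, here is a minimal PyTorch-style sketch of the overall prediction pattern: an adaptation network maps the context set to task-specific parameters in a single forward pass, and a classifier uses them to score the target inputs. The module and argument names (adaptation_net, classifier) are illustrative assumptions, not the names used in the released implementation.

```python
import torch
import torch.nn as nn

class CNAPsStylePredictor(nn.Module):
    """Sketch of Eq. (1): task-specific parameters psi are produced by an
    adaptation network from the context set, then consumed by the classifier."""
    def __init__(self, adaptation_net: nn.Module, classifier: nn.Module):
        super().__init__()
        self.adaptation_net = adaptation_net  # psi_phi(.), global parameters phi
        self.classifier = classifier          # p(y* | x*, theta, psi), global parameters theta

    def forward(self, context_x, context_y, target_x):
        # psi^tau = psi_phi(D^tau): one forward pass, no gradient-based adaptation.
        psi = self.adaptation_net(context_x, context_y)
        # Class log-probabilities for the target inputs under the adapted classifier.
        logits = self.classifier(target_x, psi)
        return torch.log_softmax(logits, dim=-1)
```

The key point this sketch illustrates is that test-time adaptation is amortized: producing psi costs one forward pass through the adaptation network, rather than an inner optimization loop.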
CNAPS is a model class that specializes the CNP framework for the multi-task classification setting. The model class is characterized by a number of design choices, made specifically for the multi-task image classification setting. CNAPS employ global parameters θ that are trained offline to capture high-level features, facilitating transfer and multi-task learning. Whereas CNPs define ψτ to be a fixed dimensional vector used as an input to the model, CNAPS instead let ψτ be specific parameters of the model itself. This increases the flexibility of the classifier, enabling it to model a broader range of input / output distributions. We discuss our choices (and associated trade-offs) for these parameters below. Finally, CNAPS employ a novel auto-regressive parameterization of ψφ(·) that significantly improves performance. An overview of CNAPS and its key components is illustrated in Figure 1b.

2.1 Specification of the classifier: global θ and task-specific parameters ψτ We begin by specifying the classifier’s global parameters θ followed by how these are adapted by the local parameters ψτ. Global Classifier Parameters. The global classifier parameters will parameterize a feature extractor fθ(x) whose output is fed into a linear classifier, described below. A natural choice for fθ(·) in the image setting is a convolutional neural network, e.g., a ResNet [14]. In what follows, we assume that the global parameters θ are fixed and known. In Section 3 we discuss the training of θ. Task-Specific Classifier Parameters: Linear Classification Weights. The final classification layer must be task-specific as each task involves distinguishing a potentially unique set of classes. We use a task-specific affine transformation of the feature extractor output, followed by a softmax. The task-specific weights are denoted $\boldsymbol{\psi}^\tau_w \in \mathbb{R}^{d_f \times C^\tau}$ (suppressing the biases to simplify notation), where $d_f$ is the dimension of the feature extractor output fθ(x) and $C^\tau$ is the number of classes in task τ. Task-Specific Classifier Parameters: Feature Extractor Parameters. A sufficiently flexible model must have capacity to adapt its feature representation fθ(·) as well as the classification layer (e.g. compare the optimal features required for ImageNet versus Omniglot). We therefore introduce a set of local feature extractor parameters ψτf, and denote fθ(·) the unadapted feature extractor, and fθ(·; ψτf) the feature extractor adapted to task τ. It is critical in few-shot multi-task learning to adapt the feature extractor in a parameter-efficient manner. Unconstrained adaptation of all the feature extractor parameters (e.g. by fine-tuning [9]) gives flexibility, but it is also slow and prone to over-fitting [6]. Instead, we employ linear modulation of the convolutional feature maps as proposed by Perez et al. [15], which adapts the feature extractor through a relatively small number of task-specific parameters. A Feature-wise Linear Modulation (FiLM) layer [15] scales and shifts the $i$th unadapted feature map $\mathbf{f}_i$ in the feature extractor, $\mathrm{FiLM}(\mathbf{f}_i; \gamma^\tau_i, \beta^\tau_i) = \gamma^\tau_i \mathbf{f}_i + \beta^\tau_i$, using two task-specific parameters, $\gamma^\tau_i$ and $\beta^\tau_i$ (a minimal code sketch of such a FiLM-modulated block is given just before Section 2.2). Figure 2a illustrates a FiLM layer operating on a convolutional layer, and Figure 2b illustrates how a FiLM layer can be added to a standard Residual network block [14]. A key advantage of FiLM layers is that they enable expressive feature adaptation while adding only a small number of parameters [15]. For example, in our implementation we use a ResNet18 with FiLM layers after every convolutional layer. The set of task-specific FiLM parameters ($\boldsymbol{\psi}^\tau_f = \{\gamma^\tau_i, \beta^\tau_i\}$) constitute fewer than 0.7% of the parameters in the model. Despite this, as we show in Section 5, they allow the model to adapt to a broad class of datasets.

[Figure 3: Implementation of the functional representation of the class-specific parameters ψw. Adapted feature representations fθ(x; ψτf) are split by class label, summarized per class by a shared mean-pooling network, and mapped to the weights and biases (wc, bc) of the linear classifier, followed by a softmax. In this parameterization, ψcw are the linear classification parameters for class c, and φw are the learnable parameters.]
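As promised above, here is a minimal PyTorch-style sketch of a FiLM-modulated convolutional block. The block structure (Conv → BatchNorm → FiLM → ReLU), module name, and example sizes are illustrative assumptions rather than the authors' exact ResNet18 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMConvBlock(nn.Module):
    """Conv -> BatchNorm -> FiLM -> ReLU.
    FiLM(f_i; gamma_i, beta_i) = gamma_i * f_i + beta_i, applied per channel."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x, gamma, beta):
        # gamma, beta: task-specific vectors of shape (out_channels,)
        h = self.bn(self.conv(x))
        h = gamma.view(1, -1, 1, 1) * h + beta.view(1, -1, 1, 1)
        return F.relu(h)

# Example: a 64 -> 64 channel block modulated by task-specific FiLM parameters.
block = FiLMConvBlock(64, 64)
x = torch.randn(8, 64, 32, 32)
gamma, beta = torch.ones(64), torch.zeros(64)
out = block(x, gamma, beta)   # identity modulation in this toy example
```

Because only gamma and beta change between tasks, the per-task footprint of this block is two vectors per layer, consistent with the sub-1% parameter overhead reported above.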
2.2 Computing the local parameters via adaptation networks The previous sections have specified the form of the classifier p(y∗|x∗, θ, ψτ) in terms of the global and task-specific parameters, θ and ψτ = {ψτf, ψτw}. The local parameters could now be learned separately for every task τ via optimization. While in practice this is feasible for small numbers of tasks (see e.g., [16, 17]), this approach is computationally demanding, requires expert oversight (e.g. for tuning early stopping), and can over-fit in the low-data regime. Instead, CNAPS uses a function, such as a neural network, that takes the context set Dτ as an input and returns the task-specific parameters, ψτ = ψφ(Dτ). The adaptation network has parameters φ that will be trained on multiple tasks to learn how to produce local parameters that result in good generalisation, a form of meta-learning. Sacrificing some of the flexibility of the optimisation approach, this method is comparatively cheap computationally (only involving a forward pass through the adaptation network), automatic (with no need for expert oversight), and employs explicit parameter sharing (via φ) across the training tasks. Adaptation Network: Linear Classifier Weights. CNAPS represents the linear classifier weights ψτw as a parameterized function of the form $\boldsymbol{\psi}^\tau_w = \psi_w(D^\tau; \boldsymbol{\phi}_w, \boldsymbol{\psi}_f, \boldsymbol{\theta})$, denoted $\psi_w(D^\tau)$ for brevity. There are three challenges with this approach: first, the dimensionality of the weights depends on the task (ψτw is a matrix with a column for each class, see Figure 3) and thus the network must output parameters of different dimensionalities; second, the number of datapoints in Dτ will also depend on the task and so the network must be able to take inputs of variable cardinality; third, we would like the model to support continual learning. To handle the first two challenges we follow Gordon et al. [5]. First, each column of the weight matrix is generated independently from the context points from that class, $\boldsymbol{\psi}^\tau_w = [\psi_w(D^\tau_1), \ldots, \psi_w(D^\tau_C)]$, an approach which scales to arbitrary numbers of classes. Second, we employ a permutation invariant architecture [18, 19] for ψw(·) to handle the variable input cardinality (see Appendix E for details). Third, as permutation invariant architectures can be incrementally updated [20], continual learning is supported (as discussed in Section 5). Intuitively, the classifier weights should be determined by the representation of the data points emerging from the adapted feature extractor. We therefore input the adapted feature representation of the data points into the network, rather than the raw data points (hence the dependency of ψw on ψf and θ). To summarize, ψw(·) is a function on sets that accepts as input a set of adapted feature representations from $D^\tau_c$, and outputs the $c$-th column of the linear classification matrix, i.e., $\psi_w(D^\tau_c; \boldsymbol{\phi}_w, \boldsymbol{\psi}_f, \boldsymbol{\theta}) = \psi_w(\{f_\theta(\mathbf{x}_m; \boldsymbol{\psi}_f) \mid \mathbf{x}_m \in D^\tau, \mathbf{y}_m = c\}; \boldsymbol{\phi}_w)$. (2) Here φw are learnable parameters of ψw(·). See Figure 3 for an illustration.
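The following is a minimal sketch of Eq. (2): a permutation-invariant (mean-pooling) network that maps the adapted context features of one class to that class's weight column and bias. The module name, the simple linear head, and the example dimensions are illustrative assumptions; the paper's actual set encoder (Appendix E) is richer.

```python
import torch
import torch.nn as nn

class ClassifierWeightGenerator(nn.Module):
    """psi_w(D_c): map the set of adapted context features of class c to the
    c-th column of the linear classifier (weights and bias), as in Eq. (2)."""
    def __init__(self, feature_dim: int):
        super().__init__()
        # Shared network applied after mean pooling (permutation invariant).
        self.net = nn.Linear(feature_dim, feature_dim + 1)  # weight column + bias

    def forward(self, class_features):
        # class_features: (k_c, d_f) adapted features f_theta(x; psi_f) for one class.
        pooled = class_features.mean(dim=0)          # order-invariant summary
        out = self.net(pooled)
        return out[:-1], out[-1]                     # (w_c, b_c)

# Build the full classification matrix column by column (10-way, 5-shot toy example).
d_f, generator = 512, ClassifierWeightGenerator(512)
features_by_class = [torch.randn(5, d_f) for _ in range(10)]
weights, biases = zip(*(generator(f) for f in features_by_class))
W, b = torch.stack(weights, dim=1), torch.stack(biases)   # (d_f, C), (C,)
logits = torch.randn(3, d_f) @ W + b                       # scores for 3 target points
```

Because each column is generated independently from its own class's context set, the same generator handles any number of classes, which is what makes varying-way tasks straightforward.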
Adaptation Network: Feature Extractor Parameters. CNAPS represents the task-specific feature extractor parameters ψτf, comprising the parameters of the FiLM layers $\gamma^\tau$ and $\beta^\tau$ in our implementation, as a parameterized function of the context set Dτ. Thus, $\psi_f(\cdot; \boldsymbol{\phi}_f, \boldsymbol{\theta})$ is a collection of functions (one for each FiLM layer) with parameters φf, many of which are shared across functions. We denote the function generating the parameters for the $i$th FiLM layer $\psi^i_f(\cdot)$ for brevity. Our experiments (Section 5) show that this mapping requires careful parameterization. We propose a novel parameterization that improves performance in complex settings with diverse datasets. Our implementation contains two components: a task-specific representation that provides context about the task to all layers of the feature extractor (denoted $\mathbf{z}^\tau_G$), and an auto-regressive component that provides information to deeper layers in the feature extractor concerning how shallower layers have adapted to the task (denoted $\mathbf{z}^i_{AR}$). The input to the $\psi^i_f(\cdot)$ network is $\mathbf{z}^i = (\mathbf{z}^\tau_G, \mathbf{z}^i_{AR})$. $\mathbf{z}^\tau_G$ is computed for every task τ by passing the inputs $\mathbf{x}^\tau_n$ through a global set encoder g with parameters in φf. To adapt the $l$th layer in the feature extractor, it is useful for the system to have access to the representation of task-relevant inputs from layer $l-1$. While $\mathbf{z}_G$ could in principle encode how layer $l-1$ has adapted, we opt to provide this information directly to the adaptation network adapting layer $l$ by passing the adapted activations from layer $l-1$. The auto-regressive component $\mathbf{z}^i_{AR}$ is computed by processing the adapted activations of the previous convolutional block with a layer-specific set encoder (except for the first residual block, whose auto-regressive component is given by the unadapted initial pre-processing stage in the ResNet). Both the global and all layer-specific set encoders are implemented as permutation invariant functions [18, 19] (see Appendix E for details). The full parameterization is illustrated in Figure 4, and the architecture of the $\psi^i_f(\cdot)$ networks is illustrated in Figure 5.
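Before moving to training, here is a rough sketch of how one FiLM-parameter generator $\psi^i_f$ could consume the concatenation of the global task representation $\mathbf{z}_G$ and the auto-regressive summary $\mathbf{z}_{AR}$ of the previously adapted block. The hidden width, the near-identity initialization, and all names are assumptions made for illustration; the actual architecture is described in Appendix E.

```python
import torch
import torch.nn as nn

class FiLMParamGenerator(nn.Module):
    """psi_f^i(z): produce (gamma_i, beta_i) for one FiLM layer from
    z = (z_G, z_AR), the global task representation concatenated with the
    auto-regressive summary of the previous adapted block."""
    def __init__(self, z_g_dim: int, z_ar_dim: int, num_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_g_dim + z_ar_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2 * num_channels),
        )

    def forward(self, z_g, z_ar):
        gamma_beta = self.net(torch.cat([z_g, z_ar], dim=-1))
        gamma, beta = gamma_beta.chunk(2, dim=-1)
        # Start close to the identity modulation: gamma ~ 1, beta ~ 0.
        return 1.0 + gamma, beta

# Example: z_G from a global set encoder, z_AR from the previous block's activations.
gen = FiLMParamGenerator(z_g_dim=64, z_ar_dim=64, num_channels=256)
gamma, beta = gen(torch.randn(64), torch.randn(64))
```

One such generator per FiLM layer, fed layer-by-layer with the previous block's adapted activations, gives the auto-regressive structure described above.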
3 Model Training The previous section has specified the model (see Figure 1b for a schematic). We now describe how to train the global classifier parameters θ and the adaptation network parameters φ = {φf, φw}. Training the global classifier parameters θ. A natural approach to training the model (originally employed by CNPs [13]) would be to maximize the likelihood of the training data jointly over θ and φ. However, experiments (detailed in Appendix D.3) showed that it is crucially important to adopt a two-stage process instead. In the first stage, θ are trained on a large dataset (e.g., the training set of ImageNet [21, 6]) in a full-way classification procedure, mirroring standard pre-training. Second, θ are fixed and φ are trained using episodic training over all meta-training datasets in the multi-task setting. We hypothesize that two-stage training is important for two reasons: (i) during the second stage, φf are trained to adapt fθ(·) to tasks τ by outputting ψτf. As θ has far more capacity than ψτf, if they are trained in the context of all tasks, there is no need for ψτf to adapt the feature extractor, resulting in little-to-no training signal for φf and poor generalisation. (ii) Allowing θ to adapt during the second phase violates the principle of “train as you test”, i.e., when test tasks are encountered, θ will be fixed, so it is important to simulate this scenario during training. Finally, fixing θ during meta-training is desirable as it results in a dramatic decrease in training time. Training the adaptation network parameters φ. Following the work of Garnelo et al. [13], we train φ with maximum likelihood. An unbiased stochastic estimator of the log-likelihood is: $\hat{L}(\boldsymbol{\phi}) = \frac{1}{MT} \sum_{m,\tau} \log p\left(\mathbf{y}^{*\tau}_m \mid \mathbf{x}^{*\tau}_m, \boldsymbol{\psi}_\phi(D^\tau), \boldsymbol{\theta}\right)$, (3) where $\{\mathbf{y}^{*\tau}_m, \mathbf{x}^{*\tau}_m, D^\tau\} \sim \hat{P}$, with $\hat{P}$ representing the data distribution (e.g., sampling tasks and splitting them into disjoint context data $D^\tau$ and target data $\{(\mathbf{x}^{*\tau}_m, \mathbf{y}^{*\tau}_m)\}_{m=1}^{M_\tau}$). Maximum likelihood training therefore naturally uses the episodic context / target splits often used in meta-learning. In our experiments we use the protocol defined by Triantafillou et al. [6] and META-DATASET for this sampling procedure. Algorithm A.1 details computation of the stochastic estimator for a single task.

4 Related Work Our work frames multi-task classification as directly modelling the predictive distribution p(y∗|x∗, ψ(Dτ)). This perspective allows previous work [7, 5, 15, 22, 16, 17, 23, 4, 6, 24, 9, 25, 26] to be organised in terms of i) the choice of the parameterization of the classifier (and in particular the nature of the local parameters), and ii) the function used to compute the local parameters from the training data. This space is illustrated in Figure 6, and further elaborated upon in Appendix B. One of the inspirations for our work is conditional neural processes (CNPs) [13]. CNPs directly model the predictive distribution p(y∗|x∗, ψ(Dτ)) and train the parameters using maximum likelihood. Whereas previous work on CNPs has focused on homogeneous regression and classification datasets and fairly simple models, here we study multiple heterogeneous classification datasets and use a more complex model to handle this scenario. In particular, whereas the original CNP approach to classification required pre-specifying the number of classes in advance, CNAPS handles varying-way classification tasks, which is required for e.g. the META-DATASET benchmark. Further, CNAPS employs a parameter-sharing hierarchy that parameterizes the feature extractor. This contrasts with the original CNP approach, which shared all parameters across tasks and used latent inputs to the decoder to adapt to new tasks. Finally, CNAPS employs a meta-training procedure geared towards learning to adapt to diverse tasks. Similarly, our work can be viewed as a deterministic limit of ML-PIP [5], which employs a distributional treatment of the local parameters ψ. A model with design choices closely related to CNAPS is TADAM [27]. TADAM employs a similar set of local parameters, allowing for adaptation of both the feature extractor and classification layer. However, it uses a far simpler adaptation network (lacking auto-regressive structure) and an expensive and ad-hoc training procedure. Moreover, TADAM was applied to simple few-shot learning benchmarks (e.g. CIFAR100 and mini-ImageNet) and sees little gain from feature extractor adaptation. In contrast, we see a large benefit from adapting the feature extractor. This may in part reflect the differences in the two models, but we observe that feature extractor adaptation has the largest impact when used to adapt to different datasets and that two-stage training is required to see this. Further differences are our usage of the CNP framework and the flexible deployment of CNAPS to continual learning and active learning (see Section 5).

5 Experiments and Results The experiments target three key questions: (i) Can CNAPS improve performance in multi-task few-shot learning? (ii) Does the use of an adaptation network benefit computational efficiency and data efficiency?
(iii) Can CNAPS be deployed directly to complex learning scenarios like continual learning and active learning? The experiments use the following modelling choices (see Appendix E for full details). While CNAPS can utilize any feature extractor, a ResNet18 [14] is used throughout to enable fair comparison with Triantafillou et al. [6]. To ensure that each task is handled independently, batch normalization statistics [28] are learned (and fixed) during the pre-training phase for θ. Actual batch statistics of the test data are never used during meta-training or testing. Few Shot Classification. The first experiment tackles a demanding few-shot classification challenge called META-DATASET [6]. META-DATASET is composed of ten (eight train, two test) image classification datasets. The challenge constructs few-shot learning tasks by drawing from the following distribution. First, one of the datasets is sampled uniformly; second, the “way” and “shot” are sampled randomly according to a fixed procedure; third, the classes and context / target instances are sampled. Where a hierarchical structure exists in the data (ILSVRC or OMNIGLOT), task-sampling respects the hierarchy. In the meta-test phase, the identity of the original dataset is not revealed and the tasks must be treated independently (i.e. no information can be transferred between them). Notably, the meta-training set comprises a disjoint and dissimilar set of classes from those used for meta-test. Full details are available in Appendix C.1 and [6]. Triantafillou et al. [6] consider two stage training: an initial stage that trains a feature extractor in a standard classification setting, and a meta-training stage of all parameters in an episodic regime. For the meta-training stage, they consider two settings: meta-training only on the META-DATASET version of ILSVRC, and on all meta-training data. We focus on the latter as CNAPS rely on training data from a variety of training tasks to learn to adapt, but provide results for the former in Appendix D.1. We pre-train θ on the meta-training set of the META-DATASET version of ILSVRC, and meta-train φ in an episodic fashion using all meta-training data. We compare CNAPS to models considered by Triantafillou et al. [6], including their proposed method (Proto-MAML) in Table 1. We meta-test CNAPS on three additional held-out datasets: MNIST [29], CIFAR10 [30], and CIFAR100 [30]. As an ablation study, we compare a version of CNAPS that does not make use of the auto-regressive component zAR, and a version that uses no feature extractor adaptation. In our analysis of Table 1, we distinguish between two types of generalization: (i) unseen tasks (classes) in meta-training datasets, and (ii) unseen datasets. Unseen tasks: CNAPS achieve significant improvements over existing methods on seven of the eight datasets. The exception is the TEXTURES dataset, which has only seven test classes and accuracy is highly sensitive to the train / validation / test class split. The ablation study demonstrates that removing zAR from the feature extractor adaptation degrades accuracy in most cases, and that removing all feature extractor adaptation results in drastic reductions in accuracy. Unseen datasets: CNAPS-models outperform all competitive models with the exception of FINETUNE on the TRAFFIC SIGNS dataset. Removing zAR from the feature extractor decreases accuracy and removing the feature extractor adaptation entirely significantly impairs performance. 
The degradation is particularly pronounced when the held-out dataset differs substantially from the dataset used to pretrain θ, e.g. for MNIST. Note that the superior results when using the auto-regressive component cannot be attributed to increased network capacity alone. In Appendix D.4 we demonstrate that CNAPS yields superior classification accuracy when compared to parallel residual adapters [17], even though CNAPS requires significantly less network capacity in order to adapt the feature extractor to a given task. Additional results: Results when meta-training only on the META-DATASET version of ILSVRC are given in Table D.3. In Appendix D.2, we visualize the task encodings and parameters, demonstrating that the model is able to learn meaningful task- and dataset-level representations and parameterizations. The results support the hypothesis that learning to adapt key parts of the network is more robust and achieves significantly better performance than existing approaches. FiLM Parameter Learning Performance: Speed-Accuracy Trade-off. CNAPS generate FiLM layer parameters for each task τ at test time using the adaptation network ψf(Dτ). It is also possible to learn the FiLM parameters via gradient descent (see [16, 17]). Here we compare CNAPS to this approach. Figure 7 shows plots of 5-way classification accuracy versus time for four held-out datasets as the number of shots was varied. For gradient descent, we used a fixed learning rate of 0.001 and took 25 steps for each point. The overall time required to produce the plot was 1274 and 7214 seconds for the CNAPS and gradient approaches, respectively, on an NVIDIA Tesla P100-PCIE-16GB GPU. CNAPS is at least 5 times faster at test time than gradient-based optimization, requiring only a single forward pass through the network, while gradient-based approaches require multiple forward and backward passes. Further, the accuracy achieved with adaptation networks is significantly higher for fewer shots as it protects against over-fitting. For large numbers of shots, gradient descent catches up, albeit slowly. Complex Learning Scenarios: Continual Learning. In continual learning [40] new tasks appear over time and existing tasks may change. The goal is to adapt accordingly, but without retaining old data, which is challenging for artificial systems. To demonstrate the versatility of CNAPS, we show that, although it has not been explicitly trained for continual learning, we are able to apply the same model trained for the few-shot classification experiments (without the auto-regressive component) to standard continual learning benchmarks on held-out datasets: Split MNIST [41] and Split CIFAR100 [42]. We modify the model to compute running averages for the representations of both $\psi^\tau_w$ and $\psi^\tau_f$ (see Appendix F for further details; a minimal sketch of this running-average update is given at the end of this discussion); in this way, it performs incremental updates using the new data and the old model, and does not need to access old data. Figure 8 (left) shows the accumulated multi- and single-head [42] test accuracy averaged over 30 runs (further results and more detailed figures are in Appendix G). Figure 8 (right) shows average results at the final task compared to SI [41], EWC [43], VCL [44], and Riemannian Walk [42]. Figure 8 demonstrates that CNAPS naturally resists catastrophic forgetting [43] and compares favourably to competing methods, despite the fact that it was not exposed to these datasets during training, observes orders of magnitude fewer examples, and was not trained explicitly to perform continual learning. CNAPS performs similarly to, or better than, the state-of-the-art Riemannian Walk method, which departs from the pure continual learning setting by maintaining a small number of training samples across tasks. Conversely, CNAPS has the advantage of being exposed to a larger range of datasets and can therefore leverage task transfer. We emphasize that this is not meant to be an “apples-to-apples” comparison, but rather, the goal is to demonstrate the out-of-the-box versatility and strong performance of CNAPS in new domains and learning scenarios.
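As referenced above, the incremental-update idea can be sketched as a simple running mean over the per-class representations fed to the weight generators, so that old data never needs to be stored. The exact quantities averaged are specified in Appendix F (not reproduced here); the function below is only an illustrative assumption of that pattern.

```python
import torch

def update_running_mean(mean, count, new_features):
    """Fold a new batch of context features into a running per-class mean,
    enabling incremental (continual) updates without revisiting old data.
    This is a sketch of the running-average idea, not the Appendix F recipe."""
    new_count = count + new_features.shape[0]
    new_sum = mean * count + new_features.sum(dim=0)
    return new_sum / new_count, new_count

# Example: two tasks arrive sequentially and contribute to the same class summary.
mean, count = torch.zeros(512), 0
mean, count = update_running_mean(mean, count, torch.randn(5, 512))   # task 1
mean, count = update_running_mean(mean, count, torch.randn(8, 512))   # task 2
```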
Complex Learning Scenarios: Active Learning. Active learning [46, 47] requires accurate, data-efficient learning that returns well-calibrated uncertainty estimates. Figure 9 compares the performance of CNAPS and prototypical networks using two standard active learning acquisition functions (variation ratios and predictive entropy [46]) against random acquisition on the FLOWERS dataset and three representative held-out languages from OMNIGLOT (performance on all languages is presented in Appendix H). Figure 9 and Appendix H show that CNAPS achieves higher accuracy on average than prototypical networks. Moreover, CNAPS achieves significant improvements over random acquisition, whereas prototypical networks do not. These tests indicate that CNAPS is more accurate and suggest that CNAPS has better-calibrated uncertainty estimates than prototypical networks.

6 Conclusions This paper has introduced CNAPS, an automatic, fast and flexible modelling approach for multi-task classification. We have demonstrated that CNAPS achieve state-of-the-art performance on the META-DATASET challenge, and can be deployed “out-of-the-box” to diverse learning scenarios such as continual and active learning, where they are competitive with the state-of-the-art. Future avenues of research are to explore the design space by introducing gradients and function approximation to the adaptation mechanisms, as well as to generalize the approach to distributional extensions of CNAPS [48, 49]. Acknowledgments The authors would like to thank Ambrish Rawat for helpful discussions and David Duvenaud, Wessel Bruinsma, Will Tebbutt, Adrià Garriga Alonso, Eric Nalisnick, and Lyndon White for their insightful comments and feedback. Richard E. Turner is supported by Google, Amazon, Improbable and EPSRC grants EP/M0269571 and EP/L000776/1.
1. What is the main contribution of the paper in few-shot learning? 2. How does the proposed method adapt to new tasks at testing time? 3. Can you explain the two networks used in the proposed method? 4. What is the significance of the superior performance demonstrated by the paper? 5. Are there any typos or inconsistencies in the notation used in the paper?
Review
Review In this paper, the authors propose a multi-task learning approach for few-shot learning using deep neural networks. The proposed method can automatically adapt to new tasks at testing time after an initial multi-task training phase. To make this possible and to achieve better performance than existing methods, the authors propose to use two networks to learn the task-specific parameters of the classifier. One produces the task-specific parameters of the final classification layer. The other learns task-specific parameters that adapt the common feature extractor (a network) shared among tasks to each task. The superiority of this method is demonstrated not only by better-than-state-of-the-art performance on few-shot learning problems, but also by competitive performance on continual learning and active learning tasks. Even though there have been several existing works on few-shot learning, as demonstrated by the empirical results in the paper, this work significantly moves the state of the art. The paper is well organized and easy to follow. I found the illustrations, i.e., Figures 1-3, very helpful for understanding the architecture of the proposed method. There is just one typo that I noticed on line 246: D_{\tau} — should it be D^{\tau} to keep it consistent with the notation in the rest of the paper?
NIPS
Title Fast and Flexible Multi-Task Classification using Conditional Neural Adaptive Processes Abstract The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature. The resulting approach, called CNAPS, comprises a classifier whose parameters are modulated by an adaptation network that takes the current task’s dataset as input. We demonstrate that CNAPS achieves state-of-theart results on the challenging META-DATASET benchmark indicating high-quality transfer-learning. We show that the approach is robust, avoiding both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPS is computationally efficient at test-time as it does not involve gradient based adaptation. Finally, we show that trained models are immediately deployable to continual learning and active learning where they can outperform existing approaches that do not leverage transfer learning. 1 Introduction We consider the development of general purpose image classification systems that can handle tasks from a broad range of data distributions, in both the low and high data regimes, without the need for costly retraining when new tasks are encountered. We argue that such systems require mechanisms that adapt to each task, and that these mechanisms should themselves be learned from a diversity of datasets and tasks at training time. This general approach relates to methods for meta-learning [1, 2] and few-shot learning [3]. However, existing work in this area typically considers homogeneous task distributions at train and test-time that therefore require only minimal adaptation. To handle the more challenging case of different task distributions we design a fully adaptive system, requiring specific design choices in the model and training procedure. Current approaches to meta-learning and few-shot learning for classification are characterized by two fundamental trade-offs. (i) The number of parameters that are adapted to each task. One approach adapts only the top, or head, of the classifier leaving the feature extractor fixed [4, 5]. While useful in simple settings, this approach is prone to under-fitting when the task distribution is heterogeneous [6]. Alternatively, we can adapt all parameters in the feature extractor [7, 8] thereby increasing ∗Authors contributed equally 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. fitting capacity, but incurring a computation cost and opening the door to over-fitting in the low-shot regime. What is needed is a middle ground which strikes a balance between model capacity and reliability of the adaptation. (ii) The adaptation mechanism. Many approaches use gradient-based adaptation [7, 9]. While this approach can incorporate training data in a very flexible way, it is computationally inefficient at test-time, may require expertise to tune the optimization procedure, and is again prone to over-fitting. Conversely, function approximators can be used to directly map training data to the desired parameters (we refer to this as amortization) [5, 10]. This yields fixed-cost adaptation mechanisms, and enables greater sharing across training tasks. 
However, it may under-fit if the function approximation is not sufficiently flexible. On the other hand, high-capacity function approximators require a large number of training tasks to be learned. We introduce a modelling class that is well-positioned with respect to these two trade-offs for the multi-task classification setting called Conditional Neural Adaptive Processes (CNAPS).2 CNAPS directly model the desired predictive distribution [11, 12], thereby introducing a conditional neural processes (CNPs) [13] approach to the multi-task classification setting. CNAPS handles varying way classification tasks and introduces a parametrization and training procedure enabling the model to learn to adapt the feature representation for classification of diverse tasks at test time. CNAPS utilize i) a classification model with shared global parameters and a small number of task-specific parameters. We demonstrate that by identifying a small set of key parameters, the model can balance the trade-off between flexibility and robustness. ii) A rich adaptation neural network with a novel auto-regressive parameterization that avoids under-fitting while proving easy to train in practice with existing datasets [6]. In Section 5 we evaluate CNAPS. Recently, Triantafillou et al. [6] proposed META-DATASET, a few-shot classification benchmark that addresses the issue of homogeneous train and test-time tasks and more closely resembles real-world few-shot multi-task learning. Many of the approaches that achieved excellent performance on simple benchmarks struggle with this collection of diverse tasks. In contrast, we show that CNAPS achieve state-of-the-art performance on the META-DATASET benchmark, often by comfortable margins and at a fraction of the time required by competing methods. Finally, we showcase the versatility of the model class by demonstrating that CNAPS can be applied “out of the box” to continual learning and active learning. 2 Model Design We consider a setup where a large number of training tasks are available, each composed of a set of inputs x and labels y. The data for task τ includes a context set Dτ = {(xτn,yτn)} Nτ n=1, with inputs and outputs observed, and a target set {(xτ∗m ,yτ∗m )} Mτ m=1 for which we wish to make predictions (y τ∗ are only observed during training). CNPs [13] construct predictive distributions given x∗ as: p (y∗|x∗,θ, Dτ ) = p (y∗|x∗,θ,ψτ = ψφ (Dτ )) . (1) Here θ are global classifier parameters shared across tasks. ψτ are local task-specific parameters, produced by a function ψφ(·) that acts on Dτ . ψφ(·) has another set of global parameters φ called adaptation network parameters. θ and φ are the learnable parameters in the model (see Figure 1a). 2Source code available at https://github.com/cambridge-mlg/cnaps. CNAPS is a model class that specializes the CNP framework for the multi-task classification setting. The model-class is characterized by a number of design choices, made specifically for the multi-task image classification setting. CNAPS employ global parameters θ that are trained offline to capture high-level features, facilitating transfer and multi-task learning. Whereas CNPs define ψτ to be a fixed dimensional vector used as an input to the model, CNAPS instead let ψτ be specific parameters of the model itself. This increases the flexibility of the classifier, enabling it to model a broader range of input / output distributions. We discuss our choices (and associated trade-offs) for these parameters below. 
Finally, CNAPS employ a novel auto-regressive parameterization of ψφ(·) that significantly improves performance. An overview of CNAPS and its key components is illustrated in Figure 1b. 2.1 Specification of the classifier: global θ and task-specific parameters ψτ We begin by specifying the classifier’s global parameters θ followed by how these are adapted by the local parameters ψτ . Global Classifier Parameters. The global classifier parameters will parameterize a feature extractor fθ(x) whose output is fed into a linear classifier, described below. A natural choice for fθ(·) in the image setting is a convolutional neural network, e.g., a ResNet [14]. In what follows, we assume that the global parameters θ are fixed and known. In Section 3 we discuss the training of θ. Task-Specific Classifier Parameters: Linear Classification Weights. The final classification layer must be task-specific as each task involves distinguishing a potentially unique set of classes. We use a task specific affine transformation of the feature extractor output, followed by a softmax. The task-specific weights are denoted ψτw ∈ Rdf×C τ (suppressing the biases to simplify notation), where df is the dimension of the feature extractor output fθ(x) and Cτ is the number of classes in task τ . Task-Specific Classifier Parameters: Feature Extractor Parameters. A sufficiently flexible model must have capacity to adapt its feature representation fθ(·) as well as the classification layer (e.g. compare the optimal features required for ImageNet versus Omiglot). We therefore introduce a set of local feature extractor parameters ψτf , and denote fθ(·) the unadapted feature extractor, and fθ(·;ψτf ) the feature extractor adapted to task τ . It is critical in few-shot multi-task learning to adapt the feature extractor in a parameter-efficient manner. Unconstrained adaptation of all the feature extractor parameters (e.g. by fine-tuning [9]) gives flexibility, but it is also slow and prone to over-fitting [6]. Instead, we employ linear modulation of the convolutional feature maps as proposed by Perez et al. [15], which adapts the feature extractor through a relatively small number of task specific parameters. A Feature-wise Linear Modulation (FiLM) layer [15] scales and shifts the ith unadapted feature map fi in the feature extractor FiLM(fi; γτi , β τ i ) = γ τ i fi+ β τ i using two task specific parameters, γ τ i and βτi . Figure 2a illustrates a FiLM layer operating on a convolutional layer, and Figure 2b illustrates how a FiLM layer can be added to a standard Residual network block [14]. A key advantage of FiLM layers is that they enable expressive feature adaptation while adding only a small number of parameters [15]. For example, in our implementation we use a ResNet18 with FiLM layers after every convolutional layer. The set of task specific FiLM parameters (ψτf = {γτi ,βτi }) constitute fewer than 0.7% of the parameters in the model. Despite this, as we show in Section 5, they allow the model to adapt to a broad class of datasets. Split 𝑓𝜃 (𝒙;𝝍𝑓 𝜏 ) by class label 𝑦 {𝑓𝜃 (𝒙;𝝍𝑓 𝜏 )} {𝒚} 𝝍𝑤(·; 𝜙𝑤 , 𝝍𝑓 𝜏 , 𝜽) {𝑓𝜃 (𝑥𝑘 𝑐; 𝝍𝑓 𝜏 )}𝑘=1 𝑘𝑐 (i.e. 𝑘𝑐 train examples from each class 𝑐) 𝑓𝜃 (𝒙 ∗; 𝝍𝑓 𝜏 ) Linear Classifier 𝑤 Softmax 𝑝 𝒚∗ 𝒙∗, 𝜽, 𝝍𝜏)𝑤1 𝑤𝑐 𝑤𝐶… 𝑏𝑐 𝑏1 𝑏𝐶 … … …Shared network for each class 𝑐 in C Mean Pooling 𝜙𝑏 𝜙𝑤 𝑧𝐶 𝜓𝑤 𝐷𝑖 𝜏 ≔ 𝑤𝑖; 𝑏𝑖 Figure 3: Implementation of functional representation of the class-specific parameters ψw. 
In this parameterization, ψcw are the linear classification parameters for class c, and φw are the learnable parameters. 2.2 Computing the local parameters via adaptation networks The previous sections have specified the form of the classifier p (y∗|x∗,θ,ψτ ) in terms of the global and task specific parameters, θ and ψτ = {ψτf ,ψτw}. The local parameters could now be learned separately for every task τ via optimization. While in practice this is feasible for small numbers of tasks (see e.g., [16, 17]), this approach is computationally demanding, requires expert oversight (e.g. for tuning early stopping), and can over-fit in the low-data regime. Instead, CNAPS uses a function, such as a neural network, that takes the context set Dτ as an input and returns the task-specific parameters, ψτ = ψφ (Dτ ). The adaptation network has parameters φ that will be trained on multiple tasks to learn how to produce local parameters that result in good generalisation, a form of meta-learning. Sacrificing some of the flexibility of the optimisation approach, this method is comparatively cheap computationally (only involving a forward pass through the adaptation network), automatic (with no need for expert oversight), and employs explicit parameter sharing (via φ) across the training tasks. Adaptation Network: Linear Classifier Weights. CNAPS represents the linear classifier weights ψτw as a parameterized function of the formψ τ w = ψw(D τ ;φw,ψf ,θ), denotedψw(Dτ ) for brevity. There are three challenges with this approach: first, the dimensionality of the weights depends on the task (ψτw is a matrix with a column for each class, see Figure 3) and thus the network must output parameters of different dimensionalities; second, the number of datapoints in Dτ will also depend on the task and so the network must be able to take inputs of variable cardinality; third, we would like the model to support continual learning. To handle the first two challenges we follow Gordon et al. [5]. First, each column of the weight matrix is generated independently from the context points from that class ψτw = [ψw (D τ 1 ) , . . . , ψw (D τ C)], an approach which scales to arbitrary numbers of classes. Second, we employ a permutation invariant architecture [18, 19] for ψw(·) to handle the variable input cardinality (see Appendix E for details). Third, as permutation invariant architectures can be incrementally updated [20], continual learning is supported (as discussed in Section 5). Intuitively, the classifier weights should be determined by the representation of the data points emerging from the adapted feature extractor. We therefore input the adapted feature representation of the data points into the network, rather than the raw data points (hence the dependency of ψw on ψf and θ). To summarize, ψw(·) is a function on sets that accepts as input a set of adapted feature representations from Dτc , and outputs the c th column of the linear classification matrix, i.e., ψw (D τ c ;φw,ψf ,θ) = ψw ({fθ (xm;ψf ) |xm ∈ Dτ ,ym = c};φw) . (2) Here φw are learnable parameters of ψw(·). See Figure 3 for an illustration. Adaptation Network: Feature Extractor Parameters. CNAPS represents the task-specific feature extractor parameters ψτf , comprising the parameters of the FiLM layers γ τ and βτ in our implementation, as a parameterized function of the context-set Dτ . Thus, ψf (·;φf ,θ) is a collection of functions (one for each FiLM layer) with parameters φf , many of which are shared across functions. 
We denote the function generating the parameters for the ith FiLM layer ψif (·) for brevity. Our experiments (Section 5) show that this mapping requires careful parameterization. We propose a novel parameterization that improves performance in complex settings with diverse datasets. Our implementation contains two components: a task-specific representation that provides context about the task to all layers of the feature extractor (denoted zτG), and an auto-regressive component that provides information to deeper layers in the feature extractor concerning how shallower layers have adapted to the task (denoted ziAR). The input to the ψ i f (·) network is zi = (zτG, ziAR). zτG is computed for every task τ by passing the inputs xτn through a global set encoder g with parameters in φf . To adapt the lth layer in the feature extractor, it is useful for the system to have access to the representation of task-relevant inputs from layer l− 1. While zG could in principle encode how layer l− 1 has adapted, we opt to provide this information directly to the adaptation network adapting layer l by passing the adapted activations from layer l−1. The auto-regressive component ziAR is computed by processing the adapted activations of the previous convolutional block with a layer-specific set encoder (except for the first residual block, whose auto-regressive component is given by the unadapted initial pre-processing stage in the ResNet). Both the global and all layer-specific set-encoders are implemented as permutation invariant functions [18, 19] (see Appendix E for details). The full parameterization is illustrated in Figure 4, and the architecture of ψif (·) networks is illustrated in Figure 5. 3 Model Training The previous section has specified the model (see Figure 1b for a schematic). We now describe how to train the global classifier parameters θ and the adaptation network parameters φ = {φf ,φw}. Training the global classifier parameters θ. A natural approach to training the model (originally employed by CNPs [13]) would be to maximize the likelihood of the training data jointly over θ and φ. However, experiments (detailed in Appendix D.3) showed that it is crucially important to adopt a two stage process instead. In the first stage, θ are trained on a large dataset (e.g., the training set of ImageNet [21, 6]) in a full-way classification procedure, mirroring standard pre-training. Second, θ are fixed and φ are trained using episodic training over all meta-training datasets in the multi-task setting. We hypothesize that two-stage training is important for two reasons: (i) during the second stage, φf are trained to adapt fθ(·) to tasks τ by outputting ψτf . As θ has far more capacity than ψτf , if they are trained in the context of all tasks, there is no need for ψτf to adapt the feature extractor, resulting in little-to-no training signal for φf and poor generalisation. (ii) Allowing θ to adapt during the second phase violates the principle of “train as you test”, i.e., when test tasks are encountered, θ will be fixed, so it is important to simulate this scenario during training. Finally, fixing θ during meta-training is desireable as it results in a dramatic decrease in training time. Training the adaptation network parameters φ. Following the work of Garnelo et al. [13], we train φ with maximum likelihood. 
An unbiased stochastic estimator of the log-likelihood is: L̂ (φ) = 1 MT ∑ m,τ log p (y∗τm |x∗τm ,ψφ (Dτ ) ,θ) , (3) where {y∗τm ,x∗τm , Dτ} ∼ P̂ , with P̂ representing the data distribution (e.g., sampling tasks and splitting them into disjoint context (Dτ ) and target data {(x∗τm ,y∗τm )} Mt m=1). Maximum likelihood training therefore naturally uses episodic context / target splits often used in meta-learning. In our experiments we use the protocol defined by Triantafillou et al. [6] and META-DATASET for this sampling procedure. Algorithm A.1 details computation of the stochastic estimator for a single task. 4 Related Work Our work frames multi-task classification as directly modelling the predictive distribution p(y∗|x∗,ψ(Dτ )). The perspective allows previous work [7, 5, 15, 22, 16, 17, 23, 4, 6, 24, 9, 25, 26] to be organised in terms of i) the choice of the parameterization of the classifier (and in particular the nature of the local parameters), and ii) the function used to compute the local parameters from the training data. This space is illustrated in Figure 6, and further elaborated upon in Appendix B. One of the inspirations for our work is conditional neural processes (CNPs) [13]. CNPs directly model the predictive distribution p(y∗|x∗,ψ(Dτ )) and train the parameters using maximum likelihood. Whereas previous work on CNPs has focused on homogeneous regression and classification datasets and fairly simple models, here we study multiple heterogeneous classification datasets and use a more complex model to handle this scenario. In particular, whereas the original CNP approach to classification required pre-specifying the number of classes in advance, CNAPS handles varying way classification tasks, which is required for e.g. the meta-dataset benchmark. Further, CNAPS employs a parameter-sharing hierarchy that parameterizes the feature extractor. This contrasts to the original CNP approach that shared all parameters across tasks, and use latent inputs to the decoder to adapt to new tasks. Finally, CNAPS employs a meta-training procedure geared towards learning to adapt to diverse tasks. Similarly, our work can be viewed as a deterministic limit of ML-PIP [5] which employs a distributional treatment of the local-parameters ψ. A model with design choices closely related to CNAPS is TADAM [27]. TADAM employs a similar set of local parameters, allowing for adaptation of both the feature extractor and classification layer. However, it uses a far simpler adaptation network (lacking auto-regressive structure) and an expensive and ad-hoc training procedure. Moreover, TADAM was applied to simple few-shot learning benchmarks (e.g. CIFAR100 and mini-ImageNet) and sees little gain from feature extractor adaptation. In contrast, we see a large benefit from adapting the feature extractor. This may in part reflect the differences in the two models, but we observe that feature extractor adaptation has the largest impact when used to adapt to different datasets and that two stage training is required to see this. Further differences are our usage of the CNP framework and the flexible deployment of CNAPS to continual learning and active learning (see Section 5). 5 Experiments and Results The experiments target three key questions: (i) Can CNAPS improve performance in multi-task few-shot learning? (ii) Does the use of an adaptation network benefit computational-efficiency and data-efficiency? 
(iii) Can CNAPS be deployed directly to complex learning scenarios like continual learning and active learning? The experiments use the following modelling choices (see Appendix E for full details). While CNAPS can utilize any feature extractor, a ResNet18 [14] is used throughout to enable fair comparison with Triantafillou et al. [6]. To ensure that each task is handled independently, batch normalization statistics [28] are learned (and fixed) during the pre-training phase for θ. Actual batch statistics of the test data are never used during meta-training or testing. Few Shot Classification. The first experiment tackles a demanding few-shot classification challenge called META-DATASET [6]. META-DATASET is composed of ten (eight train, two test) image classification datasets. The challenge constructs few-shot learning tasks by drawing from the following distribution. First, one of the datasets is sampled uniformly; second, the “way” and “shot” are sampled randomly according to a fixed procedure; third, the classes and context / target instances are sampled. Where a hierarchical structure exists in the data (ILSVRC or OMNIGLOT), task-sampling respects the hierarchy. In the meta-test phase, the identity of the original dataset is not revealed and the tasks must be treated independently (i.e. no information can be transferred between them). Notably, the meta-training set comprises a disjoint and dissimilar set of classes from those used for meta-test. Full details are available in Appendix C.1 and [6]. Triantafillou et al. [6] consider two stage training: an initial stage that trains a feature extractor in a standard classification setting, and a meta-training stage of all parameters in an episodic regime. For the meta-training stage, they consider two settings: meta-training only on the META-DATASET version of ILSVRC, and on all meta-training data. We focus on the latter as CNAPS rely on training data from a variety of training tasks to learn to adapt, but provide results for the former in Appendix D.1. We pre-train θ on the meta-training set of the META-DATASET version of ILSVRC, and meta-train φ in an episodic fashion using all meta-training data. We compare CNAPS to models considered by Triantafillou et al. [6], including their proposed method (Proto-MAML) in Table 1. We meta-test CNAPS on three additional held-out datasets: MNIST [29], CIFAR10 [30], and CIFAR100 [30]. As an ablation study, we compare a version of CNAPS that does not make use of the auto-regressive component zAR, and a version that uses no feature extractor adaptation. In our analysis of Table 1, we distinguish between two types of generalization: (i) unseen tasks (classes) in meta-training datasets, and (ii) unseen datasets. Unseen tasks: CNAPS achieve significant improvements over existing methods on seven of the eight datasets. The exception is the TEXTURES dataset, which has only seven test classes and accuracy is highly sensitive to the train / validation / test class split. The ablation study demonstrates that removing zAR from the feature extractor adaptation degrades accuracy in most cases, and that removing all feature extractor adaptation results in drastic reductions in accuracy. Unseen datasets: CNAPS-models outperform all competitive models with the exception of FINETUNE on the TRAFFIC SIGNS dataset. Removing zAR from the feature extractor decreases accuracy and removing the feature extractor adaptation entirely significantly impairs performance. 
The degradation is particularly pronounced when the held-out dataset differs substantially from the dataset used to pretrain θ, e.g. for MNIST. Note that the superior results when using the auto-regressive component cannot be attributed to increased network capacity alone. In Appendix D.4 we demonstrate that CNAPS yields superior classification accuracy when compared to parallel residual adapters [17], even though CNAPS requires significantly less network capacity in order to adapt the feature extractor to a given task. Additional results: Results when meta-training only on the META-DATASET version of ILSVRC are given in Table D.3. In Appendix D.2, we visualize the task encodings and parameters, demonstrating that the model is able to learn meaningful task- and dataset-level representations and parameterizations. The results support the hypothesis that learning to adapt key parts of the network is more robust and achieves significantly better performance than existing approaches. FiLM Parameter Learning Performance: Speed-Accuracy Trade-off. CNAPS generate FiLM layer parameters for each task τ at test time using the adaptation network ψ_f(D^τ). It is also possible to learn the FiLM parameters via gradient descent (see [16, 17]). Here we compare CNAPS to this approach. Figure 7 shows plots of 5-way classification accuracy versus time for four held-out datasets as the number of shots was varied. For gradient descent, we used a fixed learning rate of 0.001 and took 25 steps for each point. The overall time required to produce the plot was 1274 and 7214 seconds for CNAPS and gradient approaches, respectively, on an NVIDIA Tesla P100-PCIE-16GB GPU. CNAPS is at least 5 times faster at test time than gradient-based optimization, requiring only a single forward pass through the network, while gradient-based approaches require multiple forward and backward passes. Further, the accuracy achieved with adaptation networks is significantly higher for fewer shots, as it protects against over-fitting. For large numbers of shots, gradient descent catches up, albeit slowly. Complex Learning Scenarios: Continual Learning. In continual learning [40] new tasks appear over time and existing tasks may change. The goal is to adapt accordingly, but without retaining old data, which is challenging for artificial systems. To demonstrate the versatility of CNAPS, we show that, although the model has not been explicitly trained for continual learning, we are able to apply the same model trained for the few-shot classification experiments (without the auto-regressive component) to standard continual learning benchmarks on held-out datasets: Split MNIST [41] and Split CIFAR100 [42]. We modify the model to compute running averages for the representations of both ψ^τ_w and ψ^τ_f (see Appendix F for further details); in this way the model performs incremental updates using the new data and the old model, and does not need to access old data.
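Because the class representations are mean-pooled context features, the running-average update just described can be carried out without storing past data. The following is a minimal sketch of that idea (the class and variable names are ours, not the released implementation, and the pooled mean would be fed to the fixed adaptation networks exactly as in the few-shot setting):

import numpy as np

class RunningClassRepresentation:
    """Incrementally maintained mean of adapted context features for one class."""
    def __init__(self, feature_dim):
        self.sum = np.zeros(feature_dim)   # running sum of adapted features seen so far
        self.count = 0                     # number of context examples seen so far

    def update(self, new_features):
        # new_features: array of shape (n_new, feature_dim) from the adapted extractor
        self.sum += new_features.sum(axis=0)
        self.count += new_features.shape[0]

    def mean(self):
        return self.sum / max(self.count, 1)

# Regenerating psi_w and psi_f from these running means lets the trained model
# absorb new tasks or classes over time without revisiting old examples.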
Figure 8 (left) shows the accumulated multi- and single-head [42] test accuracy averaged over 30 runs (further results and more detailed figures are in Appendix G). Figure 8 (right) shows average results at the final task, comparing to SI [41], EWC [43], VCL [44], and Riemannian Walk [42]. Figure 8 demonstrates that CNAPS naturally resists catastrophic forgetting [43] and compares favourably to competing methods, despite the fact that it was not exposed to these datasets during training, observes orders of magnitude fewer examples, and was not trained explicitly to perform continual learning. CNAPS performs similarly to, or better than, the state-of-the-art Riemannian Walk method, which departs from the pure continual learning setting by maintaining a small number of training samples across tasks. Conversely, CNAPS has the advantage of being exposed to a larger range of datasets and can therefore leverage task transfer. We emphasize that this is not meant to be an “apples-to-apples” comparison, but rather, the goal is to demonstrate the out-of-the-box versatility and strong performance of CNAPS in new domains and learning scenarios. Complex Learning Scenarios: Active Learning. Active learning [46, 47] requires accurate, data-efficient learning that returns well-calibrated uncertainty estimates. Figure 9 compares the performance of CNAPS and prototypical networks using two standard active learning acquisition functions (variation ratios and predictive entropy [46]) against random acquisition on the FLOWERS dataset and three representative held-out languages from OMNIGLOT (performance on all languages is presented in Appendix H). Figure 9 and Appendix H show that CNAPS achieves higher accuracy on average than prototypical networks. Moreover, CNAPS achieves significant improvements over random acquisition, whereas prototypical networks do not. These tests indicate that CNAPS is more accurate and suggest that CNAPS has better-calibrated uncertainty estimates than prototypical networks. 6 Conclusions This paper has introduced CNAPS, an automatic, fast and flexible modelling approach for multi-task classification. We have demonstrated that CNAPS achieve state-of-the-art performance on the META-DATASET challenge, and can be deployed “out-of-the-box” to diverse learning scenarios such as continual and active learning, where they are competitive with the state-of-the-art. Future avenues of research include exploring the design space by introducing gradients and function approximation to the adaptation mechanisms, as well as generalizing the approach to distributional extensions of CNAPS [48, 49]. Acknowledgments The authors would like to thank Ambrish Rawat for helpful discussions and David Duvenaud, Wessel Bruinsma, Will Tebbutt, Adrià Garriga Alonso, Eric Nalisnick, and Lyndon White for the insightful comments and feedback. Richard E. Turner is supported by Google, Amazon, Improbable and EPSRC grants EP/M0269571 and EP/L000776/1.
1. What are the strengths and weaknesses of the proposed approach in the paper? 2. How does the reviewer assess the technical content and novelty of the paper's contributions? 3. Are there any concerns regarding the training procedure and its justification? 4. How does the reviewer evaluate the significance and impact of the proposed method in few-shot learning and related areas? 5. Are there any suggestions or recommendations for future research directions?
Review
Review Quality: The technical content of the paper is well motivated and the approach taken is interesting. However, a few things are worth mentioning. 1 - The classification parameters for a given class are generated independently from the other classes. This means that the classifier is more likely to act as a prototypical model than a discriminative one. 2 - In the adaptation network, the auto-regressive component is not technically motivated. The fact that it improves results just shows the lack of capacity in the FiLM network as a way to modulate the feature extractor parameters alone. Did you compare different ways of modulating the feature extractor parameters? 3 - z_G is computed using only the inputs from the query set; what about the labels? 4 - The statement “Allowing θ to adapt during the second phase violates the principle of ‘train as you test’, i.e., when test tasks are encountered, θ will be fixed, so it is important to simulate this scenario during training” is technically false, as within each meta-learning step θ will be fixed even when it is not pretrained. Thus, the justification for the training procedure is a bit weak despite the comparison between the proposed approach and the classical one. Maybe the sensitivity to the hyper-parameters is the main reason for those differences. 5 - Related to the previous point, the fact that pretraining θ requires a large dataset, which is not always available in other domains as it is in computer vision, does not play in favor of the proposed training procedure. Thus, it is critical to find an alternative that works for training all parameters together using the meta-dataset instead of the two-phase approach proposed. 6 - Despite great results shown for the few-shot learning settings, the results section is a bit unfocused, as the application to active learning and continual learning seems unnatural and forced. Clarity: The paper is generally well-written and structured and really easy to understand. Originality: The main originality in this work is definitely the auto-regressive modulation network that was proposed. Significance: This work shows significant improvements over the state of the art in few-shot classification, which is an important contribution. While weakly motivated, it also proposes a new neural net architecture that improves upon modulation results achieved by FiLM, which helps to achieve better results.
NIPS
Title Fast and Flexible Multi-Task Classification using Conditional Neural Adaptive Processes Abstract The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. We introduce a conditional neural process-based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature. The resulting approach, called CNAPS, comprises a classifier whose parameters are modulated by an adaptation network that takes the current task’s dataset as input. We demonstrate that CNAPS achieves state-of-the-art results on the challenging META-DATASET benchmark, indicating high-quality transfer learning. We show that the approach is robust, avoiding both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPS is computationally efficient at test time as it does not involve gradient-based adaptation. Finally, we show that trained models are immediately deployable to continual learning and active learning, where they can outperform existing approaches that do not leverage transfer learning. 1 Introduction We consider the development of general-purpose image classification systems that can handle tasks from a broad range of data distributions, in both the low and high data regimes, without the need for costly retraining when new tasks are encountered. We argue that such systems require mechanisms that adapt to each task, and that these mechanisms should themselves be learned from a diversity of datasets and tasks at training time. This general approach relates to methods for meta-learning [1, 2] and few-shot learning [3]. However, existing work in this area typically considers homogeneous task distributions at train and test time that therefore require only minimal adaptation. To handle the more challenging case of different task distributions, we design a fully adaptive system, requiring specific design choices in the model and training procedure. Current approaches to meta-learning and few-shot learning for classification are characterized by two fundamental trade-offs. (i) The number of parameters that are adapted to each task. One approach adapts only the top, or head, of the classifier, leaving the feature extractor fixed [4, 5]. While useful in simple settings, this approach is prone to under-fitting when the task distribution is heterogeneous [6]. Alternatively, we can adapt all parameters in the feature extractor [7, 8], thereby increasing fitting capacity, but incurring a computation cost and opening the door to over-fitting in the low-shot regime. What is needed is a middle ground which strikes a balance between model capacity and reliability of the adaptation. (ii) The adaptation mechanism. Many approaches use gradient-based adaptation [7, 9]. While this approach can incorporate training data in a very flexible way, it is computationally inefficient at test time, may require expertise to tune the optimization procedure, and is again prone to over-fitting. Conversely, function approximators can be used to directly map training data to the desired parameters (we refer to this as amortization) [5, 10]. This yields fixed-cost adaptation mechanisms, and enables greater sharing across training tasks.
However, it may under-fit if the function approximation is not sufficiently flexible. On the other hand, high-capacity function approximators require a large number of training tasks to be learned. We introduce a modelling class that is well-positioned with respect to these two trade-offs for the multi-task classification setting, called Conditional Neural Adaptive Processes (CNAPS). Source code is available at https://github.com/cambridge-mlg/cnaps. CNAPS directly model the desired predictive distribution [11, 12], thereby introducing a conditional neural processes (CNPs) [13] approach to the multi-task classification setting. CNAPS handles varying-way classification tasks and introduces a parametrization and training procedure enabling the model to learn to adapt the feature representation for classification of diverse tasks at test time. CNAPS utilize i) a classification model with shared global parameters and a small number of task-specific parameters. We demonstrate that by identifying a small set of key parameters, the model can balance the trade-off between flexibility and robustness. ii) A rich adaptation neural network with a novel auto-regressive parameterization that avoids under-fitting while proving easy to train in practice with existing datasets [6]. In Section 5 we evaluate CNAPS. Recently, Triantafillou et al. [6] proposed META-DATASET, a few-shot classification benchmark that addresses the issue of homogeneous train and test-time tasks and more closely resembles real-world few-shot multi-task learning. Many of the approaches that achieved excellent performance on simple benchmarks struggle with this collection of diverse tasks. In contrast, we show that CNAPS achieve state-of-the-art performance on the META-DATASET benchmark, often by comfortable margins and at a fraction of the time required by competing methods. Finally, we showcase the versatility of the model class by demonstrating that CNAPS can be applied “out of the box” to continual learning and active learning. 2 Model Design We consider a setup where a large number of training tasks are available, each composed of a set of inputs x and labels y. The data for task τ includes a context set D^τ = {(x^τ_n, y^τ_n)}_{n=1}^{N_τ}, with inputs and outputs observed, and a target set {(x^{τ*}_m, y^{τ*}_m)}_{m=1}^{M_τ} for which we wish to make predictions (y^{τ*} are only observed during training). CNPs [13] construct predictive distributions given x^* as: p(y^* | x^*, θ, D^τ) = p(y^* | x^*, θ, ψ^τ = ψ_φ(D^τ)). (1) Here θ are global classifier parameters shared across tasks. ψ^τ are local task-specific parameters, produced by a function ψ_φ(·) that acts on D^τ. ψ_φ(·) has another set of global parameters φ called adaptation network parameters. θ and φ are the learnable parameters in the model (see Figure 1a). CNAPS is a model class that specializes the CNP framework for the multi-task classification setting. The model class is characterized by a number of design choices, made specifically for the multi-task image classification setting. CNAPS employ global parameters θ that are trained offline to capture high-level features, facilitating transfer and multi-task learning. Whereas CNPs define ψ^τ to be a fixed-dimensional vector used as an input to the model, CNAPS instead let ψ^τ be specific parameters of the model itself. This increases the flexibility of the classifier, enabling it to model a broader range of input / output distributions. We discuss our choices (and associated trade-offs) for these parameters below.
Finally, CNAPS employ a novel auto-regressive parameterization of ψ_φ(·) that significantly improves performance. An overview of CNAPS and its key components is illustrated in Figure 1b. 2.1 Specification of the classifier: global θ and task-specific parameters ψ^τ We begin by specifying the classifier’s global parameters θ, followed by how these are adapted by the local parameters ψ^τ. Global Classifier Parameters. The global classifier parameters will parameterize a feature extractor f_θ(x) whose output is fed into a linear classifier, described below. A natural choice for f_θ(·) in the image setting is a convolutional neural network, e.g., a ResNet [14]. In what follows, we assume that the global parameters θ are fixed and known. In Section 3 we discuss the training of θ. Task-Specific Classifier Parameters: Linear Classification Weights. The final classification layer must be task-specific as each task involves distinguishing a potentially unique set of classes. We use a task-specific affine transformation of the feature extractor output, followed by a softmax. The task-specific weights are denoted ψ^τ_w ∈ R^{d_f × C^τ} (suppressing the biases to simplify notation), where d_f is the dimension of the feature extractor output f_θ(x) and C^τ is the number of classes in task τ. Task-Specific Classifier Parameters: Feature Extractor Parameters. A sufficiently flexible model must have capacity to adapt its feature representation f_θ(·) as well as the classification layer (e.g. compare the optimal features required for ImageNet versus Omniglot). We therefore introduce a set of local feature extractor parameters ψ^τ_f, and denote f_θ(·) the unadapted feature extractor, and f_θ(·; ψ^τ_f) the feature extractor adapted to task τ. It is critical in few-shot multi-task learning to adapt the feature extractor in a parameter-efficient manner. Unconstrained adaptation of all the feature extractor parameters (e.g. by fine-tuning [9]) gives flexibility, but it is also slow and prone to over-fitting [6]. Instead, we employ linear modulation of the convolutional feature maps as proposed by Perez et al. [15], which adapts the feature extractor through a relatively small number of task-specific parameters. A Feature-wise Linear Modulation (FiLM) layer [15] scales and shifts the i-th unadapted feature map f_i in the feature extractor, FiLM(f_i; γ^τ_i, β^τ_i) = γ^τ_i f_i + β^τ_i, using two task-specific parameters, γ^τ_i and β^τ_i. Figure 2a illustrates a FiLM layer operating on a convolutional layer, and Figure 2b illustrates how a FiLM layer can be added to a standard Residual network block [14]. A key advantage of FiLM layers is that they enable expressive feature adaptation while adding only a small number of parameters [15]. For example, in our implementation we use a ResNet18 with FiLM layers after every convolutional layer. The set of task-specific FiLM parameters (ψ^τ_f = {γ^τ_i, β^τ_i}) constitute fewer than 0.7% of the parameters in the model. Despite this, as we show in Section 5, they allow the model to adapt to a broad class of datasets. [Figure 3 schematic: the adapted context features f_θ(x; ψ^τ_f) are split by class label; for each class c, a shared mean-pooling network with parameters φ_w maps the k_c context examples of that class to the classifier weights w_c and biases b_c, ψ_w(D^τ_c) := (w_c; b_c); the resulting linear classifier and softmax produce p(y^* | x^*, θ, ψ^τ) from f_θ(x^*; ψ^τ_f).] Figure 3: Implementation of the functional representation of the class-specific parameters ψ_w. In this parameterization, ψ^c_w are the linear classification parameters for class c, and φ_w are the learnable parameters.
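Returning to the FiLM modulation described above, the following is a minimal sketch of a FiLM layer acting on a channel-first convolutional feature map (a PyTorch illustration under our own naming, not the released implementation):

import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: scale and shift each feature map
    with task-specific parameters gamma_i and beta_i."""
    def forward(self, feature_maps, gamma, beta):
        # feature_maps: (batch, channels, height, width)
        # gamma, beta:  (channels,) produced by the adaptation network for this task
        return gamma.view(1, -1, 1, 1) * feature_maps + beta.view(1, -1, 1, 1)

# Example: modulating the output of one convolutional layer.
film = FiLM()
x = torch.randn(8, 64, 32, 32)                  # activations of a conv layer
gamma, beta = torch.ones(64), torch.zeros(64)   # identity modulation
y = film(x, gamma, beta)

Because only gamma and beta per channel are task-specific, the per-task parameter count stays tiny relative to the full ResNet.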
2.2 Computing the local parameters via adaptation networks The previous sections have specified the form of the classifier p(y^* | x^*, θ, ψ^τ) in terms of the global and task-specific parameters, θ and ψ^τ = {ψ^τ_f, ψ^τ_w}. The local parameters could now be learned separately for every task τ via optimization. While in practice this is feasible for small numbers of tasks (see e.g., [16, 17]), this approach is computationally demanding, requires expert oversight (e.g. for tuning early stopping), and can over-fit in the low-data regime. Instead, CNAPS uses a function, such as a neural network, that takes the context set D^τ as an input and returns the task-specific parameters, ψ^τ = ψ_φ(D^τ). The adaptation network has parameters φ that will be trained on multiple tasks to learn how to produce local parameters that result in good generalisation, a form of meta-learning. Sacrificing some of the flexibility of the optimisation approach, this method is comparatively cheap computationally (only involving a forward pass through the adaptation network), automatic (with no need for expert oversight), and employs explicit parameter sharing (via φ) across the training tasks. Adaptation Network: Linear Classifier Weights. CNAPS represents the linear classifier weights ψ^τ_w as a parameterized function of the form ψ^τ_w = ψ_w(D^τ; φ_w, ψ_f, θ), denoted ψ_w(D^τ) for brevity. There are three challenges with this approach: first, the dimensionality of the weights depends on the task (ψ^τ_w is a matrix with a column for each class, see Figure 3) and thus the network must output parameters of different dimensionalities; second, the number of datapoints in D^τ will also depend on the task and so the network must be able to take inputs of variable cardinality; third, we would like the model to support continual learning. To handle the first two challenges we follow Gordon et al. [5]. First, each column of the weight matrix is generated independently from the context points from that class, ψ^τ_w = [ψ_w(D^τ_1), ..., ψ_w(D^τ_C)], an approach which scales to arbitrary numbers of classes. Second, we employ a permutation-invariant architecture [18, 19] for ψ_w(·) to handle the variable input cardinality (see Appendix E for details). Third, as permutation-invariant architectures can be incrementally updated [20], continual learning is supported (as discussed in Section 5). Intuitively, the classifier weights should be determined by the representation of the data points emerging from the adapted feature extractor. We therefore input the adapted feature representation of the data points into the network, rather than the raw data points (hence the dependency of ψ_w on ψ_f and θ). To summarize, ψ_w(·) is a function on sets that accepts as input a set of adapted feature representations from D^τ_c, and outputs the c-th column of the linear classification matrix, i.e., ψ_w(D^τ_c; φ_w, ψ_f, θ) = ψ_w({f_θ(x_m; ψ_f) | x_m ∈ D^τ, y_m = c}; φ_w). (2) Here φ_w are learnable parameters of ψ_w(·). See Figure 3 for an illustration.
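A minimal sketch of the class-wise weight generation in Eq. (2) above, using mean pooling as the permutation-invariant operation, is given below; the layer sizes and names are illustrative assumptions rather than the paper's exact architecture:

import torch
import torch.nn as nn

class ClassifierWeightGenerator(nn.Module):
    """Maps the set of adapted context features of one class to one column
    (and bias) of the task-specific linear classifier, as in Eq. (2)."""
    def __init__(self, feature_dim):
        super().__init__()
        # shared heads applied to the pooled class representation (phi_w)
        self.weight_head = nn.Linear(feature_dim, feature_dim)
        self.bias_head = nn.Linear(feature_dim, 1)

    def forward(self, class_features):
        # class_features: (n_c, feature_dim) adapted features f_theta(x; psi_f)
        # of the context points with label c
        pooled = class_features.mean(dim=0)   # permutation-invariant pooling
        return self.weight_head(pooled), self.bias_head(pooled)

# Stacking the outputs over classes c = 1..C gives psi_w, a (feature_dim x C)
# weight matrix whose shape automatically matches the "way" of the task.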
Adaptation Network: Feature Extractor Parameters. CNAPS represents the task-specific feature extractor parameters ψ^τ_f, comprising the parameters of the FiLM layers γ^τ and β^τ in our implementation, as a parameterized function of the context set D^τ. Thus, ψ_f(·; φ_f, θ) is a collection of functions (one for each FiLM layer) with parameters φ_f, many of which are shared across functions. We denote the function generating the parameters for the i-th FiLM layer ψ^i_f(·) for brevity. Our experiments (Section 5) show that this mapping requires careful parameterization. We propose a novel parameterization that improves performance in complex settings with diverse datasets. Our implementation contains two components: a task-specific representation that provides context about the task to all layers of the feature extractor (denoted z^τ_G), and an auto-regressive component that provides information to deeper layers in the feature extractor concerning how shallower layers have adapted to the task (denoted z^i_AR). The input to the ψ^i_f(·) network is z^i = (z^τ_G, z^i_AR). z^τ_G is computed for every task τ by passing the inputs x^τ_n through a global set encoder g with parameters in φ_f. To adapt the l-th layer in the feature extractor, it is useful for the system to have access to the representation of task-relevant inputs from layer l − 1. While z_G could in principle encode how layer l − 1 has adapted, we opt to provide this information directly to the adaptation network adapting layer l by passing the adapted activations from layer l − 1. The auto-regressive component z^i_AR is computed by processing the adapted activations of the previous convolutional block with a layer-specific set encoder (except for the first residual block, whose auto-regressive component is given by the unadapted initial pre-processing stage in the ResNet). Both the global and all layer-specific set encoders are implemented as permutation-invariant functions [18, 19] (see Appendix E for details). The full parameterization is illustrated in Figure 4, and the architecture of the ψ^i_f(·) networks is illustrated in Figure 5. 3 Model Training The previous section has specified the model (see Figure 1b for a schematic). We now describe how to train the global classifier parameters θ and the adaptation network parameters φ = {φ_f, φ_w}. Training the global classifier parameters θ. A natural approach to training the model (originally employed by CNPs [13]) would be to maximize the likelihood of the training data jointly over θ and φ. However, experiments (detailed in Appendix D.3) showed that it is crucially important to adopt a two-stage process instead. In the first stage, θ are trained on a large dataset (e.g., the training set of ImageNet [21, 6]) in a full-way classification procedure, mirroring standard pre-training. Second, θ are fixed and φ are trained using episodic training over all meta-training datasets in the multi-task setting. We hypothesize that two-stage training is important for two reasons: (i) during the second stage, φ_f are trained to adapt f_θ(·) to tasks τ by outputting ψ^τ_f. As θ has far more capacity than ψ^τ_f, if they are trained in the context of all tasks, there is no need for ψ^τ_f to adapt the feature extractor, resulting in little-to-no training signal for φ_f and poor generalisation. (ii) Allowing θ to adapt during the second phase violates the principle of “train as you test”, i.e., when test tasks are encountered, θ will be fixed, so it is important to simulate this scenario during training. Finally, fixing θ during meta-training is desirable as it results in a dramatic decrease in training time. Training the adaptation network parameters φ. Following the work of Garnelo et al. [13], we train φ with maximum likelihood.
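The two-stage procedure and the episodic maximum-likelihood objective can be sketched as follows; this is a simplified outline under our own naming conventions (e.g. model.feature_extractor and the task_sampler interface are assumptions), not the released training script:

import torch
import torch.nn.functional as F

def meta_train_phi(model, task_sampler, optimizer, num_iterations):
    """Stage two: theta is frozen after pre-training, and only the adaptation
    parameters phi are updated by maximising the log-likelihood of target
    labels given the context set of each sampled task."""
    for p in model.feature_extractor.parameters():   # theta: pre-trained, then fixed
        p.requires_grad = False

    for _ in range(num_iterations):
        context_x, context_y, target_x, target_y = task_sampler()  # one episodic task
        logits = model(context_x, context_y, target_x)  # adapt via psi_phi(D), predict targets
        loss = F.cross_entropy(logits, target_y)        # = -log p(y* | x*, psi_phi(D), theta)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()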
1. What is the focus of the paper on conditional neural processes? 2. What are the strengths of the proposed approach, particularly in its adaptation and conditioning of parameters? 3. What are the weaknesses of the paper, especially regarding its similarity with prior works? 4. How does the reviewer assess the clarity and quality of the paper's content?
Review
Review The authors present a method called Conditional Neural Adaptive Processes (CNAPs) able to efficiently solve new multi-class classification problems after an initial pre-training phase. The proposed approach, based on Conditional Neural Processes [1], adapts a small number of task-specific parameters for each new task encountered at test time. These parameters are conditioned on a set of training examples for the new task, don't require any additional tuning, and adapt both the final classification layer and the feature extraction process, allowing the model to handle different input distributions. While being very close to CNP, this work focuses on the image classification task and makes several additions to the original method. These additions (FiLM layers, auto-regressive feature adapter, usage of deep sets) are clearly justified and their individual contributions are explored in the different experiments. The major negative point of this paper is its similarity with CNPs. The authors compare the two approaches in section 2 (lines 67-70), but this argument is not convincing at all; the adapted parameters can also be seen as a simple vector. I think the article would gain from putting a bigger emphasis on the auto-regressive way of dynamically adapting the parameters, which is an interesting and novel contribution. The article is very well written. While the approach is complex, the authors did a good job at progressively presenting the different components used, with clear explanations and corresponding references to justify each choice they made. [1] Conditional neural processes. Garnelo et al. 2018
NIPS
Title The interplay between randomness and structure during learning in RNNs Abstract Recurrent neural networks (RNNs) trained on low-dimensional tasks have been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices, despite the unconstrained nature of the learning algorithm. To identify the origin of the low-rank structure, we turn to an analytically tractable setting: training a linear RNN on a simplified task. We show how the low-dimensional task structure leads to low-rank changes to connectivity. This low-rank structure allows us to explain and quantify the phenomenon of accelerated learning in the presence of random initial connectivity. Altogether, our study opens a new perspective to understanding trained RNNs in terms of both the learning process and the resulting network structure. 1 Introduction Recurrent neural networks (RNNs) have been used both as tools for machine learning, and as models for neuroscience. In the latter context, RNNs are typically initialized with random connectivity and trained on abstractions of tasks used in experimental settings [3, 20, 23, 32, 33, 35, 37, 40]. The obtained networks are then compared to both behavioral and neural experimental results, with the added advantage that the RNNs are more amenable to analysis than their biological counterparts [34]. Despite this advantage, the understanding of how RNNs implement neuroscience tasks is still limited. Open questions concern especially the relationship between the final connectivity and the task, and its formation through training. Here, we examine the relation between the initial connectivity of the RNN, the task at hand, and the changes to connectivity through training. We use unconstrained gradient descent that can potentially alter the connectivity completely. However, evaluating nonlinear RNNs trained on several neuroscience-inspired tasks, we observe that the connectivity changes are small compared to the initial connectivity. We thus split the connectivity matrix W at the end of training into the initial part W_0 and the changes ∆W, writing W = W_0 + ∆W. (1) For all tasks we consider, we find that the training-induced connectivity structure ∆W is of low rank, despite the unconstrained nature of training used. This finding directly connects gradient-based learning with a number of existing neuroscience frameworks based on low-rank aspects of connectivity [4, 9, 12, 14, 18, 21, 24, 33, 36]. Despite the low-rank nature of the changes to connectivity ∆W, the initial, full-rank, random connectivity W_0 plays an important role in learning. Consistent with previous work [28, 33], we find that the initial connectivity accelerates learning. Moreover, we show that the final, trained network relies on correlations between ∆W and W_0. In the second part of our work, we analyze the mechanism behind these observations in a simplified and analytically tractable setting: nonlinear dynamics of learning in a linear RNN trained on a simple input-output mapping task.
We show how the low-dimensional task structure leads to low-rank connectivity changes; importantly, the amplitude and geometry of these low-rank changes depend on the random initial connectivity. Our work reveals how this dependence accelerates learning and quantifies the degree of acceleration as a function of initial connectivity strength. Finally, we show that our results extend to real-world settings of an LSTM network trained on a natural language processing task, suggesting practical applications of our results. 2 Training RNNs on low-dimensional tasks Tasks We trained RNNs on three tasks inspired by the neuroscience literature. All tasks are characterized by a small number of input and output channels. The first task is a working memory task, in which the network receives pulses from two different input channels and needs to remember the sign of the last pulse in each channel independently [34]. The second task is a context-dependent decision task: The network receives two noisy signals, as well as one of two context inputs which indicates the relevant signal. After the input presentation, it needs to output whether the average of the relevant signal was positive or negative [20]. The third task is a delayed-discrimination task [25] in which the network receives two positive pulses separated by a delay. After yet another delay, it needs to output which of the two pulses had the larger amplitude. Based on their origin, we refer to the three tasks as "flip-flop" [34], "Mante" [20], and "Romo" [25] task, respectively. For each task, we plotted a single trial for a successfully trained network in Fig. 1(a-c). Detailed parameters can be found in the supplementary. RNN model Each RNN model consists of N neurons whose state vector evolves according to \dot{x}(t) = -x(t) + W φ(x(t)) + \sqrt{N} \sum_{i=1}^{N_{in}} m_i u_i(t). (2) The recurrent input is given by the firing rate vector φ(x) multiplied by the weight matrix W. We use the element-wise nonlinearity φ = tanh. The network receives time-dependent inputs u_i(t) through input vectors m_i. The output is the projection of the firing rate onto readout vectors w_i, namely z_i(t) = \frac{w_i^T φ(x(t))}{\sqrt{N}} for i in {1, ..., N_{out}}. (3) We formulate target values ẑ_i(t) during specific segments of the trial [see dark lines for output panels in Fig. 1(a-c)]. The task determines the numbers N_{in} and N_{out} of input and output vectors. For example, the Mante task requires four input vectors (for both signals and contexts) and a single output vector. We are interested in the behavior of large networks, N ≫ 1, while the dimension of the tasks is small, N_{in}, N_{out} ∼ O(1). For the simulation, we chose N to be large enough so that learning dynamics become invariant under changes in N (see supplementary Fig. S1). Training and initialization For training the RNNs, we formulated a quadratic cost in z_i(t) and applied the gradient descent method "Adam" [15] to the internal connectivity W as well as to the input and output vectors m_i, w_i. Restricting the updates to W or training with SGD impaired the convergence times but yielded similar results (not shown). The initial input and output vectors were drawn independently from N(0, 1/N). We initialized the internal weights as a random matrix W_0 with independent elements drawn from N(0, g^2/N). The parameter g thus scales the strength of the initial connectivity.
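A minimal sketch of simulating the network dynamics in Eqs. (2) and (3) with Euler integration is given below (a single input/output channel; the step size, network size, and input function are illustrative choices, not the paper's exact protocol):

import numpy as np

def simulate_rnn(W, m, w, u, dt=0.1, T=20.0):
    """Euler integration of dx/dt = -x + W phi(x) + sqrt(N) m u(t),
    with readout z(t) = w^T phi(x) / sqrt(N)."""
    N = W.shape[0]
    steps = int(T / dt)
    x = np.zeros(N)
    z = np.zeros(steps)
    for t in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + np.sqrt(N) * m * u(t * dt))
        z[t] = w @ np.tanh(x) / np.sqrt(N)
    return z

# Example: N = 200 neurons, initial connectivity strength g = 0.9, constant input.
N, g = 200, 0.9
rng = np.random.default_rng(0)
W0 = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # elements ~ N(0, g^2/N)
m = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)       # input vector ~ N(0, 1/N)
w = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)       # readout vector ~ N(0, 1/N)
z = simulate_rnn(W0, m, w, u=lambda t: 1.0)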
Learning dynamics in the absence of initial connectivity To understand what kind of connectivity arises during learning, we first looked at the simplest case without initial connectivity, g = 0. The loss curves indicate convergence for all three tasks [see darker lines in Fig. 1(d-f)]. We analyzed the 2 connectivity at the end of training by computing its singular values (SVs). For the flip-flop task, we found that the first two SVs were much larger than the remaining ones [Fig. 1(g)]. To see whether the network utilizes this approximate rank-two structure, we replaced the changes ∆W with the singular value decomposition truncated at rank R, ∆W (R) = R∑ r=1 srurv T r . (4) Note that we keep the initial connectivity W0. The loss after truncation indeed drops to zero at rank 2 [Fig. 1(j)]. A similar situation is observed for the Mante and Romo tasks, see Fig. 1(h, k) and (i, l), respectively. Although for these tasks the SVs drop more slowly, the first six SVs are discernibly larger than the remaining tail; the truncation loss drops to zero at rank 4 and 6, respectively. In sum, we observe that for g = 0, training via gradient descent yields an effective low-rank solution for all three tasks. Effects of initial connectivity on learning dynamics and connectivity The loss-curves in Fig. 1(d-f) indicate a strong influence of the initial connectivity strength g on the training dynamics (lighter colors for g = 0.9). We observe that learning becomes faster and smoother with initial connectivity. In Fig. 2(a), we quantify the acceleration of learning with the number of epochs needed to reach 5% of the initial loss. We observe that convergence time smoothly decreases as a function of connectivity strength g; for very large g, networks finally transition to chaotic activity [31], and convergence time increases again. 3 After observing the drastic decrease in learning time, we wondered how initial connectivity affects the resulting connectivity changes. The first observation is that, for increasing g, the final connectivity W = W0 + ∆W is dominated by W0, since ||W0|| = √ Ng. In fact, the norm of ∆W not only remains unchanged for increasing N (see supplementary), but further decreases with increasing g, see Fig. 2(b). If a smaller ∆W solves the task for larger initial connectivity, it is reasonable to assume that W0 amplifies the effect of ∆W . To test this idea, we shuffled the elements of W0, destroying any correlation between W0 and ∆W , while maintaining its statistics. The loss after replacing the connectivity with W shuffle0 + ∆W is shown in Figure 2(c). For all tasks, shuffling strongly degraded performance except for cases with very weak initial connectivity. Low-rank changes in connectivity Despite the effects of the initial connectivity on convergence time and the norm of ∆W , the low-rank nature of ∆W remains similar to the case with g = 0. In Fig. 1(g-h), the SVs of ∆W are plotted in lighter colors. We see that the pattern and overall amplitude is very similar to the darker lines for g = 0: only a small number of SVs dominates over a tail. To assess the functional rank, we replaced ∆W in our RNN with the rank-R truncation, Eq. (4), while keeping the initial connectivity W0 identical. The resulting loss, Fig. 1(j-l), indicates that the effective connectivity change is indeed low-rank: for all three tasks, it drops to a value close to zero before rank 10. 
We quantified this observation by computing the “functional rank”, the rank at which the loss decreases below 5% of the initial value [see Fig. 2(d)]. This functional rank is between 2 and 10 for all three tasks (averaged over independent simulations). It increases with g for the flip-flop task, while it remains less affected for the other two tasks. 3 Analytical results for linear system The observation of effective low-rank changes in connectivity and accelerated learning for random initial connectivity were general across the three different tasks considered. To understand the underlying mechanisms, we turn to a much simpler task and a linear RNN model. This setting allows us to analytically describe the learning dynamics, understand the origin of the low-rank connectivity changes, and quantify how correlations between W_0 and ∆W accelerate learning. Our approach is similar to that of Saxe et al. [27], who analyzed gradient descent dynamics in linear feed-forward networks. Both for the feed-forward and the recurrent model, the learning dynamics are nonlinear despite the linearity of the networks. Nevertheless, we will see that the recurrent nature of our models results in very different dynamics compared to the linear feed-forward model. Below we will present our main results for the simplified model; the details of all our analytical derivations can be found in the supplementary. Simplified setting Our simple task is an input-output transformation: Given a constant input u(t) = 1, the output z(t) has to reach a target value ẑ at time T. The corresponding loss is L = (ẑ − z(T))^2 / 2. An example with two different target values ẑ = 0.5, 2.0 is plotted in Fig. 3(a). The linear RNN model is obtained by replacing the nonlinearity in Eq. (2) with the identity, φ(x) = x, and keeping only a single input and output. All weights are initialized as before. We keep the initial connectivity strength g < 1 so that the linear network remains stable. To further simplify, we constrain weight changes to the recurrent weights W only, and apply plain gradient descent. To compare between different simulations, we define the learning time τ = η · epochs. Evaluating the trained networks reveals similar phenomena as observed for the nonlinear, more complex tasks. Figure 3(b-e) shows the loss and SVs of ∆W over learning time for two values of g. We observe that learning induces low-rank connectivity changes – in fact, a single SV dominates. Because of the small magnitude of the second SV, truncating ∆W at rank 1 does not lead to increased loss (not shown), so that the functional rank as defined in the previous section is 1. Comparing between g = 0 and g = 0.6, we further see that learning is accelerated by the initial connectivity, and that the magnitude of the first SV decreases with increasing g. These observations will be quantified with our analytical results. Gradient descent dynamics For our analytical treatment, we only consider the limit of long trials, with the output z = \lim_{T→∞} z(T) at the end of a trial. In this limit, the network converges to its fixed point x^* = \sqrt{N} (I − W)^{-1} m with identity matrix I, and the readout is z = \frac{w^T x^*}{\sqrt{N}} = w^T (I − W)^{-1} m. (5) The input and output vectors, m and w, remain fixed during training, and only W is changed. We can explicitly compute the changes induced by the gradient of the loss: \frac{dW(τ)}{dτ} = −\frac{dL}{dW} = [ẑ − z(τ)] [I − W^T(τ)]^{-1} w m^T [I − W^T(τ)]^{-1}, (6) with initial connectivity W(0) = W_0.
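For this simplified task, the fixed-point readout of Eq. (5) and the gradient update of Eq. (6) can be iterated directly; a minimal sketch (the step size is an illustrative stand-in for the learning rate η):

import numpy as np

def readout(W, m, w):
    """z = w^T (I - W)^{-1} m, Eq. (5)."""
    N = W.shape[0]
    return w @ np.linalg.solve(np.eye(N) - W, m)

def gradient_step(W, m, w, z_hat, eta=0.05):
    """One Euler step of the gradient flow in Eq. (6)."""
    N = W.shape[0]
    Binv_T = np.linalg.inv(np.eye(N) - W.T)   # (I - W^T)^{-1}
    z = readout(W, m, w)
    dW = (z_hat - z) * Binv_T @ np.outer(w, m) @ Binv_T
    return W + eta * dW

# Repeatedly applying gradient_step starting from W = W0 reproduces learning
# curves like those of Fig. 3; the loss is (z_hat - readout(W, m, w))**2 / 2.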
We made a continuous-time approximation of the weight updates (“gradient flow”), valid for small learning rates η. Note that the readout z at the fixed point depends on the learning time τ through W(τ). Note that, unlike the feed-forward case [26], the inverse of W appears in Eq. (6), opening the possibility of divergence during learning. It also precludes a closed-form solution to the dynamics. However, we can obtain analytical insight by expanding the learning dynamics in learning time around the initial connectivity [5]. We write W(τ) = \sum_{k=0}^{∞} W_k \frac{τ^k}{k!}. (7) The changes in connectivity are obtained by subtracting W_0, which yields ∆W(τ) = W_1 τ + W_2 τ^2/2 + .... We analytically computed the coefficients W_k by evaluating d^k W / dτ^k at τ = 0. A comparison of the expansion up to third order with the numerical results from gradient descent learning indicates close agreement during most of the learning [see Fig. 3(b-e), full vs. dashed lines]. Learning dynamics in absence of initial connectivity It is instructive to first consider the case of no initial connectivity, g = 0. The readout at the beginning of training is then z_0 = w^T m. Due to the independence of m and w, the expected value of z_0 vanishes. Moreover, the standard deviation scales as 1/\sqrt{N} with the network size. In this work, we are interested in the learning dynamics for large networks; all our analytical results are valid in the limit N → ∞. We therefore write z_0 = 0. Similar reasoning goes for all scalar quantities of interest: they are of order O(1), with deviations O(1/\sqrt{N}). With this self-averaging quality, we omit stating the limit as well as the expectation symbol and use the equality sign instead. Inserting W_0 and z_0 – both zero – into the gradient descent, Eq. (6), yields the first-order coefficient W_1 = ẑ w m^T. (8) Hence, the weight changes at linear order in τ are described by a rank-one matrix, and the readout is z(τ) = τ ẑ + O(τ^2). The gradient descent for g = 0 would therefore converge at τ^*_1 = 1, if it only depended on the first-order term. The numerical results already show deviations in the form of faster or slower convergence, depending on the target ẑ [see dark lines in Fig. 3(b,c) and note that \tilde{τ} = τ for g = 0]. This indicates the importance of higher-order terms. We observe that the gradient in Eq. (6) contains the transpose W^T. At higher orders, this term introduces other outer-product combinations of m and w. In fact, for g = 0, these are the only vectors present in the gradient, so that the connectivity can always be written as ∆W(τ) = [w\ m] \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} w^T \\ m^T \end{pmatrix}. (9) This form implies that ∆W will be at most a rank-two matrix. An analysis of the SVs [Eq. (14) below for general g] reveals that the second SV remains very small, as visible in Fig. 3(d,e). The entries of the 2×2 matrix A(τ) up to order O(τ^3) are (see supplementary) A_{11} = \frac{ẑ^2}{2}(τ^2 − τ^3), A_{12} = ẑ\left(τ − \frac{τ^2}{2} + \frac{τ^3}{6}(1 + 2ẑ^2)\right), A_{21} = \frac{ẑ^3 τ^3}{3}, (10) and A_{22} = A_{11}. The first surprising observation is that the target value ẑ enters nonlinearly into the expressions above. This is the origin of the qualitative difference between learning curves for different values of the target output in Fig. 3(b,c). We further observe that the connectivity changes develop a nonzero eigenvalue only at O(τ^2). This is because the off-diagonal terms, which grow linearly with τ, contribute a zero eigenvalue because m^T w = 0. At second order the diagonal entries of A – and, with them, the eigenvalues – change.
Changes in connectivity eigenvalues imply changes in the time scales of the network dynamics, which may be necessary for some tasks (for example, those involving memory), but can also lead to problems of exploding gradients (see supplementary).

Effects of initial connectivity
In the presence of initial connectivity, we can still apply the expansion introduced above. Due to the independence of W0, m, and w, the initial readout z0 remains zero. The gradient descent, Eq. (6), then directly yields the first-order connectivity coefficient

W1 = ẑ B^T w m^T B^T , with B = (I − W0)^{-1} . (11)

Thus, W1 is still a rank-one matrix despite the full-rank initial connectivity. However, the connectivity changes now include the initial connectivity W0 via the matrix B. As a consequence, the norm of the first-order coefficient, ||W1|| = ẑ β (see supplementary), increases with g by the factor

β = w^T B B^T w = m^T B^T B m = 1/(1 − g^2) . (12)

The readout is also affected by the initial connectivity. We compute (see supplementary)

z(τ) = τ ẑ β^2 + O(τ^2) . (13)

Learning converges when z(τ) reaches the target value ẑ. The first-order prediction of the convergence time is therefore τ∗1 = 1/β^2, and the initial connectivity accelerates learning by the factor 1/β^2 = (1 − g^2)^2. We can decompose this acceleration into two factors: the growth rate is increased by β, and the norm of the final connectivity changes is decreased by 1/β. For the first contribution, we note that the first-order coefficient W1 is, by definition, the constant part of the gradient, and hence the rate at which the connectivity changes. For the second contribution, we compute the norm of ∆W(τ) at the predicted convergence time τ∗1 (see supplementary). In Fig. 4(a-c), we compare our first-order predictions with numerical simulations. In panels (a,b), we plot the convergence time τ∗ and the norm of ∆W at the end of training. As for the more complex, nonlinear tasks [see Fig. 2(a,b)], we defined the numerical τ∗ as the point in time where the loss drops to 5% of the initial value. For the gradient, panel (c), we averaged the norm ||dW/dτ|| over the interval [0, τ∗]. To compare the collapsed curves with the predicted scalings, we normalized the curves for the different target values ẑ by their value at g = 0 for all three quantities. We observe good agreement between the numerical results and the theory, even though we only used the first-order predictions, and τ∗ often shows notable differences between theory and simulation [for example in Fig. 3(b,c)]. Finally, we assess the role of correlations between ∆W and W0 by shuffling W0. After shuffling, the readout loses the amplification by β^2 and is hence z^shuff = τ∗1 ẑ. The corresponding loss is L1^shuff = L0 g^4 (2 − g^2)^2, with initial loss L0 = ẑ^2/2. A comparison of this first-order prediction with numerical results shows qualitative agreement, with notable quantitative differences especially for the larger target amplitude, see Fig. 4(d). A comparison with the nonlinear case, Fig. 2(c), shows that our simple model captures the phenomenon qualitatively.

Higher-order terms
Does the initial connectivity lead to higher-rank changes in connectivity? For g > 0, the explicit rank-two expression for the weight changes, Eq. (9), does not hold anymore: the input and output vectors accumulate multiples of B and B^T (such as B^T w and B B^T w), which increase the number of possible outer products – and hence potentially the rank.
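Equation (12) is a statement about random matrices and can be checked numerically. The snippet below is a sketch of such a check (our own, with N and g as illustrative choices); the agreement with 1/(1 − g^2) holds up to O(1/√N) fluctuations.

import numpy as np

N, g = 2000, 0.6
rng = np.random.default_rng(2)
w = rng.normal(0.0, 1.0 / np.sqrt(N), N)             # readout vector, as in the model
W0 = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # initial connectivity, variance g^2/N
B = np.linalg.inv(np.eye(N) - W0)
beta_numeric = w @ (B @ B.T) @ w                     # w^T B B^T w
beta_theory = 1.0 / (1.0 - g**2)                     # prediction of Eq. (12), here 1.5625
print(beta_numeric, beta_theory)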
However, computing the first two SVs, s1 and s2, up to order O(τ^3) (see supplementary) shows that ∆W remains approximately rank one:

s1 = ẑ β [ τ̃ − τ̃^2/2 + (1 + (7/2) ẑ^2 β) τ̃^3/6 ] , s2 = ẑ^3 τ̃^3/12 , (14)

where τ̃ = β^2 τ is the effective learning time. We observe that s1 grows linearly, but s2 only at third order in τ. Different parts of the connectivity therefore grow on top of each other, giving rise to a temporal hierarchy in the learning dynamics. Numerical simulations show good agreement with this prediction (see supplementary). We further state the resulting readout up to O(τ^3):

z(τ) = ẑ [ τ̃ − τ̃^2/2 + (1 + 8 ẑ^2 β) τ̃^3/6 ] . (15)

The appearance of β in the third-order contributions in Eqs. (14) and (15) shows that the learning curves for different values of g do not entirely collapse onto one curve after rescaling time by β^2. Instead, there is an additional acceleration, which increases with increasing target amplitude ẑ. This effect can be appreciated in Fig. 3(b,c), where for larger ẑ the loss curve becomes concave. Note that our approximation up to O(τ^3) predicts this trend, despite quantitative disagreement. As we saw in Fig. 4, the scaling of the convergence time τ∗ with g is not strongly affected by the higher-order terms.

4 Beyond neuroscience tasks

We asked whether our observation that connectivity changes are low-rank despite full-rank initial connectivity would extend to more complex network architectures and tasks, specifically those not restricted to a small input or output dimension. We therefore trained a two-layer LSTM network on a natural language processing task, sentiment analysis of movie reviews [30] (details in supplementary). The SVs at the end of training showed the pattern that we predicted: learning only leads to small changes in the connectivity, so that the final connectivity W is dominated by the initial connectivity and has full rank. The changes ∆W only have a small number of large SVs. For the recurrent weights of layer 2, the SVs are plotted in Fig. 5(a); other weights behave similarly (see supplementary). As before, we evaluated the accuracy of networks after truncation at a given rank, see Fig. 5(b). We truncated the recurrent weights of both layers as well as the input weights to layer 2. If we keep the random parts and truncate the changes as in Eq. (4), a rank-10 approximation already yields the final training accuracy. In contrast, if we truncate the entire weight matrices, as previously suggested [38], it takes more than half of the network rank (256 neurons per layer) to get close to the final accuracy.

5 Discussion

Summary of results
Our key finding is that the connectivity changes ∆W induced by unconstrained training on low-dimensional tasks are of low rank. With our simplified analytical model, we demonstrated why: the connectivity changes are spanned by a small number of existing directions, determined by the input and output vectors. Without initial connectivity, the maximum rank that linear networks can obtain through learning is in fact bounded by this number. The initial connectivity W0 enlarges the pool of available directions. The fact that learning arrives at a low-rank solution even in the presence of initial connectivity is then a result of the temporal structure of learning: initially, only a small number of available directions grow, inducing a low-rank structure.
For our simplified task, the first of these structures already reduces the loss, and learning converges before other structures emerge; the final connectivity changes are hence rank-one. For other tasks, the available input and output directions alone may not be sufficient, so that initial connectivity becomes necessary for successful learning (see supplementary). Note that our theoretical analysis is limited to linear networks; however, nonlinearity may also contribute to generating novel learning directions.

Our numerical simulations further showed that initial connectivity significantly accelerated learning. Our analytical results revealed the underlying mechanism: the input and output vectors spanning the gradient are multiplied by powers of W0, which strongly correlates ∆W with W0. This correlation amplifies the effect of ∆W, and removing the correlation by shuffling W0 indeed degrades performance. This is in line with a recent study demonstrating such amplification through correlation between a random matrix and a low-rank perturbation in a model without learning [29]. Finally, we showed that the general observation of low-rank weight changes indeed holds even in a much more complex setting: a sentiment analysis task and a two-layer LSTM network. This implies a large potential for network compression [38]: one may truncate the changes in connectivity at a very low rank and recover the specific random initial connectivity using the seed of its random number generator.

Task dimension and rank
Low-rank connectivity structures have previously been studied and applied. On the one hand, a number of RNN frameworks explicitly rely on low-rank feedback for training [4, 9, 14, 18, 33]. On the other hand, low-rank networks are amenable to analysis, because the network activity is low-dimensional and evolves in directions determined by the vectors spanning the connectivity [12, 21, 24, 29, 36]. Our surprising observation that unconstrained gradient descent also leads to low-rank connectivity opens new possibilities for studying general gradient-based learning with the tools developed by previous works. We observed that the functional rank of the training-induced connectivity changes is strongly task dependent. A better understanding of the relation between task and connectivity calls for a concept of a task dimension, ideally based on the underlying abstract computations and independent of the specific implementation [10, 17, 19, 40]. Such a concept would allow us to compare the solutions obtained by different algorithms and to define a necessary minimal rank for a given task [8].

Learning as a dynamical process and relation to feed-forward networks
Our approach stresses a dynamical perspective on learning, in which the solutions are not determined by the task alone, but also by the initial connectivity and the temporal evolution of weight changes. In particular, our expansion in learning time shows that some components in the connectivity only grow after others are present, which induces a temporal hierarchy. This affects the solutions the network arrives at. The temporal structure may also induce pitfalls for learning, for example divergent gradients when the networks undergo a phase transition [22] (see supplementary). A better understanding of the learning dynamics could be used to circumvent such problems, for example by introducing adapted learning curricula [6]. Learning in feed-forward networks has previously been analyzed from a similar perspective.
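To make the truncation and compression idea mentioned above concrete, the sketch below combines the rank-R truncation of Eq. (4) with seed-based storage of the initial weights. This is our own illustration, not the authors' pipeline: the function and variable names are ours, the rank is a placeholder, and it assumes W0 was originally drawn with the same RNG, seed, and draw order used here.

import numpy as np

def compress(W, seed, g, rank):
    # Store a trained weight matrix as (seed, g, low-rank factors of the change), cf. Eq. (4).
    N = W.shape[0]
    rng = np.random.default_rng(seed)
    W0 = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # regenerate the initial connectivity
    U, s, Vt = np.linalg.svd(W - W0, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]                # roughly 2*N*rank numbers instead of N^2

def decompress(seed, g, N, U, s, Vt):
    # Rebuild W = W0(seed) + rank-R approximation of the learned change.
    rng = np.random.default_rng(seed)
    W0 = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    return W0 + (U * s) @ Vt

In practice one would swap the reconstructed matrices back into the trained network and re-evaluate loss or accuracy, as done for the truncation curves in Figs. 1(j-l) and 5(b).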
It was found that the statistical structure of the training data induces a temporal hierarchy with long plateaus between step-like transitions in the learning curve [1, 11, 16, 26, 27, 41]. The hierarchy in our work originates in the dynamics of the RNN rather than in the structure of the training data. For example, the plateaus seen in Fig. 1(d-f) can be related to phase transitions in the network dynamics, such as the emergence of new fixed points. Combining such internal learning dynamics with structured training data would be an interesting future direction. Finally, recent work on feed-forward networks identified two different learning regimes: a kernel regime vs. a rich, feature-learning regime [2, 7, 13, 39]. In the former, the change in weights vanishes as the network width increases, and the network function can be linearized around the weights at initialization. In our work, too, the weight changes ∆W become vanishingly small in the limit of wide networks. However, even such vanishing ∆W may significantly change the dynamics of the neural network by inducing large outlier eigenvalues [29]. For example, the readout of our linear network, Eq. (5), diverges for an eigenvalue of W approaching 1. In such a case, the network function cannot be approximated by linearization around the initial weights. Understanding the relation between learning regimes in feed-forward and recurrent networks constitutes an interesting field for future studies.

Broader Impact

This work is a theoretical study on the dynamics of learning in RNNs. We show which kinds of connectivity changes are induced by gradient descent. We expect that our insights will help to understand learning in RNNs, which benefits the research community as a whole and may ultimately lead to the development of improved learning algorithms or schemes. As a possible application, we show that one can use our results to efficiently compress a multi-layer RNN trained on a natural language processing task. In this work, no new algorithms, tasks, or data sets are introduced. Therefore, the questions regarding any disadvantages, failures of the system, or biases do not apply.

Acknowledgments and Disclosure of Funding

This work was supported in part by the Israeli Science Foundation (grant number 346/16, OB). The project was further supported by the ANR project MORSE (ANR-16-CE37-0016) and the program “Ecoles Universitaires de Recherche” launched by the French Government and implemented by the ANR, with the reference ANR-17-EURE-0017. F.S. acknowledges the Max Planck Society for a Minerva Fellowship. There are no competing interests.
1. What is the main contribution of the paper regarding RNN architecture and training?
2. What are the strengths of the proposed approach, particularly in terms of its mathematical analysis?
3. What are the weaknesses of the paper regarding its experimental scope and potential impact?
4. How does the reviewer assess the significance of the finding and its implications for model compression?
5. Are there any suggestions for improving the paper, such as focusing more on experimental analysis or exploring more complex problems?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors study how a given RNN architecture deviates from its initial random weights over training. Surprisingly, they find that in 3 very simple tasks, the deviation from the initial values is very low rank (rank 6 at most for an RNN with 256 units). Then the authors proceed to analytically study why this phenomenon emerges, in a simplified setup (linear RNNs).

Strengths
- The authors make a surprising finding. When you decompose an RNN's weight matrix into its initial value and the deviation from it during training, the deviation (∆W) can have very low rank.
- The authors study the phenomenon mathematically in a simplified setting.
- The implications of such a phenomenon (if it holds in realistic setups) are of VERY HIGH importance for the model compression community. I would be willing to considerably improve my rating if the authors can show that this phenomenon happens in more realistic scenarios.
- While the analytical study is nice, it takes too much space. I would focus more on experimentally analyzing it to argue it's a phenomenon worth studying more in depth.

Weaknesses
- There are no experiments on more complex problems, which reduces the impact of the work considerably.
NIPS
1. What is the main contribution of the paper regarding RNNs?
2. What are the strengths of the proposed approach or method?
3. What are the weaknesses or limitations of the paper, particularly in terms of task complexity?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
POST REBUTTAL COMMENT ---- I thank the authors for the additional results in the rebuttal, and think that this paper is a good submission. I leave my score unchanged. ------

This paper examines the relationship between initial connectivity, task structure, and changes in connectivity in RNNs, based on the hypothesis that low-dimensional tasks induce low-rank changes in connectivity, which in turn makes it possible to explain the phenomenon of accelerated learning in the presence of random initial connectivity. The paper studies the influence of the connectivity strength at initialization on the learning dynamics as well as on the final connectivity matrix by proving a series of novel claims (analytically and experimentally) highlighting how the learning dynamics and the final connectivity are heavily influenced by the choice of initial connectivity. For the experimental part, the hypotheses are validated on three low-dimensional toy tasks inspired by the neuroscience literature. For the analytical part, the simplified, analytically tractable case of a linear RNN on a simplified task is considered, where the degree of accelerated learning is quantified as a function of the initial connectivity strength.

Strengths
The paper proves a diverse set of novel claims regarding gradient-based learning dynamics in RNNs, experimentally as well as analytically. It provides a useful framework that could be used to understand the interplay between task dimension, initial connectivity, and the rank of the weight matrix changes, as well as their influence on the structure of the final weight matrix in broader settings of gradient-based learning in neural networks.

Weaknesses
One could have examined how exactly the claims change for more complex, higher-dimensional tasks.
NIPS
Title The interplay between randomness and structure during learning in RNNs Abstract Recurrent neural networks (RNNs) trained on low-dimensional tasks have been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices, despite the unconstrained nature of the learning algorithm. To identify the origin of the low-rank structure, we turn to an analytically tractable setting: training a linear RNN on a simplified task. We show how the low-dimensional task structure leads to low-rank changes to connectivity. This low-rank structure allows us to explain and quantify the phenomenon of accelerated learning in the presence of random initial connectivity. Altogether, our study opens a new perspective to understanding trained RNNs in terms of both the learning process and the resulting network structure. 1 Introduction Recurrent neural networks (RNNs) have been used both as tools for machine learning, and as models for neuroscience. In the latter context, RNNs are typically initialized with random connectivity and trained on abstractions of tasks used in experimental settings [3, 20, 23, 32, 33, 35, 37, 40]. The obtained networks are then compared to both behavioral and neural experimental results, with the added advantage that the RNNs are more amenable to analysis than their biological counterparts [34]. Despite this advantage, the understanding of how RNNs implement neuroscience tasks is still limited. Open questions concern especially the relationship between the final connectivity and the task, and its formation through training. Here, we examine the relation between the initial connectivity of the RNN, the task at hand, and the changes to connectivity through training. We use unconstrained gradient descent that can potentially alter the connectivity completely. However, evaluating nonlinear RNNs trained on several neuroscience-inspired tasks, we observe that the connectivity changes are small compared to the initial connectivity. We thus split the connectivity matrix W at the end of training into the initial part W0 and the changes ∆W , writing W = W0 + ∆W . (1) For all tasks we consider, we find that the training-induced connectivity structure ∆W is of low rank, despite the unconstrained nature of training used. This finding directly connects gradient-based learning with a number of existing neuroscience frameworks based on low-rank aspects of connectivity 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. [4, 9, 12, 14, 18, 21, 24, 33, 36]. Despite the low-rank nature of the changes to connectivity ∆W , the initial, full-rank, random connectivity W0 plays an important role in learning. Consistent with previous work [28, 33], we find that the initial connectivity accelerates learning. Moreover we show that the final, trained network relies on correlations between ∆W and W0. In the second part of our work, we analyze the mechanism behind these observations in a simplified and analytically tractable setting: nonlinear dynamics of learning in a linear RNN trained on a simple input-output mapping task. 
We show how the low-dimensional task structure leads to low-rank connectivity changes; importantly, the amplitude and geometry of these low-rank changes depend on the random initial connectivity. Our work reveals how this dependence accelerates learning and quantifies the degree of acceleration as a function of initial connectivity strength. Finally, we show that our results extend to real-world settings of an LSTM network trained on a natural language processing task, suggesting practical applications of our results. 2 Training RNNs on low-dimensional tasks Tasks We trained RNNs on three tasks inspired by the neuroscience literature. All tasks are characterized by a small number of input and output channels. The first task is a working memory task, in which the network receives pulses from two different input channels and needs to remember the sign of the last pulse in each channel independently [34]. The second task is a context-dependent decision task: The network receives two noisy signals, as well as one of two context inputs which indicates the relevant signal. After the input presentation, it needs to output whether the average of the relevant signal was positive or negative [20]. The third task is a delayed-discrimination task [25] in which the network receives two positive pulses separated by a delay. After yet another delay, it needs to output which of the two pulses had the larger amplitude. Based on their origin, we refer to the three tasks as "flip-flop" [34], "Mante" [20], and "Romo" [25] task, respectively. For each task, we plotted a single trial for a successfully trained network in Fig. 1(a-c). Detailed parameters can be found in the supplementary. RNN model Each RNN model consists of N neurons whose state vector evolves according to ẋ(t) = −x(t) +Wφ(x(t)) + √ N Nin∑ i=1 miui(t) . (2) The recurrent input is given by the firing rate vector φ(x) multiplied by the weight matrix W . We use the element-wise nonlinearity φ = tanh. The network receives time-dependent inputs ui(t) through input vectors mi. The output is the projection of the firing rate onto readout vectors wi, namely zi(t) = wTi φ(x(t))√ N for i in {1, . . . , Nout} . (3) We formulate target values ẑi(t) during specific segments of the trial [see dark lines for output panels in Fig. 1(a-c)]. The task determines the numbers Nin and Nout of input and output vectors. For example, the Mante task requires four input vectors (for both signals and contexts) and a single output vector. We are interested in the behavior of large networks, N >> 1, while the dimension of the tasks is small, Nin, Nout ∼ O(1). For the simulation, we chose N to be large enough so that learning dynamics become invariant under changes in N (see supplementary Fig. S1). Training and initialization For training the RNNs, we formulated a quadratic cost in zi(t) and applied the gradient descent method “Adam” [15] to the internal connectivity W as well as to the input and output vectors mi, wi. Restricting the updates to W or training with SGD impaired the convergence times but yielded similar results (not shown). The initial input and output vectors were drawn independently from N (0, 1/N). We initialized the internal weights as a random matrix W0 with independent elements drawn from N (0, g2/N). The parameter g thus scales the strength of the initial connectivity. 
Learning dynamics in the absence of initial connectivity To understand what kind of connectivity arises during learning, we first looked at the simplest case without initial connectivity, g = 0. The loss curves indicate convergence for all three tasks [see darker lines in Fig. 1(d-f)]. We analyzed the 2 connectivity at the end of training by computing its singular values (SVs). For the flip-flop task, we found that the first two SVs were much larger than the remaining ones [Fig. 1(g)]. To see whether the network utilizes this approximate rank-two structure, we replaced the changes ∆W with the singular value decomposition truncated at rank R, ∆W (R) = R∑ r=1 srurv T r . (4) Note that we keep the initial connectivity W0. The loss after truncation indeed drops to zero at rank 2 [Fig. 1(j)]. A similar situation is observed for the Mante and Romo tasks, see Fig. 1(h, k) and (i, l), respectively. Although for these tasks the SVs drop more slowly, the first six SVs are discernibly larger than the remaining tail; the truncation loss drops to zero at rank 4 and 6, respectively. In sum, we observe that for g = 0, training via gradient descent yields an effective low-rank solution for all three tasks. Effects of initial connectivity on learning dynamics and connectivity The loss-curves in Fig. 1(d-f) indicate a strong influence of the initial connectivity strength g on the training dynamics (lighter colors for g = 0.9). We observe that learning becomes faster and smoother with initial connectivity. In Fig. 2(a), we quantify the acceleration of learning with the number of epochs needed to reach 5% of the initial loss. We observe that convergence time smoothly decreases as a function of connectivity strength g; for very large g, networks finally transition to chaotic activity [31], and convergence time increases again. 3 After observing the drastic decrease in learning time, we wondered how initial connectivity affects the resulting connectivity changes. The first observation is that, for increasing g, the final connectivity W = W0 + ∆W is dominated by W0, since ||W0|| = √ Ng. In fact, the norm of ∆W not only remains unchanged for increasing N (see supplementary), but further decreases with increasing g, see Fig. 2(b). If a smaller ∆W solves the task for larger initial connectivity, it is reasonable to assume that W0 amplifies the effect of ∆W . To test this idea, we shuffled the elements of W0, destroying any correlation between W0 and ∆W , while maintaining its statistics. The loss after replacing the connectivity with W shuffle0 + ∆W is shown in Figure 2(c). For all tasks, shuffling strongly degraded performance except for cases with very weak initial connectivity. Low-rank changes in connectivity Despite the effects of the initial connectivity on convergence time and the norm of ∆W , the low-rank nature of ∆W remains similar to the case with g = 0. In Fig. 1(g-h), the SVs of ∆W are plotted in lighter colors. We see that the pattern and overall amplitude is very similar to the darker lines for g = 0: only a small number of SVs dominates over a tail. To assess the functional rank, we replaced ∆W in our RNN with the rank-R truncation, Eq. (4), while keeping the initial connectivity W0 identical. The resulting loss, Fig. 1(j-l), indicates that the effective connectivity change is indeed low-rank: for all three tasks, it drops to a value close to zero before rank 10. 
We quantified this observation by computing the "functional rank", the rank at which the loss decreases below 5% of the initial value [see Fig. 2(d)]. This functional rank is between 2 and 10 for all three tasks (averaged over independent simulations). It increases with g for the flip-flop task, while it is less affected for the other two tasks.

3 Analytical results for linear system

The observations of effective low-rank changes in connectivity and of accelerated learning for random initial connectivity were general across the three different tasks considered. To understand the underlying mechanisms, we turn to a much simpler task and a linear RNN model. This setting allows us to analytically describe the learning dynamics, understand the origin of the low-rank connectivity changes, and quantify how correlations between W_0 and ∆W accelerate learning. Our approach is similar to that of Saxe et al. [27], who analyzed gradient descent dynamics in linear feed-forward networks. Both for the feed-forward and the recurrent model, the learning dynamics are nonlinear despite the linearity of the networks. Nevertheless, we will see that the recurrent nature of our models results in very different dynamics compared to the linear feed-forward model. Below we present our main results for the simplified model; the details of all our analytical derivations can be found in the supplementary.

Simplified setting Our simple task is an input-output transformation: given a constant input u(t) = 1, the output z(t) has to reach a target value ẑ at time T. The corresponding loss is L = (ẑ − z(T))²/2. An example with two different target values ẑ = 0.5, 2.0 is plotted in Fig. 3(a). The linear RNN model is obtained by replacing the nonlinearity in Eq. (2) with the identity, φ(x) = x, and keeping only a single input and output. All weights are initialized as before. We keep the initial connectivity strength g < 1 so that the linear network remains stable. To further simplify, we constrain weight changes to the recurrent weights W only, and apply plain gradient descent. To compare between different simulations, we define the learning time τ = η · epochs. Evaluating the trained networks reveals similar phenomena as observed for the nonlinear, more complex tasks. Figure 3(b-e) shows the loss and SVs of ∆W over learning time for two values of g. We observe that learning induces low-rank connectivity changes – in fact, a single SV dominates. Because of the small magnitude of the second SV, truncating ∆W at rank 1 does not lead to increased loss (not shown), so that the functional rank as defined in the previous section is 1. Comparing between g = 0 and g = 0.6, we further see that learning is accelerated by the initial connectivity, and that the magnitude of the first SV decreases with increasing g. These observations will be quantified with our analytical results.

Gradient descent dynamics For our analytical treatment, we only consider the limit of long trials, with the output $z = \lim_{T\to\infty} z(T)$ at the end of a trial. In this limit, the network converges to its fixed point $x^* = \sqrt{N}\,(I - W)^{-1} m$ with identity matrix I, and the readout is

$$z = \frac{w^T x^*}{\sqrt{N}} = w^T (I - W)^{-1} m. \tag{5}$$

The input and output vectors, m and w, remain fixed during training, and only W is changed. We can explicitly compute the changes induced by the gradient of the loss:

$$\frac{dW(\tau)}{d\tau} = -\frac{dL}{dW} = \left[\hat{z} - z(\tau)\right]\left[I - W^T(\tau)\right]^{-1} w\, m^T \left[I - W^T(\tau)\right]^{-1}, \tag{6}$$

with initial connectivity W(0) = W_0.
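A minimal sketch of the gradient-flow dynamics of Eqs. (5) and (6), using small plain gradient-descent steps as a stand-in for the continuous flow; the learning rate, step count, and tolerance are arbitrary illustrative choices.

```python
import numpy as np

def train_linear_rnn(W0, m, w, z_hat, eta=1e-3, n_steps=20000, tol=1e-6):
    """Gradient descent on L = (z_hat - z)^2 / 2 with z = w^T (I - W)^{-1} m (Eq. 5),
    using the explicit gradient of Eq. (6). Returns the trained W and readout history."""
    I = np.eye(W0.shape[0])
    W = W0.copy()
    history = []
    for _ in range(n_steps):
        z = w @ np.linalg.solve(I - W, m)          # readout at the fixed point
        history.append(z)
        if abs(z_hat - z) < tol:
            break
        Binv_T = np.linalg.inv(I - W.T)            # [I - W^T(tau)]^{-1}
        W = W + eta * (z_hat - z) * Binv_T @ np.outer(w, m) @ Binv_T
    return W, np.array(history)

rng = np.random.default_rng(1)
N, g, z_hat = 256, 0.6, 0.5
W0 = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0 / np.sqrt(N), N)
w = rng.normal(0.0, 1.0 / np.sqrt(N), N)
W, zs = train_linear_rnn(W0, m, w, z_hat)
print(np.linalg.svd(W - W0, compute_uv=False)[:3])  # a single SV should dominate
```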
We made a continuous-time approximation of the weight updates ("gradient flow"), valid for small learning rates η. Note that the readout z at the fixed point depends on the learning time τ through W(τ). Note that, unlike the feed-forward case [26], the inverse of W appears in Eq. (6), opening the possibility of divergence during learning. It also precludes a closed-form solution to the dynamics. However, we can obtain analytical insight by expanding the learning dynamics in learning time around the initial connectivity [5]. We write

$$W(\tau) = \sum_{k=0}^{\infty} W_k \frac{\tau^k}{k!}. \tag{7}$$

The changes in connectivity are obtained by subtracting W_0, which yields $\Delta W(\tau) = W_1 \tau + W_2 \tau^2/2 + \dots$. We analytically computed the coefficients W_k by evaluating $d^k W / d\tau^k$ at τ = 0. A comparison of the expansion up to third order with the numerical results from gradient descent learning indicates close agreement during most of the learning [see Fig. 3(b-e), full vs. dashed lines].

Learning dynamics in the absence of initial connectivity It is instructive to first consider the case of no initial connectivity, g = 0. The readout at the beginning of training is then $z_0 = w^T m$. Due to the independence of m and w, the expected value of z_0 vanishes. Moreover, the standard deviation scales as $1/\sqrt{N}$ with the network size. In this work, we are interested in the learning dynamics for large networks; all our analytical results are valid in the limit N → ∞. We therefore write z_0 = 0. Similar reasoning applies to all scalar quantities of interest: they are of order O(1), with deviations O(1/√N). With this self-averaging property, we omit stating the limit as well as the expectation symbol and use the equality sign instead. Inserting W_0 and z_0 – both zero – into the gradient descent, Eq. (6), yields the first-order coefficient

$$W_1 = \hat{z}\, w m^T. \tag{8}$$

Hence, the weight changes at linear order in τ are described by a rank-one matrix, and the readout is $z(\tau) = \tau \hat{z} + O(\tau^2)$. The gradient descent for g = 0 would therefore converge at $\tau^*_1 = 1$ if it only depended on the first-order term. The numerical results already show deviations in the form of faster or slower convergence, depending on the target ẑ [see dark lines in Fig. 3(b,c) and note that τ̃ = τ for g = 0]. This indicates the importance of higher-order terms. We observe that the gradient in Eq. (6) contains the transpose W^T. At higher orders, this term introduces other outer-product combinations of m and w. In fact, for g = 0, these are the only vectors present in the gradient, so that the connectivity can always be written as

$$\Delta W(\tau) = \begin{bmatrix} w & m \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} w^T \\ m^T \end{bmatrix}. \tag{9}$$

This form implies that ∆W will be at most a rank-two matrix. An analysis of the SVs [Eq. (14) below for general g] reveals that the second SV remains very small, as visible in Fig. 3(d,e). The entries of the 2×2 matrix A(τ) up to order O(τ³) are (see supplementary)

$$A_{11} = \frac{\hat{z}^2}{2}\left(\tau^2 - \tau^3\right), \qquad A_{12} = \hat{z}\left(\tau - \frac{\tau^2}{2} + \frac{\tau^3}{6}(1 + 2\hat{z}^2)\right), \qquad A_{21} = \frac{\hat{z}^3 \tau^3}{3}, \tag{10}$$

and A_22 = A_11. The first surprising observation is that the target value ẑ enters nonlinearly into the expressions above. This is the origin of the qualitative difference between learning curves for different values of the target output in Fig. 3(b,c). We further observe that the connectivity changes develop a nonzero eigenvalue only at O(τ²). This is because the off-diagonal terms, which grow linearly with τ, contribute a zero eigenvalue because $m^T w = 0$. At second order, the diagonal entries of A – and, with them, the eigenvalues – change.
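To make the g = 0 analysis concrete, the sketch below transcribes the third-order entries of Eq. (10) and estimates the 2×2 matrix A from a simulated ∆W by projecting onto w and m. The projection is only approximate (it assumes w and m are close to orthonormal, which holds at large N), and the function names are hypothetical.

```python
import numpy as np

def A_third_order(z_hat, tau):
    """Entries of A(tau) up to O(tau^3), Eq. (10), with A22 = A11 (case g = 0)."""
    A11 = 0.5 * z_hat**2 * (tau**2 - tau**3)
    A12 = z_hat * (tau - tau**2 / 2 + tau**3 / 6 * (1 + 2 * z_hat**2))
    A21 = z_hat**3 * tau**3 / 3
    return np.array([[A11, A12], [A21, A11]])

def estimate_A(dW, w, m):
    """Approximate A from Delta W = [w m] A [w m]^T; valid when w and m are
    nearly orthonormal, as they are for large N."""
    U = np.stack([w, m], axis=1)   # N x 2 basis of the available directions
    return U.T @ dW @ U
```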
Changes in connectivity eigenvalues imply changes in time scales of network dynamics, which may be necessary for some tasks (for example, those involving memory), but can also lead to problems of exploding gradients (see supplementary).

Effects of initial connectivity In the presence of initial connectivity, we can still apply the expansion introduced above. Due to the independence of W_0, m, and w, the initial readout z_0 remains zero. The gradient descent, Eq. (6), then directly yields the first-order connectivity coefficient

$$W_1 = \hat{z}\, B^T w m^T B^T, \quad \text{with } B = (I - W_0)^{-1}. \tag{11}$$

Thus, W_1 is still a rank-one matrix despite the full-rank initial connectivity. However, the connectivity changes now include the initial connectivity W_0 via the matrix B. As a consequence, the norm of the first-order coefficient, $\|W_1\| = \hat{z}\beta$ (see supplementary), increases with g by the factor

$$\beta = w^T B B^T w = m^T B^T B m = \frac{1}{1 - g^2}. \tag{12}$$

The readout is also affected by the initial connectivity. We compute (see supplementary)

$$z(\tau) = \tau \hat{z} \beta^2 + O(\tau^2). \tag{13}$$

Learning converges when z(τ) reaches the target value ẑ. The first-order prediction of the convergence time is therefore $\tau^*_1 = 1/\beta^2$, and the initial connectivity accelerates learning by the factor $1/\beta^2 = (1 - g^2)^2$. We can decompose this acceleration into two factors: the growth rate is increased by β, and the norm of the final connectivity changes is decreased by 1/β. For the first contribution, we note that the first-order coefficient W_1 is, by definition, the constant part of the gradient, and hence the rate at which connectivity changes. For the second contribution, we compute the norm of ∆W(τ) at the predicted convergence time $\tau^*_1$ (see supplementary). In Fig. 4(a-c), we compare our first-order predictions with numerical simulations. In panels (a,b), we plot the convergence time τ* and the norm of ∆W at the end of training. As for the more complex, nonlinear tasks [see Fig. 2(a,b)], we defined the numerical τ* as the point in time where the loss drops to 5% of the initial value. For the gradient, panel (c), we averaged the norm $\|dW/d\tau\|$ over the interval [0, τ*]. To compare the collapsed curves with the predicted scalings, we normalized the curves for the different target values ẑ by their value at g = 0 for all three quantities. We observe good agreement between the numerical results and the theory, even though we only used the first-order predictions, and τ* often shows notable differences between theory and simulation [for example in Fig. 3(b,c)]. Finally, we assess the role of correlations between ∆W and W_0 by shuffling W_0. After shuffling, the readout loses the amplification by β² and is hence $z_{\mathrm{shuff}} = \tau^*_1 \hat{z}$. The corresponding loss is $L_1^{\mathrm{shuff}} = L_0\, g^4 (2 - g^2)^2$, with initial loss $L_0 = \hat{z}^2/2$. A comparison of this first-order prediction with numerical results shows qualitative agreement, with notable quantitative differences especially for the larger target amplitude, see Fig. 4(d). A comparison with the nonlinear case, Fig. 2(c), shows that our simple model captures the phenomenon qualitatively.

Higher-order terms Does the initial connectivity lead to higher-rank changes in connectivity? For g > 0, the explicit rank-two expression for the weight changes, Eq. (9), does not hold anymore: the input and output vectors accumulate multiples of B and B^T (such as $B^T w$ and $B B^T w$), which increase the number of possible outer products – and hence potentially the rank.
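As a brief numerical aside on Eq. (12), the amplification factor β = 1/(1 − g²) can be checked directly for random initial connectivities; the network size and seed below are arbitrary, and fluctuations of order 1/√N around the theoretical value are expected.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2000
for g in (0.0, 0.3, 0.6, 0.9):
    W0 = rng.normal(0.0, g / np.sqrt(N), (N, N))
    w = rng.normal(0.0, 1.0 / np.sqrt(N), N)
    B = np.linalg.inv(np.eye(N) - W0)        # B = (I - W0)^{-1}
    beta_empirical = w @ B @ B.T @ w         # w^T B B^T w, Eq. (12)
    print(f"g={g:.1f}  empirical={beta_empirical:.3f}  theory={1.0 / (1.0 - g**2):.3f}")
```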
However, computing the first two SVs, s_1 and s_2, up to order O(τ³) (see supplementary) shows that ∆W remains approximately rank one:

$$s_1 = \hat{z}\beta\left[\tilde{\tau} - \frac{\tilde{\tau}^2}{2} + \left(1 + \frac{7}{2}\hat{z}^2\beta\right)\frac{\tilde{\tau}^3}{6}\right], \qquad s_2 = \frac{\hat{z}^3\tilde{\tau}^3}{12}, \tag{14}$$

where τ̃ = β²τ is the effective learning time. We observe that s_1 grows linearly, but s_2 only at third order in τ. Different parts of connectivity therefore grow on top of each other, giving rise to a temporal hierarchy in the learning dynamics. Numerical simulations show good agreement with this prediction (see supplementary). We further state the resulting readout up to O(τ³):

$$z(\tau) = \hat{z}\left[\tilde{\tau} - \frac{\tilde{\tau}^2}{2} + \left(1 + 8\hat{z}^2\beta\right)\frac{\tilde{\tau}^3}{6}\right]. \tag{15}$$

The appearance of β in the third-order contributions in Eqs. (14) and (15) shows that the learning with different values of g does not entirely collapse onto one curve after rescaling the time by β². Instead, there is an additional acceleration, which increases with increasing target amplitude ẑ. This effect can be appreciated in Fig. 3(b,c), where for larger ẑ the loss curve becomes concave. Note that our approximation up to O(τ³) predicts this trend, despite quantitative disagreement. As we saw in Fig. 4, the scaling of the convergence time τ* with g is not strongly affected by the higher-order terms.

4 Beyond neuroscience tasks

We asked whether our observation that connectivity changes are low-rank despite full-rank initial connectivity would extend to more complex network architectures and tasks, specifically those not restricted to a small input or output dimension. We therefore trained a two-layer LSTM network on a natural language processing task, sentiment analysis of movie reviews [30] (details in supplementary). The SVs at the end of training showed the pattern that we predicted: learning only leads to small changes in the connectivity, so that the final connectivity W is dominated by the initial connectivity and has full rank. The changes ∆W only have a small number of large SVs. For the recurrent weights of layer 2, the SVs are plotted in Fig. 5(a); other weights behave similarly (see supplementary). As before, we evaluated the accuracy of networks after truncation at a given rank, see Fig. 5(b). We truncated the recurrent weights of both layers as well as the input weights to layer 2. If we keep the random parts and truncate the changes as in Eq. (4), a rank-10 approximation already yields the final training accuracy. In contrast, if we truncate the entire weight matrices, as previously suggested [38], it takes more than half of the network rank (256 neurons per layer) to get close to the final accuracy.

5 Discussion

Summary of results Our key finding is that the connectivity changes ∆W induced by unconstrained training on low-dimensional tasks are of low rank. With our simplified analytical model, we demonstrated why: the connectivity changes are spanned by a small number of existing directions, determined by the input and output vectors. Without initial connectivity, the maximum rank that linear networks can obtain through learning is in fact bounded by this number. The initial connectivity W_0 enlarges the pool of available directions. The fact that learning arrives at a low-rank solution even in the presence of initial connectivity is then a result of the temporal structure of learning: initially, only a small number of available directions grow, inducing a low-rank structure.
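The third-order expressions of Eqs. (14) and (15) translate directly into a small helper that can be compared against the SVD of ∆W(τ) from a gradient-flow simulation such as the one sketched earlier; the function name is hypothetical.

```python
import numpy as np

def third_order_predictions(z_hat, g, tau):
    """First two singular values of Delta W (Eq. 14) and the readout (Eq. 15),
    up to O(tau^3), with beta = 1/(1 - g^2) and tau_tilde = beta^2 * tau."""
    beta = 1.0 / (1.0 - g**2)
    tt = beta**2 * tau
    s1 = z_hat * beta * (tt - tt**2 / 2 + (1 + 3.5 * z_hat**2 * beta) * tt**3 / 6)
    s2 = z_hat**3 * tt**3 / 12
    z = z_hat * (tt - tt**2 / 2 + (1 + 8 * z_hat**2 * beta) * tt**3 / 6)
    return s1, s2, z
```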
For our simplified task, the first of these structures already reduces the loss, and learning converges before other structures emerge; the final connectivity changes are hence rank-one. For other tasks, the available input and output directions alone may not be sufficient, so that initial connectivity becomes necessary for successful learning (see supplementary). Note that our theoretical analysis is limited to linear networks; however, nonlinearity may also contribute to generating novel learning directions.

Our numerical simulations further showed that initial connectivity significantly accelerated learning. Our analytical results revealed the underlying mechanism: the input and output vectors spanning the gradient are multiplied by powers of W_0, which strongly correlates ∆W to W_0. This correlation amplifies the effect of ∆W, and removing the correlation by shuffling W_0 indeed degrades performance. This is in line with a recent study demonstrating such amplification through correlation between a random matrix and a low-rank perturbation in a model without learning [29]. Finally, we showed that the general observation of low-rank weight changes indeed holds even in a much more complex setting: a sentiment analysis task and a two-layer LSTM network. This implies a large potential for network compression [38]: one may truncate the changes in connectivity at a very low rank and recover the specific random initial connectivity using the seed of its random number generator (a sketch of this idea appears in the code example below).

Task dimension and rank Low-rank connectivity structures have previously been studied and applied. On the one hand, a number of RNN frameworks explicitly rely on low-rank feedback for training [4, 9, 14, 18, 33]. On the other hand, low-rank networks are amenable to analysis, because the network activity is low-dimensional and evolves in directions determined by the vectors spanning the connectivity [12, 21, 24, 29, 36]. Our surprising observation that unconstrained gradient descent also leads to low-rank connectivity opens new possibilities for studying general gradient-based learning with the tools developed by previous works. We observed that the functional rank of the training-induced connectivity changes is strongly task dependent. A better understanding of the relation between task and connectivity calls for a concept of a task dimension, ideally based on the underlying abstract computations and independent of the specific implementation [10, 17, 19, 40]. Such a concept would allow us to compare the solutions obtained by different algorithms and to define a necessary minimal rank for a given task [8].

Learning as a dynamical process and relation to feed-forward networks Our approach stresses a dynamical perspective on learning, in which the solutions are not determined by the task alone, but also by the initial connectivity and the temporal evolution of weight changes. In particular, our expansion in learning time shows that some components in the connectivity only grow after others are present, which induces a temporal hierarchy. This affects the solutions the network arrives at. The temporal structure may also induce pitfalls for learning, for example divergent gradients when the networks undergo a phase transition [22] (see supplementary). A better understanding of the learning dynamics could be used to circumvent such problems, for example by introducing adapted learning curricula [6].
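A minimal sketch of the compression idea mentioned above: store only the seed of the random number generator that produced W0 together with a low-rank factorization of ∆W. It assumes W0 was drawn by exactly the NumPy call shown below; the storage convention and function names are illustrative.

```python
import numpy as np

def compress(W_trained, seed, g, R):
    """Store the RNG seed of W0 plus a rank-R factorization of Delta W:
    roughly 2*N*R numbers instead of N^2."""
    N = W_trained.shape[0]
    W0 = np.random.default_rng(seed).normal(0.0, g / np.sqrt(N), (N, N))
    U, s, Vt = np.linalg.svd(W_trained - W0, full_matrices=False)
    return {"seed": seed, "g": g, "N": N, "U": U[:, :R] * s[:R], "Vt": Vt[:R]}

def decompress(c):
    """Regenerate W0 from the stored seed and add back the low-rank change."""
    W0 = np.random.default_rng(c["seed"]).normal(
        0.0, c["g"] / np.sqrt(c["N"]), (c["N"], c["N"]))
    return W0 + c["U"] @ c["Vt"]
```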
Learning in feed-forward networks has previously been analyzed from a similar perspective. It was found that the statistical structure of the training data induces a temporal hierarchy with long plateaus between step-like transitions in the learning curve [1, 11, 16, 26, 27, 41]. The hierarchy in our work originates in the dynamics of the RNN rather than the structure of the training data. For example, the plateaus seen in Fig. 1(d-f) can be related to phase transitions in the network dynamics, such as the emergence of new fixed points. Combining such internal learning dynamics with structured training data would be an interesting future direction. Finally, recent work on feed-forward networks identified two different learning regimes: a kernel regime vs. a rich, feature-learning regime [2, 7, 13, 39]. In the former, the change in weights vanishes as the network width increases, and the network function can be linearized around the weights at initialization. In our work, too, the weight changes ∆W become vanishingly small in the limit of wide networks. However, even such vanishing ∆W may significantly change the dynamics of the neural network by inducing large outlier eigenvalues [29]. For example, the readout for our linear network, Eq. (5), diverges for an eigenvalue of W approaching 1. In such a case, the network function cannot be approximated by linearization around the initial weights. Understanding the relation between learning regimes in feed-forward and recurrent networks constitutes an interesting field for future studies.

Broader Impact

This work is a theoretical study on the dynamics of learning in RNNs. We show what kind of connectivity changes are induced by gradient descent. We expect that our insights will help to understand learning in RNNs, which benefits the research community as a whole and may ultimately lead to the development of improved learning algorithms or schemes. As a possible application, we show that one can use our results to efficiently compress a multi-layer RNN trained on a natural language processing task. In this work, there are no new algorithms, tasks, or data sets introduced. Therefore, the questions regarding any disadvantages, failures of the system, or biases do not apply.

Acknowledgments and Disclosure of Funding

This work was supported in part by the Israeli Science Foundation (grant number 346/16, OB). The project was further supported by the ANR project MORSE (ANR-16-CE37-0016), the program "Ecoles Universitaires de Recherche" launched by the French Government and implemented by the ANR, with the reference ANR-17-EURE-0017. F.S. acknowledges the Max Planck Society for a Minerva Fellowship. There are no competing interests.
1. What is the main contribution of the paper regarding recurrent neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its analytical treatment and simplicity? 3. Do you have any concerns or limitations regarding the scope of the paper's applications? 4. How does the reviewer assess the reliability and validation of the paper's conclusions, especially regarding its assumptions and numerical validation?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Recent studies on computation by recurrent neural networks assume that the computation performed by the system is due to a low-rank component in the dense connectivity matrix. In this work, the authors show for simple tasks that a gradient-descent algorithm, which is not constrained to operate in low dimension, results in a low-rank change to the connectivity matrix. This low-rank solution is achieved with or without full-rank initial random connectivity. The authors further show that initial random connectivity can improve learning by shortening the learning time. By considering a linear network, the authors could derive an analytical expression for the loss and singular values of the learned weights across training. They show that the learning follows nontrivial dynamics. Finally, they show that the simplified linear model is a good predictor for the behavior of more complex tasks and nonlinear networks.

Strengths This work offers interesting new observations:
* The authors show the benefits of training near the transition to chaos — something previously observed in RNN training but not rigorously shown through analytics.
* They present evidence that the learned low-rank structure of the recurrent connectivity is correlated with the initial random connectivity.
* An interesting finding is that even for a simple linear system with a fixed target, the training dynamics depend on the input's value in a nonlinear way.
* The work provides excellent analytical treatment using a toy model that leads to interpretable results. The theory is verified with a close match to the toy model. Furthermore, a good match with more complex tasks indicates the right choice of a toy model.

Weaknesses
* The tasks studied in this work are all simple tasks that come down to converging to a fixed point. While this is the assumption of several recent studies in the field, it is a limited view of problems tackled by RNNs. I don't see any reason to assume that the low-rank connectivity assumption would extend to complex tasks where the target is not a simple fixed point, for example, problems that require backpropagation through time to learn. I think the scope should be stated more clearly in the paper.
* The analysis relies on a power-series expansion in training time τ. Many of the conclusions are made on the fixed point where β²τ = 1. The authors note that the time always appears as β²τ in Eqs. (13)-(15). The authors use the expansion to conclude that the learning converges within 1/β². It is not clear how the authors make this assumption about the convergence when it is clearly outside the scope of their approximation. Numerics indeed seems to validate the result, and I don't see a reason to doubt the final result, but the validity of the derivations is not clear to me.
NIPS
Title The interplay between randomness and structure during learning in RNNs

Abstract Recurrent neural networks (RNNs) trained on low-dimensional tasks have been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices, despite the unconstrained nature of the learning algorithm. To identify the origin of the low-rank structure, we turn to an analytically tractable setting: training a linear RNN on a simplified task. We show how the low-dimensional task structure leads to low-rank changes to connectivity. This low-rank structure allows us to explain and quantify the phenomenon of accelerated learning in the presence of random initial connectivity. Altogether, our study opens a new perspective to understanding trained RNNs in terms of both the learning process and the resulting network structure.

1 Introduction

Recurrent neural networks (RNNs) have been used both as tools for machine learning, and as models for neuroscience. In the latter context, RNNs are typically initialized with random connectivity and trained on abstractions of tasks used in experimental settings [3, 20, 23, 32, 33, 35, 37, 40]. The obtained networks are then compared to both behavioral and neural experimental results, with the added advantage that the RNNs are more amenable to analysis than their biological counterparts [34]. Despite this advantage, the understanding of how RNNs implement neuroscience tasks is still limited. Open questions concern especially the relationship between the final connectivity and the task, and its formation through training. Here, we examine the relation between the initial connectivity of the RNN, the task at hand, and the changes to connectivity through training. We use unconstrained gradient descent that can potentially alter the connectivity completely. However, evaluating nonlinear RNNs trained on several neuroscience-inspired tasks, we observe that the connectivity changes are small compared to the initial connectivity. We thus split the connectivity matrix W at the end of training into the initial part W_0 and the changes ∆W, writing

$$W = W_0 + \Delta W. \tag{1}$$

For all tasks we consider, we find that the training-induced connectivity structure ∆W is of low rank, despite the unconstrained nature of the training used. This finding directly connects gradient-based learning with a number of existing neuroscience frameworks based on low-rank aspects of connectivity [4, 9, 12, 14, 18, 21, 24, 33, 36]. Despite the low-rank nature of the changes to connectivity ∆W, the initial, full-rank, random connectivity W_0 plays an important role in learning. Consistent with previous work [28, 33], we find that the initial connectivity accelerates learning. Moreover, we show that the final, trained network relies on correlations between ∆W and W_0. In the second part of our work, we analyze the mechanism behind these observations in a simplified and analytically tractable setting: nonlinear dynamics of learning in a linear RNN trained on a simple input-output mapping task.
1. What is the focus and contribution of the paper regarding the dynamics of learning in RNNs? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis? 3. Do you have any concerns or questions regarding the paper's methodology or findings? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper is a theoretical study of the dynamics of learning in RNNs. The study focuses on the relationship between the learnt recurrent connectivity of a network and the underlying task, and the way in which training shapes this relation. The authors first evaluate nonlinear RNNs trained on three example neuroscience-inspired tasks and make the following key observations:
1. Learnt connectivity changes are small (in Frobenius norm) compared to the initial connectivity
2. Training induced change in connectivity is of low-rank even though training is unconstrained
3. Amplitude and geometry of these low-rank changes depend on the initial connectivity
4. Strength of random initial connectivity (g) affects learning dynamics (convergence time first decreases with g and then increases again as networks transition to chaotic activity)
To understand the mechanisms underlying these observations, the authors study a linear RNN performing a simple reaching task for a constant input signal. This simple setting allows them to perform a theoretical analysis of the network's learning dynamics. They demonstrate that the connectivity changes are spanned by a small number of directions specified by the input and readout vectors, and random initial connectivity enlarges the pool of available directions. Further, they derive a Taylor series type expansion in learning time for the learnt weights. Using this expansion they show that some components in the connectivity only grow after others are present, which induces a temporal hierarchy in the learning dynamics.

Strengths This work presents a purely theoretical study of learning dynamics in RNNs. The authors acknowledge that there are no new algorithms, tasks, or data sets introduced. This work however is a potentially important theoretical study. Although the main analytical results are obtained for a simple linear RNN, the approach stresses a dynamical perspective on learning which is important when trying to understand learning in (deep/) biological networks. [Update post-feedback. I appreciate that the authors tried their analysis on a new task. My score remains a solid accept.]

Weaknesses I do have a concern about the expansion in eq(7), which requires learning time τ to be small. For what range of values of τ would this local approximation hold? (In figure 3b, learning converges for τ around 1)
NIPS
Title A Biologically Plausible Neural Network for Slow Feature Analysis

Abstract Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli.

1 Introduction

Unsupervised learning of meaningful latent features from noisy, high-dimensional data is a fundamental problem for both machine learning and brain function. Often, the relevant features in an environment (e.g., objects) vary on relatively slow timescales when compared to noisy sensory data (e.g., the light intensity measured by a single receptor in the retina). Therefore, temporal slowness has been proposed as a computational principle for extracting relevant latent features [8, 19, 31]. A popular approach for extracting slow features, introduced by Wiskott and Sejnowski [31], is Slow Feature Analysis (SFA). SFA is an unsupervised learning algorithm that extracts the slowest projection, in terms of discrete time derivative, from a nonlinear expansion of the input signal. When trained on natural image sequences, SFA extracts features that resemble response properties of complex cells in early visual processing [2]. Impressively, hierarchical networks of SFA trained on simulated rat visual streams learn representations of position and orientation similar to representations encoded in the hippocampus [9]. The relevance of SFA is strengthened by its close relationship to information theoretic objectives and its equivalence to other successful algorithms under certain assumptions. When the time series is reversible and Gaussian, (Linear) SFA is equivalent to maximizing mutual information between the current output of the system and the next input [7, 5]. Moreover, features extracted by several algorithms favoring predictability from real-world datasets are similar to those extracted by SFA [29]. Finally, (Linear) SFA is equivalent to a time-lagged independent components analysis [3, 10], which is a popular statistical technique used to analyze molecular dynamics [22, 20, 26, 27]. Due to its success in modeling aspects of neural processing, deriving an algorithm for SFA with a biologically plausible neural network implementation is an important task.

*Equal contribution
For the purposes of this work, we define biologically plausible to mean that the neural network operates in the online setting (i.e., after receiving an input, it computes its output before receiving its next input, never storing a significant fraction of past inputs), and its synaptic learning rules are local (i.e., a synaptic weight update depends only on variables represented in the pre- and postsynaptic neurons). In addition to satisfying basic properties of neural circuits, these online and locality requirements can lead to networks that are well-suited for analyzing large datasets because they operate in the online setting with low computational overhead. While there are a few online algorithms for SFA, none have biologically plausible neural network implementations that extract multiple slow features. Moreover, there are no neural network implementations for the related information theoretic algorithms discussed above [29, 5]. Kompella et al. propose Incremental SFA [14] (see [16, 32] for extensions). However, this approach relies on non-local learning rules, so it does not meet the above criteria for biological plausibility. Malik et al. [17] use an online generalized eigenvalue problem solver [33] to derive an online algorithm for SFA. While their algorithm for finding one-dimensional projections can be implemented in a biologically plausible network, their extension to multi-dimensional projections is not fully online. In this work, we propose Bio-SFA: an online algorithm for SFA with a biologically plausible neural network implementation, Fig. 1. We adopt a normative approach to derive our algorithm. First, we express the solution of the SFA problem in terms of an objective from classical multidimensional scaling. We then manipulate the objective to arrive at a min-max optimization problem that can be solved in the online setting by taking stochastic gradient descent-ascent steps. These steps can be expressed in terms of neural activities and updates to synaptic weight matrices, which leads to a natural interpretation of our online algorithm as a biologically plausible neural network. To validate our approach, we test our algorithm on datasets of naturalistic stimuli and reproduce results originally performed in the offline setting. The synaptic updates of the feedforward weights W in our network are similar, although not identical, to the updates proposed heuristically by Földiák [8] to extract slow temporal features. However, there is no theoretical analysis of the algorithm in [8]. In contrast, in our normative approach, Bio-SFA is derived directly from an SFA objective, so we can analytically predict its output, as well as the synaptic weights, without resorting to numerical simulation. In addition, the comparison of our learning rules with Földiák’s illuminates the relationship of [8] to SFA. 2 Slow Feature Analysis Here and below, vectors are boldface lowercase letters (e.g., v), and matrices are boldface uppercase letters (e.g., M). We use superscripts to denote the components of a vector (e.g., vi). 2.1 Problem statement Wiskott and Sejnowski [31] proposed the following 2 step method for extracting slow features from a noisy data set: (1) generate a nonlinear expansion of the input signal, and (2) find the slowest, in terms of discrete time derivative, low-dimensional projection of the expanded signal. In this section, we review these 2 steps. Let {s0, s1, . . . 
, sT} be a d-dimensional input signal. (The zeroth time step is included to ensure the discrete-time derivative is defined at t = 1.) The first step of SFA is to generate an m-dimensional expansion {x_t}, referred to as the expanded signal, of {s_t}. Let h = (h^1, . . . , h^m) : R^d → R^m be an expansion function and define
\[ x_t := h(s_t) - \frac{1}{T}\sum_{t'=1}^{T} h(s_{t'}), \qquad t = 0, 1, \dots, T, \]
so that {x_t} is centered. Let k < m. The second step of SFA is to find the k-dimensional linear projection {y_t} of the expanded signal {x_t} that minimizes the mean discrete-time derivative of the output signal {y_t}, subject to a whitening constraint. To be precise, the objective can be formulated as follows:
\[ \operatorname*{arg\,min}_{\{y_t\}} \ \frac{1}{T}\sum_{t=1}^{T} \|\dot{y}_t\|^2 \quad \text{subject to} \quad \frac{1}{T}\sum_{t=1}^{T} y_t y_t^\top = I_k, \tag{1} \]
where ẏ_t is the discrete-time derivative of y_t, and y_t is a linear projection of x_t; that is,
\[ \dot{y}_t := y_t - y_{t-1}, \qquad t = 1, \dots, T, \tag{2} \]
\[ y_t := V^\top x_t, \qquad t = 0, 1, \dots, T, \ \text{ for some } V \in \mathbb{R}^{m \times k}. \tag{3} \]
Note that since {x_t} is centered, the projection {y_t} is also centered.

2.2 Quadratic SFA
The focus of this work is to derive a biologically plausible neural network that learns to output the optimal output signal {y_t} when streamed the expanded signal {x_t}. While our algorithm does not depend on the specific choice of the expansion function h, for concreteness, we provide an example here. In their original paper, Wiskott and Sejnowski [31] proposed setting the components of the function h : R^d → R^m to be the monomials of degree one and two. This choice, which we refer to as "Quadratic SFA", has been widely used in applications [31, 2, 9, 34]. In particular, let m := d + d(d+1)/2 and let h^1, . . . , h^m : R^d → R denote the m possible linear and quadratic functions of the form h(s) := s^i or h(s) := s^i s^j, for 1 ≤ i ≤ j ≤ d. (When only the linear features are used, i.e., x^i = s^i + const, this is referred to as "Linear SFA".) Thus, each component of the output signal is a quadratic polynomial in the components of the signal, of the form
\[ y^i = V_{1i}\, h^1(s) + \cdots + V_{mi}\, h^m(s) + \text{const}. \tag{4} \]
Biologically, there are a number of mechanisms that have been proposed for computing products of the form s^i s^j; see, e.g., [13] and the references therein. One such mechanism uses "Sigma-Pi" units [23], which multiply two inputs via gating and have been invoked in cortical modeling [18]. In Sec. 6, we perform our numerical experiments using the quadratic expansion.

3 A novel SFA objective from classical multidimensional scaling
To derive an SFA network, we identify an objective function whose optimization leads to an online algorithm that can be implemented in a biologically plausible network. To identify the objective function, we first rewrite the SFA output as a principal subspace projection and then take advantage of the fact that principal subspace projections can be expressed as solutions of objectives from classical multidimensional scaling [6]. This approach is similar to the derivation of a biologically plausible neural network for canonical correlation analysis [15]. To begin, we define the discrete derivative process {ẋ_t} and the delayed sum process {x̄_t} by ẋ_t := x_t − x_{t−1} and x̄_t := x_t + x_{t−1}, for t = 1, . . . , T. In addition, we define the sample covariance matrices
\[ C_{xx} := \frac{1}{T}\sum_{t=1}^{T} x_t x_t^\top, \qquad C_{\dot{x}\dot{x}} := \frac{1}{T}\sum_{t=1}^{T} \dot{x}_t \dot{x}_t^\top, \qquad C_{\bar{x}\bar{x}} := \frac{1}{T}\sum_{t=1}^{T} \bar{x}_t \bar{x}_t^\top. \tag{5} \]
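To make the two-step procedure above concrete, here is a minimal NumPy sketch of the quadratic expansion of Sec. 2.2 together with a batch SFA baseline that solves the objective in Eq. (1) directly from the covariance matrices just defined. This is not the paper's released code; the function names `quadratic_expansion` and `offline_sfa`, the small regularizer, and the use of SciPy's generalized eigensolver are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

def quadratic_expansion(S):
    """Map a (T+1, d) input signal to its centered degree-1 and degree-2 monomials (Sec. 2.2)."""
    feats = [S]                                        # linear terms s^i
    d = S.shape[1]
    for i in range(d):
        for j in range(i, d):
            feats.append((S[:, i] * S[:, j])[:, None]) # quadratic terms s^i s^j
    X = np.concatenate(feats, axis=1)                  # shape (T+1, d + d(d+1)/2)
    return X - X.mean(axis=0, keepdims=True)           # centering, as in the definition of x_t

def offline_sfa(X, k):
    """Batch baseline for Eq. (1)/(6): the k slowest directions solve a generalized eigenproblem."""
    Xdot = np.diff(X, axis=0)                          # discrete derivatives x_t - x_{t-1}
    T = Xdot.shape[0]
    C_dot = Xdot.T @ Xdot / T
    C_xx = X[1:].T @ X[1:] / T
    # smallest generalized eigenvalues of (C_dot, C_xx) give the slowest projections;
    # eigh returns eigenvectors normalized so that V^T C_xx V = I_k
    evals, evecs = eigh(C_dot, C_xx + 1e-9 * np.eye(C_xx.shape[0]))
    V = evecs[:, :k]
    return V, X[1:] @ V                                # outputs y_t = V^T x_t
```

This offline solver plays the role of the reference against which the online algorithm is compared in the experiments of Sec. 6.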
Substituting the definitions in Eqs. (2), (3) and (5) into the objective in Eq. (1), we can equivalently write the SFA problem as the following constrained minimization problem over the projection matrix V:
\[ \operatorname*{arg\,min}_{V \in \mathbb{R}^{m \times k}} \operatorname{Tr} V^\top C_{\dot{x}\dot{x}} V \quad \text{subject to} \quad V^\top C_{xx} V = I_k. \tag{6} \]
Due to the whitening constraint in Eq. (6), we can equivalently write it as the maximization of the one-step autocorrelation of the projection {y_t} (see Appendix A for details):
\[ \operatorname*{arg\,max}_{V \in \mathbb{R}^{m \times k}} \operatorname{Tr} V^\top C_{\bar{x}\bar{x}} V \quad \text{subject to} \quad V^\top C_{xx} V = I_k. \tag{7} \]
Next, setting
\[ \hat{x}_t := C_{xx}^{-1/2} \bar{x}_t \ \text{ for } t = 1, \dots, T, \qquad \hat{V} := C_{xx}^{1/2} V, \qquad C_{\hat{x}\hat{x}} := \frac{1}{T}\sum_{t=1}^{T} \hat{x}_t \hat{x}_t^\top = C_{xx}^{-1/2} C_{\bar{x}\bar{x}} C_{xx}^{-1/2}, \]
we see that V is a solution of Eq. (7) if and only if V̂ is a solution of
\[ \operatorname*{arg\,max}_{\hat{V} \in \mathbb{R}^{m \times k}} \operatorname{Tr} \hat{V}^\top C_{\hat{x}\hat{x}} \hat{V} \quad \text{subject to} \quad \hat{V}^\top \hat{V} = I_k. \tag{8} \]
Notably, Eq. (8) is the variance maximization objective for the PCA eigenproblem, which is optimized when the column vectors of V̂ span the k-dimensional principal subspace of C_x̂x̂. Finally, we take advantage of the fact that principal subspace projections can be expressed as solutions of objectives from classical multidimensional scaling [6, 21]. To this end, define the data matrices X̄ := [x̄_1, . . . , x̄_T], X̂ := [x̂_1, . . . , x̂_T], Ȳ := [ȳ_1, . . . , ȳ_T]. Then, since ȳ_t = V^⊤ x̄_t = V̂^⊤ x̂_t, we see that Ȳ is the projection of X̂ onto its k-dimensional principal subspace. As shown in [6], this principal projection can be expressed as a solution of the following objective from classical multidimensional scaling:
\[ \operatorname*{arg\,min}_{\bar{Y} \in \mathbb{R}^{k \times T}} \frac{1}{2T^2} \left\| \bar{Y}^\top \bar{Y} - \hat{X}^\top \hat{X} \right\|_{\mathrm{Frob}}^2 = \operatorname*{arg\,min}_{\bar{Y} \in \mathbb{R}^{k \times T}} \frac{1}{2T^2} \left\| \bar{Y}^\top \bar{Y} - \bar{X}^\top C_{xx}^{-1} \bar{X} \right\|_{\mathrm{Frob}}^2. \tag{9} \]
This objective minimizes the difference between the similarity of consecutive sums of output pairs, ȳ_t^⊤ ȳ_{t'}, and the similarity of consecutive sums of whitened input pairs, x̂_t^⊤ x̂_{t'}, where similarity is measured in terms of inner products. Here we have assumed that C_xx is full rank. If C_xx is not full rank (but is at least rank k), we can replace C_xx^{-1} in Eq. (9) with the Moore-Penrose inverse C_xx^{+} (see Appendix A).

4 Derivation of an online algorithm
While the objective (9) can be minimized by taking gradient descent steps in Ȳ, this does not lead to an online algorithm because the gradient steps require combining inputs from different time steps. Instead, we rewrite the objective as a min-max problem that can be solved by taking gradient descent-ascent steps that correspond to neural activities and synaptic update rules.

4.1 A min-max formulation
Expanding the square in Eq. (9) and dropping terms that do not depend on Ȳ, we obtain the minimization problem
\[ \min_{\bar{Y} \in \mathbb{R}^{k \times T}} \frac{1}{2T^2} \operatorname{Tr}\left( \bar{Y}^\top \bar{Y} \bar{Y}^\top \bar{Y} - 2\, \bar{Y}^\top \bar{Y} \bar{X}^\top C_{xx}^{-1} \bar{X} \right). \tag{10} \]
By introducing dynamical matrix variables W and M, which will correspond to synaptic weights, we can rewrite the minimization problem (10) as a min-max problem:
\[ \min_{\bar{Y} \in \mathbb{R}^{k \times T}} \ \min_{W \in \mathbb{R}^{k \times m}} \ \max_{M \in \mathbb{S}^{k}_{++}} L(W, M, \bar{Y}), \]
where S^k_{++} denotes the set of k × k positive definite matrices and
\[ L(W, M, \bar{Y}) := \frac{1}{T} \operatorname{Tr}\left( \bar{Y}^\top M \bar{Y} - 2\, \bar{Y}^\top W \bar{X} \right) - \operatorname{Tr}\left( \tfrac{1}{2} M^2 - W C_{xx} W^\top \right). \tag{11} \]
This step can be verified by differentiating L(W, M, Ȳ) with respect to W and M and noting that the optimal values are achieved when W and M equal (1/T) Ȳ X̄^⊤ C_xx^{-1} and (1/T) Ȳ Ȳ^⊤, respectively. Finally, we interchange the order of minimization with respect to Ȳ and W, as well as the order of optimization with respect to Ȳ and with respect to M:
\[ \min_{W \in \mathbb{R}^{k \times m}} \ \max_{M \in \mathbb{S}^{k}_{++}} \ \min_{\bar{Y} \in \mathbb{R}^{k \times T}} L(W, M, \bar{Y}). \tag{12} \]
The second interchange is justified by the fact that L(W, M, Ȳ) satisfies the saddle point property with respect to Ȳ and M, which follows from the fact that L(W, M, Ȳ) is strictly convex in Ȳ (since M is positive definite) and strictly concave in M.
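The claim below Eq. (11), that the inner optima are W = (1/T) Ȳ X̄^⊤ C_xx^{-1} and M = (1/T) Ȳ Ȳ^⊤, is easy to check numerically. The following sketch does so on synthetic data; the variable names and toy dimensions are ours, and this is only a sanity check of the min-max structure, not part of the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, T = 6, 2, 500

# toy data standing in for the expanded signal and its delayed sums (Sec. 3)
X = rng.standard_normal((T + 1, m))
Xbar = (X[1:] + X[:-1]).T                      # shape (m, T)
Ybar = rng.standard_normal((k, T))             # an arbitrary candidate output
C_xx = X[1:].T @ X[1:] / T

def loss(W, M, Ybar):
    """L(W, M, Ybar) from Eq. (11)."""
    term1 = np.trace(Ybar.T @ M @ Ybar - 2 * Ybar.T @ W @ Xbar) / T
    term2 = np.trace(0.5 * M @ M - W @ C_xx @ W.T)
    return term1 - term2

# closed-form inner optima claimed below Eq. (11)
W_star = (Ybar @ Xbar.T / T) @ np.linalg.inv(C_xx)
M_star = Ybar @ Ybar.T / T

# numerical check: perturbing W away from its optimum can only increase L,
# while perturbing M away from its optimum can only decrease L
dW = 1e-3 * rng.standard_normal(W_star.shape)
dM = 1e-3 * rng.standard_normal(M_star.shape)
dM = 0.5 * (dM + dM.T)                          # keep the perturbed M symmetric
assert loss(W_star + dW, M_star, Ybar) >= loss(W_star, M_star, Ybar) - 1e-9
assert loss(W_star, M_star + dM, Ybar) <= loss(W_star, M_star, Ybar) + 1e-9
```

This is exactly the min-over-W, max-over-M structure that justifies the saddle-point formulation in Eq. (12).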
4.2 Offline algorithm
In the offline, or batch, setting, we have access to the sample covariance matrices C_xx and C_x̄x̄, and we solve the min-max problem (12) by alternating optimization steps. First, for fixed W and M, we minimize the objective function L(W, M, Ȳ) over Ȳ, to obtain
\[ \bar{Y} = M^{-1} W \bar{X}. \tag{13} \]
With Ȳ fixed, we then perform a gradient descent-ascent step with respect to W and M:
\[ W \leftarrow W + 2\eta \left( \frac{1}{T} \bar{Y} \bar{X}^\top - W C_{xx} \right), \tag{14} \]
\[ M \leftarrow M + \frac{\eta}{\tau} \left( \frac{1}{T} \bar{Y} \bar{Y}^\top - M \right). \tag{15} \]
Here τ > 0 is the ratio of the learning rates of W and M, and η ∈ (0, τ) is the (possibly time-dependent) learning rate for W. The condition η < τ ensures that the matrix M remains positive definite given a positive definite initialization.

4.3 Online algorithm
In the online setting, the expanded signal {x_t} is streamed one sample at a time, and the algorithm must compute its output without storing any significant fraction of the data in memory. In this case, at each time-step t, we compute the output y_t = M^{-1} a_t, where a_t := W x_t is the projection of x_t onto the k-dimensional "slow" subspace, in a biologically plausible manner by running the following fast (neural) dynamics to equilibrium (our algorithm implements these dynamics using an Euler approximation):
\[ \frac{d y_t(\gamma)}{d\gamma} = a_t - M\, y_t(\gamma). \tag{16} \]
To update the (synaptic) matrices W and M, we replace the covariance matrices in (14)–(15) with the rank-1 stochastic approximations
\[ \frac{1}{T} \bar{Y} \bar{X}^\top \mapsto \bar{y}_t \bar{x}_t^\top, \qquad \frac{1}{T} \bar{Y} \bar{Y}^\top \mapsto \bar{y}_t \bar{y}_t^\top, \qquad C_{xx} \mapsto x_t x_t^\top. \]
This yields the following stochastic gradient descent-ascent steps with respect to W and M:
\[ W \leftarrow W + 2\eta \left( \bar{y}_t \bar{x}_t^\top - a_t x_t^\top \right), \qquad M \leftarrow M + \frac{\eta}{\tau} \left( \bar{y}_t \bar{y}_t^\top - M \right). \]
We can now state our online SFA algorithm, which we refer to as Bio-SFA (Alg. 1).

Algorithm 1: Bio-SFA
  input: expanded signal {x_0, x_1, . . . , x_T}; dimension k; parameters γ, η, τ
  initialize matrix W and positive definite matrix M
  for t = 1, 2, . . . , T do
    a_t ← W x_t                              ▷ project inputs
    repeat
      y_t ← y_t + γ (a_t − M y_t)            ▷ compute neural output
    until convergence
    x̄_t ← x_t + x_{t−1}
    ȳ_t ← y_t + y_{t−1}
    W ← W + 2η (ȳ_t x̄_t^⊤ − a_t x_t^⊤)       ▷ synaptic updates
    M ← M + (η/τ) (ȳ_t ȳ_t^⊤ − M)
  end for
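The pseudocode translates almost line-for-line into NumPy. The sketch below is our own minimal rendering of Alg. 1, not the authors' released implementation: the hyperparameter values, the fixed number of Euler steps used in place of the "repeat until convergence" loop, and the function name `bio_sfa` are illustrative assumptions.

```python
import numpy as np

def bio_sfa(X, k, gamma=0.1, eta=1e-3, tau=0.5, n_neural=100, seed=0):
    """Minimal NumPy sketch of Bio-SFA (Alg. 1).

    X: array of shape (T+1, m) holding the centered expanded signal x_0, ..., x_T.
    Returns the learned synaptic matrices W (k x m), M (k x k) and the outputs y_1, ..., y_T.
    """
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    W = rng.standard_normal((k, m)) / np.sqrt(m)
    M = np.eye(k)                                  # positive definite initialization
    y_prev = np.zeros(k)
    outputs = []
    for t in range(1, X.shape[0]):
        x, x_prev = X[t], X[t - 1]
        a = W @ x                                  # project inputs: a_t = W x_t
        y = np.zeros(k)
        for _ in range(n_neural):                  # fast neural dynamics, Eq. (16)
            y = y + gamma * (a - M @ y)            # Euler steps toward y_t = M^{-1} a_t
        xbar = x + x_prev                          # delayed sums
        ybar = y + y_prev
        # stochastic gradient descent-ascent steps (synaptic updates)
        W = W + 2 * eta * (np.outer(ybar, xbar) - np.outer(a, x))
        M = M + (eta / tau) * (np.outer(ybar, ybar) - M)
        y_prev = y
        outputs.append(y)
    return W, M, np.array(outputs)
```

Note that the update for M is a convex combination of the previous M and the rank-1 term ȳ_t ȳ_t^⊤ whenever η/τ < 1, which is how the sketch keeps M positive definite, mirroring the condition η < τ stated for the offline updates.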
5 Biologically plausible neural network implementation
We now demonstrate that Bio-SFA can be implemented in a biologically plausible network, depicted in Fig. 1. Recall that we define a network to be biologically plausible if it computes its output in the online setting and has local learning rules. The neural network consists of an input layer of m neurons (blue circles) and an output layer of k neurons with separate dendritic and somatic compartments (black circles with 2 compartments). At each time t, the m-dimensional expanded signal x_t, which is represented by the activity of the input neurons, is multiplied by the weight matrix W, which is encoded by the feedforward synapses connecting the input neurons to the output neurons (green lines). This yields the k-dimensional projection a_t = W x_t, which is represented in the dendritic compartments of the output neurons and then propagated to the somatic compartments. This is followed by the fast recurrent neural dynamics of Eq. (16) amongst the somatic compartments of the output neurons, where the matrix M is encoded by the lateral synapses connecting the layer of output neurons (red lines). These fast neural dynamics equilibrate at y_t = M^{-1} a_t. The k-dimensional output signal y_t is represented by the activity of the output neurons.
The synaptic updates are as follows. Recall that x̄_t = x_t + x_{t−1} (resp. ȳ_t = y_t + y_{t−1}) is the delayed sum of the inputs (resp. outputs), which we assume are represented in the m input neurons (resp. k output neurons). Biologically, they can be represented by slowly changing concentrations (e.g., calcium) at the pre- and post-synaptic terminals. We can write the elementwise synaptic updates in Alg. 1 as
\[ W_{ij} \leftarrow W_{ij} + 2\eta \left( \bar{y}^i_t \bar{x}^j_t - a^i_t x^j_t \right), \qquad 1 \le i \le k, \ 1 \le j \le m, \tag{17} \]
\[ M_{ij} \leftarrow M_{ij} + \frac{\eta}{\tau} \left( \bar{y}^i_t \bar{y}^j_t - M_{ij} \right), \qquad 1 \le i, j \le k. \tag{18} \]
Since the jth input neuron stores the variables x^j_t, x̄^j_t and the ith output neuron stores the variables a^i_t, y^i_t, ȳ^i_t, the update for each synapse is local.
It is worth comparing the derived updates to the feedforward weights, Eq. (17), to the updates proposed by Földiák [8], which are given by
\[ w_{ij} \leftarrow w_{ij} + \eta \left( \bar{y}^i_t x^j_t - \bar{y}^i_t w_{ij} \right), \qquad 1 \le i \le k, \ 1 \le j \le m. \]
The first terms in the updates, ȳ^i_t x̄^j_t and ȳ^i_t x^j_t, are quite similar. The main difference between the updates is between the second terms: a^i_t x^j_t and ȳ^i_t w_{ij}. In our network, the second term a^i_t x^j_t serves to whiten the inputs, whereas Földiák's second term ȳ^i_t w_{ij} is added as a decay to ensure the weights remain bounded. In addition, our network includes lateral weights M_{ij}, which ensure that the projections y^i_t are distinct, and such lateral weights are not included in Földiák's network. While the updates are similar in some respects, it is difficult to compare the outputs of the networks because Földiák's network is postulated rather than derived from a principled objective function, so the network must be simulated numerically in order to evaluate its output.

6 Experiments
To validate our approach, we test Bio-SFA (Alg. 1) on synthetic and naturalistic datasets. We provide an overview of the experiments here and defer detailed descriptions and additional figures to Sec. B of the supplement. The evaluation code is available at https://github.com/flatironinstitute/bio-sfa. To measure the performance of our algorithm, we compare the "slowness" of the projection Y = M^{-1} W X with the slowest possible projection. This can be quantified using the objective (6). We first evaluate the objective (6) at its optimum,
\[ \text{slow} := \min \left\{ \operatorname{Tr} V^\top C_{\dot{x}\dot{x}} V : V \in \mathbb{R}^{m \times k} \ \text{s.t.} \ V^\top C_{xx} V = I_k \right\}, \]
which can be evaluated using an offline generalized eigenvalue problem solver. To compute the error at each iteration, we compare the slowness of the current projection to the minimal slowness (a small code sketch of this computation is given after Sec. 6.1):
\[ \text{Error} = \operatorname{Tr} \tilde{V}^\top C_{\dot{x}\dot{x}} \tilde{V} - \text{slow}, \qquad \tilde{V} := W^\top M^{-1} \left( M^{-1} W C_{xx} W^\top M^{-1} \right)^{-1/2}, \tag{19} \]
where the normalization ensures that Ṽ satisfies the constraint in Eq. (6). In Sec. B, we show that Ṽ indeed asymptotically satisfies the constraint in Eq. (6).

6.1 Chaotic time series
Before testing on naturalistic datasets, we test Bio-SFA on a challenging synthetic dataset. Let {φ_t} be a (slow) driving force equal to the sum of 6 sine functions with random amplitudes, frequencies and phases, Fig. 2a (red line). Define the noisy series derived from the recursive logistic map with time-varying growth rate, z_t = (3.6 + 0.4 φ_t) z_{t−1}(1 − z_{t−1}), Fig. 2b (black dots). Wiskott [30] showed that the driving force {φ_t} can be recovered from the noisy series {z_t} by implementing (offline) Quadratic SFA on the 4-dimensional signal {s_t} whose components correspond to the values of the noisy series over the 4 most recent time steps, i.e., s_t := (z_t, z_{t−1}, z_{t−2}, z_{t−3}). We replicate the results from [30] using Bio-SFA. Let {x_t} be the 14-dimensional quadratic expansion of {s_t}. We use Bio-SFA to extract the slowest one-dimensional projection {y_t}, Fig. 2c (green dots). Qualitatively, we see that the slowest projection recovered by Bio-SFA closely aligns with the slow driving force {φ_t}. In Fig. 2d we plot the error at each iteration.
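As a reference for how the error in Eq. (19) can be computed from a Bio-SFA state, here is a short sketch. It assumes the batch covariances C_xx and C_ẋẋ and the optimal slowness `slow_opt` (from an offline generalized eigensolver, e.g. the `offline_sfa` sketch earlier) are available; the function names are ours, not the repository's.

```python
import numpy as np
from scipy.linalg import sqrtm

def slowness(V, C_dot):
    """Objective of Eq. (6): Tr(V^T C_xdot V)."""
    return np.trace(V.T @ C_dot @ V)

def bio_sfa_error(W, M, C_xx, C_dot, slow_opt):
    """Excess slowness of the current Bio-SFA projection, Eq. (19).

    V_tilde = W^T M^{-1} (M^{-1} W C_xx W^T M^{-1})^{-1/2} rescales the learned
    projection so that V_tilde^T C_xx V_tilde = I_k before measuring slowness.
    """
    P = np.linalg.inv(M) @ W                 # the network computes y_t = M^{-1} W x_t
    G = P @ C_xx @ P.T                       # k x k Gram matrix M^{-1} W C_xx W^T M^{-1}
    V_tilde = P.T @ np.real(sqrtm(np.linalg.inv(G)))
    return slowness(V_tilde, C_dot) - slow_opt
```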
6.2 Sequence of natural images
Next, we test Bio-SFA on a sequence of natural images. First, a 256-dimensional sequence {z_t} was generated by moving a 16 × 16 patch over 13 natural images from [12] via translations, zooms, and rotations, Fig. 3a. To extract relevant features, we follow the exact same procedure as Berkes and Wiskott [1], but replace the offline SFA solver with Bio-SFA to generate a 49-dimensional output signal {y_t}. To visualize the 49-dimensional output, we calculate the unit vector z ∈ R^256 that maximizes y^i, for i = 1, . . . , 49. These optimal stimuli, z, which are displayed as 16 × 16 patches in Fig. 3b, resemble Gabor patches and are in qualitative agreement with physiological characteristics of complex cells in the visual cortex. This aligns with the results in [1]; see also [2]. To evaluate the performance of Bio-SFA, we plot the error at each iteration in Fig. 3c.

6.3 Hierarchical SFA on the visual stream of a simulated rat
Following Schönfeld and Wiskott [25], we test a hierarchical 3-layer organization of Bio-SFA "modules" on the inputs from the RatLab framework [24], which simulates the field of view of a rat with random trajectories in a rectangular room. Each layer consists of spatially distributed modules that receive overlapping patches of either the visual stream or the preceding layer. Inside each module, there are 3 steps: (1) Bio-SFA first reduces the dimension of the inputs to generate a 32-dimensional signal, (2) the reduced signal is quadratically expanded, and (3) Bio-SFA reduces the expanded signal to the slowest 32 features (a simplified sketch of one such module is given below). The layers are organized so that the modules in each successive layer receive inputs from larger patches of the visual field, Fig. 4a. Adopting the procedure in [25], the network is trained greedily layer-by-layer with weight sharing across modules in each layer (see Sec. B of the supplement). The final layer consists of a single module, with a 32-dimensional output, whose spatially-dependent firing maps are shown in Fig. 4b. The 3 SFA layers are followed by a fourth layer, which performs sparse coding via Independent Component Analysis (ICA) [11] (in the offline setting) with a 32-dimensional output, whose firing map is shown in Fig. 4c. As in [25], the firing maps of the final ICA layer are spatially localized and resemble the firing maps of place cells in the hippocampus. To quantify the performance of this hierarchical network, we plot the slowness (not errors; see Sec. B of the supplement) of each of the first 3 layers' outputs at each iteration, Fig. 4d.
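To make the module structure of Sec. 6.3 concrete, the following sketch strings together the three steps for a single module, reusing the `bio_sfa` and `quadratic_expansion` sketches given earlier (it is self-contained only together with those). It deliberately ignores the spatial tiling, overlapping receptive fields, and weight sharing described above, and the 32/32 module sizes are simply the values quoted in the text.

```python
import numpy as np

def sfa_module(patches, k_mid=32, k_out=32):
    """One hierarchical Bio-SFA module (Sec. 6.3), greatly simplified:
    reduce -> quadratically expand -> reduce to the slowest features."""
    # step 1: reduce the incoming patch stream to k_mid slow features
    _, _, reduced = bio_sfa(patches, k=k_mid)
    reduced = reduced - reduced.mean(axis=0, keepdims=True)   # re-center before expanding
    # step 2: quadratic expansion of the reduced signal (Sec. 2.2)
    expanded = quadratic_expansion(reduced)
    # step 3: reduce the expanded signal to the k_out slowest features
    _, _, out = bio_sfa(expanded, k=k_out)
    return out
```

In the full experiment, modules like this one are tiled over the visual field and stacked over three layers, with a final offline ICA stage producing the place-cell-like firing maps.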
7 Discussion
We derived an online algorithm for SFA with a biologically plausible neural network implementation, which is an important step towards understanding how the brain could use temporal slowness as a computational principle. While our network implementation satisfies natural requirements for biological plausibility, it differs from biological neural circuits in a number of ways. For instance, our network includes direct lateral inhibitory synapses between excitatory neurons, whereas inhibition is typically modulated by interneurons in biological networks. By adapting the approach in [21], interneurons can be introduced to modulate inhibition. Second, the synaptic updates in our network require both the pre- and post-synaptic neurons to store slow variables; however, signal frequencies in dendrites are slower than in axons, suggesting that it is more likely for slow variables to be stored in the post-synaptic neuron, not the pre-synaptic neuron. We can address this with a modification, which is exact when the expanded signal {x_t} exhibits time-reversibility, so that only the post-synaptic neuron represents slow variables; see Sec. C of the supplement. Finally, our network includes linear neurons, which do not respect the nonnegativity constraints of neuronal outputs. An interesting future direction is to understand the effect of enforcing a nonnegativity constraint on y_t in the objective function (9).

Broader impact
An important problem in neuroscience is to understand the computational principles the brain uses to process information. Progress on this front has the potential to have wide-ranging benefits for helping to manage the adverse effects of neurological diseases and disorders. This work represents a small step in that direction.

Acknowledgements
This work was internally supported by the Simons Foundation. We thank Yanis Bahroun, Nicholas Chua, Shiva Farashahi, Johannes Friedrich, Alexander Genkin, Jason Moore, Anirvan Sengupta and Tiberiu Tesileanu for helpful comments and feedback on an earlier draft of this work.
1. What is the main contribution of the paper, and how does it relate to previous research in the field?
2. What are the strengths of the proposed approach, particularly in terms of its biological plausibility?
3. What are the weaknesses of the paper, and how could they be addressed in future research?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
5. Are there any concerns or suggestions regarding the experimental design or results presented in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
Derives a biologically plausible version of SFA, and runs it on a standard moving-image dataset, obtaining results similar to previous batch learning methods.
Strengths
An elegant formulation of the problem and a nice solution.
Weaknesses
Perhaps could benefit from stronger ties to potential neural mechanisms, or where in the brain you may expect to find this, experimental predictions, etc. It seems this is the main point after all. Also, how do you propose that biology computes the nonlinear expansion \mathbf{x}_t?
1. What is the focus and contribution of the paper on slow feature analysis?
2. What are the strengths of the proposed approach, particularly in terms of biological plausibility?
3. What are the weaknesses of the paper, especially regarding the choice of neuron model?
4. How does the reviewer assess the significance and impact of the proposed method in the field?
5. Are there any concerns or suggestions for future work related to the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper develops a slow feature analysis (SFA) method that is biologically plausible, in that it operates online and has weight updates that only use information that is available in the presynaptic and postsynaptic neurons. The method is tested on several datasets, and it is shown that cost is reduced with training and that interesting receptive fields emerge.
Strengths
- I agree with the authors that a biologically plausible implementation of SFA is an important development.
- The development of the method from standard SFA was systematic and rigorous.
Weaknesses
- Some limitations were mentioned in the discussion. Of these, the fact that the method uses linear neurons seems to be the most serious. I appreciate that the authors defined their sense of "biologically plausible", but linear neurons don't sit very well with this term.
NIPS
Title A Biologically Plausible Neural Network for Slow Feature Analysis Abstract Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli. 1 Introduction Unsupervised learning of meaningful latent features from noisy, high-dimensional data is a fundamental problem for both machine learning and brain function. Often, the relevant features in an environment (e.g., objects) vary on relatively slow timescales when compared to noisy sensory data (e.g., the light intensity measured by a single receptor in the retina). Therefore, temporal slowness has been proposed as a computational principle for extracting relevant latent features [8, 19, 31]. A popular approach for extracting slow features, introduced by Wiskott and Sejnowski [31], is Slow Feature Analysis (SFA). SFA is an unsupervised learning algorithm that extracts the slowest projection, in terms of discrete time derivative, from a nonlinear expansion of the input signal. When trained on natural image sequences, SFA extracts features that resemble response properties of complex cells in early visual processing [2]. Impressively, hierarchical networks of SFA trained on simulated rat visual streams learn representations of position and orientation similar to representations encoded in the hippocampus [9]. The relevance of SFA is strengthened by its close relationship to information theoretic objectives and its equivalence to other successful algorithms under certain assumptions. When the time series is reversible and Gaussian, (Linear) SFA is equivalent to maximizing mutual information between the current output of the system and the next input [7, 5]. Moreover, features extracted by several ⇤Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. algorithms favoring predictability from real-world datasets are similar to those extracted by SFA [29]. Finally, (Linear) SFA is equivalent to a time-lagged independent components analysis [3, 10], which is a popular statistical technique used to analyze molecular dynamics [22, 20, 26, 27]. Due to its success in modeling aspects of neural processing, deriving an algorithm for SFA with a biologically plausible neural network implementation is an important task. 
For the purposes of this work, we define biologically plausible to mean that the neural network operates in the online setting (i.e., after receiving an input, it computes its output before receiving its next input, never storing a significant fraction of past inputs), and its synaptic learning rules are local (i.e., a synaptic weight update depends only on variables represented in the pre- and postsynaptic neurons). In addition to satisfying basic properties of neural circuits, these online and locality requirements can lead to networks that are well-suited for analyzing large datasets because they operate in the online setting with low computational overhead. While there are a few online algorithms for SFA, none have biologically plausible neural network implementations that extract multiple slow features. Moreover, there are no neural network implementations for the related information theoretic algorithms discussed above [29, 5]. Kompella et al. propose Incremental SFA [14] (see [16, 32] for extensions). However, this approach relies on non-local learning rules, so it does not meet the above criteria for biological plausibility. Malik et al. [17] use an online generalized eigenvalue problem solver [33] to derive an online algorithm for SFA. While their algorithm for finding one-dimensional projections can be implemented in a biologically plausible network, their extension to multi-dimensional projections is not fully online. In this work, we propose Bio-SFA: an online algorithm for SFA with a biologically plausible neural network implementation, Fig. 1. We adopt a normative approach to derive our algorithm. First, we express the solution of the SFA problem in terms of an objective from classical multidimensional scaling. We then manipulate the objective to arrive at a min-max optimization problem that can be solved in the online setting by taking stochastic gradient descent-ascent steps. These steps can be expressed in terms of neural activities and updates to synaptic weight matrices, which leads to a natural interpretation of our online algorithm as a biologically plausible neural network. To validate our approach, we test our algorithm on datasets of naturalistic stimuli and reproduce results originally performed in the offline setting. The synaptic updates of the feedforward weights W in our network are similar, although not identical, to the updates proposed heuristically by Földiák [8] to extract slow temporal features. However, there is no theoretical analysis of the algorithm in [8]. In contrast, in our normative approach, Bio-SFA is derived directly from an SFA objective, so we can analytically predict its output, as well as the synaptic weights, without resorting to numerical simulation. In addition, the comparison of our learning rules with Földiák’s illuminates the relationship of [8] to SFA. 2 Slow Feature Analysis Here and below, vectors are boldface lowercase letters (e.g., v), and matrices are boldface uppercase letters (e.g., M). We use superscripts to denote the components of a vector (e.g., vi). 2.1 Problem statement Wiskott and Sejnowski [31] proposed the following 2 step method for extracting slow features from a noisy data set: (1) generate a nonlinear expansion of the input signal, and (2) find the slowest, in terms of discrete time derivative, low-dimensional projection of the expanded signal. In this section, we review these 2 steps. Let {s0, s1, . . . 
, sT } be a d-dimensional input signal.2 The first step of SFA is to generate an mdimensional expansion {xt}, referred to as the expanded signal, of {st}. Let h = (h1, . . . , hm) : Rd ! Rm be an expansion function and define xt := h(st) 1 T TX t0=1 h(st0), t = 0, 1, . . . , T, so that {xt} is centered. Let k < m. The second step of SFA is to find the k-dimensional linear projection {yt} of the expanded signal {xt} that minimizes the mean discrete-time derivative of the output signal {yt}, subject to a whitening constraint. To be precise, the objective can be formulated as follows: argmin {yt} 1 T TX t=1 kẏtk2 subject to 1 T TX t=1 yty > t = Ik, (1) where ẏt is the discrete time derive of yt, and yt is a linear projection of xt; that is, ẏt := yt yt 1, t = 1, . . . , T, (2) yt := V >xt, t = 0, 1, . . . , T, for some V 2 Rm⇥k. (3) Note, since {xt} is centered, the projection {yt} is also centered. 2.2 Quadratic SFA The focus of this work is to derive a biologically plausible neural network that learns to output the optimal output signal {yt} when streamed the expanded signal {xt}. While our algorithm does not depend on the specific choice of the expansion function h, for concreteness, we provide an example here. In their original paper, Wiskott and Sejnowski [31] proposed setting the components of the function h : Rd ! Rm to be the monomials of degree one and two. This choice, which we refer to as “Quadratic SFA”, has been widely used in applications [31, 2, 9, 34]. In particular, let m := d+ d(d+ 1)/2 and h1, . . . , hm : Rd ! R denote the m possible linear and quadratic functions of the form h(s) := si or h(s) := sisj , for 1 i j d. (When only the linear features are used, i.e., xi = si + const, this is referred to “Linear SFA”.) Thus, each component of the output signal is a quadratic polynomial in the components of the signal of the form: yi = V1ih 1(s) + · · ·+ Vmihm(s) + const. (4) Biologically, there are a number of mechanism that have been proposed for computing products of the form sisj ; see, e.g., [13] and the references therein. One such mechanism uses “Sigma-Pi” units [23], which multiplies two inputs via gating and have been invoked in cortical modeling [18]. In Sec. 6, we perform our numerical experiments using the quadratic expansion. 2The zeroth time step is included to ensure the discrete-time derivative is defined at t = 1. 3 A novel SFA objective from classical multidimensional scaling To derive an SFA network, we identify an objective function whose optimization leads to an online algorithm that can be implemented in a biologically plausible network. To identify the objective function, we first rewrite the SFA output as a principal subspace projection and then take advantage of the fact that principal subspace projections can be expressed as solutions of objectives from classical multidimensional scaling [6]. This approach is similar to the derivation of a biologically plausible neural network for canonical correlation analysis [15]. To begin, we define the discrete derivative process {ẋt} and the delayed sum process {x̄t} by ẋt := xt xt 1 and x̄t := xt+xt 1, for t = 1, . . . , T . In addition, we define the sample covariance matrices Cxx := 1 T TX t=1 xtx > t , Cẋẋ := 1 T TX t=1 ẋtẋ > t , Cx̄x̄ := 1 T TX t=1 x̄tx̄ > t . (5) Substituting the definitions in Eqs. (2), (3) and (5) into the objective in Eq. 
(1), we can equivalently write the SFA problem as the following constrained minimization problem over the projection matrix $V$:
$$\operatorname*{argmin}_{V \in \mathbb{R}^{m \times k}} \; \operatorname{Tr} V^\top C_{\dot{x}\dot{x}} V \quad \text{subject to} \quad V^\top C_{xx} V = I_k. \qquad (6)$$
Due to the whitening constraint in Eq. (6), we can equivalently write it as the maximization of the one-step autocorrelation of the projection $\{y_t\}$ (see Appendix A for details):
$$\operatorname*{argmax}_{V \in \mathbb{R}^{m \times k}} \; \operatorname{Tr} V^\top C_{\bar{x}\bar{x}} V \quad \text{subject to} \quad V^\top C_{xx} V = I_k. \qquad (7)$$
Next, setting
$$\hat{x}_t := C_{xx}^{-1/2} \bar{x}_t \ \text{ for } t = 1, \dots, T, \qquad \hat{V} := C_{xx}^{1/2} V, \qquad C_{\hat{x}\hat{x}} := \frac{1}{T}\sum_{t=1}^{T} \hat{x}_t \hat{x}_t^\top = C_{xx}^{-1/2} C_{\bar{x}\bar{x}} C_{xx}^{-1/2},$$
we see that $V$ is a solution of Eq. (7) if and only if $\hat{V}$ is the solution of:
$$\operatorname*{argmax}_{\hat{V} \in \mathbb{R}^{m \times k}} \; \operatorname{Tr} \hat{V}^\top C_{\hat{x}\hat{x}} \hat{V} \quad \text{subject to} \quad \hat{V}^\top \hat{V} = I_k. \qquad (8)$$
Notably, Eq. (8) is the variance maximization objective for the PCA eigenproblem, which is optimized when the column vectors of $\hat{V}$ span the $k$-dimensional principal subspace of $C_{\hat{x}\hat{x}}$. Finally, we take advantage of the fact that principal subspace projections can be expressed as solutions of objectives from classical multidimensional scaling [6, 21]. To this end, define the data matrices $\bar{X} := [\bar{x}_1, \dots, \bar{x}_T]$, $\hat{X} := [\hat{x}_1, \dots, \hat{x}_T]$, $\bar{Y} := [\bar{y}_1, \dots, \bar{y}_T]$. Then, since $\bar{y}_t = V^\top \bar{x}_t = \hat{V}^\top \hat{x}_t$, we see that $\bar{Y}$ is the projection of $\hat{X}$ onto its $k$-dimensional principal subspace. As shown in [6], this principal projection can be expressed as a solution of the following objective from classical multidimensional scaling:
$$\operatorname*{argmin}_{\bar{Y} \in \mathbb{R}^{k \times T}} \frac{1}{2T^2} \left\| \bar{Y}^\top \bar{Y} - \hat{X}^\top \hat{X} \right\|_{\mathrm{Frob}}^2 = \operatorname*{argmin}_{\bar{Y} \in \mathbb{R}^{k \times T}} \frac{1}{2T^2} \left\| \bar{Y}^\top \bar{Y} - \bar{X}^\top C_{xx}^{-1} \bar{X} \right\|_{\mathrm{Frob}}^2. \qquad (9)$$
This objective minimizes the difference between the similarity of consecutive sums of output pairs, $\bar{y}_t^\top \bar{y}_{t'}$, and the similarity of consecutive sums of whitened input pairs, $\hat{x}_t^\top \hat{x}_{t'}$, where similarity is measured in terms of inner products. Here we have assumed that $C_{xx}$ is full rank. If $C_{xx}$ is not full rank (but is at least rank $k$), we can replace $C_{xx}^{-1}$ in Eq. (9) with the Moore-Penrose inverse $C_{xx}^{+}$ (see Appendix A).

4 Derivation of an online algorithm

While the objective (9) can be minimized by taking gradient descent steps in $\bar{Y}$, this does not lead to an online algorithm because the gradient steps require combining inputs from different time steps. Instead, we rewrite the objective as a min-max problem that can be solved by taking gradient descent-ascent steps that correspond to neural activities and synaptic update rules.

4.1 A min-max formulation

Expanding the square in Eq. (9) and dropping terms that do not depend on $\bar{Y}$, we obtain the minimization problem
$$\min_{\bar{Y} \in \mathbb{R}^{k \times T}} \frac{1}{2T^2} \operatorname{Tr}\left( \bar{Y}^\top \bar{Y} \bar{Y}^\top \bar{Y} - 2 \bar{Y}^\top \bar{Y} \bar{X}^\top C_{xx}^{-1} \bar{X} \right). \qquad (10)$$
By introducing dynamical matrix variables $W$ and $M$, which will correspond to synaptic weights, we can rewrite the minimization problem (10) as a min-max problem:
$$\min_{\bar{Y} \in \mathbb{R}^{k \times T}} \min_{W \in \mathbb{R}^{k \times m}} \max_{M \in \mathbb{S}^{k}_{++}} L(W, M, \bar{Y}),$$
where $\mathbb{S}^{k}_{++}$ denotes the set of $k \times k$ positive definite matrices and
$$L(W, M, \bar{Y}) := \frac{1}{T} \operatorname{Tr}\left( \bar{Y}^\top M \bar{Y} - 2 \bar{Y}^\top W \bar{X} \right) - \operatorname{Tr}\left( \tfrac{1}{2} M^2 - W C_{xx} W^\top \right). \qquad (11)$$
This step can be verified by differentiating $L(W, M, \bar{Y})$ with respect to $W$ and $M$ and noting that the optimal values are achieved when $W$ and $M$ equal $\frac{1}{T} \bar{Y} \bar{X}^\top C_{xx}^{-1}$ and $\frac{1}{T} \bar{Y} \bar{Y}^\top$, respectively. Finally, we interchange the order of minimization with respect to $\bar{Y}$ and $W$, as well as the order of optimization with respect to $\bar{Y}$ and with respect to $M$:
$$\min_{W \in \mathbb{R}^{k \times m}} \max_{M \in \mathbb{S}^{k}_{++}} \min_{\bar{Y} \in \mathbb{R}^{k \times T}} L(W, M, \bar{Y}). \qquad (12)$$
The second interchange is justified by the fact that $L(W, M, \bar{Y})$ satisfies the saddle point property with respect to $\bar{Y}$ and $M$, which follows from the fact that $L(W, M, \bar{Y})$ is strictly convex in $\bar{Y}$ (since $M$ is positive definite) and strictly concave in $M$.
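As a concrete reference point for the derivation above, the offline solution of Eqs. (6)-(8) (whiten with $C_{xx}^{-1/2}$, then take the top-$k$ principal subspace of $C_{\hat{x}\hat{x}}$) can be sketched in a few lines of NumPy. The toy random-walk signal and all variable names below are our own illustrative choices, not part of the paper; the sketch is only meant as a baseline against which an online implementation can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy centered expanded signal x_0, ..., x_T (columns); m = 6 features, k = 2 slow components.
m, k, T = 6, 2, 5000
X_full = rng.standard_normal((m, T + 1)).cumsum(axis=1)   # a slowly wandering toy signal
X_full -= X_full.mean(axis=1, keepdims=True)

X = X_full[:, 1:]                        # x_1, ..., x_T
X_dot = X_full[:, 1:] - X_full[:, :-1]   # discrete derivatives
X_bar = X_full[:, 1:] + X_full[:, :-1]   # delayed sums

C_xx = X @ X.T / T
C_dot = X_dot @ X_dot.T / T
C_bar = X_bar @ X_bar.T / T

# Whitening transform C_xx^{-1/2} via an eigendecomposition of C_xx.
evals, evecs = np.linalg.eigh(C_xx)
C_xx_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Eq. (8): top-k principal subspace of C_hat = C_xx^{-1/2} C_bar C_xx^{-1/2}.
C_hat = C_xx_isqrt @ C_bar @ C_xx_isqrt
_, U = np.linalg.eigh(C_hat)
V_hat = U[:, -k:]                        # eigenvectors of the k largest eigenvalues
V = C_xx_isqrt @ V_hat                   # undo the change of variables: V = C_xx^{-1/2} V_hat

# Sanity checks: whitening constraint of Eq. (6) and the attained slowness.
print(np.allclose(V.T @ C_xx @ V, np.eye(k), atol=1e-6))
print(np.trace(V.T @ C_dot @ V))
```

The two printed checks confirm the whitening constraint $V^\top C_{xx} V = I_k$ and report the attained slowness $\operatorname{Tr} V^\top C_{\dot{x}\dot{x}} V$.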
4.2 Offline algorithm

In the offline, or batch, setting, we have access to the sample covariance matrices $C_{xx}$ and $C_{\bar{x}\bar{x}}$, and we solve the min-max problem (12) by alternating optimization steps. First, for fixed $W$ and $M$, we minimize the objective function $L(W, M, \bar{Y})$ over $\bar{Y}$, to obtain
$$\bar{Y} = M^{-1} W \bar{X}. \qquad (13)$$
With $\bar{Y}$ fixed, we then perform a gradient descent-ascent step with respect to $W$ and $M$:
$$W \leftarrow W + 2\eta \left( \frac{1}{T} \bar{Y} \bar{X}^\top - W C_{xx} \right), \qquad (14)$$
$$M \leftarrow M + \frac{\eta}{\tau} \left( \frac{1}{T} \bar{Y} \bar{Y}^\top - M \right). \qquad (15)$$
Here $\tau > 0$ is the ratio of the learning rates of $W$ and $M$, and $\eta \in (0, \tau)$ is the (possibly time-dependent) learning rate for $W$. The condition $\eta < \tau$ ensures that the matrix $M$ remains positive definite given a positive definite initialization.

4.3 Online algorithm

In the online setting, the expanded signal $\{x_t\}$ is streamed one sample at a time, and the algorithm must compute its output without storing any significant fraction of the data in memory. In this case, at each time step $t$, we compute the output $y_t = M^{-1} a_t$, where $a_t := W x_t$ is the projection of $x_t$ onto the $k$-dimensional "slow" subspace. This is done in a biologically plausible manner by running the following fast (neural) dynamics to equilibrium (our algorithm implements these dynamics using an Euler approximation):
$$\frac{d y_t(\gamma)}{d\gamma} = a_t - M y_t(\gamma). \qquad (16)$$
To update the (synaptic) matrices $W$ and $M$, we replace the covariance matrices in (14)-(15) with the rank-1 stochastic approximations:
$$\frac{1}{T} \bar{Y} \bar{X}^\top \mapsto \bar{y}_t \bar{x}_t^\top, \qquad \frac{1}{T} \bar{Y} \bar{Y}^\top \mapsto \bar{y}_t \bar{y}_t^\top, \qquad C_{xx} \mapsto x_t x_t^\top.$$
This yields the following stochastic gradient descent-ascent steps with respect to $W$ and $M$:
$$W \leftarrow W + 2\eta \left( \bar{y}_t \bar{x}_t^\top - a_t x_t^\top \right), \qquad M \leftarrow M + \frac{\eta}{\tau} \left( \bar{y}_t \bar{y}_t^\top - M \right).$$
We can now state our online SFA algorithm, which we refer to as Bio-SFA (Alg. 1).

Algorithm 1: Bio-SFA
input: expanded signal $\{x_0, x_1, \dots, x_T\}$; dimension $k$; parameters $\gamma$, $\eta$, $\tau$
initialize matrix $W$ and positive definite matrix $M$
for $t = 1, 2, \dots, T$ do
    $a_t \leftarrow W x_t$  ▷ project inputs
    repeat
        $y_t \leftarrow y_t + \gamma (a_t - M y_t)$  ▷ compute neural output
    until convergence
    $\bar{x}_t \leftarrow x_t + x_{t-1}$
    $\bar{y}_t \leftarrow y_t + y_{t-1}$
    $W \leftarrow W + 2\eta (\bar{y}_t \bar{x}_t^\top - a_t x_t^\top)$  ▷ synaptic updates
    $M \leftarrow M + \frac{\eta}{\tau} (\bar{y}_t \bar{y}_t^\top - M)$
end for

5 Biologically plausible neural network implementation

We now demonstrate that Bio-SFA can be implemented in a biologically plausible network, depicted in Fig. 1. Recall that we define a network to be biologically plausible if it computes its output in the online setting and has local learning rules. The neural network consists of an input layer of $m$ neurons (blue circles) and an output layer of $k$ neurons with separate dendritic and somatic compartments (black circles with 2 compartments). At each time $t$, the $m$-dimensional expanded signal $x_t$, which is represented by the activity of the input neurons, is multiplied by the weight matrix $W$, which is encoded by the feedforward synapses connecting the input neurons to the output neurons (green lines). This yields the $k$-dimensional projection $a_t = W x_t$, which is represented in the dendritic compartments of the output neurons and then propagated to the somatic compartments. This is followed by the fast recurrent neural dynamics of Eq. (16) amongst the somatic compartments of the output neurons, where the matrix $M$ is encoded by the lateral synapses connecting the layer of output neurons (red lines). These fast neural dynamics equilibrate at $y_t = M^{-1} a_t$. The $k$-dimensional output signal $y_t$ is represented by the activity of the output neurons. The synaptic updates are as follows. Recall that $\bar{x}_t = x_t + x_{t-1}$ (resp. $\bar{y}_t = y_t + y_{t-1}$) is the delayed sum of the inputs (resp. outputs), which we assume are represented in the $m$ input neurons (resp. $k$ output neurons).
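For concreteness, a single time step of Alg. 1 can be written as a short NumPy routine. The hyperparameter values, the fixed number of Euler iterations, the initialization, and the toy input stream below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def biosfa_step(x_t, x_prev, y_prev, W, M, gamma=0.1, eta=1e-3, tau=0.5, n_iters=50):
    """One Bio-SFA time step (cf. Alg. 1): fast neural dynamics, then local synaptic updates."""
    a_t = W @ x_t                                   # dendritic projection a_t = W x_t
    y_t = y_prev.copy()                             # warm-start the fast dynamics
    for _ in range(n_iters):                        # Euler steps toward the fixed point y_t = M^{-1} a_t
        y_t = y_t + gamma * (a_t - M @ y_t)
    x_bar = x_t + x_prev                            # delayed sums of inputs and outputs
    y_bar = y_t + y_prev
    W = W + 2 * eta * (np.outer(y_bar, x_bar) - np.outer(a_t, x_t))   # feedforward update
    M = M + (eta / tau) * (np.outer(y_bar, y_bar) - M)                # lateral update
    return y_t, W, M

# Usage on a toy stream (m = 14 expanded dimensions, k = 2 slow features).
rng = np.random.default_rng(1)
m, k, T = 14, 2, 1000
xs = rng.standard_normal((T + 1, m))
W = 0.1 * rng.standard_normal((k, m))
M = np.eye(k)
y = np.zeros(k)
for t in range(1, T + 1):
    y, W, M = biosfa_step(xs[t], xs[t - 1], y, W, M)
```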
Biologically, these delayed sums can be represented by slowly changing concentrations (e.g., calcium) at the pre- and post-synaptic terminals. We can write the elementwise synaptic updates in Alg. 1 as
$$W_{ij} \leftarrow W_{ij} + 2\eta \left( \bar{y}^i_t \bar{x}^j_t - a^i_t x^j_t \right), \quad 1 \le i \le k, \; 1 \le j \le d, \qquad (17)$$
$$M_{ij} \leftarrow M_{ij} + \frac{\eta}{\tau} \left( \bar{y}^i_t \bar{y}^j_t - M_{ij} \right), \quad 1 \le i, j \le k. \qquad (18)$$
Since the $j$th input neuron stores the variables $x^j_t$, $\bar{x}^j_t$ and the $i$th output neuron stores the variables $a^i_t$, $y^i_t$, $\bar{y}^i_t$, the update for each synapse is local. It is worth comparing the derived updates to the feedforward weights, Eq. (17), to the updates proposed by Földiák [8], which are given by
$$w_{ij} \leftarrow w_{ij} + \eta \left( \bar{y}^i_t x^j_t - \bar{y}^i_t w_{ij} \right), \quad 1 \le i \le k, \; 1 \le j \le d.$$
The first terms in the updates, $\bar{y}^i_t \bar{x}^j_t$ and $\bar{y}^i_t x^j_t$, are quite similar. The main difference between the updates is between the second terms: $a^i_t x^j_t$ and $\bar{y}^i_t w_{ij}$. In our network, the second term $a^i_t x^j_t$ serves to whiten the inputs, whereas Földiák's second term $\bar{y}^i_t w_{ij}$ is added as a decay to ensure the weights remain bounded. In addition, our network includes lateral weights $M_{ij}$ which ensure that the projections $y^i_t$ are distinct, and such lateral weights are not included in Földiák's network. While the updates are similar in some respects, it is difficult to compare the outputs of the networks because Földiák's network is postulated rather than derived from a principled objective function, so the network must be simulated numerically in order to evaluate its output.

6 Experiments

To validate our approach, we test Bio-SFA (Alg. 1) on synthetic and naturalistic datasets. We provide an overview of the experiments here and defer detailed descriptions and additional figures to Sec. B of the supplement. The evaluation code is available at https://github.com/flatironinstitute/bio-sfa. To measure the performance of our algorithm, we compare the "slowness" of the projection $Y = M^{-1} W X$ with the slowest possible projection. This can be quantified using the objective (6). We first evaluate the objective (6) at its optimum:
$$\mathrm{slow} := \min \left\{ \operatorname{Tr} V^\top C_{\dot{x}\dot{x}} V : V \in \mathbb{R}^{m \times k} \ \text{s.t.} \ V^\top C_{xx} V = I_k \right\},$$
which can be evaluated using an offline generalized eigenvalue problem solver. To compute the error at each iteration, we compare the slowness of the current projection to the minimal slowness:
$$\mathrm{Error} = \operatorname{Tr} \tilde{V}^\top C_{\dot{x}\dot{x}} \tilde{V} - \mathrm{slow}, \qquad \tilde{V} := W^\top M^{-1} \left( M^{-1} W C_{xx} W^\top M^{-1} \right)^{-1/2}, \qquad (19)$$
where the normalization ensures that $\tilde{V}$ satisfies the constraint in Eq. (6). In Sec. B, we show that $\tilde{V}$ indeed asymptotically satisfies the constraint in Eq. (6).

6.1 Chaotic time series

Before testing on naturalistic datasets, we test Bio-SFA on a challenging synthetic dataset. Let $\{\gamma_t\}$ be a (slow) driving force equal to the sum of 6 sine functions with random amplitudes, frequencies and phases, Fig. 2a (red line). Define the noisy series derived from the recursive logistic map with time-varying growth rate: $z_t = (3.6 + 0.4\gamma_t)\, z_{t-1} (1 - z_{t-1})$, Fig. 2b (black dots). Wiskott [30] showed that the driving force $\{\gamma_t\}$ can be recovered from the noisy series $\{z_t\}$ by implementing (offline) Quadratic SFA on the 4-dimensional signal $\{s_t\}$ whose components correspond to the values of the noisy series over the 4 most recent time steps, i.e., $s_t := (z_t, z_{t-1}, z_{t-2}, z_{t-3})$. We replicate the results from [30] using Bio-SFA. Let $\{x_t\}$ be the 14-dimensional quadratic expansion of $\{s_t\}$. We use Bio-SFA to extract the slowest one-dimensional projection $\{y_t\}$, Fig. 2c (green dots). Qualitatively, we see that the slowest projection recovered by Bio-SFA closely aligns with the slow driving force $\{\gamma_t\}$. In Fig. 2d we plot the error at each iteration.
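A sketch of how the synthetic data of this experiment can be generated, together with the quadratic expansion of Sec. 2.2 (here $d = 4$, so $m = 14$), is given below. The amplitude and frequency ranges of the driving force, the initial condition, and the normalization are our own illustrative choices; only the logistic-map recursion and the 4-lag embedding follow the description above.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 30000

# Slow driving force: sum of 6 sines with random amplitudes, frequencies, and phases.
steps = np.arange(T + 4)
amps = rng.uniform(0.5, 1.0, 6)
freqs = rng.uniform(1e-4, 1e-2, 6)
phases = rng.uniform(0.0, 2.0 * np.pi, 6)
force = sum(a * np.sin(2.0 * np.pi * f * steps + p) for a, f, p in zip(amps, freqs, phases))
force = force / np.abs(force).max()       # keep the logistic growth rate inside [3.2, 4.0]

# Noisy series from the logistic map with time-varying growth rate.
z = np.empty(T + 4)
z[0] = 0.6
for t in range(1, T + 4):
    z[t] = (3.6 + 0.4 * force[t]) * z[t - 1] * (1.0 - z[t - 1])

# 4-lag embedding s_t = (z_t, z_{t-1}, z_{t-2}, z_{t-3}) and its quadratic expansion (d = 4, m = 14).
S = np.stack([z[3:], z[2:-1], z[1:-2], z[:-3]], axis=1)

def quadratic_expansion(S):
    d = S.shape[1]
    feats = [S[:, i] for i in range(d)]                                   # linear monomials
    feats += [S[:, i] * S[:, j] for i in range(d) for j in range(i, d)]   # quadratic monomials
    return np.stack(feats, axis=1)

X = quadratic_expansion(S)
X = X - X.mean(axis=0)                    # center, as required by the SFA objective
print(X.shape)                            # (T + 1, 14)
```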
6.2 Sequence of natural images

Next, we test Bio-SFA on a sequence of natural images. First, a 256-dimensional sequence $\{z_t\}$ was generated by moving a $16 \times 16$ patch over 13 natural images from [12] via translations, zooms, and rotations, Fig. 3a. To extract relevant features, we follow the exact same procedure as Berkes and Wiskott [1], but replace the offline SFA solver with Bio-SFA to generate a 49-dimensional output signal $\{y_t\}$. To visualize the 49-dimensional output, we calculate the unit vector $z \in \mathbb{R}^{256}$ that maximizes $y^i$, for $i = 1, \dots, 49$. These optimal stimuli, $z$, which are displayed as $16 \times 16$ patches in Fig. 3b, resemble Gabor patches and are in qualitative agreement with physiological characteristics of complex cells in the visual cortex. This aligns with the results in [1]; see also [2]. To evaluate the performance of Bio-SFA, we plot the error at each iteration in Fig. 3c.

6.3 Hierarchical SFA on the visual stream of a simulated rat

Following Schönfeld and Wiskott [25], we test a hierarchical 3-layer organization of Bio-SFA "modules" on the inputs from the RatLab framework [24], which simulates the field of view of a rat with random trajectories in a rectangular room. Each layer consists of spatially distributed modules that receive overlapping patches of either the visual stream or the preceding layer. Inside each module, there are 3 steps: (1) Bio-SFA first reduces the dimension of the inputs to generate a 32-dimensional signal, (2) the reduced signal is quadratically expanded, and (3) Bio-SFA reduces the expanded signal to the slowest 32 features. The layers are organized so that the modules in each successive layer receive inputs from larger patches of the visual field, Fig. 4a. Adopting the procedure in [25], the network is trained greedily layer-by-layer with weight sharing across modules in each layer (see Sec. B of the supplement). The final layer consists of a single module, with a 32-dimensional output, whose spatially-dependent firing maps are shown in Fig. 4b. The 3 SFA layers are followed by a fourth layer, which performs sparse coding via Independent Component Analysis (ICA) [11] (in the offline setting) with a 32-dimensional output, whose firing maps are shown in Fig. 4c. As in [25], the firing maps of the final ICA layer are spatially localized and resemble the firing maps of place cells in the hippocampus. To quantify the performance of this hierarchical network, we plot the slowness (not errors, see Sec. B of the supplement) of each of the first 3 layers' outputs at each iteration, Fig. 4d.

7 Discussion

We derived an online algorithm for SFA with a biologically plausible neural network implementation, which is an important step towards understanding how the brain could use temporal slowness as a computational principle. While our network implementation satisfies natural requirements for biological plausibility, it differs from biological neural circuits in a number of ways. For instance, our network includes direct lateral inhibitory synapses between excitatory neurons, whereas inhibition is typically modulated by interneurons in biological networks. By adapting the approach in [21], interneurons can be introduced to modulate inhibition. Second, the synaptic updates in our network require both the pre- and post-synaptic neurons to store slow variables; however, signal frequencies in dendrites are slower than in axons, suggesting that it is more likely for slow variables to be stored in the post-synaptic neuron, not the pre-synaptic neuron.
We can address this with a modification, which is exact when the expanded signal $\{x_t\}$ exhibits time-reversibility, so that only the post-synaptic neuron represents slow variables; see Sec. C of the supplement. Finally, our network includes linear neurons, which do not respect the nonnegativity constraints of neuronal outputs. An interesting future direction is to understand the effect of enforcing a nonnegativity constraint on $y_t$ in the objective function (9).

Broader impact

An important problem in neuroscience is to understand the computational principles the brain uses to process information. Progress on this front has the potential to have wide-ranging benefits for helping to manage the adverse effects of neurological diseases and disorders. This work represents a small step in that direction.

Acknowledgements

This work was internally supported by the Simons Foundation. We thank Yanis Bahroun, Nicholas Chua, Shiva Farashahi, Johannes Friedrich, Alexander Genkin, Jason Moore, Anirvan Sengupta and Tiberiu Tesileanu for helpful comments and feedback on an earlier draft of this work.
1. What is the focus and contribution of the paper on slow feature analysis? 2. What are the strengths of the proposed approach, particularly in terms of its relationship to the normative MDS objective? 3. What are the weaknesses of the paper, especially regarding its claim of biological plausibility? 4. How does the reviewer assess the novelty and impact of the paper's contributions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper produces a so-called biologically plausible neural network for slow-feature analysis. Biological plausibility here means that network learning is online and based on local synaptic learning rules. These online and locality requirements might lead to low computational overhead. While Foldiak, Wiskott, and many others have explored online local learning for SFA in the last thirty years, this paper attempts to relate SFA to a normative theory through an MDS objective. Strengths While Foldiak, Wiskott, and others have explored local online learning for SFA in the last thirty years, this paper successfully relates it to the normative MDS objective. Similar work on biologically plausible implementations has been one-dimensional; this work extends that to multiple dimensions. Weaknesses The conceptual and theoretical innovations are limited, which is not surprising given that the problem has been worked on for the last thirty years, most notably by Wiskott's lab. Claim of biological plausibility seems weak, limited only to local learning rules and online learning.
NIPS
Title A Biologically Plausible Neural Network for Slow Feature Analysis Abstract Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli. 1 Introduction Unsupervised learning of meaningful latent features from noisy, high-dimensional data is a fundamental problem for both machine learning and brain function. Often, the relevant features in an environment (e.g., objects) vary on relatively slow timescales when compared to noisy sensory data (e.g., the light intensity measured by a single receptor in the retina). Therefore, temporal slowness has been proposed as a computational principle for extracting relevant latent features [8, 19, 31]. A popular approach for extracting slow features, introduced by Wiskott and Sejnowski [31], is Slow Feature Analysis (SFA). SFA is an unsupervised learning algorithm that extracts the slowest projection, in terms of discrete time derivative, from a nonlinear expansion of the input signal. When trained on natural image sequences, SFA extracts features that resemble response properties of complex cells in early visual processing [2]. Impressively, hierarchical networks of SFA trained on simulated rat visual streams learn representations of position and orientation similar to representations encoded in the hippocampus [9]. The relevance of SFA is strengthened by its close relationship to information theoretic objectives and its equivalence to other successful algorithms under certain assumptions. When the time series is reversible and Gaussian, (Linear) SFA is equivalent to maximizing mutual information between the current output of the system and the next input [7, 5]. Moreover, features extracted by several ⇤Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. algorithms favoring predictability from real-world datasets are similar to those extracted by SFA [29]. Finally, (Linear) SFA is equivalent to a time-lagged independent components analysis [3, 10], which is a popular statistical technique used to analyze molecular dynamics [22, 20, 26, 27]. Due to its success in modeling aspects of neural processing, deriving an algorithm for SFA with a biologically plausible neural network implementation is an important task. 
1. What is the focus and contribution of the paper regarding SFA? 2. What are the strengths of the proposed approach, particularly in its application to the computational neuroscience community? 3. What are the weaknesses of the paper, such as minor typos and notation issues?
Summary and Contributions Strengths Weaknesses
Summary and Contributions While SFA is a classical and widely used unsupervised learning algorithm, a biologically plausible neural algorithm for SFA is still missing. This paper presents a solution, and the numerical results support the proposed method well. So overall, it's a very useful piece of work. Strengths 1. The presented method is very useful to the computational neuroscience community. 2. The paper presents a reformulation of SFA, which leads to a biologically plausible SFA algorithm. 3. The numerical results seem to be solid and support the proposed theory well. Weaknesses There are some minor typos and notation issues, just like most NeurIPS drafts. If accepted for publication, the authors should definitely make an effort to polish the paper.
NIPS
Title Learning Latent Seasonal-Trend Representations for Time Series Forecasting Abstract Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoder in computer vision and classical time series decomposition, we plan to infer a couple of representations that depict seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments prove that LaST achieves state-of-the-art performance on time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github1. 1 Introduction Time series forecasting plays a significant role in plethora of modern applications, ranging from climate analysis [1], energy production [2], traffic flows [3] to financial markets and various industrial systems [4]. The ubiquity and importance of time series data have recently attracted researcher efforts resulting in a myriad of deep learning forecasting models [5, 6] ameliorating the time series forecasting. Based on advanced techniques such as RNN and Transformer [7]), these methods usually learn latent representations to epitomize every instant of the signals, and then derive forecasting results by a predictor, achieving great progress on forecasting tasks. However, these models have difficulties to extract exact/clear information related to temporal patterns (e.g., seasonality, trend, and level), especially in supervised end-to-end architecture without any constraint on representations [8]. As a consequence, efforts have been made to apply the variational inference into time series modeling [9, 10, 11], where improved guidance for latent representations with probabilistic form has been proved beneficial to downstream time series tasks [12]. However, when various intricately co-evolving constituents exist a time series data, analyzing with a single representation will result in superficial variable and models’ non-reusability and lack of interpretability, due to the highly entangled nature of neural networks [13, 14]. Thus, while providing efficiency and effectiveness, existing approaches with a single high-dimensional representation sacrifice the ∗Corresponding author. 1https://github.com/zhycs/LaST 36th Conference on Neural Information Processing Systems (NeurIPS 2022). information utilization and explainability, which may further lead to overfitting and degenerated performance. To address the above limitations and seek a new disentangled time series learning framework, we leverage the ideas from the decomposition strategy [15, 16] to split time series data into several components, each of which captures an underlying pattern category. The decomposition assists the analysis process and reveals underlying insights more consistently with human intuition. 
This insight motivates us to produce a couple of latent representations that respond to different time series characteristics (in our case the seasonal and trend), from which we predict the results by formulating sequence as the sum of these characteristics. The representations should be as independent as possible to avoid a model prone to feature entanglement, while also having sufficient information to the input sequence. Towards that, we propose a novel framework LaST to learn the Latent Seasonal-Trend representations for time series forecasting. LaST exploits an encoder-decoder architecture and follows variational inference theory [17] to learn a couple of disentangled latent representations that describe seasonal and trend of time series. To achieve this goal, LaST enforces the representations into disentanglement under two constraints: (1) from the input reconstruction, we dissect intrinsic seasonal-trend patterns which can be readily obtained from raw time series and off-the-shelf measurement methods, and accordingly design a series of auxiliary objectives; (2) from representations themselves, we minimize the mutual information (MI) between seasonal and trend representations on the premise that grantee the consistency between input data and each of them. Our main contributions are threefold: • We start with the variational inference and information theory to design the seasonal-trend representations learning and disentanglement mechanisms, and practically demonstrate their effectiveness and superiority (over the existing baselines) on time series forecasting task. • We propose LaST, a novel latent seasonal-trend representations learning framework, which encodes input as disentangled seasonal-trend representations and provides a practicable approach that reconstructs seasonal and trend separately to avoid chaos. • We introduce MI terms as a penalty and present a novel tractable lower bound and an upper bound for their optimizations. The lower bound ameliorates the biased gradient issue in prevalent MINE approach and ensures informative representations. The upper bound provides the feasibility to further reduce the overlapping of seasonal-trend representations. 2 Related work Most of the deep learning methods for time series forecasting are designed as an end-to-end architecture. Various basic techniques (e.g., residual structure [18, 19], autoregressive network [20, 21], and convolutions [22, 23]) are exploited to produce expressive non-linear hidden states and embeddings that reflect the temporal dependencies and patterns. There is also a body of works that apply Transformer [7] structure into time series forecasting tasks [24, 25, 26, 6], aiming to discover the relationships across the sequence and focus on the important time points. Deep learning methods have achieved superior performance in comparison to the classical algorithms such as ARIMA [27] and VAR [28], and have become prevalent in multiple applications. Learning flexible representations has been demonstrated to be beneficial for downstream tasks by numerous researches [12, 29]. In time series representations domain, early methods, employing the variational inference, jointly train an encoder and corresponding decoder that reconstructs raw signals to learn approximate latent representations [10, 30, 31]. Recent efforts have improve these variational methods [32, 33] by establishing more complex and flexible distributions using techniques such as copula [32] and normalizing flow [34, 35]. 
Another group of works exploits burgeoning contrastive learning to obtain invariant representations from augmented time series [36, 37, 38], which avoids the reconstruction process and improves representations without additional supervision. Time series decomposition [15, 16] is a classical method that splits complex time series into several components to obtain temporal patterns and interpretability. Recent works have applied machine learning and deep learning approaches [39, 40, 41] to robustly and efficiently achieve the decomposition on large-scale datasets. There are also research results that tackle forecasting with the assistance of decomposition. For example, Autoformer [26] decomposes time series into seasonal-trend parts by average pooling and introduces an autocorrelation mechanism to empower Transformer [7] for better relation discovery. CoST [38] encodes signals into seasonal and trend representations in the frequency and temporal domains, respectively, and introduces contrastive learning to supervise their learning. Different from our work, these methods exploit a simple average-pooling decomposition mechanism, which may impose incompatible periodicity assumptions, or intuitively disentangle the representations by processing them in different domains. Meanwhile, LaST adaptively epitomizes the seasonality and trend by disentangled representations and boosts their disassociation from a probabilistic perspective in the latent space.

3 Latent seasonal-trend representations learning framework

We now formalize the problem definition and introduce the proposed LaST framework. We note that in LaST we use seasonal and trend characteristics for disentanglement learning, but our framework can be easily extended to adapt to situations that have more than two components to dissociate.

Problem definition. Consider a time series dataset $\mathcal{D}$ consisting of $N$ i.i.d. sequences denoted as $X^{(i)}_{1:T} = \{x^{(i)}_1, x^{(i)}_2, \cdots, x^{(i)}_t, \cdots, x^{(i)}_T\}$, where $i \in \{1, 2, \dots, N\}$, and each $x^{(i)}_t$ is a univariate or multivariate value representing the current observation(s) at time instant $t$ (e.g., price and rainfall). We aim to derive a model that outputs expressive representations $Z_{1:T}$, suitable for predicting future sequence(s) $Y = \hat{X}_{T+1:T+\tau}$. Hereafter, when there is no ambiguity we omit the superscripts and the subscript $1\!:\!T$. A model that infers the likelihood between observation $X$ and future $Y$ with latent representation $Z$ can be formulated as follows:
$$P(X, Y) = P(Y \mid X) P(X) = \int_Z P(Y \mid Z) P(Z \mid X)\, dZ \int_Z P(X \mid Z) P(Z)\, dZ. \qquad (1)$$
From the perspective of variational inference (cf. [17]), the likelihood $P(X \mid Z)$ is calculated by a posterior distribution $Q_\phi(Z \mid X)$ and maximized by the following evidence lower bound (ELBO):
$$\log P_\Theta(X, Y) \ge \log \int_Z P_\psi(Y \mid Z) Q_\phi(Z \mid X)\, dZ + \mathbb{E}_{Q_\phi(Z \mid X)}[\log P_\theta(X \mid Z)] - \mathrm{KL}(Q_\phi(Z \mid X) \,\|\, P(Z)) = \mathcal{L}_{\mathrm{ELBO}}, \qquad (2)$$
where $\Theta$, composed of $\psi$, $\phi$, and $\theta$, denotes the learned parameters. However, as pointed out in Sec. 1, this faces an entanglement problem and cannot clearly extract complicated temporal patterns. To ameliorate this limitation, we incorporate the decomposition strategy into LaST, which learns a couple of disentangled representations to depict seasonal and trend dynamics. Specifically, we formulate the temporal signals $X$ and $Y$ as the sum of seasonal and trend components, i.e., $X = X^s + X^t$. Accordingly, the latent representation $Z$ is factorized into $Z^s$ and $Z^t$, assumed to be independent of each other, i.e., $P(Z) = P(Z^s) P(Z^t)$.
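The factorization of $Z$ into $Z^s$ and $Z^t$ can be illustrated with a generic sketch: two independent diagonal-Gaussian posteriors $Q_{\phi_s}(Z^s \mid X)$ and $Q_{\phi_t}(Z^t \mid X)$ with reparameterized sampling and KL terms against $\mathcal{N}(0, I)$ priors. The MLP parameterization, layer sizes, and all names below are assumptions for illustration only and are not LaST's actual encoder, which is described by its publicly available implementation.

```python
import torch
import torch.nn as nn

class SeasonTrendEncoder(nn.Module):
    """Two independent diagonal-Gaussian posteriors over Z^s and Z^t given X (illustrative only)."""
    def __init__(self, in_dim, latent_dim, hidden=64):
        super().__init__()
        self.season = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 2 * latent_dim))
        self.trend = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2 * latent_dim))

    @staticmethod
    def _sample(stats):
        mu, log_var = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)   # reparameterization trick
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)  # KL to N(0, I)
        return z, kl

    def forward(self, x):                 # x: (batch, T, in_dim)
        zs, kl_s = self._sample(self.season(x))
        zt, kl_t = self._sample(self.trend(x))
        return zs, zt, kl_s.mean() + kl_t.mean()

encoder = SeasonTrendEncoder(in_dim=7, latent_dim=16)
x = torch.randn(8, 96, 7)                 # a batch of length-96 multivariate series
zs, zt, kl = encoder(x)
print(zs.shape, zt.shape, kl.item())
```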
Figure 1 illustrates the two parts of the LaST framework: (a) representations learning, producing disentangles seasonal-trend representations (separate reconstructions and MI constraints); and (b) prediction based on learned representations. Theorem 1. With the decomposition strategy, Eq. (2) (i.e., the ELBO) naturally has the following factorized form: LELBO = log ∫ Zs ∫ Zt Pψ(Y |Zs, Zt)Qϕs,ϕt(Zs, Zt|X)dZsdZt (predictor) (3) + EQϕs (Zs|X)[logPθs(X s|Zs)] + EQϕt (Zt|X)[logPθt(X t|Zt)] (reconstruction) (4) −KL(Qϕs(Zs|X)||P (Zs))−KL(Qϕt(Zt|X)||P (Zt)). (KL divergence) (5) The detailed inference process of the above formula is provided in Appendix A.1. The ELBO is split into three main units, i.e., Eqs. (3), (4), and (5). The predictor makes forecasting and measures the accuracy (e.g,. L1 or L2 losses), reconstruction and KL divergence are served as regularization terms aiming to improve the learned representations. The three units are described in the following. Predictor: The predictor (cf. Eq. (3) and Figure 1(b)) can be regarded as the sum of two independent parts: log ∫ Zs Pψs(Y s|Zs)Qϕs(Zs|X)dZs and log ∫ Zt Pψt(Y t|Zt)Qϕt(Zt|X)dZt. Here we introduce two specialized approaches to harness the seasonal-trend representations combining their own characteristics. Given the seasonal latent representation Zs ∈ RT×d, the seasonal predictor first employs the discrete Fourier transform (DFT) algorithm to detect the seasonal frequencies, i.e., ZsF = DFT(Z s) ∈ CF×d, where F = ⌊T+12 ⌋ due to the Nyquist theorem [42]. Then, we inverse the frequencies back to the temporal domain to extend the representation to the future part, i.e., Z̃s = iDFT(ZsF ) ∈ Rτ×d. More details of the DFT and iDFT functions can be found in Appendix B.1. Given Zt, trend predictor provides a feed forward network (FFN) f : T → τ to produce a predictable representation Z̃t ∈ Rτ×d. We end the predictor with two FFNs to map Z̃s and Z̃t into Y s and Y t, respectively, and obtain the forecasting result Y by their sum. Reconstruction and KL divergence: Among these two terms, the KL divergence can be easily estimated by Monte Carlo sampling with prior assumptions. Here we take a widely used setting that priors both follow N (0, I) for efficiency, more discussions of our priors can be found in Appendix C. As for reconstruction term, it cannot be directly measured owing to the unknown Xs and Xt. Besides, merging these two terms into EQϕs,ϕt (Zs,Zt|X)[logPθs,θt(X|Z s, Zt)] will result in chaos since the decoder is prone to reconstruct the intricate time series from every representation. Theorem 2. With the Gaussian distribution assumption, the reconstruction loss Lrec can be estimated without leveraging Xs and Xt, and Eq. (4) can be replaced with the following formula: Lrec = − T−1∑ κ=1 ∥∥AXX(κ)−AX̂sX̂s(κ)∥∥2 + CORT(X, X̂t)− ∥∥∥X̂t + X̂s −X∥∥∥2 , (6) CORT(X, X̂t) = ∑T−1 i=1 ∆X t i∆X̂ t i√∑T−1 i=1 ∆X t √∑T−1 i=1 ∆X̂ t , (7) where AXX(κ) = ∑T−κ i=1 (Xt − X̄)(Xt+κ − X̄) is the autocorrelation coefficient with lagged value κ (we employ an efficient implementation in frequency domain [43], details are in Appendix B.2), CORT(X, X̂t) is the temporal correlation coefficient, and ∆Xi = Xi −Xi−1 is the first difference. The proof is provided in Appendix A.2. According to Eq. (6), the reconstruction loss now can be estimated and, conversely, used to supervise disentangled representation learning. However, we find that this framework still holds certain drawbacks: (1) The KL divergence tends to narrow down the distance between posterior and prior. 
However, we find that this framework still has certain drawbacks: (1) The KL divergence tends to narrow the distance between the posterior and the prior. When the modeling capacity is not sufficient to satisfy both the variational regularization and the data fit, the model tends to sacrifice the latter [44]. The posterior may then become almost non-informative about the inputs, which renders the forecasts irrelevant to the observations. (2) The disentanglement of the seasonal-trend representations is boosted only indirectly by the separate reconstructions, whereas we would like to impose a direct constraint on the representations themselves.

We alleviate these limitations by introducing additional mutual information regularization terms. Specifically, we increase the mutual information between Z^s, Z^t and X to alleviate the divergence-narrowing problem [44, 45], while decreasing the mutual information between Z^s and Z^t to further dissociate their representations. The maximization objective of LaST becomes

L_LaST = L_ELBO + I(X, Z^s) + I(X, Z^t) − I(Z^s, Z^t),   (8)

where I(·, ·) denotes the mutual information between two variables. However, these mutual information terms are intractable [46, 47, 48]. We address this problem in the next section.

4 Mutual information bounds for optimization

We now derive tractable mutual information bounds for maximizing I(X, Z^s) and I(X, Z^t) and minimizing I(Z^s, Z^t) in Eq. (8), providing a lower and an upper bound for model optimization.

Lower bound for I(X, Z^s) or I(X, Z^t). We omit the superscript s or t when analyzing the lower bound. Among the prior approaches exploring lower bounds for MI [49, 50, 51], MINE [51], for example, employs the KL divergence between the joint distribution and the product of marginals and defines an energy-based variational family to achieve a flexible and scalable lower bound. It can be formulated as

I(X, Z) ≥ E_{Q_ϕ(X,Z)}[γ_α(X, Z)] − log E_{Q(X)Q_ϕ(Z)}[e^{γ_α(X,Z)}] = I_MINE,

where γ_α is a learned normalized critic with parameters α. However, this bound suffers from a biased gradient owing to the parametric logarithmic term (see Appendix A.3 for a proof). Inspired by [47], we substitute the logarithmic function by its tangent family to ameliorate the biased bound:

I_MINE ≥ E_{Q_ϕ(X,Z)}[γ_α(X, Z)] − ( (1/η) E_{Q(X)Q_ϕ(Z)}[e^{γ_α(X,Z)}] + log η − 1 )
      ≥ E_{Q_ϕ(X,Z)}[γ_α(X, Z)] − (1/η) E_{Q(X)Q_ϕ(Z)}[e^{γ_α(X,Z)}],   (9)

where η denotes the tangent point. The first inequality relies on the convexity of the negative logarithm – the values on the curve are upper bounds for those on the tangent line – and is tight when the tangent point coincides with the argument, i.e., the true value of E_{Q(X)Q_ϕ(Z)}[e^{γ_α(X,Z)}]. The closer the tangent point is to this argument, the greater the lower bound. Therefore, we set η to the variational term E_{Q(X)Q_ϕ(Z)}[e^{γ_α(X,Z)}], which estimates this argument, to obtain as great a lower bound as possible. In the second inequality, γ_α(X, Z) – a critic function activated by a Sigmoid – is limited to [0, 1] and thus −(log η − 1) ≥ 0. This inequality is tight only if E_{Q(X)Q_ϕ(Z)}[γ_α(X, Z)] = 1, which means γ_α can discriminate whether a pair of variables (X, Z) is sampled from the joint distribution or from the marginals. As in MINE, this consistency requirement can be addressed via the universal approximation theorem for neural networks [52]. Thus, Eq. (9) provides a flexible and scalable lower bound for I(X, Z) with an unbiased gradient. For the evaluation, we follow a tractable sampling scheme [53, 51] that draws joint samples (X^(i), Z^(i)) from Q(Z^(i)|X^(i)) P_D(X^(i)). As for the marginal Q_ϕ(Z), we randomly select a datapoint j and then sample from Q_ϕ(Z|X^(j)) P_D(X^(j)). Details of the optimization are shown in Algorithm 1.
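A minimal PyTorch-style sketch of how the lower bound of Eq. (9) can be estimated on a mini-batch is given below; it is illustrative only, the critic and variable names are placeholders, and η is set to the detached marginal estimate as in step 5 of Algorithm 1, which is what keeps the gradient of the estimator unbiased.

import torch

def mi_lower_bound(critic, x, z_joint, z_marginal):
    # Eq. (9): E_joint[gamma_alpha(X, Z)] - (1/eta) * E_marg[exp(gamma_alpha(X, Z))],
    # where z_joint ~ Q_phi(Z|X) and z_marginal comes from shuffled pairs (marginal samples).
    gamma_joint = critic(x, z_joint)            # sigmoid-activated critic, values in [0, 1]
    exp_marg = torch.exp(critic(x, z_marginal))
    eta = exp_marg.mean().detach()              # tangent point = current marginal estimate
    return gamma_joint.mean() - exp_marg.mean() / eta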
Upper bound for I(Z^s, Z^t). Few efforts have been made to explore tractable upper bounds for mutual information [54, 47, 55]. Existing upper bounds (listed in Appendix D.1) are tractable only when the probabilistic density of the joint or conditional distributions – here Q(Z^s|Z^t), Q(Z^t|Z^s), or Q(Z^s, Z^t) – is known. However, these distributions lack interpretability and can hardly be modeled directly, which makes the estimation of the above upper bounds intractable. To avoid the direct estimation of unknown probabilistic densities, we introduce an energy-based variational family for Q(Z^s, Z^t) that uses a normalized critic γ_β(Z^s, Z^t), as in Eq. (9), to establish a tractable upper bound. Specifically, we incorporate the critic γ_β into the upper bound I_CLUB [55] to obtain a tractable Seasonal-Trend Upper Bound (STUB) for I(Z^s, Z^t), which is defined as:

I(Z^s, Z^t) ≤ E_{Q(Z^s,Z^t)}[log Q(Z^s|Z^t)] − E_{Q(Z^s)Q(Z^t)}[log Q(Z^s|Z^t)] = I_CLUB   (10)
            = E_{Q_{ϕ^s,ϕ^t}(Z^s,Z^t)}[γ_β(Z^s, Z^t)] − E_{Q_{ϕ^s}(Z^s)Q_{ϕ^t}(Z^t)}[γ_β(Z^s, Z^t)] = I_STUB.   (11)

The derivation details of this formula are provided in Appendix D.2. The inequality in Eq. (10) is tight only if Z^s and Z^t are a pair of independent variables [55]. Independence is also a sufficient condition for the tightness of I_STUB, since both the MI and Eq. (11) are zero in the independent case, which is exactly our seasonal-trend disentanglement objective. The critic γ_β, similar to γ_α, takes on the discriminating responsibility but provides converse scores, constraining the MI toward its minimum. However, Eq. (11) may take negative values while the parameters β are being learned, resulting in an invalid upper bound for MI. To alleviate this problem, we additionally introduce a penalty term ||I_STUB^neg||^2 to assist the model optimization, which is an L2 loss on the negative parts of I_STUB. For the evaluation, we use the same sampling manner as for the lower bound; optimization details are also shown in Algorithm 1.

Algorithm 1 An epoch of the optimization of LaST.
1: Initialize the parameters of LaST: Θ = {ψ^s, ψ^t, ϕ^s, ϕ^t, θ^s, θ^t}, Γ = {α^s, α^t, β}.
2: for each mini-batch of size B consisting of {X^(i), Y^(i)}_{i∈B} in the training set do
3:   Draw samples of the latent representations {Z^{s(i)}}_{i∈B} and {Z^{t(i)}}_{i∈B} from the distributions {Q_{ϕ^s}(Z^s|X^(i))}_{i∈B} and {Q_{ϕ^t}(Z^t|X^(i))}_{i∈B}, respectively;
4:   Shuffle {Z^{s(i)}}_{i∈B} and {Z^{t(i)}}_{i∈B} to form the marginal samples {Z^{s(j)}}_{j∈B} and {Z^{t(j)}}_{j∈B}, respectively;
5:   Compute η^s, η^t: η^s ← (1/B) Σ_{i=j=1}^{B} e^{γ_{α^s}(X^(i), Z^{s(j)})}, η^t ← (1/B) Σ_{i=j=1}^{B} e^{γ_{α^t}(X^(i), Z^{t(j)})};
6:   Update parameters Θ: Θ ← G(∇Θ)[ L_ELBO + (1/B) Σ_{i=j=1}^{B} ( γ_{α^s}(X^(i), Z^{s(i)}) − (1/η^s) e^{γ_{α^s}(X^(i), Z^{s(j)})} + γ_{α^t}(X^(i), Z^{t(i)}) − (1/η^t) e^{γ_{α^t}(X^(i), Z^{t(j)})} ) − (1/B) Σ_{i=j=1}^{B} ( γ_β(Z^{s(i)}, Z^{t(i)}) − γ_β(Z^{s(i)}, Z^{t(j)}) ) + average( ||( γ_β(Z^{s(i)}, Z^{t(i)}) − γ_β(Z^{s(i)}, Z^{t(j)}) )_neg||^2 ) ];
7:   Update parameters Γ: Γ ← G(∇Γ)[ (1/B) Σ_{i=j=1}^{B} ( γ_{α^s}(X^(i), Z^{s(i)}) − (1/η^s) e^{γ_{α^s}(X^(i), Z^{s(j)})} + γ_{α^t}(X^(i), Z^{t(i)}) − (1/η^t) e^{γ_{α^t}(X^(i), Z^{t(j)})} ) − (1/B) Σ_{i=j=1}^{B} ( γ_β(Z^{s(i)}, Z^{t(i)}) − γ_β(Z^{s(i)}, Z^{t(j)}) ) + average( ||( γ_β(Z^{s(i)}, Z^{t(i)}) − γ_β(Z^{s(i)}, Z^{t(j)}) )_neg||^2 ) ];
8: end for
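For completeness, a matching PyTorch-style sketch of the STUB estimator and the negative-part penalty used in steps 6-7 of Algorithm 1 is shown below. Again, this is an illustration rather than the released code; the critic here stands for the sigmoid-activated 2-layer MLP γ_β described in the implementation details, and the marginal pairs are obtained by shuffling Z^t within the batch.

import torch

def stub_and_penalty(critic, zs, zt, zt_shuffled):
    # Eq. (11): I_STUB = E_joint[gamma_beta(Zs, Zt)] - E_marg[gamma_beta(Zs, Zt)].
    diff = critic(zs, zt) - critic(zs, zt_shuffled)      # per-sample differences
    stub = diff.mean()
    penalty = torch.clamp(-diff, min=0.0).pow(2).mean()  # L2 penalty on the negative parts
    return stub, penalty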
5 Experiments

We now present the results of our extensive experimental evaluation comparing LaST with state-of-the-art baselines and report a series of empirical results, along with an ablation study and visualizations of the seasonal-trend representations. Further details and results are provided in Appendix F.

5.1 Settings

Datasets and Baselines. We conducted our experiments on seven real-world benchmark datasets from four categories of mainstream time series forecasting applications: (1) ETT [25] (https://github.com/zhouhaoyi/ETDataset): Electricity Transformer Temperature consists of the target value "oil temperature" and six "power load" features, recorded hourly (i.e., ETTh1 and ETTh2) and every 15 minutes (i.e., ETTm1 and ETTm2) over two years. (2) Electricity, from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams) and preprocessed by [56], is composed of the hourly electricity consumption of 321 clients in kWh from 2012 to 2014. (3) Exchange [56], with daily exchange rates of eight countries from 1990 to 2016. (4) Weather (https://www.bgc-jena.mpg.de/wetter) contains 21 meteorological indicators (e.g., temperature and humidity) recorded every 10 minutes in 2020. We compare LaST with the latest state-of-the-art methods on time series modeling and forecasting tasks from two categories: (1) representation learning techniques, including CoST [38], TS2Vec [37], and TNC [36]; (2) end-to-end forecasting models, including VAE-GRU [10], Autoformer [26], Informer [25], and TCN [22]. Further descriptions and settings of these baselines are provided in Appendix F.1.

Evaluation setup. Following prior work, we run our model in both univariate and multivariate forecasting settings. In multivariate forecasting, LaST accepts and forecasts all variables in a dataset; in univariate forecasting, LaST only considers a specific feature of each dataset. We employ standard normalization and set the input length T = 201 for all datasets. For the dataset split, we follow the standard protocol that divides each dataset into training, validation, and test sets in chronological order with a ratio of 6:2:2. We report the evaluation results on the test set for the model that achieves the best performance on the validation set.

Implementation details. For the network structure of LaST, we use a single-layer fully connected network as the feed-forward network (FFN), which is applied in the modeling of the posterior, the reconstruction, and the predictor. Besides, we employ a 2-layer MLP for the critic γ in the MI bound estimations. The dimensions of the seasonal and trend representations are kept consistent; we set them to 32 in univariate forecasting and to 128 in multivariate forecasting. MAE loss is used to measure the forecasting error derived from the predictor. For the training strategy, we use the Adam [57] optimizer, and the training process is early stopped within 10 epochs. We initialize the learning rate to 10^-3 and decay it by a factor of 0.95 every epoch.
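Before turning to the results, a small sketch of the data preparation described in the evaluation setup above may be useful. It is not part of the paper's code; the function name is hypothetical, and computing the normalization statistics on the training portion only is an assumption of this sketch rather than something the paper specifies.

import numpy as np

def chronological_split(series, ratios=(0.6, 0.2, 0.2)):
    # Chronological 6:2:2 split with standard normalization.
    n = len(series)
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    mean, std = train.mean(axis=0), train.std(axis=0) + 1e-8  # training-set statistics (assumed)
    return [(s - mean) / std for s in (train, val, test)]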
5.2 Performance comparisons and model analysis

Effectiveness. Tables 1 and 2 summarize the results of univariate and multivariate forecasting, respectively. LaST achieves state-of-the-art performance against the advanced representation baselines on five real-world datasets. The relative improvements on MSE and MAE are 25.6% and 22.1% against the best representation learning method CoST, and 22.0% and 18.9% against the best end-to-end model Autoformer. We note that Autoformer achieves better performance on long-horizon forecasting on the hourly ETT datasets, and we see two reasons: (1) Transformer-based models intrinsically establish long-range dependencies, which play a crucial role in long-sequence forecasting; (2) it employs a simple decomposition by average pooling with a fixed kernel size, which is more suitable for strongly periodic datasets like hourly ETT. This is beneficial to long-term forecasting but limits the sensitivity to local context, and the bonus does not have a significant impact on the other datasets. Compared with the baselines, LaST extracts the seasonal and trend patterns with adaptively disentangled representations and thus can be applied to intricate time series.

Ablation study. We investigated the performance benefits brought by each mechanism of LaST on a synthetic dataset (generation details are provided in Appendix F.3) and on ETTh1. The results are shown in Table 3 and consist of two groups. M1 validates the mechanisms of the seasonal-trend representation learning framework: "w/o seasonal" and "w/o trend" denote LaST without the seasonal and trend components, respectively; "w/o coe" denotes LaST without the autocorrelation and CORT coefficients when estimating the reconstruction loss. M2 evaluates the introduction and estimation of MI: "w/o lower" and "w/o upper" indicate the removal of the lower and upper bounds for MI in the regularization terms, respectively; "with MINE" denotes that we replace our lower bound with MINE. The results show that all mechanisms improve the performance on the forecasting task. We notice that the quality drops considerably when the trend component is removed. The reason is that seasonal forecasting derives from the iDFT algorithm, which is essentially a periodic repetition of historical observations. Nevertheless, it captures the seasonal patterns and assists the trend component in the complete LaST, bringing the superiority especially in the long-term settings and on the strongly periodic synthetic dataset. Besides, we observe that with the biased MINE regularization term, the performance becomes unstable and sometimes even worse than LaST without the MI lower bound, while our unbiased bound (cf. Eq. (9)) consistently outperforms it.

Representation disentanglement. We visualize the seasonal-trend representations with the t-SNE [58] technique in Figure 2. We also visualize the embeddings in the last layer of the Autoformer decoder as a comparison. The points with the same color show clearer and closer clustering in LaST, while they mix together without the decomposition mechanisms ("w/o dec" indicates removal of the two decomposition mechanisms: the autocorrelation and CORT coefficients, and the upper bound on MI). Notably, although Autoformer with its simple moving-average block achieves a satisfying decomposition from the time series perspective, its representations are still prone to entanglement. These results suggest that (1) learning disentangled seasonal-trend representations is not trivial, and (2) the proposed decomposition mechanisms successfully disentangle the seasonal-trend representations in the latent space, each paying attention to a specific temporal pattern.

Input settings. We further investigate the influence of the input length hyperparameter to validate the sensitivity; Table 4 shows the results. A long look-back window improves the performance, especially in long-term forecasting, while for others it even causes performance degradation.
This verifies that LaST can effectively utilize past information to understand patterns and make predictions.

Figure 2: Visualizations of seasonal (red) and trend (blue) representations on the ETTh1 dataset (panels: LaST, LaST w/o dec, and Autoformer).

Figure 3: Top: learned seasonality visualizations (autocorrelation statistics of reconstructed seasonal sequences). Bottom: seasonal (red) and trend (blue) reconstructions against the ground truths (black). (a) ETTh1 (b) ETTm1 (c) Exchange.

Observations from a case study. We further validate LaST by visualizing the extracted seasonality and trend in specific cases. As shown in Figure 3, LaST can capture the seasonal patterns of real-world datasets. For example, a strong daily period is indicated on the hourly and 15-minute ETT datasets. Even though the period of the Exchange dataset is not obvious, LaST still identifies some long-term periods in the daily data. Besides, the trend and seasonal components jointly and accurately restore the original sequence, each from its own perspective, which supports that LaST can produce workable disentangled representations for intricate time series.

6 Conclusion

We presented LaST, a disentangled variational inference framework with mutual information constraints that disassociates a couple of seasonal-trend representations in the latent space, for effective forecasting of time series. Our extensive experiments demonstrated that LaST successfully disentangles the seasonal-trend representations and achieves state-of-the-art performance. Our future work will focus on tackling other challenging downstream tasks in the time series domain, e.g., generation and imputation. In addition, we plan to model stochastic factors explicitly in the decomposition strategies, which will lead to a better understanding of real-world time series.

Acknowledgments and Disclosure of Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072077 and No. 62176043), the Natural Science Foundation of Sichuan Province (Grant No. 2022NSFSC0505), and the National Science Foundation SWIFT (Grant No. 2030249).

Checklist

1. For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] (b) Did you describe the limitations of your work? [No] (c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results... (a) Did you state the full set of assumptions of all theoretical results? [Yes] (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A.
3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Our source code for reproducibility is publicly available on GitHub. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See "Datasets" and "Evaluation setup" in Sec. 5.1. (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] See Appendix F.5. (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix F.2.
4.
If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] See URLs in Sec. 5. (b) Did you mention the license of the assets? [Yes] See folder “license” in supplementary materials. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] Our source code for reproducibility is included in the supplemental material. (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [Yes] See folder “license” in supplementary materials. (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] They do not contain personally identifiable information or offensive content. 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper on time-series forecasting? 2. What are the strengths of the proposed approach, particularly in terms of novelty and technical challenges addressed? 3. What are the weaknesses of the paper, especially regarding complexity and potential robustness concerns? 4. How difficult was it to train and implement the model, considering its intricate architecture?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper presents an approach for learning latent seasonal-trend representations for time-series forecasting, based on variational inference. A seasonal encoder and a trend encoder are used to obtain latent representations. The latent representations are then combined to produce a forecast. The forecast is done via the predictor module, which makes use of the Fourier transform for the seasonal representation and a simple MLP for the trend representation, and then sums them up to obtain the forecast. The reconstruction error cannot be minimized directly, as the true trend and season components are not known; as a result, an autocorrelation distance is used for the seasonality reconstruction, and the temporal correlation is used for the trend reconstruction. For the purpose of disentanglement, the MI between the latent components is minimized. The overall loss function has an ELBO loss, MI maximization between the original time series and each latent component, and MI minimization between the season and trend latent components. Since the optimization is intractable, lower and upper bounds for the MI are derived to help with the optimization. Experiments are performed on standard univariate and multivariate time-series benchmarks, showing LaST achieves state-of-the-art performance. Visualizations show effective disentanglement.
Strengths And Weaknesses
Strengths
- good motivation to expand on decomposition of trend and seasonality
- novel way to do so compared to existing approaches (e.g., variational inference vs. CoST using contrastive learning)
- meaningful loss function (ELBO, reconstruction, predictor)
- identified technical challenges in reconstruction, and proposed a way to address them
- identified challenges in MI optimisation and developed bounds to address them
- good coverage and comprehensive experiments - baselines + datasets
- good ablation / visualizations
- a generally comprehensive paper which is well written
Weaknesses
- superposition of a lot of complex components to make it work, possibly creating uncertainty about the robustness of the model
Questions
How complex was performing this training and getting it to work? To me, it seemed like a daunting task.
Limitations
NA
NIPS
Title
Learning Latent Seasonal-Trend Representations for Time Series Forecasting
Abstract
Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoders in computer vision and by classical time series decomposition, we aim to infer a couple of representations that depict the seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates the representations both from the representations themselves and from the input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments show that LaST achieves state-of-the-art performance on the time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on GitHub (https://github.com/zhycs/LaST).

1 Introduction

Time series forecasting plays a significant role in a plethora of modern applications, ranging from climate analysis [1], energy production [2], and traffic flows [3] to financial markets and various industrial systems [4]. The ubiquity and importance of time series data have recently attracted research efforts resulting in a myriad of deep learning forecasting models [5, 6] improving time series forecasting. Based on advanced techniques such as RNNs and the Transformer [7], these methods usually learn latent representations to epitomize every instant of the signals and then derive forecasting results with a predictor, achieving great progress on forecasting tasks. However, these models have difficulty extracting exact and clear information related to temporal patterns (e.g., seasonality, trend, and level), especially in a supervised end-to-end architecture without any constraint on the representations [8]. As a consequence, efforts have been made to apply variational inference to time series modeling [9, 10, 11], where improved guidance for latent representations in probabilistic form has proved beneficial to downstream time series tasks [12]. However, when various intricately co-evolving constituents exist in a time series, analysis with a single representation results in superficial variables, non-reusable models, and a lack of interpretability, due to the highly entangled nature of neural networks [13, 14]. Thus, while providing efficiency and effectiveness, existing approaches with a single high-dimensional representation sacrifice information utilization and explainability, which may further lead to overfitting and degenerated performance. To address the above limitations and seek a new disentangled time series learning framework, we leverage ideas from the decomposition strategy [15, 16] to split time series data into several components, each of which captures an underlying pattern category. The decomposition assists the analysis process and reveals underlying insights more consistently with human intuition.
This insight motivates us to produce a couple of latent representations that respond to different time series characteristics (in our case, the seasonal and trend), from which we predict the results by formulating the sequence as the sum of these characteristics. The representations should be as independent as possible, to avoid a model prone to feature entanglement, while also carrying sufficient information about the input sequence. Towards that, we propose a novel framework, LaST, to learn Latent Seasonal-Trend representations for time series forecasting. LaST exploits an encoder-decoder architecture and follows variational inference theory [17] to learn a couple of disentangled latent representations that describe the seasonality and trend of a time series. To achieve this goal, LaST enforces the disentanglement of the representations under two constraints: (1) from the input reconstruction, we dissect intrinsic seasonal-trend patterns that can be readily obtained from the raw time series and off-the-shelf measurement methods, and accordingly design a series of auxiliary objectives; (2) from the representations themselves, we minimize the mutual information (MI) between the seasonal and trend representations on the premise of guaranteeing consistency between the input data and each of them. Our main contributions are threefold:
• We start from variational inference and information theory to design the seasonal-trend representation learning and disentanglement mechanisms, and practically demonstrate their effectiveness and superiority (over the existing baselines) on the time series forecasting task.
• We propose LaST, a novel latent seasonal-trend representation learning framework, which encodes the input as disentangled seasonal-trend representations and provides a practicable approach that reconstructs the seasonal and trend parts separately to avoid chaos.
• We introduce MI terms as a penalty and present a novel tractable lower bound and an upper bound for their optimization. The lower bound ameliorates the biased-gradient issue of the prevalent MINE approach and ensures informative representations. The upper bound makes it feasible to further reduce the overlap between the seasonal and trend representations.

2 Related work

Most of the deep learning methods for time series forecasting are designed as end-to-end architectures. Various basic techniques (e.g., residual structures [18, 19], autoregressive networks [20, 21], and convolutions [22, 23]) are exploited to produce expressive non-linear hidden states and embeddings that reflect the temporal dependencies and patterns. There is also a body of works that apply the Transformer [7] structure to time series forecasting tasks [24, 25, 26, 6], aiming to discover relationships across the sequence and focus on the important time points. Deep learning methods have achieved superior performance in comparison to classical algorithms such as ARIMA [27] and VAR [28], and have become prevalent in multiple applications. Learning flexible representations has been demonstrated to be beneficial for downstream tasks by numerous studies [12, 29]. In the time series representation domain, early methods employing variational inference jointly train an encoder and a corresponding decoder that reconstructs the raw signals to learn approximate latent representations [10, 30, 31]. Recent efforts have improved these variational methods [32, 33] by establishing more complex and flexible distributions using techniques such as copulas [32] and normalizing flows [34, 35].
1. What is the focus and contribution of the paper on seasonal time series representation? 2. What are the strengths of the proposed approach, particularly in terms of its originality and quality? 3. What are the weaknesses of the paper regarding its limitations and potential feature degeneration? 4. How does the reviewer assess the clarity and significance of the paper's content? 5. What are some questions raised by the reviewer regarding the connection to previous works, the method's boundaries, training and inference time, and potential applications?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this submission, the authors propose to learn disentangled seasonal and trend representations of seasonal time series. In particular, in contrast to previous methods that use average pooling for the trend feature, the major contribution here is to directly optimize the two representations by a variational method. Also, a detailed analysis of the loss function and the optimization bounds is provided. Empirical results show strong effectiveness, outperforming several strong baselines. The following analysis also supports the motivation.
Strengths And Weaknesses
Originality: The proposed method of representation disentanglement is an extension from other fields such as computer vision. However, given the nature of seasonal time series, the application of this method is non-trivial, as is the design of a proper predictor. The bound proof is original in this setting.
Quality: The conduct of the empirical analysis is of high quality. The experiments are sufficient to support the claims (great performance boost). One weakness, as also noticed by the authors, is the potential degeneration of features caused by narrowing down prior and posterior, and the effect is not discussed in detail. As the reviewer is not an expert in theoretical analysis, I would leave this part to other reviewers.
Clarity: The overall presentation is clear, including a good schematic pipeline description and results tables. The development of Theorem 2 is a bit confusing. The authors first claim that L_rec can be estimated without leveraging Xs and Xt, while Eq. 6 still has the two variables. Line 143, what is the "first difference"?
Significance: The reviewer thinks this method provides significant advances in this subfield.
Questions
When discussing the connection to previous papers, the writing should include both the differences and the inheritance. Which components are extended from the previous studies? Also, what are the techniques that can potentially be combined (such as contrastive learning from CoST)? What is the boundary of the method? The drawback of feature degeneration should be discussed in more detail. What are the training and inference times? How do they compare with the baselines?
Limitations
As discussed above, the reviewer encourages the authors to face the general limitation of VI methods and discuss the implications in this setting.
Title Learning Latent Seasonal-Trend Representations for Time Series Forecasting Abstract Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoder in computer vision and classical time series decomposition, we plan to infer a couple of representations that depict seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments prove that LaST achieves state-of-the-art performance on time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github1. 1 Introduction Time series forecasting plays a significant role in plethora of modern applications, ranging from climate analysis [1], energy production [2], traffic flows [3] to financial markets and various industrial systems [4]. The ubiquity and importance of time series data have recently attracted researcher efforts resulting in a myriad of deep learning forecasting models [5, 6] ameliorating the time series forecasting. Based on advanced techniques such as RNN and Transformer [7]), these methods usually learn latent representations to epitomize every instant of the signals, and then derive forecasting results by a predictor, achieving great progress on forecasting tasks. However, these models have difficulties to extract exact/clear information related to temporal patterns (e.g., seasonality, trend, and level), especially in supervised end-to-end architecture without any constraint on representations [8]. As a consequence, efforts have been made to apply the variational inference into time series modeling [9, 10, 11], where improved guidance for latent representations with probabilistic form has been proved beneficial to downstream time series tasks [12]. However, when various intricately co-evolving constituents exist a time series data, analyzing with a single representation will result in superficial variable and models’ non-reusability and lack of interpretability, due to the highly entangled nature of neural networks [13, 14]. Thus, while providing efficiency and effectiveness, existing approaches with a single high-dimensional representation sacrifice the ∗Corresponding author. 1https://github.com/zhycs/LaST 36th Conference on Neural Information Processing Systems (NeurIPS 2022). information utilization and explainability, which may further lead to overfitting and degenerated performance. To address the above limitations and seek a new disentangled time series learning framework, we leverage the ideas from the decomposition strategy [15, 16] to split time series data into several components, each of which captures an underlying pattern category. The decomposition assists the analysis process and reveals underlying insights more consistently with human intuition. 
This insight motivates us to produce a pair of latent representations that correspond to different time series characteristics (in our case, the seasonal and trend components), from which we predict the results by formulating the sequence as the sum of these characteristics. The representations should be as independent as possible to avoid a model prone to feature entanglement, while also carrying sufficient information about the input sequence. Towards that, we propose a novel framework, LaST, to learn Latent Seasonal-Trend representations for time series forecasting. LaST exploits an encoder-decoder architecture and follows variational inference theory [17] to learn a pair of disentangled latent representations that describe the seasonal and trend components of a time series. To achieve this goal, LaST enforces disentanglement of the representations under two constraints: (1) from the input reconstruction, we dissect intrinsic seasonal-trend patterns that can be readily obtained from the raw time series with off-the-shelf measurement methods, and accordingly design a series of auxiliary objectives; (2) from the representations themselves, we minimize the mutual information (MI) between the seasonal and trend representations while guaranteeing the consistency between the input data and each of them. Our main contributions are threefold: • We start from variational inference and information theory to design the seasonal-trend representation learning and disentanglement mechanisms, and practically demonstrate their effectiveness and superiority (over the existing baselines) on the time series forecasting task. • We propose LaST, a novel latent seasonal-trend representation learning framework, which encodes the input as disentangled seasonal-trend representations and provides a practicable approach that reconstructs the seasonal and trend parts separately to avoid chaos. • We introduce MI terms as a penalty and present a novel tractable lower bound and an upper bound for their optimization. The lower bound ameliorates the biased-gradient issue in the prevalent MINE approach and ensures informative representations. The upper bound provides the feasibility to further reduce the overlap of the seasonal-trend representations. 2 Related work Most of the deep learning methods for time series forecasting are designed as end-to-end architectures. Various basic techniques (e.g., residual structures [18, 19], autoregressive networks [20, 21], and convolutions [22, 23]) are exploited to produce expressive non-linear hidden states and embeddings that reflect the temporal dependencies and patterns. There is also a body of work that applies the Transformer [7] structure to time series forecasting tasks [24, 25, 26, 6], aiming to discover relationships across the sequence and focus on the important time points. Deep learning methods have achieved superior performance in comparison to classical algorithms such as ARIMA [27] and VAR [28], and have become prevalent in multiple applications. Learning flexible representations has been demonstrated to be beneficial for downstream tasks by numerous studies [12, 29]. In the time series representation domain, early methods employing variational inference jointly train an encoder and a corresponding decoder that reconstructs the raw signals to learn approximate latent representations [10, 30, 31]. Recent efforts have improved these variational methods [32, 33] by establishing more complex and flexible distributions using techniques such as copulas [32] and normalizing flows [34, 35].
Another group of works exploits the burgeoning contrastive learning to obtain invariant representations from augmented time series [36, 37, 38], which avoids the reconstruction process and improves representations without additional supervision. Time series decomposition [15, 16] is a classical method that splits complex time series into several components to obtain temporal patterns and interpretability. Recent works have applied machine learning and deep learning approaches [39, 40, 41] to robustly and efficiently achieve the decomposition on large-scale datasets. There are also research results that tackle forecasting with the assistance of decomposition. For example, Autoformer [26] decomposes time series into seasonal-trend parts by average pooling and introduces an autocorrelation mechanism to empower the Transformer [7] for better relation discovery. CoST [38] encodes signals into seasonal and trend representations in the frequency and temporal domains, respectively, and introduces contrastive learning to supervise their learning. Different from our work, these methods exploit a simple average-pooling decomposition mechanism, which may impose incompatible periodicity assumptions, or disentangle the representations only intuitively by processing them in different domains. Meanwhile, LaST adaptively epitomizes the seasonality and trend with disentangled representations and boosts their disassociation from a probabilistic perspective in the latent space. 3 Latent seasonal-trend representations learning framework We now formalize the problem definition and introduce the proposed LaST framework. We note that LaST uses seasonal and trend characteristics for disentanglement learning, but our framework can be easily extended to situations that have more than two components to dissociate. Problem definition. Consider a time series dataset D consisting of N i.i.d. sequences denoted as X^{(i)}_{1:T} = \{x^{(i)}_1, x^{(i)}_2, \cdots, x^{(i)}_t, \cdots, x^{(i)}_T\}, where i \in \{1, 2, \ldots, N\} and each x^{(i)}_t is a univariate or multivariate value representing the current observation(s) at time instant t (e.g., price or rainfall). We aim to derive a model that outputs expressive representations Z_{1:T}, suitable for predicting the future sequence(s) Y = \hat{X}_{T+1:T+\tau}. Hereafter, when there is no ambiguity, we omit the superscripts and the subscript 1:T. A model that infers the likelihood between the observation X and the future Y with a latent representation Z can be formulated as follows: P(X, Y) = P(Y|X)P(X) = \int_Z P(Y|Z)P(Z|X)\,dZ \cdot \int_Z P(X|Z)P(Z)\,dZ. (1) From the perspective of variational inference (cf. [17]), the likelihood P(X|Z) is computed with a posterior distribution Q_\phi(Z|X) and maximized through the following evidence lower bound (ELBO): \log P_\Theta(X, Y) \ge \log \int_Z P_\psi(Y|Z) Q_\phi(Z|X)\,dZ + \mathbb{E}_{Q_\phi(Z|X)}[\log P_\theta(X|Z)] - \mathrm{KL}(Q_\phi(Z|X) \,\|\, P(Z)) = \mathcal{L}_{ELBO}, (2) where \Theta, composed of \psi, \phi, and \theta, denotes the learned parameters. However, as pointed out in Sec. 1, this faces an entanglement problem and cannot clearly extract complicated temporal patterns. To ameliorate this limitation, we incorporate the decomposition strategy into LaST, which learns a pair of disentangled representations that depict the seasonal and trend dynamics. Specifically, we formulate the temporal signals X and Y as the sum of seasonal and trend components, i.e., X = X^s + X^t. Accordingly, the latent representation Z is factorized into Z^s and Z^t, assumed to be independent of each other, i.e., P(Z) = P(Z^s)P(Z^t).
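To make the factorized posterior concrete, the following is a minimal PyTorch sketch of one of the two Gaussian encoders used for Q_{\phi_s}(Z^s|X) and Q_{\phi_t}(Z^t|X), with reparameterized sampling and the KL term against the N(0, I) prior. It assumes the single-layer FFN mentioned in the implementation details of Sec. 5.1; class and variable names are illustrative and not taken from the authors' code.

import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """One posterior Q_phi(Z|X), modeled as a diagonal Gaussian over the latent space."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)       # single-layer FFN for the mean
        self.logvar = nn.Linear(in_dim, z_dim)   # single-layer FFN for the log-variance

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)            # reparameterization trick
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()  # KL(Q(Z|X) || N(0, I))
        return z, kl

# Two independent encoders give Zs and Zt, so Q(Zs, Zt | X) = Q(Zs | X) Q(Zt | X),
# matching the factorized prior P(Z) = P(Zs) P(Zt) assumed above.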
Figure 1 illustrates the two parts of the LaST framework: (a) representation learning, which produces disentangled seasonal-trend representations (via separate reconstructions and MI constraints); and (b) prediction based on the learned representations. Theorem 1. With the decomposition strategy, Eq. (2) (i.e., the ELBO) naturally has the following factorized form: \mathcal{L}_{ELBO} = \log \int_{Z^s}\int_{Z^t} P_\psi(Y|Z^s, Z^t)\, Q_{\phi_s,\phi_t}(Z^s, Z^t|X)\,dZ^s dZ^t (predictor) (3) + \mathbb{E}_{Q_{\phi_s}(Z^s|X)}[\log P_{\theta_s}(X^s|Z^s)] + \mathbb{E}_{Q_{\phi_t}(Z^t|X)}[\log P_{\theta_t}(X^t|Z^t)] (reconstruction) (4) - \mathrm{KL}(Q_{\phi_s}(Z^s|X)\,\|\,P(Z^s)) - \mathrm{KL}(Q_{\phi_t}(Z^t|X)\,\|\,P(Z^t)). (KL divergence) (5) The detailed derivation of the above formula is provided in Appendix A.1. The ELBO is split into three main units, i.e., Eqs. (3), (4), and (5). The predictor performs the forecasting and measures its accuracy (e.g., with L1 or L2 losses); the reconstruction and KL-divergence terms serve as regularizers aiming to improve the learned representations. The three units are described in the following. Predictor: The predictor (cf. Eq. (3) and Figure 1(b)) can be regarded as the sum of two independent parts: \log \int_{Z^s} P_{\psi_s}(Y^s|Z^s) Q_{\phi_s}(Z^s|X)\,dZ^s and \log \int_{Z^t} P_{\psi_t}(Y^t|Z^t) Q_{\phi_t}(Z^t|X)\,dZ^t. Here we introduce two specialized approaches that harness the seasonal and trend representations according to their own characteristics. Given the seasonal latent representation Z^s \in \mathbb{R}^{T \times d}, the seasonal predictor first employs the discrete Fourier transform (DFT) to detect the seasonal frequencies, i.e., Z^s_F = \mathrm{DFT}(Z^s) \in \mathbb{C}^{F \times d}, where F = \lfloor (T+1)/2 \rfloor due to the Nyquist theorem [42]. Then, we invert the frequencies back to the temporal domain to extend the representation to the future part, i.e., \tilde{Z}^s = \mathrm{iDFT}(Z^s_F) \in \mathbb{R}^{\tau \times d}. More details of the DFT and iDFT functions can be found in Appendix B.1 (an illustrative sketch of this extension step is given below, after Theorem 2). Given Z^t, the trend predictor applies a feed-forward network (FFN) f: T \to \tau to produce a predictable representation \tilde{Z}^t \in \mathbb{R}^{\tau \times d}. We end the predictor with two FFNs that map \tilde{Z}^s and \tilde{Z}^t into Y^s and Y^t, respectively, and obtain the forecasting result Y as their sum. Reconstruction and KL divergence: Of these two terms, the KL divergence can easily be estimated by Monte Carlo sampling under prior assumptions. Here we adopt the widely used setting in which both priors follow N(0, I) for efficiency; more discussion of our priors can be found in Appendix C. The reconstruction term, in contrast, cannot be measured directly owing to the unknown X^s and X^t. Besides, merging these two terms into \mathbb{E}_{Q_{\phi_s,\phi_t}(Z^s,Z^t|X)}[\log P_{\theta_s,\theta_t}(X|Z^s, Z^t)] would result in chaos, since the decoder would then be prone to reconstruct the intricate time series from either representation. Theorem 2. Under the Gaussian distribution assumption, the reconstruction loss \mathcal{L}_{rec} can be estimated without leveraging X^s and X^t, and Eq. (4) can be replaced with the following formula: \mathcal{L}_{rec} = -\sum_{\kappa=1}^{T-1} \big\| A_{XX}(\kappa) - A_{\hat{X}^s\hat{X}^s}(\kappa) \big\|^2 + \mathrm{CORT}(X, \hat{X}^t) - \big\| \hat{X}^t + \hat{X}^s - X \big\|^2, (6) \mathrm{CORT}(X, \hat{X}^t) = \frac{\sum_{i=1}^{T-1} \Delta X_i \, \Delta \hat{X}^t_i}{\sqrt{\sum_{i=1}^{T-1} (\Delta X_i)^2}\,\sqrt{\sum_{i=1}^{T-1} (\Delta \hat{X}^t_i)^2}}, (7) where A_{XX}(\kappa) = \sum_{i=1}^{T-\kappa} (X_i - \bar{X})(X_{i+\kappa} - \bar{X}) is the autocorrelation coefficient with lag \kappa (we employ an efficient implementation in the frequency domain [43]; details are in Appendix B.2), \mathrm{CORT}(X, \hat{X}^t) is the temporal correlation coefficient, and \Delta X_i = X_i - X_{i-1} is the first difference. The proof is provided in Appendix A.2. According to Eq. (6), the reconstruction loss can now be estimated and, conversely, used to supervise disentangled representation learning.
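The two pieces below are minimal PyTorch sketches, not the authors' implementation. The first shows one way to realize the "extend" step of the seasonal predictor by evaluating the inverse DFT at future time indices, which, as the ablation study later notes, amounts to a periodic extension of the in-window pattern. The second computes the autocorrelation and CORT coefficients of Eqs. (6)-(7). Function names and tensor shapes are assumptions for illustration.

import torch

def extend_seasonal(zs, tau):
    """Extrapolate Zs in R^{T x d} to tau future steps by evaluating its
    Fourier series at out-of-window time indices (a periodic extension)."""
    T, d = zs.shape
    coef = torch.fft.fft(zs, dim=0)                                      # (T, d) complex spectrum
    k = torch.arange(T, dtype=torch.float32).view(1, T, 1)               # frequency indices
    t = torch.arange(T, T + tau, dtype=torch.float32).view(tau, 1, 1)    # future time indices
    basis = torch.exp(2j * torch.pi * k * t / T)                         # (tau, T, 1)
    return (coef.unsqueeze(0) * basis).sum(dim=1).real / T               # (tau, d)

def autocorrelation(x):
    """A_XX(kappa) for kappa = 1..T-1, computed in the frequency domain (cf. Appendix B.2)."""
    x = x - x.mean(dim=0, keepdim=True)
    T = x.size(0)
    f = torch.fft.rfft(x, n=2 * T, dim=0)
    acf = torch.fft.irfft(f * f.conj(), n=2 * T, dim=0)[:T]              # lags 0..T-1
    return acf[1:]                                                       # drop lag 0

def cort(x, xt_hat, eps=1e-8):
    """Temporal correlation coefficient of Eq. (7), based on first differences."""
    dx, dxt = x.diff(dim=0), xt_hat.diff(dim=0)
    return (dx * dxt).sum() / (dx.pow(2).sum().sqrt() * dxt.pow(2).sum().sqrt() + eps)

def neg_reconstruction_loss(x, xs_hat, xt_hat):
    """Negative of L_rec in Eq. (6), i.e., the quantity to be minimized during training."""
    season = (autocorrelation(x) - autocorrelation(xs_hat)).pow(2).sum()
    residual = (xs_hat + xt_hat - x).pow(2).sum()
    return season - cort(x, xt_hat) + residual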
However, we find that this framework still has certain drawbacks: (1) The KL divergence tends to narrow the distance between the posterior and the prior. The model then tends to trade off variational inference against data fit when the modeling capacity is not sufficient to achieve both [44]. The posterior may become almost non-informative about the inputs, which makes the forecasts irrelevant to the observations. (2) The disentanglement of the seasonal-trend representations is boosted only indirectly by the separate reconstructions, whereas we also need to impose a direct constraint on the representations themselves. We alleviate these limitations by introducing additional mutual information regularization terms. Specifically, we increase the mutual information between Z^s, Z^t and X to alleviate the divergence-narrowing problem [44, 45], while decreasing the mutual information between Z^s and Z^t to further dissociate their representations. The objective of LaST to be maximized becomes \mathcal{L}_{LaST} = \mathcal{L}_{ELBO} + I(X, Z^s) + I(X, Z^t) - I(Z^s, Z^t), (8) where I(\cdot, \cdot) denotes the mutual information between two variables. However, the mutual information terms are intractable [46, 47, 48]. We address this problem in the next section. 4 Mutual information bounds for optimization We now derive tractable mutual information bounds for maximizing I(X, Z^s) and I(X, Z^t) and minimizing I(Z^s, Z^t) in Eq. (8), providing a lower and an upper bound for model optimization. Lower bound for I(X, Z^s) or I(X, Z^t). We omit the superscript s or t when analyzing the lower bound. Among the prior approaches exploring lower bounds for MI [49, 50, 51], MINE [51], for example, employs the KL divergence between the joint distribution and the product of marginals and defines an energy-based variational family to achieve a flexible and scalable lower bound. It can be formulated as I(X, Z) \ge \mathbb{E}_{Q_\phi(X,Z)}[\gamma_\alpha(X, Z)] - \log \mathbb{E}_{Q(X)Q_\phi(Z)}[e^{\gamma_\alpha(X,Z)}] = I_{MINE}, where \gamma_\alpha is a learned normalized critic with parameters \alpha. However, this bound suffers from a biased gradient owing to the parametric logarithmic term (see Appendix A.3 for a proof). Inspired by [47], we substitute the logarithmic function with its tangent family to ameliorate the biased bound: I_{MINE} \ge \mathbb{E}_{Q_\phi(X,Z)}[\gamma_\alpha(X, Z)] - \Big(\frac{1}{\eta}\mathbb{E}_{Q(X)Q_\phi(Z)}[e^{\gamma_\alpha(X,Z)}] + \log\eta - 1\Big) \ge \mathbb{E}_{Q_\phi(X,Z)}[\gamma_\alpha(X, Z)] - \frac{1}{\eta}\mathbb{E}_{Q(X)Q_\phi(Z)}[e^{\gamma_\alpha(X,Z)}], (9) where \eta denotes the tangent point. The first inequality relies on the concavity of the logarithm: for any tangent point \eta we have \log u \le u/\eta + \log\eta - 1, so the tangent line upper-bounds the curve and its negation lower-bounds -\log u; the bound is tight when the tangent point coincides with the argument, i.e., with the true value of \mathbb{E}_{Q(X)Q(Z)}[e^{\gamma(X,Z)}]. The closer the tangent point is to this argument, the greater the lower bound. Therefore, we set \eta to the empirical estimate of \mathbb{E}_{Q(X)Q_\phi(Z)}[e^{\gamma_\alpha(X,Z)}], which approximates the argument, to obtain as tight a lower bound as possible. In the second inequality, \gamma_\alpha(x, z) — a critic function activated by a Sigmoid — is limited to [0, 1], so \eta \in [1, e] and thus -(\log\eta - 1) \ge 0. This inequality is tight only if \mathbb{E}_{Q(X)Q_\phi(Z)}[\gamma_\alpha(X, Z)] = 1, which means \gamma_\alpha can discriminate whether a pair of variables (X, Z) is sampled from the joint distribution or from the marginals. As with MINE, this consistency requirement can be addressed via the universal approximation theorem for neural networks [52]. Thus, Eq. (9) provides a flexible and scalable lower bound for I(X, Z) with an unbiased gradient. For the evaluation, we adopt a tractable sampling scheme [53, 51] that draws joint samples (X^{(i)}, Z^{(i)}) from Q(Z^{(i)}|X^{(i)}) P_D(X^{(i)}). As for the marginal Q_\phi(Z), we randomly select a datapoint j and then sample from Q_\phi(Z|X^{(j)}) P_D(X^{(j)}). Details of the optimization are shown in Algorithm 1.
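As a concrete reading of Eq. (9) and of the sampling scheme above, here is a small PyTorch sketch of the batch estimator. The critic is any small MLP with a Sigmoid output, marginal samples come from shuffling z within the batch as in Algorithm 1, and treating the tangent point as a constant (detached) reflects our reading of the unbiased-gradient argument; names and the exact treatment of eta are assumptions, with the authors' code as the reference.

import torch

def mi_lower_bound(critic, x, z):
    """Mini-batch estimate of the lower bound in Eq. (9).
    critic(x, z) is a normalized critic gamma_alpha with values in [0, 1]."""
    joint = critic(x, z).mean()                    # E_{Q(X,Z)}[gamma(X, Z)]
    z_marg = z[torch.randperm(z.size(0))]          # break the pairing -> product of marginals
    exp_marg = torch.exp(critic(x, z_marg))        # e^{gamma(X, Z)} under Q(X) Q(Z)
    eta = exp_marg.mean().detach()                 # tangent point: current estimate, held constant
    return joint - exp_marg.mean() / eta

# The bound is maximized jointly over the encoder and critic parameters,
# so its negation is simply added to the overall training loss.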
Upper bound for I(Z^s, Z^t). Few efforts have been made to explore tractable upper bounds for mutual information [54, 47, 55]. Existing upper bounds (listed in Appendix D.1) are tractable only when the probability density of the joint or conditional distributions — here Q(Z^s|Z^t), Q(Z^t|Z^s), or Q(Z^s, Z^t) — is known. However, these distributions lack interpretability and can hardly be modeled directly, which makes estimation of the above upper bounds intractable. To avoid the direct estimation of unknown probability densities, we introduce an energy-based variational family for Q(Z^s, Z^t) that uses a normalized critic \gamma_\beta(Z^s, Z^t), as in Eq. (9), to establish a tractable upper bound. Specifically, we incorporate the critic \gamma_\beta into the upper bound I_{CLUB} [55] to obtain a tractable Seasonal-Trend Upper Bound (STUB) for I(Z^s, Z^t), defined as: I(Z^s, Z^t) \le \mathbb{E}_{Q(Z^s,Z^t)}[\log Q(Z^s|Z^t)] - \mathbb{E}_{Q(Z^s)Q(Z^t)}[\log Q(Z^s|Z^t)] = I_{CLUB} (10) = \mathbb{E}_{Q_{\phi_s,\phi_t}(Z^s,Z^t)}[\gamma_\beta(Z^s, Z^t)] - \mathbb{E}_{Q_{\phi_s}(Z^s)Q_{\phi_t}(Z^t)}[\gamma_\beta(Z^s, Z^t)] = I_{STUB}. (11) The derivation details of this formula are provided in Appendix D.2. The inequality in Eq. (10) is tight only if Z^s and Z^t are a pair of independent variables [55]. This is exactly a sufficient condition for I_{STUB}, since the MI and Eq. (11) are both zero in the independent case, which is our optimal seasonal-trend disentanglement objective. The critic \gamma_\beta, similarly to \gamma_\alpha, takes on the discriminating responsibility but provides converse scores, constraining the MI toward its minimum. However, Eq. (11) may take negative values while the parameters \beta are being learned, resulting in an invalid upper bound for the MI. To alleviate this problem, we additionally introduce a penalty term \|I^{neg}_{STUB}\|^2 to assist the model optimization, i.e., an L2 loss on the negative part of I_{STUB}. For the evaluation, we use the same sampling manner as for the lower bound; the optimization details are also shown in Algorithm 1. Algorithm 1 An epoch of the optimization of LaST. 1: Initialize the parameters of LaST: \Theta = \{\psi_s, \psi_t, \phi_s, \phi_t, \theta_s, \theta_t\}, \Gamma = \{\alpha_s, \alpha_t, \beta\}. 2: for each mini-batch of size B consisting of \{X^{(i)}, Y^{(i)}\}_{i \in B} in the training set do 3: Draw samples of the latent representations \{Z^{s(i)}\}_{i \in B} and \{Z^{t(i)}\}_{i \in B} from the distributions \{Q_{\phi_s}(Z^s|X^{(i)})\}_{i \in B} and \{Q_{\phi_t}(Z^t|X^{(i)})\}_{i \in B}, respectively; 4: Shuffle \{Z^{s(i)}\}_{i \in B} and \{Z^{t(i)}\}_{i \in B} to form the marginal samples \{Z^{s(j)}\}_{j \in B} and \{Z^{t(j)}\}_{j \in B}; 5: Compute the tangent points: \eta_s \leftarrow \frac{1}{B}\sum_{i=j=1}^{B} e^{\gamma_{\alpha_s}(X^{(i)}, Z^{s(j)})}, \eta_t \leftarrow \frac{1}{B}\sum_{i=j=1}^{B} e^{\gamma_{\alpha_t}(X^{(i)}, Z^{t(j)})}; 6: Update the parameters \Theta with a gradient step G(\nabla_\Theta) on \mathcal{L}_{ELBO} + \frac{1}{B}\sum_{i=j=1}^{B}\big(\gamma_{\alpha_s}(X^{(i)}, Z^{s(i)}) - \frac{1}{\eta_s} e^{\gamma_{\alpha_s}(X^{(i)}, Z^{s(j)})} + \gamma_{\alpha_t}(X^{(i)}, Z^{t(i)}) - \frac{1}{\eta_t} e^{\gamma_{\alpha_t}(X^{(i)}, Z^{t(j)})}\big) - \frac{1}{B}\sum_{i=j=1}^{B}\big(\gamma_\beta(Z^{s(i)}, Z^{t(i)}) - \gamma_\beta(Z^{s(i)}, Z^{t(j)})\big) + \mathrm{average}\big(\|(\gamma_\beta(Z^{s(i)}, Z^{t(i)}) - \gamma_\beta(Z^{s(i)}, Z^{t(j)}))_{neg}\|^2\big); 7: Update the parameters \Gamma with a gradient step G(\nabla_\Gamma) on \frac{1}{B}\sum_{i=j=1}^{B}\big(\gamma_{\alpha_s}(X^{(i)}, Z^{s(i)}) - \frac{1}{\eta_s} e^{\gamma_{\alpha_s}(X^{(i)}, Z^{s(j)})} + \gamma_{\alpha_t}(X^{(i)}, Z^{t(i)}) - \frac{1}{\eta_t} e^{\gamma_{\alpha_t}(X^{(i)}, Z^{t(j)})}\big) - \frac{1}{B}\sum_{i=j=1}^{B}\big(\gamma_\beta(Z^{s(i)}, Z^{t(i)}) - \gamma_\beta(Z^{s(i)}, Z^{t(j)})\big) + \mathrm{average}\big(\|(\gamma_\beta(Z^{s(i)}, Z^{t(i)}) - \gamma_\beta(Z^{s(i)}, Z^{t(j)}))_{neg}\|^2\big); 8: end for 5 Experiments We now present the results of our extensive experimental evaluations comparing LaST with state-of-the-art baselines, and report a series of empirical results along with an ablation study and visualizations of the seasonal-trend representations. Further details and results are provided in Appendix F. 5.1 Settings Datasets and Baselines.
We conducted our experiments on seven real-world benchmark datasets from four categories of mainstream time series forecasting applications: (1) ETT2 [25]: Electricity Transformer Temperature consists of the target value “oil temperature” and six “power load” features, recorded hourly (i.e., ETTh1 and ETTh2) and every 15 minutes (i.e., ETTm1 and ETTm2) over two years. (2) Electricity, from the UCI Machine Learning Repository3 and preprocessed by [56], is composed of the hourly electricity consumption of 321 clients in kWh from 2012 to 2014. (3) Exchange [56], with daily exchange rates of eight countries from 1990 to 2016. (4) Weather4 contains 21 meteorological indicators (e.g., temperature and humidity) and is recorded every 10 minutes in 2020. We compare LaST with the latest state-of-the-art methods on time series modeling and forecasting tasks from two categories: (1) representation learning techniques, including CoST [38], TS2Vec [37], and TNC [36]; (2) end-to-end forecasting models, including VAE-GRU [10], Autoformer [26], Informer [25], and TCN [22]. Further descriptions and settings of these baselines are provided in Appendix F.1. Evaluation setup. Following prior work, we run our model in both univariate and multivariate forecasting settings. In multivariate forecasting, LaST accepts and forecasts all variables in the datasets. In univariate forecasting, LaST only considers a specific feature in each dataset. We employ standard normalization and set the input length T = 201 for all datasets. For the dataset split, we follow a standard protocol that divides each dataset into training, validation, and test sets in chronological order with a ratio of 6:2:2. We report the evaluation results on the test set for the model that achieves the best performance on the validation set. Implementation details. For the network structure of LaST, we use a single-layer fully connected network as the feed-forward network (FFN), which is applied in the modeling of the posterior, the reconstruction, and the predictor. Besides, we employ a 2-layer MLP for the critic \gamma in the MI bound estimations. The dimensions of the seasonal and trend representations are kept consistent; we set them to 32 in univariate forecasting and to 128 in multivariate forecasting. The MAE loss is used to measure the forecasting error derived from the predictor. 2https://github.com/zhouhaoyi/ETDataset 3https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams 4https://www.bgc-jena.mpg.de/wetter For the training strategy, we use the Adam [57] optimizer, and the training process is early-stopped within 10 epochs. We initialize the learning rate to 10^-3 and decay it by a factor of 0.95 every epoch. 5.2 Performance comparisons and model analysis Effectiveness. Tables 1 and 2 summarize the results of univariate and multivariate forecasting, respectively. LaST achieves state-of-the-art performance against the advanced representation baselines on five real-world datasets. The relative improvements in MSE and MAE are 25.6% and 22.1% against the best representation learning method, CoST, and 22.0% and 18.9% against the best end-to-end model, Autoformer.
We note that Autoformer achieves better performance in long-horizon forecasting on the hourly ETT datasets, for which we see two reasons: (1) Transformer-based models intrinsically establish long-range dependencies, which plays a crucial role in long-sequence forecasting; (2) it employs a simple decomposition by average pooling with a fixed kernel size, which is more suitable for strongly periodic datasets like hourly ETT. This is beneficial to long-term forecasting but limits the sensitivity to local context, and the bonus does not have a significant impact on the other datasets. Compared with the baselines, LaST extracts the seasonal and trend patterns adaptively with disentangled representations and thus can be applied to intricate time series. Ablation study. We investigated the performance benefits brought by each mechanism of LaST on a synthetic dataset (generation details are provided in Appendix F.3) and on ETTh1. The results are shown in Table 3 and consist of two groups: M1 validates the mechanisms of the seasonal-trend representation learning framework. In it, “w/o seasonal” and “w/o trend” denote LaST without the seasonal and trend components, respectively; “w/o coe” denotes LaST without the autocorrelation and CORT coefficients when estimating the reconstruction loss. M2 judges the introduction and estimation of the MI terms, where “w/o lower” and “w/o upper” indicate removal of the lower and upper bounds for MI from the regularization terms, respectively; “with MINE” denotes that we replace our lower bound with MINE. The results show that all mechanisms improve performance on the forecasting task. We notice that the quality drops considerably when the trend component is removed. The reason is that seasonal forecasting derives from the iDFT algorithm, which is essentially a periodic repetition of historical observations. Nevertheless, it captures the seasonal patterns and, in the complete LaST, assists the trend component in achieving the observed superiority, especially in the long-term settings and on the strongly periodic synthetic dataset. Besides, we observe that with the biased MINE regularization term, the performance becomes unstable and is sometimes even worse than LaST without the MI lower bound, while our unbiased bound (cf. Eq. (9)) consistently outperforms it. Representation disentanglement. We visualize the seasonal-trend representations with the t-SNE [58] technique in Figure 2. We also visualize the embeddings in the last layer of the Autoformer decoder for comparison. The points with the same color form clearer and tighter clusters in LaST, while they mix together without the decomposition mechanisms (“w/o dec” indicates removal of the two decomposition mechanisms: the autocorrelation and CORT coefficients, and the upper bound on MI). Notably, although Autoformer with its simple moving-average block achieves a satisfying decomposition from the time series perspective, its representations are still prone to entanglement. These results suggest that (1) learning disentangled seasonal-trend representations is not trivial, and (2) the proposed decomposition mechanisms successfully disentangle the seasonal-trend representations in the latent space, each paying attention to a specific temporal pattern. Input settings. We further investigate the influence of the input-length hyperparameter to validate sensitivity, and Table 4 shows the results. A long look-back window improves LaST's performance, especially in long-term forecasting, while other models even suffer performance degradation.
This verifies that LaST can effectively utilize past information to understand patterns and make predictions. Figure 2: Visualizations of seasonal (red) and trend (blue) representations on the ETTh1 dataset; panels compare LaST, LaST w/o dec, and Autoformer. Figure 3: (a) ETTh1, (b) ETTm1, (c) Exchange. Top: learned seasonality visualizations (autocorrelation statistics of the reconstructed seasonal sequences). Bottom: seasonal (red) and trend (blue) reconstructions against the ground truth (black). Observations from a case study. We further validate LaST by visualizing the extracted seasonality and trend in specific cases. As shown in Figure 3, LaST can capture the seasonal patterns of real-world datasets. For example, a strong daily period is indicated on the hourly and 15-minute ETT datasets. Even though the period of the Exchange dataset is not obvious, LaST still uncovers some long-term periods in the daily data. Besides, the trend and seasonal components jointly and accurately restore the original sequence from their own perspectives, which supports that LaST can produce workable disentangled representations for intricate time series. 6 Conclusion We presented LaST, a disentangled variational inference framework with mutual information constraints that disassociates a pair of seasonal-trend representations in the latent space for effective time series forecasting. Our extensive experiments demonstrated that LaST successfully disentangles the seasonal-trend representations and achieves state-of-the-art performance. Our future work will focus on tackling other challenging downstream tasks in the time series domain, e.g., generation and imputation. In addition, we plan to model stochastic factors explicitly in the decomposition strategies, which will lead to a better understanding of real-world time series. Acknowledgments and Disclosure of Funding This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072077 and No. 62176043), the Natural Science Foundation of Sichuan Province (Grant No. 2022NSFSC0505), and the National Science Foundation SWIFT (Grant No. 2030249). Checklist 1. For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] (b) Did you describe the limitations of your work? [No] (c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] 2. If you are including theoretical results... (a) Did you state the full set of assumptions of all theoretical results? [Yes] (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A. 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Our source code for reproducibility is publicly available on GitHub. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See “Datasets” and “Evaluation setup” in Sec. 5.1. (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] See Appendix F.5. (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix F.2. 4.
If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] See URLs in Sec. 5. (b) Did you mention the license of the assets? [Yes] See folder “license” in supplementary materials. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] Our source code for reproducibility is included in the supplemental material. (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [Yes] See folder “license” in supplementary materials. (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] They do not contain personally identifiable information or offensive content. 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper on time series forecasting? 2. What are the strengths and weaknesses of the proposed LaST method, particularly regarding its loss function, organization, and performance? 3. Do you have any concerns or suggestions regarding the proof and minimization process in the paper? 4. How does the reviewer assess the necessity and effectiveness of learning latent representations in the paper? 5. What are the differences between the autocorrelation calculation used in Equation 6 and the autocorrelation calculation method in Autoformer? 6. How does the reviewer evaluate the visualization in Figure 2, and how does it compare to Autoformer? 7. What are the reviewer's questions or concerns regarding the predictor in Figure 1(b), especially regarding the "extend" part? 8. Does the paper adequately discuss the limitations of the work, such as the convergence property and stochasticity of time series?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper focuses on the time series forecasting task and presents a decomposition method, LaST, to learn latent representations. The decomposition method is derived from variational inference and mutual information. Along with a predictor, LaST can achieve competitive performance on many benchmarks. Strengths And Weaknesses Strengths The loss function for time series decomposition is reasonable and novel. This paper is well-organized and clear. The model performance is competitive and comes with detailed analysis. Weaknesses The ‘proof’ of ‘Theorem’ 2 is too intuitive. The minimization of the gap between the autocorrelations of X and X̂^s is not convincing and will bring noise into the optimization. This process also relies on an underlying assumption that the raw time series becomes periodic after removing the trend, which contradicts the statement in line 89. The same problem also appears in the design of CORT(X, X̂^t). Comparison baselines: I think N-BEATS is a necessary baseline, since both works adopt decomposition and a similar predictor for forecasting (a linear model for the trend and sine/cosine functions for the seasonal part). The most important thing for this paper is to explain the necessity of learning ‘latent’ representations. What about using the moving average of Autoformer for decomposition and adopting the same predictor for prediction? I think this vanilla baseline is also necessary. Questions What is the difference between the autocorrelation calculation used in Eq. (6) and the autocorrelation calculation method in Autoformer? Maybe giving a citation would be better. The visualization in Figure 2 is not surprising. You can conduct the same visualization for Autoformer; I think the simple moving-average block can achieve the representation disentanglement well. You can compare these two decomposition methods. I am a little confused about the predictor (Figure 1(b)) without the supplementary material. Especially in lines 127-128, the inverse DFT cannot change the sequence length. You should give more details about the “extend” step in the main text. It seems the prediction results show an over-smoothing problem (Figure 3). This problem is fatal for predicting the details of time series and can be caused by the design of the predictor. More showcases are required, especially for non-stationary time series. Limitations No, the authors have not discussed any limitations of this work. I think the convergence property can be further analyzed. Also, the stochasticity of time series is neglected in the decomposition, which should also be noticed by the authors.
NIPS
Title Learning Latent Seasonal-Trend Representations for Time Series Forecasting Abstract Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoder in computer vision and classical time series decomposition, we plan to infer a couple of representations that depict seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments prove that LaST achieves state-of-the-art performance on time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github1. 1 Introduction Time series forecasting plays a significant role in plethora of modern applications, ranging from climate analysis [1], energy production [2], traffic flows [3] to financial markets and various industrial systems [4]. The ubiquity and importance of time series data have recently attracted researcher efforts resulting in a myriad of deep learning forecasting models [5, 6] ameliorating the time series forecasting. Based on advanced techniques such as RNN and Transformer [7]), these methods usually learn latent representations to epitomize every instant of the signals, and then derive forecasting results by a predictor, achieving great progress on forecasting tasks. However, these models have difficulties to extract exact/clear information related to temporal patterns (e.g., seasonality, trend, and level), especially in supervised end-to-end architecture without any constraint on representations [8]. As a consequence, efforts have been made to apply the variational inference into time series modeling [9, 10, 11], where improved guidance for latent representations with probabilistic form has been proved beneficial to downstream time series tasks [12]. However, when various intricately co-evolving constituents exist a time series data, analyzing with a single representation will result in superficial variable and models’ non-reusability and lack of interpretability, due to the highly entangled nature of neural networks [13, 14]. Thus, while providing efficiency and effectiveness, existing approaches with a single high-dimensional representation sacrifice the ∗Corresponding author. 1https://github.com/zhycs/LaST 36th Conference on Neural Information Processing Systems (NeurIPS 2022). information utilization and explainability, which may further lead to overfitting and degenerated performance. To address the above limitations and seek a new disentangled time series learning framework, we leverage the ideas from the decomposition strategy [15, 16] to split time series data into several components, each of which captures an underlying pattern category. The decomposition assists the analysis process and reveals underlying insights more consistently with human intuition. 
This insight motivates us to produce a couple of latent representations that respond to different time series characteristics (in our case the seasonal and trend), from which we predict the results by formulating sequence as the sum of these characteristics. The representations should be as independent as possible to avoid a model prone to feature entanglement, while also having sufficient information to the input sequence. Towards that, we propose a novel framework LaST to learn the Latent Seasonal-Trend representations for time series forecasting. LaST exploits an encoder-decoder architecture and follows variational inference theory [17] to learn a couple of disentangled latent representations that describe seasonal and trend of time series. To achieve this goal, LaST enforces the representations into disentanglement under two constraints: (1) from the input reconstruction, we dissect intrinsic seasonal-trend patterns which can be readily obtained from raw time series and off-the-shelf measurement methods, and accordingly design a series of auxiliary objectives; (2) from representations themselves, we minimize the mutual information (MI) between seasonal and trend representations on the premise that grantee the consistency between input data and each of them. Our main contributions are threefold: • We start with the variational inference and information theory to design the seasonal-trend representations learning and disentanglement mechanisms, and practically demonstrate their effectiveness and superiority (over the existing baselines) on time series forecasting task. • We propose LaST, a novel latent seasonal-trend representations learning framework, which encodes input as disentangled seasonal-trend representations and provides a practicable approach that reconstructs seasonal and trend separately to avoid chaos. • We introduce MI terms as a penalty and present a novel tractable lower bound and an upper bound for their optimizations. The lower bound ameliorates the biased gradient issue in prevalent MINE approach and ensures informative representations. The upper bound provides the feasibility to further reduce the overlapping of seasonal-trend representations. 2 Related work Most of the deep learning methods for time series forecasting are designed as an end-to-end architecture. Various basic techniques (e.g., residual structure [18, 19], autoregressive network [20, 21], and convolutions [22, 23]) are exploited to produce expressive non-linear hidden states and embeddings that reflect the temporal dependencies and patterns. There is also a body of works that apply Transformer [7] structure into time series forecasting tasks [24, 25, 26, 6], aiming to discover the relationships across the sequence and focus on the important time points. Deep learning methods have achieved superior performance in comparison to the classical algorithms such as ARIMA [27] and VAR [28], and have become prevalent in multiple applications. Learning flexible representations has been demonstrated to be beneficial for downstream tasks by numerous researches [12, 29]. In time series representations domain, early methods, employing the variational inference, jointly train an encoder and corresponding decoder that reconstructs raw signals to learn approximate latent representations [10, 30, 31]. Recent efforts have improve these variational methods [32, 33] by establishing more complex and flexible distributions using techniques such as copula [32] and normalizing flow [34, 35]. 
Another group of works exploited the burgeoning contrastive learning to obtain invariant representations from augmented time series [36, 37, 38], which avoids the reconstruction process and improves representations without additional supervision. Time series decomposition [15, 16] is a classical method that splits complex time series into several components to obtain temporal patterns and interpretability. Recent works have applied machine learning and deep learning approaches [39, 40, 41] to robustly and efficiently achieve the decomposition on large-scale datasets. There are also research results that tackle forecasting with the assistance of decomposition. For example, Autoformer [26] decomposes time series into seasonal-trend parts by average pooling and introduces an autocorrelation mechanism to empower Transformer [7] for better relations discovery. CoST [38] encodes signals into seasonal and trend representations in frequency and temporal domains, respectively, and introduces contrastive learning to supervise their learning. Different from our work, these methods exploit simple average pooling decomposition mechanism which may provide incompatible periodical assumptions, or intuitively disentangle the representations by processing in different domains. Meanwhile, LaST adaptively epitomizes the seasonality and trend by disentangled representations and boosts their disassociation from a probabilistic perspective in the latent space. 3 Latent seasonal-trend representations learning framework We now formalize the problem definition and introduce the proposed LaST framework. We note that in LaST we use seasonal and trend characteristics for disentanglement learning but our framework can be easily extended to adapt to situations that have more than two components to dissociate. Problem definition. Consider a time series dataset D consisting of N i.i.d. sequences denoted as X(i)1:T = {x (i) 1 , x (i) 2 , · · · , x (i) t , · · · , x (i) T }, where i ∈ {1, 2, . . . , N}, and each x (i) t is univariate or multivariate value representing the current observation(s) at time instant t (e.g., price and rainfall). We aim to derive a model that outputs expressive representations Z1:T , suitable for predicting future sequence(s) Y = X̂T+1:T+τ . Hereafter, when there is no ambiguity we omit the superscripts and the subscript 1:T . A model that infers the likelihood between observation X and future Y with latent representation Z can be formulated as follows: P (X,Y ) = P (Y |X)P (X) = ∫ Z P (Y |Z)P (Z|X)dZ ∫ Z P (X|Z)P (Z)dZ. (1) From the perspective of variational inference (cf. [17]), the likelihood P (X|Z) is calculated by a posterior distribution Qϕ(Z|X) and maximized by the following evidence lower bound (ELBO): logPΘ(X,Y ) ≥ log ∫ Z Pψ(Y |Z)Qϕ(Z|X)dZ + EQϕ(Z|X)[logPθ(X|Z)] −KL(Qϕ(Z|X)||P (Z)) = LELBO, (2) where Θ is composed of ψ, ϕ, and θ denotes learned parameters. However, as pointed out in Sec. 1, this faces an entanglement problem and cannot clearly extract complicated temporal patterns. To ameliorate this limitation, we incorporate the decomposition strategy into our LaST that learns a couple of disentangled representations to depict seasonal and trend dynamics. Specifically, we formulate the temporal signals X and Y as the sum of seasonal and trend components, i.e., X = Xs +Xt. Accordingly, the latent representation Z is factorized into Zs and Zt, assumed to be independent of each other – i.e., P (Z) = P (Zs)P (Zt). 
Figure 1 illustrates the two parts of the LaST framework: (a) representations learning, producing disentangles seasonal-trend representations (separate reconstructions and MI constraints); and (b) prediction based on learned representations. Theorem 1. With the decomposition strategy, Eq. (2) (i.e., the ELBO) naturally has the following factorized form: LELBO = log ∫ Zs ∫ Zt Pψ(Y |Zs, Zt)Qϕs,ϕt(Zs, Zt|X)dZsdZt (predictor) (3) + EQϕs (Zs|X)[logPθs(X s|Zs)] + EQϕt (Zt|X)[logPθt(X t|Zt)] (reconstruction) (4) −KL(Qϕs(Zs|X)||P (Zs))−KL(Qϕt(Zt|X)||P (Zt)). (KL divergence) (5) The detailed inference process of the above formula is provided in Appendix A.1. The ELBO is split into three main units, i.e., Eqs. (3), (4), and (5). The predictor makes forecasting and measures the accuracy (e.g,. L1 or L2 losses), reconstruction and KL divergence are served as regularization terms aiming to improve the learned representations. The three units are described in the following. Predictor: The predictor (cf. Eq. (3) and Figure 1(b)) can be regarded as the sum of two independent parts: log ∫ Zs Pψs(Y s|Zs)Qϕs(Zs|X)dZs and log ∫ Zt Pψt(Y t|Zt)Qϕt(Zt|X)dZt. Here we introduce two specialized approaches to harness the seasonal-trend representations combining their own characteristics. Given the seasonal latent representation Zs ∈ RT×d, the seasonal predictor first employs the discrete Fourier transform (DFT) algorithm to detect the seasonal frequencies, i.e., ZsF = DFT(Z s) ∈ CF×d, where F = ⌊T+12 ⌋ due to the Nyquist theorem [42]. Then, we inverse the frequencies back to the temporal domain to extend the representation to the future part, i.e., Z̃s = iDFT(ZsF ) ∈ Rτ×d. More details of the DFT and iDFT functions can be found in Appendix B.1. Given Zt, trend predictor provides a feed forward network (FFN) f : T → τ to produce a predictable representation Z̃t ∈ Rτ×d. We end the predictor with two FFNs to map Z̃s and Z̃t into Y s and Y t, respectively, and obtain the forecasting result Y by their sum. Reconstruction and KL divergence: Among these two terms, the KL divergence can be easily estimated by Monte Carlo sampling with prior assumptions. Here we take a widely used setting that priors both follow N (0, I) for efficiency, more discussions of our priors can be found in Appendix C. As for reconstruction term, it cannot be directly measured owing to the unknown Xs and Xt. Besides, merging these two terms into EQϕs,ϕt (Zs,Zt|X)[logPθs,θt(X|Z s, Zt)] will result in chaos since the decoder is prone to reconstruct the intricate time series from every representation. Theorem 2. With the Gaussian distribution assumption, the reconstruction loss Lrec can be estimated without leveraging Xs and Xt, and Eq. (4) can be replaced with the following formula: Lrec = − T−1∑ κ=1 ∥∥AXX(κ)−AX̂sX̂s(κ)∥∥2 + CORT(X, X̂t)− ∥∥∥X̂t + X̂s −X∥∥∥2 , (6) CORT(X, X̂t) = ∑T−1 i=1 ∆X t i∆X̂ t i√∑T−1 i=1 ∆X t √∑T−1 i=1 ∆X̂ t , (7) where AXX(κ) = ∑T−κ i=1 (Xt − X̄)(Xt+κ − X̄) is the autocorrelation coefficient with lagged value κ (we employ an efficient implementation in frequency domain [43], details are in Appendix B.2), CORT(X, X̂t) is the temporal correlation coefficient, and ∆Xi = Xi −Xi−1 is the first difference. The proof is provided in Appendix A.2. According to Eq. (6), the reconstruction loss now can be estimated and, conversely, used to supervise disentangled representation learning. However, we find that this framework still holds certain drawbacks: (1) The KL divergence tends to narrow down the distance between posterior and prior. 
The modeling choice tends to sacrifice the variational inference vs. data fit when modeling capacity is not sufficient to achieve both [44]. The posterior may become almost non-informative for the inputs, which causes the forecastings irrelevant to the observations. (2) The disentanglement of the seasonal-trend representations is boosted indirectly by the separate reconstruction, where we need to impose a direct constraint on the representations themselves. We alleviate these limitations by introducing additional mutual information regularization terms. Specifically, we increase the mutual information between Zs, Zt and X to alleviate the divergence narrowing problem [44, 45], while decreasing mutual information between Zs and Zt to further dissociate their representations. The maximizing objective of LaST becomes LLaST = LELBO + I(X,Zs) + I(X,Zt)− I(Zs, Zt), (8) where I(·, ·) denotes the mutual information between two representations. However, the two mutual information terms are untraceable [46, 47, 48]. We address this problem in the next section. 4 Mutual information bounds for optimization We now address the traceable mutual information bounds, maximizing I(X,Zs) and I(X,Zt), and minimizing I(Zs, Zt) in Eq. (8), and provide lower and upper bounds for the model optimization. Lower bound for I(X,Zs) or I(X,Zt). We omit the superscript s or t when analyzing lower bound. Among the prior approaches exploring the lower bounds for MI [49, 50, 51], MINE [51], for example, employs KL divergence between the joint distribution and marginals and defines an energybased variational family to achieve a flexible and scalable lower bound. This can be formulated as I(X,Z) ≥ EQϕ(X,Z)[γα(X,Z)] − logEQ(x)Qϕ(z)[eγα(X,Z)] = IMINE , where γα is a learned normalized critic with parameters α. However, this bound suffers from the biased gradient owing to the parametric logarithmic term (see Appendix A.3 for proof). Inspired by [47], we substitute the logarithmic function by its tangent family to ameliorate the above biased bound: IMINE ≥ EQϕ(X,Z)[γα(X,Z)]− ( 1 η EQ(x)Qϕ(z)[e γα(X,Z)] + log η − 1) ≥ EQϕ(X,Z)[γα(X,Z)]− 1 η EQ(x)Qϕ(z)[e γα(X,Z)], (9) where η denotes the different tangent points. The first inequality relies on the concave negative logarithmic function – the values on the curve are upper bounds for that on the tangent line, and is tight when the tangent point overlaps the independent variable, i.e., the true value of EQ(x)Q(z)[eγ(X,Z)]. The closer the distance between tangent point and independent variable, the greater the lower bound. Therefore, we set η as the variational term EQ(x)Qϕ(z)[eγα(X,Z)] that estimates the independent variable to obtain as great lower bound as possible. In the second inequality, γα(x, z) – a critic function activated by Sigmoid – is limited within [0, 1] and thus −(log η − 1) ≥ 0. This inequality is tight only if EQ(x)Qϕ(z)[γα(X,Z)] = 1, which means γα can discriminate whether a pair of variables (X,Z) is sampled from the joint distribution or marginals. Similarly to MINE, this consistency problem can be addressed by the universal approximation theorem for neural networks [52]. Thus, Eq. (9) provides a flexible and scalable lower bound for I(X,Z) with an unbiased gradient. For the evaluation, we exploit a traceable manner [53, 51] that draws joint samples (X(i), Z(i)) by Q(Z(i)|X(i))PD(X(i)). As for the marginal Qϕ(Z), we randomly select a datapoint j and then sample it from Qϕ(Z|X(j))PD(X(j)). Details of the optimization are shown in Algorithm 1. 
Upper bound for I(Zs, Zt). Few efforts have been made that explore the traceable upper bound for mutual information [54, 47, 55]. Existing upper bounds (listed in Appendix D.1) are traceable with known probabilistic density of joint or conditional distributions here being Q(Zs|Zt), Q(Zt|Zs) or Q(Zs, Zt). However, these distributions lack interpretability and can hardly be directly modeled, which leads to untraceable estimations of the above upper bounds. To avoid the direct estimation of unknown probabilistic densities, we introduce an energy-based variational family for Q(Zs, Zt) that uses a normalized critic γβ(Zs, Zt) like Eq. (9) to establish a traceable upper bound. Specifically, we incorporate the critic γβ into the upper bound ICLUB [55] to obtain a traceable Seasonal-Trend Upper Bound (STUB) for I(Zs, Zt), which is defined as: I(Zs, Zt) ≤ EQ(Zs,Zt)[logQ(Zs|Zt)]− EQ(Zs)Q(Zt)[logQ(Zs|Zt)] = ICLUB (10) = EQϕs,ϕt (Zs,Zt)[γβ(Z s, Zt)]− EQϕs (Zs)Qϕt (Zt)[γβ(Z s, Zt)] = ISTUB. (11) The derivation details of this formula are provided in Appendix D.2. The inequality in Eq. (10) is tight only if Zs and Zt are a pair of independent variables [55]. This is exactly a sufficient condition for ISTUB, since MI and Eq. (11) are both zeros on the independent situation, which is our seasonaltrend disentanglement optimal objective. The critic γβ , similar to γα, takes on the discriminating responsibility but provides converse scores, constraining the MI to a minimum. However, Eq. (11) may get negative values during the learning of parameter β, resulting an invalid upper bound for MI. To alleviate this problem, we additionally introduce a penalty term ∥InegSTUB∥ 2 to assist the model optimization, which is an L2 loss of the negative parts in ISTUB. For the evaluation, we take the same sampling manner as the one in the lower bound and optimization details are also shown in Algorithm 1. Algorithm 1 An epoch of the optimization of LaST. 1: Initialize the parameters of LaST: Θ = {ψs, ψt, ϕs, ϕt, θs, θt}, Γ = {αs, αt, β}. 2: for a mini-batch with size B consisting of {X(i), Y (i)}i∈B in training set do 3: Get samples of the latent representations {Zs(i)}i∈B and {Zt(i)}i∈B from distributions {Qϕs(Zs|X(i))}i∈B and {Qϕt(Zt|X(i))}i∈B, respectively; 4: Shuffle the {Zs(i)}i∈B and {Zt(i)}i∈B and form {Zs(j)}j∈B and {Zt(j)}j∈B, respectively; 5: Compute the ηs, ηt: ηs ← 1B ∑B i=j=1 e γαs (X (i),Zs(j)), ηt ← 1B ∑B i=j=1 e γαt (X (i),Zt(j)); 6: Update parameters Θ: Θ ← G(∇Θ)[LELBO + 1B ∑B i=j=1(γαs(X (i), Zs(i)) − 1 η e γαs (X (i),Zs(j)) + γαt(X (i), Zt(i)) − 1η e γαt (X (i),Zt(j))) − 1B ∑B i=j=1(γβ(Z s(i), Zt(i)) − γβ(Z s(i), Zt(j))) + average( ∥∥(γβ(Zs(i), Zt(i))− γβ(Zs(i), Zt(j)))neg∥∥2)]; 7: Update parameters Γ: Γ ← G(∇Γ)[ 1B ∑B i=j=1(γαs(X (i), Zs(i)) − 1η e γαs (X (i),Zs(j)) + γαt(X (i), Zt(i)) − 1η e γαt (X (i),Zt(j))) − 1B ∑B i=j=1(γβ(Z s(i), Zt(i)) − γβ(Zs(i), Zt(j)) + average( ∥∥(γβ(Zs(i), Zt(i))− γβ(Zs(i), Zt(j)))neg∥∥2)]; 8: end for 5 Experiments We now present the results of our extensive experimental evaluations comparing LaST with state-ofthe-art baselines and report a series of empirical results, along with ablation study and visualizations of seasonal-trend representations. Further details and results are provided in Appendix F. 5.1 Settings Datasets and Baselines. 
We conducted our experiments on seven real-world benchmark datasets from four categories of mainstream time series forecasting applications: (1) ETT 2[25]: Electricity Transformer Temperature consists of the target value “oil temperature” and six “power load” features, recorded hourly (i.e., ETTh1 and ETTh2) and every 15 minutes (i.e., ETTm1 and ETTm2) over two years. (2) Electricity, from the UCI Machine Learning Repository 3 and preprocessed by [56], is composed of the hourly electricity consumption of 321 clients in kWh from 2012 to 2014. (3) Exchange [56] with daily exchange rates of eight countries from 1990 to 2016. (4) Weather 4 contains 21 meteorological indicators (e.g., temperature and humidity) and is recorded every 10 minutes in 2020. We compare our LaST with the latest state-of-the-art methods on time series modeling and forecasting tasks from two categories: (1) representation learning techniques, including COST [38], TS2Vec [37], and TNC [36]; (2) end-to-end forecasting models, including VAE-GRU [10], Autoformer [26], Informer [25], and TCN [22]. Further descriptions and settings of these baselines are provided in appendix F.1. Evaluation setup. Following the prior work, we run our model on both univariate and multivariate forecasting settings. In multivariate forecasting, LaST accepts and forecasts all variables in datasets. In univariate forecasting, LaST only considers a specific feature in each dataset. We employ the standard normalization and set input length T = 201 for all datasets. For the dataset split, we follow a standard protocol that categorizes all datasets into training, validation, and test set in chronological order by the ratio of 6:2:2 for all datasets. We report the evaluation results on the test set while the model achieves the best performance on the validation set. Implementation details. As for the network structure of LaST, we use a single-layer fully connected network as the feed forward network (FFN), which is applied in the modeling of posterior, reconstruction, and predictor. Besides, we employ the 2-layer MLP for the critic γ in MI bound estimations. Dimensions of seasonal and trend representations are consistent. We set them as 32 in univariate forecasting and as 128 in multivariate forecasting on other datasets. MAE loss is used to measure the 2https://github.com/zhouhaoyi/ETDataset 3https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams 4https://www.bgc-jena.mpg.de/wetter forecasting derived from the predictor. For the training strategy, we use the Adam [57] optimizer, and training process is early stopped within 10 epochs. We initialize the learning rate with 10-3 and decay it with 0.95 weight every epoch. 5.2 Performance comparisons and model analysis Effectiveness. Tables 1 and 2 summarize the results of univariate and multivariate forecastings respectively. LaST achieves state-of-the-art performance against the advanced representation baselines on five real-world datasets. The relative improvements on MSE and MAE are 25.6% and 22.1% against the best representation learning method CoST and are 22.0% and 18.9% against the best end-to-end models Autoformer. 
We note that Autoformer achieves better performance on long horizons forecasting on hourly ETT datasets and think there are two reasons: (1) Transformer-based models intrinsically establish long-range dependencies, which plays a crucial role in long sequence forecasting; (2) it employs a simple decomposition by average pooling with a fixed kernel size, which is more suitable for strongly periodic datasets like hourly ETT. This phenomenon is beneficial to long-term forecasting but limits the sensitivity to local context, and the bonus does not have significant impact on other datasets. Compared with baselines, LaST extracts the seasonal and trend patterns with disentangled representations adaptively and thus can be applied to intricate time series. Ablation study. We investigated the performance benefits brought by each mechanism of LaST on a synthetic dataset (generation details are provided in appendix F.3) and ETTh1. The results are shown in Table 3, consisting of two groups: M1 validates the mechanisms of seasonal-trend representations learning framework. In it, “w/o seasonal” and “w/o trend” denote LaST without the seasonal and trend components respectively; “w/o coe” denotes LaST without autocorrelation and CORT coefficients while estimating the reconstruction loss. M2 judges the introduction and estimations of MI, where“w/o lower” and “w/o upper” indicate the removal of the lower and upper bounds for MI in regularization terms respectively; “with MINE” denotes that we replace our lower bound with MINE. The results show that all mechanisms improve the performance on the forecasting task. We notice that the quality drops a lot when removing the trend component. The reason is that seasonal forecasting derives from the iDFT algorithm, which is essentially a periodical repetition of historical observations. However, it captures the seasonal patterns and assists the trend component in complete LaST to bring the superiority, especially in the long-term settings and strongly periodical synthetic dataset. Besides, we observe that with biased regularization term MINE, the performance becomes unstable and sometimes even worse than LaST without MI lower bound, while our unbiased bound (cf. Eq.(9)) continuously outperforms it. Representation disentanglement. We visualize the seasonal-trend representations with the tSNE [58] technique in Figure 2. We also visualize the embeddings in last layer of Autoformer decoder as a comparison. The points with same color have a clearer and closer clustering in LaST, while they mix together without decomposition mechanisms (“w/o dec” indicates removal of the two decomposition mechanisms (autocorrelation and CORT coefficients, and the upper bound to MI). Notably, though Autofomer with the simple moving average block achieves satisfying decomposition from the time series perspective, their representations are still prone to entanglement. These results suggest that (1) learning disentangled seasonal-trend representations is not trivial, and (2) the proposed decomposition mechanisms successfully disentangle the seasonal-trend representations in latent space, each paying attention to a specific temporal pattern. Input settings. We further investigate the influence of hyperparameter input length to validate the sensitivity and Table 4 shows the results. Long look-back window improves the performance especially in long-term forecasting, while others even have performance degradation. 
This verifies that LaST can effectively utilize past information to understand patterns and make predictions.
Figure 2: Visualizations of seasonal (red) and trend (blue) representations on the ETTh1 dataset (panels: LaST, LaST w/o dec, Autoformer).
Figure 3: (a) ETTh1, (b) ETTm1, (c) Exchange. Top: learned seasonality visualizations (autocorrelation statistics of reconstructed seasonal sequences). Bottom: seasonal (red) and trend (blue) reconstructions against the ground truths (black).
Observations from a case study. We further validate LaST by visualizing the extracted seasonality and trend in specific cases. As shown in Figure 3, LaST can capture the seasonal patterns on real-world datasets. For example, a strong daily period is indicated on the hourly and 15-minute ETT datasets. Even though the period on the Exchange dataset is not obvious, LaST still identifies some long-term periods in the daily data. Besides, the trend and seasonal components jointly and accurately restore the original sequence, each from its own perspective, which supports that LaST can produce workable disentangled representations for intricate time series.
6 Conclusion
We presented LaST, a disentangled variational inference framework with mutual information constraints that disassociates a pair of seasonal and trend representations in latent space for effective time series forecasting. Our extensive experiments demonstrated that LaST successfully disentangles the seasonal-trend representations and achieves state-of-the-art performance. Our future work will focus on tackling other challenging downstream tasks in the time series domain, e.g., generation and imputation. In addition, we plan to model stochastic factors explicitly in the decomposition strategies, which will lead to a better understanding of real-world time series.
Acknowledgments and Disclosure of Funding
This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072077 and No. 62176043), the Natural Science Foundation of Sichuan Province (Grant No. 2022NSFSC0505), and the National Science Foundation SWIFT program (Grant No. 2030249).
Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [No]
(c) Did you discuss any potential negative societal impacts of your work? [No]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Our source code for reproducibility is publicly available at GitHub.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See "Datasets" and "Evaluation setup" in Sec. 5.1.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] See Appendix F.5.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix F.2.
4.
If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] See URLs in Sec. 5. (b) Did you mention the license of the assets? [Yes] See folder “license” in supplementary materials. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] Our source code for reproducibility is included in the supplemental material. (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [Yes] See folder “license” in supplementary materials. (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] They do not contain personally identifiable information or offensive content. 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper on disentangling seasonal-trend representations? 2. What are the strengths of the proposed approach, particularly in terms of using trend and seasonal data as reconstructed targets? 3. What are the weaknesses of the paper, especially regarding the comparison with other approaches and the limitation of the proposed method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the extraction of trend and seasonal data, and the comparison with other disentangled representation learning methods for sequential data?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors propose a VAE-based method, LaST, to disentangle the seasonal and trend representations of sequential data. LaST uses the trend and seasonal data as reconstruction targets, which forces the model to learn representations dedicated to each target and hence achieves better disentanglement. Finally, a predictor is introduced to guarantee performance on the downstream tasks. Results show that LaST achieves better forecasting performance than other baselines.
Strengths And Weaknesses
Pros:
- The use of the trend and seasonal inputs Xt and Xs forces the model to learn dedicated representations and makes it easier to evaluate the disentanglement.
- The authors provide rigorous mathematical proofs, including the decomposition of the ELBO and the lower and upper bounds for MI optimization.
- The authors provide extensive results on both the prediction performance for downstream tasks and the disentanglement of the trend and seasonal features.
Cons:
- The authors state that "existing approaches with a single high-dimensional representation sacrifice the information utilization and explainability". Models like CoST use different modules to extract trend/seasonal dependencies. The authors fail to address the difference from those models that do not use a single high-dimensional representation.
- Other disentangled representation learning methods for sequential data are not included as baselines.
Questions
Are the trend data Xt and seasonal data Xs extracted from X? If so, how are they extracted?
Can you compare your results with other disentangled representation learning methods for sequential data?
Limitations
The generalisability of LaST might be limited since it requires that the input data form a triplet: time series data X, trend data Xt, and seasonal data Xs. Many sequential datasets, especially real-world ones, often do not have the complete triplet or contain a lot of missing/noisy data. So its generalisability should be further investigated. The authors might need to add more baselines (other disentangled representation learning methods for sequential data).
NIPS
Title
Universal approximation and model compression for radial neural networks
Abstract
We introduce a class of fully-connected neural networks whose activation functions, rather than being pointwise, rescale feature vectors by a function depending only on their norm. We call such networks radial neural networks, extending previous work on rotation equivariant networks that considers rescaling activations in less generality. We prove universal approximation theorems for radial neural networks, including in the more difficult cases of bounded widths and unbounded domains. Our proof techniques are novel, distinct from those in the pointwise case. Additionally, radial neural networks exhibit a rich group of orthogonal change-of-basis symmetries on the vector space of trainable parameters. Factoring out these symmetries leads to a practical lossless model compression algorithm. Optimization of the compressed model by gradient descent is equivalent to projected gradient descent for the full model.
1 Introduction
Inspired by biological neural networks, the theory of artificial neural networks has largely focused on pointwise (or "local") nonlinear layers [46, 14], in which the same function σ : R → R is applied to each coordinate independently:
Rn → Rn, v = (v1, . . . , vn) ↦ (σ(v1), σ(v2), . . . , σ(vn)).   (1.1)
In networks with pointwise nonlinearities, the standard basis vectors in Rn can be interpreted as "neurons" and the nonlinearity as a "neuron activation." Research has generally focused on finding functions σ which lead to more stable training, have less sensitivity to initialization, or are better adapted to certain applications [42, 38, 37, 10, 29]. Many σ have been considered, including sigmoid, ReLU, arctangent, ELU, Swish, and others.
However, by setting aside the biological metaphor, it is possible to consider a much broader class of nonlinearities, which are not necessarily pointwise, but instead depend simultaneously on many coordinates. Freedom from the pointwise assumption allows one to design activations that yield expressive function classes with specific advantages. Additionally, certain choices of non-pointwise activations maximize symmetry in the parameter space of the network, leading to compressibility and other desirable properties.
In this paper, we introduce radial neural networks which employ non-pointwise nonlinearities called radial rescaling activations.
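As a concrete illustration of the distinction (a minimal NumPy sketch, not code from the paper): a pointwise activation applies σ coordinate-by-coordinate as in Eq. (1.1), whereas a radial rescaling multiplies the entire vector by a scalar depending only on its norm, as formalized in Eq. (1.2) below.

```python
import numpy as np

def pointwise(sigma, v):
    # Eq. (1.1): the same scalar function applied to each coordinate independently.
    return np.array([sigma(vi) for vi in v])

def radial_rescale(lam, v):
    # Eq. (1.2): the whole vector rescaled by a scalar that depends only on |v|.
    return lam(np.linalg.norm(v)) * v

v = np.array([3.0, 4.0])                   # |v| = 5
pointwise(np.tanh, v)                      # each coordinate transformed separately
radial_rescale(lambda r: r / (r + 1.0), v) # all coordinates scaled by the same factor
```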
Such networks enjoy several provable properties, including high model compressibility, symmetry in optimization, and universal approximation. Radial rescaling activations are defined by rescaling each vector by a scalar that depends only on the norm of the vector:
ρ : Rn → Rn, v ↦ λ(|v|)v,   (1.2)
where λ is a scalar-valued function of the norm. Whereas in the pointwise setting, only the linear layers mix information between different components of the latent features, for radial rescaling, all coordinates of the activation output vector are affected by all coordinates of the activation input vector. The inherent geometric symmetry of radial rescalings makes them particularly useful for designing equivariant neural networks [55, 47, 56, 57].
We note that radial neural networks constitute a simple and previously unconsidered type of multilayer radial basis functions network [4], namely, one where the number of hidden activation neurons (often denoted N) in each layer is equal to one. Indeed, pre-composing equation 1.2 with a translation and post-composing with a linear map, one obtains a special case of the local linear model extension of a radial basis functions network.
In our first set of main results, we prove that radial neural networks are in fact universal approximators. Specifically, we demonstrate that any asymptotically affine function can be approximated with a radial neural network, suggesting potentially good extrapolation behavior. Moreover, this approximation can be done with bounded width. Our approach to proving these results departs markedly from techniques used in the pointwise case. Additionally, our result is not implied by the universality property of radial basis functions networks in general, and differs in significant ways, particularly in the bounded width property and the approximation of asymptotically affine functions.
In our second set of main results, we exploit parameter space symmetries of radial neural networks to achieve model compression. Using the fact that radial rescaling activations commute with orthogonal transformations, we develop a practical algorithm to systematically factor out orthogonal symmetries via iterated QR decompositions. This leads to another radial neural network with fewer neurons in each hidden layer. The resulting model compression algorithm is lossless: the compressed network and the original network both have the same value of the loss function on any batch of training data.
Furthermore, we prove that the loss of the compressed model after one step of gradient descent is equal to the loss of the original model after one step of projected gradient descent. As explained below, projected gradient descent involves zeroing out certain parameter values after each step of gradient descent.
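Concretely, the zeroing step acts layer by layer as in the following sketch (assuming NumPy; the precise definition of the projection, including the reduced widths n^red, appears in Section 6): the bottom-left block of each weight matrix and the bottom entries of each bias vector are set to zero after a gradient step.

```python
import numpy as np

def project_layer(W_i, b_i, n_red_i, n_red_prev):
    """Sketch of the projection used in projected gradient descent: zero the
    bottom-left (n_i - n_red_i) x n_red_prev block of W_i and the bottom
    (n_i - n_red_i) entries of b_i, applied after a gradient step."""
    W_i, b_i = W_i.copy(), b_i.copy()
    W_i[n_red_i:, :n_red_prev] = 0.0
    b_i[n_red_i:] = 0.0
    return W_i, b_i
```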
Although training the original network may62 result in a lower loss function after fewer epochs, in many cases the compressed network63 takes less time per epoch to train and is faster in reaching a local minimum.64 To summarize, our main contributions are:65 • A formalization of radial neural networks, a new class of neural networks;66 • Universal approximations results for radial neural networks, including: a) approxi-67 mation of asymptotically affine functions, and b) bounded width approximation;68 • Implementation of a lossless compression algorithm for radial neural networks;69 • A theorem providing the precise relationship between gradient descent optimiza-70 tion of the original and compressed networks.71 2 Related work72 Radial rescaling activations. As noted, radial rescaling activations are a special case of the73 activations used in radial basis functions networks [4]. Radial rescaling functions have the74 symmetry property of preserving vector directions, and hence exhibit rotation equivariance.75 Consequently, examples of such functions, such as the squashing nonlinearity and Norm-76 ReLU, feature in the study of rotationally equivariant neural networks [55, 47, 56, 57, 26].77 However, previous works apply the activation only along the channel dimension, and78 consider the orthogonal group O(n) only for n = 2, 3. In contrast, we consider a radial79 rescaling activation across the entire hidden layer, and O(n)-equivariance where n is the80 hidden layer dimension. Our constructions echo the vector neurons formalism [15], in81 which the output of a nonlinearity is a vector rather than a scalar.82 Universal approximation. Neural networks of arbitrary width and sigmoid activations83 have long been known to be universal approximators [14]. Universality can also be achieved84 by bounded width networks with arbitrary depth [36], and generalizes to other activations85 and architectures [24, 60, 43, 50]. While most work has focused on compact domains,86 some recent work also considers non-compact domains [28, 54]. The techniques used for87 pointwise activations do not generalize to radial rescaling activations, where all activation88 output coordinates are affected by all input coordinates. Consequently, individual radial89 neural network approximators of two different functions cannot be easily combined to an90 approximator of the sum of the functions. The standard proof of universal approximation91 for radial basis functions networks requires an unbounded increase the number of hidden92 activation neurons, and hence does not apply to the case of radial neural networks [40].93 Groups and symmetry. Appearances of symmetry in machine learning have generally94 focused on symmetric input and output spaces. Most prominently, equivariant neural95 networks incorporate symmetry as an inductive bias and feature weight-sharing constraints96 based on equivariance. Examples include G-convolution, steerable CNN, and Clebsch-97 Gordon networks [13, 55, 11, 9, 30, 2, 58, 12, 57, 16, 31, 44]. By contrast, our approach to98 radial neural networks does not depend on symmetries of the input domain, output space,99 or feedforward mapping. Instead, we exploit parameter space symmetries and thus obtain100 more general results that apply to domains with no apparent symmetry.101 Model compression. A major goal in machine learning is to find methods to reduce102 the number of trainable parameters, decrease memory usage, or accelerate inference and103 training [8, 61]. 
Our approach toward this goal differs significantly from most existing104 methods in that it is based on the inherent symmetry of network parameter spaces.105 One prior method is weight pruning, which removes redundant weights with little loss106 in accuracy [20, 3, 27]. Pruning can be done during training [18] or at initialization107 [34, 53]. Gradient-based pruning removes weights by estimating the increase in loss resulting108 from their removal [33, 22, 17, 39]. A complementary approach is quantization, which109 decreases the bit depth of weights [59, 25, 19]. Knowledge distillation identifies a small model110 mimicking the performance of a larger model [5, 23, 1]. Matrix Factorization methods replace111 fully connected layers with lower rank or sparse factored tensors [6, 7, 52, 32, 45, 35] and112 can often be applied before training. Our method involves a type of matrix factorization113 based on the QR decomposition; however, rather than aim for rank reduction, we leverage114 this decomposition to reduce hidden widths via change-of-basis operations on the hidden115 representations. Close to our method are lossless compression methods which remove116 stable neurons in ReLU networks [49, 48] or exploit permutation parameter space symmetry117 to remove neurons [51]; our compression instead follows from the symmetries of the radial118 rescaling activation. Finally, the compression results of [26], while conceptually similar to119 ours, are weaker, as (1) the unitary group action is on disjoint layers instead of moving120 through all layers, and (2) the results are only stated for the squashing nonlinearity.121 3 Radial neural networks122 In this section, we define radial rescaling functions and radial neural networks. Let123 h : R→ R be a function. For any n ≥ 1, set:124 h(n) : Rn → Rn h(n)(v) = h(|v|) v|v| for v ̸= 0, and h(n)(0) = 0. A function ρ : Rn → Rn is called a radial rescaling function if125 ρ = h(n) for some piecewise differentiable h : R→ R. Hence, ρ sends each input vector to126 a scalar multiple of itself, and that scalar depends only on the norm of the vector1. It is127 easy to show that radial rescaling functions commute with orthogonal transformations.128 Example 1. (1) Step-ReLU, where h(r) = r if r ≥ 1 and 0 otherwise. In this case, the radial129 rescaling function is given by130 ρ : Rn → Rn, v 7→ v if |v| ≥ 1; v 7→ 0 if |v| < 1 (3.1) (2) The squashing function, where h(r) = r2/(r2 + 1). (3) Shifted ReLU, where h(r) =131 max(0, r − b) for r > 0 and b is a real number. See Figure 2. We refer to [55] and the132 references therein for more examples and discussion of radial functions.133 A radial neural network with L layers consists of a positive integer ni indicating the width of134 each layer i = 0, 1, . . . , L; the trainable parameters, comprising of a matrix Wi ∈ Rni×ni−1135 of weights and a bias vector bi ∈ Rni for each i = 1, . . . , L; and a radial rescaling function136 ρi : Rni → Rni for each i = 1, . . . , L. We refer to the tuple n = (n0, n1, . . . , nL) as the widths137 vector of the neural network. The hidden widths vector is nhid = (n1, n2, . . . , nL−1). The138 feedforward function F : Rn0 → RnL of a radial neural network is defined in the usual way139 as an iterated composition of affine maps and activations. Explicitly, set F0 = idRn0 and140 recursively define the partial feedforward functions for i = 1, . . . , L:141 Fi : Rn0 → Rni , x 7→ ρi (Wi ◦ Fi−1(x) + bi) Then the feedforward function is F = FL. 
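The following is a minimal NumPy sketch (not the paper's implementation) of these definitions: the radial rescaling function h^(n)(v) = h(|v|) v/|v| with h^(n)(0) = 0, the Step-ReLU and squashing activations of Example 1, and the feedforward function F = F_L obtained by alternating affine maps with radial rescalings.

```python
import numpy as np

def radial_activation(h):
    """Return h^(n): v -> h(|v|) * v/|v| for v != 0, and 0 -> 0."""
    def rho(v):
        r = np.linalg.norm(v)
        return h(r) * v / r if r > 0 else np.zeros_like(v)
    return rho

# Example 1: Step-ReLU and the squashing function.
step_relu = radial_activation(lambda r: r if r >= 1.0 else 0.0)
squashing = radial_activation(lambda r: r**2 / (r**2 + 1.0))

def feedforward(weights, biases, activations, x):
    """F(x) = F_L(x), where F_i(x) = rho_i(W_i F_{i-1}(x) + b_i) and F_0 = id."""
    z = x
    for W, b, rho in zip(weights, biases, activations):
        z = rho(W @ z + b)
    return z
```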
Radial neural networks are a special type of142 radial basis functions network; we explain the connection in Appendix F.143 Remark 2. If bi = 0 for all i, then the feedforward function takes the form F(x) = W (µ(x)x)144 where µ : Rn → R is a scalar-valued function and W = WLWL−1 · · ·W1 ∈ RnL×n0 is the145 product of the weight matrices. If any of the biases are non-zero, then the feedforward146 function lacks such a simple form.147 1A function Rn → R that depends only on the norm of a vector is known as a radial function. Radial rescaling functions rescale each vector according to the radial function v 7→ λ(|v|) := h(|v|)|v| . This explains the connection to Equation 1.2. 4 Universal Approximation148 In this section, we consider two universal approximation results. The first approxi-149 mates asymptotically affine functions with a network of unbounded width. The second150 generalizes to bounded width networks. Proofs appear in Appendix B. Throughout,151 Br(c) = {x ∈ Rn : |x − c| < r} denotes the r-ball around a point c, and an affine map152 Rn → Rm is one of the from L(x) = Ax + b for a matrix A ∈ Rm×n and b ∈ Rm.153 4.1 Approximation of asymptotically affine functions154 A continuous function f : Rn → Rm is said to be asymptotically affine if there exists an155 affine map L : Rn → Rm such that, for every ϵ > 0, there is a compact subset K of Rn such156 that |L(x)− f (x)| < ϵ for all x ∈ Rn \ K. In particular, continuous functions with compact157 support are asymptotically affine. The continuity of f and compactness of K imply that,158 for any ϵ > 0, there exist c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) such that, first, the union159 of the balls Bri (ci) covers K and, second, for all i, we have f (Bri (ci) ∩ K) ⊆ Bϵ( f (ci)). Let160 N( f , K, ϵ) be the minimal2 choice of N.161 Theorem 3 (Universal approximation). Let f : Rn → Rm be an asymptotically affine function.162 For any ϵ > 0, there exists a compact set K ⊂ Rn and a function F : Rn → Rm such that:163 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) layers whose164 hidden widths are (n + 1, n + 2, . . . , n + N).165 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.166 We note that the approximation in Theorem 3 is valid on all of Rn. To give an idea167 of the proof, first fix c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) as above. Let e1, . . . , eN be168 orthonormal basis vectors extending Rn to Rn+N . For i = 1, . . . , N define affine maps169 Ti : Rn+i−1 → Rn+i and Si : Rn+i → Rn+i by170 Ti(z) = z− ci + hiei Si(z) = z− (1 + h−1i )⟨ei, z⟩ei + ci + ei where h2i = 1− r2i and ⟨ei, z⟩ is the coefficient of ei in z. Setting ρi to be Step-ReLU171 (Equation 3.1) on Rn+i, these maps are chosen so that the composition Si ◦ ρi ◦ Ti maps172 the points in Bri (ci) to ci + ei, while keeping points outside this ball the same. We now173 describe a radial neural network with widths (n, n + 1, . . . , n + N, m) whose feedforward174 function approximates f . For i = 1, . . . , N the affine map from layer i− 1 to layer i is given175 by z 7→ Ti ◦ Si−1(z), with S0 = idRn . The activation at each hidden layer is Step-ReLU. Let176 L be the affine map such that |L− f | < ϵ on Rn \ K. The affine map from layer N to the177 output layer is Φ ◦ SN where Φ : Rn+N → Rm is the unique affine map determined by178 x 7→ L(x) if x ∈ Rn, and ei 7→ f (ci)− L(ci). This construction is illustrated in Figure 3.179 Corollary 4. 
Radial neural networks are dense in the space of all continuous functions with respect180 to the topology of compact convergence, and hence satisfy cc-universality.181 4.2 Bounded width approximation182 We now turn our attention to a bounded width universal approximation result.183 Theorem 5. Let f : Rn → Rm be an asymptotically affine function. For any ϵ > 0, there exists a184 compact set K ⊂ Rn and a function F : Rn → Rm such that:185 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) hidden186 layers whose widths are all n + m + 1.187 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.188 The proof, which is more involved than that of Theorem 3, relies on using orthogonal189 dimensions to represent the domain and the range of f , together with an indicator190 2In many cases, the constant N( f , K, ϵ) can be bounded explicitly. For example, if K is the unit cube in Rn and f is Lipschitz continuous with Lipschitz constant R, then N( f , K, ϵ) ≤ ⌈ R √ n 2ϵ ⌉n . dimension to distinguish the two. We regard points in Rn+m+1 as triples (x, y, θ) where191 x ∈ Rn, y ∈ Rm and θ ∈ R. The proof of Theorem 5 parallels that of Theorem 3, but instead192 of mapping points in Bri (ci) to ci + ei, we map the points in Bri ((ci, 0, 0)) to (0, f (ci)−L(0) s , 1),193 where s is chosen such that different balls do not interfere. The final layer then uses an194 affine map (x, y, θ) 7→ L(x) + sy, which takes (x, 0, 0) to L(x), and (0, f (ci)−L(0)s , 1) to f (ci).195 We remark on several additional results; see Appendix B for full statements and proofs.196 The bound of Theorem 5 can be strengthened to max(n, m) + 1 in the case of functions197 f : K → Rm defined on a compact domain K ⊂ Rn (i.e., ignoring asymptotic behavior).198 Furthermore, with more layers, it is possible to reduce that bound to max(n, m).199 5 Model compression200 In this section, we prove a model compression result. Specifically, we provide an algorithm201 which, given any radial neural network, computes a different radial neural network with202 smaller widths. The resulting compressed network has the same feedforward function203 as the original network, and hence the same value of the loss function on any batch of204 training data. In other words, our model compression procedure is lossless. Although205 our algorithm is practical and explicit, it reflects more conceptual phenomena, namely, a206 change-of-basis action on network parameter spaces (Section 5.1).207 5.1 The parameter space208 Suppose a fully connected network has L layers and widths given by the tuple n =209 (n0, n1, n2, . . . , nL−1, nL). In other words, the i-th layer has input width ni−1 and output210 width ni. The parameter space is defined as the vector space of all possible choices of211 parameter values. Hence, it is given by the following product of vector spaces:212 Param(n) = ( Rn1×n0 ×Rn2×n1 × · · · ×RnL×nL−1 ) × (Rn1 ×Rn2 × · · · ×RnL) We denote an element therein as a pair of tuples (W, b) where W = (Wi ∈ Rni×ni−1)Li=1213 are the weights and b = (bi ∈ Rni )Li=1 are the biases. To describe certain symmetries of214 the parameter space, consider the following product of orthogonal groups, with sizes215 corresponding to the widths of the hidden layers:216 O(nhid) = O(n1)×O(n2)× · · · ×O(nL−1) There is a change-of-basis action of O(nhid) on the parameter space Param(n). 
Explicitly,217 the tuple of orthogonal matrices Q = (Qi)L−1i=1 ∈ O(n hid) transforms the parameter values218 (W, b) to Q ·W := ( QiWiQ−1i−1 )L i=1 and Q ·b := (Qibi)Li=1, where Q0 = idn0 and QL = idnL .219 5.2 Model compression220 In order to state the compression result, we first define the reduced widths. Namely,221 the reduction nred = (nred0 , n red 1 , . . . , n red L ) of a widths vector n is defined recursively by222 setting nred0 = n0, then n red i = min(ni, n red i−1 + 1) for i = 1, . . . , L − 1, and finally n red L =223 nL. For a tuple ρ = (ρi : Rni → Rni )Li=1 of radial rescaling functions, we write ρred =224 ( ρredi : R nredi → Rnredi ) for the corresponding tuple of restrictions, which are all radial225 rescaling functions. The following result relies on Algorithm 1 below.226 Theorem 6. Let (W, b, ρ) be a radial neural network with widths n. Let Wred and bred be the227 weights and biases of the compressed network produced by Algorithm 1. The feedforward function228 of the original network (W, b, ρ) coincides with that of the compressed network (Wred, bred, ρred).229 Algorithm 1: QR Model Compression (QR-compress) input : W, b ∈ Param(n) output : Q ∈ O(nhid) and Wred, bred ∈ Param(nred) Q, Wred, bred ← [ ], [ ], [ ] // initialize output lists A1 ← [b1 W1] // matrix of size n1 × (n0 + 1) for i← 1 to L− 1 do // iterate through layers Qi, Ri ← QR-decomp(Ai , mode = ‘complete’) // Ai = QiInciRi Append Qi to Q Append first column of Ri to bred // reduced bias for layer i Append remainder of Ri to Wred // reduced weights for layer i Set Ai+1 ← [bi+1 Wi+1QiInci] // matrix of size ni+1 × (nredi + 1) end Append the first column of AL to bred // reduced bias for last layer Append the remainder of AL to Wred // reduced weights for last layer return Q, Wred, bred 230 We explain the notation of the algorithm. The inclusion matrix Inci ∈ Rni×n red i has231 ones along the main diagonal and zeros elsewhere. The method QR-decomp with mode =232 ‘complete’ computes the complete QR decomposition of the ni × (1 + nredi−1) matrix Ai233 as QiInciRi where Qi ∈ O(ni) and Ri is upper-triangular of size nredi × (1 + n red i−1). The234 definition of nredi implies that either n red i = n red i−1 + 1 or n red i = ni. The matrix Ri is of size235 nredi × n red i in the former case and of size ni × (1 + n red i−1) in the latter case.236 Example 7. Suppose the widths of a radial neural network are (1, 8, 16, 8, 1). Then it has237 ∑4i=1(ni−1 + 1)ni = 305 trainable parameters. The reduced network has widths (1, 2, 3, 4, 1)238 and ∑4i=1(n red i−1 + 1)(n red i ) = 34 trainable parameters. Another example appears in Figure 4.239 We note that the tuple of matrices Q produced by Algorithm 1 does not feature in the240 statement of Theorem 6, but is important in the proof (which appears in Appendix C).241 Namely, an induction argument shows that the i-th partial feedforward function of the242 original and reduced models are related via the matrices Qi and Inci. A crucial ingredient243 in the proof is that radial rescaling activations commute with orthogonal transformations.244 6 Projected gradient descent245 The typical use case for model compression algorithms is to produce a smaller version246 of the fully trained model which can be deployed to make inference more efficient. It247 is also worth considering whether compression can be used to accelerate training. 
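Before turning to that question, here is a minimal NumPy sketch of Algorithm 1 (QR-compress) under the conventions above. It is an illustrative reimplementation, not the authors' released code, and it assumes weights[i] has shape (n_i, n_{i-1}) and biases[i] has shape (n_i,). By Theorem 6, feeding the reduced parameters into a radial network with the restricted radial activations reproduces the original feedforward function.

```python
import numpy as np

def qr_compress(weights, biases):
    """Iteratively factor out orthogonal symmetries via complete QR decompositions."""
    L = len(weights)
    Qs, W_red, b_red = [], [], []
    A = np.column_stack([biases[0], weights[0]])    # A_1 = [b_1 W_1], shape (n_1, n_0 + 1)
    for i in range(L - 1):
        Q, R = np.linalg.qr(A, mode="complete")     # A_i = Q_i Inc_i R_i
        n_red = min(A.shape)                        # reduced width of hidden layer i + 1
        Qs.append(Q)
        b_red.append(R[:n_red, 0])                  # first column of R_i: reduced bias
        W_red.append(R[:n_red, 1:])                 # remainder of R_i: reduced weights
        Inc = np.eye(A.shape[0], n_red)             # inclusion matrix Inc_i
        A = np.column_stack([biases[i + 1], weights[i + 1] @ Q @ Inc])
    b_red.append(A[:, 0])                           # last layer keeps its output width
    W_red.append(A[:, 1:])
    return Qs, W_red, b_red

# Usage with the widths of Example 7, (1, 8, 16, 8, 1): the reduced weight matrices
# come out with shapes (2, 1), (3, 2), (4, 3), (1, 4), i.e. reduced widths (1, 2, 3, 4, 1).
widths = [1, 8, 16, 8, 1]
rng = np.random.default_rng(0)
W = [rng.standard_normal((widths[i + 1], widths[i])) for i in range(4)]
b = [rng.standard_normal(widths[i + 1]) for i in range(4)]
Qs, W_red, b_red = qr_compress(W, b)
```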
For248 example, for some compression algorithms, the compressed and full models have the same249 feedforward function after a step of gradient descent is applied to each, and so one can250 compress before training and still reach the same minimum. Unfortunately, in the context251 of radial neural networks, compression using Algorithm 1 and then training does not252 necessarily give the same result as training and then compression (see Appendix D.6 for a253 counterexample). However, QR-compress does lead to a precise mathematical relationship254 between optimization of the two models: the loss of the compressed model after one step255 of gradient descent is equivalent to the loss of (a transformed version of) the original model256 after one step of projected gradient descent. Proofs appear in Appendix D.257 To state our results, fix a tuple of widths n and a tuple ρ = (ρi : Rni → Rni )Li=1 of radial258 rescaling functions. The loss function L : Param(n)→ R associated to a batch of training259 data {(xj, yj)} ⊆ Rn0 × RnL is defined as taking parameter values (W, b) to the sum260 ∑j C(F(xj), yj) where C : RnL ×RnL → R is a cost function on the output space, and261 F = F(W,b,ρ) is the feedforward of the radial neural network with parameters (W, b) and262 activations ρ. Similarly, we have a loss function Lred on the parameter space Param(nred)263 with reduced widths vector. For any learning rate η > 0, we obtain gradient descent maps:264 γ : Param(n)→ Param(n) γred : Param(nred)→ Param(nred) (W, b) 7→ (W, b)− η∇(W,b)L (V, c) 7→ (V, c)− η∇(V,c)Lred We will also consider, for k ≥ 0, the k-fold composition γk = γ ◦ γ ◦ · · · ◦ γ and similarly265 for γred. The projected gradient descent map on Param(n) is given by:266 γproj : Param(n)→ Param(n), (W, b) 7→ Proj (γ(W, b)) where the map Proj zeroes out all entries in the bottom left (ni − nredi )× n red i−1 submatrix of267 Wi −∇WiL, and the bottom (ni − n red i ) entries in bi −∇biL, for each i. Schematically:268 Wi −∇WiL = [ ∗ ∗ ∗ ∗ ] 7→ [ ∗ ∗ 0 ∗ ] , bi −∇biL = [ ∗ ∗ ] 7→ [ ∗ 0 ] To state the following theorem, let Wred, bred, Q = QR-compress(W, b) be the outputs269 of Algorithm 1 applied to (W, b) ∈ Param(n). Hence (Wred, bred) ∈ Param(nred) are270 the parameters of the compressed model, and Q ∈ O(nhid) is an orthogonal parameter271 symmetry. We also consider the action (Section 5.1) of Q−1 applied to (W, b).272 Theorem 8. Let Wred, bred, Q = QR-compress(W, b) be the outputs of Algorithm 1 applied to273 (W, b) ∈ Param(n). Set U = Q−1 · (W, b)− (Wred, bred). For any k ≥ 0, we have:274 γk(W, b) = Q · γk(Q−1 · (W, b)) γkproj(Q−1 · (W, b)) = γkred(W red, bred) + U. We conclude that gradient descent with initial values (W, b) is equivalent to gradient275 descent with initial values Q−1 · (W, b) since at any stage we can apply Q±1 to move from276 one to the other. Furthermore, projected gradient descent with initial values Q−1 · (W, b)277 is equivalent to gradient descent on Param(nred) with initial values (Wred, bred) since at278 any stage we can move from one to the other by ±U. Neither Q nor U depends on k.279 7 Experiments280 In addition to the theoretical results in this work, we provide an implementation of281 Algorithm 1, in order to validate the claims of Theorems 6 and 8 empirically, as well as to282 quantify real-world performance. Full experimental details are in Appendix E.283 (1) Empirical verification of Theorem 6. 
We learn the function f (x) = e−x 2 from samples284 using a radial neural network with widths n = (1, 6, 7, 1) and activation the radial shifted285 sigmoid h(x) = 1/(1 + e−x+s). Applying QR-compress gives a compressed radial neural286 network with widths nred = (1, 2, 3, 1). Theorem 6 implies that the respective neural287 functions F and Fred are equal. Over 10 random initializations, the mean absolute error is288 negligible up to machine precision: (1/N)∑j |F(xj)− Fred(xj)| = 1.31 · 10−8 ± 4.45 · 10−9.289 (2) Empirical verification of Theorem 8. The claim is that training the transformed model290 with parameters Q−1 · (W, b) and objective L by projected gradient descent coincides291 with training the reduced model with parameters (Wred, bred) and objective Lred by292 usual gradient descent. We verified this on synthetic data as above. Over 10 random293 initializations, the loss functions after training match: |L − Lred| = 4.02 · 10−9 ± 7.01 · 10−9.294 (3) The compressed model trains faster. Our compression method may be applied before295 training to produce a smaller model class which trains faster without sacrificing accuracy.296 We demonstrate this in learning the function f : R2 → R2 sending (t1, t2) to (e−t 2 1 , e−t 2 2)297 using a radial neural network with widths n = (2, 16, 64, 128, 16, 2) and activation the298 radial sigmoid h(r) = 1/(1 + e−r). Applying QR-compress gives a compressed network299 with widths nred = (2, 3, 4, 5, 6, 2). We trained both models until the training loss was300 ≤ 0.01. Over 10 random initializations on our system, the reduced network trained in301 15.32± 2.53 seconds and the original network trained in 31.24± 4.55 seconds.302 8 Conclusions and Discussion303 This paper demonstrates that radial neural networks are universal approximators and that304 their parameter spaces exhibit a rich symmetry group, leading to a model compression305 algorithm. The results of this work combine to build a theoretical foundation for the use of306 radial neural networks, and suggest that radial neural networks hold promise for wider307 practical applicability. Furthermore, this work makes an argument for considering the308 advantages of non-pointwise nonlinearities in neural networks.309 There are two main limitations of our results, each providing an opportunity for future310 work. First, our universal approximation constructions currently work only for Step-ReLU311 radial rescaling radial activations; it would be desirable to generalize to other activations.312 Additionally, Theorem 6 achieves compression only for networks whose widths satisfy313 ni > ni−1 + 1 for some i. Neural networks which do not have increasing widths anywhere314 in their architecture, such as encoders, would not be compressible.315 Further extensions of this work include: First, little is currently known about the stabil-316 ity properties of radial neural networks during training, as well as their sensitivity to317 initialization. Second, radial rescaling activations provide an extreme case of symmetry;318 there may be benefits to combining radial and pointwise activations within a single net-319 work, for example, through ‘block’ radial rescaling functions. Our techniques may yield320 weaker compression properties for more general radial basis functions networks; radial321 neural networks may be the most compressible such networks. 
Third, the parameter space322 symmetries may provide a key ingredient in analyzing the gradient flow dynamics of323 radial neural networks and computation of conserved quantities. Fourth, radial rescaling324 activations can be used within convolutional or group-equivariant NNs. Finally, based325 on the theoretical advantages laid out in this paper, future work will explore empirically326 applications in which we expect radial networks to outperform alternate methods. Such327 potential applications include data spaces with circular or distance-based class boundaries.328 References329 [1] Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? arXiv:1312.6184,330 2013. 3331 [2] Erkao Bao and Linqi Song. Equivariant neural networks and equivarification.332 arXiv:1906.07172, 2019. 3333 [3] Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. What is334 the state of neural network pruning? arXiv:2003.03033, 2020. 3335 [4] David S Broomhead and David Lowe. Radial basis functions, multi-variable functional336 interpolation and adaptive networks. Technical report, Royal Signals and Radar337 Establishment Malvern (United Kingdom), 1988. 2, 3338 [5] Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression.339 In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery340 and Data Mining, pages 535–541, 2006. 3341 [6] Yu Cheng, X Yu Felix, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shih-Fu342 Chang. Fast neural networks with circulant projections. arXiv:1502.03436, 2, 2015. 3343 [7] Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu344 Chang. An exploration of parameter redundancy in deep networks with circulant345 projections. In Proceedings of the IEEE international conference on computer vision, pages346 2857–2865, 2015. 3347 [8] Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. A survey of model compression348 and acceleration for deep neural networks. arXiv:1710.09282, 2017. 3349 [9] Benjamin Chidester, Minh N. Do, and Jian Ma. Rotation equivariance and invariance350 in convolutional neural networks. arXiv:1805.12301, 2018. 3351 [10] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep352 network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289,353 2015. 1354 [11] Taco S. Cohen and Max Welling. Group equivariant convolutional networks. In355 International conference on machine learning (ICML), pages 2990–2999, 2016. 3356 [12] Taco S Cohen and Max Welling. Steerable CNNs. In Proceedings of the International357 Conference on Learning Representations (ICLR), 2017. 3358 [13] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equiv-359 ariant convolutional networks and the icosahedral CNN. In Proceedings of the 36th360 International Conference on Machine Learning (ICML), volume 97, pages 1321–1330, 2019.361 3362 [14] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathe-363 matics of control, signals and systems, 2(4):303–314, 1989. 1, 3364 [15] Congyue Deng, O. Litany, Yueqi Duan, A. Poulenard, A. Tagliasacchi, and L. Guibas.365 Vector Neurons: A General Framework for SO(3)-Equivariant Networks. 2021366 IEEE/CVF International Conference on Computer Vision (ICCV), 2021. doi: 10.1109/367 iccv48922.2021.01198. 3368 [16] Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symme-369 try in convolutional neural networks. 
In International Conference on Machine Learning370 (ICML), 2016. 3371 [17] Xin Dong, Shangyu Chen, and Sinno Jialin Pan. Learning to prune deep neural372 networks via layer-wise optimal brain surgeon. arXiv preprint arXiv:1705.07565, 2017.373 3374 [18] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse,375 trainable neural networks. arXiv:1803.03635, 2018. 3376 [19] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep377 convolutional networks using vector quantization. arXiv:1412.6115, 2014. 3378 [20] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neu-379 ral networks with pruning, trained quantization and huffman coding. arXiv:1510.00149,380 2015. 3381 [21] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli382 Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J.383 Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew384 Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre385 Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi,386 Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature,387 585(7825):357–362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https:388 //doi.org/10.1038/s41586-020-2649-2. 35389 [22] Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal390 brain surgeon. Morgan Kaufmann, 1993. 3391 [23] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural392 network. arXiv:1503.02531, 2015. 3393 [24] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural394 networks, 4(2):251–257, 1991. 3395 [25] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang,396 Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient con-397 volutional neural networks for mobile vision applications. arXiv:1704.04861, 2017.398 3399 [26] George Jeffreys and Siu-Cheong Lau. Kähler Geometry of Quiver Varieties and400 Machine Learning. arXiv:2101.11487, 2021. URL http://arxiv.org/abs/2101.11487.401 3402 [27] Ehud D Karnin. A simple procedure for pruning back-propagation trained neural403 networks. IEEE transactions on neural networks, 1(2):239–242, 1990. 3404 [28] Patrick Kidger and Terry Lyons. Universal approximation with deep narrow networks.405 In Conference on learning theory, pages 2306–2327. PMLR, 2020. 3406 [29] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-407 normalizing neural networks. Advances in neural information processing systems, 30,408 2017. 1409 [30] Risi Kondor and Shubhendu Trivedi. On the Generalization of Equivariance and410 Convolution in Neural Networks to the Action of Compact Groups. In International411 conference on machine learning (ICML), 2018. 3412 [31] Leon Lang and Maurice Weiler. A Wigner-Eckart theorem for group equivariant413 convolution kernels. In International Conference on Learning Representations (ICLR), 2021.414 3415 [32] Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempit-416 sky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition.417 arXiv:1412.6553, 2014. 3418 [33] Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in419 neural information processing systems, pages 598–605, 1990. 3420 [34] Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip HS Torr. 
A421 signal propagation perspective for pruning neural networks at initialization. arXiv422 preprint arXiv:1906.06307, 2019. 3423 [35] Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, and Rogerio424 Feris. Fully-adaptive feature sharing in multi-task networks with applications in425 person attribute classification. In Proceedings of the IEEE conference on computer vision426 and pattern recognition (CVPR), pages 5334–5343, 2017. 3427 [36] Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The428 expressive power of neural networks: A view from the width. Advances in neural429 information processing systems, 30, 2017. 3430 [37] Mirco Milletarí, Thiparat Chotibut, and Paolo E Trevisanutto. Mean field theory of431 activation functions in deep neural networks. arXiv preprint arXiv:1805.08786, 2018. 1432 [38] Diganta Misra. Mish: A self regularized non-monotonic activation function. arXiv433 preprint arXiv:1908.08681, 2019. 1434 [39] Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Prun-435 ing convolutional neural networks for resource efficient inference. arXiv preprint436 arXiv:1611.06440, 2016. 3437 [40] Jooyoung Park and Irwin W Sandberg. Universal approximation using radial-basis-438 function networks. Neural computation, 3(2):246–257, 1991. 3439 [41] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory440 Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban441 Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan442 Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith443 Chintala. Pytorch: An imperative style, high-performance deep learning library. In444 H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett,445 editors, Advances in Neural Information Processing Systems (NeurIPS) 32, pages446 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/447 9015-pytorch-an-imperative-style-high-performance-deep-learning-library.448 pdf. 35449 [42] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions.450 arXiv preprint arXiv:1710.05941, 2017. 1451 [43] Siamak Ravanbakhsh. Universal equivariant multilayer perceptrons. In International452 Conference on Machine Learning, pages 7996–8006. PMLR, 2020. 3453 [44] Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Equivariance through454 parameter-sharing. In International Conference on Machine Learning, pages 2892–2901.455 PMLR, 2017. 3456 [45] Roberto Rigamonti, Amos Sironi, Vincent Lepetit, and Pascal Fua. Learning separable457 filters. In Proceedings of the IEEE conference on computer vision and pattern recognition,458 pages 2754–2761, 2013. 3459 [46] Frank Rosenblatt. The perceptron: a probabilistic model for information storage and460 organization in the brain. Psychological review, 65(6):386, 1958. 1461 [47] Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between462 capsules. arXiv:1710.09829, 2017. 2, 3463 [48] Thiago Serra, Abhinav Kumar, and Srikumar Ramalingam. Lossless compression464 of deep neural networks. In International Conference on Integration of Constraint Pro-465 gramming, Artificial Intelligence, and Operations Research, pages 417–430. Springer, 2020.466 3467 [49] Thiago Serra, Xin Yu, Abhinav Kumar, and Srikumar Ramalingam. Scaling up exact468 neural network compression by relu stability. Advances in Neural Information Processing469 Systems, 34, 2021. 
3470 [50] Sho Sonoda and Noboru Murata. Neural network with unbounded activation func-471 tions is universal approximator. Applied and Computational Harmonic Analysis, 43(2):472 233–268, 2017. 3473 [51] Gustav Sourek, Filip Zelezny, and Ondrej Kuzelka. Lossless compression of structured474 convolutional models via lifting. arXiv preprint arXiv:2007.06567, 2020. 3475 [52] Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks476 with low-rank regularization. arXiv:1511.06067, 2015. 3477 [53] Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before478 training by preserving gradient flow. arXiv preprint arXiv:2002.07376, 2020. 3479 [54] Ming-Xi Wang and Yang Qu. Approximation capabilities of neural networks on480 unbounded domains. Neural Networks, 145:56–67, 2022. 3481 [55] Maurice Weiler and Gabriele Cesa. General E(2)-Equivariant Steerable CNNs. Confer-482 ence on Neural Information Processing Systems (NeurIPS), 2019. 2, 3, 4483 [56] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen.484 3D steerable CNNs: Learning rotationally equivariant features in volumetric data.485 Proceedings of the 32nd International Conference on Neural Information Processing Systems486 (NeurIPS), 2018. 2, 3487 [57] Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for488 rotation equivariant CNNs. In Proceedings of the IEEE Conference on Computer Vision489 and Pattern Recognition (CVPR), pages 849–858, 2018. 2, 3490 [58] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow.491 Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the492 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5028–5037,493 2017. 3494 [59] Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized495 convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference496 on Computer Vision and Pattern Recognition (CVPR), pages 4820–4828, 2016. 3497 [60] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks.498 Constructive Approximation, 55(1):407–474, 2022. 3499 [61] Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad,500 and Yanzhi Wang. A systematic DNN weight pruning framework using alternating501 direction method of multipliers. In Proceedings of the European Conference on Computer502 Vision (ECCV), pages 184–199, 2018. 3503 Checklist504 1. For all authors...505 (a) Do the main claims made in the abstract and introduction accurately reflect506 the paper’s contributions and scope? [Yes]507 (b) Did you describe the limitations of your work? [Yes] See Section 8.508 (c) Did you discuss any potential negative societal impacts of your work? [N/A]509 Our work is theoretical and does not hold specific risks of negative impacts.510 (d) Have you read the ethics review guidelines and ensured that your paper511 conforms to them? [Yes]512 2. If you are including theoretical results...513 (a) Did you state the full set of assumptions of all theoretical results? [Yes]514 (b) Did you include complete proofs of all theoretical results? [Yes] Most of the515 proofs appear in the supplementary material.516 3. 
If you ran experiments...517 (a) Did you include the code, data, and instructions needed to reproduce the518 main experimental results (either in the supplemental material or as a URL)?519 [Yes]520 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how521 they were chosen)? [Yes]522 (c) Did you report error bars (e.g., with respect to the random seed after running523 experiments multiple times)? [Yes]524 (d) Did you include the total amount of compute and the type of resources used525 (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix E.526 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new527 assets...528 (a) If your work uses existing assets, did you cite the creators? [Yes]529 (b) Did you mention the license of the assets? [N/A]530 (c) Did you include any new assets either in the supplemental material or as a531 URL? [N/A]532 (d) Did you discuss whether and how consent was obtained from people whose533 data you’re using/curating? [N/A]534 (e) Did you discuss whether the data you are using/curating contains personally535 identifiable information or offensive content? [N/A]536 5. If you used crowdsourcing or conducted research with human subjects...537 (a) Did you include the full text of instructions given to participants and screen-538 shots, if applicable? [N/A]539 (b) Did you describe any potential participant risks, with links to Institutional540 Review Board (IRB) approvals, if applicable? [N/A]541 (c) Did you include the estimated hourly wage paid to participants and the total542 amount spent on participant compensation? [N/A]543 A Organization of the appendices544 This paper is a contribution to the mathematical foundations of machine learning, and our545 results are motivated by expanding the applicability and performance of neural networks.546 At the same time, we give precise mathematical formulations of our results and proofs.547 The purposes of these appendices are several:548 1. To clarify the mathematical conventions and terminology, thus making the paper549 more accessible.550 2. To provide full proofs of the main results.551 3. To develop context around various construction appearing in the main text.552 4. To discuss in detail examples, special cases, and generalizations of our results.553 We now give a summary of the contents of the appendices.554 Appendix B contains proofs the universal approximation results (Theorems 3 and 5) stated555 in Section 4 of the main text, as well as proofs of additional bounded width results.556 The proofs use notation given in Appendix B.1, and rely on preliminary topological557 considerations given in Appendix B.2.558 In Appendix C, we give a proof of the model compression result given in Theorem 6, which559 appears in Section 5. For clarity and background we begin the appendix with a discussion560 of the version of the QR decomposition relevant for our purposes (Appendix C.1). We also561 establish elementary properties of radial rescaling activations (Appendix C.2).562 The focus of Appendix D is projected gradient descent, elaborating on Section 6. We563 first prove a result on the interaction of gradient descent and orthogonal transformations564 (Appendix D.1), before formulating projected gradient descent in more detail (Appendix565 D.2), and introducing the so-called interpolating space (Appendix D.3). 
We restate Theorem566 8 in more convenient notation (Appendix D.4) before proceeding to the proof (Appendix567 D.5).568 Appendix E contains implementation details for the experiments summarized in Section569 7. Our implementations use shifted radial rescaling activations, which we formulate in570 Appendix E.1.571 Appendix F explains the connection between our constructions and radial basis functions572 networks. While radial neural networks turn out to be a specific type of radial basis573 functions network, our universality results are not implied by those for general radial basis574 functions networks.575 B Universal approximation proofs and additional results576 In this section, we provide full proofs of the universal approximation (UA) results for radial577 neural networks, as stated in Section 4. In order to do so, we first clarify our notational578 conventions (Appendix B.1), and collect basic topological results (Appendix B.2).579 B.1 Notation580 Recall that, for a point c in the Euclidean space Rn and a positive real number r, we denote581 the r-ball around c by Br(c) = {x ∈ Rn | |x− c| < r}. All networks in this section have the582 Step-ReLU radial rescaling activation function, defined as:583 ρ : Rn −→ Rn, z 7−→ { z if |z| ≥ 1 0 otherwise Throughout, ◦ denotes the composition of functions. We identify a linear map with a584 corresponding matrix (in the standard bases). In the case of linear maps, the operation ◦585 can be be identified with matrix multiplication. Recall also that an affine map L : Rn → Rm586 is one of the from L(x) = Ax + b for a matrix A ∈ Rm×n and b ∈ Rm.587 B.2 Topology588 Let K be a compact subset of Rn and let f : K → Rm be a continuous function.589 Lemma 9. For any ϵ > 0, there exist c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) such that, first, the590 union of the balls Bri (ci) covers K; second, for all i, we have f (Bri (ci) ∩ K) ⊆ Bϵ( f (ci)).591 Proof. The continuity of f implies that for each c ∈ K, there exists r = rc such that592 f (Brc(c)∩K) ⊆ Bϵ( f (c)). The subsets Brc(c)∩K form an open cover of K. The compactness593 of K implies that there is a finite subcover. The result follows.594 We also prove a variation of Lemma 9 that additionally guarantees that none of the balls in595 the cover of K contains the center point of another ball.596 Lemma 10. For any ϵ > 0, there exist c1, . . . , cM ∈ K and r1, . . . , rM ∈ (0, 1) such that, first, the597 union of the balls Bri (ci) covers K; second, for all i, we have f (Bri (ci)) ⊆ Bϵ( f (ci)); and, third,598 |ci − cj| ≥ ri.599 Proof. Because f is continuous on a compact domain, it is uniformly continuous. So, there600 exists r > 0 such that f (Br(c) ∩ K) ⊆ Bϵ( f (c)) for each c ∈ K. Because K is compact it has601 a finite volume, and so does Br/2(K) = ⋃ c∈K Br/2(c). Hence, there exists a finite maximal602 packing of Br/2(K) with balls of radius r/2. That is, a collection c1, . . . , cM ∈ Br/2(K)603 such that, for all i, Br/2(ci) ⊆ Br/2(K) and, for all j ̸= i, Br/2(ci) ∩ Br/2(cj) = ∅. The first604 condition implies that ci ∈ K. The second condition implies that |ci − cj| ≥ r. Finally, we605 argue that K ⊆ ⋃Mi=1 Br(ci). To see this, suppose, for a contradiction, that x ∈ K does not606 belong to ⋃M i=1 Br(ci). Then Br/2(ci) ∩ Br/2(x) = ∅, and x could be added to the packing,607 which contradicts the fact that the packing was chosen to be maximal. So the union of the608 balls Br(ci) covers K.609 We turn our attention to the minimal choices of N and M in Lemmas 9 and 10.610 Definition 11. 
Given f : K → Rm continuous and ϵ > 0, let N( f , K, ϵ) be the minimal611 choice of N in Lemma 9, and let M( f , K, ϵ) be the minimal choice of M in Lemma 10.612 Observe that M( f , K, ϵ) ≥ N( f , K, ϵ). In many cases, it is possible to give explicit bounds613 for the constants N( f , K, ϵ) and M( f , K, ϵ). As an illustration, we give the argument in the614 case that K is the closed unit cube in Rn and f : K → Rm is Lipschtiz continuous.615 Proposition 12. Let K = [0, 1]n ⊂ Rn be the (closed) unit cube and let f : K → Rm be Lipschitz616 continuous with Lipschitz constant R. For any ϵ > 0, we have:617 N( f , K, ϵ) ≤ ⌈ R √ n 2ϵ ⌉n and M( f , K, ϵ) ≤ Γ(n/2 + 1) πn/2 ( 2 + 2R ϵ )n . Proof. For the first inequality, observe that the unit cube can be covered with ⌈ R √ n 2ϵ ⌉n 618 cubes of side length 2ϵR√n . Each cube is contained in a ball of radius ϵ R centered at the619 center of the cube. (In general, a cube of side length a in Rn is contained in a ball of620 radius a √ n 2 .) Lipschitz continuity implies that, for all x, x ′ ∈ K, if |x − x′| < ϵ/R then621 | f (x)− f (x′)| ≤ R|x− x′| < ϵ.622 For the second inequality, let r = ϵ/R. Lipschitz continuity implies that, for all x, x′ ∈ K, if623 |x− x′| < r then | f (x)− f (x′)| ≤ R|x− x′| < ϵ. The n-dimensional volume of the set of624 points with distance at most r/2 to the unit cube is vol(Br/2(K)) ≤ (1 + r)n. The volume625 of a ball with radius r/2 is vol(Br/2(0)) = π n/2 Γ(n/2+1) (r/2) n. Hence, any packing of Br/2(K)626 with balls of radius r/2 consists of at most627 vol(Br/2(K)) vol(Br/2(0)) ≤ Γ(n/2 + 1) πn/2 ( 2 + 2R ϵ )n such balls. So there also exists a maximal packing with at most that many balls. This628 packing can be used in the proof of Lemma 10, which implies that it is a bound on629 M( f , K, ϵ).630 We note in passing that any differentiable function f : K → Rn on a compact subset K of631 Rn is Lipschitz continuous. Indeed, the compactness of K implies that there exists R such632 that | f ′(x)| ≤ R for all x ∈ K. Then one can take R to be the Lipschitz constant of f .633 B.3 Proof of Theorem 3: UA for asymptotically affine functions634 In this section, we restate and prove Theorem 3, which proves that radial neural networks635 are universal approximators of asymptotically affine functions. We recall the definition of636 such functions:637 Definition 13. A function f : Rn → Rm is asymptotically affine if there exists an affine638 function L : Rn → Rm such that, for all ϵ > 0, there exists a compact set K ⊂ Rn such that639 |L(x)− f (x)| < ϵ for all x ∈ Rn \ K. We say that L is the limit of f .640 Remark 14. An asymptotically linear function is defined in the same way, except L is taken641 to be linear (i.e., given just by applying matrix multiplication without translation). Hence642 any asymptotically linear function is in particular an asymptotically affine function, and643 Theorem 3 applies to asymptotically linear functions as well.644 Given an asymptotically affine function f : Rn → Rm and ϵ > 0, let K be a compact set as645 in Definition 13. We apply Lemma 9 to the restriction f |K of f to K and produce a minimal646 constant N = N( f |K, K, ϵ) as in Definition 11. We write simply N( f , K, ϵ) for this constant.647 Theorem 3 (Universal approximation). Let f : Rn → Rm be an asymptotically affine function.648 For any ϵ > 0, there exists a compact set K ⊂ Rn and a function F : Rn → Rm such that:649 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) layers whose650 hidden widths are (n + 1, n + 2, . 
. . , n + N).651 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.652 Proof. By the hypothesis on f , there exists an affine function L : Rn → Rm and a compact653 set K ⊂ Rn such that |L(x)− f (x)| < ϵ for all x ∈ Rn \ K. Abbreviate N( f , K, ϵ) by N. As654 in Lemma 9, fix c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) such that, first, the union of the balls655 Bri (ci) covers K and, second, for all i, we have f (Bri (ci)) ⊆ Bϵ( f (ci)). Let U = ⋃N i=1 Bri (ci),656 so that K ⊂ U. Define F : Rn → Rm as:657 F(x) = { L(x) if x /∈ U f (cj) where j is the smallest index with x ∈ Brj(cj) If x /∈ U, then |F(x)− f (x)| = |L(x)− f (x)| < ϵ. Hence suppose x ∈ U. Let j be the658 smallest index such that x ∈ Brj(cj). Then F(x) = f (cj), and, by the choice of rj, we have:659 |F(x)− f (x)| = | f (cj)− f (x)| < ϵ. We proceed to show that F is the feedforward function of a radial neural network. Let660 e1, . . . , eN be orthonormal basis vectors extending Rn to Rn+N . We regard each Rn+i−1 as661 a subspace of Rn+i by embedding into the first n + i− 1 coordinates. For i = 1, . . . , N, we662 set hi = √ 1− r2i and define the following affine transformations:663 Ti : Rn+i−1 → Rn+i Si : Rn+i → Rn+i z 7→ z− ci + hiei z 7→ z− (1 + h−1i )⟨ei, z⟩ei + ci + ei where ⟨ei, z⟩ is the coefficient of ei in z. Consider the radial neural network with widths664 (n, n + 1, . . . , n + N, m), whose affine transformations and activations are given by:665 • For i = 1, . . . , N the affine transformation from layer i− 1 to layer i is given by666 z 7→ Ti ◦ Si−1(z), where S0 = idRn .667 • The activation function at the i-th hidden layer is Step-ReLU on Rn+i, that is:668 ρi : Rn+i −→ Rn+i, z 7−→ { z if |z| ≥ 1 0 otherwise • The affine transformation from layer i = N to the output layer is669 z 7→ ΦL, f ,c ◦ SN(z) where ΦL, f ,c is the affine transformation given by:670 ΦL, f ,c : R n+N → Rm, x + N ∑ i=1 aiei 7→ L(x) + N ∑ i=1 ai( f (ci)− L(ci)) which can be shown to be affine when L is affine. Indeed, write L(x) = Ax + b671 where A is a matrix in Rm×n and b ∈ Rm is a vector. Then ΦL, f ,c is the composition672 of the linear map given by the matrix673 [A f (c1)− L(c1) f (c2)− L(c2) · · · f (cN)− L(cN)] ∈ Rm×(n+N) and translation by b ∈ Rm. Note that we regard each f (ci) − L(ci) ∈ Rm as a674 column vector in the matrix above.675 We claim that the feedforward function of the above radial neural network is exactly F. To676 show this, we first state a lemma, whose (omitted) proof is an elementary computation.677 Lemma 3.1. For i = 1, . . . , N, the composition Si ◦ Ti is the embedding Rn+i−1 ↪→ Rn+i.678 Next, recursively define Gi : Rn → Rn+i via679 Gi = Si ◦ ρi ◦ Ti ◦ Gi−1, where G0 = idRn . The function Gi admits an direct formulation:680 Proposition 3.2. For i = 0, 1, . . . , N, we have:681 Gi(x) = { x if x /∈ ⋃ij=1 Brj(cj) cj + ej where j ≤ i is the smallest index with x ∈ Brj(cj) . Proof. We proceed by induction. The base step i = 0 is immediate. For the induction step,682 assume the claim is true for i− 1, where 0 ≤ i− 1 < N. There are three cases to consider.683 Case 1. Suppose x /∈ ⋃ij=1 Brj(cj). Then in particular x /∈ ⋃i−1j=1 Brj(cj), so the induction684 hypothesis implies that Gi−1(x) = x. Additionally, x /∈ Bri (ci), so:685 |Ti(x)| = |x− ci + hiei| = √ |x− ci|+ h2i ≥ √ r2i + 1− r2i = 1. Using the definition of ρi and Lemma 3.1, we compute:686 Gi(x) = Si ◦ ρi ◦ Ti ◦ Gi−1(x) = Si ◦ ρi ◦ Ti(x) = Si ◦ Ti(x) = x. Case 2. Suppose x ∈ Bj \ ⋃j−1 k=1 Brk (ck) for some j ≤ i− 1. 
Then the induction hypothesis687 implies that Gi−1(x) = cj + ej. We compute:688 |Ti(cj + ej)| = |cj + ej − ci + hiei| > |ej| = 1. Therefore,689 Gi(x) = Si ◦ ρi ◦ Ti(cj + ej) = Si ◦ Ti(cj + ej) = cj + ej. Case 3. Finally, suppose x ∈ Bi \ ⋃i−1 j=1 Brj(cj). The induction hypothesis implies that690 Gi−1(x) = x. Since x ∈ Bri (ci), we have:691 |Ti(x)| = |x− ci + hiei| = √ |x− ci|+ h2i < √ r2i + 1− r2i = 1. Therefore:692 Gi(x) = Si ◦ ρi ◦ Ti(x) = Si(0) = ci + ei. This completes the proof of the proposition.693 Finally, we show that the function F defined at the beginning of the proof is the feedforward694 function of the above radial neural network. The computation is elementary:695 Ffeedforward = ΦL, f ,c ◦ SN ◦ ρN ◦ TN ◦ SN−1 ◦ ρN−1 ◦ TN−1 ◦ · · · S1 ◦ ρ1 ◦ T1 = ΦL, f ,c ◦ GN = F where the first equality follows from the definition of the feedforward function, the second696 from the definition of GN , and the last from the case i = N of Proposition 3.2 together with697 the definition of ΦL, f ,c. This completes the proof of the theorem.698 B.4 Proof of Theorem 5: bounded width UA for asymptotically affine functions699 We restate and prove Theorem 5, which strengthens Theorem 3 by providing a bounded700 width radial neural network approximation of any asymptotically affine function.701 Theorem 5. Let f : Rn → Rm be an asymptotically affine function. For any ϵ > 0, there exists a702 compact set K ⊂ Rn and a function F : Rn → Rm such that:703 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) hidden704 layers whose widths are all n + m + 1.705 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.706 Proof. By the hypothesis
1. What is the focus and contribution of the paper regarding radial neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of expressivity and model compressibility?
3. Do you have any concerns or questions regarding the paper's claims and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any limitations or potential negative impacts associated with the proposed approach?
Summary Of The Paper
The paper provides an analysis of radial neural networks from the viewpoint of expressivity and model compressibility.

Strengths And Weaknesses
I have appreciated the originality of the work, namely, exploring the pros and cons of these architectures simultaneously in the same paper. The main weakness, to me, is a lack of motivation: why should we use these networks instead of radial basis function networks, for which the theory is better understood? After having read the paper, I am still unsure whether I should use this architecture or another, mostly for the following reason: it is unclear to me what expressivity we lose by using these architectures. The authors only prove the ability of the model to represent simple continuous functions, but do not say exactly how its expressivity compares to common architectures with respect to state-of-the-art expressivity guarantees. Regarding clarity, I found the section about network compression hard to understand; the rest of the paper was well presented. The paper is overall well written, and I found few typos.

Questions
Why should we use these networks instead of radial basis function networks, for which the theory is better understood? Intuitively, one may expect that applying the nonlinearity to the activation norm, instead of to each component of the activation vector, decreases expressivity. The result you provide is very generic; would it be possible to characterize the loss of expressivity for more complicated functions? How does the number of layers/width needed to approximate a continuous function to accuracy epsilon compare to those needed for other types of architectures? Could you please give intuition for why radial neural nets are more compressible than other architectures? Am I right that Q in Thm 7 depends on k? This should be clarified.

Limitations
I do not expect this work to have any societal impact.
NIPS
Title: Universal approximation and model compression for radial neural networks

Abstract
We introduce a class of fully-connected neural networks whose activation functions, rather than being pointwise, rescale feature vectors by a function depending only on their norm. We call such networks radial neural networks, extending previous work on rotation equivariant networks that considers rescaling activations in less generality. We prove universal approximation theorems for radial neural networks, including in the more difficult cases of bounded widths and unbounded domains. Our proof techniques are novel, distinct from those in the pointwise case. Additionally, radial neural networks exhibit a rich group of orthogonal change-of-basis symmetries on the vector space of trainable parameters. Factoring out these symmetries leads to a practical lossless model compression algorithm. Optimization of the compressed model by gradient descent is equivalent to projected gradient descent for the full model.

1 Introduction

Inspired by biological neural networks, the theory of artificial neural networks has largely focused on pointwise (or "local") nonlinear layers [46, 14], in which the same function σ : R → R is applied to each coordinate independently:

R^n → R^n, v = (v_1, . . . , v_n) ↦ (σ(v_1), σ(v_2), . . . , σ(v_n)).   (1.1)

In networks with pointwise nonlinearities, the standard basis vectors in R^n can be interpreted as "neurons" and the nonlinearity as a "neuron activation." Research has generally focused on finding functions σ which lead to more stable training, have less sensitivity to initialization, or are better adapted to certain applications [42, 38, 37, 10, 29]. Many σ have been considered, including sigmoid, ReLU, arctangent, ELU, Swish, and others.

However, by setting aside the biological metaphor, it is possible to consider a much broader class of nonlinearities, which are not necessarily pointwise, but instead depend simultaneously on many coordinates. Freedom from the pointwise assumption allows one to design activations that yield expressive function classes with specific advantages. Additionally, certain choices of non-pointwise activations maximize symmetry in the parameter space of the network, leading to compressibility and other desirable properties.

In this paper, we introduce radial neural networks, which employ non-pointwise nonlinearities called radial rescaling activations.
Such networks enjoy several provable properties, including high model compressibility, symmetry in optimization, and universal approximation. Radial rescaling activations are defined by rescaling each vector by a scalar that depends only on the norm of the vector:

ρ : R^n → R^n, v ↦ λ(|v|) v,   (1.2)

where λ is a scalar-valued function of the norm. Whereas in the pointwise setting, only the linear layers mix information between different components of the latent features, for radial rescaling, all coordinates of the activation output vector are affected by all coordinates of the activation input vector. The inherent geometric symmetry of radial rescalings makes them particularly useful for designing equivariant neural networks [55, 47, 56, 57].

We note that radial neural networks constitute a simple and previously unconsidered type of multilayer radial basis functions network [4], namely, one where the number of hidden activation neurons (often denoted N) in each layer is equal to one. Indeed, pre-composing Equation 1.2 with a translation and post-composing with a linear map, one obtains a special case of the local linear model extension of a radial basis functions network.

In our first set of main results, we prove that radial neural networks are in fact universal approximators. Specifically, we demonstrate that any asymptotically affine function can be approximated with a radial neural network, suggesting potentially good extrapolation behavior. Moreover, this approximation can be done with bounded width. Our approach to proving these results departs markedly from techniques used in the pointwise case. Additionally, our result is not implied by the universality property of radial basis functions networks in general, and differs in significant ways, particularly in the bounded width property and the approximation of asymptotically affine functions.

In our second set of main results, we exploit parameter space symmetries of radial neural networks to achieve model compression. Using the fact that radial rescaling activations commute with orthogonal transformations, we develop a practical algorithm to systematically factor out orthogonal symmetries via iterated QR decompositions. This leads to another radial neural network with fewer neurons in each hidden layer. The resulting model compression algorithm is lossless: the compressed network and the original network both have the same value of the loss function on any batch of training data.

Furthermore, we prove that the loss of the compressed model after one step of gradient descent is equal to the loss of the original model after one step of projected gradient descent. As explained below, projected gradient descent involves zeroing out certain parameter values after each step of gradient descent.
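In code, one projected step is simply an ordinary gradient update followed by zeroing a fixed block of each weight matrix and bias. The following minimal NumPy sketch is illustrative only: the function name and the arguments n_red_out and n_red_in are placeholders for the reduced widths defined in Sections 5 and 6, and the gradients are assumed to be supplied by the caller (for instance, by automatic differentiation).

import numpy as np

def projected_gradient_step(W, b, grad_W, grad_b, lr, n_red_out, n_red_in):
    """One gradient step on (W, b) followed by the projection of Section 6:
    zero the bottom-left (n_out - n_red_out) x n_red_in block of W and the
    bottom (n_out - n_red_out) entries of b."""
    W_new = W - lr * grad_W
    b_new = b - lr * grad_b
    W_new[n_red_out:, :n_red_in] = 0.0   # zero the bottom-left block
    b_new[n_red_out:] = 0.0              # zero the trailing bias entries
    return W_new, b_new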
Although training the original network may62 result in a lower loss function after fewer epochs, in many cases the compressed network63 takes less time per epoch to train and is faster in reaching a local minimum.64 To summarize, our main contributions are:65 • A formalization of radial neural networks, a new class of neural networks;66 • Universal approximations results for radial neural networks, including: a) approxi-67 mation of asymptotically affine functions, and b) bounded width approximation;68 • Implementation of a lossless compression algorithm for radial neural networks;69 • A theorem providing the precise relationship between gradient descent optimiza-70 tion of the original and compressed networks.71 2 Related work72 Radial rescaling activations. As noted, radial rescaling activations are a special case of the73 activations used in radial basis functions networks [4]. Radial rescaling functions have the74 symmetry property of preserving vector directions, and hence exhibit rotation equivariance.75 Consequently, examples of such functions, such as the squashing nonlinearity and Norm-76 ReLU, feature in the study of rotationally equivariant neural networks [55, 47, 56, 57, 26].77 However, previous works apply the activation only along the channel dimension, and78 consider the orthogonal group O(n) only for n = 2, 3. In contrast, we consider a radial79 rescaling activation across the entire hidden layer, and O(n)-equivariance where n is the80 hidden layer dimension. Our constructions echo the vector neurons formalism [15], in81 which the output of a nonlinearity is a vector rather than a scalar.82 Universal approximation. Neural networks of arbitrary width and sigmoid activations83 have long been known to be universal approximators [14]. Universality can also be achieved84 by bounded width networks with arbitrary depth [36], and generalizes to other activations85 and architectures [24, 60, 43, 50]. While most work has focused on compact domains,86 some recent work also considers non-compact domains [28, 54]. The techniques used for87 pointwise activations do not generalize to radial rescaling activations, where all activation88 output coordinates are affected by all input coordinates. Consequently, individual radial89 neural network approximators of two different functions cannot be easily combined to an90 approximator of the sum of the functions. The standard proof of universal approximation91 for radial basis functions networks requires an unbounded increase the number of hidden92 activation neurons, and hence does not apply to the case of radial neural networks [40].93 Groups and symmetry. Appearances of symmetry in machine learning have generally94 focused on symmetric input and output spaces. Most prominently, equivariant neural95 networks incorporate symmetry as an inductive bias and feature weight-sharing constraints96 based on equivariance. Examples include G-convolution, steerable CNN, and Clebsch-97 Gordon networks [13, 55, 11, 9, 30, 2, 58, 12, 57, 16, 31, 44]. By contrast, our approach to98 radial neural networks does not depend on symmetries of the input domain, output space,99 or feedforward mapping. Instead, we exploit parameter space symmetries and thus obtain100 more general results that apply to domains with no apparent symmetry.101 Model compression. A major goal in machine learning is to find methods to reduce102 the number of trainable parameters, decrease memory usage, or accelerate inference and103 training [8, 61]. 
Our approach toward this goal differs significantly from most existing104 methods in that it is based on the inherent symmetry of network parameter spaces.105 One prior method is weight pruning, which removes redundant weights with little loss106 in accuracy [20, 3, 27]. Pruning can be done during training [18] or at initialization107 [34, 53]. Gradient-based pruning removes weights by estimating the increase in loss resulting108 from their removal [33, 22, 17, 39]. A complementary approach is quantization, which109 decreases the bit depth of weights [59, 25, 19]. Knowledge distillation identifies a small model110 mimicking the performance of a larger model [5, 23, 1]. Matrix Factorization methods replace111 fully connected layers with lower rank or sparse factored tensors [6, 7, 52, 32, 45, 35] and112 can often be applied before training. Our method involves a type of matrix factorization113 based on the QR decomposition; however, rather than aim for rank reduction, we leverage114 this decomposition to reduce hidden widths via change-of-basis operations on the hidden115 representations. Close to our method are lossless compression methods which remove116 stable neurons in ReLU networks [49, 48] or exploit permutation parameter space symmetry117 to remove neurons [51]; our compression instead follows from the symmetries of the radial118 rescaling activation. Finally, the compression results of [26], while conceptually similar to119 ours, are weaker, as (1) the unitary group action is on disjoint layers instead of moving120 through all layers, and (2) the results are only stated for the squashing nonlinearity.121 3 Radial neural networks122 In this section, we define radial rescaling functions and radial neural networks. Let123 h : R→ R be a function. For any n ≥ 1, set:124 h(n) : Rn → Rn h(n)(v) = h(|v|) v|v| for v ̸= 0, and h(n)(0) = 0. A function ρ : Rn → Rn is called a radial rescaling function if125 ρ = h(n) for some piecewise differentiable h : R→ R. Hence, ρ sends each input vector to126 a scalar multiple of itself, and that scalar depends only on the norm of the vector1. It is127 easy to show that radial rescaling functions commute with orthogonal transformations.128 Example 1. (1) Step-ReLU, where h(r) = r if r ≥ 1 and 0 otherwise. In this case, the radial129 rescaling function is given by130 ρ : Rn → Rn, v 7→ v if |v| ≥ 1; v 7→ 0 if |v| < 1 (3.1) (2) The squashing function, where h(r) = r2/(r2 + 1). (3) Shifted ReLU, where h(r) =131 max(0, r − b) for r > 0 and b is a real number. See Figure 2. We refer to [55] and the132 references therein for more examples and discussion of radial functions.133 A radial neural network with L layers consists of a positive integer ni indicating the width of134 each layer i = 0, 1, . . . , L; the trainable parameters, comprising of a matrix Wi ∈ Rni×ni−1135 of weights and a bias vector bi ∈ Rni for each i = 1, . . . , L; and a radial rescaling function136 ρi : Rni → Rni for each i = 1, . . . , L. We refer to the tuple n = (n0, n1, . . . , nL) as the widths137 vector of the neural network. The hidden widths vector is nhid = (n1, n2, . . . , nL−1). The138 feedforward function F : Rn0 → RnL of a radial neural network is defined in the usual way139 as an iterated composition of affine maps and activations. Explicitly, set F0 = idRn0 and140 recursively define the partial feedforward functions for i = 1, . . . , L:141 Fi : Rn0 → Rni , x 7→ ρi (Wi ◦ Fi−1(x) + bi) Then the feedforward function is F = FL. 
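The definitions above translate directly into code. The following NumPy sketch is purely illustrative and is not the implementation used in our experiments (Appendix E); it implements a radial rescaling activation h^(n) and the feedforward function F = F_L of a radial neural network.

import numpy as np

def radial_rescale(h, v):
    """Radial rescaling h^(n): send v to h(|v|) * v / |v|, and 0 to 0."""
    r = np.linalg.norm(v)
    if r == 0:
        return np.zeros_like(v)
    return (h(r) / r) * v

def step_relu(r):
    """Step-ReLU (Example 1(1)): h(r) = r if r >= 1, else 0."""
    return r if r >= 1.0 else 0.0

def feedforward(weights, biases, h, x):
    """Feedforward function F = F_L, with F_i(x) = rho_i(W_i F_{i-1}(x) + b_i).
    weights[i] has shape (n_{i+1}, n_i); biases[i] has length n_{i+1}."""
    z = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        z = radial_rescale(h, W @ z + b)
    return z

With h = step_relu, this reproduces the Step-ReLU networks used in the universal approximation constructions of Section 4; replacing h by the squashing function or a shifted ReLU gives the other examples above.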
Radial neural networks are a special type of142 radial basis functions network; we explain the connection in Appendix F.143 Remark 2. If bi = 0 for all i, then the feedforward function takes the form F(x) = W (µ(x)x)144 where µ : Rn → R is a scalar-valued function and W = WLWL−1 · · ·W1 ∈ RnL×n0 is the145 product of the weight matrices. If any of the biases are non-zero, then the feedforward146 function lacks such a simple form.147 1A function Rn → R that depends only on the norm of a vector is known as a radial function. Radial rescaling functions rescale each vector according to the radial function v 7→ λ(|v|) := h(|v|)|v| . This explains the connection to Equation 1.2. 4 Universal Approximation148 In this section, we consider two universal approximation results. The first approxi-149 mates asymptotically affine functions with a network of unbounded width. The second150 generalizes to bounded width networks. Proofs appear in Appendix B. Throughout,151 Br(c) = {x ∈ Rn : |x − c| < r} denotes the r-ball around a point c, and an affine map152 Rn → Rm is one of the from L(x) = Ax + b for a matrix A ∈ Rm×n and b ∈ Rm.153 4.1 Approximation of asymptotically affine functions154 A continuous function f : Rn → Rm is said to be asymptotically affine if there exists an155 affine map L : Rn → Rm such that, for every ϵ > 0, there is a compact subset K of Rn such156 that |L(x)− f (x)| < ϵ for all x ∈ Rn \ K. In particular, continuous functions with compact157 support are asymptotically affine. The continuity of f and compactness of K imply that,158 for any ϵ > 0, there exist c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) such that, first, the union159 of the balls Bri (ci) covers K and, second, for all i, we have f (Bri (ci) ∩ K) ⊆ Bϵ( f (ci)). Let160 N( f , K, ϵ) be the minimal2 choice of N.161 Theorem 3 (Universal approximation). Let f : Rn → Rm be an asymptotically affine function.162 For any ϵ > 0, there exists a compact set K ⊂ Rn and a function F : Rn → Rm such that:163 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) layers whose164 hidden widths are (n + 1, n + 2, . . . , n + N).165 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.166 We note that the approximation in Theorem 3 is valid on all of Rn. To give an idea167 of the proof, first fix c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) as above. Let e1, . . . , eN be168 orthonormal basis vectors extending Rn to Rn+N . For i = 1, . . . , N define affine maps169 Ti : Rn+i−1 → Rn+i and Si : Rn+i → Rn+i by170 Ti(z) = z− ci + hiei Si(z) = z− (1 + h−1i )⟨ei, z⟩ei + ci + ei where h2i = 1− r2i and ⟨ei, z⟩ is the coefficient of ei in z. Setting ρi to be Step-ReLU171 (Equation 3.1) on Rn+i, these maps are chosen so that the composition Si ◦ ρi ◦ Ti maps172 the points in Bri (ci) to ci + ei, while keeping points outside this ball the same. We now173 describe a radial neural network with widths (n, n + 1, . . . , n + N, m) whose feedforward174 function approximates f . For i = 1, . . . , N the affine map from layer i− 1 to layer i is given175 by z 7→ Ti ◦ Si−1(z), with S0 = idRn . The activation at each hidden layer is Step-ReLU. Let176 L be the affine map such that |L− f | < ϵ on Rn \ K. The affine map from layer N to the177 output layer is Φ ◦ SN where Φ : Rn+N → Rm is the unique affine map determined by178 x 7→ L(x) if x ∈ Rn, and ei 7→ f (ci)− L(ci). This construction is illustrated in Figure 3.179 Corollary 4. 
Radial neural networks are dense in the space of all continuous functions with respect180 to the topology of compact convergence, and hence satisfy cc-universality.181 4.2 Bounded width approximation182 We now turn our attention to a bounded width universal approximation result.183 Theorem 5. Let f : Rn → Rm be an asymptotically affine function. For any ϵ > 0, there exists a184 compact set K ⊂ Rn and a function F : Rn → Rm such that:185 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) hidden186 layers whose widths are all n + m + 1.187 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.188 The proof, which is more involved than that of Theorem 3, relies on using orthogonal189 dimensions to represent the domain and the range of f , together with an indicator190 2In many cases, the constant N( f , K, ϵ) can be bounded explicitly. For example, if K is the unit cube in Rn and f is Lipschitz continuous with Lipschitz constant R, then N( f , K, ϵ) ≤ ⌈ R √ n 2ϵ ⌉n . dimension to distinguish the two. We regard points in Rn+m+1 as triples (x, y, θ) where191 x ∈ Rn, y ∈ Rm and θ ∈ R. The proof of Theorem 5 parallels that of Theorem 3, but instead192 of mapping points in Bri (ci) to ci + ei, we map the points in Bri ((ci, 0, 0)) to (0, f (ci)−L(0) s , 1),193 where s is chosen such that different balls do not interfere. The final layer then uses an194 affine map (x, y, θ) 7→ L(x) + sy, which takes (x, 0, 0) to L(x), and (0, f (ci)−L(0)s , 1) to f (ci).195 We remark on several additional results; see Appendix B for full statements and proofs.196 The bound of Theorem 5 can be strengthened to max(n, m) + 1 in the case of functions197 f : K → Rm defined on a compact domain K ⊂ Rn (i.e., ignoring asymptotic behavior).198 Furthermore, with more layers, it is possible to reduce that bound to max(n, m).199 5 Model compression200 In this section, we prove a model compression result. Specifically, we provide an algorithm201 which, given any radial neural network, computes a different radial neural network with202 smaller widths. The resulting compressed network has the same feedforward function203 as the original network, and hence the same value of the loss function on any batch of204 training data. In other words, our model compression procedure is lossless. Although205 our algorithm is practical and explicit, it reflects more conceptual phenomena, namely, a206 change-of-basis action on network parameter spaces (Section 5.1).207 5.1 The parameter space208 Suppose a fully connected network has L layers and widths given by the tuple n =209 (n0, n1, n2, . . . , nL−1, nL). In other words, the i-th layer has input width ni−1 and output210 width ni. The parameter space is defined as the vector space of all possible choices of211 parameter values. Hence, it is given by the following product of vector spaces:212 Param(n) = ( Rn1×n0 ×Rn2×n1 × · · · ×RnL×nL−1 ) × (Rn1 ×Rn2 × · · · ×RnL) We denote an element therein as a pair of tuples (W, b) where W = (Wi ∈ Rni×ni−1)Li=1213 are the weights and b = (bi ∈ Rni )Li=1 are the biases. To describe certain symmetries of214 the parameter space, consider the following product of orthogonal groups, with sizes215 corresponding to the widths of the hidden layers:216 O(nhid) = O(n1)×O(n2)× · · · ×O(nL−1) There is a change-of-basis action of O(nhid) on the parameter space Param(n). 
Explicitly,217 the tuple of orthogonal matrices Q = (Qi)L−1i=1 ∈ O(n hid) transforms the parameter values218 (W, b) to Q ·W := ( QiWiQ−1i−1 )L i=1 and Q ·b := (Qibi)Li=1, where Q0 = idn0 and QL = idnL .219 5.2 Model compression220 In order to state the compression result, we first define the reduced widths. Namely,221 the reduction nred = (nred0 , n red 1 , . . . , n red L ) of a widths vector n is defined recursively by222 setting nred0 = n0, then n red i = min(ni, n red i−1 + 1) for i = 1, . . . , L − 1, and finally n red L =223 nL. For a tuple ρ = (ρi : Rni → Rni )Li=1 of radial rescaling functions, we write ρred =224 ( ρredi : R nredi → Rnredi ) for the corresponding tuple of restrictions, which are all radial225 rescaling functions. The following result relies on Algorithm 1 below.226 Theorem 6. Let (W, b, ρ) be a radial neural network with widths n. Let Wred and bred be the227 weights and biases of the compressed network produced by Algorithm 1. The feedforward function228 of the original network (W, b, ρ) coincides with that of the compressed network (Wred, bred, ρred).229 Algorithm 1: QR Model Compression (QR-compress) input : W, b ∈ Param(n) output : Q ∈ O(nhid) and Wred, bred ∈ Param(nred) Q, Wred, bred ← [ ], [ ], [ ] // initialize output lists A1 ← [b1 W1] // matrix of size n1 × (n0 + 1) for i← 1 to L− 1 do // iterate through layers Qi, Ri ← QR-decomp(Ai , mode = ‘complete’) // Ai = QiInciRi Append Qi to Q Append first column of Ri to bred // reduced bias for layer i Append remainder of Ri to Wred // reduced weights for layer i Set Ai+1 ← [bi+1 Wi+1QiInci] // matrix of size ni+1 × (nredi + 1) end Append the first column of AL to bred // reduced bias for last layer Append the remainder of AL to Wred // reduced weights for last layer return Q, Wred, bred 230 We explain the notation of the algorithm. The inclusion matrix Inci ∈ Rni×n red i has231 ones along the main diagonal and zeros elsewhere. The method QR-decomp with mode =232 ‘complete’ computes the complete QR decomposition of the ni × (1 + nredi−1) matrix Ai233 as QiInciRi where Qi ∈ O(ni) and Ri is upper-triangular of size nredi × (1 + n red i−1). The234 definition of nredi implies that either n red i = n red i−1 + 1 or n red i = ni. The matrix Ri is of size235 nredi × n red i in the former case and of size ni × (1 + n red i−1) in the latter case.236 Example 7. Suppose the widths of a radial neural network are (1, 8, 16, 8, 1). Then it has237 ∑4i=1(ni−1 + 1)ni = 305 trainable parameters. The reduced network has widths (1, 2, 3, 4, 1)238 and ∑4i=1(n red i−1 + 1)(n red i ) = 34 trainable parameters. Another example appears in Figure 4.239 We note that the tuple of matrices Q produced by Algorithm 1 does not feature in the240 statement of Theorem 6, but is important in the proof (which appears in Appendix C).241 Namely, an induction argument shows that the i-th partial feedforward function of the242 original and reduced models are related via the matrices Qi and Inci. A crucial ingredient243 in the proof is that radial rescaling activations commute with orthogonal transformations.244 6 Projected gradient descent245 The typical use case for model compression algorithms is to produce a smaller version246 of the fully trained model which can be deployed to make inference more efficient. It247 is also worth considering whether compression can be used to accelerate training. 
For example, for some compression algorithms, the compressed and full models have the same feedforward function after a step of gradient descent is applied to each, and so one can compress before training and still reach the same minimum. Unfortunately, in the context of radial neural networks, compression using Algorithm 1 and then training does not necessarily give the same result as training and then compression (see Appendix D.6 for a counterexample). However, QR-compress does lead to a precise mathematical relationship between optimization of the two models: the loss of the compressed model after one step of gradient descent is equivalent to the loss of (a transformed version of) the original model after one step of projected gradient descent. Proofs appear in Appendix D.

To state our results, fix a tuple of widths n and a tuple ρ = (ρ_i : R^{n_i} → R^{n_i})_{i=1}^L of radial rescaling functions. The loss function L : Param(n) → R associated to a batch of training data {(x_j, y_j)} ⊆ R^{n_0} × R^{n_L} is defined as taking parameter values (W, b) to the sum ∑_j C(F(x_j), y_j), where C : R^{n_L} × R^{n_L} → R is a cost function on the output space, and F = F_{(W,b,ρ)} is the feedforward function of the radial neural network with parameters (W, b) and activations ρ. Similarly, we have a loss function L_red on the parameter space Param(n^red) with reduced widths vector. For any learning rate η > 0, we obtain gradient descent maps:

γ : Param(n) → Param(n), (W, b) ↦ (W, b) − η ∇_{(W,b)} L
γ_red : Param(n^red) → Param(n^red), (V, c) ↦ (V, c) − η ∇_{(V,c)} L_red

We will also consider, for k ≥ 0, the k-fold composition γ^k = γ ◦ γ ◦ · · · ◦ γ, and similarly for γ_red. The projected gradient descent map on Param(n) is given by:

γ_proj : Param(n) → Param(n), (W, b) ↦ Proj(γ(W, b))

where the map Proj zeroes out all entries in the bottom-left (n_i − n^red_i) × n^red_{i−1} submatrix of W_i − ∇_{W_i} L, and the bottom (n_i − n^red_i) entries of b_i − ∇_{b_i} L, for each i. Schematically (rows separated by semicolons):

W_i − ∇_{W_i} L = [ ∗ ∗ ; ∗ ∗ ] ↦ [ ∗ ∗ ; 0 ∗ ],   b_i − ∇_{b_i} L = [ ∗ ; ∗ ] ↦ [ ∗ ; 0 ]

To state the following theorem, let W^red, b^red, Q = QR-compress(W, b) be the outputs of Algorithm 1 applied to (W, b) ∈ Param(n). Hence (W^red, b^red) ∈ Param(n^red) are the parameters of the compressed model, and Q ∈ O(n^hid) is an orthogonal parameter symmetry. We also consider the action (Section 5.1) of Q^{−1} applied to (W, b).

Theorem 8. Let W^red, b^red, Q = QR-compress(W, b) be the outputs of Algorithm 1 applied to (W, b) ∈ Param(n). Set U = Q^{−1} · (W, b) − (W^red, b^red). For any k ≥ 0, we have:

γ^k(W, b) = Q · γ^k(Q^{−1} · (W, b))
γ^k_proj(Q^{−1} · (W, b)) = γ^k_red(W^red, b^red) + U

We conclude that gradient descent with initial values (W, b) is equivalent to gradient descent with initial values Q^{−1} · (W, b), since at any stage we can apply Q^{±1} to move from one to the other. Furthermore, projected gradient descent with initial values Q^{−1} · (W, b) is equivalent to gradient descent on Param(n^red) with initial values (W^red, b^red), since at any stage we can move from one to the other by ±U. Neither Q nor U depends on k.

7 Experiments

In addition to the theoretical results in this work, we provide an implementation of Algorithm 1, in order to validate the claims of Theorems 6 and 8 empirically, as well as to quantify real-world performance. Full experimental details are in Appendix E.

(1) Empirical verification of Theorem 6.
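The compression step in this and the following experiments is Algorithm 1 (QR-compress). A minimal NumPy sketch of the algorithm is given below for reference; it is a simplified version written for exposition, not the implementation described in Appendix E, and the variable names are ours. It assumes, as in Section 3, that weights[i] has shape (n_{i+1}, n_i) (that is, W_{i+1} in the notation above) and biases[i] has length n_{i+1}.

import numpy as np

def qr_compress(weights, biases):
    """Sketch of Algorithm 1 (QR-compress). Returns (Q, W_red, b_red): the
    orthogonal change-of-basis matrices and the reduced parameters."""
    L = len(weights)
    Q, W_red, b_red = [], [], []
    A = np.concatenate([biases[0][:, None], weights[0]], axis=1)  # A_1 = [b_1  W_1]
    for i in range(L - 1):
        Qi, R = np.linalg.qr(A, mode="complete")     # complete QR: A_i = Q_i Inc_i R_i
        n_red = min(A.shape)                         # reduced width n^red_i
        Ri = R[:n_red, :]                            # upper-triangular block R_i
        Q.append(Qi)
        b_red.append(Ri[:, 0])                       # reduced bias for layer i
        W_red.append(Ri[:, 1:])                      # reduced weights for layer i
        WQ = weights[i + 1] @ Qi[:, :n_red]          # W_{i+1} Q_i Inc_i
        A = np.concatenate([biases[i + 1][:, None], WQ], axis=1)  # A_{i+1}
    b_red.append(A[:, 0])                            # reduced bias for last layer
    W_red.append(A[:, 1:])                           # reduced weights for last layer
    return Q, W_red, b_red

For the widths n = (1, 6, 7, 1) used in this experiment, the sketch returns reduced weight matrices of shapes (2, 1), (3, 2), and (1, 3), matching the reduced widths n^red = (1, 2, 3, 1).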
We learn the function f (x) = e−x 2 from samples284 using a radial neural network with widths n = (1, 6, 7, 1) and activation the radial shifted285 sigmoid h(x) = 1/(1 + e−x+s). Applying QR-compress gives a compressed radial neural286 network with widths nred = (1, 2, 3, 1). Theorem 6 implies that the respective neural287 functions F and Fred are equal. Over 10 random initializations, the mean absolute error is288 negligible up to machine precision: (1/N)∑j |F(xj)− Fred(xj)| = 1.31 · 10−8 ± 4.45 · 10−9.289 (2) Empirical verification of Theorem 8. The claim is that training the transformed model290 with parameters Q−1 · (W, b) and objective L by projected gradient descent coincides291 with training the reduced model with parameters (Wred, bred) and objective Lred by292 usual gradient descent. We verified this on synthetic data as above. Over 10 random293 initializations, the loss functions after training match: |L − Lred| = 4.02 · 10−9 ± 7.01 · 10−9.294 (3) The compressed model trains faster. Our compression method may be applied before295 training to produce a smaller model class which trains faster without sacrificing accuracy.296 We demonstrate this in learning the function f : R2 → R2 sending (t1, t2) to (e−t 2 1 , e−t 2 2)297 using a radial neural network with widths n = (2, 16, 64, 128, 16, 2) and activation the298 radial sigmoid h(r) = 1/(1 + e−r). Applying QR-compress gives a compressed network299 with widths nred = (2, 3, 4, 5, 6, 2). We trained both models until the training loss was300 ≤ 0.01. Over 10 random initializations on our system, the reduced network trained in301 15.32± 2.53 seconds and the original network trained in 31.24± 4.55 seconds.302 8 Conclusions and Discussion303 This paper demonstrates that radial neural networks are universal approximators and that304 their parameter spaces exhibit a rich symmetry group, leading to a model compression305 algorithm. The results of this work combine to build a theoretical foundation for the use of306 radial neural networks, and suggest that radial neural networks hold promise for wider307 practical applicability. Furthermore, this work makes an argument for considering the308 advantages of non-pointwise nonlinearities in neural networks.309 There are two main limitations of our results, each providing an opportunity for future310 work. First, our universal approximation constructions currently work only for Step-ReLU311 radial rescaling radial activations; it would be desirable to generalize to other activations.312 Additionally, Theorem 6 achieves compression only for networks whose widths satisfy313 ni > ni−1 + 1 for some i. Neural networks which do not have increasing widths anywhere314 in their architecture, such as encoders, would not be compressible.315 Further extensions of this work include: First, little is currently known about the stabil-316 ity properties of radial neural networks during training, as well as their sensitivity to317 initialization. Second, radial rescaling activations provide an extreme case of symmetry;318 there may be benefits to combining radial and pointwise activations within a single net-319 work, for example, through ‘block’ radial rescaling functions. Our techniques may yield320 weaker compression properties for more general radial basis functions networks; radial321 neural networks may be the most compressible such networks. 
Third, the parameter space322 symmetries may provide a key ingredient in analyzing the gradient flow dynamics of323 radial neural networks and computation of conserved quantities. Fourth, radial rescaling324 activations can be used within convolutional or group-equivariant NNs. Finally, based325 on the theoretical advantages laid out in this paper, future work will explore empirically326 applications in which we expect radial networks to outperform alternate methods. Such327 potential applications include data spaces with circular or distance-based class boundaries.328 References329 [1] Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? arXiv:1312.6184,330 2013. 3331 [2] Erkao Bao and Linqi Song. Equivariant neural networks and equivarification.332 arXiv:1906.07172, 2019. 3333 [3] Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. What is334 the state of neural network pruning? arXiv:2003.03033, 2020. 3335 [4] David S Broomhead and David Lowe. Radial basis functions, multi-variable functional336 interpolation and adaptive networks. Technical report, Royal Signals and Radar337 Establishment Malvern (United Kingdom), 1988. 2, 3338 [5] Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression.339 In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery340 and Data Mining, pages 535–541, 2006. 3341 [6] Yu Cheng, X Yu Felix, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shih-Fu342 Chang. Fast neural networks with circulant projections. arXiv:1502.03436, 2, 2015. 3343 [7] Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu344 Chang. An exploration of parameter redundancy in deep networks with circulant345 projections. In Proceedings of the IEEE international conference on computer vision, pages346 2857–2865, 2015. 3347 [8] Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. A survey of model compression348 and acceleration for deep neural networks. arXiv:1710.09282, 2017. 3349 [9] Benjamin Chidester, Minh N. Do, and Jian Ma. Rotation equivariance and invariance350 in convolutional neural networks. arXiv:1805.12301, 2018. 3351 [10] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep352 network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289,353 2015. 1354 [11] Taco S. Cohen and Max Welling. Group equivariant convolutional networks. In355 International conference on machine learning (ICML), pages 2990–2999, 2016. 3356 [12] Taco S Cohen and Max Welling. Steerable CNNs. In Proceedings of the International357 Conference on Learning Representations (ICLR), 2017. 3358 [13] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equiv-359 ariant convolutional networks and the icosahedral CNN. In Proceedings of the 36th360 International Conference on Machine Learning (ICML), volume 97, pages 1321–1330, 2019.361 3362 [14] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathe-363 matics of control, signals and systems, 2(4):303–314, 1989. 1, 3364 [15] Congyue Deng, O. Litany, Yueqi Duan, A. Poulenard, A. Tagliasacchi, and L. Guibas.365 Vector Neurons: A General Framework for SO(3)-Equivariant Networks. 2021366 IEEE/CVF International Conference on Computer Vision (ICCV), 2021. doi: 10.1109/367 iccv48922.2021.01198. 3368 [16] Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symme-369 try in convolutional neural networks. 
In International Conference on Machine Learning370 (ICML), 2016. 3371 [17] Xin Dong, Shangyu Chen, and Sinno Jialin Pan. Learning to prune deep neural372 networks via layer-wise optimal brain surgeon. arXiv preprint arXiv:1705.07565, 2017.373 3374 [18] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse,375 trainable neural networks. arXiv:1803.03635, 2018. 3376 [19] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep377 convolutional networks using vector quantization. arXiv:1412.6115, 2014. 3378 [20] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neu-379 ral networks with pruning, trained quantization and huffman coding. arXiv:1510.00149,380 2015. 3381 [21] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli382 Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J.383 Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew384 Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre385 Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi,386 Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature,387 585(7825):357–362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https:388 //doi.org/10.1038/s41586-020-2649-2. 35389 [22] Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal390 brain surgeon. Morgan Kaufmann, 1993. 3391 [23] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural392 network. arXiv:1503.02531, 2015. 3393 [24] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural394 networks, 4(2):251–257, 1991. 3395 [25] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang,396 Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient con-397 volutional neural networks for mobile vision applications. arXiv:1704.04861, 2017.398 3399 [26] George Jeffreys and Siu-Cheong Lau. Kähler Geometry of Quiver Varieties and400 Machine Learning. arXiv:2101.11487, 2021. URL http://arxiv.org/abs/2101.11487.401 3402 [27] Ehud D Karnin. A simple procedure for pruning back-propagation trained neural403 networks. IEEE transactions on neural networks, 1(2):239–242, 1990. 3404 [28] Patrick Kidger and Terry Lyons. Universal approximation with deep narrow networks.405 In Conference on learning theory, pages 2306–2327. PMLR, 2020. 3406 [29] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-407 normalizing neural networks. Advances in neural information processing systems, 30,408 2017. 1409 [30] Risi Kondor and Shubhendu Trivedi. On the Generalization of Equivariance and410 Convolution in Neural Networks to the Action of Compact Groups. In International411 conference on machine learning (ICML), 2018. 3412 [31] Leon Lang and Maurice Weiler. A Wigner-Eckart theorem for group equivariant413 convolution kernels. In International Conference on Learning Representations (ICLR), 2021.414 3415 [32] Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempit-416 sky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition.417 arXiv:1412.6553, 2014. 3418 [33] Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in419 neural information processing systems, pages 598–605, 1990. 3420 [34] Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip HS Torr. 
A421 signal propagation perspective for pruning neural networks at initialization. arXiv422 preprint arXiv:1906.06307, 2019. 3423 [35] Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, and Rogerio424 Feris. Fully-adaptive feature sharing in multi-task networks with applications in425 person attribute classification. In Proceedings of the IEEE conference on computer vision426 and pattern recognition (CVPR), pages 5334–5343, 2017. 3427 [36] Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The428 expressive power of neural networks: A view from the width. Advances in neural429 information processing systems, 30, 2017. 3430 [37] Mirco Milletarí, Thiparat Chotibut, and Paolo E Trevisanutto. Mean field theory of431 activation functions in deep neural networks. arXiv preprint arXiv:1805.08786, 2018. 1432 [38] Diganta Misra. Mish: A self regularized non-monotonic activation function. arXiv433 preprint arXiv:1908.08681, 2019. 1434 [39] Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Prun-435 ing convolutional neural networks for resource efficient inference. arXiv preprint436 arXiv:1611.06440, 2016. 3437 [40] Jooyoung Park and Irwin W Sandberg. Universal approximation using radial-basis-438 function networks. Neural computation, 3(2):246–257, 1991. 3439 [41] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory440 Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban441 Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan442 Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith443 Chintala. Pytorch: An imperative style, high-performance deep learning library. In444 H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett,445 editors, Advances in Neural Information Processing Systems (NeurIPS) 32, pages446 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/447 9015-pytorch-an-imperative-style-high-performance-deep-learning-library.448 pdf. 35449 [42] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions.450 arXiv preprint arXiv:1710.05941, 2017. 1451 [43] Siamak Ravanbakhsh. Universal equivariant multilayer perceptrons. In International452 Conference on Machine Learning, pages 7996–8006. PMLR, 2020. 3453 [44] Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Equivariance through454 parameter-sharing. In International Conference on Machine Learning, pages 2892–2901.455 PMLR, 2017. 3456 [45] Roberto Rigamonti, Amos Sironi, Vincent Lepetit, and Pascal Fua. Learning separable457 filters. In Proceedings of the IEEE conference on computer vision and pattern recognition,458 pages 2754–2761, 2013. 3459 [46] Frank Rosenblatt. The perceptron: a probabilistic model for information storage and460 organization in the brain. Psychological review, 65(6):386, 1958. 1461 [47] Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between462 capsules. arXiv:1710.09829, 2017. 2, 3463 [48] Thiago Serra, Abhinav Kumar, and Srikumar Ramalingam. Lossless compression464 of deep neural networks. In International Conference on Integration of Constraint Pro-465 gramming, Artificial Intelligence, and Operations Research, pages 417–430. Springer, 2020.466 3467 [49] Thiago Serra, Xin Yu, Abhinav Kumar, and Srikumar Ramalingam. Scaling up exact468 neural network compression by relu stability. Advances in Neural Information Processing469 Systems, 34, 2021. 
3470 [50] Sho Sonoda and Noboru Murata. Neural network with unbounded activation func-471 tions is universal approximator. Applied and Computational Harmonic Analysis, 43(2):472 233–268, 2017. 3473 [51] Gustav Sourek, Filip Zelezny, and Ondrej Kuzelka. Lossless compression of structured474 convolutional models via lifting. arXiv preprint arXiv:2007.06567, 2020. 3475 [52] Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks476 with low-rank regularization. arXiv:1511.06067, 2015. 3477 [53] Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before478 training by preserving gradient flow. arXiv preprint arXiv:2002.07376, 2020. 3479 [54] Ming-Xi Wang and Yang Qu. Approximation capabilities of neural networks on480 unbounded domains. Neural Networks, 145:56–67, 2022. 3481 [55] Maurice Weiler and Gabriele Cesa. General E(2)-Equivariant Steerable CNNs. Confer-482 ence on Neural Information Processing Systems (NeurIPS), 2019. 2, 3, 4483 [56] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen.484 3D steerable CNNs: Learning rotationally equivariant features in volumetric data.485 Proceedings of the 32nd International Conference on Neural Information Processing Systems486 (NeurIPS), 2018. 2, 3487 [57] Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for488 rotation equivariant CNNs. In Proceedings of the IEEE Conference on Computer Vision489 and Pattern Recognition (CVPR), pages 849–858, 2018. 2, 3490 [58] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow.491 Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the492 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5028–5037,493 2017. 3494 [59] Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized495 convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference496 on Computer Vision and Pattern Recognition (CVPR), pages 4820–4828, 2016. 3497 [60] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks.498 Constructive Approximation, 55(1):407–474, 2022. 3499 [61] Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad,500 and Yanzhi Wang. A systematic DNN weight pruning framework using alternating501 direction method of multipliers. In Proceedings of the European Conference on Computer502 Vision (ECCV), pages 184–199, 2018. 3503 Checklist504 1. For all authors...505 (a) Do the main claims made in the abstract and introduction accurately reflect506 the paper’s contributions and scope? [Yes]507 (b) Did you describe the limitations of your work? [Yes] See Section 8.508 (c) Did you discuss any potential negative societal impacts of your work? [N/A]509 Our work is theoretical and does not hold specific risks of negative impacts.510 (d) Have you read the ethics review guidelines and ensured that your paper511 conforms to them? [Yes]512 2. If you are including theoretical results...513 (a) Did you state the full set of assumptions of all theoretical results? [Yes]514 (b) Did you include complete proofs of all theoretical results? [Yes] Most of the515 proofs appear in the supplementary material.516 3. 
If you ran experiments...517 (a) Did you include the code, data, and instructions needed to reproduce the518 main experimental results (either in the supplemental material or as a URL)?519 [Yes]520 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how521 they were chosen)? [Yes]522 (c) Did you report error bars (e.g., with respect to the random seed after running523 experiments multiple times)? [Yes]524 (d) Did you include the total amount of compute and the type of resources used525 (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix E.526 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new527 assets...528 (a) If your work uses existing assets, did you cite the creators? [Yes]529 (b) Did you mention the license of the assets? [N/A]530 (c) Did you include any new assets either in the supplemental material or as a531 URL? [N/A]532 (d) Did you discuss whether and how consent was obtained from people whose533 data you’re using/curating? [N/A]534 (e) Did you discuss whether the data you are using/curating contains personally535 identifiable information or offensive content? [N/A]536 5. If you used crowdsourcing or conducted research with human subjects...537 (a) Did you include the full text of instructions given to participants and screen-538 shots, if applicable? [N/A]539 (b) Did you describe any potential participant risks, with links to Institutional540 Review Board (IRB) approvals, if applicable? [N/A]541 (c) Did you include the estimated hourly wage paid to participants and the total542 amount spent on participant compensation? [N/A]543 A Organization of the appendices544 This paper is a contribution to the mathematical foundations of machine learning, and our545 results are motivated by expanding the applicability and performance of neural networks.546 At the same time, we give precise mathematical formulations of our results and proofs.547 The purposes of these appendices are several:548 1. To clarify the mathematical conventions and terminology, thus making the paper549 more accessible.550 2. To provide full proofs of the main results.551 3. To develop context around various construction appearing in the main text.552 4. To discuss in detail examples, special cases, and generalizations of our results.553 We now give a summary of the contents of the appendices.554 Appendix B contains proofs the universal approximation results (Theorems 3 and 5) stated555 in Section 4 of the main text, as well as proofs of additional bounded width results.556 The proofs use notation given in Appendix B.1, and rely on preliminary topological557 considerations given in Appendix B.2.558 In Appendix C, we give a proof of the model compression result given in Theorem 6, which559 appears in Section 5. For clarity and background we begin the appendix with a discussion560 of the version of the QR decomposition relevant for our purposes (Appendix C.1). We also561 establish elementary properties of radial rescaling activations (Appendix C.2).562 The focus of Appendix D is projected gradient descent, elaborating on Section 6. We563 first prove a result on the interaction of gradient descent and orthogonal transformations564 (Appendix D.1), before formulating projected gradient descent in more detail (Appendix565 D.2), and introducing the so-called interpolating space (Appendix D.3). 
We restate Theorem 8 in more convenient notation (Appendix D.4) before proceeding to the proof (Appendix D.5).
Appendix E contains implementation details for the experiments summarized in Section 7. Our implementations use shifted radial rescaling activations, which we formulate in Appendix E.1.
Appendix F explains the connection between our constructions and radial basis functions networks. While radial neural networks turn out to be a specific type of radial basis functions network, our universality results are not implied by those for general radial basis functions networks.

B Universal approximation proofs and additional results

In this section, we provide full proofs of the universal approximation (UA) results for radial neural networks, as stated in Section 4. In order to do so, we first clarify our notational conventions (Appendix B.1), and collect basic topological results (Appendix B.2).

B.1 Notation

Recall that, for a point c in the Euclidean space R^n and a positive real number r, we denote the r-ball around c by B_r(c) = {x ∈ R^n : |x − c| < r}. All networks in this section have the Step-ReLU radial rescaling activation function, defined as:
ρ : R^n → R^n, z ↦ z if |z| ≥ 1, and z ↦ 0 otherwise.
Throughout, ◦ denotes the composition of functions. We identify a linear map with a corresponding matrix (in the standard bases). In the case of linear maps, the operation ◦ can be identified with matrix multiplication. Recall also that an affine map L : R^n → R^m is one of the form L(x) = Ax + b for a matrix A ∈ R^{m×n} and b ∈ R^m.

B.2 Topology

Let K be a compact subset of R^n and let f : K → R^m be a continuous function.
Lemma 9. For any ε > 0, there exist c_1, . . . , c_N ∈ K and r_1, . . . , r_N ∈ (0, 1) such that, first, the union of the balls B_{r_i}(c_i) covers K; and second, for all i, we have f(B_{r_i}(c_i) ∩ K) ⊆ B_ε(f(c_i)).
Proof. The continuity of f implies that for each c ∈ K, there exists r = r_c such that f(B_{r_c}(c) ∩ K) ⊆ B_ε(f(c)). The subsets B_{r_c}(c) ∩ K form an open cover of K. The compactness of K implies that there is a finite subcover. The result follows.
We also prove a variation of Lemma 9 that additionally guarantees that none of the balls in the cover of K contains the center point of another ball.
Lemma 10. For any ε > 0, there exist c_1, . . . , c_M ∈ K and r_1, . . . , r_M ∈ (0, 1) such that, first, the union of the balls B_{r_i}(c_i) covers K; second, for all i, we have f(B_{r_i}(c_i)) ⊆ B_ε(f(c_i)); and, third, |c_i − c_j| ≥ r_i for all j ≠ i.
Proof. Because f is continuous on a compact domain, it is uniformly continuous. So, there exists r > 0 such that f(B_r(c) ∩ K) ⊆ B_ε(f(c)) for each c ∈ K. Because K is compact it has finite volume, and so does B_{r/2}(K) = ⋃_{c ∈ K} B_{r/2}(c). Hence, there exists a finite maximal packing of B_{r/2}(K) with balls of radius r/2, that is, a collection c_1, . . . , c_M ∈ B_{r/2}(K) such that, for all i, B_{r/2}(c_i) ⊆ B_{r/2}(K) and, for all j ≠ i, B_{r/2}(c_i) ∩ B_{r/2}(c_j) = ∅. The first condition implies that c_i ∈ K. The second condition implies that |c_i − c_j| ≥ r. Finally, we argue that K ⊆ ⋃_{i=1}^M B_r(c_i). To see this, suppose, for a contradiction, that x ∈ K does not belong to ⋃_{i=1}^M B_r(c_i). Then B_{r/2}(c_i) ∩ B_{r/2}(x) = ∅ for all i, and x could be added to the packing, which contradicts the fact that the packing was chosen to be maximal. So the union of the balls B_r(c_i) covers K.
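To make Lemma 9 concrete, here is a small numerical sketch that constructs such a finite cover for one illustrative choice of f, K, and ε. The function, the grid resolution, and the greedy selection below are placeholder choices of ours, not part of the lemma or of the paper's constructions.

```python
import numpy as np

# Illustrative setup: K = unit square sampled on a grid, f(x, y) = sin(3x)cos(3y), eps = 0.2.
eps = 0.2
axis = np.linspace(0.0, 1.0, 41)
grid = np.stack(np.meshgrid(axis, axis), axis=-1).reshape(-1, 2)
fvals = (np.sin(3 * grid[:, 0]) * np.cos(3 * grid[:, 1])).reshape(-1, 1)

centers, radii = [], []
uncovered = np.ones(len(grid), dtype=bool)
while uncovered.any():
    idx = np.flatnonzero(uncovered)[0]      # any uncovered sample point serves as the next center c_i
    c, fc = grid[idx], fvals[idx]
    for r in np.linspace(0.5, 0.03, 20):    # largest tested r with f(B_r(c) ∩ K) ⊆ B_eps(f(c))
        in_ball = np.linalg.norm(grid - c, axis=1) < r
        if np.all(np.abs(fvals[in_ball] - fc) < eps):
            break
    centers.append(c)
    radii.append(r)
    uncovered &= ~in_ball                   # the sample points inside B_r(c) are now covered

print(f"N = {len(centers)} balls cover all {len(grid)} sample points of K")
```

The minimal number of balls in such a cover is the quantity N(f, K, ε) introduced in Definition 11 below; the greedy count printed here is only an upper bound for it.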
We turn our attention to the minimal choices of N and M in Lemmas 9 and 10.
Definition 11. Given a continuous function f : K → R^m and ε > 0, let N(f, K, ε) be the minimal choice of N in Lemma 9, and let M(f, K, ε) be the minimal choice of M in Lemma 10.
Observe that M(f, K, ε) ≥ N(f, K, ε). In many cases, it is possible to give explicit bounds for the constants N(f, K, ε) and M(f, K, ε). As an illustration, we give the argument in the case that K is the closed unit cube in R^n and f : K → R^m is Lipschitz continuous.
Proposition 12. Let K = [0, 1]^n ⊂ R^n be the (closed) unit cube and let f : K → R^m be Lipschitz continuous with Lipschitz constant R. For any ε > 0, we have:
N(f, K, ε) ≤ ⌈R√n / (2ε)⌉^n and M(f, K, ε) ≤ (Γ(n/2 + 1) / π^{n/2}) (2 + 2R/ε)^n.
Proof. For the first inequality, observe that the unit cube can be covered with ⌈R√n / (2ε)⌉^n cubes of side length 2ε/(R√n). Each such cube is contained in a ball of radius ε/R centered at the center of the cube. (In general, a cube of side length a in R^n is contained in a ball of radius a√n/2.) Lipschitz continuity implies that, for all x, x′ ∈ K, if |x − x′| < ε/R then |f(x) − f(x′)| ≤ R|x − x′| < ε.
For the second inequality, let r = ε/R. Lipschitz continuity implies that, for all x, x′ ∈ K, if |x − x′| < r then |f(x) − f(x′)| ≤ R|x − x′| < ε. The n-dimensional volume of the set of points with distance at most r/2 to the unit cube is vol(B_{r/2}(K)) ≤ (1 + r)^n. The volume of a ball with radius r/2 is vol(B_{r/2}(0)) = (π^{n/2} / Γ(n/2 + 1)) (r/2)^n. Hence, any packing of B_{r/2}(K) with balls of radius r/2 consists of at most
vol(B_{r/2}(K)) / vol(B_{r/2}(0)) ≤ (Γ(n/2 + 1) / π^{n/2}) (2 + 2R/ε)^n
such balls. So there also exists a maximal packing with at most that many balls. This packing can be used in the proof of Lemma 10, which implies that it is a bound on M(f, K, ε).
We note in passing that any differentiable function f : K → R^n on a compact subset K of R^n is Lipschitz continuous. Indeed, the compactness of K implies that there exists R such that |f′(x)| ≤ R for all x ∈ K. Then one can take R to be the Lipschitz constant of f.

B.3 Proof of Theorem 3: UA for asymptotically affine functions

In this section, we restate and prove Theorem 3, which shows that radial neural networks are universal approximators of asymptotically affine functions. We recall the definition of such functions:
Definition 13. A function f : R^n → R^m is asymptotically affine if there exists an affine function L : R^n → R^m such that, for all ε > 0, there exists a compact set K ⊂ R^n such that |L(x) − f(x)| < ε for all x ∈ R^n \ K. We say that L is the limit of f.
Remark 14. An asymptotically linear function is defined in the same way, except L is taken to be linear (i.e., given just by matrix multiplication, without translation). Hence any asymptotically linear function is in particular an asymptotically affine function, and Theorem 3 applies to asymptotically linear functions as well.
Given an asymptotically affine function f : R^n → R^m and ε > 0, let K be a compact set as in Definition 13. We apply Lemma 9 to the restriction f|_K of f to K and produce a minimal constant N = N(f|_K, K, ε) as in Definition 11. We write simply N(f, K, ε) for this constant.
Theorem 3 (Universal approximation). Let f : R^n → R^m be an asymptotically affine function. For any ε > 0, there exists a compact set K ⊂ R^n and a function F : R^n → R^m such that:
1. F is the feedforward function of a radial neural network with N = N(f, K, ε) layers whose hidden widths are (n + 1, n + 2, . . . , n + N).
2. For any x ∈ R^n, we have |F(x) − f(x)| < ε.
Proof. By the hypothesis on f, there exists an affine function L : R^n → R^m and a compact set K ⊂ R^n such that |L(x) − f(x)| < ε for all x ∈ R^n \ K. Abbreviate N(f, K, ε) by N. As in Lemma 9, fix c_1, . . . , c_N ∈ K and r_1, . . . , r_N ∈ (0, 1) such that, first, the union of the balls B_{r_i}(c_i) covers K and, second, for all i, we have f(B_{r_i}(c_i)) ⊆ B_ε(f(c_i)). Let U = ⋃_{i=1}^N B_{r_i}(c_i), so that K ⊂ U. Define F : R^n → R^m as:
F(x) = L(x) if x ∉ U, and F(x) = f(c_j), where j is the smallest index with x ∈ B_{r_j}(c_j), otherwise.
If x ∉ U, then |F(x) − f(x)| = |L(x) − f(x)| < ε. Hence suppose x ∈ U. Let j be the smallest index such that x ∈ B_{r_j}(c_j). Then F(x) = f(c_j), and, by the choice of r_j, we have |F(x) − f(x)| = |f(c_j) − f(x)| < ε.
We proceed to show that F is the feedforward function of a radial neural network. Let e_1, . . . , e_N be orthonormal basis vectors extending R^n to R^{n+N}. We regard each R^{n+i−1} as a subspace of R^{n+i} by embedding into the first n + i − 1 coordinates. For i = 1, . . . , N, we set h_i = √(1 − r_i^2) and define the following affine transformations:
T_i : R^{n+i−1} → R^{n+i}, z ↦ z − c_i + h_i e_i,
S_i : R^{n+i} → R^{n+i}, z ↦ z − (1 + h_i^{−1})⟨e_i, z⟩ e_i + c_i + e_i,
where ⟨e_i, z⟩ is the coefficient of e_i in z. Consider the radial neural network with widths (n, n + 1, . . . , n + N, m), whose affine transformations and activations are given by:
• For i = 1, . . . , N, the affine transformation from layer i − 1 to layer i is given by z ↦ T_i ◦ S_{i−1}(z), where S_0 = id_{R^n}.
• The activation function at the i-th hidden layer is Step-ReLU on R^{n+i}, that is: ρ_i : R^{n+i} → R^{n+i}, z ↦ z if |z| ≥ 1, and z ↦ 0 otherwise.
• The affine transformation from layer i = N to the output layer is z ↦ Φ_{L,f,c} ◦ S_N(z), where Φ_{L,f,c} is the affine transformation given by:
Φ_{L,f,c} : R^{n+N} → R^m, x + Σ_{i=1}^N a_i e_i ↦ L(x) + Σ_{i=1}^N a_i (f(c_i) − L(c_i)),
which can be shown to be affine when L is affine. Indeed, write L(x) = Ax + b where A is a matrix in R^{m×n} and b ∈ R^m is a vector. Then Φ_{L,f,c} is the composition of the linear map given by the matrix
[A | f(c_1) − L(c_1) | f(c_2) − L(c_2) | · · · | f(c_N) − L(c_N)] ∈ R^{m×(n+N)}
and translation by b ∈ R^m. Note that we regard each f(c_i) − L(c_i) ∈ R^m as a column vector in the matrix above.
We claim that the feedforward function of the above radial neural network is exactly F. To show this, we first state a lemma, whose (omitted) proof is an elementary computation.
Lemma 3.1. For i = 1, . . . , N, the composition S_i ◦ T_i is the embedding R^{n+i−1} ↪ R^{n+i}.
Next, recursively define G_i : R^n → R^{n+i} via G_i = S_i ◦ ρ_i ◦ T_i ◦ G_{i−1}, where G_0 = id_{R^n}. The function G_i admits a direct formulation:
Proposition 3.2. For i = 0, 1, . . . , N, we have: G_i(x) = x if x ∉ ⋃_{j=1}^i B_{r_j}(c_j), and G_i(x) = c_j + e_j, where j ≤ i is the smallest index with x ∈ B_{r_j}(c_j), otherwise.
Proof. We proceed by induction. The base step i = 0 is immediate. For the induction step, assume the claim is true for i − 1, where 0 ≤ i − 1 < N. There are three cases to consider.
Case 1. Suppose x ∉ ⋃_{j=1}^i B_{r_j}(c_j). Then in particular x ∉ ⋃_{j=1}^{i−1} B_{r_j}(c_j), so the induction hypothesis implies that G_{i−1}(x) = x. Additionally, x ∉ B_{r_i}(c_i), so:
|T_i(x)| = |x − c_i + h_i e_i| = √(|x − c_i|^2 + h_i^2) ≥ √(r_i^2 + 1 − r_i^2) = 1.
Using the definition of ρ_i and Lemma 3.1, we compute:
G_i(x) = S_i ◦ ρ_i ◦ T_i ◦ G_{i−1}(x) = S_i ◦ ρ_i ◦ T_i(x) = S_i ◦ T_i(x) = x.
Case 2. Suppose x ∈ B_{r_j}(c_j) \ ⋃_{k=1}^{j−1} B_{r_k}(c_k) for some j ≤ i − 1.
Then the induction hypothesis implies that G_{i−1}(x) = c_j + e_j. We compute:
|T_i(c_j + e_j)| = |c_j + e_j − c_i + h_i e_i| > |e_j| = 1.
Therefore,
G_i(x) = S_i ◦ ρ_i ◦ T_i(c_j + e_j) = S_i ◦ T_i(c_j + e_j) = c_j + e_j.
Case 3. Finally, suppose x ∈ B_{r_i}(c_i) \ ⋃_{j=1}^{i−1} B_{r_j}(c_j). The induction hypothesis implies that G_{i−1}(x) = x. Since x ∈ B_{r_i}(c_i), we have:
|T_i(x)| = |x − c_i + h_i e_i| = √(|x − c_i|^2 + h_i^2) < √(r_i^2 + 1 − r_i^2) = 1.
Therefore:
G_i(x) = S_i ◦ ρ_i ◦ T_i(x) = S_i(0) = c_i + e_i.
This completes the proof of the proposition.
Finally, we show that the function F defined at the beginning of the proof is the feedforward function of the above radial neural network. The computation is elementary:
F_feedforward = Φ_{L,f,c} ◦ S_N ◦ ρ_N ◦ T_N ◦ S_{N−1} ◦ ρ_{N−1} ◦ T_{N−1} ◦ · · · ◦ S_1 ◦ ρ_1 ◦ T_1 = Φ_{L,f,c} ◦ G_N = F,
where the first equality follows from the definition of the feedforward function, the second from the definition of G_N, and the last from the case i = N of Proposition 3.2 together with the definition of Φ_{L,f,c}. This completes the proof of the theorem.

B.4 Proof of Theorem 5: bounded width UA for asymptotically affine functions

We restate and prove Theorem 5, which strengthens Theorem 3 by providing a bounded width radial neural network approximation of any asymptotically affine function.
Theorem 5. Let f : R^n → R^m be an asymptotically affine function. For any ε > 0, there exists a compact set K ⊂ R^n and a function F : R^n → R^m such that:
1. F is the feedforward function of a radial neural network with N = N(f, K, ε) hidden layers whose widths are all n + m + 1.
2. For any x ∈ R^n, we have |F(x) − f(x)| < ε.
Proof. By the hypothesis
1. What is the focus and contribution of the paper on deep radial neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its proof and decomposition method?
3. What are the weaknesses of the paper, especially regarding the limitation of the main theorem?
4. Do you have any suggestions or recommendations for improving the paper's content or results?
5. What are the implications of the paper's findings, and how do they contribute to the broader field of deep learning and approximation theory?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This study introduces a new deep radial neural network, shows its density in the space of 'asymptotically affine' continuous functions w.r.t. the sup norm, and presents a compression algorithm similar in flavor to the QR decomposition.

Strengths And Weaknesses
Strength: The proof of the main theorem is constructive as well as simple. In traditional approximation theory, the decomposition of a function in the 'width' direction has often been investigated, that is, decomposition into coefficients and bases (or frames). On the other hand, deep learning decomposes a function into function composites, a method that has not been well investigated in traditional approximation theory. This study provides a concrete example of how to decompose a nonlinear function in the 'depth' direction, and I consider this highly valuable as a piece of pioneering research.
Weakness: As the authors also point out in the conclusion and discussion section, the main theorem (universality) holds only for the Step-ReLU function.

Questions
Suggestion: The authors introduce a new class of functions: asymptotically affine. As pointed out in l.158, this class covers compactly supported functions. As a consequence, Theorem 3 further implies the so-called cc-universality (i.e., density in the space of all continuous functions w.r.t. the topology of compact convergence). I recommend that the authors clearly state that cc-universality is a corollary of Theorem 3.

Limitations
The authors have addressed the limitation in Section 8.
NIPS
Title Universal approximation and model compression for radial neural networks

Abstract We introduce a class of fully-connected neural networks whose activation functions, rather than being pointwise, rescale feature vectors by a function depending only on their norm. We call such networks radial neural networks, extending previous work on rotation equivariant networks that considers rescaling activations in less generality. We prove universal approximation theorems for radial neural networks, including in the more difficult cases of bounded widths and unbounded domains. Our proof techniques are novel, distinct from those in the pointwise case. Additionally, radial neural networks exhibit a rich group of orthogonal change-of-basis symmetries on the vector space of trainable parameters. Factoring out these symmetries leads to a practical lossless model compression algorithm. Optimization of the compressed model by gradient descent is equivalent to projected gradient descent for the full model.

1 Introduction

Inspired by biological neural networks, the theory of artificial neural networks has largely focused on pointwise (or "local") nonlinear layers [46, 14], in which the same function σ : R → R is applied to each coordinate independently:
R^n → R^n, v = (v_1, . . . , v_n) ↦ (σ(v_1), σ(v_2), . . . , σ(v_n)). (1.1)
In networks with pointwise nonlinearities, the standard basis vectors in R^n can be interpreted as "neurons" and the nonlinearity as a "neuron activation." Research has generally focused on finding functions σ which lead to more stable training, have less sensitivity to initialization, or are better adapted to certain applications [42, 38, 37, 10, 29]. Many σ have been considered, including sigmoid, ReLU, arctangent, ELU, Swish, and others.
However, by setting aside the biological metaphor, it is possible to consider a much broader class of nonlinearities, which are not necessarily pointwise, but instead depend simultaneously on many coordinates. Freedom from the pointwise assumption allows one to design activations that yield expressive function classes with specific advantages. Additionally, certain choices of non-pointwise activations maximize symmetry in the parameter space of the network, leading to compressibility and other desirable properties.
In this paper, we introduce radial neural networks, which employ non-pointwise nonlinearities called radial rescaling activations.
Such networks enjoy several provable properties including high model compressibility, symmetry in optimization, and universal approximation. Radial rescaling activations are defined by rescaling each vector by a scalar that depends only on the norm of the vector:
ρ : R^n → R^n, v ↦ λ(|v|) v, (1.2)
where λ is a scalar-valued function of the norm. Whereas in the pointwise setting, only the linear layers mix information between different components of the latent features, for radial rescaling, all coordinates of the activation output vector are affected by all coordinates of the activation input vector. The inherent geometric symmetry of radial rescalings makes them particularly useful for designing equivariant neural networks [55, 47, 56, 57].
We note that radial neural networks constitute a simple and previously unconsidered type of multilayer radial basis functions network [4], namely, one where the number of hidden activation neurons (often denoted N) in each layer is equal to one. Indeed, pre-composing equation 1.2 with a translation and post-composing with a linear map, one obtains a special case of the local linear model extension of a radial basis functions network.
In our first set of main results, we prove that radial neural networks are in fact universal approximators. Specifically, we demonstrate that any asymptotically affine function can be approximated with a radial neural network, suggesting potentially good extrapolation behavior. Moreover, this approximation can be done with bounded width. Our approach to proving these results departs markedly from techniques used in the pointwise case. Additionally, our result is not implied by the universality property of radial basis functions networks in general, and differs in significant ways, particularly in the bounded width property and the approximation of asymptotically affine functions.
In our second set of main results, we exploit parameter space symmetries of radial neural networks to achieve model compression. Using the fact that radial rescaling activations commute with orthogonal transformations, we develop a practical algorithm to systematically factor out orthogonal symmetries via iterated QR decompositions. This leads to another radial neural network with fewer neurons in each hidden layer. The resulting model compression algorithm is lossless: the compressed network and the original network both have the same value of the loss function on any batch of training data.
Furthermore, we prove that the loss of the compressed model after one step of gradient descent is equal to the loss of the original model after one step of projected gradient descent. As explained below, projected gradient descent involves zeroing out certain parameter values after each step of gradient descent.
Although training the original network may result in a lower loss function after fewer epochs, in many cases the compressed network takes less time per epoch to train and is faster in reaching a local minimum.
To summarize, our main contributions are:
• A formalization of radial neural networks, a new class of neural networks;
• Universal approximation results for radial neural networks, including: a) approximation of asymptotically affine functions, and b) bounded width approximation;
• Implementation of a lossless compression algorithm for radial neural networks;
• A theorem providing the precise relationship between gradient descent optimization of the original and compressed networks.

2 Related work

Radial rescaling activations. As noted, radial rescaling activations are a special case of the activations used in radial basis functions networks [4]. Radial rescaling functions have the symmetry property of preserving vector directions, and hence exhibit rotation equivariance. Consequently, examples of such functions, such as the squashing nonlinearity and Norm-ReLU, feature in the study of rotationally equivariant neural networks [55, 47, 56, 57, 26]. However, previous works apply the activation only along the channel dimension, and consider the orthogonal group O(n) only for n = 2, 3. In contrast, we consider a radial rescaling activation across the entire hidden layer, and O(n)-equivariance where n is the hidden layer dimension. Our constructions echo the vector neurons formalism [15], in which the output of a nonlinearity is a vector rather than a scalar.
Universal approximation. Neural networks of arbitrary width and sigmoid activations have long been known to be universal approximators [14]. Universality can also be achieved by bounded width networks with arbitrary depth [36], and generalizes to other activations and architectures [24, 60, 43, 50]. While most work has focused on compact domains, some recent work also considers non-compact domains [28, 54]. The techniques used for pointwise activations do not generalize to radial rescaling activations, where all activation output coordinates are affected by all input coordinates. Consequently, individual radial neural network approximators of two different functions cannot be easily combined into an approximator of the sum of the functions. The standard proof of universal approximation for radial basis functions networks requires an unbounded increase in the number of hidden activation neurons, and hence does not apply to the case of radial neural networks [40].
Groups and symmetry. Appearances of symmetry in machine learning have generally focused on symmetric input and output spaces. Most prominently, equivariant neural networks incorporate symmetry as an inductive bias and feature weight-sharing constraints based on equivariance. Examples include G-convolution, steerable CNN, and Clebsch-Gordan networks [13, 55, 11, 9, 30, 2, 58, 12, 57, 16, 31, 44]. By contrast, our approach to radial neural networks does not depend on symmetries of the input domain, output space, or feedforward mapping. Instead, we exploit parameter space symmetries and thus obtain more general results that apply to domains with no apparent symmetry.
Model compression. A major goal in machine learning is to find methods to reduce the number of trainable parameters, decrease memory usage, or accelerate inference and training [8, 61].
Our approach toward this goal differs significantly from most existing methods in that it is based on the inherent symmetry of network parameter spaces.
One prior method is weight pruning, which removes redundant weights with little loss in accuracy [20, 3, 27]. Pruning can be done during training [18] or at initialization [34, 53]. Gradient-based pruning removes weights by estimating the increase in loss resulting from their removal [33, 22, 17, 39]. A complementary approach is quantization, which decreases the bit depth of weights [59, 25, 19]. Knowledge distillation identifies a small model mimicking the performance of a larger model [5, 23, 1]. Matrix factorization methods replace fully connected layers with lower rank or sparse factored tensors [6, 7, 52, 32, 45, 35] and can often be applied before training. Our method involves a type of matrix factorization based on the QR decomposition; however, rather than aim for rank reduction, we leverage this decomposition to reduce hidden widths via change-of-basis operations on the hidden representations. Close to our method are lossless compression methods which remove stable neurons in ReLU networks [49, 48] or exploit permutation parameter space symmetry to remove neurons [51]; our compression instead follows from the symmetries of the radial rescaling activation. Finally, the compression results of [26], while conceptually similar to ours, are weaker, as (1) the unitary group action is on disjoint layers instead of moving through all layers, and (2) the results are only stated for the squashing nonlinearity.

3 Radial neural networks

In this section, we define radial rescaling functions and radial neural networks. Let h : R → R be a function. For any n ≥ 1, set:
h^(n) : R^n → R^n, h^(n)(v) = h(|v|) v/|v| for v ≠ 0, and h^(n)(0) = 0.
A function ρ : R^n → R^n is called a radial rescaling function if ρ = h^(n) for some piecewise differentiable h : R → R. Hence, ρ sends each input vector to a scalar multiple of itself, and that scalar depends only on the norm of the vector.¹ It is easy to show that radial rescaling functions commute with orthogonal transformations.
Example 1. (1) Step-ReLU, where h(r) = r if r ≥ 1 and 0 otherwise. In this case, the radial rescaling function is given by
ρ : R^n → R^n, v ↦ v if |v| ≥ 1, and v ↦ 0 if |v| < 1. (3.1)
(2) The squashing function, where h(r) = r^2/(r^2 + 1). (3) Shifted ReLU, where h(r) = max(0, r − b) for r > 0 and b is a real number. See Figure 2. We refer to [55] and the references therein for more examples and discussion of radial functions.
A radial neural network with L layers consists of: a positive integer n_i indicating the width of each layer i = 0, 1, . . . , L; the trainable parameters, comprising a matrix W_i ∈ R^{n_i × n_{i−1}} of weights and a bias vector b_i ∈ R^{n_i} for each i = 1, . . . , L; and a radial rescaling function ρ_i : R^{n_i} → R^{n_i} for each i = 1, . . . , L. We refer to the tuple n = (n_0, n_1, . . . , n_L) as the widths vector of the neural network. The hidden widths vector is n^hid = (n_1, n_2, . . . , n_{L−1}). The feedforward function F : R^{n_0} → R^{n_L} of a radial neural network is defined in the usual way as an iterated composition of affine maps and activations. Explicitly, set F_0 = id_{R^{n_0}} and recursively define the partial feedforward functions for i = 1, . . . , L:
F_i : R^{n_0} → R^{n_i}, x ↦ ρ_i(W_i ◦ F_{i−1}(x) + b_i).
Then the feedforward function is F = F_L.
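To make these definitions concrete, here is a minimal NumPy sketch of the Step-ReLU radial rescaling function and of the feedforward function F of a radial neural network. The widths, random parameters, and input below are placeholder choices of ours, not values used in the paper.

```python
import numpy as np

def step_relu(v):
    """Radial rescaling for h(r) = r if r >= 1 and 0 otherwise: keep v if |v| >= 1, else send it to 0."""
    return v if np.linalg.norm(v) >= 1.0 else np.zeros_like(v)

def feedforward(x, weights, biases, rho=step_relu):
    """F = F_L, where F_i(x) = rho_i(W_i F_{i-1}(x) + b_i); the same rho is used at every layer here."""
    z = x
    for W, b in zip(weights, biases):
        z = rho(W @ z + b)
    return z

# Toy radial network with widths vector (2, 5, 3).
rng = np.random.default_rng(0)
widths = [2, 5, 3]
weights = [rng.standard_normal((widths[i + 1], widths[i])) for i in range(len(widths) - 1)]
biases = [rng.standard_normal(widths[i + 1]) for i in range(len(widths) - 1)]
print(feedforward(np.array([0.3, -1.2]), weights, biases))
```

Unlike a pointwise nonlinearity, step_relu either keeps or zeroes the entire feature vector, depending only on its norm.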
Radial neural networks are a special type of radial basis functions network; we explain the connection in Appendix F.
Remark 2. If b_i = 0 for all i, then the feedforward function takes the form F(x) = W(µ(x) x), where µ : R^n → R is a scalar-valued function and W = W_L W_{L−1} · · · W_1 ∈ R^{n_L × n_0} is the product of the weight matrices. If any of the biases are non-zero, then the feedforward function lacks such a simple form.
¹ A function R^n → R that depends only on the norm of a vector is known as a radial function. Radial rescaling functions rescale each vector according to the radial function v ↦ λ(|v|) := h(|v|)/|v|. This explains the connection to Equation 1.2.

4 Universal Approximation

In this section, we consider two universal approximation results. The first approximates asymptotically affine functions with a network of unbounded width. The second generalizes to bounded width networks. Proofs appear in Appendix B. Throughout, B_r(c) = {x ∈ R^n : |x − c| < r} denotes the r-ball around a point c, and an affine map R^n → R^m is one of the form L(x) = Ax + b for a matrix A ∈ R^{m×n} and b ∈ R^m.

4.1 Approximation of asymptotically affine functions

A continuous function f : R^n → R^m is said to be asymptotically affine if there exists an affine map L : R^n → R^m such that, for every ε > 0, there is a compact subset K of R^n such that |L(x) − f(x)| < ε for all x ∈ R^n \ K. In particular, continuous functions with compact support are asymptotically affine. The continuity of f and compactness of K imply that, for any ε > 0, there exist c_1, . . . , c_N ∈ K and r_1, . . . , r_N ∈ (0, 1) such that, first, the union of the balls B_{r_i}(c_i) covers K and, second, for all i, we have f(B_{r_i}(c_i) ∩ K) ⊆ B_ε(f(c_i)). Let N(f, K, ε) be the minimal² choice of N.
Theorem 3 (Universal approximation). Let f : R^n → R^m be an asymptotically affine function. For any ε > 0, there exists a compact set K ⊂ R^n and a function F : R^n → R^m such that:
1. F is the feedforward function of a radial neural network with N = N(f, K, ε) layers whose hidden widths are (n + 1, n + 2, . . . , n + N).
2. For any x ∈ R^n, we have |F(x) − f(x)| < ε.
We note that the approximation in Theorem 3 is valid on all of R^n. To give an idea of the proof, first fix c_1, . . . , c_N ∈ K and r_1, . . . , r_N ∈ (0, 1) as above. Let e_1, . . . , e_N be orthonormal basis vectors extending R^n to R^{n+N}. For i = 1, . . . , N, define affine maps T_i : R^{n+i−1} → R^{n+i} and S_i : R^{n+i} → R^{n+i} by
T_i(z) = z − c_i + h_i e_i and S_i(z) = z − (1 + h_i^{−1})⟨e_i, z⟩ e_i + c_i + e_i,
where h_i^2 = 1 − r_i^2 and ⟨e_i, z⟩ is the coefficient of e_i in z. Setting ρ_i to be Step-ReLU (Equation 3.1) on R^{n+i}, these maps are chosen so that the composition S_i ◦ ρ_i ◦ T_i maps the points in B_{r_i}(c_i) to c_i + e_i, while keeping points outside this ball the same. We now describe a radial neural network with widths (n, n + 1, . . . , n + N, m) whose feedforward function approximates f. For i = 1, . . . , N, the affine map from layer i − 1 to layer i is given by z ↦ T_i ◦ S_{i−1}(z), with S_0 = id_{R^n}. The activation at each hidden layer is Step-ReLU. Let L be the affine map such that |L − f| < ε on R^n \ K. The affine map from layer N to the output layer is Φ ◦ S_N, where Φ : R^{n+N} → R^m is the unique affine map determined by x ↦ L(x) for x ∈ R^n and e_i ↦ f(c_i) − L(c_i). This construction is illustrated in Figure 3.
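The following short sketch checks the key step of this construction numerically for a single ball in R^2 (so n = 2 and i = 1): the composition S ∘ ρ ∘ T sends points of B_r(c) to c + e and fixes all other points. The particular center, radius, and test points are arbitrary choices of ours.

```python
import numpy as np

def step_relu(v):
    return v if np.linalg.norm(v) >= 1.0 else np.zeros_like(v)

# One step of the construction: R^2 is embedded into R^3 as the first two coordinates,
# and the third coordinate plays the role of the new basis vector e.
c, r = np.array([0.4, -0.2]), 0.7
h = np.sqrt(1 - r ** 2)
e = np.array([0.0, 0.0, 1.0])
c3 = np.array([c[0], c[1], 0.0])

T = lambda z: z - c3 + h * e                          # T(z) = z - c + h e
S = lambda z: z - (1 + 1 / h) * z[2] * e + c3 + e     # S(z) = z - (1 + h^{-1}) <e, z> e + c + e

inside = np.array([0.5, -0.3, 0.0])    # lies in B_r(c)
outside = np.array([2.0, 1.0, 0.0])    # lies outside B_r(c)
print(S(step_relu(T(inside))))         # ~ c + e = [0.4, -0.2, 1.0]
print(S(step_relu(T(outside))))        # ~ the point itself: [2.0, 1.0, 0.0]
```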
Corollary 4. Radial neural networks are dense in the space of all continuous functions with respect to the topology of compact convergence, and hence satisfy cc-universality.

4.2 Bounded width approximation

We now turn our attention to a bounded width universal approximation result.
Theorem 5. Let f : R^n → R^m be an asymptotically affine function. For any ε > 0, there exists a compact set K ⊂ R^n and a function F : R^n → R^m such that:
1. F is the feedforward function of a radial neural network with N = N(f, K, ε) hidden layers whose widths are all n + m + 1.
2. For any x ∈ R^n, we have |F(x) − f(x)| < ε.
The proof, which is more involved than that of Theorem 3, relies on using orthogonal dimensions to represent the domain and the range of f, together with an indicator dimension to distinguish the two. We regard points in R^{n+m+1} as triples (x, y, θ) where x ∈ R^n, y ∈ R^m and θ ∈ R. The proof of Theorem 5 parallels that of Theorem 3, but instead of mapping points in B_{r_i}(c_i) to c_i + e_i, we map the points in B_{r_i}((c_i, 0, 0)) to (0, (f(c_i) − L(0))/s, 1), where s is chosen such that different balls do not interfere. The final layer then uses an affine map (x, y, θ) ↦ L(x) + s y, which takes (x, 0, 0) to L(x), and (0, (f(c_i) − L(0))/s, 1) to f(c_i).
² In many cases, the constant N(f, K, ε) can be bounded explicitly. For example, if K is the unit cube in R^n and f is Lipschitz continuous with Lipschitz constant R, then N(f, K, ε) ≤ ⌈R√n / (2ε)⌉^n.
We remark on several additional results; see Appendix B for full statements and proofs. The bound of Theorem 5 can be strengthened to max(n, m) + 1 in the case of functions f : K → R^m defined on a compact domain K ⊂ R^n (i.e., ignoring asymptotic behavior). Furthermore, with more layers, it is possible to reduce that bound to max(n, m).

5 Model compression

In this section, we prove a model compression result. Specifically, we provide an algorithm which, given any radial neural network, computes a different radial neural network with smaller widths. The resulting compressed network has the same feedforward function as the original network, and hence the same value of the loss function on any batch of training data. In other words, our model compression procedure is lossless. Although our algorithm is practical and explicit, it reflects a more conceptual phenomenon, namely, a change-of-basis action on network parameter spaces (Section 5.1).

5.1 The parameter space

Suppose a fully connected network has L layers and widths given by the tuple n = (n_0, n_1, n_2, . . . , n_{L−1}, n_L). In other words, the i-th layer has input width n_{i−1} and output width n_i. The parameter space is defined as the vector space of all possible choices of parameter values. Hence, it is given by the following product of vector spaces:
Param(n) = (R^{n_1 × n_0} × R^{n_2 × n_1} × · · · × R^{n_L × n_{L−1}}) × (R^{n_1} × R^{n_2} × · · · × R^{n_L}).
We denote an element therein as a pair of tuples (W, b), where W = (W_i ∈ R^{n_i × n_{i−1}})_{i=1}^L are the weights and b = (b_i ∈ R^{n_i})_{i=1}^L are the biases. To describe certain symmetries of the parameter space, consider the following product of orthogonal groups, with sizes corresponding to the widths of the hidden layers:
O(n^hid) = O(n_1) × O(n_2) × · · · × O(n_{L−1}).
There is a change-of-basis action of O(n^hid) on the parameter space Param(n). Explicitly, the tuple of orthogonal matrices Q = (Q_i)_{i=1}^{L−1} ∈ O(n^hid) transforms the parameter values (W, b) to Q · W := (Q_i W_i Q_{i−1}^{−1})_{i=1}^L and Q · b := (Q_i b_i)_{i=1}^L, where Q_0 = id_{n_0} and Q_L = id_{n_L}.
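As an illustration (not taken from the paper's implementation), the following sketch draws a random radial network and a random element of O(n^hid), applies the change-of-basis action, and checks numerically that the feedforward function is unchanged. This uses the fact that radial rescaling functions commute with orthogonal transformations.

```python
import numpy as np

def step_relu(v):
    return v if np.linalg.norm(v) >= 1.0 else np.zeros_like(v)

def feedforward(x, Ws, bs):
    z = x
    for W, b in zip(Ws, bs):
        z = step_relu(W @ z + b)
    return z

rng = np.random.default_rng(1)
widths = [2, 6, 5, 3]                                   # placeholder widths (n_0, n_1, n_2, n_3)
Ws = [rng.standard_normal((widths[i + 1], widths[i])) for i in range(3)]
bs = [rng.standard_normal(widths[i + 1]) for i in range(3)]

# Random element Q = (Q_1, Q_2) of O(n_1) x O(n_2); Q_0 and Q_L are identities.
Qs = [np.eye(widths[0])] \
     + [np.linalg.qr(rng.standard_normal((w, w)))[0] for w in widths[1:-1]] \
     + [np.eye(widths[-1])]
Ws_new = [Qs[i + 1] @ Ws[i] @ Qs[i].T for i in range(3)]   # Q_i W_i Q_{i-1}^{-1}
bs_new = [Qs[i + 1] @ bs[i] for i in range(3)]             # Q_i b_i

x = rng.standard_normal(widths[0])
print(np.allclose(feedforward(x, Ws, bs), feedforward(x, Ws_new, bs_new)))   # expected: True
```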
5.2 Model compression

In order to state the compression result, we first define the reduced widths. Namely, the reduction n^red = (n^red_0, n^red_1, . . . , n^red_L) of a widths vector n is defined recursively by setting n^red_0 = n_0, then n^red_i = min(n_i, n^red_{i−1} + 1) for i = 1, . . . , L − 1, and finally n^red_L = n_L. For a tuple ρ = (ρ_i : R^{n_i} → R^{n_i})_{i=1}^L of radial rescaling functions, we write ρ^red = (ρ^red_i : R^{n^red_i} → R^{n^red_i}) for the corresponding tuple of restrictions, which are all radial rescaling functions. The following result relies on Algorithm 1 below.
Theorem 6. Let (W, b, ρ) be a radial neural network with widths n. Let W^red and b^red be the weights and biases of the compressed network produced by Algorithm 1. The feedforward function of the original network (W, b, ρ) coincides with that of the compressed network (W^red, b^red, ρ^red).

Algorithm 1: QR Model Compression (QR-compress)
input: W, b ∈ Param(n)
output: Q ∈ O(n^hid) and W^red, b^red ∈ Param(n^red)
  Q, W^red, b^red ← [ ], [ ], [ ]                        // initialize output lists
  A_1 ← [b_1  W_1]                                       // matrix of size n_1 × (n_0 + 1)
  for i ← 1 to L − 1 do                                  // iterate through layers
      Q_i, R_i ← QR-decomp(A_i, mode = 'complete')       // A_i = Q_i Inc_i R_i
      Append Q_i to Q
      Append the first column of R_i to b^red            // reduced bias for layer i
      Append the remainder of R_i to W^red               // reduced weights for layer i
      Set A_{i+1} ← [b_{i+1}  W_{i+1} Q_i Inc_i]         // matrix of size n_{i+1} × (n^red_i + 1)
  end
  Append the first column of A_L to b^red                // reduced bias for the last layer
  Append the remainder of A_L to W^red                   // reduced weights for the last layer
  return Q, W^red, b^red

We explain the notation of the algorithm. The inclusion matrix Inc_i ∈ R^{n_i × n^red_i} has ones along the main diagonal and zeros elsewhere. The method QR-decomp with mode = 'complete' computes the complete QR decomposition of the n_i × (1 + n^red_{i−1}) matrix A_i as Q_i Inc_i R_i, where Q_i ∈ O(n_i) and R_i is upper-triangular of size n^red_i × (1 + n^red_{i−1}). The definition of n^red_i implies that either n^red_i = n^red_{i−1} + 1 or n^red_i = n_i. The matrix R_i is of size n^red_i × n^red_i in the former case and of size n_i × (1 + n^red_{i−1}) in the latter case.
Example 7. Suppose the widths of a radial neural network are (1, 8, 16, 8, 1). Then it has Σ_{i=1}^4 (n_{i−1} + 1) n_i = 305 trainable parameters. The reduced network has widths (1, 2, 3, 4, 1) and Σ_{i=1}^4 (n^red_{i−1} + 1) n^red_i = 34 trainable parameters. Another example appears in Figure 4.
We note that the tuple of matrices Q produced by Algorithm 1 does not feature in the statement of Theorem 6, but is important in the proof (which appears in Appendix C). Namely, an induction argument shows that the i-th partial feedforward functions of the original and reduced models are related via the matrices Q_i and Inc_i. A crucial ingredient in the proof is that radial rescaling activations commute with orthogonal transformations.
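For concreteness, here is a NumPy sketch of Algorithm 1 as we read it, using numpy.linalg.qr in 'complete' mode, so that Q_i Inc_i and R_i are the first n^red_i columns of Q and the top n^red_i rows of R, respectively. The function and variable names are ours, and details such as sign conventions in the QR factors may differ from the authors' implementation.

```python
import numpy as np

def reduced_widths(n):
    """n^red_0 = n_0, n^red_i = min(n_i, n^red_{i-1} + 1) for 0 < i < L, and n^red_L = n_L."""
    nred = [n[0]]
    for ni in n[1:-1]:
        nred.append(min(ni, nred[-1] + 1))
    return nred + [n[-1]]

def qr_compress(Ws, bs):
    """Sketch of QR-compress: returns the orthogonal factors Q and the reduced weights and biases."""
    L = len(Ws)
    n = [Ws[0].shape[1]] + [W.shape[0] for W in Ws]
    nred = reduced_widths(n)
    Qs, Ws_red, bs_red = [], [], []
    A = np.column_stack([bs[0], Ws[0]])                  # A_1 = [b_1  W_1], size n_1 x (n_0 + 1)
    for i in range(L - 1):
        Q, R = np.linalg.qr(A, mode='complete')          # A_i = Q_i Inc_i R_i
        Qs.append(Q)
        bs_red.append(R[:nred[i + 1], 0])                # reduced bias for layer i + 1
        Ws_red.append(R[:nred[i + 1], 1:])               # reduced weights for layer i + 1
        QInc = Q[:, :nred[i + 1]]                        # Q_i Inc_i
        A = np.column_stack([bs[i + 1], Ws[i + 1] @ QInc])
    bs_red.append(A[:, 0])                               # reduced bias for the last layer
    Ws_red.append(A[:, 1:])                              # reduced weights for the last layer
    return Qs, Ws_red, bs_red
```

For the widths (1, 8, 16, 8, 1) of Example 7, reduced_widths returns (1, 2, 3, 4, 1), and Theorem 6 asserts that the compressed parameters define a radial network with the same feedforward function as the original.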
6 Projected gradient descent

The typical use case for model compression algorithms is to produce a smaller version of the fully trained model which can be deployed to make inference more efficient. It is also worth considering whether compression can be used to accelerate training. For example, for some compression algorithms, the compressed and full models have the same feedforward function after a step of gradient descent is applied to each, and so one can compress before training and still reach the same minimum. Unfortunately, in the context of radial neural networks, compressing with Algorithm 1 and then training does not necessarily give the same result as training and then compressing (see Appendix D.6 for a counterexample). However, QR-compress does lead to a precise mathematical relationship between optimization of the two models: the loss of the compressed model after one step of gradient descent is equivalent to the loss of (a transformed version of) the original model after one step of projected gradient descent. Proofs appear in Appendix D.
To state our results, fix a tuple of widths n and a tuple ρ = (ρ_i : R^{n_i} → R^{n_i})_{i=1}^L of radial rescaling functions. The loss function L : Param(n) → R associated to a batch of training data {(x_j, y_j)} ⊆ R^{n_0} × R^{n_L} is defined as taking parameter values (W, b) to the sum Σ_j C(F(x_j), y_j), where C : R^{n_L} × R^{n_L} → R is a cost function on the output space, and F = F_{(W,b,ρ)} is the feedforward function of the radial neural network with parameters (W, b) and activations ρ. Similarly, we have a loss function L^red on the parameter space Param(n^red) with the reduced widths vector. For any learning rate η > 0, we obtain gradient descent maps:
γ : Param(n) → Param(n), (W, b) ↦ (W, b) − η∇_{(W,b)}L,
γ^red : Param(n^red) → Param(n^red), (V, c) ↦ (V, c) − η∇_{(V,c)}L^red.
We will also consider, for k ≥ 0, the k-fold composition γ^k = γ ◦ γ ◦ · · · ◦ γ, and similarly for γ^red. The projected gradient descent map on Param(n) is given by:
γ_proj : Param(n) → Param(n), (W, b) ↦ Proj(γ(W, b)),
where the map Proj zeroes out all entries in the bottom-left (n_i − n^red_i) × n^red_{i−1} submatrix of W_i − ∇_{W_i}L, and the bottom (n_i − n^red_i) entries of b_i − ∇_{b_i}L, for each i. Schematically, in block form:
W_i − ∇_{W_i}L = [ ∗ ∗ ; ∗ ∗ ] ↦ [ ∗ ∗ ; 0 ∗ ], b_i − ∇_{b_i}L = [ ∗ ; ∗ ] ↦ [ ∗ ; 0 ].
To state the following theorem, let W^red, b^red, Q = QR-compress(W, b) be the outputs of Algorithm 1 applied to (W, b) ∈ Param(n). Hence (W^red, b^red) ∈ Param(n^red) are the parameters of the compressed model, and Q ∈ O(n^hid) is an orthogonal parameter symmetry. We also consider the action (Section 5.1) of Q^{−1} applied to (W, b).
Theorem 8. Let W^red, b^red, Q = QR-compress(W, b) be the outputs of Algorithm 1 applied to (W, b) ∈ Param(n). Set U = Q^{−1} · (W, b) − (W^red, b^red). For any k ≥ 0, we have:
γ^k(W, b) = Q · γ^k(Q^{−1} · (W, b)) and γ^k_proj(Q^{−1} · (W, b)) = γ^k_red(W^red, b^red) + U.
We conclude that gradient descent with initial values (W, b) is equivalent to gradient descent with initial values Q^{−1} · (W, b), since at any stage we can apply Q^{±1} to move from one to the other. Furthermore, projected gradient descent with initial values Q^{−1} · (W, b) is equivalent to gradient descent on Param(n^red) with initial values (W^red, b^red), since at any stage we can move from one to the other by ±U. Neither Q nor U depends on k.
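As an illustration, here is a sketch of the projection map Proj appearing in γ_proj; in use, a gradient step is taken first and Proj is applied to the result. The list-of-arrays representation of an element of Param(n) is our own convention.

```python
import numpy as np

def project(Ws, bs, nred):
    """Zero the bottom-left (n_i - n^red_i) x n^red_{i-1} block of each W_i and
    the bottom (n_i - n^red_i) entries of each b_i."""
    Ws_out, bs_out = [], []
    for i, (W, b) in enumerate(zip(Ws, bs)):
        W, b = W.copy(), b.copy()
        W[nred[i + 1]:, :nred[i]] = 0.0
        b[nred[i + 1]:] = 0.0
        Ws_out.append(W)
        bs_out.append(b)
    return Ws_out, bs_out
```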
7 Experiments

In addition to the theoretical results in this work, we provide an implementation of Algorithm 1, in order to validate the claims of Theorems 6 and 8 empirically, as well as to quantify real-world performance. Full experimental details are in Appendix E.
(1) Empirical verification of Theorem 6. We learn the function f(x) = e^{−x^2} from samples using a radial neural network with widths n = (1, 6, 7, 1) and activation the radial shifted sigmoid h(x) = 1/(1 + e^{−x+s}). Applying QR-compress gives a compressed radial neural network with widths n^red = (1, 2, 3, 1). Theorem 6 implies that the respective neural functions F and F^red are equal. Over 10 random initializations, the mean absolute error is negligible up to machine precision: (1/N) Σ_j |F(x_j) − F^red(x_j)| = 1.31 · 10^{−8} ± 4.45 · 10^{−9}.
(2) Empirical verification of Theorem 8. The claim is that training the transformed model with parameters Q^{−1} · (W, b) and objective L by projected gradient descent coincides with training the reduced model with parameters (W^red, b^red) and objective L^red by usual gradient descent. We verified this on synthetic data as above. Over 10 random initializations, the loss functions after training match: |L − L^red| = 4.02 · 10^{−9} ± 7.01 · 10^{−9}.
(3) The compressed model trains faster. Our compression method may be applied before training to produce a smaller model class which trains faster without sacrificing accuracy. We demonstrate this in learning the function f : R^2 → R^2 sending (t_1, t_2) to (e^{−t_1^2}, e^{−t_2^2}) using a radial neural network with widths n = (2, 16, 64, 128, 16, 2) and activation the radial sigmoid h(r) = 1/(1 + e^{−r}). Applying QR-compress gives a compressed network with widths n^red = (2, 3, 4, 5, 6, 2). We trained both models until the training loss was ≤ 0.01. Over 10 random initializations on our system, the reduced network trained in 15.32 ± 2.53 seconds and the original network trained in 31.24 ± 4.55 seconds.

8 Conclusions and Discussion

This paper demonstrates that radial neural networks are universal approximators and that their parameter spaces exhibit a rich symmetry group, leading to a model compression algorithm. The results of this work combine to build a theoretical foundation for the use of radial neural networks, and suggest that radial neural networks hold promise for wider practical applicability. Furthermore, this work makes an argument for considering the advantages of non-pointwise nonlinearities in neural networks.
There are two main limitations of our results, each providing an opportunity for future work. First, our universal approximation constructions currently work only for the Step-ReLU radial rescaling activation; it would be desirable to generalize to other activations. Additionally, Theorem 6 achieves compression only for networks whose widths satisfy n_i > n_{i−1} + 1 for some i. Neural networks which do not have increasing widths anywhere in their architecture, such as encoders, would not be compressible.
Further extensions of this work include the following. First, little is currently known about the stability properties of radial neural networks during training, as well as their sensitivity to initialization. Second, radial rescaling activations provide an extreme case of symmetry; there may be benefits to combining radial and pointwise activations within a single network, for example, through 'block' radial rescaling functions. Our techniques may yield weaker compression properties for more general radial basis functions networks; radial neural networks may be the most compressible such networks.
Third, the parameter space322 symmetries may provide a key ingredient in analyzing the gradient flow dynamics of323 radial neural networks and computation of conserved quantities. Fourth, radial rescaling324 activations can be used within convolutional or group-equivariant NNs. Finally, based325 on the theoretical advantages laid out in this paper, future work will explore empirically326 applications in which we expect radial networks to outperform alternate methods. Such327 potential applications include data spaces with circular or distance-based class boundaries.328 References329 [1] Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? arXiv:1312.6184,330 2013. 3331 [2] Erkao Bao and Linqi Song. Equivariant neural networks and equivarification.332 arXiv:1906.07172, 2019. 3333 [3] Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. What is334 the state of neural network pruning? arXiv:2003.03033, 2020. 3335 [4] David S Broomhead and David Lowe. Radial basis functions, multi-variable functional336 interpolation and adaptive networks. Technical report, Royal Signals and Radar337 Establishment Malvern (United Kingdom), 1988. 2, 3338 [5] Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression.339 In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery340 and Data Mining, pages 535–541, 2006. 3341 [6] Yu Cheng, X Yu Felix, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shih-Fu342 Chang. Fast neural networks with circulant projections. arXiv:1502.03436, 2, 2015. 3343 [7] Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu344 Chang. An exploration of parameter redundancy in deep networks with circulant345 projections. In Proceedings of the IEEE international conference on computer vision, pages346 2857–2865, 2015. 3347 [8] Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. A survey of model compression348 and acceleration for deep neural networks. arXiv:1710.09282, 2017. 3349 [9] Benjamin Chidester, Minh N. Do, and Jian Ma. Rotation equivariance and invariance350 in convolutional neural networks. arXiv:1805.12301, 2018. 3351 [10] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep352 network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289,353 2015. 1354 [11] Taco S. Cohen and Max Welling. Group equivariant convolutional networks. In355 International conference on machine learning (ICML), pages 2990–2999, 2016. 3356 [12] Taco S Cohen and Max Welling. Steerable CNNs. In Proceedings of the International357 Conference on Learning Representations (ICLR), 2017. 3358 [13] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equiv-359 ariant convolutional networks and the icosahedral CNN. In Proceedings of the 36th360 International Conference on Machine Learning (ICML), volume 97, pages 1321–1330, 2019.361 3362 [14] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathe-363 matics of control, signals and systems, 2(4):303–314, 1989. 1, 3364 [15] Congyue Deng, O. Litany, Yueqi Duan, A. Poulenard, A. Tagliasacchi, and L. Guibas.365 Vector Neurons: A General Framework for SO(3)-Equivariant Networks. 2021366 IEEE/CVF International Conference on Computer Vision (ICCV), 2021. doi: 10.1109/367 iccv48922.2021.01198. 3368 [16] Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symme-369 try in convolutional neural networks. 
In International Conference on Machine Learning370 (ICML), 2016. 3371 [17] Xin Dong, Shangyu Chen, and Sinno Jialin Pan. Learning to prune deep neural372 networks via layer-wise optimal brain surgeon. arXiv preprint arXiv:1705.07565, 2017.373 3374 [18] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse,375 trainable neural networks. arXiv:1803.03635, 2018. 3376 [19] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep377 convolutional networks using vector quantization. arXiv:1412.6115, 2014. 3378 [20] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neu-379 ral networks with pruning, trained quantization and huffman coding. arXiv:1510.00149,380 2015. 3381 [21] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli382 Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J.383 Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew384 Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre385 Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi,386 Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature,387 585(7825):357–362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https:388 //doi.org/10.1038/s41586-020-2649-2. 35389 [22] Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal390 brain surgeon. Morgan Kaufmann, 1993. 3391 [23] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural392 network. arXiv:1503.02531, 2015. 3393 [24] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural394 networks, 4(2):251–257, 1991. 3395 [25] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang,396 Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient con-397 volutional neural networks for mobile vision applications. arXiv:1704.04861, 2017.398 3399 [26] George Jeffreys and Siu-Cheong Lau. Kähler Geometry of Quiver Varieties and400 Machine Learning. arXiv:2101.11487, 2021. URL http://arxiv.org/abs/2101.11487.401 3402 [27] Ehud D Karnin. A simple procedure for pruning back-propagation trained neural403 networks. IEEE transactions on neural networks, 1(2):239–242, 1990. 3404 [28] Patrick Kidger and Terry Lyons. Universal approximation with deep narrow networks.405 In Conference on learning theory, pages 2306–2327. PMLR, 2020. 3406 [29] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-407 normalizing neural networks. Advances in neural information processing systems, 30,408 2017. 1409 [30] Risi Kondor and Shubhendu Trivedi. On the Generalization of Equivariance and410 Convolution in Neural Networks to the Action of Compact Groups. In International411 conference on machine learning (ICML), 2018. 3412 [31] Leon Lang and Maurice Weiler. A Wigner-Eckart theorem for group equivariant413 convolution kernels. In International Conference on Learning Representations (ICLR), 2021.414 3415 [32] Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempit-416 sky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition.417 arXiv:1412.6553, 2014. 3418 [33] Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in419 neural information processing systems, pages 598–605, 1990. 3420 [34] Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip HS Torr. 
A421 signal propagation perspective for pruning neural networks at initialization. arXiv422 preprint arXiv:1906.06307, 2019. 3423 [35] Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, and Rogerio424 Feris. Fully-adaptive feature sharing in multi-task networks with applications in425 person attribute classification. In Proceedings of the IEEE conference on computer vision426 and pattern recognition (CVPR), pages 5334–5343, 2017. 3427 [36] Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The428 expressive power of neural networks: A view from the width. Advances in neural429 information processing systems, 30, 2017. 3430 [37] Mirco Milletarí, Thiparat Chotibut, and Paolo E Trevisanutto. Mean field theory of431 activation functions in deep neural networks. arXiv preprint arXiv:1805.08786, 2018. 1432 [38] Diganta Misra. Mish: A self regularized non-monotonic activation function. arXiv433 preprint arXiv:1908.08681, 2019. 1434 [39] Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Prun-435 ing convolutional neural networks for resource efficient inference. arXiv preprint436 arXiv:1611.06440, 2016. 3437 [40] Jooyoung Park and Irwin W Sandberg. Universal approximation using radial-basis-438 function networks. Neural computation, 3(2):246–257, 1991. 3439 [41] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory440 Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban441 Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan442 Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith443 Chintala. Pytorch: An imperative style, high-performance deep learning library. In444 H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett,445 editors, Advances in Neural Information Processing Systems (NeurIPS) 32, pages446 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/447 9015-pytorch-an-imperative-style-high-performance-deep-learning-library.448 pdf. 35449 [42] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions.450 arXiv preprint arXiv:1710.05941, 2017. 1451 [43] Siamak Ravanbakhsh. Universal equivariant multilayer perceptrons. In International452 Conference on Machine Learning, pages 7996–8006. PMLR, 2020. 3453 [44] Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Equivariance through454 parameter-sharing. In International Conference on Machine Learning, pages 2892–2901.455 PMLR, 2017. 3456 [45] Roberto Rigamonti, Amos Sironi, Vincent Lepetit, and Pascal Fua. Learning separable457 filters. In Proceedings of the IEEE conference on computer vision and pattern recognition,458 pages 2754–2761, 2013. 3459 [46] Frank Rosenblatt. The perceptron: a probabilistic model for information storage and460 organization in the brain. Psychological review, 65(6):386, 1958. 1461 [47] Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between462 capsules. arXiv:1710.09829, 2017. 2, 3463 [48] Thiago Serra, Abhinav Kumar, and Srikumar Ramalingam. Lossless compression464 of deep neural networks. In International Conference on Integration of Constraint Pro-465 gramming, Artificial Intelligence, and Operations Research, pages 417–430. Springer, 2020.466 3467 [49] Thiago Serra, Xin Yu, Abhinav Kumar, and Srikumar Ramalingam. Scaling up exact468 neural network compression by relu stability. Advances in Neural Information Processing469 Systems, 34, 2021. 
3470 [50] Sho Sonoda and Noboru Murata. Neural network with unbounded activation func-471 tions is universal approximator. Applied and Computational Harmonic Analysis, 43(2):472 233–268, 2017. 3473 [51] Gustav Sourek, Filip Zelezny, and Ondrej Kuzelka. Lossless compression of structured474 convolutional models via lifting. arXiv preprint arXiv:2007.06567, 2020. 3475 [52] Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks476 with low-rank regularization. arXiv:1511.06067, 2015. 3477 [53] Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before478 training by preserving gradient flow. arXiv preprint arXiv:2002.07376, 2020. 3479 [54] Ming-Xi Wang and Yang Qu. Approximation capabilities of neural networks on480 unbounded domains. Neural Networks, 145:56–67, 2022. 3481 [55] Maurice Weiler and Gabriele Cesa. General E(2)-Equivariant Steerable CNNs. Confer-482 ence on Neural Information Processing Systems (NeurIPS), 2019. 2, 3, 4483 [56] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen.484 3D steerable CNNs: Learning rotationally equivariant features in volumetric data.485 Proceedings of the 32nd International Conference on Neural Information Processing Systems486 (NeurIPS), 2018. 2, 3487 [57] Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for488 rotation equivariant CNNs. In Proceedings of the IEEE Conference on Computer Vision489 and Pattern Recognition (CVPR), pages 849–858, 2018. 2, 3490 [58] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow.491 Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the492 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5028–5037,493 2017. 3494 [59] Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized495 convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference496 on Computer Vision and Pattern Recognition (CVPR), pages 4820–4828, 2016. 3497 [60] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks.498 Constructive Approximation, 55(1):407–474, 2022. 3499 [61] Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad,500 and Yanzhi Wang. A systematic DNN weight pruning framework using alternating501 direction method of multipliers. In Proceedings of the European Conference on Computer502 Vision (ECCV), pages 184–199, 2018. 3503 Checklist504 1. For all authors...505 (a) Do the main claims made in the abstract and introduction accurately reflect506 the paper’s contributions and scope? [Yes]507 (b) Did you describe the limitations of your work? [Yes] See Section 8.508 (c) Did you discuss any potential negative societal impacts of your work? [N/A]509 Our work is theoretical and does not hold specific risks of negative impacts.510 (d) Have you read the ethics review guidelines and ensured that your paper511 conforms to them? [Yes]512 2. If you are including theoretical results...513 (a) Did you state the full set of assumptions of all theoretical results? [Yes]514 (b) Did you include complete proofs of all theoretical results? [Yes] Most of the515 proofs appear in the supplementary material.516 3. 
3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix E.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   (a) If your work uses existing assets, did you cite the creators? [Yes]
   (b) Did you mention the license of the assets? [N/A]
   (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

A Organization of the appendices

This paper is a contribution to the mathematical foundations of machine learning, and our results are motivated by expanding the applicability and performance of neural networks. At the same time, we give precise mathematical formulations of our results and proofs. The purposes of these appendices are several:

1. To clarify the mathematical conventions and terminology, thus making the paper more accessible.
2. To provide full proofs of the main results.
3. To develop context around various constructions appearing in the main text.
4. To discuss in detail examples, special cases, and generalizations of our results.

We now give a summary of the contents of the appendices.

Appendix B contains proofs of the universal approximation results (Theorems 3 and 5) stated in Section 4 of the main text, as well as proofs of additional bounded-width results. The proofs use notation given in Appendix B.1 and rely on preliminary topological considerations given in Appendix B.2.

In Appendix C, we give a proof of the model compression result given in Theorem 6, which appears in Section 5. For clarity and background, we begin the appendix with a discussion of the version of the QR decomposition relevant for our purposes (Appendix C.1). We also establish elementary properties of radial rescaling activations (Appendix C.2).

The focus of Appendix D is projected gradient descent, elaborating on Section 6. We first prove a result on the interaction of gradient descent and orthogonal transformations (Appendix D.1), before formulating projected gradient descent in more detail (Appendix D.2) and introducing the so-called interpolating space (Appendix D.3).
We restate Theorem 8 in more convenient notation (Appendix D.4) before proceeding to the proof (Appendix D.5).

Appendix E contains implementation details for the experiments summarized in Section 7. Our implementations use shifted radial rescaling activations, which we formulate in Appendix E.1.

Appendix F explains the connection between our constructions and radial basis function networks. While radial neural networks turn out to be a specific type of radial basis function network, our universality results are not implied by those for general radial basis function networks.

B Universal approximation proofs and additional results

In this section, we provide full proofs of the universal approximation (UA) results for radial neural networks, as stated in Section 4. In order to do so, we first clarify our notational conventions (Appendix B.1) and collect basic topological results (Appendix B.2).

B.1 Notation

Recall that, for a point c in the Euclidean space R^n and a positive real number r, we denote the r-ball around c by B_r(c) = {x ∈ R^n : |x − c| < r}. All networks in this section have the Step-ReLU radial rescaling activation function, defined as

ρ : R^n → R^n, with ρ(z) = z if |z| ≥ 1 and ρ(z) = 0 otherwise.

Throughout, ◦ denotes the composition of functions. We identify a linear map with a corresponding matrix (in the standard bases); in the case of linear maps, the operation ◦ can be identified with matrix multiplication. Recall also that an affine map L : R^n → R^m is one of the form L(x) = Ax + b for a matrix A ∈ R^{m×n} and a vector b ∈ R^m.

B.2 Topology

Let K be a compact subset of R^n and let f : K → R^m be a continuous function.

Lemma 9. For any ε > 0, there exist c_1, ..., c_N ∈ K and r_1, ..., r_N ∈ (0, 1) such that, first, the union of the balls B_{r_i}(c_i) covers K; second, for all i, we have f(B_{r_i}(c_i) ∩ K) ⊆ B_ε(f(c_i)).

Proof. The continuity of f implies that, for each c ∈ K, there exists r = r_c such that f(B_{r_c}(c) ∩ K) ⊆ B_ε(f(c)). The subsets B_{r_c}(c) ∩ K form an open cover of K. The compactness of K implies that there is a finite subcover. The result follows.

We also prove a variation of Lemma 9 that additionally guarantees that none of the balls in the cover of K contains the center point of another ball.

Lemma 10. For any ε > 0, there exist c_1, ..., c_M ∈ K and r_1, ..., r_M ∈ (0, 1) such that, first, the union of the balls B_{r_i}(c_i) covers K; second, for all i, we have f(B_{r_i}(c_i)) ⊆ B_ε(f(c_i)); and, third, |c_i − c_j| ≥ r_i.

Proof. Because f is continuous on a compact domain, it is uniformly continuous, so there exists r > 0 such that f(B_r(c) ∩ K) ⊆ B_ε(f(c)) for each c ∈ K. Because K is compact it has finite volume, and so does B_{r/2}(K) = ⋃_{c ∈ K} B_{r/2}(c). Hence, there exists a finite maximal packing of B_{r/2}(K) with balls of radius r/2, that is, a collection c_1, ..., c_M ∈ B_{r/2}(K) such that, for all i, B_{r/2}(c_i) ⊆ B_{r/2}(K) and, for all j ≠ i, B_{r/2}(c_i) ∩ B_{r/2}(c_j) = ∅. The first condition implies that c_i ∈ K. The second condition implies that |c_i − c_j| ≥ r. Finally, we argue that K ⊆ ⋃_{i=1}^M B_r(c_i). To see this, suppose, for a contradiction, that x ∈ K does not belong to ⋃_{i=1}^M B_r(c_i). Then B_{r/2}(c_i) ∩ B_{r/2}(x) = ∅ for all i, and x could be added to the packing, which contradicts the fact that the packing was chosen to be maximal. So the union of the balls B_r(c_i) covers K.

We turn our attention to the minimal choices of N and M in Lemmas 9 and 10.
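Before turning to those minimal choices, a small numerical illustration of Lemma 9 may be helpful. This sketch is ours and is not part of the paper: for a Lipschitz function f on K = [0, 1]^2 with constant R, any radius r ≤ ε/R satisfies the second condition of the lemma, so a finite cover can be produced by greedily picking centers from a grid until every grid point is covered. The particular f, R, ε, and grid resolution below are arbitrary choices made only for the example.

```python
import numpy as np

# Numerical illustration of Lemma 9 on K = [0,1]^2 (hypothetical example, not from the paper).
# For a Lipschitz f with constant R, any radius r <= eps / R guarantees
# f(B_r(c) ∩ K) ⊆ B_eps(f(c)); we greedily pick centers until the grid is covered.

def greedy_cover(points, r):
    """Greedily choose centers c_i from `points` so that every point lies in some B_r(c_i)."""
    uncovered = np.ones(len(points), dtype=bool)
    centers = []
    while uncovered.any():
        c = points[np.argmax(uncovered)]          # first still-uncovered point becomes a center
        centers.append(c)
        dists = np.linalg.norm(points - c, axis=1)
        uncovered &= dists >= r                   # points within r of c are now covered
    return np.array(centers)

if __name__ == "__main__":
    f = lambda x: np.sin(4 * x[..., 0]) + x[..., 1] ** 2   # example target, Lipschitz on [0,1]^2
    R, eps = 4.5, 0.25                                     # crude Lipschitz constant and tolerance
    r = eps / R
    grid = np.stack(np.meshgrid(np.linspace(0, 1, 60), np.linspace(0, 1, 60)), -1).reshape(-1, 2)
    centers = greedy_cover(grid, r)
    print(f"r = {r:.3f}, number of balls N = {len(centers)}")
    # sanity check of the Lemma 9 containment on the grid points
    for c in centers[:10]:
        mask = np.linalg.norm(grid - c, axis=1) < r
        assert np.all(np.abs(f(grid[mask]) - f(c)) < eps)
```

The number of balls produced this way is of the same order as the first bound of Proposition 12 below.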
Definition 11. Given f : K → R^m continuous and ε > 0, let N(f, K, ε) be the minimal choice of N in Lemma 9, and let M(f, K, ε) be the minimal choice of M in Lemma 10.

Observe that M(f, K, ε) ≥ N(f, K, ε). In many cases, it is possible to give explicit bounds for the constants N(f, K, ε) and M(f, K, ε). As an illustration, we give the argument in the case that K is the closed unit cube in R^n and f : K → R^m is Lipschitz continuous.

Proposition 12. Let K = [0, 1]^n ⊂ R^n be the (closed) unit cube and let f : K → R^m be Lipschitz continuous with Lipschitz constant R. For any ε > 0, we have

N(f, K, ε) ≤ ⌈R√n / (2ε)⌉^n and M(f, K, ε) ≤ (Γ(n/2 + 1) / π^{n/2}) (2 + 2R/ε)^n.

Proof. For the first inequality, observe that the unit cube can be covered with ⌈R√n / (2ε)⌉^n cubes of side length 2ε/(R√n). Each such cube is contained in a ball of radius ε/R centered at the center of the cube. (In general, a cube of side length a in R^n is contained in a ball of radius a√n/2.) Lipschitz continuity implies that, for all x, x′ ∈ K, if |x − x′| < ε/R then |f(x) − f(x′)| ≤ R|x − x′| < ε.

For the second inequality, let r = ε/R. Lipschitz continuity implies that, for all x, x′ ∈ K, if |x − x′| < r then |f(x) − f(x′)| ≤ R|x − x′| < ε. The n-dimensional volume of the set of points with distance at most r/2 to the unit cube is vol(B_{r/2}(K)) ≤ (1 + r)^n. The volume of a ball of radius r/2 is vol(B_{r/2}(0)) = (π^{n/2} / Γ(n/2 + 1)) (r/2)^n. Hence, any packing of B_{r/2}(K) with balls of radius r/2 consists of at most

vol(B_{r/2}(K)) / vol(B_{r/2}(0)) ≤ (Γ(n/2 + 1) / π^{n/2}) (2 + 2R/ε)^n

such balls. So there also exists a maximal packing with at most that many balls. This packing can be used in the proof of Lemma 10, which implies that it is a bound on M(f, K, ε).

We note in passing that any differentiable function f : K → R^n on a compact subset K of R^n is Lipschitz continuous. Indeed, the compactness of K implies that there exists R such that |f′(x)| ≤ R for all x ∈ K. Then one can take R to be the Lipschitz constant of f.

B.3 Proof of Theorem 3: UA for asymptotically affine functions

In this section, we restate and prove Theorem 3, which shows that radial neural networks are universal approximators of asymptotically affine functions. We recall the definition of such functions:

Definition 13. A function f : R^n → R^m is asymptotically affine if there exists an affine function L : R^n → R^m such that, for all ε > 0, there exists a compact set K ⊂ R^n such that |L(x) − f(x)| < ε for all x ∈ R^n \ K. We say that L is the limit of f.

Remark 14. An asymptotically linear function is defined in the same way, except L is taken to be linear (i.e., given just by matrix multiplication, without translation). Hence any asymptotically linear function is in particular an asymptotically affine function, and Theorem 3 applies to asymptotically linear functions as well.

Given an asymptotically affine function f : R^n → R^m and ε > 0, let K be a compact set as in Definition 13. We apply Lemma 9 to the restriction f|_K of f to K and produce a minimal constant N = N(f|_K, K, ε) as in Definition 11. We write simply N(f, K, ε) for this constant.

Theorem 3 (Universal approximation). Let f : R^n → R^m be an asymptotically affine function. For any ε > 0, there exists a compact set K ⊂ R^n and a function F : R^n → R^m such that:

1. F is the feedforward function of a radial neural network with N = N(f, K, ε) layers whose hidden widths are (n + 1, n + 2,
. . , n + N).651 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.652 Proof. By the hypothesis on f , there exists an affine function L : Rn → Rm and a compact653 set K ⊂ Rn such that |L(x)− f (x)| < ϵ for all x ∈ Rn \ K. Abbreviate N( f , K, ϵ) by N. As654 in Lemma 9, fix c1, . . . , cN ∈ K and r1, . . . , rN ∈ (0, 1) such that, first, the union of the balls655 Bri (ci) covers K and, second, for all i, we have f (Bri (ci)) ⊆ Bϵ( f (ci)). Let U = ⋃N i=1 Bri (ci),656 so that K ⊂ U. Define F : Rn → Rm as:657 F(x) = { L(x) if x /∈ U f (cj) where j is the smallest index with x ∈ Brj(cj) If x /∈ U, then |F(x)− f (x)| = |L(x)− f (x)| < ϵ. Hence suppose x ∈ U. Let j be the658 smallest index such that x ∈ Brj(cj). Then F(x) = f (cj), and, by the choice of rj, we have:659 |F(x)− f (x)| = | f (cj)− f (x)| < ϵ. We proceed to show that F is the feedforward function of a radial neural network. Let660 e1, . . . , eN be orthonormal basis vectors extending Rn to Rn+N . We regard each Rn+i−1 as661 a subspace of Rn+i by embedding into the first n + i− 1 coordinates. For i = 1, . . . , N, we662 set hi = √ 1− r2i and define the following affine transformations:663 Ti : Rn+i−1 → Rn+i Si : Rn+i → Rn+i z 7→ z− ci + hiei z 7→ z− (1 + h−1i )⟨ei, z⟩ei + ci + ei where ⟨ei, z⟩ is the coefficient of ei in z. Consider the radial neural network with widths664 (n, n + 1, . . . , n + N, m), whose affine transformations and activations are given by:665 • For i = 1, . . . , N the affine transformation from layer i− 1 to layer i is given by666 z 7→ Ti ◦ Si−1(z), where S0 = idRn .667 • The activation function at the i-th hidden layer is Step-ReLU on Rn+i, that is:668 ρi : Rn+i −→ Rn+i, z 7−→ { z if |z| ≥ 1 0 otherwise • The affine transformation from layer i = N to the output layer is669 z 7→ ΦL, f ,c ◦ SN(z) where ΦL, f ,c is the affine transformation given by:670 ΦL, f ,c : R n+N → Rm, x + N ∑ i=1 aiei 7→ L(x) + N ∑ i=1 ai( f (ci)− L(ci)) which can be shown to be affine when L is affine. Indeed, write L(x) = Ax + b671 where A is a matrix in Rm×n and b ∈ Rm is a vector. Then ΦL, f ,c is the composition672 of the linear map given by the matrix673 [A f (c1)− L(c1) f (c2)− L(c2) · · · f (cN)− L(cN)] ∈ Rm×(n+N) and translation by b ∈ Rm. Note that we regard each f (ci) − L(ci) ∈ Rm as a674 column vector in the matrix above.675 We claim that the feedforward function of the above radial neural network is exactly F. To676 show this, we first state a lemma, whose (omitted) proof is an elementary computation.677 Lemma 3.1. For i = 1, . . . , N, the composition Si ◦ Ti is the embedding Rn+i−1 ↪→ Rn+i.678 Next, recursively define Gi : Rn → Rn+i via679 Gi = Si ◦ ρi ◦ Ti ◦ Gi−1, where G0 = idRn . The function Gi admits an direct formulation:680 Proposition 3.2. For i = 0, 1, . . . , N, we have:681 Gi(x) = { x if x /∈ ⋃ij=1 Brj(cj) cj + ej where j ≤ i is the smallest index with x ∈ Brj(cj) . Proof. We proceed by induction. The base step i = 0 is immediate. For the induction step,682 assume the claim is true for i− 1, where 0 ≤ i− 1 < N. There are three cases to consider.683 Case 1. Suppose x /∈ ⋃ij=1 Brj(cj). Then in particular x /∈ ⋃i−1j=1 Brj(cj), so the induction684 hypothesis implies that Gi−1(x) = x. Additionally, x /∈ Bri (ci), so:685 |Ti(x)| = |x− ci + hiei| = √ |x− ci|+ h2i ≥ √ r2i + 1− r2i = 1. Using the definition of ρi and Lemma 3.1, we compute:686 Gi(x) = Si ◦ ρi ◦ Ti ◦ Gi−1(x) = Si ◦ ρi ◦ Ti(x) = Si ◦ Ti(x) = x. Case 2. Suppose x ∈ Bj \ ⋃j−1 k=1 Brk (ck) for some j ≤ i− 1. 
Then the induction hypothesis687 implies that Gi−1(x) = cj + ej. We compute:688 |Ti(cj + ej)| = |cj + ej − ci + hiei| > |ej| = 1. Therefore,689 Gi(x) = Si ◦ ρi ◦ Ti(cj + ej) = Si ◦ Ti(cj + ej) = cj + ej. Case 3. Finally, suppose x ∈ Bi \ ⋃i−1 j=1 Brj(cj). The induction hypothesis implies that690 Gi−1(x) = x. Since x ∈ Bri (ci), we have:691 |Ti(x)| = |x− ci + hiei| = √ |x− ci|+ h2i < √ r2i + 1− r2i = 1. Therefore:692 Gi(x) = Si ◦ ρi ◦ Ti(x) = Si(0) = ci + ei. This completes the proof of the proposition.693 Finally, we show that the function F defined at the beginning of the proof is the feedforward694 function of the above radial neural network. The computation is elementary:695 Ffeedforward = ΦL, f ,c ◦ SN ◦ ρN ◦ TN ◦ SN−1 ◦ ρN−1 ◦ TN−1 ◦ · · · S1 ◦ ρ1 ◦ T1 = ΦL, f ,c ◦ GN = F where the first equality follows from the definition of the feedforward function, the second696 from the definition of GN , and the last from the case i = N of Proposition 3.2 together with697 the definition of ΦL, f ,c. This completes the proof of the theorem.698 B.4 Proof of Theorem 5: bounded width UA for asymptotically affine functions699 We restate and prove Theorem 5, which strengthens Theorem 3 by providing a bounded700 width radial neural network approximation of any asymptotically affine function.701 Theorem 5. Let f : Rn → Rm be an asymptotically affine function. For any ϵ > 0, there exists a702 compact set K ⊂ Rn and a function F : Rn → Rm such that:703 1. F is the feedforward function of a radial neural network with N = N( f , K, ϵ) hidden704 layers whose widths are all n + m + 1.705 2. For any x ∈ Rn, we have |F(x)− f (x)| < ϵ.706 Proof. By the hypothesis
1. What is the focus of the paper regarding neural network design?
2. What are the strengths and weaknesses of the proposed approach, particularly compared to prior works?
3. Are there any concerns regarding the training process and its challenges?
4. How does the reviewer assess the writing quality and clarity of the paper?
5. Do you have any suggestions for improving the paper's content or adding more value to it?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

Activation functions are a critical part of the design of neural networks. The choice of activation function in the hidden layers controls how well the network can model a training dataset. In this manuscript, the authors consider so-called radial neural networks, which have similarities to radial basis function neural networks. The authors show universal approximation in both infinite width and depth as well as present a projected gradient descent algorithm for training.

Strengths And Weaknesses

SIMILARITY TO RBF NNs (WITH LOCAL LINEAR MODEL): A radial basis function NN is given by (in the 2-layer case): NN(x) = sum_{i=1}^N a_i*h(||x+b_i||). It is sometimes convenient to extend a radial basis function NN with a local linear model to obtain (in the 2-layer case): NN(x) = sum_{i=1}^N (a_i + w_i^T(x+b_i))*h(||x+b_i||). This model is more general than the radial neural networks introduced in this manuscript. In particular, set a_i = 0, w_i = c_i*e_i/||x+b_i||, where c_i is a scalar and e_i is the ith unit vector, to obtain: NN(x) = sum_{i=1}^N c_i (x+b_i)/||x+b_i||*h(||x+b_i||) (= sum_{i=1}^N c_i rho(||x+b_i||)), which I believe is an expression for a 2-layer radial NN (as defined by the manuscript). Therefore, I do not find the NN architecture to be sufficiently novel to support publication.

TRAINING: Sensitivity analysis explains the difficulties of gradient descent for radial basis function neural networks with smooth radial basis activation functions (see Karayiannis, "Gradient descent learning of radial basis neural networks." Proc. Inter. Conf. Neural Netw., IEEE, 1997). Do the authors observe similar difficulties with radial neural networks?

EXPERIMENTS: Separate from the novelty of the architecture and training, one would hope for experimental results that demonstrate a significant benefit over existing NN architectures (at least with regard to one example or metric). The three experiments currently do not demonstrate any benefits but verify the theoretical contributions.

WRITING: The manuscript is well-written. Two typos are: p1, in (1.1): I think you want rho : R^n -> R^n at the start of (1.1). p6, line 195: "strengthend" should read "strengthened".

Questions

Can you better connect the NN architecture to the literature and suggest why the NN architecture is novel? Do the authors observe similar training difficulties with radial neural networks as with radial basis function neural networks? Can the authors add experiments to convince readers to use radial neural networks?

Limitations

The work has little potential negative societal impact.
NIPS
Title Riemannian Diffusion Models Abstract Diffusion models are recent state-of-the-art methods for image generation and likelihood estimation. In this work, we generalize continuous-time diffusion models to arbitrary Riemannian manifolds and derive a variational framework for likelihood estimation. Computationally, we propose new methods for computing the Riemannian divergence which is needed for likelihood estimation. Moreover, in generalizing the Euclidean case, we prove that maximizing this variational lowerbound is equivalent to Riemannian score matching. Empirically, we demonstrate the expressive power of Riemannian diffusion models on a wide spectrum of smooth manifolds, such as spheres, tori, hyperboloids, and orthogonal groups. Our proposed method achieves new state-of-the-art likelihoods on all benchmarks. 1 Introduction By learning to transmute noise, generative models seek to uncover the underlying generative factors that give rise to observed data. These factors can often be cast as inherently geometric quantities as the data itself need not lie on a flat Euclidean space. Indeed, in many scientific domains such as high-energy physics (Brehmer & Cranmer, 2020), directional statistics (Mardia & Jupp, 2009), geoscience (Mathieu & Nickel, 2020), computer graphics (Kazhdan et al., 2006), and linear biopolymer modeling such as protein and RNA (Mardia et al., 2008; Boomsma et al., 2008; Frellsen et al., 2009), data is best represented on a Riemannian manifold with a non-zero curvature. Naturally, to effectively capture the generative factors of these data, we must take into account the geometry of the space when designing a learning framework. Recently, diffusion based generative models have emerged as an attractive model class that not only achieve likelihoods comparable to state-of-the-art autogressive models (Kingma et al., 2021) but match the sample quality of GANs without the pains of adversarial optimization (Dhariwal & Nichol, 2021). Succinctly, a diffusion model consists of a fixed Markov chain that progressively transforms data to a prior defined by the inference path, and a generative model which is another Markov chain that is learned to invert the inference process (Ho et al., 2020; Song et al., 2021b). While conceptually simple, the learning framework can have a variety of perspectives and goals. For example, Huang et al. (2021) provide a variational framework for general continuous-time diffusion processes on Euclidean manifolds as well as a functional Evidence Lower Bound (ELBO) that can be equivalently shown to be minimizing an implicit score matching objective. At present, however, much of the success of diffusion based generative models and its accompanying variational framework is purpose built for Euclidean spaces, and more specifically, image data. It does not easily translate to general Riemannian manifolds. In this paper, we introduce Riemannian Diffusion Models (RDM)—generalizing conventional diffusion models on Euclidean spaces to arbitrary Riemannian manifolds. Departing from diffusion models on Euclidean spaces, our approach uses the Stratonovich SDE formulation for which the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). conventional chain rule of calculus holds, which, as we demonstrate in section §3, can be exploited to define diffusion on a Riemannian manifold. 
Furthermore, we take an extrinsic view of geometry by defining the Riemannian manifold of interest as an embedded sub-manifold within a higher dimensional (Euclidean) ambient space. Such a choice enables us to define both our inference and generative SDEs using the coordinate system of the ambient space, greatly simplifying the implementation of the theory developed using the intrinsic view. Main Contributions. We summarize our main contributions below: • We introduce a variational framework built on the Riemannian Feynman-Kac representation and Giransov’s theorem. In Theorem 2 we derive a Riemannian continuous-time ELBO, strictly generalizing the CT-ELBO in Huang et al. (2021) and prove in Theorem 4 that its maximization is equivalent to Riemannian score matching for marginally equivalent SDEs (Theorem 3). • To compute the Riemannian CT-ELBO it is necessary to compute the Riemannian divergence of our parametrized vector field, for which we introduce a QR-decomposition-based method that is computationally efficient for low dimensional manifolds as well a projected Hutchinson method for scalable unbiased estimation. Notably, our approach does not depend on the closest point projection which may not be freely available for many Riemannian manifolds of interest. • We also provide a variance reduction technique to estimate the Riemannian CT-ELBO objective that leverages importance sampling with respect to the time integral, which crucially avoids carefully designing the noise schedule of the inference process. • Empirically, we validate our proposed models on spherical manifolds towards modelling natural disasters as found in earth science datasets, products of spherical manifolds (tori) for protein and RNA, synthetic densities on hyperbolic spaces and orthogonal groups. Our empirical results demonstrate that RDM leads to new state-of-art likelihoods over prior manifold generative models. 2 Background In this section, we provide the necessary background on diffusion models and key concepts from Riemannian geometry that we utilize to build RDMs. For a short review of the latter, see Appendix A or Ratcliffe (1994) for a more comprehensive treatment of the subject matter. 2.1 Euclidean diffusion models A diffusion model can be defined as the solution to the (Itô) SDE (Øksendal, 2003), dX = µ dt+ dBt, (1) with the initial condition X0 following some unstructured prior p0 such as the standard normal distribution, where Bt is a standard Brownian motion, and µ and are the drift and diffusion coefficients of the diffusion process, which control the deterministic forces driving the evolution and the amount of noise injected at each time step. This provides us a way to sample from the model, via numerically solving the dynamics from t = 0 to t = T for some fixed termination time T . To train the model via maximum likelihood, we require an expression for the log marginal density of XT , denoted by log p(x, T ), which is generally intractable. The marginal likelihood can be represented using a stochastic instantaneous change-of-variable formula, by applying the Feynman-Kac theorem to the Fokker-Planck PDE of the density. 
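Before the variational bound that follows, a minimal sketch (ours, not code from the paper) of the sampling procedure just described: numerically solving the dynamics of the SDE (1) from t = 0 to t = T with an Euler-Maruyama discretization. The drift, diffusion coefficient, initial condition, and step count are illustrative placeholders.

```python
import numpy as np

# Minimal sketch (not from the paper): sampling X_T from the Euclidean SDE (1),
# dX = mu(X, t) dt + sigma(X, t) dB_t, via an Euler-Maruyama discretization.

def euler_maruyama(mu, sigma, x0, T=1.0, n_steps=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x, t = np.array(x0, dtype=float), 0.0
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + mu(x, t) * dt + sigma(x, t) * np.sqrt(dt) * noise
        t += dt
    return x

# Example: an Ornstein-Uhlenbeck-style drift pulling samples toward the origin.
samples = np.stack([
    euler_maruyama(mu=lambda x, t: -x, sigma=lambda x, t: 1.0,
                   x0=np.random.default_rng(i).standard_normal(2))
    for i in range(8)
])
print(samples.shape)  # (8, 2)
```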
An application of Girsanov’s theorem followed by an application of Jensen’s inequality leads to the following variational lower bound (Huang et al., 2021; Song et al., 2021a): log p(x, T ) E " log p0(YT ) Z T 0 ✓ 1 2 ka(Ys, s)k 2 2 +r · µ(Ys, T s) ◆ ds Y0 = x # (2) where a is the variational degree of freedom, r· denotes the (Euclidean) divergence operator, and Ys follows the inference SDE (the generative coefficients are evaluated in reversed time, i.e. T s) dY = ( µ+ a) ds+ dB̂s (3) with B̂s being another Brownian motion. This is known as the continuous-time evidence lower bound, or the CT-ELBO for short. 2.2 Riemannian manifolds We work with a d-dimensional Riemannian manifold (M, g) embedded in a higher dimensional ambient space Rm, for m > d. This assumption does not come with a loss of generality, since any Riemannian manifold can be isometrically embedded into a Euclidean space by the Nash embedding theorem (Gunther, 1991). In this case, the metric g coincides with the pullback of the Euclidean metric by the inclusion map. Now, given a coordinate chart ' : M ! Rd and its inverse = ' 1, we can define Ẽj for j = 1, · · · , d to be the basis vectors of the tangent space TxM at point x 2 M. The tangent space can be understood as the pushforward of the Euclidean derivation of the patch space along ; i.e., for any smooth function f 2 C1(M), Ẽj(f) = @@x̃j f . We denote by Px the orthogonal projection onto the linear subspace spanned by the column vectors of the Jacobian Jx = d /dx̃. Specifically, Px can be constructed via Px = Jx(JTx Jx) 1JTx . Note that this subspace is isomorphic to the tangent space TxM, which itself is a subspace of TxRm. As a result, we identify this subspace with TxM. Lastly, we refer to the action of Px as the projection onto the tangential subspace, and Px itself as the tangential projection. 2.3 SDE on manifolds Unlike Euclidean spaces, Riemannian manifolds generally do not possess a vector space structure. This prevents the direct application of the usual (stochastic) calculus. We can resolve this by defining the process via test functions. Specifically, let Vk be a family of smooth vector fields on M, and let Z k be a family of semimartingales (Protter, 2005). Symbolically, we write dXt = X k Vk(Xt) dZ k t if df(Xt) = X k Vk(f)(Xt) dZ k t (4) for any f 2 C1(M) (Hsu, 2002). The in the second differential equation is to be interpreted in the Stratonovich sense (Protter, 2005). The use of the Stratonovich integral is the first step deviating from the Euclidean diffusion model (1), as the Itô integral does not follow the usual chain rule. Working with this abstract definition is not always convenient, so instead we work with specific coordinates of M. Let ' be a chart, and let ṽ = (ṽjk) be a matrix representing the coefficients of Vk in the coordinate basis—i.e. Vk(f) = Pd j=1 ṽjk @ @x̃j f ' 1 x̃='(x) . This allows us to write d'(Xt) = ṽ dZ. Similarly, suppose M is a submanifold embedded in Rm, and denote by v = (vik) the coefficients wrt the Euclidean basis. v and ṽ are related by v = d' 1 dx̃ ṽ. Then we can express the dynamics of X as a regular SDE using the Euclidean space’s coefficients dX = v dZ. Notably, by the relation between v and ṽ, the column vectors of v are required to lie in the span of the column vectors of the Jacobian d' 1 dx̃ which restricts the dynamics to move tangentially on M. 3 Riemannian diffusion models We now develop a variational framework to estimate the likelihood of a diffusion model defined on a Riemannian manifold (M, g). 
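Before stating the Riemannian SDE, the following small sketch (our illustration, not code from the paper) makes the extrinsic view of Section 2.2 concrete: it builds the tangential projection P_x = J_x (J_x^T J_x)^{-1} J_x^T for the unit sphere S^2 embedded in R^3, using a spherical-coordinate chart. The chart and the test point are assumptions made only for this example.

```python
import numpy as np

# Sketch of the tangential projection P_x = J (J^T J)^{-1} J^T from Section 2.2,
# for the unit sphere S^2 in R^3 with the chart
# psi(theta, phi) = (sin t cos p, sin t sin p, cos t). Illustrative, not paper code.

def chart_jacobian(theta, phi):
    """Columns are d psi / d theta and d psi / d phi, a basis of T_x M."""
    return np.array([
        [np.cos(theta) * np.cos(phi), -np.sin(theta) * np.sin(phi)],
        [np.cos(theta) * np.sin(phi),  np.sin(theta) * np.cos(phi)],
        [-np.sin(theta),               0.0],
    ])

def tangential_projection(theta, phi):
    J = chart_jacobian(theta, phi)
    return J @ np.linalg.inv(J.T @ J) @ J.T        # P_x, an m x m matrix (m = 3)

theta, phi = 0.7, 1.3
x = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
P = tangential_projection(theta, phi)
print(np.allclose(P @ x, 0.0, atol=1e-12))          # the normal direction is projected away
print(np.allclose(P, np.eye(3) - np.outer(x, x)))   # matches the familiar I - x x^T for the sphere
```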
Let Xt 2 M be a process solving the following SDE: Generative SDE: dX = V0 dt+ V dBt, X0 ⇠ p0 (5) where V0 and the columns of the diffusion matrix1 V := [V1, · · · , Vw] are smooth vector fields on M, and Bt is a w-dimensional Brownian motion. The law of the random variable Xt can be written as p(x, t)µ(dx), where p(x, t) is the probability density function and µ is the d-dimensional Hausdorff measure on the manifold associated with the Riemannian volume density. Let V · r be a differential operator defined by (V · rg)U := Pw k=1(rg · Uk)Vk, where rg · Uk denotes the Riemannian divergence of the vector field Uk: rg · Uk = |G| 12 dX j=1 @ @x̃j (|G| 1 2 ũjk). (6) Our first result is a stochastic instantaneous change-of-variable formula for the Riemannian SDE by applying the Feynman-Kac theorem to the Fokker Planck PDE of the density p(x, t). 1The multiplication is interpreted similarly to matrix-vector multiplication, i.e. V dBt = Pw k=1 Vk dB k t . Theorem 1 (Marginal Density). The density p(x, t) of the SDE (5) can be written as p(x, t) = E " p0 (Yt) exp ✓ Z t 0 rg · ✓ V0 1 2 (V ·rg)V ◆ ds ◆ Y0 = x # (7) where the expectation is taken wrt the following process induced by a Brownian motion B0s dY = ( V0 + (V ·rg)V ) ds+ V dB 0 s. (8) For effective likelihood maximization, we require access to log p and its gradient. Towards this goal, we prove the following Riemannian CT-ELBO which serves as our training objective and follows from an application of change of measure (Girsanov’s theorem) and Jensen’s inequality. Theorem 2 (Riemannian CT-ELBO). Let B̂s be a w-dimensional Brownian motion, and let Ys be a process solving the following Inference SDE: dY = ( V0 + (V ·rg)V + V a) ds+ V dB̂s, (9) where a : Rm ⇥ [0, T ] ! Rm is the variational degree of freedom. Then we have log p(x, T ) E " log p0(YT ) Z T 0 1 2 ka(Ys, s)k 2 2 +rg · ✓ V0 1 2 (V ·rg)V ◆ ds Y0 = x # , (10) where all the generative degree of freedoms Vk are evaluated in the reversed time direction. 3.1 Computing Riemannian divergence Similar to the Euclidean case, computing the Riemannian CT-ELBO requires computing the divergence “rg·” of a vector field, which can be achieved by applying the following identity. Proposition 1 (Riemannian divergence identity). Let (M, g) be a d-dimensional Riemannian manifold. For any smooth vector field Vk 2 X(M), the following identity holds: rg · Vk = dX j=1 D rẼj Vk, Ẽ j E g . (11) Furthermore, if the manifold is a submanifold embedded in the ambient space Rm equipped with the induced metric g = ◆⇤ḡ, then (rg · Vk)(x) = tr ✓ Px dvk dx Px ◆ , (12) where vk = (v1k, · · · , vmk) are the ambient space coefficients Vk = Pm i=1 vik @ @xi and Px is the orthogonal projection onto the tangent space. Intrinsic coordinates. The patch-space formula (6) can be used to compute the Riemannian divergence. This view was adopted by Mathieu & Nickel (2020), where they combined the Hutchinson trace identity and the internal coordinate formula to estimate the divergence. The drawbacks of this framework include: (1) obtaining local coordinates may be difficult for some manifolds, hindering generality in practice; (2) we might need to change patches, which complicates implementations; and (3) the inverse scaling of p |G| might result in numerical instability and high variance. Closest-point projection. The coordinate-free expression (11) leads to the closest-point projection method proposed by Rozen et al. (2021). Concretely, define the closest-point projection by ⇡(x) := argminy2M kx yk, where k·k is the Euclidean norm. 
Let Vk(x) be the derivation corresponding to the ambient space vector vk(x) = P⇡(x)u(⇡(x)) for some unconstrainted u : Rm ! Rm. Rozen et al. (2021) showed that rg · Vk(x) = r · vk(x), since vk is infinitesimally constant in the normal direction to TxM. This allows us to compute the divergence directly in the ambient space. However, the closest-point projection map ⇡ may not always be easily obtained. QR decomposition. An alternative to the closest-point projection is to instead search for an orthogonal basis for TxM. Let Q = [e1, · · · , ed, n1, · · · , nm d] be an orthogonal matrix whose first d columns span the TxM, and the remaining m d vectors span its orthogonal complement TxM ?. To construct Q we can simply sample d vectors—e.g. from N (0, 1)–in the ambient space and orthogonally project them to TxM using Px. These vectors, although not orthogonal yet, form a basis for TxM. Next we concatenate them with m d random vectors and apply a simple QR decomposition to retrieve an orthogonal basis. Using Q we may rewrite equation (12) as follows: (rg · Vk)(x) = tr ✓ QQ > Px dvk dx Px ◆ = tr ✓ (PxQ) > dvk dx PxQ ◆ = dX j=1 e > j dvk dx ej (13) where we used (1) the orthogonality of Q, (2) the cyclic property of trace, (3) and the fact that Pxej = ej and Pxnj = 0. In practice, concatenation with the remaining m d vectors is not needed as they are effectively not used in computing the divergence, speeding up computation when m d. Moreover, the vector-Jacobian product can be computed in O(m) time using reverse-mode autograd and importantly does not require the closest-point projection ⇡. Projected Hutchinson. When QR is too expensive for higher dimensional problems, the Hutchinson trace estimator (Hutchinson, 1989) can be employed within the extrinsic view representation (12). For example, let z be a standard normal vector (or a Rademacher vector), we have (rg ·Vk)(x) = Ez⇠N ,z0=Pxz[z 0> dvk dx z 0]. Different from a direct application of the trace estimator to the closest-point method, we directly project the random vector to the tangent subspace. Therefore, the closest-point projection is again not needed. 3.2 Fixed-inference parameterization Following prior work (Sohl-Dickstein et al., 2015; Ho et al., 2020; Huang et al., 2021), we let the inference SDE (9) be defined as a simple noise process taking observed data to unstructured noise: dY = U0 dt+ V dB̂s, (14) where U0 = 12rg log p0 and V is the tangential projection matrix; that is, Vk(f)(x) =Pm j=1(Px)jk @f @xj for any smooth function f . This is known as the Riemannian Langevin diffusion (Girolami & Calderhead, 2011). As long as p0 satisfies a log-Sobolev inequality, the marginal distribution of Ys (i.e. the aggregated posterior) converges to p0 at a linear rate in the KL divergence (Wang et al., 2020). For compact manifolds, we set p0 to be the uniform density, which means U0 = 0, and (14) is reduced to the extrinsic construction of Brownian motion on M (Hsu, 2002, Section 1.2). The benefits of this fixed-inference parameterization are the following: Stable and Efficient Training. 
With the fixed-inference parameterization we do not need to optimize the vector fields that generate Ys, and the Riemannian CT-ELBO can be rewritten as: E[log p0(YT )] Z T 0 EYs " 1 2 ka(Ys, s)k 2 2 +rg · ✓ V0 1 2 (V ·rg)V ◆ Y0 = x # ds, (15) where the first term is a constant wrt the model parameters (or it can be optimized separately if we want to refine the prior), and the time integral of the second term can be estimated via importance sampling (see Section 3.3). A sample of Ys can be drawn cheaply by numerically integrating (14), without requiring a stringent error tolerance (see Section 5.2 for an empirical analysis), which allows us to estimate the time integral in (15) by evaluating a(Ys, s) at a single time step s only. Simplified Riemannian CT-ELBO. The CT-ELBO can be simplified as the differential operator V ·rg applied to V yields a zero vector when V is the tangential projection. Proposition 2. If V is the tangential projection matrix, then (V ·rg)V = 0. This means that we can express the generative SDE V0 using the variational parameter a via dX = (V a(X,T t) U0(X,T t)) dt+ V dB̂t, (16) with the corresponding Riemannian CT-ELBO: E[log p0(YT )] Z T 0 EYs " 1 2 kak 2 2 +rg · (V a U0) Y0 = x # ds. (17) 3.3 Variance reduction The inference process can be more generally defined to account for a time reparameterization. In fact, this leads to an equivalent model if one can find an invariant representation of the temporal variable. Learning this time rescaling can help to reduce variance (Kingma et al., 2021). In principle, we can adopt the same methodology, but this would further complicate the parameterization of the model. Alternatively, we opt for a simpler view for variance reduction via importance sampling. We estimate the time integral “ R . . . ds” in (17) using the following estimator: I := 1 q(s) ✓ 1 2 kak 2 2 +rg · (V a U0) ◆ where s ⇠ q(s) and Ys ⇠ q(Ys | Y0), (18) where q(s) is a proposal density supported on [0, T ]. We parameterize q(s) using a 1D monotone flow (Huang et al., 2018). As the expected value of this estimator is the same as the time integral in (17), it is unbiased. However, this means we cannot train the proposal distribution q(s) by maximizing this objective, since the gradient wrt the parameters of q(s) is zero in expectation. Instead, we minimize the variance of the estimator by following the stochastic gradient wrt q(s) rq(s)Var(I) = rq(s)E[I 2] ⇠⇠⇠ ⇠⇠ rq(s)E[I] 2 = rq(s)E[I 2]. (19) The latter can be optimized using the reparameterization trick (Kingma & Welling, 2014) and is a well-known variance reduction method in a multitude of settings (Luo et al., 2020; Tucker et al., 2017). It can be seen as minimizing the 2-divergence from a density proportional to the magnitude of EYs [I] (Dieng et al., 2017; Müller et al., 2019). 3.4 Connection to score matching In the Euclidean case, it can be shown that maximizing the variational lower bound of the fixedinference diffusion model (16) is equivalent to score matching (Ho et al., 2020; Huang et al., 2021; Song et al., 2021a). In this section, we extend this connection to its Riemannian counterpart. Let q(ys, s) be the density of Ys following (14), marginalizing out the data distribution q(y0, 0). The score function is the Riemannian gradient of the log-density rg log q. The following theorem tells us that we can create a family of inference and generative SDEs that induce the same marginal distributions over Ys and XT s as (16) if we have access to its score. Theorem 3 (Marginally equivalent SDEs). 
For 1, the marginal distributions of XT s and Ys of the processes defined as below dY = ✓ U0 2 rg log q ◆ ds+ p 1 V dB̂s Y0 ⇠ q(·, 0) (20) dX = ✓✓ 1 2 ◆ rg log q U0 ◆ dt+ p 1 V dB̂t X0 ⇠ q(·, T ) (21) both have the density q(·, s). In particular, = 1 gives rise to an equivalent ODE. This suggests if we can approximate the score function, and plug it into the reverse process (21), we obtain a time-reversed process that induces approximately the same marginals. Theorem 4 (Score matching equivalency). For < 1, let E1 denote the Riemannian CTELBO of the generative process (21), with rg log q replaced by an approximate score S✓, and with (20) being the inference SDE. Assume S✓ is a compactly supported smooth vector. Then EY0 [E 1 ] = C1 Z T 0 EYs h kS✓ rg log qk 2 g i ds+ C2 (22) where C1 > 0 and C2 are constants wrt ✓. The first implication of the theorem is that maximizing the Riemannian CT-ELBO of the plug-in reverse process is equivalent to minimizing the Riemannian score-matching loss. Second, if we set = 0, from (135) (in the appendix), we have V a = S✓, which is exactly the fixed-inference training in §3.2. That is, the vector V a trained using equation (17) is actually an approximate score, allowing us to extract an equivalent ODE by substituting V a for rg log q in (20,21) by setting = 1. 4 Related work Diffusion models. Diffusion models can be viewed from two different but ultimately complimentary perspectives. The first approach leverages score based generative models (Song & Ermon, 2019; Song et al., 2021b), while the second approach treats generative modeling as inverting a fixed noiseinjecting process (Sohl-Dickstein et al., 2015; Ho et al., 2020). Finally, continuous-time diffusion models can also be embedded within a maximum likelihood framework (Huang et al., 2021; Song et al., 2021a), which represents the special case of prescribing a flat geometry—i.e. Euclidean—to the generative model and is completely generalized by the theory developed in this work. Riemannian Generative Models. Generative models beyond Euclidean manifolds have recently risen to prominence with early efforts focusing on constant curvature manifolds (Bose et al., 2020; Rezende et al., 2020). Another line of work extends continuous-time flows (Chen et al., 2018a) to more general Riemannian manifolds (Lou et al., 2020; Mathieu & Nickel, 2020; Falorsi & Forré, 2020). To avoid explicitly solving an ODE during training, Rozen et al. (2021) propose Moser Flow whose objective involves computing the Riemannian divergence of a parametrized vector field. Concurrent to our work, De Bortoli et al. (2022) develop Riemannian score-based generative models for compact manifolds like the Sphere. While similar in endeavor, RDMs are couched within the the maximum likelihood framework. As a result our approach is directly amenable to variance reduction techniques via importance sampling and likelihood estimation. Moreover, our approach is also applicable to non-compact manifolds such as hyperbolic spaces, and we demonstrate this in our experiments on a larger variety of manifolds including the orthogonal group and toroids. 5 Experiments We investigate the empirical caliber of RDMs on a range of manifolds. We instantiate RDMs by parametrizing a in (16) using an MLP and maximize the CT-ELBO (17). We report our detailed training procedure—including selected hyperparameters—for all models in §D. 
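Before the individual experiments, here is a minimal sketch of the QR-based Riemannian divergence estimator of Section 3.1, equation (13), specialized to the unit sphere, where the tangential projection is P_x = I - x x^T. This is our own illustration rather than the paper's implementation: the vector field v below is an arbitrary tangential field, and a central finite difference stands in for the reverse-mode vector-Jacobian product that the paper uses.

```python
import numpy as np

# Sketch of the QR-based Riemannian divergence of Section 3.1 (equation (13)),
# specialized to the unit sphere S^2 in R^3, where P_x = I - x x^T. The vector
# field v is an arbitrary illustrative choice, not one from the paper.

def riemannian_divergence_qr(v, x, d, P, h=1e-5):
    """sum_j e_j^T (dv/dx) e_j with {e_j} an orthonormal basis of T_x M from QR."""
    m = x.shape[0]
    rng = np.random.default_rng(0)
    tangent = P(x) @ rng.standard_normal((m, d))     # d ambient Gaussians projected to T_x M
    Q, _ = np.linalg.qr(tangent)                     # orthonormal basis e_1, ..., e_d of T_x M
    div = 0.0
    for j in range(d):
        e = Q[:, j]
        dv_e = (v(x + h * e) - v(x - h * e)) / (2 * h)   # central-difference stand-in for (dv/dx) e_j
        div += e @ dv_e
    return div

P_sphere = lambda x: np.eye(3) - np.outer(x, x)          # tangential projection on S^2
v = lambda x: P_sphere(x) @ np.array([x[1] ** 2, np.sin(x[2]), x[0]])  # a tangential field

x = np.array([0.0, 0.6, 0.8])                            # a point on the unit sphere
print(riemannian_divergence_qr(v, x, d=2, P=P_sphere))
```

Since only d = 2 directional derivatives are needed for the sphere embedded in R^3, this is cheaper than a full m x m Jacobian, which is the point of equation (13) when m is much larger than d.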
5.1 Sphere For spherical manifolds, we model the datasets compiled by Mathieu & Nickel (2020), which consist of earth and climate science events on the surface of the earth such as volcanoes (NGDC/WDS, 2022b), earthquakes (NGDC/WDS, 2022a), floods (Brakenridge, 2017), and fires (EOSDIS, 2020). In Table 1 for each dataset we report average and standard deviation of test negative log likelihood on 5 different runs with different splits of the dataset. In Figure 1 we plot the model density in blue while the test data is depicted with red dots. Variance reduction. We demonstrate the effect of applying variance reduction on optimizing the Riemannian CT-ELBO (17) using the earthquake dataset. As shown in Figure 2, learning an importance sampling proposal effectively lowers the variance and speeds up training. 5.2 Tori For tori, we use the list of 500 high-resolution proteins compiled in Lovell et al. (2003) and select 113 RNA sequences listed in Murray et al. (2003). Each macromolecule is divided into multiple monomers, and the joint structure is discarded—we model the lower dimensional density of the backbone conformation of the monomer. For the protein data, this corresponds to 3 torsion angles of the amino acid. As one of the angles is normally 180°, we also discard it, and model the density over the 2D torus. For the RNA data, the monomer is a nucleotide described by 7 torsion angles in the backbone, represented by a 7D torus. For protein, we divide the dataset by the type of side chain attached to the amino acid, resulting in 4 datasets, and we discard the nucleobases of the RNA. In Table 2 we report the NLL of our model. Our baseline is a mixture of 4, 096 power spherical distributions (De Cao & Aziz, 2020, MoPS). We observe that RDM outperforms the baseline across the board, and the difference is most noticeable for the RNA data, which has a higher dimensionality. Numerical integration ablation. We estimate the loss (17) by integrating the inference SDE on M. To study the effect of integration error, we experiment with various numbers of time steps evenly spaced between [0, s] on Glycine. Also, as we can directly sample the Brownian motion on tori without numerical integration, we use it as a reference (termed direct loss) for comparison. Figure 3 shows while fewer time steps tend to underestimate the loss, the model trained with 100 time steps is already indistinguishable from the one trained with direct sampling. We also find numerical integration is not a significant overhead as each experiment takes approximately the same wall-clock time with identical setups. This is because the inference path does not involve the neural module a. 5.3 Hyperbolic Manifolds Hyperbolic manifolds provide an example whose closestpoint projection is not cheap to obtain, and a claimed closest-point projection in recent literature is in fact not the closest Euclidean projection (Skopek et al., 2019) (see §C for more details). To demonstrate the generality of our framework, we model the synthetic datasets in Figure 5, first introduced by Bose et al. (2020); Lou et al. (2020). Since hyperbolic manifolds are not compact, we need a non-zero drift to ensure the inference processs is not dissipative. We define the prior as the standard normal distribution on the yz-plane and let U0 be 12rg log p0, so that Ys will revert back to the origin. 
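The tori experiments of Section 5.2 model backbone torsion angles as points on a product of circles. The sketch below is our own illustration (not the paper's code) of the standard (cos, sin) embedding of k angles into R^{2k}, together with the block-diagonal tangential projection this embedding induces; the example angles and the ordering of coordinates are assumptions made only for this example.

```python
import numpy as np

# Sketch of an angle-to-torus representation for Section 5.2 (our illustration):
# k backbone torsion angles are mapped to a point on T^k embedded in R^{2k}.

def angles_to_torus(angles_deg):
    """Map an (n, k) array of torsion angles in degrees to points on T^k in R^{2k}."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    return np.concatenate([np.cos(a), np.sin(a)], axis=-1)

def torus_tangential_projection(y, k):
    """Block-diagonal tangential projection for the product-of-circles embedding."""
    P = np.zeros((2 * k, 2 * k))
    for j in range(k):
        c, s = y[j], y[k + j]                      # (cos, sin) pair of the j-th angle
        block = np.array([[s * s, -c * s], [-c * s, c * c]])
        P[np.ix_([j, k + j], [j, k + j])] = block  # removes the radial direction of circle j
    return P

phi_psi = np.array([[-60.0, -45.0], [120.0, 130.0]])   # two illustrative (phi, psi) pairs
Y = angles_to_torus(phi_psi)
print(Y.shape)                                          # (2, 4): points on T^2 in R^4
print(np.allclose(torus_tangential_projection(Y[0], k=2) @ Y[0], 0.0))
```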
5.4 Special Orthogonal Group

Another example whose closest-point projection is expensive to compute is the orthogonal group, as it requires performing a singular value decomposition. To evaluate our framework on this matrix group, we generate data using the synthetic multimodal density defined on SO(3) from Brofos et al. (2021). We view it as a submanifold embedded in R^{3×3}, so that d = 3 and m = 9. We use the projected Hutchinson estimator to estimate the Riemannian divergence. Since the data are 3D rotation matrices, we can visualize them using Euler angles. We plot the data density and the learned model density in Figure 6, where each coordinate represents the rotation around that particular axis.

6 Conclusion

In this paper, we introduce RDMs, which extend continuous-time diffusion models to arbitrary Riemannian manifolds, including challenging non-compact manifolds like hyperbolic spaces. We provide a variational framework to train RDMs by optimizing a novel objective, the Riemannian Continuous-Time ELBO. To enable efficient and stable training, we provide several key tools, such as a fixed-inference parameterization of the SDE in the ambient space, new methodological techniques to compute the Riemannian divergence, and an importance sampling procedure with respect to the time integral that reduces the variance of the loss. On the theoretical front, we also show deep connections between our proposed variational framework and Riemannian score matching through the construction of marginally equivalent SDEs. Finally, we complement our theory by constructing RDMs that achieve state-of-the-art performance on density estimation on geoscience datasets, protein/RNA data on toroidal manifolds, and synthetic data on hyperbolic and orthogonal-group manifolds.
1. What is the focus and contribution of the paper on Riemannian manifolds?
2. What are the strengths of the proposed approach, particularly in terms of mathematical background and experimental effectiveness?
3. What are the weaknesses of the paper regarding its understandability and presentation?
4. Do you have any concerns regarding the computation of the derived CT-ELBO and the use of variance reduction techniques?
5. What are the limitations of the paper that the author should address?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The paper proposes a general diffusion model for Riemannian manifolds. It introduces a variational framework based on the Riemannian Feynman-Kac representation and Girsanov's theorem, with which a Riemannian continuous-time ELBO is derived. To compute the resulting ELBO, this paper further suggests a QR-decomposition-based method for scalable unbiased estimation. Besides, it also provides a variance reduction technique to approximate the Riemannian ELBO objective.

Strengths And Weaknesses

Strength: Based on SDEs, this paper generalizes the Euclidean diffusion model into a Riemannian one. The generalization seems to enjoy a solid mathematical background. The experiments demonstrate the effectiveness of the proposed method on data that lie on spherical manifolds, products of spherical manifolds, hyperbolic spaces, and orthogonal groups.

Weakness: This paper requires a lot of effort to understand its background and the proposed idea. The presentation of the background (Sec. 2) is also very challenging to understand. For example, regarding Euclidean diffusion models, it might be better to further clarify how the diffusion model can be formulated as a fixed Markov chain, as presented in Lines 24-26. Besides, it should be necessary to indicate the meanings of \mu and \sigma after Eq. (1). The same suggestion could be applied to other equations like Eq. (2). As for the computation of the derived CT-ELBO, the suggested QR-decomposition-based method is an efficient approach. I cannot really understand why the paper further introduces a variance reduction technique to approximate the CT-ELBO. What is the major benefit of the used variance reduction over the QR-decomposition-based method? It seems that this paper does not study this benefit empirically.

Questions

In Eq. (27) of the Appendix, what is the meaning of \frac{\partial \varphi_{j}^{-1}}{\partial \tilde{x}_{i}}? If my understanding is right, \varphi_{j}^{-1} should be understood as (\iota \circ \varphi)_{j}^{-1}. Regarding the spherical dataset, it is not clear to me why the used data lie on a sphere. Maybe some explanation or references will help interested readers. Small typos should be corrected. For example, in Eq. (24) of the Appendix, the v on the LHS should be capitalized.

Limitations

It seems this paper does not discuss its limitations. It will be highly appreciated if the authors can address my concerns/questions mentioned above.
Title Riemannian Diffusion Models Abstract Diffusion models are recent state-of-the-art methods for image generation and likelihood estimation. In this work, we generalize continuous-time diffusion models to arbitrary Riemannian manifolds and derive a variational framework for likelihood estimation. Computationally, we propose new methods for computing the Riemannian divergence which is needed for likelihood estimation. Moreover, in generalizing the Euclidean case, we prove that maximizing this variational lowerbound is equivalent to Riemannian score matching. Empirically, we demonstrate the expressive power of Riemannian diffusion models on a wide spectrum of smooth manifolds, such as spheres, tori, hyperboloids, and orthogonal groups. Our proposed method achieves new state-of-the-art likelihoods on all benchmarks. 1 Introduction By learning to transmute noise, generative models seek to uncover the underlying generative factors that give rise to observed data. These factors can often be cast as inherently geometric quantities as the data itself need not lie on a flat Euclidean space. Indeed, in many scientific domains such as high-energy physics (Brehmer & Cranmer, 2020), directional statistics (Mardia & Jupp, 2009), geoscience (Mathieu & Nickel, 2020), computer graphics (Kazhdan et al., 2006), and linear biopolymer modeling such as protein and RNA (Mardia et al., 2008; Boomsma et al., 2008; Frellsen et al., 2009), data is best represented on a Riemannian manifold with a non-zero curvature. Naturally, to effectively capture the generative factors of these data, we must take into account the geometry of the space when designing a learning framework. Recently, diffusion based generative models have emerged as an attractive model class that not only achieve likelihoods comparable to state-of-the-art autogressive models (Kingma et al., 2021) but match the sample quality of GANs without the pains of adversarial optimization (Dhariwal & Nichol, 2021). Succinctly, a diffusion model consists of a fixed Markov chain that progressively transforms data to a prior defined by the inference path, and a generative model which is another Markov chain that is learned to invert the inference process (Ho et al., 2020; Song et al., 2021b). While conceptually simple, the learning framework can have a variety of perspectives and goals. For example, Huang et al. (2021) provide a variational framework for general continuous-time diffusion processes on Euclidean manifolds as well as a functional Evidence Lower Bound (ELBO) that can be equivalently shown to be minimizing an implicit score matching objective. At present, however, much of the success of diffusion based generative models and its accompanying variational framework is purpose built for Euclidean spaces, and more specifically, image data. It does not easily translate to general Riemannian manifolds. In this paper, we introduce Riemannian Diffusion Models (RDM)—generalizing conventional diffusion models on Euclidean spaces to arbitrary Riemannian manifolds. Departing from diffusion models on Euclidean spaces, our approach uses the Stratonovich SDE formulation for which the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). conventional chain rule of calculus holds, which, as we demonstrate in section §3, can be exploited to define diffusion on a Riemannian manifold. 
Furthermore, we take an extrinsic view of geometry by defining the Riemannian manifold of interest as an embedded sub-manifold within a higher dimensional (Euclidean) ambient space. Such a choice enables us to define both our inference and generative SDEs using the coordinate system of the ambient space, greatly simplifying the implementation of the theory developed using the intrinsic view. Main Contributions. We summarize our main contributions below: • We introduce a variational framework built on the Riemannian Feynman-Kac representation and Giransov’s theorem. In Theorem 2 we derive a Riemannian continuous-time ELBO, strictly generalizing the CT-ELBO in Huang et al. (2021) and prove in Theorem 4 that its maximization is equivalent to Riemannian score matching for marginally equivalent SDEs (Theorem 3). • To compute the Riemannian CT-ELBO it is necessary to compute the Riemannian divergence of our parametrized vector field, for which we introduce a QR-decomposition-based method that is computationally efficient for low dimensional manifolds as well a projected Hutchinson method for scalable unbiased estimation. Notably, our approach does not depend on the closest point projection which may not be freely available for many Riemannian manifolds of interest. • We also provide a variance reduction technique to estimate the Riemannian CT-ELBO objective that leverages importance sampling with respect to the time integral, which crucially avoids carefully designing the noise schedule of the inference process. • Empirically, we validate our proposed models on spherical manifolds towards modelling natural disasters as found in earth science datasets, products of spherical manifolds (tori) for protein and RNA, synthetic densities on hyperbolic spaces and orthogonal groups. Our empirical results demonstrate that RDM leads to new state-of-art likelihoods over prior manifold generative models. 2 Background In this section, we provide the necessary background on diffusion models and key concepts from Riemannian geometry that we utilize to build RDMs. For a short review of the latter, see Appendix A or Ratcliffe (1994) for a more comprehensive treatment of the subject matter. 2.1 Euclidean diffusion models A diffusion model can be defined as the solution to the (Itô) SDE (Øksendal, 2003), dX = µ dt+ dBt, (1) with the initial condition X0 following some unstructured prior p0 such as the standard normal distribution, where Bt is a standard Brownian motion, and µ and are the drift and diffusion coefficients of the diffusion process, which control the deterministic forces driving the evolution and the amount of noise injected at each time step. This provides us a way to sample from the model, via numerically solving the dynamics from t = 0 to t = T for some fixed termination time T . To train the model via maximum likelihood, we require an expression for the log marginal density of XT , denoted by log p(x, T ), which is generally intractable. The marginal likelihood can be represented using a stochastic instantaneous change-of-variable formula, by applying the Feynman-Kac theorem to the Fokker-Planck PDE of the density. 
To train the model via maximum likelihood, we require an expression for the log marginal density of $X_T$, denoted by $\log p(x, T)$, which is generally intractable. The marginal likelihood can be represented using a stochastic instantaneous change-of-variable formula, by applying the Feynman-Kac theorem to the Fokker-Planck PDE of the density. An application of Girsanov's theorem followed by an application of Jensen's inequality leads to the following variational lower bound (Huang et al., 2021; Song et al., 2021a):
$$\log p(x, T) \;\ge\; \mathbb{E}\left[\log p_0(Y_T) - \int_0^T \Big(\tfrac{1}{2}\|a(Y_s, s)\|_2^2 + \nabla\cdot\mu(Y_s, T-s)\Big)\,\mathrm{d}s \;\middle|\; Y_0 = x\right] \qquad (2)$$
where $a$ is the variational degree of freedom, $\nabla\cdot$ denotes the (Euclidean) divergence operator, and $Y_s$ follows the inference SDE (the generative coefficients are evaluated in reversed time, i.e. $T - s$)
$$\mathrm{d}Y = (-\mu + \sigma a)\,\mathrm{d}s + \sigma\,\mathrm{d}\hat{B}_s \qquad (3)$$
with $\hat{B}_s$ being another Brownian motion. This is known as the continuous-time evidence lower bound, or the CT-ELBO for short.

2.2 Riemannian manifolds

We work with a $d$-dimensional Riemannian manifold $(\mathcal{M}, g)$ embedded in a higher dimensional ambient space $\mathbb{R}^m$, for $m > d$. This assumption does not come with a loss of generality, since any Riemannian manifold can be isometrically embedded into a Euclidean space by the Nash embedding theorem (Gunther, 1991). In this case, the metric $g$ coincides with the pullback of the Euclidean metric by the inclusion map. Now, given a coordinate chart $\varphi: \mathcal{M} \to \mathbb{R}^d$ and its inverse $\psi = \varphi^{-1}$, we can define $\tilde{E}_j$ for $j = 1, \cdots, d$ to be the basis vectors of the tangent space $T_x\mathcal{M}$ at a point $x \in \mathcal{M}$. The tangent space can be understood as the pushforward of the Euclidean derivation of the patch space along $\psi$; i.e., for any smooth function $f \in C^\infty(\mathcal{M})$, $\tilde{E}_j(f) = \frac{\partial}{\partial \tilde{x}_j}(f \circ \psi)$. We denote by $P_x$ the orthogonal projection onto the linear subspace spanned by the column vectors of the Jacobian $J_x = \mathrm{d}\psi/\mathrm{d}\tilde{x}$. Specifically, $P_x$ can be constructed via $P_x = J_x (J_x^\top J_x)^{-1} J_x^\top$. Note that this subspace is isomorphic to the tangent space $T_x\mathcal{M}$, which itself is a subspace of $T_x\mathbb{R}^m$. As a result, we identify this subspace with $T_x\mathcal{M}$. Lastly, we refer to the action of $P_x$ as the projection onto the tangential subspace, and $P_x$ itself as the tangential projection.

2.3 SDE on manifolds

Unlike Euclidean spaces, Riemannian manifolds generally do not possess a vector space structure. This prevents the direct application of the usual (stochastic) calculus. We can resolve this by defining the process via test functions. Specifically, let $V_k$ be a family of smooth vector fields on $\mathcal{M}$, and let $Z^k$ be a family of semimartingales (Protter, 2005). Symbolically, we write
$$\mathrm{d}X_t = \sum_k V_k(X_t) \circ \mathrm{d}Z^k_t \quad \text{if} \quad \mathrm{d}f(X_t) = \sum_k V_k(f)(X_t) \circ \mathrm{d}Z^k_t \qquad (4)$$
for any $f \in C^\infty(\mathcal{M})$ (Hsu, 2002). The $\circ$ in the second differential equation is to be interpreted in the Stratonovich sense (Protter, 2005). The use of the Stratonovich integral is the first step deviating from the Euclidean diffusion model (1), as the Itô integral does not follow the usual chain rule. Working with this abstract definition is not always convenient, so instead we work with specific coordinates of $\mathcal{M}$. Let $\varphi$ be a chart, and let $\tilde{v} = (\tilde{v}_{jk})$ be a matrix representing the coefficients of $V_k$ in the coordinate basis, i.e. $V_k(f) = \sum_{j=1}^d \tilde{v}_{jk} \frac{\partial}{\partial \tilde{x}_j}(f \circ \varphi^{-1})\big|_{\tilde{x} = \varphi(x)}$. This allows us to write $\mathrm{d}\varphi(X_t) = \tilde{v} \circ \mathrm{d}Z$. Similarly, suppose $\mathcal{M}$ is a submanifold embedded in $\mathbb{R}^m$, and denote by $v = (v_{ik})$ the coefficients w.r.t. the Euclidean basis. $v$ and $\tilde{v}$ are related by $v = \frac{\mathrm{d}\varphi^{-1}}{\mathrm{d}\tilde{x}}\,\tilde{v}$. Then we can express the dynamics of $X$ as a regular SDE using the Euclidean space's coefficients, $\mathrm{d}X = v \circ \mathrm{d}Z$. Notably, by the relation between $v$ and $\tilde{v}$, the column vectors of $v$ are required to lie in the span of the column vectors of the Jacobian $\frac{\mathrm{d}\varphi^{-1}}{\mathrm{d}\tilde{x}}$, which restricts the dynamics to move tangentially on $\mathcal{M}$.
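As a concrete illustration of this extrinsic machinery, here is a small sketch (ours, not from the paper) that builds the tangential projection $P_x = J_x(J_x^\top J_x)^{-1}J_x^\top$ from a chart Jacobian; the spherical chart and the finite-difference Jacobian are assumptions made for the example.

```python
# A sketch of the tangential projection of Section 2.2 for S^2 embedded in R^3.
import numpy as np

def chart_sphere(x_tilde):
    """psi: patch coordinates (theta, phi) -> ambient point on the unit sphere."""
    theta, phi = x_tilde
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def jacobian(psi, x_tilde, eps=1e-6):
    """Finite-difference Jacobian J_x = d psi / d x_tilde, of shape (m, d)."""
    x_tilde = np.asarray(x_tilde, dtype=float)
    cols = []
    for j in range(x_tilde.size):
        e = np.zeros_like(x_tilde); e[j] = eps
        cols.append((psi(x_tilde + e) - psi(x_tilde - e)) / (2 * eps))
    return np.stack(cols, axis=1)

def tangential_projection(psi, x_tilde):
    J = jacobian(psi, x_tilde)
    return J @ np.linalg.solve(J.T @ J, J.T)     # P_x = J (J^T J)^{-1} J^T

P = tangential_projection(chart_sphere, [0.7, 1.2])
assert np.allclose(P @ P, P)                                        # P_x is a projector
assert np.allclose(P @ chart_sphere([0.7, 1.2]), 0.0, atol=1e-5)    # normal direction is killed
```

On the unit sphere this projection coincides with $I - xx^\top$, which is what the assertions check: $P_x$ is idempotent and annihilates the normal direction $x$.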
3 Riemannian diffusion models

We now develop a variational framework to estimate the likelihood of a diffusion model defined on a Riemannian manifold $(\mathcal{M}, g)$. Let $X_t \in \mathcal{M}$ be a process solving the following SDE:
$$\text{Generative SDE:} \quad \mathrm{d}X = V_0\,\mathrm{d}t + V \circ \mathrm{d}B_t, \quad X_0 \sim p_0 \qquad (5)$$
where $V_0$ and the columns of the diffusion matrix¹ $V := [V_1, \cdots, V_w]$ are smooth vector fields on $\mathcal{M}$, and $B_t$ is a $w$-dimensional Brownian motion. The law of the random variable $X_t$ can be written as $p(x,t)\,\mu(\mathrm{d}x)$, where $p(x,t)$ is the probability density function and $\mu$ is the $d$-dimensional Hausdorff measure on the manifold associated with the Riemannian volume density. Let $V\cdot\nabla_g$ be a differential operator defined by $(V\cdot\nabla_g)U := \sum_{k=1}^w (\nabla_g\cdot U_k)\,V_k$, where $\nabla_g\cdot U_k$ denotes the Riemannian divergence of the vector field $U_k$:
$$\nabla_g\cdot U_k = |G|^{-\frac12}\sum_{j=1}^d \frac{\partial}{\partial\tilde{x}_j}\Big(|G|^{\frac12}\,\tilde{u}_{jk}\Big). \qquad (6)$$
Our first result is a stochastic instantaneous change-of-variable formula for the Riemannian SDE, obtained by applying the Feynman-Kac theorem to the Fokker-Planck PDE of the density $p(x,t)$.

¹The multiplication is interpreted similarly to matrix-vector multiplication, i.e. $V \circ \mathrm{d}B_t = \sum_{k=1}^w V_k \circ \mathrm{d}B^k_t$.

Theorem 1 (Marginal Density). The density $p(x,t)$ of the SDE (5) can be written as
$$p(x,t) = \mathbb{E}\left[p_0(Y_t)\exp\left(-\int_0^t \nabla_g\cdot\Big(V_0 - \tfrac12 (V\cdot\nabla_g)V\Big)\,\mathrm{d}s\right)\,\middle|\, Y_0 = x\right] \qquad (7)$$
where the expectation is taken w.r.t. the following process induced by a Brownian motion $B'_s$:
$$\mathrm{d}Y = \big(-V_0 + (V\cdot\nabla_g)V\big)\,\mathrm{d}s + V \circ \mathrm{d}B'_s. \qquad (8)$$

For effective likelihood maximization, we require access to $\log p$ and its gradient. Towards this goal, we prove the following Riemannian CT-ELBO, which serves as our training objective and follows from an application of change of measure (Girsanov's theorem) and Jensen's inequality.

Theorem 2 (Riemannian CT-ELBO). Let $\hat{B}_s$ be a $w$-dimensional Brownian motion, and let $Y_s$ be a process solving the following Inference SDE:
$$\mathrm{d}Y = \big(-V_0 + (V\cdot\nabla_g)V + Va\big)\,\mathrm{d}s + V \circ \mathrm{d}\hat{B}_s, \qquad (9)$$
where $a: \mathbb{R}^m\times[0,T]\to\mathbb{R}^m$ is the variational degree of freedom. Then we have
$$\log p(x,T) \ge \mathbb{E}\left[\log p_0(Y_T) - \int_0^T \Big(\tfrac12\|a(Y_s,s)\|_2^2 + \nabla_g\cdot\big(V_0 - \tfrac12 (V\cdot\nabla_g)V\big)\Big)\,\mathrm{d}s\,\middle|\, Y_0 = x\right], \qquad (10)$$
where all the generative degrees of freedom $V_k$ are evaluated in the reversed time direction.

3.1 Computing Riemannian divergence

Similar to the Euclidean case, computing the Riemannian CT-ELBO requires computing the divergence "$\nabla_g\cdot$" of a vector field, which can be achieved by applying the following identity.

Proposition 1 (Riemannian divergence identity). Let $(\mathcal{M}, g)$ be a $d$-dimensional Riemannian manifold. For any smooth vector field $V_k \in \mathfrak{X}(\mathcal{M})$, the following identity holds:
$$\nabla_g\cdot V_k = \sum_{j=1}^d \big\langle \nabla_{\tilde{E}_j} V_k,\, \tilde{E}^j \big\rangle_g. \qquad (11)$$
Furthermore, if the manifold is a submanifold embedded in the ambient space $\mathbb{R}^m$ equipped with the induced metric $g = \iota^*\bar{g}$, then
$$(\nabla_g\cdot V_k)(x) = \operatorname{tr}\left(P_x\,\frac{\mathrm{d}v_k}{\mathrm{d}x}\,P_x\right), \qquad (12)$$
where $v_k = (v_{1k},\cdots,v_{mk})$ are the ambient space coefficients, $V_k = \sum_{i=1}^m v_{ik}\frac{\partial}{\partial x_i}$, and $P_x$ is the orthogonal projection onto the tangent space.

Intrinsic coordinates. The patch-space formula (6) can be used to compute the Riemannian divergence. This view was adopted by Mathieu & Nickel (2020), where they combined the Hutchinson trace identity and the internal coordinate formula to estimate the divergence. The drawbacks of this framework include: (1) obtaining local coordinates may be difficult for some manifolds, hindering generality in practice; (2) we might need to change patches, which complicates implementations; and (3) the inverse scaling of $\sqrt{|G|}$ might result in numerical instability and high variance.

Closest-point projection. The coordinate-free expression (11) leads to the closest-point projection method proposed by Rozen et al. (2021). Concretely, define the closest-point projection by $\pi(x) := \operatorname{argmin}_{y\in\mathcal{M}} \|x - y\|$, where $\|\cdot\|$ is the Euclidean norm.
Let $V_k(x)$ be the derivation corresponding to the ambient space vector $v_k(x) = P_{\pi(x)}\,u(\pi(x))$ for some unconstrained $u: \mathbb{R}^m \to \mathbb{R}^m$. Rozen et al. (2021) showed that $\nabla_g\cdot V_k(x) = \nabla\cdot v_k(x)$, since $v_k$ is infinitesimally constant in the normal direction to $T_x\mathcal{M}$. This allows us to compute the divergence directly in the ambient space. However, the closest-point projection map $\pi$ may not always be easily obtained.

QR decomposition. An alternative to the closest-point projection is to instead search for an orthogonal basis for $T_x\mathcal{M}$. Let $Q = [e_1, \cdots, e_d, n_1, \cdots, n_{m-d}]$ be an orthogonal matrix whose first $d$ columns span $T_x\mathcal{M}$, and whose remaining $m-d$ columns span its orthogonal complement $T_x\mathcal{M}^\perp$. To construct $Q$ we can simply sample $d$ vectors in the ambient space, e.g. from $\mathcal{N}(0, I)$, and orthogonally project them onto $T_x\mathcal{M}$ using $P_x$. These vectors, although not orthogonal yet, form a basis for $T_x\mathcal{M}$. Next we concatenate them with $m-d$ random vectors and apply a simple QR decomposition to retrieve an orthogonal basis. Using $Q$ we may rewrite equation (12) as follows:
$$(\nabla_g\cdot V_k)(x) = \operatorname{tr}\left(QQ^\top P_x \frac{\mathrm{d}v_k}{\mathrm{d}x} P_x\right) = \operatorname{tr}\left((P_x Q)^\top \frac{\mathrm{d}v_k}{\mathrm{d}x} P_x Q\right) = \sum_{j=1}^d e_j^\top \frac{\mathrm{d}v_k}{\mathrm{d}x}\, e_j \qquad (13)$$
where we used (1) the orthogonality of $Q$, (2) the cyclic property of the trace, and (3) the fact that $P_x e_j = e_j$ and $P_x n_j = 0$. In practice, concatenation with the remaining $m-d$ vectors is not needed, as they are effectively not used in computing the divergence; this speeds up computation when $m \gg d$. Moreover, each vector-Jacobian product can be computed in $O(m)$ time using reverse-mode autograd, and importantly does not require the closest-point projection $\pi$.

Projected Hutchinson. When QR is too expensive for higher dimensional problems, the Hutchinson trace estimator (Hutchinson, 1989) can be employed within the extrinsic view representation (12). For example, let $z$ be a standard normal vector (or a Rademacher vector); we have $(\nabla_g\cdot V_k)(x) = \mathbb{E}_{z\sim\mathcal{N},\, z' = P_x z}\big[z'^\top \frac{\mathrm{d}v_k}{\mathrm{d}x} z'\big]$. Different from a direct application of the trace estimator to the closest-point method, we directly project the random vector onto the tangent subspace. Therefore, the closest-point projection is again not needed.
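The two estimators above translate directly into automatic differentiation code. Below is a hedged sketch (our own, not the authors' released implementation) in PyTorch: `v` is assumed to be a callable returning the ambient-space coefficients of a tangential vector field, and `P` the precomputed tangential projection $P_x$ (for instance $I - xx^\top$ on the unit sphere).

```python
# Sketches of the QR-based divergence, eq. (13), and the projected Hutchinson estimator.
import torch

def qr_divergence(v, x, P, d):
    """sum_j e_j^T (dv/dx) e_j with {e_j} an orthonormal basis of the tangent space."""
    m = x.shape[-1]
    basis = P @ torch.randn(m, d)          # project d random ambient vectors onto T_x M
    E, _ = torch.linalg.qr(basis)          # (m x d) orthonormal tangent basis
    x = x.detach().requires_grad_(True)
    div = torch.zeros(())
    for j in range(d):
        e = E[:, j]
        out = v(x)
        # reverse-mode vector-Jacobian product e^T (dv/dx); pass create_graph=True
        # if the divergence itself appears inside a training loss.
        (vjp,) = torch.autograd.grad(out, x, grad_outputs=e)
        div = div + vjp @ e
    return div

def projected_hutchinson(v, x, P, n_samples=16):
    """Unbiased estimate of eq. (12): E_z[(P_x z)^T (dv/dx) (P_x z)], z ~ N(0, I)."""
    x = x.detach().requires_grad_(True)
    total = torch.zeros(())
    for _ in range(n_samples):
        z = P @ torch.randn_like(x)
        out = v(x)
        (vjp,) = torch.autograd.grad(out, x, grad_outputs=z)
        total = total + vjp @ z
    return total / n_samples

# Toy check on the unit sphere: a rotation (Killing) field has zero Riemannian divergence.
x = torch.tensor([0.0, 0.0, 1.0])
P = torch.eye(3) - torch.outer(x, x)
c = torch.tensor([0.0, 0.0, 1.0])
v = lambda y: torch.cross(y, c, dim=0)     # tangential everywhere on the sphere
print(qr_divergence(v, x, P, d=2), projected_hutchinson(v, x, P))   # both approximately 0
```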
3.2 Fixed-inference parameterization

Following prior work (Sohl-Dickstein et al., 2015; Ho et al., 2020; Huang et al., 2021), we let the inference SDE (9) be defined as a simple noise process taking observed data to unstructured noise:
$$\mathrm{d}Y = U_0\,\mathrm{d}t + V \circ \mathrm{d}\hat{B}_s, \qquad (14)$$
where $U_0 = \tfrac12 \nabla_g \log p_0$ and $V$ is the tangential projection matrix; that is, $V_k(f)(x) = \sum_{j=1}^m (P_x)_{jk}\frac{\partial f}{\partial x_j}$ for any smooth function $f$. This is known as the Riemannian Langevin diffusion (Girolami & Calderhead, 2011). As long as $p_0$ satisfies a log-Sobolev inequality, the marginal distribution of $Y_s$ (i.e. the aggregated posterior) converges to $p_0$ at a linear rate in the KL divergence (Wang et al., 2020). For compact manifolds, we set $p_0$ to be the uniform density, which means $U_0 = 0$, and (14) reduces to the extrinsic construction of Brownian motion on $\mathcal{M}$ (Hsu, 2002, Section 1.2). The benefits of this fixed-inference parameterization are the following:

Stable and Efficient Training. With the fixed-inference parameterization we do not need to optimize the vector fields that generate $Y_s$, and the Riemannian CT-ELBO can be rewritten as:
$$\mathbb{E}[\log p_0(Y_T)] - \int_0^T \mathbb{E}_{Y_s}\left[\tfrac12\|a(Y_s,s)\|_2^2 + \nabla_g\cdot\Big(V_0 - \tfrac12 (V\cdot\nabla_g)V\Big)\,\middle|\, Y_0 = x\right]\mathrm{d}s, \qquad (15)$$
where the first term is a constant w.r.t. the model parameters (or it can be optimized separately if we want to refine the prior), and the time integral of the second term can be estimated via importance sampling (see Section 3.3). A sample of $Y_s$ can be drawn cheaply by numerically integrating (14), without requiring a stringent error tolerance (see Section 5.2 for an empirical analysis), which allows us to estimate the time integral in (15) by evaluating $a(Y_s, s)$ at a single time step $s$ only.

Simplified Riemannian CT-ELBO. The CT-ELBO can be simplified, as the differential operator $V\cdot\nabla_g$ applied to $V$ yields a zero vector when $V$ is the tangential projection.

Proposition 2. If $V$ is the tangential projection matrix, then $(V\cdot\nabla_g)V = 0$.

This means that we can express the generative SDE's $V_0$ using the variational parameter $a$ via
$$\mathrm{d}X = \big(V a(X, T-t) - U_0(X, T-t)\big)\,\mathrm{d}t + V \circ \mathrm{d}\hat{B}_t, \qquad (16)$$
with the corresponding Riemannian CT-ELBO:
$$\mathbb{E}[\log p_0(Y_T)] - \int_0^T \mathbb{E}_{Y_s}\left[\tfrac12\|a\|_2^2 + \nabla_g\cdot(Va - U_0)\,\middle|\, Y_0 = x\right]\mathrm{d}s. \qquad (17)$$

3.3 Variance reduction

The inference process can be more generally defined to account for a time reparameterization. In fact, this leads to an equivalent model if one can find an invariant representation of the temporal variable. Learning this time rescaling can help to reduce variance (Kingma et al., 2021). In principle, we can adopt the same methodology, but this would further complicate the parameterization of the model. Alternatively, we opt for a simpler view of variance reduction via importance sampling. We estimate the time integral "$\int\cdots\,\mathrm{d}s$" in (17) using the following estimator:
$$I := \frac{1}{q(s)}\Big(\tfrac12\|a\|_2^2 + \nabla_g\cdot(Va - U_0)\Big) \quad \text{where } s \sim q(s) \text{ and } Y_s \sim q(Y_s \mid Y_0), \qquad (18)$$
where $q(s)$ is a proposal density supported on $[0, T]$. We parameterize $q(s)$ using a 1D monotone flow (Huang et al., 2018). As the expected value of this estimator is the same as the time integral in (17), it is unbiased. However, this also means we cannot train the proposal distribution $q(s)$ by maximizing this objective, since the gradient w.r.t. the parameters of $q(s)$ is zero in expectation. Instead, we minimize the variance of the estimator by following the stochastic gradient w.r.t. $q(s)$:
$$\nabla_{q(s)}\operatorname{Var}(I) = \nabla_{q(s)}\mathbb{E}[I^2] - \underbrace{\nabla_{q(s)}\mathbb{E}[I]^2}_{=\,0} = \nabla_{q(s)}\mathbb{E}[I^2]. \qquad (19)$$
The latter can be optimized using the reparameterization trick (Kingma & Welling, 2014) and is a well-known variance reduction method in a multitude of settings (Luo et al., 2020; Tucker et al., 2017). It can be seen as minimizing the $\chi^2$-divergence from a density proportional to the magnitude of $\mathbb{E}_{Y_s}[I]$ (Dieng et al., 2017; Müller et al., 2019).
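A compact sketch of how (18) and (19) fit together in a training step is given below. This is our own illustration rather than the paper's code: `integrand` stands for $\tfrac12\|a\|_2^2 + \nabla_g\cdot(Va - U_0)$ evaluated at $(Y_s, s)$, `sample_ys` simulates the inference SDE (14), and a uniform proposal is used in place of the 1D monotone flow.

```python
import torch
from torch.distributions import Uniform

def time_integral_losses(y0, integrand, sample_ys, proposal):
    """One-sample estimate I of eq. (18) plus the variance-reduction loss of eq. (19)."""
    s = proposal.rsample()                      # s ~ q(s), reparameterized
    q_s = proposal.log_prob(s).exp()
    ys = sample_ys(y0, s)                       # Y_s ~ q(Y_s | Y_0 = y0)
    I = integrand(ys, s) / q_s                  # unbiased for the time integral in (17)
    model_loss = I                              # gradient w.r.t. the parameters of a
    proposal_loss = I.pow(2)                    # minimizing E[I^2] minimizes Var(I), eq. (19)
    return model_loss, proposal_loss

# Toy usage with hypothetical stand-ins; in practice one detaches the model's parameters in
# proposal_loss, and the proposal's parameters in model_loss, before calling backward().
T = 1.0
proposal = Uniform(torch.tensor(0.0), torch.tensor(T))
integrand = lambda y, s: 0.5 * (y ** 2).sum()                   # placeholder integrand
sample_ys = lambda y0, s: y0 + s.sqrt() * torch.randn_like(y0)  # placeholder inference sampler
model_loss, proposal_loss = time_integral_losses(torch.randn(3), integrand, sample_ys, proposal)
```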
3.4 Connection to score matching

In the Euclidean case, it can be shown that maximizing the variational lower bound of the fixed-inference diffusion model (16) is equivalent to score matching (Ho et al., 2020; Huang et al., 2021; Song et al., 2021a). In this section, we extend this connection to its Riemannian counterpart. Let $q(y_s, s)$ be the density of $Y_s$ following (14), marginalizing out the data distribution $q(y_0, 0)$. The score function is the Riemannian gradient of the log-density, $\nabla_g \log q$. The following theorem tells us that we can create a family of inference and generative SDEs that induce the same marginal distributions over $Y_s$ and $X_{T-s}$ as (16) if we have access to its score.

Theorem 3 (Marginally equivalent SDEs). For $\gamma \le 1$, the marginal distributions of $X_{T-s}$ and $Y_s$ of the processes defined below,
$$\mathrm{d}Y = \Big(U_0 - \tfrac{\gamma}{2}\nabla_g\log q\Big)\,\mathrm{d}s + \sqrt{1-\gamma}\;V \circ \mathrm{d}\hat{B}_s, \quad Y_0 \sim q(\cdot, 0) \qquad (20)$$
$$\mathrm{d}X = \Big(\big(1 - \tfrac{\gamma}{2}\big)\nabla_g\log q - U_0\Big)\,\mathrm{d}t + \sqrt{1-\gamma}\;V \circ \mathrm{d}\hat{B}_t, \quad X_0 \sim q(\cdot, T) \qquad (21)$$
both have the density $q(\cdot, s)$. In particular, $\gamma = 1$ gives rise to an equivalent ODE.

This suggests that if we can approximate the score function and plug it into the reverse process (21), we obtain a time-reversed process that induces approximately the same marginals.

Theorem 4 (Score matching equivalency). For $\gamma < 1$, let $\mathcal{E}^\gamma$ denote the Riemannian CT-ELBO of the generative process (21), with $\nabla_g\log q$ replaced by an approximate score $S_\theta$, and with (20) being the inference SDE. Assume $S_\theta$ is a compactly supported smooth vector field. Then
$$\mathbb{E}_{Y_0}[\mathcal{E}^\gamma] = -C_1\int_0^T \mathbb{E}_{Y_s}\big[\|S_\theta - \nabla_g\log q\|_g^2\big]\,\mathrm{d}s + C_2 \qquad (22)$$
where $C_1 > 0$ and $C_2$ are constants w.r.t. $\theta$.

The first implication of the theorem is that maximizing the Riemannian CT-ELBO of the plug-in reverse process is equivalent to minimizing the Riemannian score-matching loss. Second, if we set $\gamma = 0$, from (135) (in the appendix) we have $Va = S_\theta$, which is exactly the fixed-inference training in §3.2. That is, the vector $Va$ trained using equation (17) is actually an approximate score, allowing us to extract an equivalent ODE by substituting $Va$ for $\nabla_g\log q$ in (20, 21) and setting $\gamma = 1$.

4 Related work

Diffusion models. Diffusion models can be viewed from two different but ultimately complementary perspectives. The first approach leverages score-based generative models (Song & Ermon, 2019; Song et al., 2021b), while the second approach treats generative modeling as inverting a fixed noise-injecting process (Sohl-Dickstein et al., 2015; Ho et al., 2020). Finally, continuous-time diffusion models can also be embedded within a maximum likelihood framework (Huang et al., 2021; Song et al., 2021a), which represents the special case of prescribing a flat geometry, i.e. Euclidean, to the generative model, and is completely generalized by the theory developed in this work.

Riemannian Generative Models. Generative models beyond Euclidean manifolds have recently risen to prominence, with early efforts focusing on constant curvature manifolds (Bose et al., 2020; Rezende et al., 2020). Another line of work extends continuous-time flows (Chen et al., 2018a) to more general Riemannian manifolds (Lou et al., 2020; Mathieu & Nickel, 2020; Falorsi & Forré, 2020). To avoid explicitly solving an ODE during training, Rozen et al. (2021) propose Moser Flow, whose objective involves computing the Riemannian divergence of a parametrized vector field. Concurrent to our work, De Bortoli et al. (2022) develop Riemannian score-based generative models for compact manifolds like the sphere. While similar in endeavor, RDMs are couched within the maximum likelihood framework. As a result, our approach is directly amenable to variance reduction techniques via importance sampling and to likelihood estimation. Moreover, our approach is also applicable to non-compact manifolds such as hyperbolic spaces, and we demonstrate this in our experiments on a larger variety of manifolds, including the orthogonal group and toroids.

5 Experiments

We investigate the empirical caliber of RDMs on a range of manifolds. We instantiate RDMs by parametrizing $a$ in (16) using an MLP and maximizing the CT-ELBO (17). We report our detailed training procedure, including selected hyperparameters, for all models in §D.
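Since the experiments below repeatedly need samples of $Y_s$ from the inference SDE (14), we include a small sketch (ours; the paper's exact integrator may differ) of a project-and-retract Euler scheme on the unit sphere, where the uniform prior gives $U_0 = 0$ and the retraction is simply renormalization.

```python
import math
import torch

def inference_sde_sphere(y0, s, n_steps=100):
    """Approximately sample Y_s | Y_0 = y0 for eq. (14) on the unit sphere (U_0 = 0)."""
    y = y0 / y0.norm(dim=-1, keepdim=True)
    dt = float(s) / n_steps
    for _ in range(n_steps):
        xi = math.sqrt(dt) * torch.randn_like(y)
        xi = xi - (xi * y).sum(dim=-1, keepdim=True) * y   # tangential projection I - y y^T
        y = y + xi
        y = y / y.norm(dim=-1, keepdim=True)               # retract back onto the sphere
    return y

y_s = inference_sde_sphere(torch.tensor([[0.0, 0.0, 1.0]]), s=0.5)
```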
5.1 Sphere

For spherical manifolds, we model the datasets compiled by Mathieu & Nickel (2020), which consist of earth and climate science events on the surface of the earth, such as volcanoes (NGDC/WDS, 2022b), earthquakes (NGDC/WDS, 2022a), floods (Brakenridge, 2017), and fires (EOSDIS, 2020). In Table 1, for each dataset we report the average and standard deviation of the test negative log-likelihood over 5 runs with different splits of the dataset. In Figure 1 we plot the model density in blue, while the test data is depicted with red dots.

Variance reduction. We demonstrate the effect of applying variance reduction when optimizing the Riemannian CT-ELBO (17) using the earthquake dataset. As shown in Figure 2, learning an importance sampling proposal effectively lowers the variance and speeds up training.

5.2 Tori

For tori, we use the list of 500 high-resolution proteins compiled in Lovell et al. (2003) and select 113 RNA sequences listed in Murray et al. (2003). Each macromolecule is divided into multiple monomers, and the joint structure is discarded; we model the lower dimensional density of the backbone conformation of the monomer. For the protein data, this corresponds to 3 torsion angles of the amino acid. As one of the angles is normally 180°, we also discard it, and model the density over the 2D torus. For the RNA data, the monomer is a nucleotide described by 7 torsion angles in the backbone, represented by a 7D torus. For protein, we divide the dataset by the type of side chain attached to the amino acid, resulting in 4 datasets, and we discard the nucleobases of the RNA. In Table 2 we report the NLL of our model. Our baseline is a mixture of 4,096 power spherical distributions (De Cao & Aziz, 2020, MoPS). We observe that RDM outperforms the baseline across the board, and the difference is most noticeable for the RNA data, which has a higher dimensionality.

Numerical integration ablation. We estimate the loss (17) by integrating the inference SDE on $\mathcal{M}$. To study the effect of integration error, we experiment with various numbers of time steps evenly spaced between $[0, s]$ on Glycine. Also, as we can directly sample Brownian motion on tori without numerical integration, we use it as a reference (termed direct loss) for comparison. Figure 3 shows that while fewer time steps tend to underestimate the loss, the model trained with 100 time steps is already indistinguishable from the one trained with direct sampling. We also find that numerical integration is not a significant overhead, as each experiment takes approximately the same wall-clock time with identical setups. This is because the inference path does not involve the neural module $a$.
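The "direct loss" reference above exploits the fact that on the flat torus, Brownian motion is simply wrapped Euclidean Brownian motion, so $Y_s$ can be sampled in closed form. A one-line sketch (our own illustration):

```python
import math
import torch

def torus_brownian(y0, s):
    """Exact sample of Brownian motion at time s on the flat d-torus [0, 2*pi)^d."""
    return (y0 + math.sqrt(s) * torch.randn_like(y0)) % (2 * math.pi)

angles = torus_brownian(torch.zeros(7), s=0.3)   # e.g. the 7 backbone torsion angles of a nucleotide
```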
5.3 Hyperbolic Manifolds

Hyperbolic manifolds provide an example whose closest-point projection is not cheap to obtain, and a claimed closest-point projection in recent literature is in fact not the closest Euclidean projection (Skopek et al., 2019) (see §C for more details). To demonstrate the generality of our framework, we model the synthetic datasets in Figure 5, first introduced by Bose et al. (2020) and Lou et al. (2020). Since hyperbolic manifolds are not compact, we need a non-zero drift to ensure the inference process is not dissipative. We define the prior as the standard normal distribution on the yz-plane and let $U_0$ be $\tfrac12\nabla_g\log p_0$, so that $Y_s$ will revert back to the origin.

5.4 Special Orthogonal Group

Another example whose closest-point projection is expensive to compute is the orthogonal group, as it requires performing a singular value decomposition. To evaluate our framework on this matrix group, we generate data using the synthetic multimodal density defined on SO(3) from Brofos et al. (2021). We view it as a submanifold embedded in $\mathbb{R}^{3\times 3}$, therefore $d = 3$ and $m = 9$. We use the projected Hutchinson estimator to estimate the Riemannian divergence. Since the data are 3D rotation matrices, we can visualize them using the Euler angles. We plot the data density and the learned model density in Figure 6, where each coordinate represents the rotation around that particular axis.

6 Conclusion

In this paper, we introduce RDMs, which extend continuous-time diffusion models to arbitrary Riemannian manifolds, including challenging non-compact manifolds like hyperbolic spaces. We provide a variational framework to train RDMs by optimizing a novel objective, the Riemannian Continuous-Time ELBO. To enable efficient and stable training we provide several key tools, such as a fixed-inference parameterization of the SDE in the ambient space, new methodological techniques to compute the Riemannian divergence, as well as an importance sampling procedure with respect to the time integral to reduce the variance of the loss. On the theoretical front, we also show deep connections between our proposed variational framework and Riemannian score matching through the construction of marginally equivalent SDEs. Finally, we complement our theory by constructing RDMs that achieve state-of-the-art performance on density estimation for geoscience datasets, protein/RNA data on toroidal manifolds, and synthetic data on hyperbolic and orthogonal-group manifolds.
1. What is the focus and contribution of the paper on continuous-time diffusion models?
2. What are the strengths of the proposed approach, particularly in terms of its novel application and connection to score-based models?
3. What are the weaknesses of the paper regarding experiment analysis and training/sampling complexity?
4. Why is there no comparison with the Riemannian Score-Based Model in some tasks?
5. Is there any theoretical reason for diffusion models to perform better in the Riemann space?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper / Strengths And Weaknesses / Questions / Limitations
Summary Of The Paper: This paper extends the application of continuous-time diffusion models to Riemannian manifolds. The paper proposes a new method of calculating the Riemannian divergence for computing the Riemannian variational lower bound objective function. In particular, the authors also make an insightful connection with Riemannian score matching.

Strengths And Weaknesses:
Strengths:
• Overall, the paper was well organized and clearly written. The shadowed boxes on the salient equations help follow the derivations.
• The idea is a novel application/expansion of diffusion models, as they have (to the extent of my knowledge) not been explored on non-Euclidean spaces.
• The connection to score-based models (in the Riemannian case) is a welcome and insightful addition.

Weaknesses:
• The experiments show SOTA results and seem very promising; however, there seems to be a lack of analysis of the results.
• Limited information on training/sampling complexity.

Additional Remarks: Pg. 4 Line 136 "showed that that..." typo.

Questions:
• Why is there only one baseline model comparison in the tori results, specifically not having the comparison with the Riemannian Score-Based Model?
• In some tasks, SGMs have performed better than diffusion models in certain image generation settings (e.g. CIFAR-10, CelebA 256x256). In this task, it seems that this model performs better than its score-based counterpart. Is there any reason to believe that diffusion models should theoretically perform better in the Riemannian space?
• Projection can be a difficult task, especially projecting onto a high dimensional complex plane. Does this have any detrimental effects on the training/sampling time?

Limitations: The authors address their limitations (i.e. algorithmic decisions).
NIPS
1. What is the focus and contribution of the paper on likelihood estimation of diffusion models?
2. What are the strengths of the proposed approach, particularly in terms of technical aspects and mathematical interest?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. Do you have any concerns or questions about the paper's content, such as the explanation of certain concepts or computational issues?
5. Are there any limitations or potential drawbacks of the proposed method that should be acknowledged?
Summary Of The Paper / Strengths And Weaknesses / Questions / Limitations
Summary Of The Paper: The paper derives a variational framework for likelihood estimation of diffusion models on Riemannian manifolds. This is based on prior work on continuous-time ELBOs and score matching. The authors generalize this to Riemannian manifolds and address computational issues, including fast computation of the Riemannian divergence. The methods are experimentally demonstrated on several manifolds.

Strengths And Weaknesses:
Strengths:
• Well-written and technically sound paper.
• Addresses an important class of problems.
• The presented techniques are mathematically interesting and non-trivial.
• The paper couples stochastic process theory, variational inference, and geometry.

Weaknesses:
• The CT-ELBO was derived in an earlier paper in the Euclidean setting. To some extent, the extension to the geometric context is not groundbreaking. However, it is in no way trivial either.

In general, I found the paper a very interesting read.

Questions: Lines 172-173: I didn't get what is meant by "is evaluated only once as opposed to estimating the ELBO by integrating over the entire path of Y_s". What is evaluated only once? The entire sample path is still used?

Limitations: Yes.
NIPS
Title: Factorizable Graph Convolutional Networks

Abstract: Graphs have been widely adopted to denote structural connections between entities. The relations are in many cases heterogeneous, but entangled together and denoted merely as a single edge between a pair of nodes. For example, in a social network graph, users in different latent relationships like friends and colleagues are usually connected via a bare edge that conceals such intrinsic connections. In this paper, we introduce a novel graph convolutional network (GCN), termed as factorizable graph convolutional network (FactorGCN), that explicitly disentangles such intertwined relations encoded in a graph. FactorGCN takes a simple graph as input, and disentangles it into several factorized graphs, each of which represents a latent and disentangled relation among nodes. The features of the nodes are then aggregated separately in each factorized latent space to produce disentangled features, which further leads to better performances for downstream tasks. We evaluate the proposed FactorGCN both qualitatively and quantitatively on synthetic and real-world datasets, and demonstrate that it yields truly encouraging results in terms of both disentangling and feature aggregation. Code is publicly available at https://github.com/ihollywhy/FactorGCN.PyTorch.

1 Introduction

Disentangling aims to factorize an entity, like a feature vector, into several interpretable components, so that the behavior of a learning model can be better understood. In recent years, many approaches have been proposed towards tackling disentangling in deep neural networks and have achieved promising results. Most prior efforts, however, have been focused on the disentanglement of convolutional neural networks (CNNs), especially the auto-encoder architecture, where disentangling takes place during the stage of latent feature generation. For example, VAE [1] restrains the distribution of the latent features to Gaussian and generates disentangled representations; β-VAE [2] further improves the disentangling by introducing β to balance the independence constraints and reconstruction accuracy.

Despite the many prior efforts in CNN disentangling, there are few endeavors toward disentangling in the irregular structural domain, where graph convolutional network (GCN) models are applied. Meanwhile, the inherent differences between grid-like data and structural data preclude applying CNN-based disentangling methods to GCN ones. The works of [3, 4], as pioneering attempts, focus on node-level neighbour partition and ignore the latent multi-relations among nodes.

We introduce in this paper a novel GCN that aims to explicitly conduct graph-level disentangling, based on which convolutional features are aggregated. Our approach, termed as factorizable graph convolutional network (FactorGCN), takes as input a simple graph, and decomposes it into several factor graphs, each of which corresponds to a disentangled and interpretable relation space, as shown in Fig. 1. Each such graph then undergoes a GCN, tailored to aggregate features only from one disentangled latent space, followed by a merging operation that concatenates all derived features from the disentangled spaces, so as to produce the final block-wise interpretable features. These steps constitute one layer of the proposed FactorGCN.
As the output graph with updated features shares the identical topology as the input, nothing prevents us from stacking a number of layers to disentangle the input data at different levels, yielding a hierarchical disentanglement with various numbers of factor graphs at different levels. FactorGCN, therefore, potentially finds application in a wide spectrum of scenarios. In many real-world graphs, multiple heterogeneous relations between nodes are mixed and collapsed into one single edge. In the case of social networks, two people may be friends, colleagues, and living in the same city simultaneously, but linked via one single edge that omits such interconnections; in the co-purchasing scenario [5], products are bought together for different reasons like promotion and functional complementarity, but such reasons are often ignored in the graph construction. FactorGCN would, in these cases, deliver a disentangled and interpretable solution towards explaining the underlying rationale, and provide discriminative learned features for the target task. Specifically, the contributions of FactorGCN are summarized as follows.
• Graph-level Disentangling. FactorGCN conducts disentangling and produces block-wise interpretable node features by analyzing the whole graph all at once, during which process the global-level topological semantics, such as the higher-order relations between edges and nodes, are explicitly accounted for. The disentangled factor graphs reveal latent-relation-specific interconnections between the entities of interest, and yield interpretable features that benefit the downstream tasks. This scheme therefore contrasts with the prior approaches of [3, 4], where the disentanglement takes place only within a local neighborhood, without accounting for global contexts.
• Multi-relation Disentangling. Unlike prior methods that decode only a single attribute for a neighboring node, FactorGCN enables multi-relation disentangling, meaning that the center node may aggregate information from a neighbour under multiple types of relations. This mechanism is crucial since real-world data may contain various relations among the same pair of entities. In the case of a social network graph, for example, FactorGCN would produce disentangled results allowing two users to be both friends and living in the same city; such multi-relation disentangling is not supported by prior GCN methods.
• Quantitative Evaluation Metric. Existing quantitative evaluation methods [6, 7] in the grid domain rely on generative models, like auto-encoders [8] or GANs [9]. Yet in the irregular domain, unfortunately, state-of-the-art graph generative models are only applicable for generating small graphs, or larger ones without features. Moreover, these models comprise a sequential generation step, making them infeasible to integrate into graph disentangling frameworks. To this end, we propose a graph edit-distance based metric, which bypasses the generation step and estimates the similarity between the factor graphs and the ground truth.
We conducted experiments on five datasets in various domains, and demonstrate that the proposed FactorGCN yields state-of-the-art performance for both disentanglement and downstream tasks. This indicates that, even putting aside its disentangling capability, FactorGCN may well serve as a general GCN framework.
Specifically, on the ZINC dataset [10], FactorGCN outperforms other methods by a large margin, and, without the bond information of the edges, FactorGCN achieves a performance on par with the state-of-the-art method that explicitly utilizes edge-type information.
2 Related Work
Disentangled representation learning. Learning disentangled representations has recently emerged as a significant task towards interpretable AI [11, 12]. Unlike earlier attempts that rely on handcrafted disentangled representations or variables [13, 14], most of the recent works in disentangled representation learning are based on the architecture of the auto-encoder [2, 15, 16, 7, 17, 8] or the generative model [9, 18, 19]. One mainstream auto-encoder approach is to constrain the latent feature generated from the encoder to make it independent in each dimension. For example, VAE [1] constrains the distribution of the latent features to Gaussian; β-VAE [2] enlarges the weight of the KL divergence term to balance the independence constraints and reconstruction accuracy; [20] disentangles the latent features by ensuring that each block of latent features cannot be predicted from the rest; DSD [15] swaps some of the latent features twice to achieve semi-supervised disentanglement. For the generative models, extra information is introduced during the generation. For example, InfoGAN [9] adds a class code to the model and maximizes the mutual information between the generated data and the class code.
Graph convolutional network. The graph convolutional network (GCN) has shown its potential in the non-grid domain [21–26], achieving promising results on various types of structural data, like citation graphs [27], social graphs [28], and relational graphs [29]. Besides designing GCNs to better extract information from non-grid data, there are also a couple of works that explore disentangled GCNs [30, 4]. DisenGCN [3] adopts a neighbourhood routing mechanism to divide the neighbours of a node into several mutually exclusive parts. IPGDN [4] improves DisenGCN by making the different parts of the embedded feature independent. Despite the results of these previous works, several problems remain: the disentanglement is at the node level, which does not consider the information of the whole graph, and there are no quantitative metrics to evaluate the performance of disentanglement.
3 Method
In this section, we give a detailed description of the architecture of FactorGCN, whose basic component is the disentangle layer, as shown in Fig. 1.
3.1 Disentangling Step
The goal of this step is to factorize the input graph into several factor graphs. To this end, we treat the edges equally across the whole graph. The mechanism we adopt to generate these factorized coefficients is similar to that of the graph attention network [27]. We denote the input of the disentangle layer as $h = \{h_0, h_1, ..., h_n\}$, $h_i \in \mathbb{R}^F$, and $e = \{e_0, e_1, ..., e_m\}$, $e_k = (h_i, h_j)$. $h$ denotes the set of nodes with features of dimension $F$, and $e$ denotes the set of edges. The input nodes are first transformed to a new space by multiplying the node features with a linear transformation matrix $W \in \mathbb{R}^{F' \times F}$. This is a standard operation in most GCN models, which increases the capacity of the model.
The transformed features are then used to generate the factor coefficients as follows:
$$E^e_{ij} = \frac{1}{1 + e^{-\Psi_e(h'_i,\, h'_j)}}, \qquad h' = Wh, \qquad (1)$$
where $\Psi_e$ is the function that takes the features of node $i$ and node $j$ as input and computes the attention score of the edge for factor graph $e$, and takes the form of a one-layer MLP in our implementation; $E^e_{ij}$ is then obtained by normalizing the attention score to $[0, 1]$, representing the coefficient of the edge from node $i$ to node $j$ in the factor graph $e$; $h'$ is the transformed node feature, shared across all functions $\Psi_*$. Different from most previous forms of attention-based GCNs that normalize the attention coefficients among all the neighbours of a node, our proposed model generates these coefficients directly as the factor graph. Once all the coefficients are computed, a factor graph $e$ can be represented by its own $E^e$, which will be used for the next aggregation step.
However, without any other constraint, some of the generated factor graphs may contain a similar structure, degrading the disentanglement performance and capacity of the model. We therefore introduce an additional head in the disentangle layer, aiming to avoid the degradation of the generated factor graphs. The motivation of the additional head is that a well-disentangled factor graph should have enough information to be distinguished from the rest, based only on its structure. Obtaining the solution in which all the disentangled factor graphs differ from each other to the maximal degree, unfortunately, is not trivial. We thus approximate the solution by giving unique labels to the factor graphs and optimizing the factor graphs as a graph classification problem. Our additional head serves as a discriminator, shown in Eq. 2, to distinguish which label a given graph has:
$$G_e = \mathrm{Softmax}\big(f\big(\mathrm{Readout}(\mathcal{A}(E^e, h'))\big)\big). \qquad (2)$$
The discriminator contains a three-layer graph auto-encoder $\mathcal{A}$, which takes the transformed feature $h'$ and the generated attention coefficients of factor graph $E^e$ as inputs, and generates the new node features. These features are then read out to generate the representation of the whole factor graph. Next, the feature vectors are sent to a classifier with one fully connected layer. Note that all the factor graphs share the same node features, making sure that the information discovered by the discriminator only comes from the differences among the structures of the factor graphs. More details about the discriminator architecture can be found in the supplementary materials. The loss used to train the discriminator is taken as follows:
$$L_d = -\frac{1}{N}\sum_{i}^{N}\Big(\sum_{c=1}^{N_e} \mathbb{1}_{e=c}\,\log(G^e_i[c])\Big), \qquad (3)$$
where $N$ is the number of training samples, set to be the number of input graphs multiplied by the number of factor graphs; $N_e$ is the number of factor graphs; $G^e_i$ is the predicted distribution of sample $i$ and $G^e_i[c]$ represents the probability that the generated factor graph has label $c$; $\mathbb{1}_{e=c}$ is an indicator function, taken to be one when the label $c$ is the correct one for factor graph $e$.
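To make the disentangling step concrete, the following is a minimal PyTorch-style sketch of Eq. 1: a shared linear transform followed by one per-factor scoring MLP $\Psi_e$ whose sigmoid output gives the edge coefficients $E^e_{ij}$. The class and argument names (DisentangleHead, num_factors, edge_index) are illustrative assumptions and are not taken from the released FactorGCN code.

```python
import torch
import torch.nn as nn

class DisentangleHead(nn.Module):
    """Minimal sketch of the disentangling step (Eq. 1): one Psi_e per factor graph."""
    def __init__(self, in_dim, out_dim, num_factors):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared transform h' = Wh
        # one-layer MLP Psi_e per factor graph, scoring the pair (h'_i, h'_j)
        self.psi = nn.ModuleList(
            [nn.Linear(2 * out_dim, 1) for _ in range(num_factors)]
        )

    def forward(self, h, edge_index):
        # h: (n, F) node features; edge_index: (2, m) source/target node indices
        h_prime = self.W(h)
        src, dst = edge_index
        pair = torch.cat([h_prime[src], h_prime[dst]], dim=-1)  # (m, 2F')
        # E^e_{ij} = sigmoid(Psi_e(h'_i, h'_j)) for each factor graph e
        coeffs = [torch.sigmoid(psi_e(pair)).squeeze(-1) for psi_e in self.psi]
        return h_prime, coeffs  # coeffs[e]: (m,) edge weights of factor graph e
```

Here each factor graph is represented simply by its vector of edge coefficients over the input edge set, which matches the description that the coefficients are generated directly rather than normalized over a node's neighbourhood.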
3.2 Aggregation Step
As the factor graphs derived from the disentangling step are optimized to be as diverse as possible, in the aggregation step we use the generated factor graphs to aggregate information in different structural spaces. This step is similar to most GCN models, where the new node feature is generated by taking a weighted sum over its neighbors. Our aggregation mechanism is based on the simplest one, which is used in GCN [28]. The only difference is that the aggregation takes place independently for each of the factor graphs. The aggregation process is formulated as
$$h^{(l+1)e}_i = \sigma\Big(\sum_{j \in \mathcal{N}_i} \frac{E^e_{ij}}{c_{ij}}\, h^{(l)}_j W^{(l)}\Big), \qquad c_{ij} = \big(|\mathcal{N}_i|\,|\mathcal{N}_j|\big)^{1/2}, \qquad (4)$$
where $h^{(l+1)e}_i$ represents the new feature for node $i$ in layer $l+1$ aggregated from the factor graph $e$; $\mathcal{N}_i$ represents all the neighbours of node $i$ in the input graph; $E^e_{ij}$ is the coefficient of the edge from node $i$ to node $j$ in the factor graph $e$; $c_{ij}$ is the normalization term that is computed according to the degrees of node $i$ and node $j$; $W^{(l)}$ is a linear transformation matrix, which is the same as the matrix used in the disentangling step. Note that although we use all the neighbours of a node in the input graph to aggregate information, some of them make no contribution if the corresponding coefficient in the factor graph is zero.
3.3 Merging Step
Once the aggregation step is complete, different factor graphs will lead to different node features. We merge the features generated from different factor graphs by applying
$$h^{(l+1)}_i = \big\Vert_{e=1}^{N_e}\, h^{(l+1)e}_i, \qquad (5)$$
where $h^{(l+1)}_i$ is the output feature of node $i$; $N_e$ is the number of factor graphs; $\Vert$ represents the concatenation operation.
3.4 Architecture
We discussed above the design of one disentangle layer, which contains three steps. The FactorGCN model we use in the experimental section contains several such disentangle layers, increasing its expressive power. Moreover, by setting different numbers of factor graphs in different layers, the proposed model can disentangle the input data in a hierarchical manner. The total loss to train the FactorGCN model is $L = L_t + \lambda L_d$. $L_t$ is the loss of the original task, which is taken to be a binary cross-entropy loss for the multi-label classification task, a cross-entropy loss for the multi-class classification task, or an L1 loss for the regression task. $L_d$ is the loss of the discriminator mentioned above. $\lambda$ is the weight that balances these two losses.
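The following is a minimal sketch of the per-factor aggregation (Eq. 4) and the merging step (Eq. 5), written with dense coefficient matrices for brevity. The function name and the dense representation are illustrative assumptions rather than the paper's actual implementation, which works on edge lists.

```python
import torch

def aggregate_and_merge(h, factor_coeffs, adj, W, act=torch.relu):
    """Sketch of Eqs. 4-5: aggregate per factor graph, then concatenate.
    h: (n, F) node features; adj: (n, n) 0/1 input adjacency;
    factor_coeffs: list of (n, n) coefficient matrices E^e (zero off-edge);
    W: (F, F') shared linear transform."""
    deg = adj.sum(dim=1).clamp(min=1)                    # |N_i|
    c = torch.sqrt(deg.unsqueeze(1) * deg.unsqueeze(0))  # c_ij = sqrt(|N_i| |N_j|)
    outputs = []
    for E in factor_coeffs:
        msg = (E / c) @ (h @ W)                          # sum_j (E^e_ij / c_ij) h_j W
        outputs.append(act(msg))                         # h^{(l+1)e}
    return torch.cat(outputs, dim=-1)                    # Eq. 5: concatenate over factors
```

Because the coefficients of non-edges are zero, summing over all nodes in the dense formulation is equivalent to summing only over the neighbours $\mathcal{N}_i$ of the input graph, as in Eq. 4.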
4 Experiments
In this section, we show the effectiveness of the proposed FactorGCN, and provide discussions on its various components as well as the sensitivity with respect to the key hyper-parameters. More results can be found in the supplementary materials.
4.1 Experimental setups
Datasets. Here, we use six datasets to evaluate the effectiveness of the proposed method. The first one is a synthetic dataset that contains a fixed number of predefined graphs as factor graphs. The second one is the ZINC dataset [31] built from molecular graphs. The third one is the Pattern dataset [31], which is a large-scale dataset for the node classification task. The other three are widely used graph classification datasets, including social networks (COLLAB, IMDB-B) and a bioinformatics graph dataset (MUTAG) [32]. To generate the synthetic dataset that contains $N_e$ factor graphs, we first generate $N_e$ predefined graphs, which are well-known graphs like the Turán graph, the house-x graph, and the balanced-tree graph. We then choose half of them and pad them with isolated nodes to bring the number of nodes to 15. The padded graphs are merged together as a training sample. The label of the synthetic data is a binary vector of dimension $N_e$. Half of the labels are set to one according to the types of graphs that the sample is generated from, and the rest are set to zero. More information about the datasets can be found in the supplemental materials.
Baselines. We adopt several methods, including state-of-the-art ones, as the baselines. Among all, MLP is the simplest one, which contains multiple fully connected layers. Although this method is simple, it can in fact perform well when compared with other methods that consider the structural information. We use MLP to check whether the other compared methods benefit from using the structural information as well. GCN aggregates the information in the graph according to the Laplacian matrix of the graph, which can be seen as a fixed weighted sum over the neighbours of a node. GAT [27] extends the idea of GCN by introducing the attention mechanism. The weights used in the aggregation are computed dynamically according to all the neighbours. For the ZINC dataset, we also add MoNet [25] and GatedGCN_E [31] as baselines. The former is the state-of-the-art method that does not use the type information of edges, while the latter is the state-of-the-art one that uses additional edge information. A random method is also added to provide the result of random guessing for reference. For the other three graph datasets, we add non-DL-based methods (WL subtree, PATCHYSAN, AWL) and DL-based methods (GCN, GraphSage [33], GIN) as baselines. DisenGCN [3] and IPGDN [4] are also added.
Hyper-parameters. For the synthetic dataset, the Adam optimizer is used with a learning rate of 0.005, the number of training epochs is set to 80, and the weight decay is set to 5e-5. The rows of the adjacency matrix of the generated synthetic graph are used as the node features. The negative slope of LeakyReLU for the GAT model is set to 0.2, which is the same as the original setting. The number of hidden layers for all models is set to two. The dimension of the hidden feature is set to 32 when the number of factor graphs is no more than four, and 64 otherwise. The weight for the loss of the discriminator in FactorGCN is set to 0.5. For the molecular dataset, the dimension of the hidden feature is set to 144 for all methods and the number of layers is set to four. The Adam optimizer is used with a learning rate of 0.002. No weight decay is used. λ of FactorGCN is set to 0.2. All the methods are trained for 500 epochs. The test results are obtained using the model with the best performance on the validation set. For the other three datasets, a three-layer FactorGCN is used.
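As a concrete reference, the following is a small sketch of the synthetic-dataset training configuration listed above. The placeholder model and the dictionary keys are illustrative assumptions, not names from the released code.

```python
import torch

# Training configuration for the synthetic dataset, as reported above.
config = {
    "hidden_dim": 32,       # 64 when more than four factor graphs are used
    "num_layers": 2,
    "lr": 0.005,
    "weight_decay": 5e-5,
    "epochs": 80,
    "lambda_disc": 0.5,     # weight on the discriminator loss L_d
}

model = torch.nn.Linear(15, 8)  # placeholder for a FactorGCN-style model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=config["lr"],
                             weight_decay=config["weight_decay"])
# The total loss per batch would then be L = L_t + config["lambda_disc"] * L_d.
```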
4.2 Qualitative Evaluation
We first provide qualitative evaluations of the disentanglement performance, including the visualization of the disentangled factor graphs and the correlation analysis of the latent features.
Visualization of disentangled factor graphs. To give an intuitive understanding of the disentanglement, we provide in Fig. 2 some examples of the generated factor graphs. We remove the isolated nodes and visualize the factor graphs best matched with the ground truths. More results and analyses can be found in the supplemental materials.
Correlation of disentangled features. Fig. 3 shows the correlation analysis of the latent features obtained from several pre-trained models on the synthetic dataset. It can be seen that although GCN and MLP models can achieve a high performance in the downstream task, their latent features are highly entangled. GAT gives more independent latent features, but the performance on the original task is degraded. FactorGCN is able to extract highly independent latent features and meanwhile achieve a better performance in the downstream task.
4.3 Quantitative Evaluation
The quantitative evaluation focuses on two parts: the performance of the downstream tasks and that of the disentanglement.
Evaluation protocol. For the downstream tasks, we adopt the corresponding metrics to evaluate, i.e., Micro-F1 for the multi-label classification task and mean absolute error (MAE) for the regression task. We design two new metrics to evaluate the disentanglement performance on graph data. The first one is the graph edit distance on edges (GED_E). This metric is inspired by the traditional graph edit distance (GED). Since the input graph already provides the information about the order of nodes, the disentanglement of the input data, in reality, only involves changing the edges. Therefore, we restrict the GED by only allowing adding and removing edges, and obtain a GED_E score via a Hungarian matching between the generated factor graphs and the ground truth. Specifically, for each pair of a generated factor graph and a ground truth graph, we first convert the continuous values in the factor graph to 1/0 values by setting the threshold so that the numbers of edges in these two graphs are the same. Then, GED_E can be computed for every such combination. Finally, a Hungarian matching is adopted to obtain the best bipartite matching, whose result is reported as the GED_E score. Besides the GED_E score, we also care about the consistency of the generated factor graphs. In other words, the best-matched pairs between the generated factor graphs and the ground truths, optimally, should be identical across all samples. We therefore introduce a second metric, named the consistency score (C-Score), related to GED_E. The C-Score is computed as the average percentage of the most frequently matched factor graphs. The C-Score will be one if the ground truth graphs are always matched to the same factor graphs. A more detailed description of the evaluation protocol can be found in the supplemental materials.
Evaluation on the synthetic dataset. We first evaluate the disentanglement performance on a synthetic dataset. The results are shown in Tab. 1. Although MLP and GCN achieve good classification performances, they are not capable of disentanglement. GAT disentangles the input by using multi-head attention, but the performance on the original task is degraded. Our proposed method, on the other hand, achieves a much better performance in terms of both disentanglement and the original task. We also evaluate the compared methods on the synthetic dataset with various numbers of factor graphs, shown in Tab. 2. As the number of latent factor graphs increases, the performance gain of FactorGCN becomes larger. However, when the number of factor graphs becomes too large, the task will be more challenging, yielding lower performance gains.
Evaluation on the ZINC dataset. For this dataset, the type information of edges is hidden during the training process, and serves as the ground truth to evaluate the performance of disentanglement. Tab. 3 shows the results. The proposed method achieves the best performance on both the disentanglement and the downstream task. We also show the state-of-the-art method GatedGCN_E on this dataset on the right side of Tab. 3, which utilizes the type information of edges during the training process. Our proposed method, without any additional edge information, achieves truly promising results that are on par with those of GatedGCN_E, which needs the bond information of edges during training.
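The GED_E metric described in the evaluation protocol above can be sketched as follows. The per-pair cost (counting edge additions and removals after thresholding) and the Hungarian matching follow the description in the text, while the function name and the choice to average the matched costs are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ged_e(factor_graphs, ground_truths):
    """Sketch of GED_E: binarize each predicted factor graph so it has as many
    edges as the ground truth, count edge insertions/deletions for every
    (prediction, ground truth) pair, then take the best bipartite matching."""
    cost = np.zeros((len(factor_graphs), len(ground_truths)))
    for a, pred in enumerate(factor_graphs):          # pred: (n, n) continuous coefficients
        for b, gt in enumerate(ground_truths):        # gt: (n, n) 0/1 adjacency
            k = int(gt.sum())                         # number of edges in the ground truth
            thresh = np.sort(pred.ravel())[-k] if k > 0 else np.inf
            pred_bin = (pred >= thresh).astype(int)   # keep the k largest coefficients
            cost[a, b] = np.abs(pred_bin - gt).sum()  # edge additions + removals
    rows, cols = linear_sum_assignment(cost)          # Hungarian matching
    return cost[rows, cols].mean(), list(zip(rows, cols))
```

The returned matching pairs can also be reused to compute the C-Score, by checking how often each ground truth graph is matched to the same predicted factor graph across samples.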
Evaluation on more datasets. To provide a thorough understanding of the proposed method, we also carry out evaluations on three widely used graph classification datasets and one node classification dataset to assess the performance of FactorGCN as a general GCN framework. The same 10-fold evaluation protocol as [21] is adopted. Since there are no ground truth factor graphs, we only report the accuracy, shown in Tab. 4 and Tab. 5. Our method consistently achieves the best performance, showing the potential of FactorGCN as a general GCN framework, even putting aside its disentangling capability. More details about the evaluation protocol, the setup of our method, and the statistics of these datasets can be found in the supplemental materials.
4.4 Ablation and sensitivity analysis
We show in Fig. 4 the ablation study and sensitivity analysis of the proposed method. When varying λ, the number of factors is set to eight; when varying the number of factors, λ is set to 0.2. As can be seen from the left figure, the performance of both the disentanglement and the downstream task degrades without the discriminator. The right figure shows the relation between the performance and the number of factor graphs used in FactorGCN. Setting the number of factor graphs to be slightly larger than that of the ground truth, in practice, leads to a better performance.
Table 5: Accuracy (%) on the Pattern dataset for the node-classification task. FactorGCN achieves the best performance, showing its ability to serve as a general GCN framework.
GCN: 63.88 ± 0.07 | GatedGCN: 84.48 ± 0.12 | GIN: 85.59 ± 0.01 | MoNet: 85.48 ± 0.04 | DisenGCN: 75.01 ± 0.15 | IPGDN: 78.70 ± 0.11 | FactorGCN: 86.57 ± 0.02
Figure 4: The influence of the balanced weight λ and the number of factor graphs.
5 Conclusion
We propose a novel GCN framework, termed FactorGCN, which achieves graph convolution through graph-level disentangling. Given an input graph, FactorGCN decomposes it into several interpretable factor graphs, each of which denotes underlying interconnections between entities, and then carries out topology-aware convolutions on each such factor graph to produce the final node features. The node features, derived under the explicit disentangling, are therefore block-wise explainable and beneficial to the downstream tasks. Specifically, FactorGCN enables multi-relation disentangling, allowing information propagation between two nodes to take place in disjoint spaces. We also introduce two new metrics to measure the graph disentanglement performance quantitatively. FactorGCN outperforms other methods on both the disentanglement and the downstream tasks, indicating the proposed method is ready to serve as a general GCN framework with the capability of graph-level disentanglement.
Acknowledgments This work is supported by the startup funding of Stevens Institute of Technology.
Broader Impact In this work we introduce a GCN framework, termed FactorGCN, that explicitly accounts for disentanglement. FactorGCN is applicable to various scenarios, both technical and social. For conventional graph-related tasks, like node classification in social networks and graph classification of molecular graphs, our proposed method can serve as a general GCN framework. For disentangling tasks, our method generates factor graphs that reveal the latent relations among entities, and facilitates further decision-making processes like recommendation.
Furthermore, given sufficient data, FactorGCN can be used as a tool to analyze social issues, like discovering the reasons for the rapid spread of an epidemic disease in some areas. Like all learning-based methods, FactorGCN is not free of errors. If the produced disentangled factor graphs are incorrect, for example, the subsequent inference and prediction results will be degraded, possibly yielding undesirable bias.
1. What is the focus and contribution of the paper on graph disentanglement? 2. What are the strengths of the proposed approach, particularly in terms of its attentional aggregation strategy and evaluation protocol? 3. Do you have any concerns or questions regarding the computation of the evaluation metric $GED_E$? 4. How do you think the proposed method performs compared to other methods in downstream tasks without using additional edge information? 5. What are your suggestions for improving the paper, such as including visualizations of the generated features or reporting the standard deviation of the random method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors of this submission propose a GCN-based method to conduct disentanglement in the graph domain. The proposed method disentangles the input graph into several factor graphs at the graph level, and allows for multiple relations between a pair of entities. This is done by the graph-level attentional aggregation strategy during the training process. A new metric is designed to evaluate the disentanglement performance quantitatively. The proposed method achieves the best disentanglement performance and very competitive performance on the downstream tasks even without the additional edge information.
Strengths
1) This paper is well written. The proposed method is well motivated and the description is clear.
2) Besides the qualitative evaluation used in previous methods, this paper provides a legitimate and practical quantitative evaluation protocol to evaluate the performance of disentanglement in the graph domain, which could be of interest to a large audience.
3) The experimental results on the ZINC dataset are impressive. The proposed method achieves a very similar performance to the model that uses the edge information. Such a result aligns well with the claim that the disentangled factor graphs can further boost the performance of the downstream tasks.
Weaknesses
1) As the first try at the quantitative evaluation of disentanglement performance in the graph domain, the proposed evaluation protocol is interesting and deserves attention from a larger audience. But I have one concern about the computation of the $GED\_E$. As mentioned in the main paper, computing the full GED between the predicted factor graphs and the ground truth can be too expensive. So $GED\_E$ assumes that the order of nodes is fixed and correct. This would be a problem at least for the synthetic dataset, since mixing the factor graphs together may generate some new factor graphs that are not included in the ground truth. Correct me if I am wrong.
2) In Tab. 1, it is great to have the random method to show the lower bound of the evaluation metric, but why is only the $GED\_E$ reported? The std of the random method should also be reported.
3) It would be good to have a visualization of the generated features from DisenGCN.
NIPS
1. What is the focus and contribution of the paper on graph neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its ability to automatically learn factor graphs? 3. What are the weaknesses of the paper, especially regarding the experiment section and comparisons with other works? 4. How does the reviewer assess the relevance of the work to the community? 5. Are there any questions or concerns regarding the applicability of the method to node classification and its comparison with related works such as Liu et al. (2019)?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The submission proposes a new architecture of graph neural networks. In this new architecture, the model factorizes the graph into several graphs and then puts GNNs over these graphs; the learned representations are then concatenated to get the final representation. Compared to previous methods, which need graph factors beforehand, this method automatically learns factor graphs from the input graph. The empirical evaluation indicates that the proposed algorithm improves over several baselines.
Strengths The strength of the proposed method is the learnable factorization. This design does not require manually designed graph factors. Furthermore, the graph factor learning also has the potential to improve the model performance. The work is very relevant to the community.
Weaknesses The experimental evaluation of the proposed method is weak. The comparison does not include recent methods on graph classification. The performance improvements are marginal on real datasets. Does the method work for node classification? If so, then there should be a comparison with [Liu et al. 2019].
The transformed features are then used to generate the factor coefficients as follows:

$E^e_{ij} = \frac{1}{1 + \exp\!\big(-\Psi_e(h'_i, h'_j)\big)}, \qquad h' = Wh, \qquad (1)$

where Ψe is the function that takes the features of node i and node j as input and computes the attention score of the edge for factor graph e, and takes the form of a one-layer MLP in our implementation; E^e_{ij} can then be obtained by normalizing the attention score to [0, 1], representing the coefficient of the edge from node i to node j in the factor graph e; h′ is the transformed node feature, shared across all functions Ψ∗. Different from most previous forms of attention-based GCNs, which normalize the attention coefficients among all the neighbours of a node, our proposed model generates these coefficients directly as the factor graph. Once all the coefficients are computed, a factor graph e can be represented by its own E^e, which will be used for the next aggregation step. However, without any other constraint, some of the generated factor graphs may contain a similar structure, degrading the disentanglement performance and the capacity of the model. We therefore introduce an additional head in the disentangle layer, aiming to avoid the degradation of the generated factor graphs. The motivation of the additional head is that a well-disentangled factor graph should have enough information to be distinguished from the rest based only on its structure. Obtaining the solution in which all the disentangled factor graphs differ from each other to the maximal degree, unfortunately, is not trivial. We thus approximate the solution by giving unique labels to the factor graphs and optimizing the factor graphs as a graph classification problem. Our additional head serves as a discriminator, shown in Eq. 2, to distinguish which label a given graph has:

$G^e = \mathrm{Softmax}\big(f(\mathrm{Readout}(\mathcal{A}(E^e, h')))\big). \qquad (2)$

The discriminator contains a three-layer graph auto-encoder A, which takes the transformed feature h′ and the generated attention coefficients of factor graph E^e as inputs, and generates the new node features. These features are then read out to generate the representation of the whole factor graph. Next, the feature vectors are sent to a classifier with one fully connected layer. Note that all the factor graphs share the same node features, ensuring that the information discovered by the discriminator only comes from the differences among the structures of the factor graphs. More details about the discriminator architecture can be found in the supplementary materials. The loss used to train the discriminator is as follows:

$L_d = -\frac{1}{N} \sum_{i}^{N} \Big( \sum_{c=1}^{N_e} \mathbb{1}_{e=c} \log\big(G^e_i[c]\big) \Big), \qquad (3)$

where N is the number of training samples, set to be the number of input graphs multiplied by the number of factor graphs; N_e is the number of factor graphs; G^e_i is the distribution of sample i and G^e_i[c] represents the probability that the generated factor graph has label c; 1_{e=c} is an indicator function, taken to be one when the predicted label is correct. 3.2 Aggregation Step As the factor graphs derived from the disentangling step are optimized to be as diverse as possible, in the aggregation step we use the generated factor graphs to aggregate information in different structural spaces. This step is similar to most GCN models, where the new node feature is generated by taking the weighted sum of its neighbours. Our aggregation mechanism is based on the simplest one, which is used in GCN [28].
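To make the coefficient generation in Eq. (1) concrete, the following is a minimal PyTorch-style sketch of how the per-factor edge coefficients E^e_{ij} could be computed; the class name, tensor layout, and edge-index convention are illustrative assumptions and are not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class DisentangleCoefficients(nn.Module):
    """Illustrative sketch of Eq. (1): per-factor edge coefficients E_ij^e."""

    def __init__(self, in_dim, out_dim, num_factors):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared transform h' = Wh
        # one single-layer MLP Psi_e per factor graph, acting on the concatenated endpoint features
        self.psi = nn.ModuleList(
            [nn.Linear(2 * out_dim, 1) for _ in range(num_factors)]
        )

    def forward(self, h, edge_index):
        # h: (n, F) node features; edge_index: (2, m) long tensor of edge endpoints (i, j)
        h_prime = self.W(h)                                        # (n, F')
        src, dst = edge_index
        pair = torch.cat([h_prime[src], h_prime[dst]], dim=-1)    # (m, 2F')
        # sigmoid squashes each attention score into [0, 1], one score per factor graph
        E = torch.stack(
            [torch.sigmoid(psi_e(pair)).squeeze(-1) for psi_e in self.psi], dim=0
        )
        return h_prime, E                                          # E: (num_factors, m)
```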
The only difference is that the aggregation takes place independently for each of the factor graphs. The aggregation process is formulated as

$h^{(l+1)e}_i = \sigma\Big( \sum_{j \in \mathcal{N}_i} \frac{E^e_{ij}}{c_{ij}}\, h^{(l)}_j W^{(l)} \Big), \qquad c_{ij} = \big(|\mathcal{N}_i|\,|\mathcal{N}_j|\big)^{1/2}, \qquad (4)$

where h^{(l+1)e}_i represents the new feature of node i in layer l + 1 aggregated from the factor graph e; N_i represents all the neighbours of node i in the input graph; E^e_{ij} is the coefficient of the edge from node i to node j in the factor graph e; c_{ij} is the normalization term computed according to the degrees of node i and node j; W^{(l)} is a linear transformation matrix, which is the same as the matrix used in the disentangling step. Note that although we use all the neighbours of a node in the input graph to aggregate information, some of them make no contribution if the corresponding coefficient in the factor graph is zero. 3.3 Merging Step Once the aggregation step is complete, different factor graphs will lead to different features of nodes. We merge these features generated from different factor graphs by applying

$h^{(l+1)}_i = \big\Vert_{e=1}^{N_e}\, h^{(l+1)e}_i, \qquad (5)$

where h^{(l+1)}_i is the output feature of node i; N_e is the number of factor graphs; ‖ represents the concatenation operation. 3.4 Architecture We discuss above the design of one disentangle layer, which contains three steps. The FactorGCN model used in the experimental section contains several such disentangle layers, increasing the power of expression. Moreover, by setting different numbers of factor graphs in different layers, the proposed model can disentangle the input data in a hierarchical manner. The total loss to train the FactorGCN model is L = L_t + λ · L_d. L_t is the loss of the original task, which is taken to be a binary cross-entropy loss for the multi-label classification task, a cross-entropy loss for the multi-class classification task, or an L1 loss for the regression task. L_d is the loss of the discriminator mentioned above. λ is the weight that balances these two losses. 4 Experiments In this section, we show the effectiveness of the proposed FactorGCN, and provide discussions on its various components as well as its sensitivity with respect to the key hyper-parameters. More results can be found in the supplementary materials. 4.1 Experimental setups Datasets. Here, we use six datasets to evaluate the effectiveness of the proposed method. The first one is a synthetic dataset that contains a fixed number of predefined graphs as factor graphs. The second one is the ZINC dataset [31] built from molecular graphs. The third one is the Pattern dataset [31], a large-scale dataset for the node classification task. The other three are widely used graph classification datasets, including social networks (COLLAB, IMDB-B) and a bioinformatics graph (MUTAG) [32]. To generate the synthetic dataset that contains N_e factor graphs, we first generate N_e predefined graphs, which are well-known graphs such as the Turán graph, the house-x graph, and the balanced-tree graph. We then choose half of them and pad them with isolated nodes to bring the number of nodes to 15. The padded graphs are merged together as a training sample. The label of the synthetic data is a binary vector with dimension N_e. Half of the labels are set to one according to the types of graphs that the sample is generated from, and the rest are set to zero. More information about the datasets can be found in the supplemental materials. Baselines. We adopt several methods, including state-of-the-art ones, as the baselines.
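As a concrete illustration of Eqs. (4) and (5), here is a small sketch of the factor-wise aggregation and the merging step; the function name, the choice of ReLU for σ, and the edge-index layout are assumptions made for exposition rather than details fixed by the paper.

```python
import torch

def aggregate_and_merge(h, edge_index, E, W, deg):
    """Sketch of Eq. (4) (per-factor aggregation) and Eq. (5) (concatenation).

    h:          (n, F) node features at layer l
    edge_index: (2, m) long tensor; row 0 = receiving node i, row 1 = neighbour j
    E:          (num_factors, m) coefficients E_ij^e from the disentangling step
    W:          (F, F') shared linear transformation W^(l)
    deg:        (n,) float tensor of node degrees |N_i| in the input graph
    """
    tgt, nbr = edge_index                      # node i receives a message from neighbour j
    c = torch.sqrt(deg[tgt] * deg[nbr])        # normalisation c_ij = (|N_i| |N_j|)^(1/2)
    msgs = h[nbr] @ W                          # transformed neighbour features h_j W
    outputs = []
    for E_e in E:                              # aggregate independently in each factor graph
        out_e = torch.zeros(h.shape[0], W.shape[1], dtype=msgs.dtype)
        out_e.index_add_(0, tgt, (E_e / c).unsqueeze(-1) * msgs)
        outputs.append(torch.relu(out_e))      # sigma is taken to be ReLU in this sketch
    return torch.cat(outputs, dim=-1)          # Eq. (5): block-wise concatenation
```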
Among all, MLP is the simplest one, which contains multiple fully connected layers. Although this method is simple, it can in fact perform well when compared with other methods that consider the structural information. We use MLP to check whether the other compared methods benefit from using the structural information as well. GCN aggregates the information in the graph according to the Laplacian matrix of the graph, which can be seen as a fixed weighted sum over the neighbours of a node. GAT [27] extends the idea of GCN by introducing the attention mechanism. The aggregation weights are computed dynamically according to all the neighbours. For the ZINC dataset, we also add MoNet [25] and GatedGCN_E [31] as baselines. The former is the state-of-the-art method that does not use the type information of edges, while the latter is the state-of-the-art one that uses additional edge information. A random baseline is also added to provide the result of a random guess for reference. For the other three graph datasets, we add non-DL-based methods (WL subtree, PATCHYSAN, AWL) and DL-based methods (GCN, GraphSage [33], GIN) as baselines. DisenGCN [3] and IPDGN [4] are also added. Hyper-parameters. For the synthetic dataset, the Adam optimizer is used with a learning rate of 0.005, the number of training epochs is set to 80, and the weight decay is set to 5e-5. The row of the adjacency matrix of the generated synthetic graph is used as the node feature. The negative slope of LeakyReLU for the GAT model is set to 0.2, which is the same as the original setting. The number of hidden layers for all models is set to two. The dimension of the hidden feature is set to 32 when the number of factor graphs is no more than four and 64 otherwise. The weight for the discriminator loss in FactorGCN is set to 0.5. For the molecular dataset, the dimension of the hidden feature is set to 144 for all methods and the number of layers is set to four. The Adam optimizer is used with a learning rate of 0.002. No weight decay is used. λ of FactorGCN is set to 0.2. All the methods are trained for 500 epochs. The test results are obtained using the model with the best performance on the validation set. For the other three datasets, a three-layer FactorGCN is used. 4.2 Qualitative Evaluation We first provide qualitative evaluations of the disentanglement performance, including the visualization of the disentangled factor graphs and the correlation analysis of the latent features. Visualization of disentangled factor graphs. To give an intuitive understanding of the disentanglement, we provide in Fig. 2 some examples of the generated factor graphs. We remove the isolated nodes and visualize the best-matched factor graphs with the ground truths. More results and analyses can be found in the supplemental materials. Correlation of disentangled features. Fig. 3 shows the correlation analysis of the latent features obtained from several pre-trained models on the synthetic dataset. It can be seen that although the GCN and MLP models can achieve a high performance on the downstream task, their latent features remain entangled. GAT gives more independent latent features, but its performance on the original task is degraded. FactorGCN is able to extract highly independent latent features while achieving a better performance on the downstream task. 4.3 Quantitative Evaluation The quantitative evaluation focuses on two parts: the performance of the downstream tasks and that of the disentanglement. Evaluation protocol.
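For quick reference, the hyper-parameter settings spelled out above can be collected into a small configuration sketch; the dictionary keys are illustrative names, and only values stated in the text are reproduced.

```python
# Hyper-parameters as reported in Section 4.1 (keys are illustrative names, not the authors' code).
CONFIGS = {
    "synthetic": {
        "optimizer": "Adam", "lr": 0.005, "epochs": 80, "weight_decay": 5e-5,
        "num_layers": 2,
        "hidden_dim": 32,             # 64 when more than four factor graphs are used
        "discriminator_weight": 0.5,  # lambda balancing L_t and L_d
    },
    "zinc": {
        "optimizer": "Adam", "lr": 0.002, "epochs": 500, "weight_decay": 0.0,
        "num_layers": 4, "hidden_dim": 144,
        "discriminator_weight": 0.2,
    },
}
```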
For the downstream tasks, we adopt the corresponding metrics for evaluation, i.e., Micro-F1 for the multi-label classification task and mean absolute error (MAE) for the regression task. We design two new metrics to evaluate the disentanglement performance on graph data. The first one is the graph edit distance on edges (GED_E). This metric is inspired by the traditional graph edit distance (GED). Since the input graph already provides the information about the order of nodes, the disentanglement of the input data, in reality, only involves changes of edges. Therefore, we restrict the GED by only allowing adding and removing edges, and obtain a GED_E score by Hungarian matching between the generated factor graphs and the ground truth. Specifically, for each pair of a generated factor graph and a ground-truth graph, we first convert the continuous values in the factor graph to 1/0 values by setting the threshold such that the number of edges in these two graphs is the same. Then, GED_E can be computed for every such combination. Finally, Hungarian matching is adopted to obtain the best bipartite matching result as the GED_E score. Besides the GED_E score, we also care about the consistency of the generated factor graphs. In other words, the best-matched pairs between the generated factor graphs and the ground truths should, optimally, be identical across all samples. We therefore introduce a second metric, named the consistency score (C-Score), related to GED_E. The C-Score is computed as the average percentage of the most frequently matched factor graphs. The C-Score will be one if the ground-truth graphs are always matched to the same factor graphs. A more detailed description of the evaluation protocol can be found in the supplemental materials. Evaluation on the synthetic dataset. We first evaluate the disentanglement performance on a synthetic dataset. The results are shown in Tab. 1. Although MLP and GCN achieve good classification performances, they are not capable of disentanglement. GAT disentangles the input by using multi-head attention, but its performance on the original task is degraded. Our proposed method, on the other hand, achieves a much better performance in terms of both disentanglement and the original task. We also evaluate the compared methods on the synthetic dataset with various numbers of factor graphs, shown in Tab. 2. As the number of latent factor graphs increases, the performance gain of FactorGCN becomes larger. However, when the number of factor graphs becomes too large, the task becomes more challenging, yielding lower performance gains. Evaluation on the ZINC dataset. For this dataset, the type information of edges is hidden during the training process and serves as the ground truth to evaluate the disentanglement performance. Tab. 3 shows the results. The proposed method achieves the best performance on both the disentanglement and the downstream task. We also show the state-of-the-art method GatedGCN_E on this dataset on the right side of Tab. 3, which utilizes the type information of edges during the training process. Our proposed method, without any additional edge information, achieves truly promising results that are comparable to those of GatedGCN_E, which needs the bond information of edges during training. Evaluation on more datasets.
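The GED_E computation described above can be summarized in a short sketch: binarize each generated factor graph so that it has as many edges as the candidate ground truth, count edge insertions and deletions, and use Hungarian matching to pick the best bipartite assignment. The function name and the exact thresholding rule below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ged_e_score(factor_adjs, truth_adjs):
    """factor_adjs: list of (n, n) arrays of continuous coefficients (generated factor graphs).
    truth_adjs:  list of (n, n) 0/1 arrays (ground-truth factor graphs).
    Returns the total GED_E under the best bipartite matching and the matching itself."""
    cost = np.zeros((len(factor_adjs), len(truth_adjs)))
    for a, F in enumerate(factor_adjs):
        for b, T in enumerate(truth_adjs):
            k = int(T.sum())                           # edge count of the ground-truth graph
            flat = np.sort(F.ravel())[::-1]
            thresh = flat[k - 1] if k > 0 else np.inf  # keep the k largest coefficients
            B = (F >= thresh).astype(int)              # binarized factor graph, ~k edges
            cost[a, b] = np.abs(B - T).sum()           # only edge insertions/deletions count
    rows, cols = linear_sum_assignment(cost)           # Hungarian matching
    return cost[rows, cols].sum(), list(zip(rows, cols))
```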
To provide a thorough understanding of the proposed method, we also carry out evaluations on three widely used graph classification datasets and one node classification dataset to see the performance of FactorGCN as a general GCN framework. The same 10-fold evaluation protocol as [21] is adopted. Since there are no ground-truth factor graphs, we only report the accuracy, shown in Tab. 4 and Tab. 5. Our method consistently achieves the best performance, showing the potential of FactorGCN as a general GCN framework, even putting aside its disentangling capability. More details about the evaluation protocol, the setup of our method, and the statistics of these datasets can be found in the supplemental materials. 4.4 Ablation and sensitivity analysis We show in Fig. 4 the ablation study and sensitivity analysis of the proposed method. When varying λ, the number of factors is set to eight; when varying the number of factors, λ is set to 0.2. As can be seen from the left figure, the performance on both the disentanglement and the downstream task degrades without the discriminator. The right figure shows the relation between the performance and the number of factor graphs used in FactorGCN. Setting the number of factor graphs to be slightly larger than that of the ground truth, in practice, leads to a better performance.

Table 5: Accuracy (%) on the Pattern dataset for the node-classification task. FactorGCN achieves the best performance, showing its ability to serve as a general GCN framework.
GCN: 63.88 ± 0.07 | GatedGCN: 84.48 ± 0.12 | GIN: 85.59 ± 0.01 | MoNet: 85.48 ± 0.04 | DisenGCN: 75.01 ± 0.15 | IPDGN: 78.70 ± 0.11 | FactorGCN: 86.57 ± 0.02

Figure 4: The influence of the balancing weight λ and the number of factor graphs.

5 Conclusion We propose a novel GCN framework, termed FactorGCN, which achieves graph convolution through graph-level disentangling. Given an input graph, FactorGCN decomposes it into several interpretable factor graphs, each of which denotes an underlying set of interconnections between entities, and then carries out topology-aware convolutions on each such factor graph to produce the final node features. The node features, derived under the explicit disentangling, are therefore block-wise explainable and beneficial to the downstream tasks. Specifically, FactorGCN enables multi-relation disentangling, allowing information propagation between two nodes to take place in disjoint spaces. We also introduce two new metrics to measure the graph disentanglement performance quantitatively. FactorGCN outperforms other methods on both the disentanglement and the downstream tasks, indicating that the proposed method is ready to serve as a general GCN framework with the capability of graph-level disentanglement. Acknowledgments This work is supported by the startup funding of Stevens Institute of Technology. Broader Impact In this work we introduce a GCN framework, termed FactorGCN, that explicitly accounts for disentanglement. FactorGCN is applicable to various scenarios, both technical and social. For conventional graph-related tasks, like node classification in social networks and graph classification of molecular graphs, our proposed method can serve as a general GCN framework. For disentangling tasks, our method generates factor graphs that reveal the latent relations among entities and facilitate further decision-making processes such as recommendation.
Furthermore, given sufficient data, FactorGCN can be used as a tool to analyze social issues, such as discovering the reasons for the quick spread of an epidemic disease in certain areas. Like all learning-based methods, FactorGCN is not free of errors. If the produced disentangled factor graphs are incorrect, for example, the subsequent inference and prediction results will be degraded, possibly yielding undesirable bias.
1. What is the focus and contribution of the paper on graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its ability to learn disentangled graphs?
3. What are the weaknesses of the paper, especially regarding its similarity to other works such as GAT?
4. Do you have any questions about the experimental results or the minor differences between the proposed framework and GAT?
5. Are there any concerns regarding the lack of detail in certain parts of the paper, such as the correlation analysis or the specific datasets used in the experiments?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a new graph neural network, FactorGCN, which first explicitly constructs several factor graphs using an attention-like mechanism, and then performs feature aggregation on each factor graph separately. The main contributions of the paper are (i) a new GNN framework that can learn disentangled graphs; and (ii) experimental results showing the effectiveness of the proposed method for learning disentangled graphs and graph classification. Strengths 1. The idea is interesting and novel. 2. Experimental results show the effectiveness of the proposed method for learning disentangled graphs and graph classification. 3. The paper is well written and easy to understand. Weaknesses 1. The proposed framework is very similar to GAT, except for some minor differences in the attention mechanism and the discriminator. The authors are encouraged to add the discriminator to GAT to show whether the designed approach for constructing factor graphs is really more effective than GAT. 2. Some details are missing. For example, in Figure 3, is the correlation analysis obtained by averaging over all graphs or computed on only one graph? In Fig. 4, it is unclear on which dataset the experiments are conducted. When varying lambda, what is the number of factors, and when varying the number of factors, what is the value of lambda?
1. What is the primary objective of the paper, and what are the proposed methods to achieve this goal?
2. What are the strengths of the paper regarding its contributions and experimental results?
3. What are the weaknesses of the paper, particularly concerning its novelty and focus on multi-relational disentangling?
4. How does the reviewer assess the effectiveness of the proposed method in providing low-dimensional features for nodes of a graph that are disentangled?
5. Do you have any questions about the paper's content or arguments presented within it?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The goal of the paper is to find disentangled representations for the nodes of a graph. The method, named FactorGCN, has three steps: 1) Disentangling: extracting disentangled factor graphs (which are as independent as possible); 2) Aggregation: aggregating the features of nodes in each factor graph by utilizing GCN layers; 3) Merging: concatenating the features of one node across the different factor graphs. Its goal is to provide low-dimensional, disentangled features for the nodes of the graph. The final goal is to obtain better results on graph analysis tasks (like graph classification). Strengths The authors have worked on the experiments to show the effectiveness of the proposed method. They have also proposed two metrics for evaluating how good the factor graphs are. The introduced synthetic dataset, which makes it possible to compare factor graphs with the ground-truth ones, is also interesting. Weaknesses 1) The novelty is limited. 2) Although the paper starts with the claim that its main focus is on multi-relation disentangling, there is nothing explicit in the method that emphasizes multi-relational graphs. This is also not demonstrated explicitly in the experiments. Can the proposed method be given a heterogeneous or multi-relational network (for which the type of each edge is available and can thus be utilized for evaluation) and distinguish edges of different types by factorizing? 3) It seems that the graph classification loss may not provide enough supervisory signal for disentanglement. Why does enforcing classification of factor graphs necessarily make them disentangled?
NIPS
Title Turing Completeness of Bounded-Precision Recurrent Neural Networks Abstract Previous works have proved that recurrent neural networks (RNNs) are Turingcomplete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine’s time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and boundedprecision RNNs, revisiting and extending the theoretical foundations of RNNs. 1 Introduction Symbolic (such as Turing Machines) and sub-symbolic processing (such as adaptive neural networks) are two competing methods of representing and processing information, each with its own advantages. An ultimate way to combine symbolic and sub-symbolic capabilities is by enabling the running of algorithms on a neural substrate, which means a neural network that can simulate a Universal Turing Machine (UTM). Previous works [1, 2, 3] have shown that this is possible – there exists a recurrent neural network (RNN) that can simulate a UTM. These proofs assumed a couple of neurons with unbounded precision that equals the number of symbols used in the Turing tape. Here we provide an alternative simulation of a UTM by RNNs with bounded-precision neurons only. The general idea works as follows. The Turing Machine’s tape is stored in a growing memory module, which is a stack of neurons with pushing and popping operations controlled by neurons in the RNN. The size of the growing memory module is determined by the usage of the Turing tape - it dynamically recruits new neurons when more memories are needed and releases them when memories become irrelevant. The neurons in the stack, except for the top neuron, are not regularly updated (and hence can be referred to as passive), saving computational cost for memories that are not in the focus of the computing and do not require change. Using growing memory modules, a 54-neuron bounded-precision RNN is constructed that can simulate any Turing Machine. Our proposed growing memory modules are inspired by biological memory systems. The process of dynamically recruiting new neurons when more memories are necessary is also observed in biological memory systems. Neurogenesis is the process by which new neurons are produced in the central nervous system; it is most active during early development, but continues through life. *Both authors contributed equally. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). In adult vertebrates, neurogenesis is known to occur in the dentate gyrus (DG) of the hippocampal formation [4] and the subventricular zone (SVZ) of the lateral ventricles [5]. Since DG is well-known in neuroscience for its role in pattern separation for memory encoding [6, 7], this suggests that biological memory systems also dynamically recruit new neurons. The rate of neurogenesis in adult mice has been shown to be higher if they are exposed to a wider variety of experiences [8]. 
This further suggests a role for self-regulated neurogenesis in scaling up the number of new memories that can be encoded and stored during one’s lifetime without catastrophic forgetting of previously consolidated memories. Besides the mechanism of recruiting new neurons, the process of storing neurons in growing memory modules also shares some similarities with biological memory consolidation, a process by which short-term memory is transformed into long-term memory [9, 10]. Compared to short-term memory, long-term memory is more long-lasting and robust to interference. This is similar to the neurons stored in growing memory modules – the values of these neurons (except the top neuron in the stack) remain unchanged and cannot be interfered with by the RNN, providing a mechanism to store information stably. Growing memory modules share similarities with other stack-augmented RNNs [11, 12, 13, 14, 15]. In neural stacks [11], the RNN outputs two continuous scalars that control the strength of pushing and popping operations, and the stack is made differentiable by adding a strength vector. In stack RNNs [12] and DiffStk-RNN [15], the RNN outputs the probability vector corresponding to pushing, popping, and no operation. In NNPDA [13], the RNN outputs a continuous scalar that controls the pushing and popping operations, with the minimum value corresponding to popping and the maximum value corresponding to pushing. NSPDA [14] uses discrete-valued action neurons to control pushing and popping operations. In contrast to these models, growing memory modules have a simple design and do not need to be differentiable. However, it can be easily shown that growing memory modules can be simulated by these stack-augmented RNNs in linear time, and thus growing memory modules can be considered a generic type of stack-augmented RNNs. Therefore, our proof of the Turing completeness of an RNN with growing memory modules can be extended to stack-augmented RNNs in general, establishing their theoretical motivation. A Turing-complete RNN that is fully differentiable was introduced in 1996 [16]; this feature is a prerequisite for the network to be trainable by gradient descent. It was followed by the Neural Turing Machine (NTM) [17] and its improved version, the differentiable neural computer [18], which are both differentiable and trainable RNNs equipped with memory banks. Though inspired by Turing Machines, bounded-precision NTMs and differentiable neural computers are not Turing-complete due to the fixed-size memory bank, but they can simulate space-bounded Turing Machines (see Section 5). Simulation of a Turing Machine by an RNN with growing memory modules represents a practical and biologically inspired way to combine symbolic and sub-symbolic capabilities. All neurons in the RNN and growing memory modules have fixed precision. While the size of the growing memory modules is linear in the number of symbols used on the Turing tape, the number of neurons in the RNN is still constant. Moreover, the neurons in the growing memory modules (except the top neuron in the stack) are passive most of the time. As a result, the time complexity of the simulation is linear in the simulated machine’s time and independent of the memory size. By showing how to simulate a Turing Machine with a bounded-precision RNN and thereby constructing a bounded-precision RNN that can run any algorithm, our paper proposes a practical method that combines symbolic and sub-symbolic capabilities. The remainder of the paper is structured as follows.
Section 2 describes the preliminaries of the paper, including the definition of Turing Machines and RNNs. Section 3 revisits and extends theories relating to simulating a Turing Machine with unbounded-precision RNNs and shows the existence of a 40-neuron unbounded-precision RNN that is Turing-complete. Section 4 presents the growing memory modules and proves the existence of a 54-neuron bounded-precision RNN with two growing memory modules that is Turing-complete. Section 5 relates the number of neurons and the precision of RNNs when simulating Turing Machines. Section 6 concludes the paper. 2 Background and Notation A Turing Machine is a 7-tuple M = (Q, Σ, Γ, δ, q0, ♯, F), where Q is a finite set of states, Σ is a finite set of input symbols, Γ is a finite set of tape symbols (note that Σ ⊂ Γ), δ : Q×Γ → Q×Γ×{L,R} is the machine’s transition rule, q0 ∈ Q is the initial starting state, ♯ is the blank symbol (note that ♯ ∈ Γ but ♯ /∈ Σ), and F ⊂ Q is the set of final states. We only consider deterministic Turing Machines in this paper. The instantaneous description (or the configuration) of a Turing Machine is typically defined as a tuple of state, tape, and the location of the read/write head. However, we use a slightly different definition in the paper. We define the left-tape symbols (or the left tape in short), denoted as sL, to be the string of symbols starting at the symbol under the read/write head and extending to the left. We define the right-tape symbols (or the right tape in short), denoted as sR, to be the string of symbols starting at the symbol to the right of the read/write head and extending to the right. The first symbol in both strings is the closest to the read/write head in both sL and sR (that is, the left tape is reversed in the representation) and the blank symbols at the two ends are omitted. Therefore, the length of sL and sR combined equals the number of symbols used on the Turing tape in each step (that is, unbounded but not infinite). Since sL and sR encode both the tape and the location of the read/write head, we define the instantaneous description of a Turing Machine as a 3-tuple (q, sL, sR) ∈ (Q, Γ∗, Γ∗), where q denotes the state, sL denotes the left-tape symbols, and sR denotes the right-tape symbols. Though the two definitions are equivalent, this definition allows easy encoding of tape symbols into neurons’ values. The set of all possible instantaneous descriptions is denoted as X := (Q, Γ∗, Γ∗). In each step of a Turing Machine, the symbol under the read/write head is read and, together with the state, determines the symbol to be written under the read/write head, the direction in which the head moves, and the next state. To be precise, the complete dynamic map of M, denoted as PM : X → X, is defined as follows: 1. Let x = (q, sL, sR) be the input configuration, sL,(1) denote the first symbol in sL, and sR,(1) denote the first symbol in sR. The transition is defined by the 3-tuple (q′, y, d) = δ(q, sL,(1)); 2. Replace the state of the machine q with q′ and the first symbol in sL by y; 3. Move the symbol sL,(1) to become the new sR,(1) if d = L, and move sR,(1) to become the new sL,(1) if d = R (if there are no symbols left in sL or sR for moving, append a blank symbol ♯ to it before moving). Denote the left-tape symbols and the right-tape symbols after steps 2 and 3 by s′L and s′R respectively. Then PM(x) = (q′, s′L, s′R) represents one transition of the Turing Machine M from one configuration to the next.
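To make the complete dynamic map PM concrete, the following minimal Python sketch performs one transition on the (q, sL, sR) representation; the transition table `delta`, the symbol values, and the helper names are illustrative assumptions rather than part of the paper's construction.

```python
# A minimal sketch of one step of the complete dynamic map P_M on the
# (q, sL, sR) representation. The transition table `delta` maps
# (state, symbol under head) -> (next state, symbol to write, 'L' or 'R').

BLANK = 1  # the blank symbol, encoded as 1 following the paper's convention


def tm_step(q, sL, sR, delta):
    """One transition of P_M; sL/sR are lists with index 0 closest to the head."""
    sL, sR = list(sL), list(sR)
    if not sL:
        sL = [BLANK]                 # the head sits on a blank beyond the used tape
    q_new, y, d = delta[(q, sL[0])]
    sL[0] = y                        # step 2: write y under the head
    if d == 'L':                     # step 3: sL[0] becomes the new sR[0]
        sR.insert(0, sL.pop(0))
    else:                            # d == 'R': sR[0] becomes the new sL[0]
        if not sR:
            sR = [BLANK]             # append a blank before moving
        sL.insert(0, sR.pop(0))
    # blank symbols at the two far ends are omitted in the representation
    while sL and sL[-1] == BLANK:
        sL.pop()
    while sR and sR[-1] == BLANK:
        sR.pop()
    return q_new, sL, sR


# Example: in state 0 on a blank, write 3, move right, go to state 1.
print(tm_step(0, [], [5, 7], {(0, BLANK): (1, 3, 'R')}))   # -> (1, [5, 3], [7])
```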
The partial input-output function of M, denoted as P∗M : X → X, is defined by applying PM repeatedly until q ∈ F, and is undefined if it is not possible to reach q ∈ F by applying PM repeatedly. A recurrent neural network (RNN) is a neural network consisting of n neurons. The value of neuron i at time t ∈ {1, 2, ...}, denoted as xi(t) ∈ Q (Q is the set of rational numbers), is computed by an affine transformation of the values of the neurons at the previous step followed by an activation function σ, i.e. xi(t) = σ(∑_{j=1}^{n} wij xj(t−1) + bi), where wij are the weights and bi is the bias; or in vector form: x(t) = σ(Wx(t−1) + b), (1) where x(t) ∈ Q^n, W ∈ R^{n×n} and b ∈ R^n. This defines a mapping TW,b : Q^n → Q^n which characterizes an RNN. For simplicity, we consider the saturated-linear function in this paper; that is: σ(x) := 0 if x < 0; x if 0 ≤ x ≤ 1; 1 if x > 1. (2) Thus, x(t) ∈ (Q ∩ [0, 1])^n for all t > 0. We say that a neuron xi(t) has precision p in base b if for all t > 0, xi(t) can be expressed as ∑_{l=1}^{p} a(l)(t) / ∏_{j=1}^{l} c(j)(t) for some strings a(t) ∈ {0, 1, ..., b}^p and c(t) ∈ {1, ..., b}^p. For a string a, we use a(i) to denote the ith symbol in a and a(i:j) to denote the string a(i)a(i+1)...a(j). For a function f that maps from a set Y to a subset of Y, we use f^n to denote the nth iterate of f, where 1 ≤ n < ∞. For any two vectors x ∈ R^m, y ∈ R^n, we use x ⊕ y ∈ R^{m+n} to denote the concatenation of the two vectors. We use A∗ to denote all possible strings formed by elements from set A. The notation is summarized in the table found in Appendix D. 3 Turing Completeness of Unbounded-Precision RNNs To simulate a Turing Machine M by an RNN, we first consider how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers. For the state q ∈ Q, we encode it with ⌈log2 |Q|⌉ binary values, denoted as ρ(q) : Q → {0, 1}^{⌈log2 |Q|⌉}, with each possible combination of binary values representing a specific state. Example. Let Q = {1, 2, 3, 4, 5, 6}. We can encode it with ⌈log2 |Q|⌉ = 3 binary values, with ρ(q)(1) = [0, 0, 0], ρ(q)(2) = [0, 0, 1], ρ(q)(3) = [0, 1, 0], ρ(q)(4) = [0, 1, 1], ρ(q)(5) = [1, 0, 0], and ρ(q)(6) = [1, 0, 1]. For the left tape sL ∈ Γ∗ and the right tape sR ∈ Γ∗, we use fractal encoding into two rational numbers, as recommended in [1, 2]. The fractal encoding, bearing similarity to Cantor sets, enables fast manipulation of the top symbols. Without loss of generality, assume that the tape symbols Γ are encoded as the numbers {1, 3, 5, ..., 2|Γ| − 1} and that the blank symbol ♯ is encoded by 1. Then, define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := ∑_{i=1}^{|y|} y(i) / (2|Γ|)^i + 1 / ((2|Γ|)^{|y|} · (2|Γ| − 1)). (3) Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5), and ♯ = 1. Then ρ(s)(sL) = 3/8 + 5/8² + 7/8³ + 3/8⁴ + 5/8⁵ + 1/8⁶ + 1/8⁷ + 1/8⁸ + ... = 3/8 + 5/8² + 7/8³ + 3/8⁴ + 5/8⁵ + 1/(8⁵ · 7). The last term in (3) represents the infinite blank symbols of a tape. This encoding requires the tape-symbol neurons to have the same precision, in base 2|Γ|, as the size of the active (non-blank) part of the tape. As the tape of a Turing Machine has unbounded size, this means that we require neurons with unbounded precision. This is different from infinite precision, but is still not implementable in practice. Hence, we will discuss how to remove this unbounded-precision assumption in Section 4.
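As a concrete illustration of the fractal encoding (3) and the saturated-linear activation (2), here is a small sketch using exact rational arithmetic; the function names are ours, and the final line only illustrates why the leading base-2|Γ| digit of the encoding recovers the symbol under the head.

```python
from fractions import Fraction


def sigma(x):
    """Saturated-linear activation from (2)."""
    return min(max(x, Fraction(0)), Fraction(1))


def fractal_encode(s, n_symbols):
    """Fractal encoding (3) of a tape string s over symbols {1, 3, ..., 2*n_symbols - 1}.
    The trailing term encodes the infinitely many blank symbols (blank = 1)."""
    base = 2 * n_symbols
    val = sum(Fraction(sym, base ** (i + 1)) for i, sym in enumerate(s))
    return val + Fraction(1, base ** len(s) * (base - 1))


assert sigma(Fraction(-1, 2)) == 0 and sigma(Fraction(3, 2)) == 1

# Example from the text: Gamma = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5), base 8.
enc = fractal_encode([3, 5, 7, 3, 5], n_symbols=4)
assert enc == Fraction(3, 8) + Fraction(5, 64) + Fraction(7, 512) \
            + Fraction(3, 4096) + Fraction(5, 32768) + Fraction(1, 32768 * 7)
# The symbol under the head is the leading base-8 "digit" of the encoding:
assert int(enc * 8) == 3
```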
Finally, we encode the value of the top symbol in each tape, sL,(1) and sR,(1), by a binary tuple ρ(r) : Γ → {0, 1}^{|Γ|−1}: ρ(r)i(y) := 1{y > 2i} (1 ≤ i ≤ |Γ| − 1). (4) That is, the ith coordinate of ρ(r)(y) is 1 if and only if the symbol y has a value larger than 2i, and is 0 otherwise. Example. Let Γ = {1, 3, 5, 7}. Then ρ(r)(1) = [0, 0, 0], ρ(r)(3) = [1, 0, 0], ρ(r)(5) = [1, 1, 0], and ρ(r)(7) = [1, 1, 1]. Combining the above discussion, we define the encoding function of configurations ρ : X → Q^{2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+5} by: ρ(q, sL, sR) = ρ(q)(q) ⊕ ρ(s)(sL) ⊕ ρ(s)(sR) ⊕ ρ(r)(sL,(1)) ⊕ ρ(r)(sR,(1)) ⊕ 0, (5) where 0 is a zero vector of size |Q||Γ| + 5. Let ρ−1 : ρ(X) → X be the inverse function such that ρ−1(ρ(x)) = x for all x ∈ X (note that ρ is injective, i.e. ρ(x) = ρ(x′) implies x = x′); we call ρ−1 the decoder function. It should be noted that ρ(q)(q) ⊕ ρ(s)(sL) ⊕ ρ(s)(sR) is sufficient to decode the instantaneous description. We include ρ(r)(sL,(1)) ⊕ ρ(r)(sR,(1)) ⊕ 0 only because it facilitates the full simulation on the RNN. This completes the construction of the encoding function. Given an instantaneous description x ∈ X, we initialize the neurons in an RNN with the values ρ(x). Then, it is possible to construct the parameters of the RNN such that the update given by the RNN on these neurons is the same as the update given by the Turing Machine: Theorem 1. Given a Turing Machine M, there exists an injective function ρ : X → Q^n and an n-neuron unbounded-precision RNN TW,b : Q^n → Q^n, where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 5, such that for all instantaneous descriptions x ∈ X, ρ−1(T³W,b(ρ(x))) = PM(x). (6) Proof. A sketch of the proof is as follows. Neurons in the RNN are grouped by their function. Tape neurons, initialized with ρ(s)(sL) and ρ(s)(sR), encode the tape in fractal encoding. Readout neurons, initialized with ρ(r)(sL,(1)) and ρ(r)(sR,(1)), encode the first symbol in the left and the right tape. State neurons, initialized with ρ(q)(q), encode the state. We need to update the values of these neurons to simulate one step of the Turing Machine. Three steps (or stages) of the RNN are required to simulate one step of a Turing Machine. In the first stage, entry neurons, initialized with 0, compute the combination of the state and the symbol under the head. Since this combination fully determines the next transition, we use it to update the state neurons and the temporary tape neurons during stage two. Temporary tape neurons serve as a buffer for tape neurons when shifting the tape to the left or right. In stage three, we move the values from the temporary tape neurons to the tape neurons to complete the update. The detailed proof can be found in Appendix A. In other words, to simulate one step of a Turing Machine M, we can first encode the instantaneous description x by the encoder function ρ, apply the RNN three times T³W,b, and decode the values back by ρ−1 to obtain PM(x), the instantaneous description after one step of the Turing Machine. Or equivalently, for any Turing Machine M, there exists an RNN such that every three steps of the RNN yield the same result as one step of the Turing Machine. By applying Theorem 1 repeatedly, we simulate a Turing Machine with an RNN in linear time. To be specific, the partial input-output function of an RNN, denoted as T∗W,b : Q^n → Q^n, is defined by applying T³W,b repeatedly until q ∈ F (where q is the state that the RNN simulates), and is undefined if it is not possible to have q ∈ F by applying T³W,b repeatedly.
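To make the encoder ρ of (5) concrete, here is a hedged sketch that assembles the state encoding ρ(q), the fractal encodings ρ(s), and the readout encoding ρ(r) into one vector; states are assumed to be indexed 0, ..., |Q|−1 (the paper labels them 1, ..., |Q|), the function names are ours, and the fractal_encode helper from the previous sketch is repeated for self-containment.

```python
import math
from fractions import Fraction


def fractal_encode(s, n_symbols):
    """Fractal encoding (3), repeated here for self-containment."""
    base = 2 * n_symbols
    return (sum(Fraction(sym, base ** (i + 1)) for i, sym in enumerate(s))
            + Fraction(1, base ** len(s) * (base - 1)))


def encode_state(q, n_states):
    """rho^(q): fixed-width binary encoding of a state index q in {0, ..., n_states-1}."""
    bits = max(1, math.ceil(math.log2(n_states)))
    return [(q >> (bits - 1 - i)) & 1 for i in range(bits)]


def encode_readout(sym, n_symbols):
    """rho^(r) from (4): the i-th entry is 1 iff the symbol value exceeds 2i."""
    return [1 if sym > 2 * i else 0 for i in range(1, n_symbols)]


def encode_config(q, sL, sR, n_states, n_symbols):
    """The encoder rho of (5), as one flat list of rationals."""
    blank = 1
    topL, topR = (sL[0] if sL else blank), (sR[0] if sR else blank)
    return (encode_state(q, n_states)
            + [fractal_encode(sL, n_symbols), fractal_encode(sR, n_symbols)]
            + encode_readout(topL, n_symbols)
            + encode_readout(topR, n_symbols)
            + [0] * (n_states * n_symbols + 5))   # the zero block of size |Q||Gamma|+5


# Dimension check against n = 2|Gamma| + ceil(log2 |Q|) + |Q||Gamma| + 5 from Theorem 1:
v = encode_config(0, [3, 5], [7], n_states=6, n_symbols=4)
assert len(v) == 2 * 4 + 3 + 6 * 4 + 5   # == 40
```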
Based on this definition and Theorem 1, it follows that: Corollary 1.1. Given a Turing Machine M, there exists an injective function ρ : X → Q^n and an n-neuron unbounded-precision RNN TW,b : Q^n → Q^n, where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 5, such that for all instantaneous descriptions x ∈ X, the following holds: If P∗M(x) is defined, then ρ−1(T∗W,b(ρ(x))) = P∗M(x), (7) and if P∗M(x) is not defined, then T∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T∗W,b(ρ(x)) is computed in 3T steps by the RNN. Corollary 1.1 shares similarities with Theorem 1 in [1]. However, our theorem states that 3T, instead of 4T, is sufficient to simulate a Turing Machine. We also give the relationship between the number of neurons required by the RNN and the size of Q and Γ in the Turing Machine. A small UTM with 6 states and 4 symbols, denoted by U6,4, was proposed [19] and can simulate any Turing Machine in time O(T⁶), where T is the number of steps required by the Turing Machine (the one to be simulated) to compute the result. As U6,4 is also a Turing Machine, we apply Corollary 1.1 to simulate U6,4, leading to a Turing-complete RNN. Plugging in |Q| = 6 and |Γ| = 4, we obtain the following result: Corollary 1.2. There exists a 40-neuron unbounded-precision RNN that can simulate any Turing Machine in O(T⁶), where T is the number of steps required for the Turing Machine to compute the result. It should be noted that [1] focused on capabilities and proved a Turing-complete RNN with 1058 neurons; [20] proposed a Turing-complete RNN with 52 neurons. Here we provide a plug-and-play formula to simulate any Turing Machine. 4 Turing Completeness of Bounded-Precision RNNs with Growing Memory In the following, we consider how to remove the assumption of unbounded neural precision (Section 3) without reducing computational capacity. If we assume all neurons to have precision bounded by p in base 2|Γ|, then the tape can be encoded by ⌈|sj|/p⌉ neurons using fractal encoding, where j ∈ {L,R}. We do this by encoding every p symbols of the tape into a single neuron. Since most of these neurons do not require updates, just like symbols far from the read/write head on a Turing tape, we propose to store them in a separate growing memory module organized in a stack-like manner: Definition 1. A growing memory module is a stack of non-zero neurons with push and pop operations controlled by two neurons in an RNN, denoted as the push neuron u(t) and the pop neuron o(t), in the following way: for every step t after the RNN has finished updating the values of the neurons, (i) if u(t) > 0, then a new neuron with the value u(t) is pushed onto the stack and u(t) is set to 0; (ii) if o(t) = 0 and the stack is not empty, then the top neuron is popped from the stack and o(t) is set to the value of the top neuron in the updated stack; (iii) if o(t) = 0 and the stack is empty, then o(t) is set to a default value c. An RNN with a growing memory module is a mapping TW,b : (Q^n, Q∗) → (Q^n, Q∗), where the first element of the tuple corresponds to the values of the neurons in the RNN and the second element of the tuple corresponds to the stack of the growing memory module. We can equip an RNN with multiple growing memory modules, with each module having its own push and pop neurons controlled by the RNN. An RNN with two growing memory modules is defined by a mapping TW,b : (Q^n, Q∗, Q∗) → (Q^n, Q∗, Q∗). We can view the growing memory module as a way of dynamically pointing to a sequence of non-zero neurons appended by one zero neuron. o(t) can be viewed as the pointer for the last non-zero neuron in the sequence, and u(t) can be viewed as the pointer for the zero neuron at the beginning of the sequence; see Figure 1.
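The push/pop protocol of Definition 1 can be written out directly; the following is a minimal sketch in which the class and attribute names are ours, and the handling of a pop that empties the stack (falling back to the default value c) is an assumption, since Definition 1 does not spell that case out.

```python
class GrowingMemoryModule:
    """A sketch of Definition 1: a stack of non-zero neurons whose push/pop
    operations are triggered by the RNN's push neuron u(t) and pop neuron o(t)."""

    def __init__(self, default=0.5):
        self.stack = []          # bottom ... top
        self.default = default   # the default value c used when the stack is empty

    def step(self, u, o):
        """Apply rules (i)-(iii) once, after the RNN has updated its neurons.
        Returns the (u, o) values the RNN sees at the next step."""
        if u > 0:                              # (i) push the value u(t), then clear it
            self.stack.append(u)
            u = 0
        if o == 0:
            if self.stack:                     # (ii) pop; o becomes the new top value
                self.stack.pop()
                # assumed: fall back to c if the pop emptied the stack
                o = self.stack[-1] if self.stack else self.default
            else:                              # (iii) empty stack: fall back to c
                o = self.default
        return u, o


m = GrowingMemoryModule(default=0.5)
u, o = m.step(u=0.75, o=0.25)   # pushes 0.75; no pop, since o != 0
assert m.stack == [0.75] and u == 0
```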
With two growing memory modules (one for the left tape and one for the right tape), we can construct an RNN with bounded-precision neurons that can simulate any Turing Machine. We first describe how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers and two stacks, with which an RNN and its growing memory modules can be initialized. In the following discussion, we assume that all neurons have precision bounded by p ≥ 2 in base 2|Γ|; that is, each neuron can encode at most p symbols. Both the state q and the top symbols sL,(1), sR,(1) are encoded with binary values as in Section 3. For the tape sj (j ∈ {L,R}), we define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := ∑_{i=1}^{|y|} y(i) / (2|Γ|)^i, (8) which is the same as (3) except that it omits the term encoding the infinite blank symbols. Then, we encode the tape sj into a stack of neurons as follows: First, encode the rightmost p symbols (the ones farthest from the read/write head) with ρ(s) and push the result onto an empty stack, denoted as Mj. Then, encode the next rightmost p symbols and push the result onto Mj again, and repeat until at least one and at most p symbols remain in the tape. Denote this encoding function for the tape as ρ(M) : Γ∗ → Q∗. The remaining symbols in the tape, denoted as sj,(1:h(|sj|)), where h(y) := ((y − 1) mod p) + 1, will be encoded with the fractal encoding ρ(s) as well, but will appear in neurons inside the RNN. The general idea is to let only the symbols closest to the read/write head (sj,(1:h(|sj|))) reside in the RNN. If the number of symbols residing in the RNN reaches 0 or p, then we pop from or push to the stack, respectively, to ensure that at least 1 and at most p symbols reside in the tape neurons. It is interesting to note that the kth neuron in the stack (from the top) requires at least kp steps of the Turing Machine before it may be updated, so the values of neurons near the bottom of the stack will not be changed for many steps. That is, the neurons in the stack, except for the top neurons, are passive. Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5, 5, 3, 7) and p = 3. Then the number of symbols to remain in the RNN is 2, and they are encoded by ρ(s)(sL,(1:h(|sL|))) = 3/8 + 5/8². The remaining six symbols are stored in the stack: ρ(M)(sL) = [7/8 + 3/8² + 5/8³, 5/8 + 3/8² + 7/8³]. We also encode h(|sj|), the number of symbols of sj residing in the RNN, in a dedicated neuron (j ∈ {L,R}). In the above example, h(|sL|) = 2. These neurons help the RNN to know when pushing or popping operations are required. We use the encoding ρ(h) : {1, 2, ..., p} → Q, defined by: ρ(h)(y) = y / (p + 1). (9) Together, define the encoding function ρ : X → (Q^{2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+19}, Q∗, Q∗) by: ρ(q, sL, sR) = (ρ(q)(q) ⊕ ρ(s)(sL,(1:h(|sL|))) ⊕ ρ(s)(sR,(1:h(|sR|))) ⊕ ρ(s)(sL,(h(|sL|)+1:h(|sL|)+p)) ⊕ ρ(s)(sR,(h(|sR|)+1:h(|sR|)+p)) ⊕ ρ(r)(sL,(1)) ⊕ ρ(r)(sR,(1)) ⊕ ρ(h)(h(|sL|)) ⊕ ρ(h)(h(|sR|)) ⊕ 0, ρ(M)(sL), ρ(M)(sR)), (10) where 0 is a zero vector of size |Q||Γ| + 15. The first element of the tuple is for initializing the neurons in the RNN, while the second and third elements are for initializing the two growing memory stack modules. All encoded values have precision p. Similar to the previous section, ρ is injective and so we can define the decoder function ρ−1 : ρ(X) → X.
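The chunking of a tape into an in-RNN part and a stack of p-symbol chunks, i.e. h(·) and ρ(M), can be sketched as follows; the function names are ours, and the stack list below is written top-first to match the order in which ρ(M)(sL) is listed in the example above.

```python
from fractions import Fraction


def fractal_encode_chunk(chunk, n_symbols):
    """Fractal encoding (8): like (3) but without the trailing blank-symbol term."""
    base = 2 * n_symbols
    return sum(Fraction(sym, base ** (i + 1)) for i, sym in enumerate(chunk))


def h(length, p):
    """Number of tape symbols that stay inside the RNN: ((length - 1) mod p) + 1."""
    return ((length - 1) % p) + 1


def encode_tape_bounded(s, p, n_symbols):
    """Split tape s into the in-RNN encoding and the stack rho^(M)(s) of p-symbol
    chunks. stack[0] is the chunk closest to the head (the top of the stack)."""
    k = h(len(s), p)
    in_rnn = fractal_encode_chunk(s[:k], n_symbols)
    rest = s[k:]                       # always a multiple of p symbols
    stack = [fractal_encode_chunk(rest[i:i + p], n_symbols)
             for i in range(0, len(rest), p)]
    return in_rnn, stack


# Example from the text: Gamma = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5, 5, 3, 7), p = 3.
in_rnn, stack = encode_tape_bounded([3, 5, 7, 3, 5, 5, 3, 7], p=3, n_symbols=4)
assert in_rnn == Fraction(3, 8) + Fraction(5, 64)
assert stack == [Fraction(7, 8) + Fraction(3, 64) + Fraction(5, 512),
                 Fraction(5, 8) + Fraction(3, 64) + Fraction(7, 512)]
```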
With the new encoding function ρ and growing memory modules, we can prove an alternative version of Theorem 1 that only requires bounded-precision neurons: Theorem 2. Given a Turing Machine M, there exists an injective function ρ : X → (Q^n, Q∗, Q∗) and an n-neuron p-precision (in base 2|Γ|) RNN with two growing memory modules TW,b : (Q^n, Q∗, Q∗) → (Q^n, Q∗, Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X, ρ−1(T³W,b(ρ(x))) = PM(x). (11) Proof. The detailed proof is in Appendix B. To illustrate the construction of the RNN, the parameters for the neuron initialized with ρ(h)(h(|sL|)), called the left guard neuron and denoted as gL(t), will be described here. We assume all neurons in the RNN are initialized with values from the encoder function ρ(x) at time t = 1. The guard neuron gL(t) encodes the number of left-tape symbols residing in the RNN. Over the three stages of the RNN, we need to update its value from gL(1) = h(|sL|)/(p+1) to gL(4) = h(|s′L|)/(p+1), where s′L is the left tape after one step of the Turing Machine. First, notice that h(|s′L|) can be expressed as: h(|s′L|) = h(|sL|) − 1 if d = L and h(|sL|) ≥ 2; p if d = L and h(|sL|) = 1; h(|sL|) + 1 if d = R and h(|sL|) ≤ p − 1; 1 if d = R and h(|sL|) = p, (12) where d is the direction in which the Turing Machine’s head moves. Example. Assume d = L and h(|sL|) = 1, that is, the Turing Machine is moving left and there is only one symbol residing in the RNN. After the move, sL will have one symbol fewer, so the corresponding neuron sL(t) would encode no symbols. As a result, the top neuron of the left growing memory module is popped, and sL(t) assumes its value. This way the neuron sL(t) again encodes the active symbols (1 to p) of the left tape, and h(|sL|) is set to p. (Alternatively, one may prove it directly by the definition of h and the fact that |s′L| = |sL| − 1.) An analogous process holds when the Turing Machine is moving right. We implement (12) with an RNN as follows. Define stage neurons as: c1(t+1) = σ(1 − c1(t) − c2(t)), (13) c2(t+1) = σ(c1(t)), (14) with both neurons initialized to zero. Define c(t) := [c1(t), c2(t), c3(t)] where c3(t) := 1 − c1(t) − c2(t); then c(1) = [0, 0, 1]; c(2) = [1, 0, 0]; c(3) = [0, 1, 0]. These stage neurons signal which of the three stages the RNN is in. In the construction of the RNN, there exists a linear sum of neurons, denoted as d(t) = [dL(t), dR(t)], such that if the Turing Machine is moving left, d(1) = [0, 0]; d(2) = [1, 0]; d(3) = [0, 0]; and if the Turing Machine is moving right, d(1) = [0, 0]; d(2) = [0, 1]; d(3) = [0, 0]; this signals the direction in which the Turing Machine is moving (the formulas for d(t) appear in Appendix B). Then consider the following update rule for the left guard neurons: gL(t+1) = σ(gL(t) + (dR(t) − dL(t) − p·g′L(t) + p·g′′L(t))/(p+1)), (15) g′L(t+1) = σ((p+1)·gL(t) + dR(t) − p − 2c2(t) − 2c3(t)), (16) g′′L(t+1) = σ(2 − (p+1)·gL(t) − dR(t) − 2c2(t) − 2c3(t)), (17) where gL(1) = h(|sL|)/(p+1), g′L(1) = g′′L(1) = 0. It can be verified that gL(4) = h(|s′L|)/(p+1) as defined by (12), completing the proof for gL(t). Example. Assume d = L and h(|sL|) = 1. Then gL(1) = gL(2) = 1/(p+1) and g′L(1) = g′′L(1) = g′L(2) = g′′L(2) = 0. In the second stage, gL(3) = σ(1/(p+1) − 1/(p+1)) = 0, g′L(3) = σ(1−p) = 0, and g′′L(3) = σ(2−1) = 1. In the third stage, gL(4) = σ(0 + p/(p+1)) = p/(p+1), as required. The full proof appears in Appendix B.
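As a sanity check on (12) and (15)-(17), the three-stage update of the left guard neuron can be simulated with exact rationals; the stage signal c(t) and the direction signal d(t) are hard-coded to the values stated in the text (their defining formulas live in Appendix B), and the function names are ours.

```python
from fractions import Fraction as F


def sigma(x):
    """Saturated-linear activation from (2)."""
    return min(max(x, F(0)), F(1))


def guard_after_three_stages(h_sL, d, p):
    """Run the updates (15)-(17) for t = 1, 2, 3 and return (p+1)*gL(4), which
    should equal h(|s'L|) from (12). Per the text, c(1)=[0,0,1], c(2)=[1,0,0],
    c(3)=[0,1,0], and d(t) is nonzero only at t = 2."""
    gL, g1, g2 = F(h_sL, p + 1), F(0), F(0)
    for t in (1, 2, 3):
        c2 = 1 if t == 3 else 0
        c3 = 1 if t == 1 else 0
        dL = 1 if (t == 2 and d == 'L') else 0
        dR = 1 if (t == 2 and d == 'R') else 0
        gL, g1, g2 = (
            sigma(gL + (dR - dL - p * g1 + p * g2) / (p + 1)),   # (15)
            sigma((p + 1) * gL + dR - p - 2 * c2 - 2 * c3),      # (16)
            sigma(2 - (p + 1) * gL - dR - 2 * c2 - 2 * c3),      # (17)
        )
    return (p + 1) * gL


p = 3
assert guard_after_three_stages(2, 'L', p) == 1     # decrement
assert guard_after_three_stages(1, 'L', p) == p     # wrap around to p
assert guard_after_three_stages(1, 'R', p) == 2     # increment
assert guard_after_three_stages(p, 'R', p) == 1     # wrap around to 1
```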
Similar to Corollary 1.1, it follows that: Corollary 2.1. Given a Turing Machine M, there exists an injective function ρ : X → (Q^n, Q∗, Q∗) and an n-neuron p-precision (in base 2|Γ|) RNN with two growing memory modules TW,b : (Q^n, Q∗, Q∗) → (Q^n, Q∗, Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X, the following holds: If P∗M(x) is defined, then ρ−1(T∗W,b(ρ(x))) = P∗M(x), (18) and if P∗M(x) is not defined, then T∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T∗W,b(ρ(x)) is computed in 3T steps by the RNN. Finally, applying Corollary 2.1 to U6,4, we obtain: Corollary 2.2. There exists a 54-neuron p-precision (in base 8) RNN with two growing memory modules that can simulate any Turing Machine in O(T⁶), where T is the number of steps required for the Turing Machine to compute the result and p ≥ 2. The architecture of the Turing-complete 54-neuron RNN (fully described in the proof of Theorem 2) is depicted in Figure 2. 4.1 Relationship of the Growing Memory Modules with Stack-augmented RNNs The proposed growing memory module belongs to the generic class of stack-augmented RNNs, which refers to any RNNs augmented with a stack-like mechanism. Many different forms of stack-augmented RNNs have been proposed [11, 12, 13, 14, 15]. Given the simplicity of its design, the growing memory module represents the foundational form of these stack-augmented RNNs. It is easy to show that these stack-augmented RNNs can simulate the growing memory module in linear time. For example, the growing memory modules can be simulated by the neural stack [11] as follows. For the pushing operation, set vt in the neural stack to u(t) in the growing memory module, and dt in the neural stack to 1{u(t) > 0} in the growing memory module. For the popping operation, set ut in the neural stack to 1{o(t) = 1} in the growing memory module. Therefore, the proof of the Turing completeness of bounded-precision RNNs with growing memory modules extends to other stack-augmented RNNs. That is, bounded-precision stack-augmented RNNs are also Turing-complete. Unlike other stack-augmented RNNs, the proposed growing memory modules use a simple mechanism to control pushing and popping, as only the top neurons in the stack are included in the RNN. This allows theories relating to growing memory modules to be easily extended to other forms of RNNs. 5 Bounded-Precision RNNs and Space-Bounded Turing Machines As discussed above, it is inefficient to update all neurons that encode tape information, but if one still wants to remove the growing memory modules and focus purely on a bounded-precision RNN, then the resulting network can only simulate space-bounded Turing Machines (that is, Turing Machines with a bounded-size tape). Theorem 3. Given a Turing Machine M with a bounded tape of size F, there exists an injective function ρ : X → Q^n and an n-neuron p-precision (in base 2|Γ|) RNN TW,b : Q^n → Q^n, where n = O(⌈F/p⌉) and p ≥ 2, such that for all instantaneous descriptions x ∈ X, ρ−1(T³W,b(ρ(x))) = PM(x). (19) The proof, which is constructive, can be found in Appendix C. The general idea of the proof is to implement the growing memory module of Section 4 by an RNN as well and place all neurons inside the RNN. The theorem shows that, for fixed precision p, the number of neurons required to simulate a space-bounded Turing Machine grows linearly with the tape size.
To simulate a Turing Machine with an unbounded tape, we would need to add neurons to the RNN once the read/write head reaches the end of the memory. To be specific, we say an RNN has an unbounded number of neurons if the RNN either has an infinite number of neurons to start with or can increase the number of neurons after each update step depending on the neurons’ values. A UTM that simulates any Turing Machine with only O(T log T) slowdown was described in [21]. While this UTM has multiple tapes, Theorem 3 can easily be generalized to multiple-tape Turing Machines. We now obtain: Corollary 3.1. There exists an unbounded-neuron bounded-precision RNN that can simulate any Turing Machine in O(T log T), where T is the number of steps required for the Turing Machine to compute the result. 6 Discussion and Conclusion To construct a Turing-complete RNN, we have to incorporate some encoding for the unbounded number of symbols on the Turing tape. This encoding can be done by: (a) unbounded precision of some neurons (Theorem 1), (b) an unbounded number of neurons (Theorem 3), or (c) a separate growing memory module (Theorem 2). The main contribution of this paper is spelling out the details of (c), which provides a practical way to construct an RNN that runs any given algorithm. We prove the Turing completeness of a 40-neuron unbounded-precision RNN, which is the smallest Turing-complete RNN to date. We analyze the relationship between the number of neurons and the precision of an RNN when simulating a Turing Machine. Most importantly, we propose a 54-neuron bounded-precision RNN with growing memory modules that is Turing-complete, and this proof of Turing completeness can be extended to stack-augmented RNNs in general. This paper focuses on the computational capabilities and representability of symbolic and sub-symbolic processing via stack-augmented RNNs; it does not yet engage with methods for training them. Since growing memory modules are not differentiable, we cannot train them directly with the commonly used error backpropagation algorithm. One may want to construct a differentiable version of the modules, or alternatively use a different learning rule (e.g., REINFORCE [22]) to deal with the discrete pushing and popping operations. Understanding methods to train growing memory modules, incorporate symbolic information into the sub-symbolic representation efficiently, and retrieve both symbolic and non-symbolic information are the next steps towards the goal of combining symbolic and sub-symbolic capabilities in an adaptive and applicable manner.
1. What is the main contribution of the paper regarding Turing machine simulation? 2. What are the strengths and weaknesses of the proposed memory-augmented RNN architecture? 3. How does the reviewer assess the novelty and significance of the improved construction compared to prior works? 4. What are some concerns regarding the practical evaluation and training of the proposed architecture? 5. How does the reviewer suggest improving the architecture and its comparison to other related models?
Summary Of The Paper Review
Summary Of The Paper This paper analyses the Turing completeness of recurrent neural networks, and proposes a memory-augmented RNN architecture reminiscent of existing stack RNNs. Specifically: First, a construction for simulating any Turing machine with an unbounded-precision RNN is presented, in which effectively the left and right sides of the Turing machine's tape (with respect to its head) are encoded as two 'stacks' in the RNN. This construction is analysed and requires less simulation time than that of Siegelmann and Sontag [1] (3T instead of 4T+O(used tape length), where T is the computation time of the simulated Turing machine). Additionally, the authors analyse the exact precision of the neurons necessary (as a function of the simulated Turing machine's computation length) for the simulation (as opposed to previous works, which simply note a requirement for infinite precision). Second, noting that it is unavoidable that the precision must grow when simulating longer Turing machine computations (to maintain the contents of the tape), the authors propose augmenting the RNN with a dynamic memory module, such that the neurons themselves may have constant, bounded, precision. They theoretically analyse their proposed architecture and show how a Turing machine may be simulated in it. Review The construction of this dynamic memory module is reminiscent of recent work on "stack RNNs" [2-4]. Much like the model proposed in this paper, stack RNNs push and pop to an external 'memory module' (generally referred to as a 'stack'), which allows them to simulate pushdown automata (although most of these models do not discuss 2 stacks as used here, their constructions can certainly be generalised to multiple stacks, and so it is conceivable that once sufficient success is achieved with one stack then there would be a shift to 2 stacks or more). Much of the challenge with these stack-RNNs is making this stack differentiable, so that the model can 'learn' whether it should be pushing or popping from its stack during training (i.e., the push and pop operations need to be differentiable). Multiple different methods are considered to attain this quality (see partial list of works below). In this paper it does not seem that this challenge has been addressed, and indeed the current push and pop operations appear to be happening discretely, i.e., I do not think it is possible to train the current architecture with SGD. Another thing I would like to note is that it is not generally convincing to propose a new architecture without some practical evaluation. While the theoretical power of this architecture is shown in this work (a construction for simulating any Turing machine), its ability to learn any language, formal or natural, unfortunately has not been evaluated. I suspect that this is also why the challenge of making differentiable push and pop operations has gone unnoticed in this work. To conclude: In itself the construction of a Turing machine simulation in RNNs is not new (e.g., it has already been done in [1]), but I appreciate the analysis showing that this construction is faster than previous constructions, and moreover the analysis of necessary precision per Turing machine and input, which (if I remember correctly) is not present in previous such works. However, I do not see this result as sufficient for accepting this paper alone: I believe the main interest of the community in this line of results is more the fact that RNNs can simulate Turing machines, rather than how efficiently they can do so.
(Maybe if the improved construction were significantly more efficient, that would be nice, but this is only a constant-factor improvement: from 5T (we can say that the tape will never be longer than T) to 3T). I appreciate the attempt to create a memory-augmented network. However, at this point I would only want to accept new architectures after they have been evaluated, and this one has not (after all, if we wanted only to encode Turing machines in them, then we could simply use those Turing machines directly instead). Moreover, I suspect that if we do come to train this architecture, we will hit many obstacles (see e.g. the discreteness of the push and pop operations, above). If the authors pursue making this architecture differentiable, I strongly suggest they read more about stack RNNs (I am not an expert on these, so I give only a partial list), where I think they will find a lot of inspiration. Either way, I ask that they evaluate their architecture, and moreover compare it to existing stack RNNs and at the very least vanilla RNNs (including variants such as GRUs and LSTMs), before they resubmit. references: [1] On the computational power of neural nets (Siegelmann and Sontag) [2] Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets (Joulin and Mikolov) [3] The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations (Sun, Giles, Chen, Lee) [4] Learning to Transduce with Unbounded Memory (Grefenstette, Hermann, Suleyman, Blunsom)
Title Turing Completeness of Bounded-Precision Recurrent Neural Networks Abstract Previous works have proved that recurrent neural networks (RNNs) are Turingcomplete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine’s time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and boundedprecision RNNs, revisiting and extending the theoretical foundations of RNNs. 1 Introduction Symbolic (such as Turing Machines) and sub-symbolic processing (such as adaptive neural networks) are two competing methods of representing and processing information, each with its own advantages. An ultimate way to combine symbolic and sub-symbolic capabilities is by enabling the running of algorithms on a neural substrate, which means a neural network that can simulate a Universal Turing Machine (UTM). Previous works [1, 2, 3] have shown that this is possible – there exists a recurrent neural network (RNN) that can simulate a UTM. These proofs assumed a couple of neurons with unbounded precision that equals the number of symbols used in the Turing tape. Here we provide an alternative simulation of a UTM by RNNs with bounded-precision neurons only. The general idea works as follows. The Turing Machine’s tape is stored in a growing memory module, which is a stack of neurons with pushing and popping operations controlled by neurons in the RNN. The size of the growing memory module is determined by the usage of the Turing tape - it dynamically recruits new neurons when more memories are needed and releases them when memories become irrelevant. The neurons in the stack, except for the top neuron, are not regularly updated (and hence can be referred to as passive), saving computational cost for memories that are not in the focus of the computing and do not require change. Using growing memory modules, a 54-neuron bounded-precision RNN is constructed that can simulate any Turing Machine. Our proposed growing memory modules are inspired by biological memory systems. The process of dynamically recruiting new neurons when more memories are necessary is also observed in biological memory systems. Neurogenesis is the process by which new neurons are produced in the central nervous system; it is most active during early development, but continues through life. *Both authors contributed equally. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). In adult vertebrates, neurogenesis is known to occur in the dentate gyrus (DG) of the hippocampal formation [4] and the subventricular zone (SVZ) of the lateral ventricles [5]. Since DG is well-known in neuroscience for its role in pattern separation for memory encoding [6, 7], this suggests that biological memory systems also dynamically recruit new neurons. The rate of neurogenesis in adult mice has been shown to be higher if they are exposed to a wider variety of experiences [8]. 
This further suggests a role for self-regulated neurogenesis in scaling up the number of new memory that can be encoded and stored during one’s lifetime without catastrophic forgetting of previously consolidated memory. Besides the mechanism of recruiting new neurons, the process of storing neurons in growing memory modules also shares some similarities with biological memory consolidation, a process by which short-term memory is transformed into long-term memory [9, 10]. Compared to short-term memory, long-term memory is more long-lasting and robust to interference. This is similar to the neurons stored in growing memory modules - the values of these neurons (except the top neuron in the stack) remain unchanged and cannot be interfered by the RNN, providing a mechanism to store information stably. Growing memory modules share similarities with other stack-augmented RNNs [11, 12, 13, 14, 15]. In neural stacks [11], the RNN outputs two continuous scalars that control the strength of pushing and popping operations, and the stack is made differentiable by adding a strength vector. In stack RNNs [12] and DiffStk-RNN [15], the RNN outputs the probability vector corresponding to pushing, popping, and no operation. In NNPDA [13], the RNN outputs a continuous scalar that controls the pushing and popping operations, with the minimum value corresponding to popping and the maximum value corresponding to pushing. NSPDA [14] uses a discrete-valued action neurons to control pushing and popping operations. In contrast to these models, growing memory modules have a simple design and do not need to be differentiable. However, it can be easily shown that growing memory modules can be simulated by these stack-augmented RNNs in linear time, and thus growing memory modules can be considered a generic type of stack-augmented RNNs. Therefore, our proof on the Turing completeness of an RNN with growing memory modules can be extended to stack-augmented RNNs in general, establishing their theoretical motivation. A Turing-complete RNN that is fully differentiable was introduced in 1996 [16]; this feature is a prerequisite to have the network trainable by gradient descent. It was followed by the Neural Turing Machine (NTM) [17] and its improved version, the differentiable neural computer [18], which are both differentiable and trainable RNNs equipped with memory banks. Though inspired by Turing Machines, bounded-precision NTMs and differentiable neural computers are not Turing-complete due to the fixed-sized memory bank, but they can simulate space-bounded Turing Machines (see Section 5). Simulation of a Turing Machine by an RNN with growing memory modules represents a practical and biologically inspired way to combine symbolic and sub-symbolic capabilities. All neurons in the RNN and growing memory modules have fixed precision. While the size of growing memory modules is linear in the number of symbols used in the Turing tape, the number of neurons in the RNN is still constant. Moreover, the neurons in growing memory modules (except the top neuron in the stack) are passive at most times. As a result, the time complexity of the simulation is linear in the simulated machine’s time and independent of the memory size. By showing how to simulate a Turing Machine with a bounded-precision RNN and thereby constructing a bounded-precision RNN that can run any algorithms, our paper proposes a practical method that combines symbolic and sub-symbolic capabilities. The remainder of the paper is structured as follows. 
Section 2 describes the preliminary of the paper, including the definition of Turing Machines and RNNs. Section 3 revisits and extends theories relating to simulating a Turing Machine with unbounded-precision RNNs and shows the existence of a 40-neuron unbounded-precision RNN that is Turing-complete. Section 4 presents the growing memory modules and proves the existence of a 54-neuron bounded-precision RNN with two growing memory modules that is Turing-complete. Section 5 relates the number of neurons and the precision of RNNs when simulating Turing Machines. Section 6 concludes the paper. 2 Background and Notation A Turing Machine is a 7-tuple M = (Q,Σ,Γ, δ, q0, ♯, F ), where Q is a finite set of states, Σ is a finite set of input symbols, Γ is a finite set of tape symbols (note that Σ ⊂ Γ), δ : Q×Γ → Q×Γ×{L,R} is the machine’s transition rule, q0 ∈ Q is the initial starting state, ♯ is the blank symbol (note that ♯ ∈ Γ but ♯ /∈ Σ), and F ⊂ Q is the set of final state. We only consider deterministic Turing Machines in this paper. The instantaneous description (or the configuration) of a Turing Machine is typically defined as a tuple of state, tape, and the location of the read/write head. However, we use a slightly different definition in the paper. We define the left-tape symbols (or the left tape in short), denoted as sL, to be the string of symbols starting at the symbol under the read/write head and extending to the left. We define the right-tape symbols (or the right tape in short), denoted as sR, to be the string of symbols starting at the symbol to the right of the read/write head and extending to the right. The first symbol in both strings is the closest to the read/write head in both sL and sR (that is, the left tape is reversed in the representation) and the blank symbols at the two ends are omitted. Therefore, the length of sL and sR combined equals to the number of symbols used in the Turing tape in each step (that is, unbounded but not infinite). Since sL and sR encode both the tape and the location of the read/write head, we define the instantaneous description of a Turing Machine as a 3-tuple (q, sL, sR) ∈ (Q,Γ∗,Γ∗), where q denotes the state, sL denotes the left-tape symbols, and sR denotes the right-tape symbols. Though the two definitions are equivalent, this definition allows easy encoding of tape symbols into neurons’ values. The set of all possible instantaneous description is denoted as X := (Q,Γ∗,Γ∗). In each step of a Turing Machine, the symbol under the read/write head is read and, together with the state, determines the symbol to be written under the read/write head, the direction of moving the tape, and the next state. To be precise, the complete dynamic map of M, denoted as PM : X → X , is defined as follows: 1. Let x = (q, sL, sR) be the input configuration, sL,(1) denote the first symbol in sL and sR,(1) denote the first symbol in sR. The transition is defined by the 3-tuple (q′, y, d) = δ(q, sL,(1)); 2. Replace the state of the machine q with q′ and the first symbol in sL by y; 3. Move the symbol in sL,(1) to become the new sR,(1) if d = L, and move sR,(1) to become the new sL,(1) if d = R (if there are no symbols left in sL or sR for moving, append a blank symbol ♯ to it before moving). Denote the left-tape symbols and the right-tape symbols after 2. and 3. by s′L and s ′ R respectively. Then PM(x) = (q′, s′L, s′R) represents one transition of the Turing Machine M from one configuration to the next. 
The partial input-output function of M, denoted as P∗M : X → X , is defined by applying PM repeatedly until q ∈ F , and is undefined if it is not possible to have q ∈ F by applying PM repeatedly. A recurrent neural network (RNN) is a neural network consisting of n neurons. The value of neuron i at time t ∈ {1, 2, ...}, denoted as xi(t) ∈ Q (Q is the set of rational numbers), is computed by an affine transformation of the values of neurons in the previous state followed by an activation function σ, i.e. xi(t) = σ( ∑n j=1 wijxj(t− 1) + bi), where wij are the weights and bi is the bias; or in vector form: x(t) = σ(Wx(t− 1) + b), (1) where x(t) ∈ Qn, W ∈ Rn×n and b ∈ Rn. This defines a mapping TW,b : Qn → Qn which characterizes an RNN. For simplicity, we consider the saturated-linear function in this paper; that is: σ(x) := 0 if x < 0, x if 0 ≤ x ≤ 1, 1 if x > 1. (2) Thus, x(t) ∈ (Q ∩ [0, 1])n for all t > 0. We say that a neuron xi(t) has precision p in base b if for all t > 0, xi(t) can be expressed as∑p l=1 a(l)(t)∏l j=1 c(j)(t) for some strings a(t) ∈ {0, 1, ..., b}p and c(t) ∈ {1, ..., b}p. For a string a, we use a(i) to denote the ith symbol in a and a(i:j) to denote the string a(i)a(i+1)...a(j). For a function f that maps from a set Y to a subset of Y, we use fn to denote the nth iterate of f where 1 ≤ n < ∞. For any two vectors x ∈ Rm,y ∈ Rn, we use x ⊕ y ∈ Rm+n to denote the concatenation of the two vectors. We use A∗ to denote all possible strings formed by elements from set A. The notation is summarized in the table found in Appendix D. 3 Turing Completeness of Unbounded-Precision RNNs To simulate a Turing Machine M by an RNN, we first consider how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers. For the state q ∈ Q, we encode it with ⌈log2 |Q|⌉ binary values, denoted as ρ(q) : Q → {0, 1}⌈log2 |Q|⌉, with each possible combination of binary values representing a specific state. Example. Let Q = {1, 2, 3, 4, 5, 6}. We can encode it by ⌈log2 |Q|⌉ = 3 binary values, with ρ(q)(1) = [0, 0, 0], ρ(q)(2) = [0, 0, 1], ρ(q)(3) = [0, 1, 0], ρ(q)(4) = [0, 1, 1], ρ(q)(5) = [1, 0, 0], and ρ(q)(6) = [1, 0, 1]. For the left tape sL ∈ Γ∗ and the right tape sR ∈ Γ∗, we use fractal encoding in two rational numbers, as recommended in [1, 2]. The fractal encoding, bearing similarity to Cantor sets, enables fast manipulation of the top symbols. Without loss of generality, assume that the tape symbols Γ are encoded into numbers {1, 3, 5, ..., 2|Γ|− 1} and that the blank symbol ♯ is encoded by 1. Then, define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := |y|∑ i=1 y(i) (2|Γ|)i + 1 (2|Γ|)|y| · (2|Γ| − 1) . (3) Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5), and ♯ = 1. Then ρ(s)(sL) = 38 + 5 82 + 7 83 + 3 84 + 5 85 + 1 86 + 1 87 + 1 88 + ... = 3 8 + 5 82 + 7 83 + 3 84 + 5 85 + 1 85·7 . The last term in (3) represents the infinite blank symbols of a tape. This encoding requires the tape symbol neurons to have the same precision as the size of the active (non-blank) part of the tape in base 2|Γ|. As the tape in a Turing Machine has an unbounded size, it means that we require neurons with unbounded precision. This is different from infinite precision but still not applicable. Hence, we will discuss how to remove this unbounded-precision assumption in Section 4. Finally, we encode the value of the top symbol in each tape neuron, sL,(1) and sR,(1), by a binary tuple ρ(r) : Γ → {0, 1}|Γ|−1: ρ (r) i (y) := 1{y > 2i} (1 ≤ i ≤ |Γ| − 1). 
(4) That is, the i coordinate of ρ(r)(y) is 1 if and only if the symbol y has a value larger than 2i, and is 0 otherwise. Example. Let Γ = {1, 3, 5, 7}. Then ρ(r)(1) = [0, 0, 0], ρ(r)(3) = [1, 0, 0], ρ(r)(5) = [1, 1, 0], and ρ(r)(7) = [1, 1, 1]. Combining the above discussion, we define the encoding function of configurations ρ : X → Q2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+5 by: ρ(q, sL, sR) = ρ (q)(q)⊕ ρ(s)(sL)⊕ ρ(s)(sR)⊕ ρ(r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ 0, (5) where 0 is a zero vector of size |Q||Γ|+ 5. Let ρ−1 : ρ(X ) → X be the inverse function such that ρ−1(ρ(x)) = x for all x ∈ X (note that ρ is injective, i.e. ρ(x) = ρ(x′) implies x = x′); we call ρ−1 the decoder function. It should be noted that ρ(q)(q) ⊕ ρ(s)(sL) ⊕ ρ(s)(sR) is sufficient to decode the instantaneous description. We include ρ(r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ 0 only because it facilitates the full simulation on the RNN. This completes the construction of the encoding function. Given an instantaneous description x ∈ X , we initialize the neurons in an RNN with values ρ(x). Then, it is possible to construct the parameters of RNN such that the update given by the RNN on these neurons is the same as the update given by the Turing Machine: Theorem 1. Given a Turing Machine M, there exists an injective function ρ : X → QN and an n-neuron unbounded-precision RNN TW,b : Qn → Qn, where n = 2|Γ|+ ⌈log2 |Q|⌉+ |Q||Γ|+ 5, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (6) Proof. A sketch of the proof is as follows. Neurons in the RNNs are grouped by their function. Tape neurons, initialized with ρ(s)(sL) and ρ(s)(sR), encode the tape in fractal encoding. Readout neurons, initialized with ρ(r)(sL,(1)) and ρ(r)(sR,(1)), encode the first symbol in the left and the right tape. State neurons, initialized with ρ(q)(q), encode the state. We need to update the values of these neurons to simulate one step of the Turing Machine. Three steps (or stages) of RNN are required to simulate one step of a Turing Machine. In the first stage, entry neurons, initialized with 0, compute the combination of the state and the symbol under the head. Since this combination fully determines the next transition, we use it to update the state neurons and the temporary tape neurons during stage two. Temporary tape neurons serve as a buffer for tape neurons when shifting the tape to the left or right. In stage three, we move the values from the temporary tape neurons to tape neurons to complete the update. The detailed proof can be found in Appendix A. In other words, to simulate one step of a Turing Machine M, we can first encode the instantaneous description x by the encoder function ρ, apply the RNN three times T 3W,b, and decode the values back by ρ−1 to obtain PM(x), the instantaneous description after one step of the Turing Machine. Or equivalently, for any Turing Machine M, there exists an RNN such that every three steps of the RNN yield the same result as one step of the Turing Machine. By applying Theorem 1 repeatedly, we simulate a Turing Machine with an RNN in a linear time. To be specific, the partial input-output function of an RNN, denoted as T ∗W,b : QN → QN , is defined by applying T 3W,b repeatedly until q ∈ F (where q is the state that the RNN simulates), and is undefined if it is not possible to have q ∈ F by applying T 3W,b repeatedly. Based on this definition and Theorem 1, it follows that: Corollary 1.1. 
Given a Turing Machine M, there exists an injective function ρ : X → Qn and an n-neuron unbounded-precision RNN TW,b : Qn → Qn, where n = 2|Γ|+ ⌈log2 |Q|⌉+ |Q||Γ|+ 5, such that for all instantaneous descriptions x ∈ X , the following holds: If P∗M(x) is defined, then ρ−1(T ∗W,b(ρ(x))) = P∗M(x), (7) and if P∗M(x) is not defined, then T ∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T ∗W,b(ρ(x)) is computed in 3T steps by the RNN. Corollary 1.1 shares similarities with Theorem 1 in [1]. However, our theorem states that 3T , instead of 4T , is sufficient to simulate a Turing Machine. We also give the relationship between the number of neurons required by the RNN and the size of Q and Γ in the Turing Machine. A small UTM with 6 states and 4 symbols, denoted by U6,4, was proposed [19] and can simulate any Turing Machine in time O(T 6), where T is the number of steps required by the Turing Machine (the one to be simulated) to compute the result. As U6,4 is also a Turing Machine, we apply Corollary 1.1 to simulate U6,4, leading to a Turing-complete RNN. Plugging in |Q| = 6 and |Γ| = 4, we obtain the following result: Corollary 1.2. There exists a 40-neuron unbounded-precision RNN that can simulate any Turing Machine in O(T 6), where T is the number of steps required for the Turing Machine to compute the result. It should be noted that [1] focused on capabilities and proved a Turing-complete RNN with 1058 neurons; [20] proposed a Turing-complete RNN with 52 neurons. Here we provide a plug-and-play formula to simulate any Turing Machine. 4 Turing Completeness of Bounded-Precision RNNs with Growing Memory In the following, we consider how to remove the assumption of unbounded neural precision (Section 3) without reducing computational capacity. If we assume all neurons to have precision bounded by p in base 2|Γ|, then the tape can be encoded by ⌈|sj |/p⌉ neurons using fractal encoding, where j ∈ {L,R}. We do this by encoding every p symbols of the tape into a single neuron. Since most of these neurons do not require updates, just like symbols far from the read/write head in a Turing tape, we propose to store them in a separate growing memory module organized in a stack-like manner: Definition 1. A growing memory module is a stack of non-zero neurons with push and pop operations controlled by two neurons in an RNN, denoted as push neuron u(t) and pop neuron o(t), in the following way: for every step t after the RNN finished updating the values of neurons, (i) if u(t) > 0, then a new neuron with the value u(t) is pushed to the stack and u(t) is set to 0; (ii) if o(t) = 0 and the stack is not empty, then the top neuron is popped from the stack and o(t) is set to the value of the top neuron in the updated stack; (iii) if o(t) = 0 and the stack is empty, then o(t) is set to a default value c. An RNN with a growing memory module is a mapping TW,b : (QN ,Q∗) → (QN ,Q∗) where the first element of the tuple corresponds to values of neurons in the RNN and the second element of the tuple corresponds to the stack of the growing memory module. We can equip an RNN with multiple growing memory modules, with each module having its own push and pop neurons controlled by the RNN. An RNN with two growing memory modules will be defined by a mapping TW,b : (QN ,Q∗,Q∗) → (QN ,Q∗,Q∗). We can view the growing memory module as a way of dynamically pointing to a sequence of non-zero neurons appended by one zero neuron. 
o(t) can be viewed as the pointer to the last non-zero neuron in the sequence, and u(t) as the pointer to the zero neuron at the beginning of the sequence; see Figure 1. With two growing memory modules (one for the left tape and one for the right tape), we can construct an RNN with bounded-precision neurons that can simulate any Turing Machine. We first describe how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers and two stacks, with which an RNN and its growing memory modules can be initialized. In the following discussion, we assume that all neurons have precision bounded by p ≥ 2 in base 2|Γ|; that is, each neuron can encode at most p symbols. Both the state q and the top symbols sL,(1), sR,(1) are encoded with binary values as in Section 3. For the tape sj (j ∈ {L,R}), we define the fractal encoding ρ(s) : Γ∗ → Q by:

ρ(s)(y) := Σ_{i=1}^{|y|} y(i) / (2|Γ|)^i,   (8)

which is similar to (3) except that the infinite blank symbols are no longer encoded. Then, we encode the tape sj into a stack of neurons as follows. First, encode the rightmost p symbols (the ones farthest from the read/write head) with ρ(s) and push the result onto an empty stack, denoted Mj. Then encode the next rightmost p symbols and push the result onto Mj again, and repeat until at least one and at most p symbols remain in the tape. Denote this encoding function for the tape by ρ(M) : Γ∗ → Q∗. The remaining symbols in the tape, denoted sj,(1:h(|sj|)), where h(y) := ((y − 1) mod p) + 1, are also encoded with the fractal encoding ρ(s), but appear in neurons inside the RNN. The general idea is to let only the symbols closest to the read/write head (sj,(1:h(|sj|))) reside in the RNN. If the number of symbols residing in the RNN reaches 0 or p, then we pop from or push to the stack, respectively, to ensure that at least 1 and at most p symbols reside in the tape neurons. It is interesting to note that the kth neuron in the stack (from the top) requires at least kp steps of the Turing Machine before it may be updated, so the values of neurons near the bottom of the stack will not change for many steps. That is, the neurons in the stack, except for the top neurons, are passive.

Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5, 5, 3, 7) and p = 3. Then the number of symbols to remain in the RNN is 2 and they are encoded by ρ(s)(sL,(1:h(|sL|))) = 3/8 + 5/8². The remaining six symbols are stored in the stack: ρ(M)(sL) = [7/8 + 3/8² + 5/8³, 5/8 + 3/8² + 7/8³].

We also encode h(|sj|), the number of symbols of sj residing in the RNN (j ∈ {L,R}), in a dedicated neuron. In the above example, h(|sL|) = 2. These neurons let the RNN know when pushing or popping operations are required. We use the encoding ρ(h) : {1, 2, ..., p} → Q, defined by:

ρ(h)(y) = y / (p + 1).   (9)

Together, define the encoding function ρ : X → (Q^(2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+19), Q∗, Q∗) by:

ρ(q, sL, sR) = (ρ(q)(q) ⊕ ρ(s)(sL,(1:h(|sL|))) ⊕ ρ(s)(sR,(1:h(|sR|))) ⊕ ρ(s)(sL,(h(|sL|)+1:h(|sL|)+p)) ⊕ ρ(s)(sR,(h(|sR|)+1:h(|sR|)+p)) ⊕ ρ(r)(sL,(1)) ⊕ ρ(r)(sR,(1)) ⊕ ρ(h)(h(|sL|)) ⊕ ρ(h)(h(|sR|)) ⊕ 0, ρ(M)(sL), ρ(M)(sR)),   (10)

where 0 is a zero vector of size |Q||Γ| + 15. The first element of the tuple initializes the neurons in the RNN, while the second and third elements initialize the two growing memory stack modules. All encoded values have precision p. As in the previous section, ρ is injective, so we can define the decoder function ρ−1 : ρ(X ) → X .
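To make the bounded-precision encoding concrete, the following is a small Python sketch (ours, not part of the paper) of the fractal encoding (8), the chunking function h, and the stack encoding ρ(M). It uses exact rationals via Python's Fraction; the function names and the convention that the stack is listed bottom-to-top are our own choices, and the asserts reproduce the example above, with the paper's list for ρ(M)(sL) read top-of-stack first.

```python
from fractions import Fraction

def h(length, p):
    """Number of tape symbols kept inside the RNN: h(y) = ((y - 1) mod p) + 1."""
    return ((length - 1) % p) + 1

def rho_s(symbols, base):
    """Fractal encoding (8): sum_i y_(i) / base^i, with base = 2|Gamma|."""
    return sum(Fraction(y, base ** i) for i, y in enumerate(symbols, start=1))

def rho_M(tape, p, base):
    """Stack encoding rho^(M): push p-symbol chunks starting from the symbols
    farthest from the head, keeping the h(|tape|) symbols nearest the head out."""
    keep = h(len(tape), p)
    stack = []                         # listed bottom ... top
    start = len(tape) - p
    while start >= keep:
        stack.append(rho_s(tape[start:start + p], base))
        start -= p
    return stack

# Example from the text: Gamma = {1,3,5,7} (base 8), s_L = (3,5,7,3,5,5,3,7), p = 3.
sL, p, base = (3, 5, 7, 3, 5, 5, 3, 7), 3, 8
assert h(len(sL), p) == 2                                   # two symbols stay in the RNN
assert rho_s(sL[:2], base) == Fraction(3, 8) + Fraction(5, 64)
stack = rho_M(sL, p, base)
assert stack[-1] == Fraction(7, 8) + Fraction(3, 64) + Fraction(5, 512)   # top of stack
assert stack[0] == Fraction(5, 8) + Fraction(3, 64) + Fraction(7, 512)    # bottom of stack
```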
With the new encoding function ρ and growing memory modules, we can prove an alternative version of Theorem 1 that only requires bounded-precision neurons:

Theorem 2. Given a Turing Machine M, there exists an injective function ρ : X → (Qn, Q∗, Q∗) and an n-neuron p-precision (in base 2|Γ|) RNN with two growing memory modules TW,b : (Qn, Q∗, Q∗) → (Qn, Q∗, Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X ,

ρ−1(T 3W,b(ρ(x))) = PM(x). (11)

Proof. The detailed proof is in Appendix B. To illustrate the construction of the RNN, we describe here the parameters for the neuron initialized with ρ(h)(h(|sL|)), called the left guard neuron and denoted gL(t). We assume all neurons in the RNN are initialized with values from the encoder function ρ(x) at time t = 1. The guard neuron gL(t) encodes the number of left-tape symbols residing in the RNN. In three stages of the RNN, we need to update its value from gL(1) = h(|sL|)/(p + 1) to gL(4) = h(|s′L|)/(p + 1), where s′L is the left tape after one step of the Turing Machine. First, notice that h(|s′L|) can be expressed as:

h(|s′L|) =
  h(|sL|) − 1   if d = L and h(|sL|) ≥ 2,
  p             if d = L and h(|sL|) = 1,
  h(|sL|) + 1   if d = R and h(|sL|) ≤ p − 1,
  1             if d = R and h(|sL|) = p,   (12)

where d is the direction in which the Turing Machine’s head is moved.

Example. Assume d = L and the number of active symbols is h(|sL|) = 1; that is, the Turing Machine is moving left and there is only one symbol residing in the RNN. After the move, sL has one symbol less, so the corresponding neuron sL(t) encodes no symbols. As a result, the top neuron of the left growing memory module is popped out, and sL(t) assumes its value. This way the neuron sL(t) again encodes the active symbols (1 to p) of the left tape, and h(|sL|) is set to p. (Alternatively, one may prove it directly from the definition of h and the fact that |s′L| = |sL| − 1.) An analogous process holds when the Turing Machine is moving right.

We implement (12) with an RNN as follows. Define stage neurons as:

c1(t + 1) = σ(1 − c1(t) − c2(t)), (13)
c2(t + 1) = σ(c1(t)), (14)

with both neurons initialized to zero. Define c(t) := [c1(t), c2(t), c3(t)], where c3(t) := 1 − c1(t) − c2(t); then c(1) = [0, 0, 1], c(2) = [1, 0, 0], c(3) = [0, 1, 0]. These stage neurons signal which of the three stages the RNN is in. In the construction of the RNN, there exists a linear sum of neurons, denoted d(t) = [dL(t), dR(t)], such that if the Turing Machine is moving left, d(1) = [0, 0], d(2) = [1, 0], d(3) = [0, 0]; and if the Turing Machine is moving right, d(1) = [0, 0], d(2) = [0, 1], d(3) = [0, 0]; this signals the direction in which the Turing Machine is moving (formulas for d(t) appear in Appendix B). Then consider the following update rule for the left guard neurons:

gL(t + 1) = σ(gL(t) + (dR(t) − dL(t) − p·g′L(t) + p·g′′L(t))/(p + 1)), (15)
g′L(t + 1) = σ((p + 1)·gL(t) + dR(t) − p − 2c2(t) − 2c3(t)), (16)
g′′L(t + 1) = σ(2 − (p + 1)·gL(t) − dR(t) − 2c2(t) − 2c3(t)), (17)

where gL(1) = h(|sL|)/(p + 1) and g′L(1) = g′′L(1) = 0. It can be verified that gL(4) = h(|s′L|)/(p + 1) as defined by (12), completing the proof for gL(t).

Example. Assume d = L and h(|sL|) = 1. Then gL(1) = gL(2) = 1/(p + 1) and g′L(1) = g′′L(1) = g′L(2) = g′′L(2) = 0. In the second stage, gL(3) = σ(1/(p + 1) − 1/(p + 1)) = 0, g′L(3) = σ(1 − p) = 0, and g′′L(3) = σ(2 − 1) = 1. In the third stage, gL(4) = σ(0 + p/(p + 1)) = p/(p + 1), as required. The full proof appears in Appendix B.
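As an illustration (ours, not taken from the paper's appendix), here is a small Python sketch that runs the three stages of (15)-(17) for the left guard neuron and checks all four cases of (12). The stage signals c(t) and direction signals d(t) are hard-coded to the patterns described above rather than produced by other neurons, and the function names are our own.

```python
def sat(x):
    """Saturated-linear activation sigma from (2)."""
    return 0.0 if x < 0 else 1.0 if x > 1 else x

def new_symbol_count(h_sL, p, direction):
    """Run stages t = 1, 2, 3 of (15)-(17) for the left guard neuron and
    return (p + 1) * gL(4), i.e. the new in-RNN symbol count h(|s'_L|)."""
    d = {1: (0, 0), 2: ((1, 0) if direction == 'L' else (0, 1)), 3: (0, 0)}
    c = {1: (0, 0, 1), 2: (1, 0, 0), 3: (0, 1, 0)}           # c(t) = [c1, c2, c3]
    gL, g1, g2 = h_sL / (p + 1), 0.0, 0.0                    # values at t = 1
    for t in (1, 2, 3):
        dL, dR = d[t]
        _, c2, c3 = c[t]
        gL, g1, g2 = (                                        # simultaneous update
            sat(gL + (dR - dL - p * g1 + p * g2) / (p + 1)),  # (15)
            sat((p + 1) * gL + dR - p - 2 * c2 - 2 * c3),     # (16)
            sat(2 - (p + 1) * gL - dR - 2 * c2 - 2 * c3),     # (17)
        )
    return round(gL * (p + 1))

p = 3
assert new_symbol_count(2, p, 'L') == 1     # moving left, h >= 2: decrease by one
assert new_symbol_count(1, p, 'L') == p     # moving left, h = 1: wrap to p (pop)
assert new_symbol_count(2, p, 'R') == 3     # moving right, h <= p-1: increase by one
assert new_symbol_count(p, p, 'R') == 1     # moving right, h = p: wrap to 1 (push)
```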
Similar to Corollary 1.1, it follows that: Corollary 2.1. Given a Turing Machine M, there exists an injective function ρ : X → (Qn,Q∗,Q∗) and an n-neuron p-precision (in base |2Γ|) RNN with two growing memory modules TW,b : (Qn,Q∗,Q∗) → (Qn,Q∗,Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X , the following holds: If P∗M(x) is defined, then ρ−1(T ∗W,b(ρ(x))) = P∗M(x), (18) and if P∗M(x) is not defined, then T ∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T ∗W,b(ρ(x)) is computed in 3T steps by the RNN. Finally, applying Corollary 2.1 to U6,4, we obtain: Corollary 2.2. There exists a 54-neuron p-precision (in base 8) RNN with two growing memory modules that can simulate any Turing Machine in O(T 6), where T is the number of steps required for the Turing Machine to compute the result and p ≥ 2. The architecture of the Turing-complete 54-neuron RNN (fully described in the proof of Theorem 2) is depicted in Figure 2. 4.1 Relationship of the Growing Memory Modules with Stack-augmented RNNs The proposed growing memory module belongs to the generic class of stack-augmented RNNs, which refers to any RNNs augmented with a stack-like mechanism. Many different forms of stackaugmented RNNs have been proposed [11, 12, 13, 14, 15]. Given the simplicity of the design, the growing memory module represents the foundation form of these stack-augmented RNNs. It is easy to show that these stack-augmented RNNs can simulate the growing memory module in linear time. For example, the growing memory modules can be simulated by the neural stack [11] as follows. For pushing operation, set vt in the neural stack to u(t) in the growing memory module, and dt in the neural stack to 1{u(t) > 0} in the growing memory module. For popping operation, set ut in the neural stack to 1{o(t) = 1} in the growing memory module. Therefore, the proof for the Turing completeness of bounded-precision RNNs with growing memory modules can extend to other stack-augmented RNNs. That is, bounded-precision stack-augmented RNNs are also Turing-complete. Different from other stack-augmented RNNs, the proposed growing memory modules use a simple mechanism to control pushing and popping, as only the top neurons in the stack are included in the RNNs. This allows theories relating to growing memory modules to be easily extended to other forms of RNNs. 5 Bounded-Precision RNNs and Space-Bounded Turing Machines As discussed above, it is inefficient to update all neurons that encode tape information, but if one still wants to remove the growing memory modules and focuses purely on a bounded-precision RNN only, then the resulting network can simulate space-bounded Turing Machines (that is, Turing Machines with a bounded-size tape) only. Theorem 3. Given a Turing Machine M with a bounded tape of size F , there exists an injective function ρ : X → Qn and an n-neuron p-precision (in base |2Γ|) RNN TW,b : Qn → Qn, where n = O(⌈F/p⌉) and p ≥ 2, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (19) The proof, which is constructive, can be found in Appendix C. The general idea of the proof is to implement the growing memory module in Section 4 by an RNN as well and place all neurons inside the RNN. The theorem shows that the number of neurons required to simulate a space-bounded Turing Machine correlates with the tape size. 
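As a quick, self-contained check of the neuron counts quoted in Corollaries 1.2 and 2.2, the short Python snippet below (ours) evaluates the two formulas for n with the parameters of U6,4 (|Q| = 6, |Γ| = 4). Theorem 3's count is omitted since it is only stated up to a constant as O(⌈F/p⌉).

```python
from math import ceil, log2

def n_unbounded(num_states, num_symbols):
    """Theorem 1: n = 2|Gamma| + ceil(log2 |Q|) + |Q||Gamma| + 5."""
    return 2 * num_symbols + ceil(log2(num_states)) + num_states * num_symbols + 5

def n_bounded_with_memory(num_states, num_symbols):
    """Theorem 2: n = 2|Gamma| + ceil(log2 |Q|) + |Q||Gamma| + 19."""
    return 2 * num_symbols + ceil(log2(num_states)) + num_states * num_symbols + 19

assert n_unbounded(6, 4) == 40               # Corollary 1.2 (U6,4, unbounded precision)
assert n_bounded_with_memory(6, 4) == 54     # Corollary 2.2 (U6,4, bounded precision)
```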
To simulate a Turing Machine with an unbounded tape, we would need to add neurons to the RNN once the read/write head reaches the end of the memory. To be specific, we say an RNN has an unbounded number of neurons if the RNN either has an infinite number of neurons to start with or can increase the number of neurons after each update step depending on the neurons’ values. The UTM that simulates any Turing Machine in a fast way was described in [21], which does so with only O(T log T ) slowdown. While this UTM has multiple tapes, Theorem 3 can be generalized to multiple-tape Turing Machines easily. We now obtain: Corollary 3.1. There exists an unbounded-neuron bounded-precision RNN that can simulate any Turing Machine in O(T log T ), where T is the number of steps required for the Turing Machine to compute the result. 6 Discussion and Conclusion To construct a Turing-complete RNN, we have to incorporate some encoding for the unbounded number of symbols on the Turing tape. This encoding can be done by: (a) unbounded precision of some neurons (Theorem 1), (b) an unbounded number of neurons (Theorem 3), or (c) a separate growing memory module (Theorem 2). The main contribution of this paper is spelling out the details of (c), which provides a practical way to construct an RNN that runs any given algorithms. We prove the Turing completeness of a 40-neuron unbounded-precision RNN, which is the smallest Turing-complete RNN to date. We analyze the relationship between the number of neurons and the precision of an RNN when simulating a Turing Machine. Most importantly, we propose a 54-neuron bounded-precision RNN with growing memory modules that is Turing-complete, and this proof of Turing completeness can be extended to stack-augmented RNNs in general. This paper focuses on the computational capabilities and representability of symbolic and subsymbolic processing via stack-augmented RNNs; it does not engage yet with methods to train them. Since growing memory modules are not differentiable, we cannot train them directly by the frequently used error backpropagation algorithm. One may want to construct a differentiable version of the modules, or alternatively use a different learning rule (e.g., REINFORCE [22]) to deal with the discrete pushing and popping operations. Understanding methods to train growing memory modules, incorporate symbolic information into the sub-symbolic representation efficiently, and retrieve both symbolic and non-symbolic information are the next steps towards the goal of combining symbolic and sub-symbolic capabilities in an adaptive and applicable manner.
1. What are the main contributions of the paper in terms of theoretical results?
2. How does the reviewer assess the practical significance of the paper's findings?
3. What are the limitations of the proposed approach regarding the use of unbounded-precision neurons and bounded-precision neurons with two stacks?
4. How does the reviewer compare the paper's results with prior works on Neural Turing Machines and their practical applications?
Summary Of The Paper Review
Summary Of The Paper
The paper presents 3 main theoretical results:
- An RNN with 40 unbounded-precision neurons is Turing-complete.
- An RNN with 54 bounded-precision neurons and two stacks is Turing-complete.
- An RNN with a finite number of bounded-precision neurons and no stacks can simulate a Turing machine with a bounded tape, where the maximum tape length is related to the number of RNN neurons.

Review
The paper has a limited practical significance.
- About unbounded-precision neurons: Unbounded-precision is not practical.
- About the RNN with bounded-precision neurons and two stacks: It is well known that a finite-state machine with two stacks is Turing-complete. And an RNN with bounded-precision is a finite-state machine. The proposed stacks are not differentiable.
- The original Neural Turing Machine was already Turing-complete, if using an unbounded tape. The location-based addressing allows to shift the tape. By shifting a one-hot attention pattern, only the attended part of the tape needs to be accessed. This was practically used by Spatial Transformer Networks (NeurIPS 2015) and later works.
NIPS
Title Turing Completeness of Bounded-Precision Recurrent Neural Networks Abstract Previous works have proved that recurrent neural networks (RNNs) are Turingcomplete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine’s time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and boundedprecision RNNs, revisiting and extending the theoretical foundations of RNNs. 1 Introduction Symbolic (such as Turing Machines) and sub-symbolic processing (such as adaptive neural networks) are two competing methods of representing and processing information, each with its own advantages. An ultimate way to combine symbolic and sub-symbolic capabilities is by enabling the running of algorithms on a neural substrate, which means a neural network that can simulate a Universal Turing Machine (UTM). Previous works [1, 2, 3] have shown that this is possible – there exists a recurrent neural network (RNN) that can simulate a UTM. These proofs assumed a couple of neurons with unbounded precision that equals the number of symbols used in the Turing tape. Here we provide an alternative simulation of a UTM by RNNs with bounded-precision neurons only. The general idea works as follows. The Turing Machine’s tape is stored in a growing memory module, which is a stack of neurons with pushing and popping operations controlled by neurons in the RNN. The size of the growing memory module is determined by the usage of the Turing tape - it dynamically recruits new neurons when more memories are needed and releases them when memories become irrelevant. The neurons in the stack, except for the top neuron, are not regularly updated (and hence can be referred to as passive), saving computational cost for memories that are not in the focus of the computing and do not require change. Using growing memory modules, a 54-neuron bounded-precision RNN is constructed that can simulate any Turing Machine. Our proposed growing memory modules are inspired by biological memory systems. The process of dynamically recruiting new neurons when more memories are necessary is also observed in biological memory systems. Neurogenesis is the process by which new neurons are produced in the central nervous system; it is most active during early development, but continues through life. *Both authors contributed equally. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). In adult vertebrates, neurogenesis is known to occur in the dentate gyrus (DG) of the hippocampal formation [4] and the subventricular zone (SVZ) of the lateral ventricles [5]. Since DG is well-known in neuroscience for its role in pattern separation for memory encoding [6, 7], this suggests that biological memory systems also dynamically recruit new neurons. The rate of neurogenesis in adult mice has been shown to be higher if they are exposed to a wider variety of experiences [8]. 
This further suggests a role for self-regulated neurogenesis in scaling up the number of new memory that can be encoded and stored during one’s lifetime without catastrophic forgetting of previously consolidated memory. Besides the mechanism of recruiting new neurons, the process of storing neurons in growing memory modules also shares some similarities with biological memory consolidation, a process by which short-term memory is transformed into long-term memory [9, 10]. Compared to short-term memory, long-term memory is more long-lasting and robust to interference. This is similar to the neurons stored in growing memory modules - the values of these neurons (except the top neuron in the stack) remain unchanged and cannot be interfered by the RNN, providing a mechanism to store information stably. Growing memory modules share similarities with other stack-augmented RNNs [11, 12, 13, 14, 15]. In neural stacks [11], the RNN outputs two continuous scalars that control the strength of pushing and popping operations, and the stack is made differentiable by adding a strength vector. In stack RNNs [12] and DiffStk-RNN [15], the RNN outputs the probability vector corresponding to pushing, popping, and no operation. In NNPDA [13], the RNN outputs a continuous scalar that controls the pushing and popping operations, with the minimum value corresponding to popping and the maximum value corresponding to pushing. NSPDA [14] uses a discrete-valued action neurons to control pushing and popping operations. In contrast to these models, growing memory modules have a simple design and do not need to be differentiable. However, it can be easily shown that growing memory modules can be simulated by these stack-augmented RNNs in linear time, and thus growing memory modules can be considered a generic type of stack-augmented RNNs. Therefore, our proof on the Turing completeness of an RNN with growing memory modules can be extended to stack-augmented RNNs in general, establishing their theoretical motivation. A Turing-complete RNN that is fully differentiable was introduced in 1996 [16]; this feature is a prerequisite to have the network trainable by gradient descent. It was followed by the Neural Turing Machine (NTM) [17] and its improved version, the differentiable neural computer [18], which are both differentiable and trainable RNNs equipped with memory banks. Though inspired by Turing Machines, bounded-precision NTMs and differentiable neural computers are not Turing-complete due to the fixed-sized memory bank, but they can simulate space-bounded Turing Machines (see Section 5). Simulation of a Turing Machine by an RNN with growing memory modules represents a practical and biologically inspired way to combine symbolic and sub-symbolic capabilities. All neurons in the RNN and growing memory modules have fixed precision. While the size of growing memory modules is linear in the number of symbols used in the Turing tape, the number of neurons in the RNN is still constant. Moreover, the neurons in growing memory modules (except the top neuron in the stack) are passive at most times. As a result, the time complexity of the simulation is linear in the simulated machine’s time and independent of the memory size. By showing how to simulate a Turing Machine with a bounded-precision RNN and thereby constructing a bounded-precision RNN that can run any algorithms, our paper proposes a practical method that combines symbolic and sub-symbolic capabilities. The remainder of the paper is structured as follows. 
Section 2 describes the preliminary of the paper, including the definition of Turing Machines and RNNs. Section 3 revisits and extends theories relating to simulating a Turing Machine with unbounded-precision RNNs and shows the existence of a 40-neuron unbounded-precision RNN that is Turing-complete. Section 4 presents the growing memory modules and proves the existence of a 54-neuron bounded-precision RNN with two growing memory modules that is Turing-complete. Section 5 relates the number of neurons and the precision of RNNs when simulating Turing Machines. Section 6 concludes the paper. 2 Background and Notation A Turing Machine is a 7-tuple M = (Q,Σ,Γ, δ, q0, ♯, F ), where Q is a finite set of states, Σ is a finite set of input symbols, Γ is a finite set of tape symbols (note that Σ ⊂ Γ), δ : Q×Γ → Q×Γ×{L,R} is the machine’s transition rule, q0 ∈ Q is the initial starting state, ♯ is the blank symbol (note that ♯ ∈ Γ but ♯ /∈ Σ), and F ⊂ Q is the set of final state. We only consider deterministic Turing Machines in this paper. The instantaneous description (or the configuration) of a Turing Machine is typically defined as a tuple of state, tape, and the location of the read/write head. However, we use a slightly different definition in the paper. We define the left-tape symbols (or the left tape in short), denoted as sL, to be the string of symbols starting at the symbol under the read/write head and extending to the left. We define the right-tape symbols (or the right tape in short), denoted as sR, to be the string of symbols starting at the symbol to the right of the read/write head and extending to the right. The first symbol in both strings is the closest to the read/write head in both sL and sR (that is, the left tape is reversed in the representation) and the blank symbols at the two ends are omitted. Therefore, the length of sL and sR combined equals to the number of symbols used in the Turing tape in each step (that is, unbounded but not infinite). Since sL and sR encode both the tape and the location of the read/write head, we define the instantaneous description of a Turing Machine as a 3-tuple (q, sL, sR) ∈ (Q,Γ∗,Γ∗), where q denotes the state, sL denotes the left-tape symbols, and sR denotes the right-tape symbols. Though the two definitions are equivalent, this definition allows easy encoding of tape symbols into neurons’ values. The set of all possible instantaneous description is denoted as X := (Q,Γ∗,Γ∗). In each step of a Turing Machine, the symbol under the read/write head is read and, together with the state, determines the symbol to be written under the read/write head, the direction of moving the tape, and the next state. To be precise, the complete dynamic map of M, denoted as PM : X → X , is defined as follows: 1. Let x = (q, sL, sR) be the input configuration, sL,(1) denote the first symbol in sL and sR,(1) denote the first symbol in sR. The transition is defined by the 3-tuple (q′, y, d) = δ(q, sL,(1)); 2. Replace the state of the machine q with q′ and the first symbol in sL by y; 3. Move the symbol in sL,(1) to become the new sR,(1) if d = L, and move sR,(1) to become the new sL,(1) if d = R (if there are no symbols left in sL or sR for moving, append a blank symbol ♯ to it before moving). Denote the left-tape symbols and the right-tape symbols after 2. and 3. by s′L and s ′ R respectively. Then PM(x) = (q′, s′L, s′R) represents one transition of the Turing Machine M from one configuration to the next. 
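To make the map PM concrete, here is a minimal Python sketch (ours, not the paper's) of one transition applied to an instantaneous description (q, sL, sR). The transition rule delta is represented as a dictionary from (state, symbol) to (new state, written symbol, direction); for simplicity the sketch does not trim blank symbols that may accumulate at the far ends of sL and sR, which the paper's convention omits.

```python
def tm_step(delta, q, sL, sR, blank='#'):
    """One application of the complete dynamic map P_M on (q, sL, sR).
    sL and sR are lists whose first entry is the symbol closest to the head."""
    sL, sR = list(sL), list(sR)
    if not sL:                                   # head sits on an omitted blank
        sL = [blank]
    # Step 1: read the symbol under the head and look up the transition.
    q_next, y, d = delta[(q, sL[0])]
    # Step 2: write y under the head.
    sL[0] = y
    # Step 3: move one symbol across the head position.
    if d == 'L':
        sR.insert(0, sL.pop(0))
    else:                                        # d == 'R'
        if not sR:                               # append a blank before moving
            sR.append(blank)
        sL.insert(0, sR.pop(0))
    return q_next, sL, sR

# Tiny demo: in state q0, overwrite the current symbol with 'a' and move right.
delta = {('q0', '#'): ('q0', 'a', 'R')}
print(tm_step(delta, 'q0', ['#'], ['#', 'b']))   # ('q0', ['#', 'a'], ['b'])
```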
The partial input-output function of M, denoted as P∗M : X → X , is defined by applying PM repeatedly until q ∈ F , and is undefined if it is not possible to have q ∈ F by applying PM repeatedly. A recurrent neural network (RNN) is a neural network consisting of n neurons. The value of neuron i at time t ∈ {1, 2, ...}, denoted as xi(t) ∈ Q (Q is the set of rational numbers), is computed by an affine transformation of the values of neurons in the previous state followed by an activation function σ, i.e. xi(t) = σ( ∑n j=1 wijxj(t− 1) + bi), where wij are the weights and bi is the bias; or in vector form: x(t) = σ(Wx(t− 1) + b), (1) where x(t) ∈ Qn, W ∈ Rn×n and b ∈ Rn. This defines a mapping TW,b : Qn → Qn which characterizes an RNN. For simplicity, we consider the saturated-linear function in this paper; that is: σ(x) := 0 if x < 0, x if 0 ≤ x ≤ 1, 1 if x > 1. (2) Thus, x(t) ∈ (Q ∩ [0, 1])n for all t > 0. We say that a neuron xi(t) has precision p in base b if for all t > 0, xi(t) can be expressed as∑p l=1 a(l)(t)∏l j=1 c(j)(t) for some strings a(t) ∈ {0, 1, ..., b}p and c(t) ∈ {1, ..., b}p. For a string a, we use a(i) to denote the ith symbol in a and a(i:j) to denote the string a(i)a(i+1)...a(j). For a function f that maps from a set Y to a subset of Y, we use fn to denote the nth iterate of f where 1 ≤ n < ∞. For any two vectors x ∈ Rm,y ∈ Rn, we use x ⊕ y ∈ Rm+n to denote the concatenation of the two vectors. We use A∗ to denote all possible strings formed by elements from set A. The notation is summarized in the table found in Appendix D. 3 Turing Completeness of Unbounded-Precision RNNs To simulate a Turing Machine M by an RNN, we first consider how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers. For the state q ∈ Q, we encode it with ⌈log2 |Q|⌉ binary values, denoted as ρ(q) : Q → {0, 1}⌈log2 |Q|⌉, with each possible combination of binary values representing a specific state. Example. Let Q = {1, 2, 3, 4, 5, 6}. We can encode it by ⌈log2 |Q|⌉ = 3 binary values, with ρ(q)(1) = [0, 0, 0], ρ(q)(2) = [0, 0, 1], ρ(q)(3) = [0, 1, 0], ρ(q)(4) = [0, 1, 1], ρ(q)(5) = [1, 0, 0], and ρ(q)(6) = [1, 0, 1]. For the left tape sL ∈ Γ∗ and the right tape sR ∈ Γ∗, we use fractal encoding in two rational numbers, as recommended in [1, 2]. The fractal encoding, bearing similarity to Cantor sets, enables fast manipulation of the top symbols. Without loss of generality, assume that the tape symbols Γ are encoded into numbers {1, 3, 5, ..., 2|Γ|− 1} and that the blank symbol ♯ is encoded by 1. Then, define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := |y|∑ i=1 y(i) (2|Γ|)i + 1 (2|Γ|)|y| · (2|Γ| − 1) . (3) Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5), and ♯ = 1. Then ρ(s)(sL) = 38 + 5 82 + 7 83 + 3 84 + 5 85 + 1 86 + 1 87 + 1 88 + ... = 3 8 + 5 82 + 7 83 + 3 84 + 5 85 + 1 85·7 . The last term in (3) represents the infinite blank symbols of a tape. This encoding requires the tape symbol neurons to have the same precision as the size of the active (non-blank) part of the tape in base 2|Γ|. As the tape in a Turing Machine has an unbounded size, it means that we require neurons with unbounded precision. This is different from infinite precision but still not applicable. Hence, we will discuss how to remove this unbounded-precision assumption in Section 4. Finally, we encode the value of the top symbol in each tape neuron, sL,(1) and sR,(1), by a binary tuple ρ(r) : Γ → {0, 1}|Γ|−1: ρ (r) i (y) := 1{y > 2i} (1 ≤ i ≤ |Γ| − 1). 
(4) That is, the i coordinate of ρ(r)(y) is 1 if and only if the symbol y has a value larger than 2i, and is 0 otherwise. Example. Let Γ = {1, 3, 5, 7}. Then ρ(r)(1) = [0, 0, 0], ρ(r)(3) = [1, 0, 0], ρ(r)(5) = [1, 1, 0], and ρ(r)(7) = [1, 1, 1]. Combining the above discussion, we define the encoding function of configurations ρ : X → Q2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+5 by: ρ(q, sL, sR) = ρ (q)(q)⊕ ρ(s)(sL)⊕ ρ(s)(sR)⊕ ρ(r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ 0, (5) where 0 is a zero vector of size |Q||Γ|+ 5. Let ρ−1 : ρ(X ) → X be the inverse function such that ρ−1(ρ(x)) = x for all x ∈ X (note that ρ is injective, i.e. ρ(x) = ρ(x′) implies x = x′); we call ρ−1 the decoder function. It should be noted that ρ(q)(q) ⊕ ρ(s)(sL) ⊕ ρ(s)(sR) is sufficient to decode the instantaneous description. We include ρ(r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ 0 only because it facilitates the full simulation on the RNN. This completes the construction of the encoding function. Given an instantaneous description x ∈ X , we initialize the neurons in an RNN with values ρ(x). Then, it is possible to construct the parameters of RNN such that the update given by the RNN on these neurons is the same as the update given by the Turing Machine: Theorem 1. Given a Turing Machine M, there exists an injective function ρ : X → QN and an n-neuron unbounded-precision RNN TW,b : Qn → Qn, where n = 2|Γ|+ ⌈log2 |Q|⌉+ |Q||Γ|+ 5, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (6) Proof. A sketch of the proof is as follows. Neurons in the RNNs are grouped by their function. Tape neurons, initialized with ρ(s)(sL) and ρ(s)(sR), encode the tape in fractal encoding. Readout neurons, initialized with ρ(r)(sL,(1)) and ρ(r)(sR,(1)), encode the first symbol in the left and the right tape. State neurons, initialized with ρ(q)(q), encode the state. We need to update the values of these neurons to simulate one step of the Turing Machine. Three steps (or stages) of RNN are required to simulate one step of a Turing Machine. In the first stage, entry neurons, initialized with 0, compute the combination of the state and the symbol under the head. Since this combination fully determines the next transition, we use it to update the state neurons and the temporary tape neurons during stage two. Temporary tape neurons serve as a buffer for tape neurons when shifting the tape to the left or right. In stage three, we move the values from the temporary tape neurons to tape neurons to complete the update. The detailed proof can be found in Appendix A. In other words, to simulate one step of a Turing Machine M, we can first encode the instantaneous description x by the encoder function ρ, apply the RNN three times T 3W,b, and decode the values back by ρ−1 to obtain PM(x), the instantaneous description after one step of the Turing Machine. Or equivalently, for any Turing Machine M, there exists an RNN such that every three steps of the RNN yield the same result as one step of the Turing Machine. By applying Theorem 1 repeatedly, we simulate a Turing Machine with an RNN in a linear time. To be specific, the partial input-output function of an RNN, denoted as T ∗W,b : QN → QN , is defined by applying T 3W,b repeatedly until q ∈ F (where q is the state that the RNN simulates), and is undefined if it is not possible to have q ∈ F by applying T 3W,b repeatedly. Based on this definition and Theorem 1, it follows that: Corollary 1.1. 
Given a Turing Machine M, there exists an injective function ρ : X → Qn and an n-neuron unbounded-precision RNN TW,b : Qn → Qn, where n = 2|Γ|+ ⌈log2 |Q|⌉+ |Q||Γ|+ 5, such that for all instantaneous descriptions x ∈ X , the following holds: If P∗M(x) is defined, then ρ−1(T ∗W,b(ρ(x))) = P∗M(x), (7) and if P∗M(x) is not defined, then T ∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T ∗W,b(ρ(x)) is computed in 3T steps by the RNN. Corollary 1.1 shares similarities with Theorem 1 in [1]. However, our theorem states that 3T , instead of 4T , is sufficient to simulate a Turing Machine. We also give the relationship between the number of neurons required by the RNN and the size of Q and Γ in the Turing Machine. A small UTM with 6 states and 4 symbols, denoted by U6,4, was proposed [19] and can simulate any Turing Machine in time O(T 6), where T is the number of steps required by the Turing Machine (the one to be simulated) to compute the result. As U6,4 is also a Turing Machine, we apply Corollary 1.1 to simulate U6,4, leading to a Turing-complete RNN. Plugging in |Q| = 6 and |Γ| = 4, we obtain the following result: Corollary 1.2. There exists a 40-neuron unbounded-precision RNN that can simulate any Turing Machine in O(T 6), where T is the number of steps required for the Turing Machine to compute the result. It should be noted that [1] focused on capabilities and proved a Turing-complete RNN with 1058 neurons; [20] proposed a Turing-complete RNN with 52 neurons. Here we provide a plug-and-play formula to simulate any Turing Machine. 4 Turing Completeness of Bounded-Precision RNNs with Growing Memory In the following, we consider how to remove the assumption of unbounded neural precision (Section 3) without reducing computational capacity. If we assume all neurons to have precision bounded by p in base 2|Γ|, then the tape can be encoded by ⌈|sj |/p⌉ neurons using fractal encoding, where j ∈ {L,R}. We do this by encoding every p symbols of the tape into a single neuron. Since most of these neurons do not require updates, just like symbols far from the read/write head in a Turing tape, we propose to store them in a separate growing memory module organized in a stack-like manner: Definition 1. A growing memory module is a stack of non-zero neurons with push and pop operations controlled by two neurons in an RNN, denoted as push neuron u(t) and pop neuron o(t), in the following way: for every step t after the RNN finished updating the values of neurons, (i) if u(t) > 0, then a new neuron with the value u(t) is pushed to the stack and u(t) is set to 0; (ii) if o(t) = 0 and the stack is not empty, then the top neuron is popped from the stack and o(t) is set to the value of the top neuron in the updated stack; (iii) if o(t) = 0 and the stack is empty, then o(t) is set to a default value c. An RNN with a growing memory module is a mapping TW,b : (QN ,Q∗) → (QN ,Q∗) where the first element of the tuple corresponds to values of neurons in the RNN and the second element of the tuple corresponds to the stack of the growing memory module. We can equip an RNN with multiple growing memory modules, with each module having its own push and pop neurons controlled by the RNN. An RNN with two growing memory modules will be defined by a mapping TW,b : (QN ,Q∗,Q∗) → (QN ,Q∗,Q∗). We can view the growing memory module as a way of dynamically pointing to a sequence of non-zero neurons appended by one zero neuron. 
o(t) can be viewed as the pointer for the last non-zero neuron in the sequence, and u(t) can be viewed as the pointer for the zero neuron at the beginning of the sequence; see Figure 1. With two growing memory modules (one for the left tape and one for the right tape), we can construct an RNN with bounded-precision neurons that can simulate any Turing Machine. We first describe how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers and two stacks, with which an RNN and its growing memory modules can be initialized. In the following discussion, we assume that all neurons have precision bounded by p ≥ 2 in base 2|Γ|; that is, each neuron can encode p symbols at most. Both the state q and the top symbols sL,(1), sR,(1) are encoded with binary values as in Section 3. For the tape sj (j ∈ {L,R}), we define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := |y|∑ i=1 y(i) (2|Γ|)i , (8) which is similar to (3) except for the encoding of infinite blank symbols. Then, we encode the tape sj into a stack of neurons as follows: First, encode the rightmost p symbols (the ones farthest from the read/write head) with ρt and push it to an empty stack, denoted as Mj . Then, encode the next rightmost p symbols and push it to Mj again, and repeat until at least one and at most p symbols remain in the tape. Denote this encoding function for the tape as ρ(M) : Γ∗ → Q∗. The remaining symbols in the tape, denoted as sj,(1:h(|sj |)), where h(y) := ((y − 1) mod p) + 1, will be encoded with fractal encoding ρ(s) as well, but would appear in neurons inside the RNN. The general idea is to let only the symbols closest to the read/write head (sj,(1:h(|sj |))) reside in the RNN. If the number of symbols residing in the RNN reaches 0 or p, then we pop from or push to the stack respectively to ensure that at least 1 and at most p symbols reside in the tape neurons. It is interesting to note that the kth neuron in the stack (from the top) requires at least kp steps of the Turing Machine before it may be updated, so the values of neurons near the bottom of the stack will not be changed for many steps. That is, the neurons in the stack, except for the top neurons, are passive. Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5, 5, 3, 7) and p = 3. Then the number of symbols to remain in the RNN is 2 and they are encoded by ρ(s)(sL,(1:h(|sL|))) = 3 8 + 5 82 . The remaining six symbols are stored in the stack: ρ(M)(sL) = [ 78 + 3 82 + 5 83 , 5 8 + 3 82 + 7 83 ]. We encode in neuron h(|sj |), the number of symbols in sj (j ∈ {L,R}). In the above example, h(|sL|) = 2. These neurons help the RNN to know when pushing or popping operations are required. We use the encoding ρ(h) : {1, 2, ..., p} → Q, defined by: ρ(h)(y) = y p+ 1 . (9) Together, define the encoding function ρ : X → (Q2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+19,Q∗,Q∗) by: ρ(q, sL, sR) =(ρ (q)(q)⊕ ρ(s)(sL,(1:h(|sL|)))⊕ ρ (s)(sR,(1:h(|sR|)))⊕ ρ (s)(sL,(h(|sL|)+1:h(|sL|)+p))⊕ ρ(s)(sR,(h(|sR|)+1:h(|sR|)+p))⊕ ρ (r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ ρ(h)(h(|sL|))⊕ ρ(h)(h(|sR|))⊕ 0, ρ(M)(sL), ρ(M)(sR)), (10) where 0 is a zero vector of size |Q||Γ| + 15. The first element of the tuple is for initializing the neurons in the RNN, while the second and third element of the tuple is for initializing the two growing memory stack modules. All encoded values have precision of p. Similar to the previous section, ρ is injective and so we can define the decoder function ρ−1 : ρ(X ) → X . 
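For concreteness, here is a minimal Python sketch of the push/pop behaviour in Definition 1 above (ours, not the paper's). Values are kept as exact rationals; one detail the definition leaves open, namely the value o(t) takes when a pop empties the stack, is resolved here by falling back to the default value c.

```python
from fractions import Fraction

class GrowingMemoryModule:
    """Stack of non-zero neurons driven by a push neuron u(t) and a pop
    neuron o(t) that live inside the RNN (Definition 1)."""

    def __init__(self, default=Fraction(0)):
        self.stack = []          # bottom ... top
        self.default = default   # the default value c

    def step(self, u, o):
        """Apply rules (i)-(iii) once, after the RNN has updated its neurons;
        returns the new values of the push and pop neurons (u, o)."""
        if u > 0:                          # (i) push u(t), then reset it to 0
            self.stack.append(u)
            u = Fraction(0)
        if o == 0:
            if self.stack:                 # (ii) pop; o(t) takes the new top value
                self.stack.pop()
                o = self.stack[-1] if self.stack else self.default  # fallback is our assumption
            else:                          # (iii) empty stack: o(t) <- c
                o = self.default
        return u, o

# A full update of an RNN with one module would first apply the RNN step and
# then let the module read and rewrite its push/pop neurons, as sketched here.
module = GrowingMemoryModule()
u, o = module.step(Fraction(5, 8), Fraction(1, 2))   # pushes 5/8, leaves o unchanged
assert module.stack == [Fraction(5, 8)] and u == 0
```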
With the new encoding function ρ and growing memory modules, we can prove an alternative version to Theorem 1 that only requires bounded precision neurons: Theorem 2. Given a Turing Machine M, there exists an injective function ρ : X → (Qn,Q∗,Q∗) and an n-neuron p-precision (in base 2|Γ|) RNN with two growing memory modules TW,b : (Qn,Q∗,Q∗) → (Qn,Q∗,Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (11) Proof. The detailed proof is in Appendix B. To illustrate the construction of the RNN, the parameters for the neuron initialized with ρ(h)(h(|sL|)), called the left guard neuron and denoted as gL(t), will be described here. We assume all neurons in the RNN are initialized with values from the encoder function ρ(x) at time t = 1. The guard neuron gL(t) encodes the number of left-tape symbols residing in the RNN. In three stages of an RNN, we need to update its value from gL(1) = h(|sL|)/(p + 1) to gL(4) = h(|s′L|)/(p+1), where s′L is the left tape after one step of the Turing Machine. First, notice that h(|s′L|) can be expressed as: h(|s′L|) = h(|sL|)− 1 if d = L and h(|sL|) ≥ 2, p if d = L and h(|sL|) = 1, h(|sL|) + 1 if d = R and h(|sL|) ≤ p− 1, 1 if d = R and h(|sL|) = p, (12) where d is the direction that the Turing Machine’s head is moved. Example. Assume d = L and the size of the active symbols is h(|sL|) = 1, which means that the Turing Machine is moving left and there is only one symbol residing in the RNN. Since after the move, sL will have one symbol less and so the corresponding neuron sL(t) will encode no symbols. As a result, the top neuron of the left growing memory module would be popped out, and sL(t) would assume its value. This way the neuron sL(t) again encodes the active symbols (1 to p) of the left tape series, and h(|sL|) is set to p. (Alternatively, one may prove it directly by the definition of h and the fact that |s′L| = |sL| − 1.) An analogous process holds when the Turing Machine is moving right. We implement (12) with an RNN as follows. Define stage neurons as: c1(t+ 1) = σ(1− c1(t)− c2(t)), (13) c2(t+ 1) = σ(c1(t)), (14) with both neurons initialized to be zero. Define c(t) := [c1(t), c2(t), c3(t)] where c3(t) := 1 − c1(t) − c2(t), then c(1) = [0, 0, 1]; c(2) = [1, 0, 0]; c(3) = [0, 1, 0]. These stage neurons signal which one of the three stages that the RNN is in. In the construction of the RNN, there exists a linear sum of neurons, denoted as d(t) = [dL(t), dR(t)], such that if the Turing Machine is moving left, d(1) = [0, 0]; d(2) = [1, 0]; d(3) = [0, 0]; and if the Turing Machine is moving right, d(1) = [0, 0]; d(2) = [0, 1]; d(3) = [0, 0]; this signals which direction the Turing Machine is moving to (formulas of d(t) appear in Appendix B). Then consider the following update rule for the left guard neurons: gL(t+ 1) = σ(gL(t) + (dR(t)− dL(t)− pg′L(t) + pg′′L(t))/(p+ 1)), (15) g′L(t+ 1) = σ((p+ 1)gL(t) + dR(t)− p− 2c2(t)− 2c3(t)), (16) g′′L(t+ 1) = σ(2− (p+ 1)gL(t)− dR(t)− 2c2(t)− 2c3(t)), (17) where gL(1) = h(|sL|)/(p+1), g′L(1) = g′′L(1) = 0. It can be verified that gL(4) = h(|t′L|)/(p+1) as defined by (12), completing the proof for gL(t). Example. Assume d = L and h(|sL|) = 1. Then gL(1) = gL(2) = 1/(p+ 1) and g′L(1) = g′′L(1) = g′L(2) = g ′′ L(2) = 0. On the second stage, gL(3) = σ(1/(p + 1) − 1/(p + 1)) = 0, g′L(3) = σ(1−p) = 0, and g′′L(3) = σ(2−1) = 1. On the third stage, gL(4) = σ(0+p/(p+1)) = p/(p+1) as required. The full proof appears in Appendix B. 
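As a small check of the stage neurons (13)-(14) used in the proof above, the snippet below (a Python sketch; variable names are ours) iterates the two recurrences and confirms that c(t) cycles through [0, 0, 1], [1, 0, 0], [0, 1, 0] with period three.

```python
def sat(x):
    """Saturated-linear activation sigma from (2)."""
    return 0 if x < 0 else 1 if x > 1 else x

c1, c2 = 0, 0                                # both stage neurons start at zero
cycle = []
for t in range(1, 7):
    cycle.append((c1, c2, 1 - c1 - c2))      # c(t) = [c1, c2, c3]
    c1, c2 = sat(1 - c1 - c2), sat(c1)       # (13) and (14), updated simultaneously
assert cycle[:3] == [(0, 0, 1), (1, 0, 0), (0, 1, 0)]
assert cycle[3:] == cycle[:3]                # the pattern repeats with period three
```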
Similar to Corollary 1.1, it follows that: Corollary 2.1. Given a Turing Machine M, there exists an injective function ρ : X → (Qn,Q∗,Q∗) and an n-neuron p-precision (in base |2Γ|) RNN with two growing memory modules TW,b : (Qn,Q∗,Q∗) → (Qn,Q∗,Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X , the following holds: If P∗M(x) is defined, then ρ−1(T ∗W,b(ρ(x))) = P∗M(x), (18) and if P∗M(x) is not defined, then T ∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T ∗W,b(ρ(x)) is computed in 3T steps by the RNN. Finally, applying Corollary 2.1 to U6,4, we obtain: Corollary 2.2. There exists a 54-neuron p-precision (in base 8) RNN with two growing memory modules that can simulate any Turing Machine in O(T 6), where T is the number of steps required for the Turing Machine to compute the result and p ≥ 2. The architecture of the Turing-complete 54-neuron RNN (fully described in the proof of Theorem 2) is depicted in Figure 2. 4.1 Relationship of the Growing Memory Modules with Stack-augmented RNNs The proposed growing memory module belongs to the generic class of stack-augmented RNNs, which refers to any RNNs augmented with a stack-like mechanism. Many different forms of stackaugmented RNNs have been proposed [11, 12, 13, 14, 15]. Given the simplicity of the design, the growing memory module represents the foundation form of these stack-augmented RNNs. It is easy to show that these stack-augmented RNNs can simulate the growing memory module in linear time. For example, the growing memory modules can be simulated by the neural stack [11] as follows. For pushing operation, set vt in the neural stack to u(t) in the growing memory module, and dt in the neural stack to 1{u(t) > 0} in the growing memory module. For popping operation, set ut in the neural stack to 1{o(t) = 1} in the growing memory module. Therefore, the proof for the Turing completeness of bounded-precision RNNs with growing memory modules can extend to other stack-augmented RNNs. That is, bounded-precision stack-augmented RNNs are also Turing-complete. Different from other stack-augmented RNNs, the proposed growing memory modules use a simple mechanism to control pushing and popping, as only the top neurons in the stack are included in the RNNs. This allows theories relating to growing memory modules to be easily extended to other forms of RNNs. 5 Bounded-Precision RNNs and Space-Bounded Turing Machines As discussed above, it is inefficient to update all neurons that encode tape information, but if one still wants to remove the growing memory modules and focuses purely on a bounded-precision RNN only, then the resulting network can simulate space-bounded Turing Machines (that is, Turing Machines with a bounded-size tape) only. Theorem 3. Given a Turing Machine M with a bounded tape of size F , there exists an injective function ρ : X → Qn and an n-neuron p-precision (in base |2Γ|) RNN TW,b : Qn → Qn, where n = O(⌈F/p⌉) and p ≥ 2, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (19) The proof, which is constructive, can be found in Appendix C. The general idea of the proof is to implement the growing memory module in Section 4 by an RNN as well and place all neurons inside the RNN. The theorem shows that the number of neurons required to simulate a space-bounded Turing Machine correlates with the tape size. 
To simulate a Turing Machine with an unbounded tape, we would need to add neurons to the RNN once the read/write head reaches the end of the memory. To be specific, we say an RNN has an unbounded number of neurons if the RNN either has an infinite number of neurons to start with or can increase the number of neurons after each update step depending on the neurons’ values. The UTM that simulates any Turing Machine in a fast way was described in [21], which does so with only O(T log T ) slowdown. While this UTM has multiple tapes, Theorem 3 can be generalized to multiple-tape Turing Machines easily. We now obtain: Corollary 3.1. There exists an unbounded-neuron bounded-precision RNN that can simulate any Turing Machine in O(T log T ), where T is the number of steps required for the Turing Machine to compute the result. 6 Discussion and Conclusion To construct a Turing-complete RNN, we have to incorporate some encoding for the unbounded number of symbols on the Turing tape. This encoding can be done by: (a) unbounded precision of some neurons (Theorem 1), (b) an unbounded number of neurons (Theorem 3), or (c) a separate growing memory module (Theorem 2). The main contribution of this paper is spelling out the details of (c), which provides a practical way to construct an RNN that runs any given algorithms. We prove the Turing completeness of a 40-neuron unbounded-precision RNN, which is the smallest Turing-complete RNN to date. We analyze the relationship between the number of neurons and the precision of an RNN when simulating a Turing Machine. Most importantly, we propose a 54-neuron bounded-precision RNN with growing memory modules that is Turing-complete, and this proof of Turing completeness can be extended to stack-augmented RNNs in general. This paper focuses on the computational capabilities and representability of symbolic and subsymbolic processing via stack-augmented RNNs; it does not engage yet with methods to train them. Since growing memory modules are not differentiable, we cannot train them directly by the frequently used error backpropagation algorithm. One may want to construct a differentiable version of the modules, or alternatively use a different learning rule (e.g., REINFORCE [22]) to deal with the discrete pushing and popping operations. Understanding methods to train growing memory modules, incorporate symbolic information into the sub-symbolic representation efficiently, and retrieve both symbolic and non-symbolic information are the next steps towards the goal of combining symbolic and sub-symbolic capabilities in an adaptive and applicable manner.
1. What is the focus of the paper regarding Turing machines and RNNs?
2. What are the strengths of the proposed dynamic-growing memory module for RNNs?
3. Do you have any questions or suggestions regarding the paper's content, proofs, or citations?
4. How does the reviewer assess the significance, originality, and impact of the work?
5. Are there any minor issues or typos in the paper that could be improved?
Summary Of The Paper Review
Summary Of The Paper
This work proposes a dynamic-growing memory module for RNNs, which serves to simulate Turing machines with bounded precision. First, the authors prove how to encode a Turing machine into an unbounded-precision RNN. The encoding uses fractal encoding for symbols, like previous work, and it replaces each step of the Turing machine with 3 steps in the RNN. Leaning on the work of [14], they prove that there is a 40-neuron unbounded-precision RNN that simulates any Turing machine (i.e., showing Turing-completeness). Next, they combine the previous unbounded-precision definitions with the memory module to obtain a bounded-precision RNN. The memory module is a stack of neurons that requires two neurons in the RNN to control it: push and pop. It is then proven that the proposed bounded-precision RNN uses 2 of these stacks to simulate a Turing machine. Further, each stack is divided into groups of p neurons, such that the most active neurons are in the RNN (near each side of the memory head in the Turing machine). Leaning again on [14], they show that there is a 54-neuron p-precision RNN with 2 growing memory modules that can simulate any Turing machine. This proves Turing-completeness of the precision-bounded RNN. Finally, they remove the growing memory module and simulate the memory within the neurons of an RNN, showing that an RNN with an unbounded number of bounded-precision neurons can simulate any Turing machine. The work finishes with an interesting discussion of the results and limitations.

Review
The theoretical results presented in this work are novel to the best of my knowledge. The ideas to build the mappings are simple and lean, citing related work as needed. The idea of using stacks with RNNs is not novel; the authors may want to have a look at the works [1], [2], [3] that apply it in practice (and require a differentiable stack). The paper is clear and the proofs look sound. The claims are properly supported and discussed, facilitating the understanding of the work. The flow of the text reads smoothly (despite the number of symbols). On one hand, the results improve over the existing state of the art in terms of the number of neurons for unbounded precision. On the other hand, the work shows that bounded-precision RNNs with memory are Turing-complete. Section 5 and the final discussion are a very interesting ending to this paper. A little more detail for Figure 2 in the main paper would be highly appreciated by this reader.

Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach? This work is clearly very important to the NeurIPS community and the deep learning community in particular. This work does not only make solid theoretical contributions where needed, but also emphasizes the importance of memory augmentation in RNNs. It was very pleasant to read this paper and the nice results. Thank you!

Minor: Line 112: "without loss of generosity" --> "without loss of generality"

References
[1] Mali et al. "Recognizing Long Grammatical Sequences Using Recurrent Networks Augmented With An External Differentiable Stack", 2020.
[2] Mali et al. "The Neural State Pushdown Automata", IEEE Transactions on Artificial Intelligence, 2020.
[3] Suzgun et al. "Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages", 2019.
NIPS
Title Turing Completeness of Bounded-Precision Recurrent Neural Networks Abstract Previous works have proved that recurrent neural networks (RNNs) are Turingcomplete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine’s time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and boundedprecision RNNs, revisiting and extending the theoretical foundations of RNNs. 1 Introduction Symbolic (such as Turing Machines) and sub-symbolic processing (such as adaptive neural networks) are two competing methods of representing and processing information, each with its own advantages. An ultimate way to combine symbolic and sub-symbolic capabilities is by enabling the running of algorithms on a neural substrate, which means a neural network that can simulate a Universal Turing Machine (UTM). Previous works [1, 2, 3] have shown that this is possible – there exists a recurrent neural network (RNN) that can simulate a UTM. These proofs assumed a couple of neurons with unbounded precision that equals the number of symbols used in the Turing tape. Here we provide an alternative simulation of a UTM by RNNs with bounded-precision neurons only. The general idea works as follows. The Turing Machine’s tape is stored in a growing memory module, which is a stack of neurons with pushing and popping operations controlled by neurons in the RNN. The size of the growing memory module is determined by the usage of the Turing tape - it dynamically recruits new neurons when more memories are needed and releases them when memories become irrelevant. The neurons in the stack, except for the top neuron, are not regularly updated (and hence can be referred to as passive), saving computational cost for memories that are not in the focus of the computing and do not require change. Using growing memory modules, a 54-neuron bounded-precision RNN is constructed that can simulate any Turing Machine. Our proposed growing memory modules are inspired by biological memory systems. The process of dynamically recruiting new neurons when more memories are necessary is also observed in biological memory systems. Neurogenesis is the process by which new neurons are produced in the central nervous system; it is most active during early development, but continues through life. *Both authors contributed equally. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). In adult vertebrates, neurogenesis is known to occur in the dentate gyrus (DG) of the hippocampal formation [4] and the subventricular zone (SVZ) of the lateral ventricles [5]. Since DG is well-known in neuroscience for its role in pattern separation for memory encoding [6, 7], this suggests that biological memory systems also dynamically recruit new neurons. The rate of neurogenesis in adult mice has been shown to be higher if they are exposed to a wider variety of experiences [8]. 
This further suggests a role for self-regulated neurogenesis in scaling up the number of new memory that can be encoded and stored during one’s lifetime without catastrophic forgetting of previously consolidated memory. Besides the mechanism of recruiting new neurons, the process of storing neurons in growing memory modules also shares some similarities with biological memory consolidation, a process by which short-term memory is transformed into long-term memory [9, 10]. Compared to short-term memory, long-term memory is more long-lasting and robust to interference. This is similar to the neurons stored in growing memory modules - the values of these neurons (except the top neuron in the stack) remain unchanged and cannot be interfered by the RNN, providing a mechanism to store information stably. Growing memory modules share similarities with other stack-augmented RNNs [11, 12, 13, 14, 15]. In neural stacks [11], the RNN outputs two continuous scalars that control the strength of pushing and popping operations, and the stack is made differentiable by adding a strength vector. In stack RNNs [12] and DiffStk-RNN [15], the RNN outputs the probability vector corresponding to pushing, popping, and no operation. In NNPDA [13], the RNN outputs a continuous scalar that controls the pushing and popping operations, with the minimum value corresponding to popping and the maximum value corresponding to pushing. NSPDA [14] uses a discrete-valued action neurons to control pushing and popping operations. In contrast to these models, growing memory modules have a simple design and do not need to be differentiable. However, it can be easily shown that growing memory modules can be simulated by these stack-augmented RNNs in linear time, and thus growing memory modules can be considered a generic type of stack-augmented RNNs. Therefore, our proof on the Turing completeness of an RNN with growing memory modules can be extended to stack-augmented RNNs in general, establishing their theoretical motivation. A Turing-complete RNN that is fully differentiable was introduced in 1996 [16]; this feature is a prerequisite to have the network trainable by gradient descent. It was followed by the Neural Turing Machine (NTM) [17] and its improved version, the differentiable neural computer [18], which are both differentiable and trainable RNNs equipped with memory banks. Though inspired by Turing Machines, bounded-precision NTMs and differentiable neural computers are not Turing-complete due to the fixed-sized memory bank, but they can simulate space-bounded Turing Machines (see Section 5). Simulation of a Turing Machine by an RNN with growing memory modules represents a practical and biologically inspired way to combine symbolic and sub-symbolic capabilities. All neurons in the RNN and growing memory modules have fixed precision. While the size of growing memory modules is linear in the number of symbols used in the Turing tape, the number of neurons in the RNN is still constant. Moreover, the neurons in growing memory modules (except the top neuron in the stack) are passive at most times. As a result, the time complexity of the simulation is linear in the simulated machine’s time and independent of the memory size. By showing how to simulate a Turing Machine with a bounded-precision RNN and thereby constructing a bounded-precision RNN that can run any algorithms, our paper proposes a practical method that combines symbolic and sub-symbolic capabilities. The remainder of the paper is structured as follows. 
Section 2 describes the preliminary of the paper, including the definition of Turing Machines and RNNs. Section 3 revisits and extends theories relating to simulating a Turing Machine with unbounded-precision RNNs and shows the existence of a 40-neuron unbounded-precision RNN that is Turing-complete. Section 4 presents the growing memory modules and proves the existence of a 54-neuron bounded-precision RNN with two growing memory modules that is Turing-complete. Section 5 relates the number of neurons and the precision of RNNs when simulating Turing Machines. Section 6 concludes the paper. 2 Background and Notation A Turing Machine is a 7-tuple M = (Q,Σ,Γ, δ, q0, ♯, F ), where Q is a finite set of states, Σ is a finite set of input symbols, Γ is a finite set of tape symbols (note that Σ ⊂ Γ), δ : Q×Γ → Q×Γ×{L,R} is the machine’s transition rule, q0 ∈ Q is the initial starting state, ♯ is the blank symbol (note that ♯ ∈ Γ but ♯ /∈ Σ), and F ⊂ Q is the set of final state. We only consider deterministic Turing Machines in this paper. The instantaneous description (or the configuration) of a Turing Machine is typically defined as a tuple of state, tape, and the location of the read/write head. However, we use a slightly different definition in the paper. We define the left-tape symbols (or the left tape in short), denoted as sL, to be the string of symbols starting at the symbol under the read/write head and extending to the left. We define the right-tape symbols (or the right tape in short), denoted as sR, to be the string of symbols starting at the symbol to the right of the read/write head and extending to the right. The first symbol in both strings is the closest to the read/write head in both sL and sR (that is, the left tape is reversed in the representation) and the blank symbols at the two ends are omitted. Therefore, the length of sL and sR combined equals to the number of symbols used in the Turing tape in each step (that is, unbounded but not infinite). Since sL and sR encode both the tape and the location of the read/write head, we define the instantaneous description of a Turing Machine as a 3-tuple (q, sL, sR) ∈ (Q,Γ∗,Γ∗), where q denotes the state, sL denotes the left-tape symbols, and sR denotes the right-tape symbols. Though the two definitions are equivalent, this definition allows easy encoding of tape symbols into neurons’ values. The set of all possible instantaneous description is denoted as X := (Q,Γ∗,Γ∗). In each step of a Turing Machine, the symbol under the read/write head is read and, together with the state, determines the symbol to be written under the read/write head, the direction of moving the tape, and the next state. To be precise, the complete dynamic map of M, denoted as PM : X → X , is defined as follows: 1. Let x = (q, sL, sR) be the input configuration, sL,(1) denote the first symbol in sL and sR,(1) denote the first symbol in sR. The transition is defined by the 3-tuple (q′, y, d) = δ(q, sL,(1)); 2. Replace the state of the machine q with q′ and the first symbol in sL by y; 3. Move the symbol in sL,(1) to become the new sR,(1) if d = L, and move sR,(1) to become the new sL,(1) if d = R (if there are no symbols left in sL or sR for moving, append a blank symbol ♯ to it before moving). Denote the left-tape symbols and the right-tape symbols after 2. and 3. by s′L and s ′ R respectively. Then PM(x) = (q′, s′L, s′R) represents one transition of the Turing Machine M from one configuration to the next. 
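To make the complete dynamic map PM concrete, here is a minimal Python sketch of one transition on the (q, sL, sR) representation described above. The helper name tm_step, the toy transition table delta, and the state names are hypothetical illustrations, not part of the paper's construction.

```python
# Minimal sketch of one application of the complete dynamic map P_M on the
# (q, s_L, s_R) representation; the toy machine below is a hypothetical example.
BLANK = "#"

def tm_step(q, sL, sR, delta):
    """s_L and s_R are lists whose index 0 is the symbol closest to the read/write head."""
    head = sL[0] if sL else BLANK          # symbol under the head (blank if s_L is empty)
    q_new, y, d = delta[(q, head)]         # 1. the transition (q', y, d) = delta(q, s_L(1))
    sL = [y] + sL[1:] if sL else [y]       # 2. write y under the head
    if d == "L":                           # 3. move: s_L(1) becomes the new s_R(1) ...
        sR = [sL[0]] + sR
        sL = sL[1:]
    else:                                  #    ... or s_R(1) becomes the new s_L(1)
        if not sR:
            sR = [BLANK]                   # append a blank if no symbol is left to move
        sL = [sR[0]] + sL
        sR = sR[1:]
    while sL and sL[-1] == BLANK: sL.pop() # blanks at the two outer ends are omitted
    while sR and sR[-1] == BLANK: sR.pop()
    return q_new, sL, sR

# toy machine: overwrite 0s with 1s while moving right, halt on the first blank
delta = {("q0", "0"): ("q0", "1", "R"),
         ("q0", "1"): ("q0", "1", "R"),
         ("q0", BLANK): ("qf", BLANK, "L")}
q, sL, sR = "q0", ["0"], ["0", "0"]
while q != "qf":
    q, sL, sR = tm_step(q, sL, sR, delta)
print(q, sL, sR)                           # -> qf ['1', '1', '1'] []
```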
The partial input-output function of M, denoted as P∗M : X → X , is defined by applying PM repeatedly until q ∈ F , and is undefined if it is not possible to have q ∈ F by applying PM repeatedly. A recurrent neural network (RNN) is a neural network consisting of n neurons. The value of neuron i at time t ∈ {1, 2, ...}, denoted as xi(t) ∈ Q (Q is the set of rational numbers), is computed by an affine transformation of the values of neurons in the previous state followed by an activation function σ, i.e. xi(t) = σ( ∑n j=1 wijxj(t− 1) + bi), where wij are the weights and bi is the bias; or in vector form: x(t) = σ(Wx(t− 1) + b), (1) where x(t) ∈ Qn, W ∈ Rn×n and b ∈ Rn. This defines a mapping TW,b : Qn → Qn which characterizes an RNN. For simplicity, we consider the saturated-linear function in this paper; that is: σ(x) := 0 if x < 0, x if 0 ≤ x ≤ 1, 1 if x > 1. (2) Thus, x(t) ∈ (Q ∩ [0, 1])n for all t > 0. We say that a neuron xi(t) has precision p in base b if for all t > 0, xi(t) can be expressed as∑p l=1 a(l)(t)∏l j=1 c(j)(t) for some strings a(t) ∈ {0, 1, ..., b}p and c(t) ∈ {1, ..., b}p. For a string a, we use a(i) to denote the ith symbol in a and a(i:j) to denote the string a(i)a(i+1)...a(j). For a function f that maps from a set Y to a subset of Y, we use fn to denote the nth iterate of f where 1 ≤ n < ∞. For any two vectors x ∈ Rm,y ∈ Rn, we use x ⊕ y ∈ Rm+n to denote the concatenation of the two vectors. We use A∗ to denote all possible strings formed by elements from set A. The notation is summarized in the table found in Appendix D. 3 Turing Completeness of Unbounded-Precision RNNs To simulate a Turing Machine M by an RNN, we first consider how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers. For the state q ∈ Q, we encode it with ⌈log2 |Q|⌉ binary values, denoted as ρ(q) : Q → {0, 1}⌈log2 |Q|⌉, with each possible combination of binary values representing a specific state. Example. Let Q = {1, 2, 3, 4, 5, 6}. We can encode it by ⌈log2 |Q|⌉ = 3 binary values, with ρ(q)(1) = [0, 0, 0], ρ(q)(2) = [0, 0, 1], ρ(q)(3) = [0, 1, 0], ρ(q)(4) = [0, 1, 1], ρ(q)(5) = [1, 0, 0], and ρ(q)(6) = [1, 0, 1]. For the left tape sL ∈ Γ∗ and the right tape sR ∈ Γ∗, we use fractal encoding in two rational numbers, as recommended in [1, 2]. The fractal encoding, bearing similarity to Cantor sets, enables fast manipulation of the top symbols. Without loss of generality, assume that the tape symbols Γ are encoded into numbers {1, 3, 5, ..., 2|Γ|− 1} and that the blank symbol ♯ is encoded by 1. Then, define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := |y|∑ i=1 y(i) (2|Γ|)i + 1 (2|Γ|)|y| · (2|Γ| − 1) . (3) Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5), and ♯ = 1. Then ρ(s)(sL) = 38 + 5 82 + 7 83 + 3 84 + 5 85 + 1 86 + 1 87 + 1 88 + ... = 3 8 + 5 82 + 7 83 + 3 84 + 5 85 + 1 85·7 . The last term in (3) represents the infinite blank symbols of a tape. This encoding requires the tape symbol neurons to have the same precision as the size of the active (non-blank) part of the tape in base 2|Γ|. As the tape in a Turing Machine has an unbounded size, it means that we require neurons with unbounded precision. This is different from infinite precision but still not applicable. Hence, we will discuss how to remove this unbounded-precision assumption in Section 4. Finally, we encode the value of the top symbol in each tape neuron, sL,(1) and sR,(1), by a binary tuple ρ(r) : Γ → {0, 1}|Γ|−1: ρ (r) i (y) := 1{y > 2i} (1 ≤ i ≤ |Γ| − 1). 
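As a sanity check on the fractal encoding ρ(s) of (3), the following sketch encodes the paper's example tape with exact rationals and then peels the symbols back off one at a time. The decoder shown here only illustrates why the top symbol is easy to recover; it is not the RNN mechanism itself, and the helper names are ours.

```python
from fractions import Fraction

def fractal_encode(symbols, gamma_size):
    """rho^(s) of eq. (3): symbols are odd integers in {1, 3, ..., 2|Gamma|-1},
    index 0 closest to the head; the final term encodes the infinite blank tail."""
    b = 2 * gamma_size
    x = sum(Fraction(y, b ** i) for i, y in enumerate(symbols, start=1))
    return x + Fraction(1, (b ** len(symbols)) * (b - 1))

def peel_symbols(x, gamma_size, length):
    """Recover the first `length` symbols by repeatedly reading off the top symbol.
    The remaining part of the encoding is always strictly between 0 and 1, so int() is exact."""
    b = 2 * gamma_size
    out = []
    for _ in range(length):
        y = int(x * b)
        out.append(y)
        x = x * b - y
    return out

# example from the paper: Gamma encoded as {1, 3, 5, 7}, s_L = (3, 5, 7, 3, 5), blank = 1
x = fractal_encode([3, 5, 7, 3, 5], gamma_size=4)
print(peel_symbols(x, 4, 7))   # [3, 5, 7, 3, 5, 1, 1] -- the tail term keeps producing blanks
```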
(4) That is, the i coordinate of ρ(r)(y) is 1 if and only if the symbol y has a value larger than 2i, and is 0 otherwise. Example. Let Γ = {1, 3, 5, 7}. Then ρ(r)(1) = [0, 0, 0], ρ(r)(3) = [1, 0, 0], ρ(r)(5) = [1, 1, 0], and ρ(r)(7) = [1, 1, 1]. Combining the above discussion, we define the encoding function of configurations ρ : X → Q2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+5 by: ρ(q, sL, sR) = ρ (q)(q)⊕ ρ(s)(sL)⊕ ρ(s)(sR)⊕ ρ(r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ 0, (5) where 0 is a zero vector of size |Q||Γ|+ 5. Let ρ−1 : ρ(X ) → X be the inverse function such that ρ−1(ρ(x)) = x for all x ∈ X (note that ρ is injective, i.e. ρ(x) = ρ(x′) implies x = x′); we call ρ−1 the decoder function. It should be noted that ρ(q)(q) ⊕ ρ(s)(sL) ⊕ ρ(s)(sR) is sufficient to decode the instantaneous description. We include ρ(r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ 0 only because it facilitates the full simulation on the RNN. This completes the construction of the encoding function. Given an instantaneous description x ∈ X , we initialize the neurons in an RNN with values ρ(x). Then, it is possible to construct the parameters of RNN such that the update given by the RNN on these neurons is the same as the update given by the Turing Machine: Theorem 1. Given a Turing Machine M, there exists an injective function ρ : X → QN and an n-neuron unbounded-precision RNN TW,b : Qn → Qn, where n = 2|Γ|+ ⌈log2 |Q|⌉+ |Q||Γ|+ 5, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (6) Proof. A sketch of the proof is as follows. Neurons in the RNNs are grouped by their function. Tape neurons, initialized with ρ(s)(sL) and ρ(s)(sR), encode the tape in fractal encoding. Readout neurons, initialized with ρ(r)(sL,(1)) and ρ(r)(sR,(1)), encode the first symbol in the left and the right tape. State neurons, initialized with ρ(q)(q), encode the state. We need to update the values of these neurons to simulate one step of the Turing Machine. Three steps (or stages) of RNN are required to simulate one step of a Turing Machine. In the first stage, entry neurons, initialized with 0, compute the combination of the state and the symbol under the head. Since this combination fully determines the next transition, we use it to update the state neurons and the temporary tape neurons during stage two. Temporary tape neurons serve as a buffer for tape neurons when shifting the tape to the left or right. In stage three, we move the values from the temporary tape neurons to tape neurons to complete the update. The detailed proof can be found in Appendix A. In other words, to simulate one step of a Turing Machine M, we can first encode the instantaneous description x by the encoder function ρ, apply the RNN three times T 3W,b, and decode the values back by ρ−1 to obtain PM(x), the instantaneous description after one step of the Turing Machine. Or equivalently, for any Turing Machine M, there exists an RNN such that every three steps of the RNN yield the same result as one step of the Turing Machine. By applying Theorem 1 repeatedly, we simulate a Turing Machine with an RNN in a linear time. To be specific, the partial input-output function of an RNN, denoted as T ∗W,b : QN → QN , is defined by applying T 3W,b repeatedly until q ∈ F (where q is the state that the RNN simulates), and is undefined if it is not possible to have q ∈ F by applying T 3W,b repeatedly. Based on this definition and Theorem 1, it follows that: Corollary 1.1. 
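The RNN in Theorem 1 is just the map TW,b of (1) with the saturated-linear activation (2), applied three times per simulated Turing-Machine step. The sketch below shows such an update in exact rationals; the 2-neuron weight matrix is a hypothetical toy, not the matrix constructed in Appendix A.

```python
from fractions import Fraction

def sigma(x):
    """Saturated-linear activation of eq. (2): clip to [0, 1]."""
    return min(max(x, Fraction(0)), Fraction(1))

def rnn_step(x, W, b):
    """One application of T_{W,b}: x(t) = sigma(W x(t-1) + b), eq. (1), in exact rationals."""
    return [sigma(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# hypothetical 2-neuron example: the first neuron copies the second,
# the second doubles the first and saturates at 1
W = [[Fraction(0), Fraction(1)],
     [Fraction(2), Fraction(0)]]
b = [Fraction(0), Fraction(0)]
x = [Fraction(3, 4), Fraction(1, 4)]
for _ in range(3):                 # Theorem 1 applies T_{W,b} three times per simulated TM step
    x = rnn_step(x, W, b)
print(x)
```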
Given a Turing Machine M, there exists an injective function ρ : X → Qn and an n-neuron unbounded-precision RNN TW,b : Qn → Qn, where n = 2|Γ|+ ⌈log2 |Q|⌉+ |Q||Γ|+ 5, such that for all instantaneous descriptions x ∈ X , the following holds: If P∗M(x) is defined, then ρ−1(T ∗W,b(ρ(x))) = P∗M(x), (7) and if P∗M(x) is not defined, then T ∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T ∗W,b(ρ(x)) is computed in 3T steps by the RNN. Corollary 1.1 shares similarities with Theorem 1 in [1]. However, our theorem states that 3T , instead of 4T , is sufficient to simulate a Turing Machine. We also give the relationship between the number of neurons required by the RNN and the size of Q and Γ in the Turing Machine. A small UTM with 6 states and 4 symbols, denoted by U6,4, was proposed [19] and can simulate any Turing Machine in time O(T 6), where T is the number of steps required by the Turing Machine (the one to be simulated) to compute the result. As U6,4 is also a Turing Machine, we apply Corollary 1.1 to simulate U6,4, leading to a Turing-complete RNN. Plugging in |Q| = 6 and |Γ| = 4, we obtain the following result: Corollary 1.2. There exists a 40-neuron unbounded-precision RNN that can simulate any Turing Machine in O(T 6), where T is the number of steps required for the Turing Machine to compute the result. It should be noted that [1] focused on capabilities and proved a Turing-complete RNN with 1058 neurons; [20] proposed a Turing-complete RNN with 52 neurons. Here we provide a plug-and-play formula to simulate any Turing Machine. 4 Turing Completeness of Bounded-Precision RNNs with Growing Memory In the following, we consider how to remove the assumption of unbounded neural precision (Section 3) without reducing computational capacity. If we assume all neurons to have precision bounded by p in base 2|Γ|, then the tape can be encoded by ⌈|sj |/p⌉ neurons using fractal encoding, where j ∈ {L,R}. We do this by encoding every p symbols of the tape into a single neuron. Since most of these neurons do not require updates, just like symbols far from the read/write head in a Turing tape, we propose to store them in a separate growing memory module organized in a stack-like manner: Definition 1. A growing memory module is a stack of non-zero neurons with push and pop operations controlled by two neurons in an RNN, denoted as push neuron u(t) and pop neuron o(t), in the following way: for every step t after the RNN finished updating the values of neurons, (i) if u(t) > 0, then a new neuron with the value u(t) is pushed to the stack and u(t) is set to 0; (ii) if o(t) = 0 and the stack is not empty, then the top neuron is popped from the stack and o(t) is set to the value of the top neuron in the updated stack; (iii) if o(t) = 0 and the stack is empty, then o(t) is set to a default value c. An RNN with a growing memory module is a mapping TW,b : (QN ,Q∗) → (QN ,Q∗) where the first element of the tuple corresponds to values of neurons in the RNN and the second element of the tuple corresponds to the stack of the growing memory module. We can equip an RNN with multiple growing memory modules, with each module having its own push and pop neurons controlled by the RNN. An RNN with two growing memory modules will be defined by a mapping TW,b : (QN ,Q∗,Q∗) → (QN ,Q∗,Q∗). We can view the growing memory module as a way of dynamically pointing to a sequence of non-zero neurons appended by one zero neuron. 
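Definition 1 is simple enough to state directly in code. The sketch below applies rules (i)–(iii) once, after the RNN has finished an update; the class wrapper and method names are illustrative scaffolding, and the fallback when a pop empties the stack is our assumption for an edge case the definition leaves implicit.

```python
class GrowingMemoryModule:
    """Stack of non-zero neurons driven by a push neuron u(t) and a pop neuron o(t) (Definition 1)."""
    def __init__(self, default_value):
        self.stack = []            # bottom ... top
        self.c = default_value     # default value c used when popping from an empty stack

    def interact(self, u, o):
        """Apply rules (i)-(iii) once, after the RNN has finished updating its neurons.
        Returns the new (u, o) values to be written back into the RNN's push/pop neurons."""
        if u > 0:                  # (i) push the value of the push neuron, then clear it
            self.stack.append(u)
            u = 0
        if o == 0:                 # (ii)/(iii) pop on demand
            if self.stack:
                self.stack.pop()
                # assumption: fall back to c if this pop empties the stack
                o = self.stack[-1] if self.stack else self.c
            else:
                o = self.c
        return u, o

m = GrowingMemoryModule(default_value=0.125)
u, o = m.interact(u=0.4, o=0.7)    # pushes 0.4; nothing is popped since o != 0
u, o = m.interact(u=0.0, o=0.0)    # pops 0.4; the stack is now empty, so o becomes the default 0.125
```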
o(t) can be viewed as the pointer for the last non-zero neuron in the sequence, and u(t) can be viewed as the pointer for the zero neuron at the beginning of the sequence; see Figure 1. With two growing memory modules (one for the left tape and one for the right tape), we can construct an RNN with bounded-precision neurons that can simulate any Turing Machine. We first describe how to encode the instantaneous description (q, sL, sR) ∈ X by a vector of rational numbers and two stacks, with which an RNN and its growing memory modules can be initialized. In the following discussion, we assume that all neurons have precision bounded by p ≥ 2 in base 2|Γ|; that is, each neuron can encode p symbols at most. Both the state q and the top symbols sL,(1), sR,(1) are encoded with binary values as in Section 3. For the tape sj (j ∈ {L,R}), we define the fractal encoding ρ(s) : Γ∗ → Q by: ρ(s)(y) := |y|∑ i=1 y(i) (2|Γ|)i , (8) which is similar to (3) except for the encoding of infinite blank symbols. Then, we encode the tape sj into a stack of neurons as follows: First, encode the rightmost p symbols (the ones farthest from the read/write head) with ρt and push it to an empty stack, denoted as Mj . Then, encode the next rightmost p symbols and push it to Mj again, and repeat until at least one and at most p symbols remain in the tape. Denote this encoding function for the tape as ρ(M) : Γ∗ → Q∗. The remaining symbols in the tape, denoted as sj,(1:h(|sj |)), where h(y) := ((y − 1) mod p) + 1, will be encoded with fractal encoding ρ(s) as well, but would appear in neurons inside the RNN. The general idea is to let only the symbols closest to the read/write head (sj,(1:h(|sj |))) reside in the RNN. If the number of symbols residing in the RNN reaches 0 or p, then we pop from or push to the stack respectively to ensure that at least 1 and at most p symbols reside in the tape neurons. It is interesting to note that the kth neuron in the stack (from the top) requires at least kp steps of the Turing Machine before it may be updated, so the values of neurons near the bottom of the stack will not be changed for many steps. That is, the neurons in the stack, except for the top neurons, are passive. Example. Let Γ = {1, 3, 5, 7}, sL = (3, 5, 7, 3, 5, 5, 3, 7) and p = 3. Then the number of symbols to remain in the RNN is 2 and they are encoded by ρ(s)(sL,(1:h(|sL|))) = 3 8 + 5 82 . The remaining six symbols are stored in the stack: ρ(M)(sL) = [ 78 + 3 82 + 5 83 , 5 8 + 3 82 + 7 83 ]. We encode in neuron h(|sj |), the number of symbols in sj (j ∈ {L,R}). In the above example, h(|sL|) = 2. These neurons help the RNN to know when pushing or popping operations are required. We use the encoding ρ(h) : {1, 2, ..., p} → Q, defined by: ρ(h)(y) = y p+ 1 . (9) Together, define the encoding function ρ : X → (Q2|Γ|+⌈log2 |Q|⌉+|Q||Γ|+19,Q∗,Q∗) by: ρ(q, sL, sR) =(ρ (q)(q)⊕ ρ(s)(sL,(1:h(|sL|)))⊕ ρ (s)(sR,(1:h(|sR|)))⊕ ρ (s)(sL,(h(|sL|)+1:h(|sL|)+p))⊕ ρ(s)(sR,(h(|sR|)+1:h(|sR|)+p))⊕ ρ (r)(sL,(1))⊕ ρ(r)(sR,(1))⊕ ρ(h)(h(|sL|))⊕ ρ(h)(h(|sR|))⊕ 0, ρ(M)(sL), ρ(M)(sR)), (10) where 0 is a zero vector of size |Q||Γ| + 15. The first element of the tuple is for initializing the neurons in the RNN, while the second and third element of the tuple is for initializing the two growing memory stack modules. All encoded values have precision of p. Similar to the previous section, ρ is injective and so we can define the decoder function ρ−1 : ρ(X ) → X . 
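The split of a tape into the symbols that stay inside the RNN and the p-symbol chunks stored in the growing memory module can be sketched as follows; the function names are ours, and the stack is listed bottom-to-top (the paper's example lists the same two chunk values top-first).

```python
from fractions import Fraction

def rho_s(symbols, gamma_size):
    """rho^(s) of eq. (8): fractal encoding without the blank tail."""
    b = 2 * gamma_size
    return sum(Fraction(y, b ** i) for i, y in enumerate(symbols, start=1))

def encode_tape(s, p, gamma_size):
    """Split a tape s (index 0 nearest the head) into the part kept inside the RNN
    and the chunks rho^(M)(s) stored in the growing memory module (bottom to top)."""
    h = ((len(s) - 1) % p) + 1          # h(|s|): number of symbols that stay in the RNN
    in_rnn = rho_s(s[:h], gamma_size)
    stack = []
    # push p-symbol chunks, starting from the symbols farthest from the head
    for start in range(len(s) - p, h - 1, -p):
        stack.append(rho_s(s[start:start + p], gamma_size))
    return in_rnn, stack, Fraction(h, p + 1)    # last value is rho^(h)(h(|s|)), eq. (9)

# example from the paper: Gamma encoded as {1,3,5,7}, s_L = (3,5,7,3,5,5,3,7), p = 3
in_rnn, stack, guard = encode_tape([3, 5, 7, 3, 5, 5, 3, 7], p=3, gamma_size=4)
print(in_rnn)   # 29/64  (= 3/8 + 5/64, the two symbols that stay inside the RNN)
print(stack)    # [351/512, 477/512]  (bottom first; the paper lists the same values top-first)
print(guard)    # 1/2  (= rho^(h)(2) with p = 3)
```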
With the new encoding function ρ and growing memory modules, we can prove an alternative version to Theorem 1 that only requires bounded precision neurons: Theorem 2. Given a Turing Machine M, there exists an injective function ρ : X → (Qn,Q∗,Q∗) and an n-neuron p-precision (in base 2|Γ|) RNN with two growing memory modules TW,b : (Qn,Q∗,Q∗) → (Qn,Q∗,Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (11) Proof. The detailed proof is in Appendix B. To illustrate the construction of the RNN, the parameters for the neuron initialized with ρ(h)(h(|sL|)), called the left guard neuron and denoted as gL(t), will be described here. We assume all neurons in the RNN are initialized with values from the encoder function ρ(x) at time t = 1. The guard neuron gL(t) encodes the number of left-tape symbols residing in the RNN. In three stages of an RNN, we need to update its value from gL(1) = h(|sL|)/(p + 1) to gL(4) = h(|s′L|)/(p+1), where s′L is the left tape after one step of the Turing Machine. First, notice that h(|s′L|) can be expressed as: h(|s′L|) = h(|sL|)− 1 if d = L and h(|sL|) ≥ 2, p if d = L and h(|sL|) = 1, h(|sL|) + 1 if d = R and h(|sL|) ≤ p− 1, 1 if d = R and h(|sL|) = p, (12) where d is the direction that the Turing Machine’s head is moved. Example. Assume d = L and the size of the active symbols is h(|sL|) = 1, which means that the Turing Machine is moving left and there is only one symbol residing in the RNN. Since after the move, sL will have one symbol less and so the corresponding neuron sL(t) will encode no symbols. As a result, the top neuron of the left growing memory module would be popped out, and sL(t) would assume its value. This way the neuron sL(t) again encodes the active symbols (1 to p) of the left tape series, and h(|sL|) is set to p. (Alternatively, one may prove it directly by the definition of h and the fact that |s′L| = |sL| − 1.) An analogous process holds when the Turing Machine is moving right. We implement (12) with an RNN as follows. Define stage neurons as: c1(t+ 1) = σ(1− c1(t)− c2(t)), (13) c2(t+ 1) = σ(c1(t)), (14) with both neurons initialized to be zero. Define c(t) := [c1(t), c2(t), c3(t)] where c3(t) := 1 − c1(t) − c2(t), then c(1) = [0, 0, 1]; c(2) = [1, 0, 0]; c(3) = [0, 1, 0]. These stage neurons signal which one of the three stages that the RNN is in. In the construction of the RNN, there exists a linear sum of neurons, denoted as d(t) = [dL(t), dR(t)], such that if the Turing Machine is moving left, d(1) = [0, 0]; d(2) = [1, 0]; d(3) = [0, 0]; and if the Turing Machine is moving right, d(1) = [0, 0]; d(2) = [0, 1]; d(3) = [0, 0]; this signals which direction the Turing Machine is moving to (formulas of d(t) appear in Appendix B). Then consider the following update rule for the left guard neurons: gL(t+ 1) = σ(gL(t) + (dR(t)− dL(t)− pg′L(t) + pg′′L(t))/(p+ 1)), (15) g′L(t+ 1) = σ((p+ 1)gL(t) + dR(t)− p− 2c2(t)− 2c3(t)), (16) g′′L(t+ 1) = σ(2− (p+ 1)gL(t)− dR(t)− 2c2(t)− 2c3(t)), (17) where gL(1) = h(|sL|)/(p+1), g′L(1) = g′′L(1) = 0. It can be verified that gL(4) = h(|t′L|)/(p+1) as defined by (12), completing the proof for gL(t). Example. Assume d = L and h(|sL|) = 1. Then gL(1) = gL(2) = 1/(p+ 1) and g′L(1) = g′′L(1) = g′L(2) = g ′′ L(2) = 0. On the second stage, gL(3) = σ(1/(p + 1) − 1/(p + 1)) = 0, g′L(3) = σ(1−p) = 0, and g′′L(3) = σ(2−1) = 1. On the third stage, gL(4) = σ(0+p/(p+1)) = p/(p+1) as required. The full proof appears in Appendix B. 
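Equations (15)–(17) can be checked mechanically. The sketch below re-implements the three-stage guard-neuron update in exact arithmetic and verifies that it reproduces the case analysis (12) for every direction and every h(|sL|) ∈ {1, ..., p}, for a few small precisions; the helper names and the tested range of p are ours.

```python
from fractions import Fraction as F

def sigma(x):
    """Saturated-linear activation of eq. (2)."""
    return min(max(x, F(0)), F(1))

def simulate_guard(h, p, direction):
    """Run the three-stage update (15)-(17) for the left guard neuron and return g_L(4)."""
    c = {1: (0, 0, 1), 2: (1, 0, 0), 3: (0, 1, 0)}                 # stage neurons (13)-(14)
    d = {1: (0, 0), 2: (1, 0) if direction == "L" else (0, 1), 3: (0, 0)}  # direction signal
    gL, gpL, gppL = F(h, p + 1), F(0), F(0)                         # g_L(1), g'_L(1), g''_L(1)
    for t in (1, 2, 3):
        _, c2, c3 = c[t]
        dL, dR = d[t]
        gL, gpL, gppL = (                                           # simultaneous update
            sigma(gL + (dR - dL - p * gpL + p * gppL) / (p + 1)),   # eq. (15)
            sigma((p + 1) * gL + dR - p - 2 * c2 - 2 * c3),         # eq. (16)
            sigma(2 - (p + 1) * gL - dR - 2 * c2 - 2 * c3),         # eq. (17)
        )
    return gL

def expected(h, p, direction):
    """h(|s'_L|) according to the case analysis (12)."""
    if direction == "L":
        return p if h == 1 else h - 1
    return 1 if h == p else h + 1

for p in range(2, 7):
    for h in range(1, p + 1):
        for direction in ("L", "R"):
            assert simulate_guard(h, p, direction) == F(expected(h, p, direction), p + 1)
print("guard-neuron update matches eq. (12) for all tested cases")
```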
Similar to Corollary 1.1, it follows that: Corollary 2.1. Given a Turing Machine M, there exists an injective function ρ : X → (Qn,Q∗,Q∗) and an n-neuron p-precision (in base |2Γ|) RNN with two growing memory modules TW,b : (Qn,Q∗,Q∗) → (Qn,Q∗,Q∗), where n = 2|Γ| + ⌈log2 |Q|⌉ + |Q||Γ| + 19 and p ≥ 2, such that for all instantaneous descriptions x ∈ X , the following holds: If P∗M(x) is defined, then ρ−1(T ∗W,b(ρ(x))) = P∗M(x), (18) and if P∗M(x) is not defined, then T ∗W,b(ρ(x)) is also not defined. If P∗M(x) is defined and computed in T steps by M, then T ∗W,b(ρ(x)) is computed in 3T steps by the RNN. Finally, applying Corollary 2.1 to U6,4, we obtain: Corollary 2.2. There exists a 54-neuron p-precision (in base 8) RNN with two growing memory modules that can simulate any Turing Machine in O(T 6), where T is the number of steps required for the Turing Machine to compute the result and p ≥ 2. The architecture of the Turing-complete 54-neuron RNN (fully described in the proof of Theorem 2) is depicted in Figure 2. 4.1 Relationship of the Growing Memory Modules with Stack-augmented RNNs The proposed growing memory module belongs to the generic class of stack-augmented RNNs, which refers to any RNNs augmented with a stack-like mechanism. Many different forms of stackaugmented RNNs have been proposed [11, 12, 13, 14, 15]. Given the simplicity of the design, the growing memory module represents the foundation form of these stack-augmented RNNs. It is easy to show that these stack-augmented RNNs can simulate the growing memory module in linear time. For example, the growing memory modules can be simulated by the neural stack [11] as follows. For pushing operation, set vt in the neural stack to u(t) in the growing memory module, and dt in the neural stack to 1{u(t) > 0} in the growing memory module. For popping operation, set ut in the neural stack to 1{o(t) = 1} in the growing memory module. Therefore, the proof for the Turing completeness of bounded-precision RNNs with growing memory modules can extend to other stack-augmented RNNs. That is, bounded-precision stack-augmented RNNs are also Turing-complete. Different from other stack-augmented RNNs, the proposed growing memory modules use a simple mechanism to control pushing and popping, as only the top neurons in the stack are included in the RNNs. This allows theories relating to growing memory modules to be easily extended to other forms of RNNs. 5 Bounded-Precision RNNs and Space-Bounded Turing Machines As discussed above, it is inefficient to update all neurons that encode tape information, but if one still wants to remove the growing memory modules and focuses purely on a bounded-precision RNN only, then the resulting network can simulate space-bounded Turing Machines (that is, Turing Machines with a bounded-size tape) only. Theorem 3. Given a Turing Machine M with a bounded tape of size F , there exists an injective function ρ : X → Qn and an n-neuron p-precision (in base |2Γ|) RNN TW,b : Qn → Qn, where n = O(⌈F/p⌉) and p ≥ 2, such that for all instantaneous descriptions x ∈ X , ρ−1(T 3W,b(ρ(x))) = PM(x). (19) The proof, which is constructive, can be found in Appendix C. The general idea of the proof is to implement the growing memory module in Section 4 by an RNN as well and place all neurons inside the RNN. The theorem shows that the number of neurons required to simulate a space-bounded Turing Machine correlates with the tape size. 
To simulate a Turing Machine with an unbounded tape, we would need to add neurons to the RNN once the read/write head reaches the end of the memory. To be specific, we say an RNN has an unbounded number of neurons if the RNN either has an infinite number of neurons to start with or can increase the number of neurons after each update step depending on the neurons’ values. The UTM that simulates any Turing Machine in a fast way was described in [21], which does so with only O(T log T ) slowdown. While this UTM has multiple tapes, Theorem 3 can be generalized to multiple-tape Turing Machines easily. We now obtain: Corollary 3.1. There exists an unbounded-neuron bounded-precision RNN that can simulate any Turing Machine in O(T log T ), where T is the number of steps required for the Turing Machine to compute the result. 6 Discussion and Conclusion To construct a Turing-complete RNN, we have to incorporate some encoding for the unbounded number of symbols on the Turing tape. This encoding can be done by: (a) unbounded precision of some neurons (Theorem 1), (b) an unbounded number of neurons (Theorem 3), or (c) a separate growing memory module (Theorem 2). The main contribution of this paper is spelling out the details of (c), which provides a practical way to construct an RNN that runs any given algorithms. We prove the Turing completeness of a 40-neuron unbounded-precision RNN, which is the smallest Turing-complete RNN to date. We analyze the relationship between the number of neurons and the precision of an RNN when simulating a Turing Machine. Most importantly, we propose a 54-neuron bounded-precision RNN with growing memory modules that is Turing-complete, and this proof of Turing completeness can be extended to stack-augmented RNNs in general. This paper focuses on the computational capabilities and representability of symbolic and subsymbolic processing via stack-augmented RNNs; it does not engage yet with methods to train them. Since growing memory modules are not differentiable, we cannot train them directly by the frequently used error backpropagation algorithm. One may want to construct a differentiable version of the modules, or alternatively use a different learning rule (e.g., REINFORCE [22]) to deal with the discrete pushing and popping operations. Understanding methods to train growing memory modules, incorporate symbolic information into the sub-symbolic representation efficiently, and retrieve both symbolic and non-symbolic information are the next steps towards the goal of combining symbolic and sub-symbolic capabilities in an adaptive and applicable manner.
1. What is the focus and contribution of the paper regarding unbounded precision RNNs for simulating Turing machines? 2. What are the strengths of the paper, particularly in its novelty and writing quality? 3. What are the limitations and potential concerns of the proposed approach, especially regarding its non-trainability and practical applicability? 4. How does the reviewer assess the relevance of other works, such as "Implementing Neural Turing Machines" and a related article in Nature?
Summary Of The Paper Review
Summary Of The Paper This theoretical work shows the existence of unbounded-precision RNNs for simulating Turing machines, with a growing/shrinking memory module. Although the paper seems quite interesting and well-written, I am afraid I don't have the right background to make an accurate judgment on the novelty and potential impact of the work. I share a few thoughts below; similarly, due to lack of knowledge and limited reviewing time, I was not able to check the proofs. Review Strengths Most of the results seem novel and the paper is very well-written. Questions/Possible-Limitations The given proof shows the existence of bounded/non-bounded precision RNNs of a particular size (for simulating Turing machines), and these networks are not trainable (due to non-differentiable parts). As mentioned in the conclusion this is a limitation, and I wonder (apart from its inherent value) what other value the proposed architecture might bring to the field (practical/theoretical). This part is not clear to me and I think it would be nice to discuss this in the rebuttal. Can we train similar architectures if we make growing differentiable? What would that enable that is not already enabled? I wonder whether copying experiments (like in the NTM paper) would be appropriate here. It would be nice to have a working implementation of the proposed RNN. Minor "Without loss of generosity," -> generality. "Implementing Neural Turing Machines" https://arxiv.org/abs/1807.08518 might be a relevant work. "https://www.nature.com/articles/nature20101" Same with this.
NIPS
Title A General Method for Robust Learning from Batches Abstract In many applications, data is collected in batches, some of which may be corrupt or even adversarial. Recent work derived optimal robust algorithms for estimating finite distributions in this setting. We develop a general framework of robust learning from batches, and determine the limits of both distribution estimation, and notably, classification, over arbitrary, including continuous, domains. Building on this framework, we derive the first robust agnostic: (1) polynomial-time distribution estimation algorithms for structured distributions, including piecewisepolynomial, monotone, log-concave, and gaussian-mixtures, and also significantly improve their sample complexity; (2) classification algorithms, and also establish their near-optimal sample complexity; (3) computationally-efficient algorithms for the fundamental problem of interval-based classification that underlies nearly all natural-1-dimensional classification problems. 1 Introduction 1.1 Motivation In many learning applications, some samples are inadvertently or maliciously corrupted. A simple and intuitive example shows that this erroneous data limits the extent to which a distribution can be learned, even with infinitely many samples. Consider p that could be one of two possible binary distributions: ( 12 − β 2 , 1 2 + β 2 ) and ( 1 2 + β 2 , 1 2 − β 2 ). Given any number of samples from p, an adversary who observes a 1 − β fraction of the samples and can determine the rest, could use the observed samples to learn p, and set the remaining samples to make the distribution always appear to be (0.5, 0.5). Even with arbitrarily many samples, any estimator for p fails to decide which p is in effect, hence incurs a total-variation (TV) distance ≥ β2 , that we call the adversarial lower bound. The example may seem to suggest the pessimistic conclusion that if an adversary can corrupt a β fraction of the data, a TV-loss of≥ β2 is inevitable. Fortunately, in many applications it can be avoided. In the following applications, and many others, data is collected in batches, most of which are genuine, but some possibly corrupted. Data may be gathered by sensors, each providing a large amount of data, and some sensors may be faulty. The word frequency of an author may be estimated from several large texts, some of which are mis-attributed. User preferences may be learned by querying several individuals, some intentionally biasing their feedback. Multiple agents may contribute to a crowd-sourcing platform, but some may be unreliable or malicious. Interestingly, for data arriving in batches, even when a β-fraction of which are corrupted, more can be said. Recently, [QV17] formalized the problem for finite domains. They considered estimating a distribution p over [k] in TV-distance when the samples are provided in batches of size ≥ n. Out of a total of m batches, a fraction ≤ β may be arbitrarily and adversarially corrupted, while in every other batch b the samples are drawn according to a distribution p. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. For β<1/900, they derived an estimation algorithm that approximates any p over a finite domain to TV-distance O(β/ √ n), surprisingly, much lower than the individual samples limit of Θ(β). 
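Returning to the introductory two-point example, the following simulation illustrates the β/2 barrier: whichever of the two candidate distributions generates the genuine samples, an adversary controlling a β fraction can drive the empirical frequency of 1s to 1/2, making the two cases look identical. The particular β, sample size, and seed are arbitrary choices for illustration.

```python
import random

def adversarial_view(p1, beta, n_samples, rng):
    """Draw i.i.d. Bernoulli(p1) samples, then let an adversary that controls a beta
    fraction of them push the overall empirical frequency of 1s toward 1/2."""
    genuine = [1 if rng.random() < p1 else 0 for _ in range(round((1 - beta) * n_samples))]
    n_adv = n_samples - len(genuine)
    ones_needed = round(n_samples / 2) - sum(genuine)    # the adversary sees the genuine samples
    ones_needed = max(0, min(n_adv, ones_needed))
    return (sum(genuine) + ones_needed) / n_samples      # empirical frequency after corruption

rng = random.Random(0)
beta, n_samples = 0.1, 100_000
for p1 in (0.5 - beta / 2, 0.5 + beta / 2):              # the two candidate distributions
    print(p1, adversarial_view(p1, beta, n_samples, rng))  # both empirical frequencies come out ~0.5
```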
They also derived a matching lower bound, showing that even for binary distributions, and hence for general finite distributions, given any number m of batches, the lowest achievable TV distance is ∆min := ∆min(β, n) := β 2 √ 2n . We refer to ∆min as the adversarial batch lower bound. Their estimator requires Ω( n+k n·∆2min ) batches of samples, or equivalently Ω(n+k ∆2min ) samples, which is not optimal if n >> k. It also runs in time exponential in the domain size, rendering it impractical. Recently, [CLM19] used a novel application of the sum-of-squares technique to reduce the exponential time complexity. Using quasi-polynomial sample size and run time, both roughly (k/∆)O(log(1/β)), they derived an estimator that achieves TV distanceO(∆), where ∆ := ∆(β, n) := ∆min · √ ln(1/β). Concurrently, [JO19] derived the first polynomial-time and optimal Ω(k/∆2) sample estimator, that achieves the same O(∆) TV distance. To limit the impact of adversarial batches, the algorithm filters the data by removing batches that skews the estimator. For general distributions, the sample complexity of both TV-distance estimation, and Bayes-optimal classification, grows linearly in the domain size, even when all samples are genuine. Hence, general estimation and classification over large discrete, let alone continuous domains, is infeasible. Since most modern applications are over very large or continuous domains, this may again lead to the pessimistic conclusion that not much can be done. Fortunately, typical distributions are not arbitrary and possess some structure. For example, they may be monotone, smooth, Lipchitz, etc., or well approximated by structured distributions. These structural properties enable learning over large and even infinite domains. For example, as is well known, classifiers can be learned using a number of samples proportional to the VC-dimension of the classifier class. But so far, our understanding of how to incorporate the distribution structure in Robust batch learning has been quite limited. The first application of structure to reduce the linear dependence of the sample complexity [CLM19] considered robust batch learning of t-piecewise degree-d polynomials over the finite set [k] = {1, . . . ,k}. It learned these distributions with number of samples that grows only quasipoly-logarithmically in the domain size k. Yet this number still grows with k, hence does not extend to continuous distributions. It is also quasi-polynomial in the other parameters t, d, batch size n, and 1/β, much larger than in the non-robust setting. And the algorithm’s computational complexity is quasi-polynomial in these parameters and the domain size k. This leaves several natural questions: (1) Can other non-finite, and even continuous, structured distribution classes, be learned robustly to an estimation error comparable to the adversarial batch lower ∆min? (2) Can it be achieved with sample complexity comparable to the non-adversarial learning? (3) Can robust learning of structured distributions be accomplished in strict polynomial time? (4) Even more generally can other tasks such as classification be accomplished with adversarial batches? (5) Most importantly, is there a general and systematic theory of learning with adversarial batches? 1.2 Summary of techniques and contributions VC theory helps answer some of the above questions when all the samples are generated i.i.d. from a distribution. We adapt the theory to address robust batch learning as well. 
Let F be a family of subsets of an Euclidean domain Ω. The F-distance between two distributions p and q over Ω is the largest difference between the probabilities p and q assign to any subset in F , ||p− q||F := supS∈F |p(S)− q(S)|. It is easy to see that TV, and hence L1, distances are a special case of F-distance where F is the collection Σ of all Borel subsets of Ω, ||p− q||Σ = ||p− q||TV = 12 ||p− q||1. Without adversarial batches, the VC inequality guarantees that for a subset family F with finite VC-dimension, the empirical distribution of samples from p estimates p to a small F-distance. But with adversarial batches, the F-distance between the empirical distribution and p could be large. For learning with adversarial batches over finite domains, [JO19] presented an algorithm that learns the distribution to a small TV distance with a number of batches proportional to the domain size. We generalize this algorithm to learn any finite-VC subset family F to a small F -distance using samples linear in the family’s VC-dimension, rather than the domain size. Recall that ∆min = β/(2 √ 2n) is the adversarial batch lower bound for TV-distance learning. No algorithm achieves an error below ∆min, even with the number of batches→ ∞. Since the ∆min lower bound applies even to binary domains, it can be shown to also lower bound F -distance learning. Our proposed algorithm filters the batches and returns a sub-collection of batches whose empirical distribution estimates p to F-distance O(∆), where ∆ = ∆min · √ log(1/β) is only a small factor above the lower bound. The number of batches it requires for any VC family F is only a logarithmic factor more than needed to achieve the same error without adversarial batches, showing that robustness can be incorporated at little extra cost. This provides the first demonstration that distributions can be learned (1) robustly and (2) sample-efficiently, over infinite, and even continuous domains. As expected from the setting’s vast generality, as in the non-adversarial setting, for some VC families, one cannot expect to find a computationally efficient algorithm. We, therefore, consider a natural and important VC family over the reals that, as we shall soon see, translates into efficient and robust algorithms for TV-learning and classification over R. Let Fk be the family of all unions of at most k intervals over R. We derive a computationally efficient algorithm that estimates distributions to Fk-distance O(∆) using only Õ(1/∆) times more samples than the non-adversarial, or information-theoretic adversarial cases. Building on these techniques, we return to estimation in total variation (TV) distance. We consider the family of distributions whose Yatracos Class [Yat85] have finite VC dimension. This family consists of both discrete and continuous distributions, and includes piecewise polynomials, Gaussians in one or more dimensions, and arguably most practical distribution families. We show that all these distributions can be learned robustly from batches to a TV distance O(∆), which is only a factor √ log(1/β) above the adversarial TV-distance lower bound of ∆min. It also achieves sample complexity that is at most a logarithmic factor more than required for non-adversarial case. These results too are very general, hence as in the non-adversarial case, one cannot expect a computationally efficient algorithm for all cases. We therefore consider the natural and important general class Pt,d of t-piecewise degree-d polynomial distributions over the reals. 
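For a finite domain, the F-distance introduced above is easy to compute directly, which also makes its relationship to TV distance visible: taking F to be all subsets recovers TV, while a smaller family such as single intervals can give a strictly smaller distance. The two distributions below are a made-up example.

```python
from itertools import chain, combinations

def f_distance(p, q, family):
    """||p - q||_F = max over S in `family` of |p(S) - q(S)|, for distributions over a finite domain."""
    prob = lambda d, S: sum(d.get(x, 0.0) for x in S)
    return max(abs(prob(p, S) - prob(q, S)) for S in family)

domain = [1, 2, 3, 4, 5, 6]
p = {1: 0.3, 2: 0.1, 3: 0.3, 4: 0.1, 5: 0.1, 6: 0.1}
q = {1: 0.1, 2: 0.3, 3: 0.1, 4: 0.3, 5: 0.1, 6: 0.1}

# finite-domain analogue of F_1: all single intervals {i, i+1, ..., j}
intervals = [set(domain[i:j]) for i in range(len(domain)) for j in range(i + 1, len(domain) + 1)]
# the family of all subsets recovers total-variation distance
all_subsets = [set(s) for s in
               chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))]

print(round(f_distance(p, q, intervals), 3))     # 0.2: no single interval separates the two bumps
print(round(f_distance(p, q, all_subsets), 3))   # 0.4: with all subsets, F-distance equals TV distance
```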
To agnostically learn distributions Pt,d, we combine the results above with an existing, nonadversarial, polynomial-learning algorithm [ADLS17]. We derive a polynomial-time algorithm for estimating polynomials in Pt,d to a TV distance O(∆). The algorithm’s sample complexity is linear in td, which is the best possible, and similar to learning in Fk-distance, only Õ(1/∆) times above the non-adversarial, or information-theoretic adversarial sample complexity. This is the first algorithm that achieves polynomial sample and time complexity for robust learning for this class, and the first that applies to the non-finite domains. The general formulation also allows us to use batch-structure for robustness in other learning tasks. We apply this framework to derive the first robust agnostic classifiers. The goal is to minimize the excess loss in comparison to the best hypothesis, in the presence of adversarial batches. We first modify the lower bound on distribution learning to show that any classification algorithm with adversarial batches must incur an excess loss O(∆min), even with the number of batches→∞. We then derive a general algorithm that achieves additive excess loss O(∆) for general binary classification using a number of samples that is again only a logarithmic factor larger than required to achieve the same excess loss in the non-adversarial setting. Finally, we consider classification over R. Many natural and practical classifiers have decision regions consisting of finitely many disjoint intervals. We apply the above results to derive a computationally efficient algorithm for hypotheses consisting of k intervals. Similar to previous results, its sample complexity is linear in k and only a factor O(1/∆) larger than required in the non-adversarial setup. The rest of the paper is organized as follows. Section 2 describes the main technical results and their applications to distribution estimation and classification. Section 3 discusses the other related work. Section 4 provides an overview of the filtering algorithm that enables these results. Proofs and more details are relegated to the appendix. 2 Results We consider learning from batches of samples, when a β−fraction of batches are adversarial. More precisely, B is a collection of m batches, composed of two unknown sub-collections. A good sub-collection BG ⊆ B of ≥ (1− β)m good batches, where each batch b consists of n independent samples from a common distribution p over Ω. And an adversarial sub-collection BA = B \BG of the remaining ≤ βm batches, each consisting of the same number n of arbitrary Ω elements, that for simplicity we call samples as well. Note that the adversarial samples may be chosen in any way, including after observing the good samples. The next subsection describes the main technical results for learning in F distance. Subsequent subsections apply these results to learn distributions in TV distance and to achieve robust binary classification. 2.1 Estimating distributions in F distance Our goal is to use samples generated by a target distribution p to approximate it to a small F -distance. For general families F , this goal cannot be accomplished even with just good batches. Let F = Σ be the collection of all subsets of the real interval domain Ω = [0, 1]. For any total number t of samples, with high probability, it is impossible to distinguish the uniform distribution over [0, 1] from a uniform discrete distribution over a random collection of t2 elements in [0, 1]. 
Hence any estimator must incur TV-distance 1 for some distribution. This difficulty is addressed by Vapnik-Chervonenkis (VC) Theory. The collection F shatters a subset S ⊆ Ω if every subset of S is the intersection of S with a subset in F . The VC-dimension VF of F is the size of the largest subset shattered by F . Let Xt = X1, . . . ,Xt, be i.i.d. samples from a distribution p. The empirical probability of S ⊆ Ω is p̄t(S) := |{i : Xi ∈ S}|/t. The fundamental Uniform deviation inequality of VC theory [VC71, Tal94] states that if F has finite VC-dimension VF , then p̄t estimates p well in F distance. For all δ > 0, with probability > 1− δ, ||p− p̄t||F ≤ O (√ (VF + log 1/δ)/t ) . The above is also the lowest achievable F-distance, hence we call it the information-theoretic limit. In the adversarial-batch scenario, a fraction β of the batches may be corrupted. It is easy to see that for any number m of batches, however large, the adversary can cause p̄t to approximate p to F-distance ≥ β/2, namely ||p̄t − p||F ≥ β/2. Let p̄B′ be the empirical distribution induced by the samples in a collection B′ ⊆ B. Our first result states that if F has a finite VC-dimension, for total samples m · n ≥ Õ(VF/∆2), the batches in B can be "cleaned" to a sub-collection B∗ where ||p− p̄B∗ ||F = O(∆), namely, a simple empirical estimator of the samples in B∗ recovers p to a small F-distance. Theorem 1. For any F , β ≤ 0.4, δ > 0, and mn ≥ Õ ( VF+log 1/δ ∆2 ) , there is an algorithm that w.p.≥1−δ returns a sub-collectionB∗⊆B s.t. |B∗∩BG| ≥ (1− β6 )|BG| and ||p− p̄B∗ ||F ≤ O(∆). The F -distance bound matches the lower bound ∆min up to a small O( √ log(1/β)) factor. The number m · n of samples required to achieve this estimation error are the same (up to a logarithmic factor) as the minimum required to achieve the same estimation error even for the non-adversarial setting. The theorem applies to all families with finite VC dimension, and like most other results of this generality, it is necessarily non-constructive in nature. Yet it provides a road map for constructing efficient algorithms for many specific natural problems. In Section 4 we use this approach to derive a polynomial-time algorithm that learns distributions with respect to one of the most important and practical VC classes, where Ω = R, and F = Fk is the collection of all unions of at most k intervals. Theorem 2. For any k > 0, β ≤ 0.4, δ > 0, and mn ≥ Õ ( k+log 1/δ ∆3 ) , there is an algorithm that runs in time polynomial in all parameters, and with probability ≥ 1 − δ returns a sub-collection B∗ ⊆ B, such that |B∗ ∩BG| ≥ (1− β6 )|BG| and ||p− p̄B∗ ||Fk ≤ O(∆). The above polynomial-time algorithm can achieve Fk error ∆ using the number of samples only Õ(1/∆) times the minimum required to achieve the same estimation error by any algorithm even for the non-adversarial setting. Note that the sample complexity in both Theorems 1 and 2 are independent of the domain size and depends linearly on the VC dimension of the subset family. Section 4 provides a short overview of the algorithms used in the above theorems. The complete algorithms and proof of the two theorems appear in the appendix. 2.2 Learning distributions in total-variation distance Our ultimate objective is to estimate the target distribution in total variation (TV) distance, one of the most common measures in distribution estimation. In this and the next subsection, we follow a framework developed in [DL01], see also [Dia16]. 
As noted earlier, the sample complexity of estimating distributions in TV-distance grows with the domain size, becoming infeasible for large discrete domains and impossible for continuous domains. A natural approach to address this intractability is to assume that the underlying distribution belongs to, or is near, a structured class P of distributions. Let optP(p) := infq∈P ||p− q||TV be the TV-distance of p from the closest distribution in P . For example, for p ∈ P , optP(p) = 0. Given , δ > 0, we try to use samples from p to find an estimate p̂ such that, with probability ≥ 1− δ, ||p− p̂||TV ≤ α · optP(p) + for a universal constant α≥1, namely, to approximate p about as well as the closest distribution in P . Following [DL01], we utilize a connection between distribution estimation and VC dimension. Let P be a class of distributions over Ω. The Yatracos class [Yat85] of P is the family of Ω subsets Y(P) := {{ω ∈ Ω : p(ω) ≥ q(ω)} : p, q ∈ P}. It is easy to verify that for distributions p, q ∈ P , ||p−q||TV = ||p−q||Y(P). The Yatracos minimizer of a distribution p is its closest distribution, by Y(P)-distance, in P , ψP(p) := arg min q∈P ||q − p||Y(P), where ties are broken arbitrarily. Theorem 6.3 in [DL01] uses these definitions and a sequence of triangle inequalities to show that for any distributions p, p′, and any distribution class P , ||p− ψP(p′)||TV ≤ 3 · optP(p) + 4||p− p′||Y(P). (1) Therefore, given a distribution p′ that approximates p in Y(P)-distance, its Yatracos minimizer ψP(p ′) approximates p in TV-distance. If the Yatracos class Y(P) has finite VC dimension, the VC-bound ensures that for the empirical distribution p̄t of t i.i.d. samples from p, ||p̄t − p||Y(P) decreases to zero as t increases, and ψP(p̄t) can be used to approximate p in TV-distance. This general method has led to many sample-and computationally-efficient algorithms for estimating structured distributions, e.g., [ADLS17]. However, as discussed earlier, with a β-fraction of adversarial batches, the empirical distribution of all samples can be at a Y(P)-distance as large as Θ(β) from p, leading to a large TV-distance. Yet Theorem 1 shows that data can be "cleaned" to remove outlier batches and retain batches B∗ ⊆ B whose empirical distribution p̄B∗ approximates p to a much smaller Y(P)-distance of O(∆). Letting p∗ = ψP(p̄B∗) and using Equation (1), we obtain a much better approximation of p in TV distance. Theorem 3. For a distribution class P with Yatracos Class of finite VC dimension v, for any β ≤ 0.4, δ > 0, and mn ≥ Õ ( v+log 1/δ ∆2 ) , there is an algorithm that w. p. ≥ 1−δ returns a distribution p∗ ∈ P such that ||p− p∗||TV ≤ 3 · optP(p) +O(∆). The estimation error achieved in the theorem for TV-distance matches the lower bound to a small log factor of O( √ log(1/β)), and is valid for any class P with finite VC Dimensional Yatracos Class. Moreover, the upper bound on the number of samples (or batches) required by the algorithm to estimate p to the above distance matches a similar general upper bound obtained for non-adversarial setting to a log factor. This results for the first time shows that it is possible to learn a wide variety of distributions robustly using batches, even over continuous domains. 2.3 Learning univariate structured distributions We apply the general results in the last two subsections to estimate distributions over the real line. We focus on one of the most studied, and important, distribution families, the class of piecewisepolynomial distributions. 
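On a finite domain, the Yatracos selection step ψP(p̄B∗) can be sketched directly: form the sets A_{p,q} = {ω : p(ω) ≥ q(ω)} for all candidate pairs and pick the candidate closest to the cleaned empirical estimate in Y(P)-distance. The three candidates and the noisy estimate below are hypothetical.

```python
def yatracos_select(candidates, p_bar, domain):
    """Pick the candidate minimizing its Y(P)-distance to the (cleaned) empirical
    distribution p_bar, using the Yatracos sets A_{p,q} = {w : p(w) >= q(w)}."""
    prob = lambda d, S: sum(d.get(x, 0.0) for x in S)
    yatracos_sets = [{x for x in domain if p.get(x, 0.0) >= q.get(x, 0.0)}
                     for p in candidates for q in candidates if p is not q]
    def y_dist(q):
        return max(abs(prob(q, S) - prob(p_bar, S)) for S in yatracos_sets)
    return min(candidates, key=y_dist)

# hypothetical example: three candidate distributions over {0,...,4} and a noisy empirical estimate
domain = range(5)
candidates = [
    {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.05, 4: 0.05},
    {0: 0.2, 1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2},
    {0: 0.05, 1: 0.05, 2: 0.2, 3: 0.3, 4: 0.4},
]
p_bar = {0: 0.36, 1: 0.28, 2: 0.22, 3: 0.08, 4: 0.06}
print(yatracos_select(candidates, p_bar, domain))   # picks the first candidate, the closest in Y(P)-distance
```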
A distribution p over [a, b] is t-piecewise, degree-d, if there is a partition of [a, b] into t intervals I1, . . . ,It, and degree-d polynomials r1, . . . ,rt such that ∀j and x ∈ Ij , p(x) = rj(x). The definition extends naturally to finite distributions over [k] = {1, . . . ,k}. Let Pt,d denote the collection of all t-piecewise degree d distributions. Pt,d is interesting in its own right, as it contains important distribution classes such as histograms. In addition, it approximates other important distribution classes, such as monotone, log-concave, Gaussians, and their mixtures, arbitrarily well, e.g., [ADLS17]. Note that for any two distributions p, q ∈ Pt,d, the difference p − q is a 2t-piecewise degree-d polynomial, hence every set in the Yatracos class of Pt,d, is the union of at most 2t · d intervals in R. Therefore, Y(Pt,d) ⊆ F2t·d. And since VFk = O(k) for any k, Y(Pt,d) has VC dimension O(td). Theorem 3 can then be applied to show that any target distribution p can be estimated by a distribution in Pt,d to a TV-distance ∆, using a number of samples, that is within a logarithmic factor from the minimum required [CDSS14] even when all samples are i.i.d. generated from p. Corollary 4. For any distribution p over R, t, d, β ≤ 0.4, δ > 0, and mn ≥ Õ ( td+log 1/δ ∆2 ) , there is an algorithm that with probability ≥ 1− δ returns a distribution p∗ ∈ Pt,d such that ||p− p∗||TV ≤ 3 · optPt,d(p) +O(∆). Next we provide a polynomial-time algorithm for estimating p to the same O(∆) TV-distance, but with an extra Õ(1/∆) factor in sample complexity. Theorem 2 provides a polynomial time algorithm that returns a sub-collection B∗ ⊆ B of batches whose empirical distribution p̄B∗ is close to p in F2td-distance. [ADLS17] provides a polynomial time algorithm that for any distribution q returns a distribution in p′ ∈ Pt,d minimizing ||p′ − q||F2td to a small additive error. Then equation (1) and Theorem 2 yield the following result. We provide formal proof of the theorem in the appendix. Theorem 5. For any distribution p over R, n, m, β≤ 0.4, t, d, δ > 0, and mn ≥ Õ ( td+log 1/δ ∆3 ) , there is a polynomial time algorithm that w. p. ≥ 1−δ returns a distribution p∗ ∈ Pt,d such that ||p− p∗||TV ≤ O(optPt,s(p)) +O(∆). 2.4 Binary classification Our framework extends beyond distribution estimation. Here we describe its application to Binary classification. Consider a family H : Ω → {0, 1} of Boolean functions, and a distribution p over Ω × {0, 1}. Let (X,Y ) ∼ p, where X ∈ Ω and Y ∈ {0, 1}. The loss of classifier h ∈ H for distribution p is rp(h) := Pr(X,Y )∼p[h(X) 6= Y ]. The optimal classifier for distribution p is hopt(p) := arg minh∈H rp(h), and hence the optimal loss is r opt p (H) := rp(hopt(p)). The goal is to return a classifier h ∈ H whose excess loss rp(h)− roptp (H) compared to the optimal loss is small. Consider the following natural extension of VC-dimension from families of subsets to families of Boolean functions. For a boolean-function familyH, define the family FH := {({ω ∈ Ω : h(ω) = y}, ȳ) : h ∈ H, y ∈ {0, 1}} of subsets of Ω× {0, 1}, and let the VC dimension ofH be VH := VFH . The next simple lemma, proved in the appendix, upper bounds the excess loss of the optimal classifier inH for a distribution q for another distribution p in terms of FH distance between the distributions. Lemma 6. For any classH and distributions p and q, rp(hopt(q))− roptp (H) ≤ 4||p− q||FH . When q is an empirical distribution of the samples, hopt(q) is called the empirical-risk minimizer. 
If q is the empirical distribution of the samples generated i.i.d. from p, from VC inequality, the excess loss of the empirical-risk minimizer in the above equation goes to zero if VC dimension ofH is finite. Yet as discussed earlier, when a β-fractions of the batches, and hence samples, are chosen by an adversary, the empirical distribution of all samples can be at a large FH-distance O(β) from p, leading to an excess-classification-loss up to Ω(β) for the empirical-risk minimizer. Theorem 1 states that the collection of batches can be "cleaned" to obtain a sub-collection B∗ ⊆ B whose empirical distribution has a lower FH-distance from p. The above lemma then implies that the optimal classifier hopt(p̄B∗) for the empirical distribution p̄B∗ of the cleaner batches will have a small-excess-classification-loss for p as well. The resulting non-constructive algorithm has excess-classification-loss and sample complexity that are optimal to a logarithmic factor. Theorem 7. For any H, p, β ≤ 0.4, δ > 0, and mn ≥ Õ ( VH+log 1/δ ∆2 ) , there is an algorithm that with probability ≥1−δ returns a classifier h∗, whose excess lose is rp(h∗)− roptp (H) ≤ O(∆). To complement this result, we show an information-theoretic lower bound of Ω(∆min) on the excess loss. The proof is in the appendix. Recall that a similar lower bound holds for learning distribution. Theorem 8. For any β, n, andH s.t. VH ≥ 1, there are a distribution p and an adversary, such that any algorithm, with probability ≥ 1/2, incurs an excess loss Ω(∆min), even as number of batches m→∞. To derive a computationally-efficient algorithm, we focus on the following class of binary functions. For k ≥ 1, letHk denote the collection of all binary functions over R whose decision region, namely values mapping to 0 or 1, consists of at most k-intervals. The VC dimension of FHk is clearly O(k). Theorem 2 describes a polynomial-time algorithm that returns a cleaner data w.r.t. FHk distance. From Lemma 6, the classifier that minimizes the loss for the empirical distribution of this cleaner data will have a small excess loss. Furthermore, [Maa94] derived a polynomial-time algorithm to find the empirical risk minimizer h ∈ Hk for any given samples. Combining these results, gives a robust computationally efficient classifier inHk. We provide a formal proof in the appendix. Theorem 9. For any k, p, β ≤ 0.4, δ > 0, and mn ≥ Õ ( k+log 1/δ ∆3 ) , there is a polynomial-time algorithm that w. p. ≥1−δ returns a classifier h∗, whose excess loss is rp(h∗)− roptp (Hk) ≤ O(∆). 3 Other related and concurrent work The current results extend several long lines of work on estimating structured distributions, including [O’B16, Dia16, AM18, ADLS17]. The results also relate to classical robust-statistics work [Tuk60, Hub92]. There has also been significant recent work leading to practical distribution learning algorithms that are robust to adversarial contamination of the data. For example, [DKK+16, LRV16] presented algorithms for learning the mean and covariance matrix of highdimensional sub-gaussian and other distributions with bounded fourth moments in presence of the adversarial samples. Their estimation guarantees are typically in terms of L2, and do not yield the L1- distance results required for discrete distributions. The work was extended in [CSV17] to the case when more than half of the samples are adversarial. Their algorithm returns a small set of candidate distributions one of which is a good approximate of the underlying distribution. 
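For the class Hk, empirical risk minimization over a one-dimensional labeled sample reduces to a simple dynamic program. The sketch below takes Hk to be the classifiers whose 1-region is a union of at most k intervals (one reading of the definition above); it is a plain O(mk) dynamic program over the sorted sample, not the algorithm of [Maa94], and the sample at the bottom is made up.

```python
def k_interval_erm_errors(points, k):
    """Minimum number of misclassified points over classifiers whose 1-region is a union
    of at most k intervals. `points` is a list of (x, y) pairs with y in {0, 1}."""
    labels = [y for _, y in sorted(points)]
    INF = float("inf")
    # state (j, c): j one-intervals opened so far, current region labeled c; value = errors so far
    dp = {(0, 0): 0}
    if k >= 1:
        dp[(1, 1)] = 0
    for y in labels:
        new_dp = {}
        for (j, c), err in dp.items():
            stay = err + (y != c)                      # stay in the current region
            if stay < new_dp.get((j, c), INF):
                new_dp[(j, c)] = stay
            if c == 1:                                 # close the current 1-interval
                close = err + (y != 0)
                if close < new_dp.get((j, 0), INF):
                    new_dp[(j, 0)] = close
            elif j < k:                                # open a new 1-interval
                open_ = err + (y != 1)
                if open_ < new_dp.get((j + 1, 1), INF):
                    new_dp[(j + 1, 1)] = open_
        dp = new_dp
    return min(dp.values())

# hypothetical labeled sample on the real line: two runs of label 1 separated by label 0
pts = [(0.1, 1), (0.2, 1), (0.35, 0), (0.5, 0), (0.6, 1), (0.7, 1), (0.9, 0)]
print(k_interval_erm_errors(pts, k=1))   # 2: a single interval cannot capture both 1-runs
print(k_interval_erm_errors(pts, k=2))   # 0: two intervals classify every point correctly
```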
The filtering-based method has also played a key role in other robust learning algorithms in high dimension [DKK+17, DKK+18, SCV17, DKK+19]. These works apply filtering to individual samples rather than to batches of samples, as in [JO19] and in this paper, and provide guarantees in a different metric. For a more extensive survey of robust learning algorithms see [SCV17, DKK+19]. Another motivation for this work derives from the practical federated-learning problem, where information arrives in batches [MMR+16, MR17].

Concurrent work. Concurrently with our work, [CLM20] also extends the filtering algorithm of [JO19] to obtain robust batch learning algorithms for estimating piecewise polynomials. They derive a polynomial-time algorithm that learns distributions in Pt,d over a finite domain [k] to the same TV distance O(∆) as we do, but requires Õ((td)² log³(k)/∆²) samples, where Õ hides a logarithmic factor in 1/∆. In contrast, our results show that this accuracy can be achieved using Õ(td/∆²) samples, and by a polynomial-time algorithm with sample complexity Õ(td/∆³). Importantly, our algorithms' complexity does not depend on the alphabet size [k], which allows us to extend them to more general non-finite and even continuous domains. In addition, we consider other distribution classes and learning tasks such as classification. Another concurrent work [KFAL20] focuses on the sample complexity of robust batch classification using adversarial batches. Their results achieve an excess loss of O(√VH · ∆), where VH is the VC dimension of the hypothesis class, whereas we achieve an excess loss of only O(∆).

4 Overview of the filtering framework for learning in F distance

To derive both the information-theoretic and computationally-efficient algorithms for general robust learning from batches, we generalize the filtering-based approach of [JO19] for finite domains. We first describe the original algorithm and outline how it can be extended to general learning problems. A more complete and formal presentation appears in the appendix.

Recall that B is the collection of all m batches and each batch b ∈ B has n samples from the domain Ω. A batch b estimates the probability p(S) of a subset S ∈ Σ by its empirical probability. Each subset S ∈ Σ assigns to every batch b ∈ B a corruption score ψb(S), defined in the appendix, based on how far the batch's estimate of p(S) is from the median of the estimates over all batches (a toy illustration of these scores and of the filtering loop they drive appears after the two properties below). Similarly, each subset S assigns to every sub-collection B′ ⊆ B of batches a corruption score ψB′(S) := ∑_{b∈B′} ψb(S), the sum of the individual corruption scores of its batches.

We first describe a general filtering approach to robust learning from batches. A collection C ⊆ Σ of subsets is learnable via filtering if one can "filter out" bad batches in B and find a "good" subset B∗ ⊆ B of batches that approximates p to a small C-distance,

||p − p̄B∗||C = max_{S∈C} |p(S) − p̄B∗(S)| ≤ O(∆).  (2)

We describe two properties ensuring that C is learnable via filtering. A finite C ⊆ Σ is learnable via filtering if there is a threshold τ such that all subsets S ∈ C and all sub-collections B′ ⊆ B that contain most good batches, namely |B′ ∩ BG| ≥ (1 − β/6)|BG|, satisfy the following two properties:

1. If the corruption score is low, ψB′(S) < τ, then B′ estimates p(S) well: |p(S) − p̄B′(S)| = O(∆).

2. If ψB′(S) > τ, then there is a (probabilistic) method that removes batches from B′, ensuring that each batch removed is adversarial with probability at least 0.95, until ψB′(S) < τ.
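As a toy, end-to-end illustration of the quantities just introduced, the sketch below uses a stand-in corruption score (squared deviation of each batch's estimate of p(S) from the median estimate; the paper's actual ψb(S) is defined in the appendix) and a naive version of the filtering loop described in the next paragraphs. The threshold τ, the removal probabilities, the threshold filters, and the synthetic data are all illustrative assumptions; the paper's efficient filter search is SDP-based and different.

```python
import numpy as np

def scores(batches, S):
    """Stand-in for the corruption scores psi_b(S): squared deviation of each
    batch's empirical estimate of p(S) from the median estimate over batches."""
    est = np.array([np.mean(S(b)) for b in batches])
    return (est - np.median(est)) ** 2

def filter_batches(batches, filters, tau, rng):
    """Naive filtering loop: while some filter S has total corruption above tau,
    delete batches with probability proportional to their individual scores."""
    keep = list(range(len(batches)))
    while True:
        worst = None
        for S in filters:
            sc = scores([batches[i] for i in keep], S)
            if sc.sum() > tau and (worst is None or sc.sum() > worst.sum()):
                worst = sc
        if worst is None:
            return keep
        probs = worst / worst.max()                # the highest-scoring batch is always removed
        keep = [i for i, q in zip(keep, probs) if rng.random() > q]

# 40 good batches of 50 samples from Uniform[0, 1]; 8 adversarial batches piled at 0.95.
rng = np.random.default_rng(0)
batches = [rng.uniform(0, 1, 50) for _ in range(40)] + [np.full(50, 0.95)] * 8
filters = [lambda b, a=a: b > a for a in (0.25, 0.5, 0.75, 0.9)]   # simple threshold filters
kept = filter_batches(batches, filters, tau=1.0, rng=rng)
print(len(kept), sum(i < 40 for i in kept))        # batches kept, and how many of them are good
```

On this toy data the loop removes essentially all adversarial batches while keeping almost all good ones, mirroring the guarantee that each deletion is adversarial with high probability.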
A simple algorithm shows that these two properties imply that C is learnable by filtering. Start with B′ = B, find a filter S ∈ C with ψB′(S) > τ, remove batches from B′ using the method in property 2, and repeat the process until the corruption is small, ψB′(S) < τ, for all filters in C. By property 2, each deleted batch is adversarial with probability > 0.95. Since there are at most βm adversarial batches, w.h.p. at most 0.1βm good batches are deleted. Consequently |B′ ∩ BG| ≥ (1 − β/6)|BG|. By property 1, when the algorithm ends, B∗ = B′ achieves (2).

While this algorithm describes the core of the technique, three significant challenges remain.

The above algorithm applies to finite classes C. However, the VC class F may be infinite, or even uncountable. To apply the algorithm we need to find a finite subset C such that learning in C-distance implies learning in F-distance. In the appendix, we prove an essential Robust Covering Theorem, showing that for an appropriate ε, letting C be an ε-cover of F under the empirical density p̄B suffices to learn p in F-distance. This is despite the fact that a fraction β of the batches in B may be adversarially chosen, and may even depend on the good samples.

The next key challenge is to show that the two properties hold for all subsets in the ε-cover. We establish this fact by showing that with sufficiently many batches, w.h.p., the two properties hold for all subsets S ∈ F. The proof requires addressing additional technical challenges, as the number of subsets in F could be infinite. Choosing any finite ε-cover C ⊆ F under the density p̄B therefore yields an information-theoretic algorithm with near-optimal sample complexity. This gives us the near-sample-optimal algorithm in Theorem 1.

However, computationally-efficient algorithms pose one additional challenge. The size of C may be exponential in the VC dimension, and hence searching for a subset in C with a high corruption score may be computationally infeasible. For the VC class Fk, we overcome this difficulty by choosing the set C of filters from a larger class than Fk itself, so that it still obeys the two properties but allows for an efficient search. Though C is chosen from a larger class, we ensure that the sample-complexity increase is small. Specifically, we let C be the collection of all subsets of a k′-partition of Ω, for an appropriate k′ that is linear in k. Subsets in such a cover C correspond to binary vectors in {0, 1}^k′. A novel semi-definite-programming-based algorithm derived in [JO19] finds a subset S ∈ C with nearly the highest corruption ψB′(S) in time only polynomial in k′. This allows us to obtain the polynomial-time algorithm in Theorem 2.

To summarize, this universal filtering approach allows us to "clean" the data and enables the general robust distribution estimators and classifiers we construct.

Remark. In some applications the distributions underlying genuine batches may differ from the common target distribution p by a small TV distance, say η > 0. For simplicity, in this paper we presented the analysis for η = 0, where all the good batches have the same distribution p. For η > 0, even for binary alphabets, [QV17] derived the adversarial batch lower bound of Ω(η + β/√n) on TV distance. And even the trivial empirical estimator achieves O(η + β) TV-error, which has the optimal linear dependence on η.
Therefore, filtering algorithms do not need to do anything sophisticated for general η and incur only an extra O(η) error, as noted in [JO19] for unstructured distributions; the same holds for our algorithms for learning structured distributions and for binary classification.

Broader impact

With the vast increase in data availability, data sources are often corrupt or untrustworthy. This untrusted data severely limits the efficacy of many learning algorithms, even when a vast amount of data is available. Yet in many applications the data is collected in batches. We consider two essential problems in machine learning, classification and distribution estimation. We show that in these applications the effect of data corruption diminishes with the batch size, and demonstrate how batch structure can be used to reduce the effect of corrupted or adversarial data, thereby paving a path to more reliable machine learning algorithms.

Acknowledgements

We thank Vaishakh Ravindrakumar and Yi Hao for helpful comments in the preparation of this manuscript, and the authors of the concurrent work [CLM20] for coordinating submission with us. We are grateful to the National Science Foundation (NSF) for supporting this work through grants CIF-1564355 and CIF-1619448.
1. What is the main contribution of the paper in the context of learning from untrusted batches? 2. What are the strengths of the paper in terms of its theoretical relevance and novelty? 3. What are the weaknesses of the paper regarding its assumptions and limitations?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper extends recent algorithmic work on the problem of learning from untrusted batches. The focus is on the setting where the underlying distribution has additional structure, in which case more sample-efficient algorithms are possible. This paper develops sample- and computationally-efficient algorithms for such settings. Strengths The paper presents novel theoretical results that are highly relevant for the machine learning community. At a high level, the paper combines, in a natural but non-trivial way, VC theory with the filtering framework (developed in prior work). Weaknesses One potential limitation is the assumption that each good batch contains samples from the distribution p -- as opposed to some distribution that is close to p in L1 distance. The original framework of Qiao-Valiant covers this setting and some of the previous efficient algorithms work in this setting as well. It is not immediately clear from the writeup if this is an inherent limitation of the methodology proposed in this paper.
NIPS
Title
A General Method for Robust Learning from Batches

Abstract
In many applications, data is collected in batches, some of which may be corrupt or even adversarial. Recent work derived optimal robust algorithms for estimating finite distributions in this setting. We develop a general framework of robust learning from batches, and determine the limits of both distribution estimation and, notably, classification, over arbitrary, including continuous, domains. Building on this framework, we derive the first robust agnostic: (1) polynomial-time distribution estimation algorithms for structured distributions, including piecewise-polynomial, monotone, log-concave, and Gaussian mixtures, and also significantly improve their sample complexity; (2) classification algorithms, and also establish their near-optimal sample complexity; (3) computationally-efficient algorithms for the fundamental problem of interval-based classification that underlies nearly all natural 1-dimensional classification problems.

1 Introduction

1.1 Motivation

In many learning applications, some samples are inadvertently or maliciously corrupted. A simple and intuitive example shows that this erroneous data limits the extent to which a distribution can be learned, even with infinitely many samples. Consider p that could be one of two possible binary distributions: (1/2 − β/2, 1/2 + β/2) and (1/2 + β/2, 1/2 − β/2). Given any number of samples from p, an adversary who observes a 1 − β fraction of the samples and can determine the rest could use the observed samples to learn p, and set the remaining samples so that the empirical distribution always appears to be (0.5, 0.5). Even with arbitrarily many samples, any estimator for p fails to decide which p is in effect, hence incurs a total-variation (TV) distance ≥ β/2, which we call the adversarial lower bound.

The example may seem to suggest the pessimistic conclusion that if an adversary can corrupt a β fraction of the data, a TV-loss of ≥ β/2 is inevitable. Fortunately, in many applications it can be avoided. In the following applications, and many others, data is collected in batches, most of which are genuine, but some possibly corrupted. Data may be gathered by sensors, each providing a large amount of data, and some sensors may be faulty. The word frequency of an author may be estimated from several large texts, some of which are mis-attributed. User preferences may be learned by querying several individuals, some intentionally biasing their feedback. Multiple agents may contribute to a crowd-sourcing platform, but some may be unreliable or malicious.

Interestingly, for data arriving in batches, even when a β-fraction of them are corrupted, more can be said. Recently, [QV17] formalized the problem for finite domains. They considered estimating a distribution p over [k] in TV-distance when the samples are provided in batches of size ≥ n. Out of a total of m batches, a fraction ≤ β may be arbitrarily and adversarially corrupted, while in every other batch b the samples are drawn according to a distribution p. For β < 1/900, they derived an estimation algorithm that approximates any p over a finite domain to TV-distance O(β/√n), surprisingly much lower than the individual-samples limit of Θ(β).
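The following small simulation, with purely illustrative parameters, replays the lower-bound construction above: whichever of the two candidate distributions generated the data, an adversary who controls a β-fraction of the samples can push the empirical frequency of each symbol to 1/2, leaving the two hypotheses statistically indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, t = 0.1, 100_000                      # adversarial fraction and sample size (illustrative)

for sign in (+1, -1):                       # the two candidate distributions (1/2 +- beta/2, 1/2 -+ beta/2)
    p1 = 0.5 + sign * beta / 2              # probability of symbol "1"
    x = rng.binomial(1, p1, t)
    # The adversary may rewrite up to beta*t samples; it flips just enough of
    # them to bring the count of "1"s back to exactly t/2.
    need = abs(int(x.sum()) - t // 2)
    flips = min(need, int(beta * t))
    if x.sum() > t // 2:
        x[np.flatnonzero(x == 1)[:flips]] = 0
    else:
        x[np.flatnonzero(x == 0)[:flips]] = 1
    print(sign, x.mean())                   # ~0.5 in both cases: no estimator can tell them apart
```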
[QV17] also derived a matching lower bound, showing that even for binary distributions, and hence for general finite distributions, given any number m of batches, the lowest achievable TV distance is ∆min := ∆min(β, n) := β/(2√(2n)). We refer to ∆min as the adversarial batch lower bound. Their estimator requires Ω((n + k)/(n · ∆²min)) batches of samples, or equivalently Ω((n + k)/∆²min) samples, which is not optimal if n ≫ k. It also runs in time exponential in the domain size, rendering it impractical.

Recently, [CLM19] used a novel application of the sum-of-squares technique to reduce the exponential time complexity. Using quasi-polynomial sample size and run time, both roughly (k/∆)^O(log(1/β)), they derived an estimator that achieves TV distance O(∆), where ∆ := ∆(β, n) := ∆min · √(ln(1/β)). Concurrently, [JO19] derived the first polynomial-time estimator that uses an optimal Ω(k/∆²) samples and achieves the same O(∆) TV distance. To limit the impact of adversarial batches, the algorithm filters the data by removing batches that skew the estimator.

For general distributions, the sample complexity of both TV-distance estimation and Bayes-optimal classification grows linearly in the domain size, even when all samples are genuine. Hence, general estimation and classification over large discrete, let alone continuous, domains is infeasible. Since most modern applications are over very large or continuous domains, this may again lead to the pessimistic conclusion that not much can be done. Fortunately, typical distributions are not arbitrary and possess some structure. For example, they may be monotone, smooth, Lipschitz, etc., or well approximated by structured distributions. These structural properties enable learning over large and even infinite domains. For example, as is well known, classifiers can be learned using a number of samples proportional to the VC-dimension of the classifier class. But so far, our understanding of how to incorporate the distribution structure in robust batch learning has been quite limited.

The first application of structure to reduce the linear dependence of the sample complexity [CLM19] considered robust batch learning of t-piecewise degree-d polynomials over the finite set [k] = {1, . . . , k}. It learned these distributions with a number of samples that grows only quasi-polylogarithmically in the domain size k. Yet this number still grows with k, hence does not extend to continuous distributions. It is also quasi-polynomial in the other parameters t, d, batch size n, and 1/β, much larger than in the non-robust setting. And the algorithm's computational complexity is quasi-polynomial in these parameters and the domain size k.

This leaves several natural questions: (1) Can other non-finite, and even continuous, structured distribution classes be learned robustly to an estimation error comparable to the adversarial batch lower bound ∆min? (2) Can it be achieved with sample complexity comparable to that of non-adversarial learning? (3) Can robust learning of structured distributions be accomplished in strict polynomial time? (4) Even more generally, can other tasks such as classification be accomplished with adversarial batches? (5) Most importantly, is there a general and systematic theory of learning with adversarial batches?

1.2 Summary of techniques and contributions

VC theory helps answer some of the above questions when all the samples are generated i.i.d. from a distribution. We adapt the theory to address robust batch learning as well.
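For a sense of scale of the bounds discussed above, here is a quick computation, with illustrative values of β and n, of the adversarial batch lower bound ∆min = β/(2√(2n)) and of ∆ = ∆min · √(ln(1/β)), compared with the β/2 barrier that holds without batch structure.

```python
import math

def delta_min(beta, n):
    """Adversarial batch lower bound: beta / (2 * sqrt(2n))."""
    return beta / (2 * math.sqrt(2 * n))

def delta(beta, n):
    """Achievable error: Delta = Delta_min * sqrt(ln(1/beta))."""
    return delta_min(beta, n) * math.sqrt(math.log(1 / beta))

beta, n = 0.05, 1000                       # 5% adversarial batches of 1000 samples each
print(delta_min(beta, n))                  # ~0.00056
print(delta(beta, n))                      # ~0.00097
print(beta / 2)                            # 0.025 -- the limit without batch structure
```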
Let F be a family of subsets of a Euclidean domain Ω. The F-distance between two distributions p and q over Ω is the largest difference between the probabilities p and q assign to any subset in F, ||p − q||F := sup_{S∈F} |p(S) − q(S)|. It is easy to see that TV, and hence L1, distances are a special case of F-distance where F is the collection Σ of all Borel subsets of Ω: ||p − q||Σ = ||p − q||TV = (1/2)||p − q||1.

Without adversarial batches, the VC inequality guarantees that for a subset family F with finite VC-dimension, the empirical distribution of samples from p estimates p to a small F-distance. But with adversarial batches, the F-distance between the empirical distribution and p could be large. For learning with adversarial batches over finite domains, [JO19] presented an algorithm that learns the distribution to a small TV distance with a number of batches proportional to the domain size. We generalize this algorithm to learn any finite-VC subset family F to a small F-distance using a number of samples linear in the family's VC-dimension, rather than the domain size.

Recall that ∆min = β/(2√(2n)) is the adversarial batch lower bound for TV-distance learning. No algorithm achieves an error below ∆min, even with the number of batches → ∞. Since the ∆min lower bound applies even to binary domains, it can be shown to also lower bound F-distance learning. Our proposed algorithm filters the batches and returns a sub-collection of batches whose empirical distribution estimates p to F-distance O(∆), where ∆ = ∆min · √(log(1/β)) is only a small factor above the lower bound. The number of batches it requires for any VC family F is only a logarithmic factor more than needed to achieve the same error without adversarial batches, showing that robustness can be incorporated at little extra cost. This provides the first demonstration that distributions can be learned (1) robustly and (2) sample-efficiently over infinite, and even continuous, domains.

As expected from the setting's vast generality, as in the non-adversarial setting, for some VC families one cannot expect to find a computationally efficient algorithm. We therefore consider a natural and important VC family over the reals that, as we shall soon see, translates into efficient and robust algorithms for TV-learning and classification over R. Let Fk be the family of all unions of at most k intervals over R. We derive a computationally efficient algorithm that estimates distributions to Fk-distance O(∆) using only Õ(1/∆) times more samples than the non-adversarial, or information-theoretic adversarial, cases.

Building on these techniques, we return to estimation in total variation (TV) distance. We consider the family of distributions whose Yatracos class [Yat85] has finite VC dimension. This family consists of both discrete and continuous distributions, and includes piecewise polynomials, Gaussians in one or more dimensions, and arguably most practical distribution families. We show that all these distributions can be learned robustly from batches to a TV distance O(∆), which is only a factor √(log(1/β)) above the adversarial TV-distance lower bound of ∆min. It also achieves sample complexity that is at most a logarithmic factor more than required for the non-adversarial case.

These results too are very general, hence as in the non-adversarial case, one cannot expect a computationally efficient algorithm for all cases. We therefore consider the natural and important general class Pt,d of t-piecewise degree-d polynomial distributions over the reals.
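Before turning to the piecewise-polynomial class, here is a tiny concrete example, with toy distributions and a brute-force maximization, of the F-distance just defined: taking F to be single intervals gives a weaker distance than taking F to be all subsets, which recovers the TV distance.

```python
import itertools

def f_distance(p, q, family):
    """||p - q||_F = max over S in `family` of |p(S) - q(S)|, for discrete
    distributions p and q given as dicts over a small finite domain."""
    prob = lambda d, S: sum(d.get(x, 0.0) for x in S)
    return max(abs(prob(p, S) - prob(q, S)) for S in family)

p = {1: 0.40, 2: 0.10, 3: 0.40, 4: 0.10}
q = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

intervals = [set(range(a, b + 1)) for a in p for b in p if a <= b]          # F_1: single intervals
all_sets = [set(s) for r in range(5) for s in itertools.combinations(p, r)] # Sigma: every subset
print(f_distance(p, q, intervals))   # 0.15
print(f_distance(p, q, all_sets))    # 0.30  (= TV distance, attained by {1, 3})
```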
To agnostically learn distributions in Pt,d, we combine the results above with an existing, non-adversarial, polynomial-learning algorithm [ADLS17]. We derive a polynomial-time algorithm for estimating polynomials in Pt,d to a TV distance O(∆). The algorithm's sample complexity is linear in td, which is the best possible, and, similar to learning in Fk-distance, only Õ(1/∆) times above the non-adversarial, or information-theoretic adversarial, sample complexity. This is the first algorithm that achieves polynomial sample and time complexity for robust learning of this class, and the first that applies to non-finite domains.

The general formulation also allows us to use batch structure for robustness in other learning tasks. We apply this framework to derive the first robust agnostic classifiers. The goal is to minimize the excess loss in comparison to the best hypothesis, in the presence of adversarial batches. We first modify the lower bound on distribution learning to show that any classification algorithm with adversarial batches must incur an excess loss Ω(∆min), even with the number of batches → ∞. We then derive a general algorithm that achieves additive excess loss O(∆) for general binary classification using a number of samples that is again only a logarithmic factor larger than required to achieve the same excess loss in the non-adversarial setting.

Finally, we consider classification over R. Many natural and practical classifiers have decision regions consisting of finitely many disjoint intervals. We apply the above results to derive a computationally efficient algorithm for hypotheses consisting of k intervals. Similar to the previous results, its sample complexity is linear in k and only a factor O(1/∆) larger than required in the non-adversarial setup.

The rest of the paper is organized as follows. Section 2 describes the main technical results and their applications to distribution estimation and classification. Section 3 discusses other related work. Section 4 provides an overview of the filtering algorithm that enables these results. Proofs and more details are relegated to the appendix.

2 Results

We consider learning from batches of samples, when a β-fraction of the batches are adversarial. More precisely, B is a collection of m batches, composed of two unknown sub-collections: a good sub-collection BG ⊆ B of ≥ (1 − β)m good batches, where each batch b consists of n independent samples from a common distribution p over Ω, and an adversarial sub-collection BA = B \ BG of the remaining ≤ βm batches, each consisting of the same number n of arbitrary Ω elements, which for simplicity we call samples as well. Note that the adversarial samples may be chosen in any way, including after observing the good samples.

The next subsection describes the main technical results for learning in F distance. Subsequent subsections apply these results to learn distributions in TV distance and to achieve robust binary classification.

2.1 Estimating distributions in F distance

Our goal is to use samples generated by a target distribution p to approximate it to a small F-distance. For general families F, this goal cannot be accomplished even with just good batches. Let F = Σ be the collection of all subsets of the real interval domain Ω = [0, 1]. For any total number t of samples, with high probability, it is impossible to distinguish the uniform distribution over [0, 1] from a uniform discrete distribution over a random collection of t² elements in [0, 1].
Hence any estimator must incur TV-distance 1 for some distribution. This difficulty is addressed by Vapnik-Chervonenkis (VC) theory. The collection F shatters a subset S ⊆ Ω if every subset of S is the intersection of S with a subset in F. The VC-dimension VF of F is the size of the largest subset shattered by F. Let Xt = X1, . . . , Xt be i.i.d. samples from a distribution p. The empirical probability of S ⊆ Ω is p̄t(S) := |{i : Xi ∈ S}|/t. The fundamental uniform deviation inequality of VC theory [VC71, Tal94] states that if F has finite VC-dimension VF, then p̄t estimates p well in F-distance: for all δ > 0, with probability > 1 − δ,

||p − p̄t||F ≤ O(√((VF + log(1/δ))/t)).

The above is also the lowest achievable F-distance, hence we call it the information-theoretic limit. In the adversarial-batch scenario, a fraction β of the batches may be corrupted. It is easy to see that for any number m of batches, however large, the adversary can cause p̄t to approximate p to F-distance ≥ β/2, namely ||p̄t − p||F ≥ β/2.

Let p̄B′ be the empirical distribution induced by the samples in a collection B′ ⊆ B. Our first result states that if F has a finite VC-dimension, then for a total number of samples m · n ≥ Õ(VF/∆²), the batches in B can be "cleaned" to a sub-collection B∗ where ||p − p̄B∗||F = O(∆), namely, a simple empirical estimator of the samples in B∗ recovers p to a small F-distance.

Theorem 1. For any F, β ≤ 0.4, δ > 0, and mn ≥ Õ((VF + log(1/δ))/∆²), there is an algorithm that w.p. ≥ 1 − δ returns a sub-collection B∗ ⊆ B s.t. |B∗ ∩ BG| ≥ (1 − β/6)|BG| and ||p − p̄B∗||F ≤ O(∆).

The F-distance bound matches the lower bound ∆min up to a small O(√(log(1/β))) factor. The number m · n of samples required to achieve this estimation error is the same (up to a logarithmic factor) as the minimum required to achieve the same estimation error even in the non-adversarial setting. The theorem applies to all families with finite VC dimension, and like most other results of this generality, it is necessarily non-constructive in nature. Yet it provides a road map for constructing efficient algorithms for many specific natural problems. In Section 4 we use this approach to derive a polynomial-time algorithm that learns distributions with respect to one of the most important and practical VC classes, where Ω = R and F = Fk is the collection of all unions of at most k intervals.

Theorem 2. For any k > 0, β ≤ 0.4, δ > 0, and mn ≥ Õ((k + log(1/δ))/∆³), there is an algorithm that runs in time polynomial in all parameters, and with probability ≥ 1 − δ returns a sub-collection B∗ ⊆ B such that |B∗ ∩ BG| ≥ (1 − β/6)|BG| and ||p − p̄B∗||Fk ≤ O(∆).

The above polynomial-time algorithm can achieve Fk error ∆ using a number of samples only Õ(1/∆) times the minimum required to achieve the same estimation error by any algorithm, even in the non-adversarial setting. Note that the sample complexity in both Theorems 1 and 2 is independent of the domain size and depends linearly on the VC dimension of the subset family. Section 4 provides a short overview of the algorithms used in the above theorems. The complete algorithms and proofs of the two theorems appear in the appendix.

2.2 Learning distributions in total-variation distance

Our ultimate objective is to estimate the target distribution in total variation (TV) distance, one of the most common measures in distribution estimation. In this and the next subsection, we follow a framework developed in [DL01], see also [Dia16].
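The following toy experiment, with an illustrative grid-based proxy for the F1-distance and a simplistic point-mass adversary, makes the two phenomena above concrete: the first number reflects the uniform-deviation inequality (the good samples alone estimate p = Uniform[0, 1] well over intervals), while the second shows the Θ(β) distortion a β-fraction of adversarial batches can inflict on the all-sample empirical distribution, which is exactly what the cleaning in Theorems 1 and 2 repairs.

```python
import numpy as np

def interval_deviation(samples, grid):
    """max over grid intervals [a, b] of |p([a,b]) - p_hat([a,b])| for p = Uniform[0, 1]."""
    dev = 0.0
    for i, a in enumerate(grid):
        for b in grid[i + 1:]:
            dev = max(dev, abs((b - a) - np.mean((samples >= a) & (samples <= b))))
    return dev

rng = np.random.default_rng(0)
beta, m, n = 0.2, 100, 50
good = rng.uniform(0, 1, (int((1 - beta) * m), n))           # good batches: i.i.d. Uniform[0, 1]
bad = np.full((int(beta * m), n), 0.95)                       # adversarial batches, all mass at 0.95
grid = np.linspace(0, 1, 41)
print(interval_deviation(good.ravel(), grid))                 # small: roughly sqrt(V_F / t)
print(interval_deviation(np.vstack([good, bad]).ravel(), grid))   # large: on the order of beta
```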
As noted earlier, the sample complexity of estimating distributions in TV-distance grows with the domain size, becoming infeasible for large discrete domains and impossible for continuous domains. A natural approach to address this intractability is to assume that the underlying distribution belongs to, or is near, a structured class P of distributions. Let optP(p) := inf_{q∈P} ||p − q||TV be the TV-distance of p from the closest distribution in P. For example, for p ∈ P, optP(p) = 0. Given ε, δ > 0, we try to use samples from p to find an estimate p̂ such that, with probability ≥ 1 − δ, ||p − p̂||TV ≤ α · optP(p) + ε for a universal constant α ≥ 1, namely, to approximate p about as well as the closest distribution in P.

Following [DL01], we utilize a connection between distribution estimation and VC dimension. Let P be a class of distributions over Ω. The Yatracos class [Yat85] of P is the family of subsets of Ω given by Y(P) := {{ω ∈ Ω : p(ω) ≥ q(ω)} : p, q ∈ P}. It is easy to verify that for distributions p, q ∈ P, ||p − q||TV = ||p − q||Y(P). The Yatracos minimizer of a distribution p is its closest distribution, by Y(P)-distance, in P, ψP(p) := arg min_{q∈P} ||q − p||Y(P), where ties are broken arbitrarily. Theorem 6.3 in [DL01] uses these definitions and a sequence of triangle inequalities to show that for any distributions p, p′, and any distribution class P,

||p − ψP(p′)||TV ≤ 3 · optP(p) + 4||p − p′||Y(P).  (1)

Therefore, given a distribution p′ that approximates p in Y(P)-distance, its Yatracos minimizer ψP(p′) approximates p in TV-distance. If the Yatracos class Y(P) has finite VC dimension, the VC bound ensures that for the empirical distribution p̄t of t i.i.d. samples from p, ||p̄t − p||Y(P) decreases to zero as t increases, and ψP(p̄t) can be used to approximate p in TV-distance. This general method has led to many sample- and computationally-efficient algorithms for estimating structured distributions, e.g., [ADLS17].

However, as discussed earlier, with a β-fraction of adversarial batches, the empirical distribution of all samples can be at a Y(P)-distance as large as Θ(β) from p, leading to a large TV-distance. Yet Theorem 1 shows that the data can be "cleaned" to remove outlier batches and retain batches B∗ ⊆ B whose empirical distribution p̄B∗ approximates p to a much smaller Y(P)-distance of O(∆). Letting p∗ = ψP(p̄B∗) and using Equation (1), we obtain a much better approximation of p in TV distance.

Theorem 3. For a distribution class P with Yatracos class of finite VC dimension v, for any β ≤ 0.4, δ > 0, and mn ≥ Õ((v + log(1/δ))/∆²), there is an algorithm that w.p. ≥ 1 − δ returns a distribution p∗ ∈ P such that ||p − p∗||TV ≤ 3 · optP(p) + O(∆).

The estimation error achieved in the theorem for TV-distance matches the lower bound up to a small O(√(log(1/β))) factor, and is valid for any class P whose Yatracos class has finite VC dimension. Moreover, the upper bound on the number of samples (or batches) required by the algorithm to estimate p to the above distance matches, up to a log factor, a similar general upper bound obtained for the non-adversarial setting. This result shows for the first time that it is possible to learn a wide variety of distributions robustly using batches, even over continuous domains.

2.3 Learning univariate structured distributions

We apply the general results in the last two subsections to estimate distributions over the real line. We focus on one of the most studied and important distribution families, the class of piecewise-polynomial distributions.
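The Yatracos minimizer above can be implemented directly when P is a finite list of candidate distributions on a finite domain. The following minimal sketch, with toy candidates and data, selects the candidate whose probabilities on the Yatracos sets best match the empirical distribution; Theorem 3's algorithm combines such a selection step with the batch cleaning of Theorem 1, which this sketch does not include.

```python
import numpy as np

def yatracos_select(candidates, emp):
    """Minimum-distance (Yatracos) selection over a finite list of candidate
    distributions on a finite domain; `emp` is the empirical distribution.
    All distributions are numpy arrays over the same domain."""
    sets = [qi >= qj for qi in candidates for qj in candidates if qi is not qj]
    ydist = lambda q: max(abs(q[A].sum() - emp[A].sum()) for A in sets)
    return min(range(len(candidates)), key=lambda i: ydist(candidates[i]))

# Three candidates over a 5-symbol domain; the data is drawn from the middle one.
cands = [np.array([.40, .30, .20, .05, .05]),
         np.array([.20, .20, .20, .20, .20]),
         np.array([.05, .05, .20, .30, .40])]
rng = np.random.default_rng(0)
data = rng.choice(5, size=2000, p=cands[1])
emp = np.bincount(data, minlength=5) / len(data)
print(yatracos_select(cands, emp))          # expected: 1
```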
1. What is the focus and contribution of the paper on learning from batches? 2. What are the strengths of the proposed filter algorithm, particularly in terms of its ability to "clean" batches? 3. What are the weaknesses of the paper, especially regarding the similarity between the current work and prior works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper generalizes the filter algorithm proposed in [JO19] for learning from batches from discrete distribution learning to continuous domains. The main technical lemma shows that the proposed filter algorithm can "clean" the batches to guarantee uniform convergence with near-optimal sample complexity. This lemma is leveraged for learning piecewise polynomial distributions and for binary classification. Though the result is generally statistical in nature and does not yield a computationally efficient algorithm, the paper leverages an efficient learning algorithm for unions of intervals to give efficient algorithms for learning piecewise polynomial distributions and union-of-intervals classifiers. Strengths This paper generalizes the filter algorithm proposed in [JO19] for learning from batches from discrete distribution learning to continuous domains, and provides the first efficient robust batch learning algorithms for several fundamental learning problems. Weaknesses The proof for the sample-efficient filter for general VC classes follows a technical approach very similar to that of [JO19].
Title A General Method for Robust Learning from Batches Abstract In many applications, data is collected in batches, some of which may be corrupt or even adversarial. Recent work derived optimal robust algorithms for estimating finite distributions in this setting. We develop a general framework of robust learning from batches, and determine the limits of both distribution estimation, and notably, classification, over arbitrary, including continuous, domains. Building on this framework, we derive the first robust agnostic: (1) polynomial-time distribution estimation algorithms for structured distributions, including piecewisepolynomial, monotone, log-concave, and gaussian-mixtures, and also significantly improve their sample complexity; (2) classification algorithms, and also establish their near-optimal sample complexity; (3) computationally-efficient algorithms for the fundamental problem of interval-based classification that underlies nearly all natural-1-dimensional classification problems. 1 Introduction 1.1 Motivation In many learning applications, some samples are inadvertently or maliciously corrupted. A simple and intuitive example shows that this erroneous data limits the extent to which a distribution can be learned, even with infinitely many samples. Consider p that could be one of two possible binary distributions: ( 12 − β 2 , 1 2 + β 2 ) and ( 1 2 + β 2 , 1 2 − β 2 ). Given any number of samples from p, an adversary who observes a 1 − β fraction of the samples and can determine the rest, could use the observed samples to learn p, and set the remaining samples to make the distribution always appear to be (0.5, 0.5). Even with arbitrarily many samples, any estimator for p fails to decide which p is in effect, hence incurs a total-variation (TV) distance ≥ β2 , that we call the adversarial lower bound. The example may seem to suggest the pessimistic conclusion that if an adversary can corrupt a β fraction of the data, a TV-loss of≥ β2 is inevitable. Fortunately, in many applications it can be avoided. In the following applications, and many others, data is collected in batches, most of which are genuine, but some possibly corrupted. Data may be gathered by sensors, each providing a large amount of data, and some sensors may be faulty. The word frequency of an author may be estimated from several large texts, some of which are mis-attributed. User preferences may be learned by querying several individuals, some intentionally biasing their feedback. Multiple agents may contribute to a crowd-sourcing platform, but some may be unreliable or malicious. Interestingly, for data arriving in batches, even when a β-fraction of which are corrupted, more can be said. Recently, [QV17] formalized the problem for finite domains. They considered estimating a distribution p over [k] in TV-distance when the samples are provided in batches of size ≥ n. Out of a total of m batches, a fraction ≤ β may be arbitrarily and adversarially corrupted, while in every other batch b the samples are drawn according to a distribution p. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. For β<1/900, they derived an estimation algorithm that approximates any p over a finite domain to TV-distance O(β/ √ n), surprisingly, much lower than the individual samples limit of Θ(β). 
They also derived a matching lower bound, showing that even for binary distributions, and hence for general finite distributions, given any number m of batches, the lowest achievable TV distance is ∆min := ∆min(β, n) := β 2 √ 2n . We refer to ∆min as the adversarial batch lower bound. Their estimator requires Ω( n+k n·∆2min ) batches of samples, or equivalently Ω(n+k ∆2min ) samples, which is not optimal if n >> k. It also runs in time exponential in the domain size, rendering it impractical. Recently, [CLM19] used a novel application of the sum-of-squares technique to reduce the exponential time complexity. Using quasi-polynomial sample size and run time, both roughly (k/∆)O(log(1/β)), they derived an estimator that achieves TV distanceO(∆), where ∆ := ∆(β, n) := ∆min · √ ln(1/β). Concurrently, [JO19] derived the first polynomial-time and optimal Ω(k/∆2) sample estimator, that achieves the same O(∆) TV distance. To limit the impact of adversarial batches, the algorithm filters the data by removing batches that skews the estimator. For general distributions, the sample complexity of both TV-distance estimation, and Bayes-optimal classification, grows linearly in the domain size, even when all samples are genuine. Hence, general estimation and classification over large discrete, let alone continuous domains, is infeasible. Since most modern applications are over very large or continuous domains, this may again lead to the pessimistic conclusion that not much can be done. Fortunately, typical distributions are not arbitrary and possess some structure. For example, they may be monotone, smooth, Lipchitz, etc., or well approximated by structured distributions. These structural properties enable learning over large and even infinite domains. For example, as is well known, classifiers can be learned using a number of samples proportional to the VC-dimension of the classifier class. But so far, our understanding of how to incorporate the distribution structure in Robust batch learning has been quite limited. The first application of structure to reduce the linear dependence of the sample complexity [CLM19] considered robust batch learning of t-piecewise degree-d polynomials over the finite set [k] = {1, . . . ,k}. It learned these distributions with number of samples that grows only quasipoly-logarithmically in the domain size k. Yet this number still grows with k, hence does not extend to continuous distributions. It is also quasi-polynomial in the other parameters t, d, batch size n, and 1/β, much larger than in the non-robust setting. And the algorithm’s computational complexity is quasi-polynomial in these parameters and the domain size k. This leaves several natural questions: (1) Can other non-finite, and even continuous, structured distribution classes, be learned robustly to an estimation error comparable to the adversarial batch lower ∆min? (2) Can it be achieved with sample complexity comparable to the non-adversarial learning? (3) Can robust learning of structured distributions be accomplished in strict polynomial time? (4) Even more generally can other tasks such as classification be accomplished with adversarial batches? (5) Most importantly, is there a general and systematic theory of learning with adversarial batches? 1.2 Summary of techniques and contributions VC theory helps answer some of the above questions when all the samples are generated i.i.d. from a distribution. We adapt the theory to address robust batch learning as well. 
Let F be a family of subsets of an Euclidean domain Ω. The F-distance between two distributions p and q over Ω is the largest difference between the probabilities p and q assign to any subset in F , ||p− q||F := supS∈F |p(S)− q(S)|. It is easy to see that TV, and hence L1, distances are a special case of F-distance where F is the collection Σ of all Borel subsets of Ω, ||p− q||Σ = ||p− q||TV = 12 ||p− q||1. Without adversarial batches, the VC inequality guarantees that for a subset family F with finite VC-dimension, the empirical distribution of samples from p estimates p to a small F-distance. But with adversarial batches, the F-distance between the empirical distribution and p could be large. For learning with adversarial batches over finite domains, [JO19] presented an algorithm that learns the distribution to a small TV distance with a number of batches proportional to the domain size. We generalize this algorithm to learn any finite-VC subset family F to a small F -distance using samples linear in the family’s VC-dimension, rather than the domain size. Recall that ∆min = β/(2 √ 2n) is the adversarial batch lower bound for TV-distance learning. No algorithm achieves an error below ∆min, even with the number of batches→ ∞. Since the ∆min lower bound applies even to binary domains, it can be shown to also lower bound F -distance learning. Our proposed algorithm filters the batches and returns a sub-collection of batches whose empirical distribution estimates p to F-distance O(∆), where ∆ = ∆min · √ log(1/β) is only a small factor above the lower bound. The number of batches it requires for any VC family F is only a logarithmic factor more than needed to achieve the same error without adversarial batches, showing that robustness can be incorporated at little extra cost. This provides the first demonstration that distributions can be learned (1) robustly and (2) sample-efficiently, over infinite, and even continuous domains. As expected from the setting’s vast generality, as in the non-adversarial setting, for some VC families, one cannot expect to find a computationally efficient algorithm. We, therefore, consider a natural and important VC family over the reals that, as we shall soon see, translates into efficient and robust algorithms for TV-learning and classification over R. Let Fk be the family of all unions of at most k intervals over R. We derive a computationally efficient algorithm that estimates distributions to Fk-distance O(∆) using only Õ(1/∆) times more samples than the non-adversarial, or information-theoretic adversarial cases. Building on these techniques, we return to estimation in total variation (TV) distance. We consider the family of distributions whose Yatracos Class [Yat85] have finite VC dimension. This family consists of both discrete and continuous distributions, and includes piecewise polynomials, Gaussians in one or more dimensions, and arguably most practical distribution families. We show that all these distributions can be learned robustly from batches to a TV distance O(∆), which is only a factor √ log(1/β) above the adversarial TV-distance lower bound of ∆min. It also achieves sample complexity that is at most a logarithmic factor more than required for non-adversarial case. These results too are very general, hence as in the non-adversarial case, one cannot expect a computationally efficient algorithm for all cases. We therefore consider the natural and important general class Pt,d of t-piecewise degree-d polynomial distributions over the reals. 
To agnostically learn distributions in Pt,d, we combine the results above with an existing, non-adversarial, polynomial-learning algorithm [ADLS17]. We derive a polynomial-time algorithm for estimating polynomials in Pt,d to a TV distance O(∆). The algorithm's sample complexity is linear in td, which is the best possible, and, similar to learning in Fk-distance, only Õ(1/∆) times above the non-adversarial, or information-theoretic adversarial, sample complexity. This is the first algorithm that achieves polynomial sample and time complexity for robust learning of this class, and the first that applies to non-finite domains. The general formulation also allows us to use batch structure for robustness in other learning tasks. We apply this framework to derive the first robust agnostic classifiers. The goal is to minimize the excess loss in comparison to the best hypothesis, in the presence of adversarial batches. We first modify the lower bound on distribution learning to show that any classification algorithm with adversarial batches must incur an excess loss Ω(∆min), even as the number of batches → ∞. We then derive a general algorithm that achieves additive excess loss O(∆) for general binary classification using a number of samples that is again only a logarithmic factor larger than required to achieve the same excess loss in the non-adversarial setting. Finally, we consider classification over R. Many natural and practical classifiers have decision regions consisting of finitely many disjoint intervals. We apply the above results to derive a computationally efficient algorithm for hypotheses consisting of k intervals. Similar to the previous results, its sample complexity is linear in k and only a factor O(1/∆) larger than required in the non-adversarial setup. The rest of the paper is organized as follows. Section 2 describes the main technical results and their applications to distribution estimation and classification. Section 3 discusses other related work. Section 4 provides an overview of the filtering algorithm that enables these results. Proofs and more details are relegated to the appendix. 2 Results We consider learning from batches of samples when a β-fraction of the batches are adversarial. More precisely, B is a collection of m batches, composed of two unknown sub-collections: a good sub-collection BG ⊆ B of ≥ (1 − β)m good batches, where each batch b consists of n independent samples from a common distribution p over Ω, and an adversarial sub-collection BA = B \ BG of the remaining ≤ βm batches, each consisting of the same number n of arbitrary elements of Ω, which for simplicity we also call samples. Note that the adversarial samples may be chosen in any way, including after observing the good samples. The next subsection describes the main technical results for learning in F-distance. Subsequent subsections apply these results to learn distributions in TV distance and to achieve robust binary classification. 2.1 Estimating distributions in F distance Our goal is to use samples generated by a target distribution p to approximate it to a small F-distance. For general families F, this goal cannot be accomplished even with only good batches. Let F = Σ be the collection of all subsets of the real interval domain Ω = [0, 1]. For any total number t of samples, with high probability, it is impossible to distinguish the uniform distribution over [0, 1] from a uniform discrete distribution over a random collection of t² elements in [0, 1].
Hence any estimator must incur TV-distance 1 for some distribution. This difficulty is addressed by Vapnik-Chervonenkis (VC) theory. The collection F shatters a subset S ⊆ Ω if every subset of S is the intersection of S with a subset in F. The VC-dimension VF of F is the size of the largest subset shattered by F. Let X^t = X1, . . . , Xt be i.i.d. samples from a distribution p. The empirical probability of S ⊆ Ω is p̄t(S) := |{i : Xi ∈ S}|/t. The fundamental uniform deviation inequality of VC theory [VC71, Tal94] states that if F has finite VC-dimension VF, then p̄t estimates p well in F-distance: for all δ > 0, with probability > 1 − δ, ||p − p̄t||F ≤ O(√((VF + log(1/δ))/t)). The above is also the lowest achievable F-distance, hence we call it the information-theoretic limit. In the adversarial-batch scenario, a fraction β of the batches may be corrupted. It is easy to see that for any number m of batches, however large, the adversary can cause p̄t to approximate p to F-distance ≥ β/2, namely ||p̄t − p||F ≥ β/2. Let p̄B′ be the empirical distribution induced by the samples in a collection B′ ⊆ B. Our first result states that if F has finite VC-dimension, then for a total number of samples m·n ≥ Õ(VF/∆²), the batches in B can be "cleaned" to a sub-collection B∗ where ||p − p̄B∗||F = O(∆); namely, a simple empirical estimator of the samples in B∗ recovers p to a small F-distance. Theorem 1. For any F, β ≤ 0.4, δ > 0, and mn ≥ Õ((VF + log(1/δ))/∆²), there is an algorithm that w.p. ≥ 1 − δ returns a sub-collection B∗ ⊆ B s.t. |B∗ ∩ BG| ≥ (1 − β/6)|BG| and ||p − p̄B∗||F ≤ O(∆). The F-distance bound matches the lower bound ∆min up to a small O(√(log(1/β))) factor. The number m·n of samples required to achieve this estimation error is the same (up to a logarithmic factor) as the minimum required to achieve the same estimation error even in the non-adversarial setting. The theorem applies to all families with finite VC dimension, and like most other results of this generality, it is necessarily non-constructive in nature. Yet it provides a road map for constructing efficient algorithms for many specific natural problems. In Section 4 we use this approach to derive a polynomial-time algorithm that learns distributions with respect to one of the most important and practical VC classes, where Ω = R and F = Fk is the collection of all unions of at most k intervals. Theorem 2. For any k > 0, β ≤ 0.4, δ > 0, and mn ≥ Õ((k + log(1/δ))/∆³), there is an algorithm that runs in time polynomial in all parameters, and with probability ≥ 1 − δ returns a sub-collection B∗ ⊆ B such that |B∗ ∩ BG| ≥ (1 − β/6)|BG| and ||p − p̄B∗||Fk ≤ O(∆). The above polynomial-time algorithm achieves Fk-error ∆ using a number of samples only Õ(1/∆) times the minimum required to achieve the same estimation error by any algorithm, even in the non-adversarial setting. Note that the sample complexities in both Theorems 1 and 2 are independent of the domain size and depend linearly on the VC dimension of the subset family. Section 4 provides a short overview of the algorithms used in the above theorems. The complete algorithms and proofs of the two theorems appear in the appendix.
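The following toy simulation (ours, not the paper's algorithm; all parameter choices and helper names are illustrative) makes the preceding discussion concrete: pooling all samples lets a β-fraction of adversarial batches shift the empirical distribution by roughly β in F1-distance, while the empirical distribution of the good batches alone exhibits only the small VC-type fluctuation that Theorems 1 and 2 aim to recover.

```python
import numpy as np

rng = np.random.default_rng(0)

def interval_dist(p, q):
    # F_1-distance over an ordered finite domain: the largest |p(S) - q(S)|
    # over intervals S, computed as a maximum subarray of the pointwise
    # differences (run Kadane's scan on d and on -d).
    d = np.asarray(p) - np.asarray(q)
    best = 0.0
    for s in (d, -d):
        cur = run = 0.0
        for x in s:
            cur = max(0.0, cur + x)
            run = max(run, cur)
        best = max(best, run)
    return best

k, m, n, beta = 20, 400, 50, 0.1
p = np.ones(k) / k
good = [rng.choice(k, size=n, p=p) for _ in range(int((1 - beta) * m))]
bad = [np.zeros(n, dtype=int) for _ in range(int(beta * m))]  # all mass on symbol 0

def emp(batches):
    return np.bincount(np.concatenate(batches), minlength=k) / (len(batches) * n)

print(interval_dist(p, emp(good + bad)))  # about 0.095, i.e. roughly beta
print(interval_dist(p, emp(good)))        # about 0.01: ordinary sampling noise
```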
2.2 Learning distributions in total-variation distance Our ultimate objective is to estimate the target distribution in total variation (TV) distance, one of the most common measures in distribution estimation. In this and the next subsection, we follow a framework developed in [DL01], see also [Dia16]. As noted earlier, the sample complexity of estimating distributions in TV-distance grows with the domain size, becoming infeasible for large discrete domains and impossible for continuous domains. A natural approach to address this intractability is to assume that the underlying distribution belongs to, or is near, a structured class P of distributions. Let optP(p) := inf_{q∈P} ||p − q||TV be the TV-distance of p from the closest distribution in P. For example, for p ∈ P, optP(p) = 0. Given ε, δ > 0, we try to use samples from p to find an estimate p̂ such that, with probability ≥ 1 − δ, ||p − p̂||TV ≤ α·optP(p) + ε for a universal constant α ≥ 1, namely, to approximate p about as well as the closest distribution in P. Following [DL01], we utilize a connection between distribution estimation and VC dimension. Let P be a class of distributions over Ω. The Yatracos class [Yat85] of P is the family of Ω subsets Y(P) := {{ω ∈ Ω : p(ω) ≥ q(ω)} : p, q ∈ P}. It is easy to verify that for distributions p, q ∈ P, ||p − q||TV = ||p − q||Y(P). The Yatracos minimizer of a distribution p is its closest distribution, by Y(P)-distance, in P: ψP(p) := argmin_{q∈P} ||q − p||Y(P), where ties are broken arbitrarily. Theorem 6.3 in [DL01] uses these definitions and a sequence of triangle inequalities (sketched at the end of this subsection) to show that for any distributions p, p′, and any distribution class P, ||p − ψP(p′)||TV ≤ 3·optP(p) + 4||p − p′||Y(P). (1) Therefore, given a distribution p′ that approximates p in Y(P)-distance, its Yatracos minimizer ψP(p′) approximates p in TV-distance. If the Yatracos class Y(P) has finite VC dimension, the VC bound ensures that for the empirical distribution p̄t of t i.i.d. samples from p, ||p̄t − p||Y(P) decreases to zero as t increases, and ψP(p̄t) can be used to approximate p in TV-distance. This general method has led to many sample- and computationally-efficient algorithms for estimating structured distributions, e.g., [ADLS17]. However, as discussed earlier, with a β-fraction of adversarial batches, the empirical distribution of all samples can be at a Y(P)-distance as large as Θ(β) from p, leading to a large TV-distance. Yet Theorem 1 shows that the data can be "cleaned" to remove outlier batches and retain batches B∗ ⊆ B whose empirical distribution p̄B∗ approximates p to a much smaller Y(P)-distance of O(∆). Letting p∗ = ψP(p̄B∗) and using Equation (1), we obtain a much better approximation of p in TV distance. Theorem 3. For a distribution class P with a Yatracos class of finite VC dimension v, for any β ≤ 0.4, δ > 0, and mn ≥ Õ((v + log(1/δ))/∆²), there is an algorithm that w.p. ≥ 1 − δ returns a distribution p∗ ∈ P such that ||p − p∗||TV ≤ 3·optP(p) + O(∆). The estimation error achieved in the theorem for TV-distance matches the lower bound up to a small log factor of O(√(log(1/β))), and is valid for any class P whose Yatracos class has finite VC dimension. Moreover, the upper bound on the number of samples (or batches) required by the algorithm to estimate p to the above distance matches a similar general upper bound obtained for the non-adversarial setting up to a log factor. This result shows for the first time that it is possible to learn a wide variety of distributions robustly using batches, even over continuous domains.
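For completeness, here is one way to spell out the triangle-inequality chain behind Equation (1). Take q∗ ∈ P (essentially) attaining optP(p) and write q̂ := ψP(p′); the steps below use only the triangle inequality, the identity ||q − q′||TV = ||q − q′||Y(P) for q, q′ ∈ P, the minimality of q̂ with respect to p′, and ||q∗ − p||Y(P) ≤ ||q∗ − p||TV. The chain in fact yields the constant 2, which implies the constant 4 stated in (1).

```latex
\begin{align*}
\|p-\hat q\|_{\mathrm{TV}}
 &\le \|p-q^*\|_{\mathrm{TV}} + \|q^*-\hat q\|_{\mathrm{TV}}
  = \mathrm{opt}_{\mathcal P}(p) + \|q^*-\hat q\|_{\mathcal Y(\mathcal P)} \\
 &\le \mathrm{opt}_{\mathcal P}(p)
      + \|q^*-p'\|_{\mathcal Y(\mathcal P)} + \|p'-\hat q\|_{\mathcal Y(\mathcal P)}
  \le \mathrm{opt}_{\mathcal P}(p) + 2\|q^*-p'\|_{\mathcal Y(\mathcal P)} \\
 &\le \mathrm{opt}_{\mathcal P}(p)
      + 2\|q^*-p\|_{\mathcal Y(\mathcal P)} + 2\|p-p'\|_{\mathcal Y(\mathcal P)}
  \le 3\,\mathrm{opt}_{\mathcal P}(p) + 2\|p-p'\|_{\mathcal Y(\mathcal P)}.
\end{align*}
```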
2.3 Learning univariate structured distributions We apply the general results in the last two subsections to estimate distributions over the real line. We focus on one of the most studied and important distribution families, the class of piecewise-polynomial distributions. A distribution p over [a, b] is t-piecewise, degree-d, if there is a partition of [a, b] into t intervals I1, . . . , It and degree-d polynomials r1, . . . , rt such that for all j and x ∈ Ij, p(x) = rj(x). The definition extends naturally to finite distributions over [k] = {1, . . . , k}. Let Pt,d denote the collection of all t-piecewise degree-d distributions. Pt,d is interesting in its own right, as it contains important distribution classes such as histograms. In addition, it approximates other important distribution classes, such as monotone, log-concave, Gaussians, and their mixtures, arbitrarily well, e.g., [ADLS17]. Note that for any two distributions p, q ∈ Pt,d, the difference p − q is a 2t-piecewise degree-d polynomial, hence every set in the Yatracos class of Pt,d is the union of at most 2t·d intervals in R. Therefore, Y(Pt,d) ⊆ F2t·d. And since VFk = O(k) for any k, Y(Pt,d) has VC dimension O(td). Theorem 3 can then be applied to show that any target distribution p can be estimated by a distribution in Pt,d to a TV-distance ∆, using a number of samples that is within a logarithmic factor of the minimum required [CDSS14] even when all samples are generated i.i.d. from p. Corollary 4. For any distribution p over R, t, d, β ≤ 0.4, δ > 0, and mn ≥ Õ((td + log(1/δ))/∆²), there is an algorithm that with probability ≥ 1 − δ returns a distribution p∗ ∈ Pt,d such that ||p − p∗||TV ≤ 3·optPt,d(p) + O(∆). Next we provide a polynomial-time algorithm for estimating p to the same O(∆) TV-distance, but with an extra Õ(1/∆) factor in sample complexity. Theorem 2 provides a polynomial-time algorithm that returns a sub-collection B∗ ⊆ B of batches whose empirical distribution p̄B∗ is close to p in F2td-distance. [ADLS17] provides a polynomial-time algorithm that, for any distribution q, returns a distribution p′ ∈ Pt,d minimizing ||p′ − q||F2td up to a small additive error. Then Equation (1) and Theorem 2 yield the following result. We provide a formal proof of the theorem in the appendix. Theorem 5. For any distribution p over R, n, m, β ≤ 0.4, t, d, δ > 0, and mn ≥ Õ((td + log(1/δ))/∆³), there is a polynomial-time algorithm that w.p. ≥ 1 − δ returns a distribution p∗ ∈ Pt,d such that ||p − p∗||TV ≤ O(optPt,d(p)) + O(∆). 2.4 Binary classification Our framework extends beyond distribution estimation. Here we describe its application to binary classification. Consider a family H : Ω → {0, 1} of Boolean functions, and a distribution p over Ω × {0, 1}. Let (X, Y) ∼ p, where X ∈ Ω and Y ∈ {0, 1}. The loss of classifier h ∈ H for distribution p is rp(h) := Pr_{(X,Y)∼p}[h(X) ≠ Y]. The optimal classifier for distribution p is hopt(p) := argmin_{h∈H} rp(h), and hence the optimal loss is r^opt_p(H) := rp(hopt(p)). The goal is to return a classifier h ∈ H whose excess loss rp(h) − r^opt_p(H), compared to the optimal loss, is small. Consider the following natural extension of VC-dimension from families of subsets to families of Boolean functions. For a Boolean-function family H, define the family FH := {{ω ∈ Ω : h(ω) = y} × {ȳ} : h ∈ H, y ∈ {0, 1}} of subsets of Ω × {0, 1}, and let the VC dimension of H be VH := VFH. The next simple lemma, proved in the appendix, upper bounds the excess loss, under a distribution p, of the optimal classifier in H for another distribution q, in terms of the FH-distance between the distributions. Lemma 6. For any class H and distributions p and q, rp(hopt(q)) − r^opt_p(H) ≤ 4||p − q||FH. When q is the empirical distribution of the samples, hopt(q) is called the empirical-risk minimizer.
If q is the empirical distribution of samples generated i.i.d. from p, then by the VC inequality the excess loss of the empirical-risk minimizer in the above lemma goes to zero if the VC dimension of H is finite. Yet, as discussed earlier, when a β-fraction of the batches, and hence of the samples, are chosen by an adversary, the empirical distribution of all samples can be at an FH-distance as large as Θ(β) from p, leading to an excess classification loss of up to Ω(β) for the empirical-risk minimizer. Theorem 1 states that the collection of batches can be "cleaned" to obtain a sub-collection B∗ ⊆ B whose empirical distribution has a much smaller FH-distance from p. The above lemma then implies that the optimal classifier hopt(p̄B∗) for the empirical distribution p̄B∗ of the cleaner batches will have a small excess classification loss for p as well. The resulting non-constructive algorithm has excess classification loss and sample complexity that are optimal up to logarithmic factors. Theorem 7. For any H, p, β ≤ 0.4, δ > 0, and mn ≥ Õ((VH + log(1/δ))/∆²), there is an algorithm that with probability ≥ 1 − δ returns a classifier h∗ whose excess loss is rp(h∗) − r^opt_p(H) ≤ O(∆). To complement this result, we show an information-theoretic lower bound of Ω(∆min) on the excess loss. The proof is in the appendix. Recall that a similar lower bound holds for distribution learning. Theorem 8. For any β, n, and H s.t. VH ≥ 1, there are a distribution p and an adversary such that any algorithm, with probability ≥ 1/2, incurs an excess loss Ω(∆min), even as the number of batches m → ∞. To derive a computationally efficient algorithm, we focus on the following class of binary functions. For k ≥ 1, let Hk denote the collection of all binary functions over R whose decision region, namely the preimage of 0 or of 1, consists of at most k intervals. The VC dimension of FHk is clearly O(k). Theorem 2 describes a polynomial-time algorithm that returns cleaner data w.r.t. FHk-distance. By Lemma 6, the classifier that minimizes the loss for the empirical distribution of this cleaner data will have a small excess loss. Furthermore, [Maa94] derived a polynomial-time algorithm to find the empirical risk minimizer h ∈ Hk for any given samples. Combining these results gives a robust, computationally efficient classifier in Hk. We provide a formal proof in the appendix. Theorem 9. For any k, p, β ≤ 0.4, δ > 0, and mn ≥ Õ((k + log(1/δ))/∆³), there is a polynomial-time algorithm that w.p. ≥ 1 − δ returns a classifier h∗ whose excess loss is rp(h∗) − r^opt_p(Hk) ≤ O(∆).
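As a small illustration of the empirical-risk-minimization step that Theorem 9 relies on, the brute-force sketch below finds the best single-interval classifier in H1 for a labeled sample; all names are ours, and the efficient k-interval version is the dynamic-programming algorithm of [Maa94], not this O(m²) loop.

```python
import numpy as np

def erm_one_interval(x, y):
    # Empirical-risk minimizer over H_1: classifiers that label one interval
    # of the real line as 1 and everything else as 0. Brute force over all
    # intervals whose endpoints are sample points.
    order = np.argsort(x)
    y = np.asarray(y)[order]
    m = len(y)
    best_err, best = m + 1, (0, -1)
    for l in range(m + 1):
        for r in range(l - 1, m):            # r < l encodes the empty interval
            pred = np.zeros(m, dtype=int)
            pred[l:r + 1] = 1
            err = int(np.sum(pred != y))
            if err < best_err:
                best_err, best = err, (l, r)
    return best_err / m, best

x = [0.1, 0.2, 0.4, 0.7, 0.9]
y = [0, 1, 1, 0, 0]
print(erm_one_interval(x, y))   # (0.0, (1, 2)): label the points 0.2-0.4 as 1
```

Running such a minimizer on the cleaned batches returned by the filtering step, rather than on all samples, is what keeps the excess loss at O(∆) instead of Ω(β).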
3 Other related and concurrent work The current results extend several long lines of work on estimating structured distributions, including [O’B16, Dia16, AM18, ADLS17]. The results also relate to classical robust-statistics work [Tuk60, Hub92]. There has also been significant recent work leading to practical distribution-learning algorithms that are robust to adversarial contamination of the data. For example, [DKK+16, LRV16] presented algorithms for learning the mean and covariance matrix of high-dimensional sub-Gaussian and other distributions with bounded fourth moments in the presence of adversarial samples. Their estimation guarantees are typically in terms of L2 and do not yield the L1-distance results required for discrete distributions. The work was extended in [CSV17] to the case when more than half of the samples are adversarial. Their algorithm returns a small set of candidate distributions, one of which is a good approximation of the underlying distribution. The filtering-based method has also played a key role in other robust learning algorithms in high dimension [DKK+17, DKK+18, SCV17, DKK+19]. These works apply filtering to individual samples rather than to batches of samples, as in [JO19] and in this paper, and recover the distribution in a different metric. For a more extensive survey of robust learning algorithms see [SCV17, DKK+19]. Another motivation for this work derives from the practical federated-learning problem, where information arrives in batches [MMR+16, MR17]. Concurrent work Concurrently with our work, [CLM20] also extends the filtering algorithm of [JO19] to obtain robust batch learning algorithms for estimating piecewise polynomials. They derive a polynomial-time algorithm that learns distributions in Pt,d over a finite domain [k] to the same TV distance O(∆) as we do, but it requires Õ((td)²·log³(k)/∆²) samples, where Õ hides a logarithmic factor in 1/∆. In contrast, our results show that this accuracy can be achieved using Õ(td/∆²) samples, and by a polynomial-time algorithm with sample complexity Õ(td/∆³). Importantly, our algorithms' complexity does not depend on the alphabet size k, which allows us to extend them to more general non-finite and even continuous domains. In addition, we consider other distribution classes and learning tasks, such as classification. Another concurrent work, [KFAL20], focuses on the sample complexity of robust batch classification using adversarial batches. Their results achieve an excess loss of O(√VH·∆), where VH is the VC-dimension of the hypothesis class, whereas we achieve an excess loss of only O(∆). 4 Overview of the filtering framework for learning in F distance To derive both the information-theoretic and computationally efficient algorithms for general robust learning from batches, we generalize the filtering-based approach of [JO19] for finite domains. We first describe the original algorithm and outline how it can be extended to general learning problems. A more complete and formal presentation appears in the appendix. Recall that B is the collection of all m batches and each batch b ∈ B has n samples from the domain Ω. A batch b estimates the probability p(S) of a subset S ∈ Σ by its empirical probability. Each subset S ∈ Σ assigns to every batch b ∈ B a corruption score ψb(S), defined in the appendix, based on how far the batch's estimate of p(S) is from the median of the estimates over all batches. Similarly, each subset S assigns to every sub-collection B′ ⊆ B of batches a corruption score ψB′(S) := Σ_{b∈B′} ψb(S), the sum of the individual corruption scores of its batches. We first describe a general filtering approach to robust learning from batches. A collection C ⊆ Σ of subsets is learnable via filtering if one can "filter out" bad batches in B and find a "good" subset B∗ ⊆ B of batches that approximates p to a small C-distance, ||p − p̄B∗||C = max_{S∈C} |p(S) − p̄B∗(S)| ≤ O(∆). (2) We describe two properties ensuring that C is learnable via filtering. A finite C ⊆ Σ is learnable via filtering if there is a threshold τ such that all subsets S ∈ C and all sub-collections B′ ⊆ B that contain most of the good batches, namely |B′ ∩ BG| ≥ (1 − β/6)|BG|, satisfy the following two properties: 1. If the corruption score is low, ψB′(S) < τ, then B′ estimates p(S) well: |p(S) − p̄B′(S)| = O(∆). 2. If ψB′(S) > τ, then there is a (probabilistic) method that removes batches from B′, ensuring that each removed batch is adversarial with probability at least 0.95, until ψB′(S) < τ.
A simple algorithm shows that these two properties imply that C is learnable by filtering. Start with B′ = B, find a filter S ∈ C with ψB′(S) > τ, remove batches from B′ using the method of property 2, and repeat the process until the corruption score is small, ψB′(S) < τ, for all filters in C. By property 2, each deleted batch is adversarial with probability > 0.95. Since there are at most βm adversarial batches, w.h.p. at most 0.1βm good batches are deleted. Consequently |B′ ∩ BG| ≥ (1 − β/6)|BG|. By property 1, when the algorithm ends, B∗ = B′ achieves (2). While this algorithm describes the core of the technique, three significant challenges remain. The above algorithm applies to finite classes C. However, the VC class F may be infinite, or even uncountable. To apply the algorithm we need to find a finite subset C such that learning in C-distance implies learning in F-distance. In the appendix, we prove an essential Robust Covering Theorem, showing that for an appropriate ε, letting C be an ε-cover of F under the empirical density p̄B suffices to learn p in F-distance. This is despite the fact that a fraction β of the batches in B may be adversarially chosen, and may even depend on the good samples. The next key challenge is to show that the two properties hold for all subsets in the ε-cover. We establish this fact by showing that with sufficiently many batches, w.h.p., the two properties hold for all subsets S ∈ F. The proof requires addressing additional technical challenges, as the number of subsets in F could be infinite. Choosing any finite ε-cover C ⊆ F under the density p̄B therefore yields an information-theoretic algorithm with near-optimal sample complexity. This gives us the near-sample-optimal algorithm in Theorem 1. However, computationally efficient algorithms pose one additional challenge. The size of C may be exponential in the VC dimension, and hence searching for a subset in C with a high corruption score may be computationally infeasible. For the VC class Fk, we overcome this difficulty by choosing the set C of filters from a larger class than Fk itself, so that it still obeys the two properties but allows for an efficient search. Though C is chosen from a larger class, we ensure that the resulting increase in sample complexity is small. Specifically, we let C be the collection of all subsets of a k′-partition of Ω, for an appropriate k′ that is linear in k. Subsets in such a cover C correspond to binary vectors in {0, 1}^k′. A novel semi-definite-programming-based algorithm derived in [JO19] finds a subset S ∈ C with nearly the highest corruption ψB′(S) in time only polynomial in k′. This allows us to obtain the polynomial-time algorithm in Theorem 2. To summarize, this universal filtering approach allows us to "clean" the data and enables the general robust distribution estimators and classifiers we construct. Remark. In some applications the distributions underlying genuine batches may differ from the common target distribution p by a small TV distance, say η > 0. For simplicity, in this paper we presented the analysis for η = 0, where all the good batches have the same distribution p. For η > 0, even for binary alphabets, [QV17] derived an adversarial batch lower bound of Ω(η + β/√n) on the TV distance. And even the trivial empirical estimator achieves O(η + β) TV-error, which has the optimal linear dependence on η. Therefore, filtering algorithms do not need to do anything sophisticated for general η and incur only an extra O(η) error, as noted in [JO19] for unstructured distributions; the same holds for our algorithms for learning structured distributions and for binary classification.
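To make the filtering loop of this section concrete, here is a schematic sketch. The actual corruption score and removal rule are defined in the appendix; the stand-ins below (squared deviation from the median batch estimate, and dropping batches with probability proportional to their score) are our own simplifications, and all function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_batches(batches, filters, tau):
    # batches: list of arrays of samples; filters: finite collection C of
    # subsets of the domain; tau: corruption threshold. Returns the indices
    # of the surviving batches B*.
    B = list(range(len(batches)))
    est = lambda b, S: float(np.isin(batches[b], list(S)).mean())
    while B:
        worst_S, worst_scores = None, None
        for S in filters:
            e = np.array([est(b, S) for b in B])
            scores = (e - np.median(e)) ** 2        # stand-in corruption scores
            if scores.sum() > tau and (worst_scores is None
                                       or scores.sum() > worst_scores.sum()):
                worst_S, worst_scores = S, scores
        if worst_S is None:                         # every filter is below tau
            break
        keep = 1.0 - worst_scores / worst_scores.max()
        B = [b for b, q in zip(B, keep) if rng.random() < q]
    return B
```

With batches over a finite domain and filters given as intervals (subsets of consecutive symbols), the pooled empirical distribution of the returned batches plays the role of p̄B∗ in Theorems 1 and 2.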
Broader impact With the vast increase in data availability, data sources are often corrupt or untrustworthy. Such untrusted data severely limits the efficacy of many learning algorithms, even when a vast amount of data is available. Yet in many applications the data is collected in batches. We consider two essential problems in machine learning, classification and distribution estimation. We show that in these applications the effect of data corruption diminishes with the batch size, and demonstrate how batch structure can be used to reduce the effect of corrupted or adversarial data, thereby paving a path to more reliable machine learning algorithms. Acknowledgements We thank Vaishakh Ravindrakumar and Yi Hao for helpful comments in the preparation of this manuscript, and the authors of the concurrent work [CLM20] for coordinating submission with us. We are grateful to the National Science Foundation (NSF) for supporting this work through grants CIF-1564355 and CIF-1619448.
1. What is the focus and contribution of the paper on robust learning algorithms? 2. What are the strengths of the proposed algorithms, particularly in terms of matching sample complexity lower bounds? 3. What are the weaknesses of the paper, especially regarding the assumptions and time complexity? 4. How does the reviewer assess the technical superiority of the paper compared to other works, such as [JO19]?
Summary and Contributions Strengths Weaknesses
Summary and Contributions I thank the authors for their feedback. --- The authors present robust learning algorithms using batches for 1) learning structured distributions, 2) classification problems, and 3) 1-d classification problems. All the above algorithms are based on a common filtering framework. Strengths The authors give tractable algorithms that match sample complexity lower bounds up to log factors. Weaknesses It is unclear how the authors' bounds break down if we do not assume that the distribution is structured. Also, it is unclear how large the polynomial time complexity is; if its polynomial degree is too high, then the algorithms are still impractical. And the authors should also emphasize more how their work is technically superior to [JO19].
NIPS
Title A General Method for Robust Learning from Batches Abstract In many applications, data is collected in batches, some of which may be corrupt or even adversarial. Recent work derived optimal robust algorithms for estimating finite distributions in this setting. We develop a general framework of robust learning from batches, and determine the limits of both distribution estimation, and notably, classification, over arbitrary, including continuous, domains. Building on this framework, we derive the first robust agnostic: (1) polynomial-time distribution estimation algorithms for structured distributions, including piecewise-polynomial, monotone, log-concave, and Gaussian mixtures, and also significantly improve their sample complexity; (2) classification algorithms, and also establish their near-optimal sample complexity; (3) computationally-efficient algorithms for the fundamental problem of interval-based classification that underlies nearly all natural 1-dimensional classification problems. 1 Introduction 1.1 Motivation In many learning applications, some samples are inadvertently or maliciously corrupted. A simple and intuitive example shows that this erroneous data limits the extent to which a distribution can be learned, even with infinitely many samples. Consider p that could be one of two possible binary distributions: (1/2 − β/2, 1/2 + β/2) and (1/2 + β/2, 1/2 − β/2). Given any number of samples from p, an adversary who observes a 1 − β fraction of the samples and can determine the rest could use the observed samples to learn p, and set the remaining samples to make the distribution always appear to be (0.5, 0.5). Even with arbitrarily many samples, any estimator for p fails to decide which p is in effect, hence incurs a total-variation (TV) distance ≥ β/2, which we call the adversarial lower bound. The example may seem to suggest the pessimistic conclusion that if an adversary can corrupt a β fraction of the data, a TV-loss of ≥ β/2 is inevitable. Fortunately, in many applications it can be avoided. In the following applications, and many others, data is collected in batches, most of which are genuine, but some possibly corrupted. Data may be gathered by sensors, each providing a large amount of data, and some sensors may be faulty. The word frequency of an author may be estimated from several large texts, some of which are mis-attributed. User preferences may be learned by querying several individuals, some intentionally biasing their feedback. Multiple agents may contribute to a crowd-sourcing platform, but some may be unreliable or malicious. Interestingly, for data arriving in batches, even when a β-fraction of them are corrupted, more can be said. Recently, [QV17] formalized the problem for finite domains. They considered estimating a distribution p over [k] in TV-distance when the samples are provided in batches of size ≥ n. Out of a total of m batches, a fraction ≤ β may be arbitrarily and adversarially corrupted, while in every other batch b the samples are drawn according to a distribution p. For β < 1/900, they derived an estimation algorithm that approximates any p over a finite domain to TV-distance O(β/√n), surprisingly much lower than the individual-sample limit of Θ(β).
They also derived a matching lower bound, showing that even for binary distributions, and hence for general finite distributions, given any number m of batches, the lowest achievable TV distance is ∆min := ∆min(β, n) := β 2 √ 2n . We refer to ∆min as the adversarial batch lower bound. Their estimator requires Ω( n+k n·∆2min ) batches of samples, or equivalently Ω(n+k ∆2min ) samples, which is not optimal if n >> k. It also runs in time exponential in the domain size, rendering it impractical. Recently, [CLM19] used a novel application of the sum-of-squares technique to reduce the exponential time complexity. Using quasi-polynomial sample size and run time, both roughly (k/∆)O(log(1/β)), they derived an estimator that achieves TV distanceO(∆), where ∆ := ∆(β, n) := ∆min · √ ln(1/β). Concurrently, [JO19] derived the first polynomial-time and optimal Ω(k/∆2) sample estimator, that achieves the same O(∆) TV distance. To limit the impact of adversarial batches, the algorithm filters the data by removing batches that skews the estimator. For general distributions, the sample complexity of both TV-distance estimation, and Bayes-optimal classification, grows linearly in the domain size, even when all samples are genuine. Hence, general estimation and classification over large discrete, let alone continuous domains, is infeasible. Since most modern applications are over very large or continuous domains, this may again lead to the pessimistic conclusion that not much can be done. Fortunately, typical distributions are not arbitrary and possess some structure. For example, they may be monotone, smooth, Lipchitz, etc., or well approximated by structured distributions. These structural properties enable learning over large and even infinite domains. For example, as is well known, classifiers can be learned using a number of samples proportional to the VC-dimension of the classifier class. But so far, our understanding of how to incorporate the distribution structure in Robust batch learning has been quite limited. The first application of structure to reduce the linear dependence of the sample complexity [CLM19] considered robust batch learning of t-piecewise degree-d polynomials over the finite set [k] = {1, . . . ,k}. It learned these distributions with number of samples that grows only quasipoly-logarithmically in the domain size k. Yet this number still grows with k, hence does not extend to continuous distributions. It is also quasi-polynomial in the other parameters t, d, batch size n, and 1/β, much larger than in the non-robust setting. And the algorithm’s computational complexity is quasi-polynomial in these parameters and the domain size k. This leaves several natural questions: (1) Can other non-finite, and even continuous, structured distribution classes, be learned robustly to an estimation error comparable to the adversarial batch lower ∆min? (2) Can it be achieved with sample complexity comparable to the non-adversarial learning? (3) Can robust learning of structured distributions be accomplished in strict polynomial time? (4) Even more generally can other tasks such as classification be accomplished with adversarial batches? (5) Most importantly, is there a general and systematic theory of learning with adversarial batches? 1.2 Summary of techniques and contributions VC theory helps answer some of the above questions when all the samples are generated i.i.d. from a distribution. We adapt the theory to address robust batch learning as well. 
Let F be a family of subsets of an Euclidean domain Ω. The F-distance between two distributions p and q over Ω is the largest difference between the probabilities p and q assign to any subset in F , ||p− q||F := supS∈F |p(S)− q(S)|. It is easy to see that TV, and hence L1, distances are a special case of F-distance where F is the collection Σ of all Borel subsets of Ω, ||p− q||Σ = ||p− q||TV = 12 ||p− q||1. Without adversarial batches, the VC inequality guarantees that for a subset family F with finite VC-dimension, the empirical distribution of samples from p estimates p to a small F-distance. But with adversarial batches, the F-distance between the empirical distribution and p could be large. For learning with adversarial batches over finite domains, [JO19] presented an algorithm that learns the distribution to a small TV distance with a number of batches proportional to the domain size. We generalize this algorithm to learn any finite-VC subset family F to a small F -distance using samples linear in the family’s VC-dimension, rather than the domain size. Recall that ∆min = β/(2 √ 2n) is the adversarial batch lower bound for TV-distance learning. No algorithm achieves an error below ∆min, even with the number of batches→ ∞. Since the ∆min lower bound applies even to binary domains, it can be shown to also lower bound F -distance learning. Our proposed algorithm filters the batches and returns a sub-collection of batches whose empirical distribution estimates p to F-distance O(∆), where ∆ = ∆min · √ log(1/β) is only a small factor above the lower bound. The number of batches it requires for any VC family F is only a logarithmic factor more than needed to achieve the same error without adversarial batches, showing that robustness can be incorporated at little extra cost. This provides the first demonstration that distributions can be learned (1) robustly and (2) sample-efficiently, over infinite, and even continuous domains. As expected from the setting’s vast generality, as in the non-adversarial setting, for some VC families, one cannot expect to find a computationally efficient algorithm. We, therefore, consider a natural and important VC family over the reals that, as we shall soon see, translates into efficient and robust algorithms for TV-learning and classification over R. Let Fk be the family of all unions of at most k intervals over R. We derive a computationally efficient algorithm that estimates distributions to Fk-distance O(∆) using only Õ(1/∆) times more samples than the non-adversarial, or information-theoretic adversarial cases. Building on these techniques, we return to estimation in total variation (TV) distance. We consider the family of distributions whose Yatracos Class [Yat85] have finite VC dimension. This family consists of both discrete and continuous distributions, and includes piecewise polynomials, Gaussians in one or more dimensions, and arguably most practical distribution families. We show that all these distributions can be learned robustly from batches to a TV distance O(∆), which is only a factor √ log(1/β) above the adversarial TV-distance lower bound of ∆min. It also achieves sample complexity that is at most a logarithmic factor more than required for non-adversarial case. These results too are very general, hence as in the non-adversarial case, one cannot expect a computationally efficient algorithm for all cases. We therefore consider the natural and important general class Pt,d of t-piecewise degree-d polynomial distributions over the reals. 
To agnostically learn distributions Pt,d, we combine the results above with an existing, nonadversarial, polynomial-learning algorithm [ADLS17]. We derive a polynomial-time algorithm for estimating polynomials in Pt,d to a TV distance O(∆). The algorithm’s sample complexity is linear in td, which is the best possible, and similar to learning in Fk-distance, only Õ(1/∆) times above the non-adversarial, or information-theoretic adversarial sample complexity. This is the first algorithm that achieves polynomial sample and time complexity for robust learning for this class, and the first that applies to the non-finite domains. The general formulation also allows us to use batch-structure for robustness in other learning tasks. We apply this framework to derive the first robust agnostic classifiers. The goal is to minimize the excess loss in comparison to the best hypothesis, in the presence of adversarial batches. We first modify the lower bound on distribution learning to show that any classification algorithm with adversarial batches must incur an excess loss O(∆min), even with the number of batches→∞. We then derive a general algorithm that achieves additive excess loss O(∆) for general binary classification using a number of samples that is again only a logarithmic factor larger than required to achieve the same excess loss in the non-adversarial setting. Finally, we consider classification over R. Many natural and practical classifiers have decision regions consisting of finitely many disjoint intervals. We apply the above results to derive a computationally efficient algorithm for hypotheses consisting of k intervals. Similar to previous results, its sample complexity is linear in k and only a factor O(1/∆) larger than required in the non-adversarial setup. The rest of the paper is organized as follows. Section 2 describes the main technical results and their applications to distribution estimation and classification. Section 3 discusses the other related work. Section 4 provides an overview of the filtering algorithm that enables these results. Proofs and more details are relegated to the appendix. 2 Results We consider learning from batches of samples, when a β−fraction of batches are adversarial. More precisely, B is a collection of m batches, composed of two unknown sub-collections. A good sub-collection BG ⊆ B of ≥ (1− β)m good batches, where each batch b consists of n independent samples from a common distribution p over Ω. And an adversarial sub-collection BA = B \BG of the remaining ≤ βm batches, each consisting of the same number n of arbitrary Ω elements, that for simplicity we call samples as well. Note that the adversarial samples may be chosen in any way, including after observing the good samples. The next subsection describes the main technical results for learning in F distance. Subsequent subsections apply these results to learn distributions in TV distance and to achieve robust binary classification. 2.1 Estimating distributions in F distance Our goal is to use samples generated by a target distribution p to approximate it to a small F -distance. For general families F , this goal cannot be accomplished even with just good batches. Let F = Σ be the collection of all subsets of the real interval domain Ω = [0, 1]. For any total number t of samples, with high probability, it is impossible to distinguish the uniform distribution over [0, 1] from a uniform discrete distribution over a random collection of t2 elements in [0, 1]. 
Hence any estimator must incur TV-distance 1 for some distribution. This difficulty is addressed by Vapnik-Chervonenkis (VC) Theory. The collection F shatters a subset S ⊆ Ω if every subset of S is the intersection of S with a subset in F . The VC-dimension VF of F is the size of the largest subset shattered by F . Let Xt = X1, . . . ,Xt, be i.i.d. samples from a distribution p. The empirical probability of S ⊆ Ω is p̄t(S) := |{i : Xi ∈ S}|/t. The fundamental Uniform deviation inequality of VC theory [VC71, Tal94] states that if F has finite VC-dimension VF , then p̄t estimates p well in F distance. For all δ > 0, with probability > 1− δ, ||p− p̄t||F ≤ O (√ (VF + log 1/δ)/t ) . The above is also the lowest achievable F-distance, hence we call it the information-theoretic limit. In the adversarial-batch scenario, a fraction β of the batches may be corrupted. It is easy to see that for any number m of batches, however large, the adversary can cause p̄t to approximate p to F-distance ≥ β/2, namely ||p̄t − p||F ≥ β/2. Let p̄B′ be the empirical distribution induced by the samples in a collection B′ ⊆ B. Our first result states that if F has a finite VC-dimension, for total samples m · n ≥ Õ(VF/∆2), the batches in B can be "cleaned" to a sub-collection B∗ where ||p− p̄B∗ ||F = O(∆), namely, a simple empirical estimator of the samples in B∗ recovers p to a small F-distance. Theorem 1. For any F , β ≤ 0.4, δ > 0, and mn ≥ Õ ( VF+log 1/δ ∆2 ) , there is an algorithm that w.p.≥1−δ returns a sub-collectionB∗⊆B s.t. |B∗∩BG| ≥ (1− β6 )|BG| and ||p− p̄B∗ ||F ≤ O(∆). The F -distance bound matches the lower bound ∆min up to a small O( √ log(1/β)) factor. The number m · n of samples required to achieve this estimation error are the same (up to a logarithmic factor) as the minimum required to achieve the same estimation error even for the non-adversarial setting. The theorem applies to all families with finite VC dimension, and like most other results of this generality, it is necessarily non-constructive in nature. Yet it provides a road map for constructing efficient algorithms for many specific natural problems. In Section 4 we use this approach to derive a polynomial-time algorithm that learns distributions with respect to one of the most important and practical VC classes, where Ω = R, and F = Fk is the collection of all unions of at most k intervals. Theorem 2. For any k > 0, β ≤ 0.4, δ > 0, and mn ≥ Õ ( k+log 1/δ ∆3 ) , there is an algorithm that runs in time polynomial in all parameters, and with probability ≥ 1 − δ returns a sub-collection B∗ ⊆ B, such that |B∗ ∩BG| ≥ (1− β6 )|BG| and ||p− p̄B∗ ||Fk ≤ O(∆). The above polynomial-time algorithm can achieve Fk error ∆ using the number of samples only Õ(1/∆) times the minimum required to achieve the same estimation error by any algorithm even for the non-adversarial setting. Note that the sample complexity in both Theorems 1 and 2 are independent of the domain size and depends linearly on the VC dimension of the subset family. Section 4 provides a short overview of the algorithms used in the above theorems. The complete algorithms and proof of the two theorems appear in the appendix. 2.2 Learning distributions in total-variation distance Our ultimate objective is to estimate the target distribution in total variation (TV) distance, one of the most common measures in distribution estimation. In this and the next subsection, we follow a framework developed in [DL01], see also [Dia16]. 
As noted earlier, the sample complexity of estimating distributions in TV-distance grows with the domain size, becoming infeasible for large discrete domains and impossible for continuous domains. A natural approach to address this intractability is to assume that the underlying distribution belongs to, or is near, a structured class P of distributions. Let optP(p) := infq∈P ||p− q||TV be the TV-distance of p from the closest distribution in P . For example, for p ∈ P , optP(p) = 0. Given , δ > 0, we try to use samples from p to find an estimate p̂ such that, with probability ≥ 1− δ, ||p− p̂||TV ≤ α · optP(p) + for a universal constant α≥1, namely, to approximate p about as well as the closest distribution in P . Following [DL01], we utilize a connection between distribution estimation and VC dimension. Let P be a class of distributions over Ω. The Yatracos class [Yat85] of P is the family of Ω subsets Y(P) := {{ω ∈ Ω : p(ω) ≥ q(ω)} : p, q ∈ P}. It is easy to verify that for distributions p, q ∈ P , ||p−q||TV = ||p−q||Y(P). The Yatracos minimizer of a distribution p is its closest distribution, by Y(P)-distance, in P , ψP(p) := arg min q∈P ||q − p||Y(P), where ties are broken arbitrarily. Theorem 6.3 in [DL01] uses these definitions and a sequence of triangle inequalities to show that for any distributions p, p′, and any distribution class P , ||p− ψP(p′)||TV ≤ 3 · optP(p) + 4||p− p′||Y(P). (1) Therefore, given a distribution p′ that approximates p in Y(P)-distance, its Yatracos minimizer ψP(p ′) approximates p in TV-distance. If the Yatracos class Y(P) has finite VC dimension, the VC-bound ensures that for the empirical distribution p̄t of t i.i.d. samples from p, ||p̄t − p||Y(P) decreases to zero as t increases, and ψP(p̄t) can be used to approximate p in TV-distance. This general method has led to many sample-and computationally-efficient algorithms for estimating structured distributions, e.g., [ADLS17]. However, as discussed earlier, with a β-fraction of adversarial batches, the empirical distribution of all samples can be at a Y(P)-distance as large as Θ(β) from p, leading to a large TV-distance. Yet Theorem 1 shows that data can be "cleaned" to remove outlier batches and retain batches B∗ ⊆ B whose empirical distribution p̄B∗ approximates p to a much smaller Y(P)-distance of O(∆). Letting p∗ = ψP(p̄B∗) and using Equation (1), we obtain a much better approximation of p in TV distance. Theorem 3. For a distribution class P with Yatracos Class of finite VC dimension v, for any β ≤ 0.4, δ > 0, and mn ≥ Õ ( v+log 1/δ ∆2 ) , there is an algorithm that w. p. ≥ 1−δ returns a distribution p∗ ∈ P such that ||p− p∗||TV ≤ 3 · optP(p) +O(∆). The estimation error achieved in the theorem for TV-distance matches the lower bound to a small log factor of O( √ log(1/β)), and is valid for any class P with finite VC Dimensional Yatracos Class. Moreover, the upper bound on the number of samples (or batches) required by the algorithm to estimate p to the above distance matches a similar general upper bound obtained for non-adversarial setting to a log factor. This results for the first time shows that it is possible to learn a wide variety of distributions robustly using batches, even over continuous domains. 2.3 Learning univariate structured distributions We apply the general results in the last two subsections to estimate distributions over the real line. We focus on one of the most studied, and important, distribution families, the class of piecewisepolynomial distributions. 
A distribution p over [a, b] is t-piecewise, degree-d, if there is a partition of [a, b] into t intervals I1, . . . ,It, and degree-d polynomials r1, . . . ,rt such that ∀j and x ∈ Ij , p(x) = rj(x). The definition extends naturally to finite distributions over [k] = {1, . . . ,k}. Let Pt,d denote the collection of all t-piecewise degree d distributions. Pt,d is interesting in its own right, as it contains important distribution classes such as histograms. In addition, it approximates other important distribution classes, such as monotone, log-concave, Gaussians, and their mixtures, arbitrarily well, e.g., [ADLS17]. Note that for any two distributions p, q ∈ Pt,d, the difference p − q is a 2t-piecewise degree-d polynomial, hence every set in the Yatracos class of Pt,d, is the union of at most 2t · d intervals in R. Therefore, Y(Pt,d) ⊆ F2t·d. And since VFk = O(k) for any k, Y(Pt,d) has VC dimension O(td). Theorem 3 can then be applied to show that any target distribution p can be estimated by a distribution in Pt,d to a TV-distance ∆, using a number of samples, that is within a logarithmic factor from the minimum required [CDSS14] even when all samples are i.i.d. generated from p. Corollary 4. For any distribution p over R, t, d, β ≤ 0.4, δ > 0, and mn ≥ Õ ( td+log 1/δ ∆2 ) , there is an algorithm that with probability ≥ 1− δ returns a distribution p∗ ∈ Pt,d such that ||p− p∗||TV ≤ 3 · optPt,d(p) +O(∆). Next we provide a polynomial-time algorithm for estimating p to the same O(∆) TV-distance, but with an extra Õ(1/∆) factor in sample complexity. Theorem 2 provides a polynomial time algorithm that returns a sub-collection B∗ ⊆ B of batches whose empirical distribution p̄B∗ is close to p in F2td-distance. [ADLS17] provides a polynomial time algorithm that for any distribution q returns a distribution in p′ ∈ Pt,d minimizing ||p′ − q||F2td to a small additive error. Then equation (1) and Theorem 2 yield the following result. We provide formal proof of the theorem in the appendix. Theorem 5. For any distribution p over R, n, m, β≤ 0.4, t, d, δ > 0, and mn ≥ Õ ( td+log 1/δ ∆3 ) , there is a polynomial time algorithm that w. p. ≥ 1−δ returns a distribution p∗ ∈ Pt,d such that ||p− p∗||TV ≤ O(optPt,s(p)) +O(∆). 2.4 Binary classification Our framework extends beyond distribution estimation. Here we describe its application to Binary classification. Consider a family H : Ω → {0, 1} of Boolean functions, and a distribution p over Ω × {0, 1}. Let (X,Y ) ∼ p, where X ∈ Ω and Y ∈ {0, 1}. The loss of classifier h ∈ H for distribution p is rp(h) := Pr(X,Y )∼p[h(X) 6= Y ]. The optimal classifier for distribution p is hopt(p) := arg minh∈H rp(h), and hence the optimal loss is r opt p (H) := rp(hopt(p)). The goal is to return a classifier h ∈ H whose excess loss rp(h)− roptp (H) compared to the optimal loss is small. Consider the following natural extension of VC-dimension from families of subsets to families of Boolean functions. For a boolean-function familyH, define the family FH := {({ω ∈ Ω : h(ω) = y}, ȳ) : h ∈ H, y ∈ {0, 1}} of subsets of Ω× {0, 1}, and let the VC dimension ofH be VH := VFH . The next simple lemma, proved in the appendix, upper bounds the excess loss of the optimal classifier inH for a distribution q for another distribution p in terms of FH distance between the distributions. Lemma 6. For any classH and distributions p and q, rp(hopt(q))− roptp (H) ≤ 4||p− q||FH . When q is an empirical distribution of the samples, hopt(q) is called the empirical-risk minimizer. 
If q is the empirical distribution of the samples generated i.i.d. from p, from VC inequality, the excess loss of the empirical-risk minimizer in the above equation goes to zero if VC dimension ofH is finite. Yet as discussed earlier, when a β-fractions of the batches, and hence samples, are chosen by an adversary, the empirical distribution of all samples can be at a large FH-distance O(β) from p, leading to an excess-classification-loss up to Ω(β) for the empirical-risk minimizer. Theorem 1 states that the collection of batches can be "cleaned" to obtain a sub-collection B∗ ⊆ B whose empirical distribution has a lower FH-distance from p. The above lemma then implies that the optimal classifier hopt(p̄B∗) for the empirical distribution p̄B∗ of the cleaner batches will have a small-excess-classification-loss for p as well. The resulting non-constructive algorithm has excess-classification-loss and sample complexity that are optimal to a logarithmic factor. Theorem 7. For any H, p, β ≤ 0.4, δ > 0, and mn ≥ Õ ( VH+log 1/δ ∆2 ) , there is an algorithm that with probability ≥1−δ returns a classifier h∗, whose excess lose is rp(h∗)− roptp (H) ≤ O(∆). To complement this result, we show an information-theoretic lower bound of Ω(∆min) on the excess loss. The proof is in the appendix. Recall that a similar lower bound holds for learning distribution. Theorem 8. For any β, n, andH s.t. VH ≥ 1, there are a distribution p and an adversary, such that any algorithm, with probability ≥ 1/2, incurs an excess loss Ω(∆min), even as number of batches m→∞. To derive a computationally-efficient algorithm, we focus on the following class of binary functions. For k ≥ 1, letHk denote the collection of all binary functions over R whose decision region, namely values mapping to 0 or 1, consists of at most k-intervals. The VC dimension of FHk is clearly O(k). Theorem 2 describes a polynomial-time algorithm that returns a cleaner data w.r.t. FHk distance. From Lemma 6, the classifier that minimizes the loss for the empirical distribution of this cleaner data will have a small excess loss. Furthermore, [Maa94] derived a polynomial-time algorithm to find the empirical risk minimizer h ∈ Hk for any given samples. Combining these results, gives a robust computationally efficient classifier inHk. We provide a formal proof in the appendix. Theorem 9. For any k, p, β ≤ 0.4, δ > 0, and mn ≥ Õ ( k+log 1/δ ∆3 ) , there is a polynomial-time algorithm that w. p. ≥1−δ returns a classifier h∗, whose excess loss is rp(h∗)− roptp (Hk) ≤ O(∆). 3 Other related and concurrent work The current results extend several long lines of work on estimating structured distributions, including [O’B16, Dia16, AM18, ADLS17]. The results also relate to classical robust-statistics work [Tuk60, Hub92]. There has also been significant recent work leading to practical distribution learning algorithms that are robust to adversarial contamination of the data. For example, [DKK+16, LRV16] presented algorithms for learning the mean and covariance matrix of highdimensional sub-gaussian and other distributions with bounded fourth moments in presence of the adversarial samples. Their estimation guarantees are typically in terms of L2, and do not yield the L1- distance results required for discrete distributions. The work was extended in [CSV17] to the case when more than half of the samples are adversarial. Their algorithm returns a small set of candidate distributions one of which is a good approximate of the underlying distribution. 
The filtering based method has also played a key role in other robust learning algorithms in high dimension [DKK+17, DKK+18, SCV17, DKK+19]. These works apply filtering on samples instead on batches of samples, as in [JO19] and in this paper, and recover in a different metric. For a more extensive survey on robust learning algorithms see [SCV17, DKK+19]. Another motivation for this work derives from the practical federated-learning problem, where information arrives in batches [MMR+16, MR17]. Concurrent work Concurrent to our work, [CLM20] also extends the filtering algorithm of [JO19] to obtain robust batch learning algorithms for estimating piecewise polynomials. They derive a polynomial-time algorithm that learns distributions in Pt,d over a finite domain [k] to the same TV distance O(∆) as we do, but requires Õ((td)2 log3(k)/∆2) samples, where Õ hides a logarithmic factor in 1/∆. In contrast, our results show that this accuracy can be achieved using Õ(td/∆2) samples, and by a polynomial-time algorithm with sample complexity is Õ(td/∆3). Importantly, our algorithms’ complexity does not depend on the alphabet size [k], which allows us to extend them to more general non-finite and even continuous domains. In addition, we considered other distribution classes and learning tasks such as classification. Another concurrent work [KFAL20] focuses on the sample complexity of robust batch classification using adversarial batches. Their results achieve an excess loss of O( √ VH · ∆), where VH is the VC-dimension of the hypothesis class, whereas we achieve an excess loss only O(∆). 4 Overview of the filtering framework for learning in F distance To derive both the information-theoretic and computationally-efficient algorithms for general robust learning from batches, we generalize a finite filtering-based approach in [JO19]. We first describe the original algorithm and outline how it can be extended to general learning problems. A more complete and formal presentation appears in the appendix. Recall that B is the collection of all m batches and each batch b ∈ B has n samples from the domain Ω. A batch b estimates the probability p(S) of a subset S ∈ Σ by its empirical probability. Each subset S ∈ Σ, assigns to every batch b ∈ B, a corruption score ψb(S), defined in the appendix, based on how far the batch’s estimate of p(S) is from the median of the estimates for all batches. Similarly, each subset S assigns to every sub-collection B′ ⊆ B of batches a corruption score ψB′(S) := ∑ b∈B′ ψb(S), the sum of individual corruption score of each batch. We first describe a general filtering approach to robust learning from batches. A collection C ⊆ Σ of subsets, is learnable via filtering if one can "filter out" bad batches in B and find a "good" subset B∗ ⊆ B of batches that approximates p to a small C-distance, ||p− p̄B∗ ||C = max S∈C |p(S)− p̄B∗(S)| ≤ O(∆). (2) We describe two properties ensuring that C is learnable via filtering. A finite C ⊆ Σ is learnable via filtering if there is a threshold τ such that all subsets S ∈ C and all sub-collection B′ ⊆ B that contain most good batches, namely |B′∩BG| ≥ (1−β/6)|BG|, satisfy the following two properties: 1. If the corruption score is low, ψB′(S)<τ , thenB′ estimates p(S) well, |p(S)− p̄B′(S)| = O(∆). 2. If ψB′(S) > τ , then there is a (probabilistic) method that removes batches in B′, while ensuring that and each batch removed is adversarial with probability at least 0.95, until ψB′(S) < τ . 
A simple algorithm shows that these two properties imply that C is learnable via filtering. Start with B′ = B; find a filter S ∈ C with ψB′(S) > τ; remove batches from B′ using the method in property 2; and repeat the process until the corruption is small, ψB′(S) < τ, for all filters in C (as in the sketch above). By property 2, each deleted batch is adversarial with probability > 0.95. Since there are at most βm adversarial batches, w.h.p. at most 0.1βm good batches are deleted. Consequently |B′ ∩ BG| ≥ (1 − β/6)|BG|. By property 1, when the algorithm ends, B∗ = B′ achieves (2).
While this algorithm describes the core of the technique, three significant challenges remain. The above algorithm applies to finite classes C. However, the VC class F may be infinite, or even uncountable. To apply the algorithm we need to find a finite subset C such that learning in C-distance implies learning in F-distance. In the appendix, we prove an essential Robust Covering Theorem, showing that for an appropriate ε, letting C be an ε-cover of F under the empirical density p̄B suffices to learn p in F-distance. This holds despite the fact that a fraction β of the batches in B may be adversarially chosen, and may even depend on the good samples.
The next key challenge is to show that the two properties hold for all subsets in the ε-cover. We establish this fact by showing that with sufficiently many batches, w.h.p., the two properties hold for all subsets S ∈ F. The proof requires addressing additional technical challenges, as the number of subsets in F could be infinite. Choosing any finite ε-cover C ⊆ F under the density p̄B therefore yields an information-theoretic algorithm with near-optimal sample complexity. This gives us the near-sample-optimal algorithm in Theorem 1.
However, computationally-efficient algorithms pose one additional challenge. The size of C may be exponential in the VC dimension, and hence searching for a subset in C with a high corruption score may be computationally infeasible. For the VC class Fk, we overcome this difficulty by choosing the set C of filters from a larger class than Fk itself, so that it still obeys the two properties but allows for an efficient search. Though C is chosen from a larger class, we ensure that the resulting increase in sample complexity is small. Specifically, we let C be the collection of all subsets of a k′-partition of Ω, for an appropriate k′ that is linear in k. Subsets in such a cover C correspond to binary vectors in {0, 1}^{k′}. A novel semi-definite-programming-based algorithm derived in [JO19] finds a subset S ∈ C with nearly the highest corruption ψB′(S) in time only polynomial in k′. This allows us to obtain the polynomial-time algorithm in Theorem 2. To summarize, this universal filtering approach allows us to "clean" the data and enables the general robust distribution estimators and classifiers we construct.
Remark. In some applications, the distributions underlying the genuine batches may differ from the common target distribution p by a small TV distance, say η > 0. For simplicity, in this paper we presented the analysis for η = 0, where all the good batches have the same distribution p. For η > 0, even for binary alphabets, [QV17] derived the adversarial-batch lower bound of Ω(η + β/√n) on the TV distance, and even the trivial empirical estimator achieves O(η + β) TV error, which has the optimal linear dependence on η.
Therefore, filtering algorithms do not need to do anything sophisticated for general η: they incur only an extra O(η) error, as noted in [JO19] for unstructured distributions, and the same holds for our algorithms for learning structured distributions and for binary classification.
Broader impact
With the vast increase in data availability, data sources are often corrupt or untrustworthy. This untrusted data severely limits the efficacy of many learning algorithms, even when a vast amount of data is available. Yet in many applications the data is collected in batches. We consider two essential problems in machine learning: classification and distribution estimation. We show that in these applications the effect of data corruption diminishes with the batch size, and we demonstrate how batch structure can be used to reduce the effect of corrupted or adversarial data, thereby paving a path to more reliable machine learning algorithms.
Acknowledgements
We thank Vaishakh Ravindrakumar and Yi Hao for helpful comments in the preparation of this manuscript, and the authors of the concurrent work [CLM20] for coordinating submission with us. We are grateful to the National Science Foundation (NSF) for supporting this work through grants CIF-1564355 and CIF-1619448.
1. What is the focus and contribution of the paper on distribution learning? 2. What are the strengths of the proposed approach, particularly in terms of its ability to handle infinite or continuous domains? 3. What are the weaknesses of the paper, if any? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
Sample complexity bounds for the task of distribution learning in the setting of batch sampling with adversarial batches, i.e., a $\beta$ fraction of the batches might get maliciously corrupted. The bounds are almost tight, as they match the information-theoretic lower bounds for this case up to logarithmic factors. The main novelty is that the result can be applied to distributions over infinite or even continuous domains. As a corollary, the authors provide an efficient learning algorithm, in terms of samples and run-time, for learning piecewise polynomial distributions. The notions of interest are: the $\mathcal{F}$-distance, which generalizes TV-distance; the Yatracos class and the Yatracos minimizer; and the VC dimension, the VC inequality, and convergence theorems.
Strengths
The paper seems to be of high quality. The results are impressive, non-trivial, and interesting, in the sense that they might lead to further research in their direction. The authors did thorough work and provided several applications of their main result which are of individual interest. Update: I read the authors' feedback and my score stays the same.
Weaknesses
I didn't find any specific weaknesses.
NIPS
Title
Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
Abstract
We study two time-scale linear stochastic approximation algorithms, which can be used to model well-known reinforcement learning algorithms such as GTD, GTD2, and TDC. We present finite-time performance bounds for the case where the learning rate is fixed. The key idea in obtaining these bounds is to use a Lyapunov function motivated by singular perturbation theory for linear differential equations. We use the bound to design an adaptive learning rate scheme which significantly improves the convergence rate over the known optimal polynomial decay rule in our experiments, and can be used to potentially improve the performance of any other schedule where the learning rate is changed at pre-determined time instants.
1 Introduction
A key component of reinforcement learning algorithms is to learn or approximate value functions under a given policy [Sutton, 1988], [Bertsekas and Tsitsiklis, 1996], [Szepesvári, 2010], [Bertsekas, 2011], [Bhatnagar et al., 2012], [Sutton and Barto, 2018]. Many existing algorithms for learning value functions are variants of the temporal-difference (TD) learning algorithms [Sutton, 1988], [Tsitsiklis and Van Roy, 1997], and can be viewed as stochastic approximation algorithms for minimizing the Bellman error (or objectives related to the Bellman error). Characterizing the convergence of these algorithms, such as TD(0), TD(λ), GTD, and nonlinear GTD, has been an important objective of reinforcement learning [Szepesvári, 2010], [Bhatnagar et al., 2009], and [Sutton et al., 2016]. The asymptotic convergence of these algorithms with diminishing step sizes has been established using stochastic approximation theory in many prior works (comprehensive surveys on stochastic approximations can be found in [Benveniste et al., 2012], [Kushner and Yin, 2003], and [Borkar, 2009]). The conditions required for theoretically establishing asymptotic convergence in an algorithm with diminishing step sizes imply that the learning rate becomes very small very quickly. As a result, the algorithm will require a very large number of samples to converge. Reinforcement learning algorithms used in practice follow a pre-determined learning rate (step-size) schedule which, in most cases, uses decaying step sizes first and then a fixed step size. This gap between theory and practice has prompted a sequence of works on the finite-time performance of temporal difference learning algorithms with either time-varying step sizes or constant step sizes [Dalal et al., 2017a,b, Liu et al., 2018, Lakshminarayanan and Szepesvari, 2018, Bhandari et al., 2018, Srikant and Ying, 2019]. Most of these results are for single time-scale TD algorithms, except [Dalal et al., 2017b] which considers two time-scale algorithms with decaying step sizes. Two time-scale TD algorithms are an important class of reinforcement learning algorithms because they can improve the convergence rate of TD learning or remedy the instability of single time-scale TD in some cases. This paper focuses on two time-scale linear stochastic approximation algorithms with constant step sizes. The model includes TDC, GTD and GTD2 as special cases (see [Sutton et al., 2008], [Sutton et al., 2009] and [Szepesvári, 2010] for more details).
We note that, in contemporaneous work, [Xu et al., 2019] have carried out a two-time-scale analysis of linear stochastic approximation with diminishing step sizes.
Besides the theoretical analysis of the finite-time performance of two time-scale reinforcement learning algorithms, another important aspect of reinforcement learning algorithms, which is imperative in practice but has been largely overlooked, is the design of the learning rate schedule, i.e., how to choose proper step sizes to improve the learning accuracy and reduce the learning time. This paper addresses this important question by developing principled heuristics based on the finite-time performance bounds. The main contributions of this paper are summarized below.
• Finite-Time Performance Bounds: We study two time-scale linear stochastic approximation algorithms driven by Markovian samples. We establish finite-time bounds on the mean-square error with respect to the fixed point of the corresponding ordinary differential equations (ODEs). The performance bound consists of two parts: a steady-state error and a transient error, where the steady-state error is determined by the step sizes but is independent of the number of samples (or number of iterations), and the transient error depends on both the step sizes and the number of samples. The transient error decays geometrically as the number of samples increases. The key differences between this paper and [Dalal et al., 2017b] include (i) we do not require a sparse projection step in the algorithm; and (ii) we assume constant step sizes, which allows us to develop the adaptive step-size selection heuristic mentioned next.
• Adaptive Learning Rate Selection: Based on the finite-time performance bounds, in particular the steady-state error and the transient error terms in the bounds, we propose an adaptive learning rate selection scheme. The intuition is to use a constant learning rate until the transient error is dominated by the steady-state error; after that, running the algorithm further with the same learning rate is not very useful, and therefore we reduce the learning rate at this time. To apply adaptive learning rate selection in a model-free fashion, we develop data-driven heuristics to determine the time at which the transient error is close to the steady-state error. A useful property of our adaptive rate selection scheme is that it can be used with any learning rate schedule that already exists in many machine learning software platforms: one can start with the initial learning rate suggested by such schedules and get improved performance by using our adaptive scheme. Our experiments on Mountain Car and Inverted Pendulum show that our adaptive learning rate selection significantly improves the convergence rates as compared to optimal polynomial-decay learning rate strategies (see [Dalal et al., 2017b] and [Konda et al., 2004] for more details on polynomial-decay step-size rules).
2 Model, Notation and Assumptions
We consider the following two time-scale linear stochastic approximation algorithm:
Uk+1 = Uk + ε^α (Auu(Xk)Uk + Auv(Xk)Vk + bu(Xk))
Vk+1 = Vk + ε^β (Avu(Xk)Uk + Avv(Xk)Vk + bv(Xk)),   (1)
where {Xk} are the samples from a Markov process and ε > 0 is a small step-size parameter. We assume β < α so that, over ε^{−β} iterations, the change in V is O(1) while the change in U is O(ε^{α−β}). Therefore, V is updated at a faster time scale than U.
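As a concrete reference point, the recursion (1) is straightforward to simulate. The sketch below is only an illustration under our own naming, not code from the paper: the state-dependent blocks and the Markov chain transition are supplied as user-defined callables, and the constant step sizes are ε^α (slow) and ε^β (fast).

```python
import numpy as np

def two_time_scale_sa(A, b, markov_step, x0, u0, v0, eps, alpha, beta, num_iters):
    """Simulate recursion (1).
    A(x) returns the blocks (Auu, Auv, Avu, Avv) and b(x) the pair (bu, bv)
    for the current Markov state x; markov_step(x) advances the chain.
    U uses the slow step size eps**alpha, V the fast step size eps**beta (beta < alpha)."""
    U, V, x = u0.copy(), v0.copy(), x0
    step_u, step_v = eps ** alpha, eps ** beta
    for _ in range(num_iters):
        Auu, Auv, Avu, Avv = A(x)
        bu, bv = b(x)
        # both updates use the current (U_k, V_k), as in (1)
        U_new = U + step_u * (Auu @ U + Auv @ V + bu)
        V_new = V + step_v * (Avu @ U + Avv @ V + bv)
        U, V = U_new, V_new
        x = markov_step(x)
    return U, V
```

The GTD2 instance discussed in the next paragraph corresponds to particular feature-dependent choices of these blocks.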
In the context of reinforcement learning, when combined with linear function approximation of the value function, GTD, GTD2, and and TDC can be viewed as two time-scale linear stochastic approximation algorithms, and can be described in the same form as (1). For example, GTD2 with linear function approximation is as follows: Uk+1 =Uk + α (φ(Xk)− ζφ(Xk+1))φ>(Xk)Vk Vk+1 =Vk + β ( δk − φ>(Xk)Vk ) φ(Xk), where ζ is the discount factor, φ(x) is the feature vector of state x, Uk is the weight vector such that φ>(x)Uk is the approximation of value function of state x at iteration k, δk = c(Xk) + ζφ >(Xk+1)Uk − φ>(Xk)Uk is the TD error, and Vk is the weight vector that estimates E[φ(Xk)φ(Xk)T ]−1E[δkφ(Xk)]. We now summarize the notation we use throughout the paper and the assumptions we make. • Assumption 1: {Xk} is a Markov chain with state space S. We assume that the following two limits exist: ( Āuu Āuv Āvu Āvv ) = lim k−→∞ ( E [Auu(Xk)] E [Auv(Xk)] E [Avu(Xk)] E [Avv(Xk)] ) ( b̄u b̄v ) = lim k−→∞ ( E[bu(Xk)] E[bv(Xk)]) = 0. Note that without the loss of generality, we assume b̄ = 0 which allows for the fixed point of the associated ODEs to be 0. This can be guaranteed by appropriate centering. We define B(Xk) =Auu(Xk)−Auv(Xk)Ā−1vv Āvu B̃(Xk) =Avu(Xk)−Avv(Xk)Ā−1vv Āvu B̄ =Āuu − ĀuvĀ−1vv Avu ¯̃B =Āvu − ĀvvĀ−1vv Āvu. • Assumption 2: We assume that max{‖bu(x)‖, ‖bv(x)‖} ≤ bmax < ∞ for any x ∈ S. We also assume that max{‖B(x)‖, ‖B̃(x)‖, ‖Auu(x)‖, ‖Avu(x)‖, ‖Auv(x)‖, ‖Avv(x)‖} ≤ 1 for any x ∈ S. Note that these assumptions imply that the steady-state limits of the random matrices/vectors will also satisfy the same inequalities. • Assumption 3: We assume Āvv and B̄ are Hurwitz and Āvv is invertible. Let Pu and Pv be the solutions to the following Lyapunov equations: −I = B̄>Pu + PuB̄ −I = Ā>vvPv + PvĀvv. Since both Āvv and B̄ are Hurwitz, Pu and Pv are real positive definite matrices. • Assumption 4: Define τ∆ ≥ 1 to be the mixing time of the Markov chain {Xk}. We assume ‖E[bk|X0 = i]‖ ≤ ∆,∀i,∀k ≥ τ∆ ‖B̄ − E[B(Xk)|X0 = i]‖ ≤ ∆,∀i,∀k ≥ τ∆ ‖ ¯̃B − E[B̃(Xk)|X0 = i]‖ ≤ ∆,∀i,∀k ≥ τ∆ ‖Āuv − E[Auv(Xk)|X0 = i]‖ ≤ ∆,∀i,∀k ≥ τ∆ ‖Āvv − E[Avv(Xk)|X0 = i]‖ ≤ ∆,∀i, ∀k ≥ τ∆. • Assumption 5: As in [Srikant and Ying, 2019], we assume that there exists K ≥ 1 such that τ∆ ≤ K log( 1∆ ). For convenience, we choose ∆ = 2 α ( 1 + ‖Ā−1vv Āvu‖+ β−α ) and drop the subscript from τ∆, i.e., τ∆ = τ . Also, for convenience, we assume that is small enough such that ̃τ ≤ 14 , where ̃ = ∆ = 2 α ( 1 + ‖Ā−1vv Āvu‖+ β−α ) . We further define the following notation: • Define matrix P = ( ξv ξu+ξv Pu 0 0 ξuξu+ξvPv ) , (2) where ξu = 2‖PuĀuv‖ and ξv = 2 ∥∥PvĀ−1vv ĀvuB̄∥∥ . • Let γmax and γmin denote the largest and smallest eigenvalues of Pu and Pv, respectively. So γmax and γmin are also upper and lower bounds on the eigenvalues of P. 3 Finite-Time Performance Bounds To establish the finite-time performance guarantees of the two time-scale linear stochastic approximation algorithm (1), we define Zk = Vk + Ā −1 vv ĀvuUk and Θk = ( Uk Zk ) . Then we consider the following Lyapunov function: W (Θk) = Θ > k PΘk, (3) where P is a symmetric positive definite matrix defined in (2) (P is positive definite because both Pu and Pv are positive definite matrices). The reason to introduce Zk will become clear when we introduce the key idea of our analysis based on singular perturbation theory. The following lemma bounds the expected change in the Lyapunov function in one time step. Lemma 1. 
For any k ≥ τ and , α, and β such that η1̃τ + 2 ̃ 2 α γmax ≤ κ1 2 , the following inequality holds: E[W (Θk+1)−W (Θk)] ≤ − α γmax (κ1 2 − κ2 α−β ) E[W (Θk)] + 2βτη2, where ̃ = 2 α ( 1 + ‖Ā−1vv Āvu‖+ β−α ) , and η1, η2 κ1, and κ2 are constants independent of . The proof of Lemma 1 is somewhat involved, and is provided in the supplementary material. The definitions of η1, η2, κ1 and κ2 can be found in the supplementary material as well. Here, we provide some intuition behind the result by studying a related ordinary differential equation (ODE). In particular, consider the expected change in the stochastic system divided by the slow time-scale step size α: E[Uk+1 − Uk|Uk−τ = u, Vk−τ = v,Xk−τ = x] α =E [ (Auu(Xk)Uk +Auv(Xk)Vk + bu)|Uk−τ = u, Vk−τ = v,Xk−τ = x] α−β E[Vk+1 − Vk|Uk−τ = u, Vk−τ = v,Xk−τ = x] α =E [ (Avu(Xk)Uk +Avv(Xk)Vk + bv(Xk))|Uk−τ = u, Vk−τ = v,Xk−τ = x] , (4) where the expectation is conditioned sufficiently in the past in terms of the underlying Markov chain (i.e. conditioned on the state at time k − τ instead of k) so the expectation is approximately in steady-state. Approximating the left-hand side by derivatives and the right-hand side using steady-state expectations, we get the following ODEs: u̇ =Āuuu+ Āuvv (5) α−β v̇ =Āvuu+ Āvvv. (6) Note that, in the limit as → 0, the second of the above two ODEs becomes an algebraic equation, instead of a differential equation. In the control theory literature, such systems are called singularlyperturbed differential equations, see for example [Kokotovic et al., 1999]. In [Khalil, 2002, Chapter 11], the following Lyapunov equation has been suggested to study the stability of such singularly perturbed ODEs: W (u, v) = du>Puu+ (1− d) ( v + Ā−1vv Āvuu )> Pv ( v + Ā−1vv Āvuu ) , (7) for d ∈ [0, 1]. The function W mentioned earlier in (3) is the same as above for a carefully chosen d. The rationale behind the use of the Lyapunov function (7) is presented in the appendix. The intuition behind the result in Lemma 1 can be understood by studying the dynamics of the above Lyapunov function in the ODE setting. To simplify the notation, we define z = v + Ā−1vv Āvuu, so the Lyapunov function can also be written as W (u, z) = du>Puu+ (1− d)z>Pvz, (8) and adapting the manipulations for nonlinear ODEs in [Khalil, 2002, Chapter 11] to our linear model, we get Ẇ =2duTPuu̇+ 2(1− d)z>Pv ż (9) ≤− (‖u‖ ‖z‖) Ψ̃ ( ‖u‖ ‖z‖ ) , (10) where Ψ̃ = ( d −dγmax − (1− d)γmaxσmin −dγmax − (1− d)γmaxσmin ( 1−d 2 α−β − (1− d)γmaxσmin )) . (11) Note that Ψ̃ is positive definite when d ( 1− d 2 α−β − (1− d)γmaxσmin ) ≥ (dγmax + (1− d)γmaxσmin)2 , (12) i.e., when α−β ≤ d(1− d) 2d(1− d)γmaxσmin + (dγmax + (1− d)γmaxσmin)2 . (13) Let λ̃min denote the smallest eigenvalue of Ψ̃. We have Ẇ ≤ −λ̃min ( ‖u‖2 + ‖z‖2 ) ≤ − λ̃min γmax W. (14) In particular, recall that we obtained the ODEs by dividing by the step-size α. Therefore, for the discrete equations, we would expect E[W (Θk+1)−W (Θk)] ≈≤ − α λ̃min γmax E [W (Θk)] , (15) which resembles the transient term of the upper bound in Lemma 1. The exact expression in the discrete, stochastic case is of course different and additionally includes a steady-state term, which is not captured by the ODE analysis above. Now, we are ready to the state the main theorem. Theorem 1. For any k ≥ τ, , α and β such that η1̃τ + 2 ̃ 2 α γmax ≤ κ1 2 , we have E[‖Θk‖2] ≤ γmax γmin ( 1− α γmax (κ1 2 − κ2 α−β ))k−τ (1.5‖Θ0‖+ 0.5bmax)2 + 2β−α γmax γmin η2τ( κ1 2 − κ2 α−β ) . Proof. 
Applying Lemma 1 recursively, we obtain E[W (Θk)] ≤ uk−τE[W (Θτ )] + v 1− uk−τ 1− u ≤ uk−τE[W (Θk)] + v 1 1− u (16) where u = 1− α γmax ( κ1 2 − κ2 α−β) and v = η2τ 2β . Also, we have that E[‖Θk‖2] ≤ 1 γmin E[W (Θk)] ≤ 1 γmin uk−τE[W (Θτ )] + v 1 γmin(1− u) . (17) Furthermore, E[W (Θτ )] ≤ γmaxE[‖Θτ‖2] ≤ γmaxE[(‖Θτ −Θ0‖+ ‖Θ0‖)2] ≤ γmax ((1 + 2̃τ)‖Θ0‖+ 2̃τ bmax)2 . (18) The theorem then holds using the fact that ̃τ ≤ 14 . Theorem 1 essentially states that the expected error for a two-time scale linear stochastic approximation algorithm comprises two terms: a transient error term which decays geometrically with time and a steady-state error term which is directly proportional to 2β−α and the mixing time. This characterization of the finite-time error is useful in understanding the impact of different algorithmic and problem parameters on the rate of convergence, allowing the design of efficient techniques such as the adaptive learning rate rule which we will present in the next section. 4 Adaptive Selection of Learning Rates Equipped with the theoretical results from the previous section, one interesting question that arises is the following: given a time-scale ratio λ = αβ , can we use the finite-time performance bound to design a rule for adapting the learning rate to optimize performance? In order to simplify the discussion, let β = µ and α = µλ. Therefore, Theorem 1 can be simplified and written as E[‖Θk‖2] ≤K1 ( 1− µλ ( κ1 2γmax − κ2 γmax µλ−1 ))k + µ2−λ K2( κ1 2 − κ2µλ−1 ) (19) where K1 and K2 are problem-dependent positive constants. Since we want the system to be stable, we will assume that µ is small enough such that κ12γmax − κ2 γmax µλ−1 = c > 0. Plugging this condition in (19), we get E[‖Θk‖2] ≤K1 ( 1− cµλ )k + K2µ 2−λ γmaxc (20) In order to optimize performance for a given number of samples, we would like to choose the learning rate µ as a function of the time step. In principle, one can assume time-varying learning rates, derive more general mean-squared error expressions (similar to Theorem 1), and then try to optimize over the learning rates to minimize the error for a given number of samples. However, this optimization problem is computationally intractable. We note that even if we assume that we are only going to change the learning rate a finite number of times, the resulting optimization problem of finding the times at which such changes are performed and finding the learning rate at these change points is an equally intractable optimization problem. Therefore, we have to devise simpler adaptive learning rate rules. To motivate our learning rate rule, we first consider a time T such that errors due to the transient and steady-state parts in (20) are equal, i.e., K1(1− cµλ)T = K2µ 2−λ γmaxc (21) From this time onwards, running the two timescale stochastic approximation algorithm any further with µ as the learning rate is not going to significantly improve the mean-squared error. In particular, the mean-squared error beyond this time is upper bounded by twice the steadystate error K2µ 2−λ γmaxc . Thus, at time T, it makes sense to reset µ as µ ← µ/ξ, where ξ > 1 is a hyperparameter. Roughly speaking, T is the time at which one is close to steady-state for a given learning rate, and therefore, it is the time to reduce the learning rate to get to a new "steady-state" with a smaller error. The key difficulty in implementing the above idea is that it is difficult to determine T . For ease of exposition, we considered a system centered around 0 in our analysis (i.e., Θ∗ = 0). 
More generally, the results presented in Theorem 1 and (19) - (20) will have Θk replaced by Θk−Θ∗. In any practical application, Θ∗ will be unknown. Thus, we cannot determine ‖Θk − Θ∗‖ as a function of k and hence, it is difficult to use this approach. Our idea to overcome this difficulty is to estimate whether the algorithm is close to its steady-state by observing ‖Θk −Θ0‖ where Θ0 is our initial guess for the unknown parameter vector and is thus known to us. Note that ‖Θk −Θ0‖ is zero at k = 0 and will increase (with some fluctuations due to randomness) to ‖Θ∗ −Θ0‖ in steady-state (see Figure 1 for an illustration). Roughly speaking, we approximate the curve in this figure by a sequence of straight lines, i.e., perform a piecewise linear approximation, and conclude that the system has reached steady-state when the lines become approximately horizontal. We provide the details next. To derive a test to estimate whether ‖Θk −Θ0‖ has reached steady-state, we first note the following inequality for k ≥ T (i.e., after the steady-state time defined in (21)): E[‖Θ0 −Θ∗‖]− E[‖Θk −Θ∗‖] ≤E[‖Θk −Θ0‖] ≤ E[‖Θk −Θ∗‖] + E[‖Θ0 −Θ∗‖] ⇒ d− √ 2K2µ2−λ γmaxc ≤E[‖Θk −Θ0‖] ≤ d+ √ 2K2µ2−λ γmaxc (22) where the first pair of inequalities follow from the triangle inequality and the second pair of inequalities follow from (20) - (21), Jensen’s inequality and letting d = E[‖Θ0−Θ∗‖]. Now, for k ≥ T , consider the following N points: {Xi = i, Yi = ‖Θk+i −Θ0‖}Ni=1. Since these points are all obtained after “steady-state" is reached, if we draw the best-fit line through these points, its slope should be small. More precisely, let ψN denote the slope of the best-fit line passing through these N points. Using (22) along with formulas for the slope in linear regression, and after some algebraic manipulations (see Appendix ?? for detailed calculations), one can show that: |E[ψN ]| = O ( µ1− λ 2 N ) , Var(ψN ) = O ( 1 N2 ) (23) Therefore, if N ≥ χ µ λ 2 , then the slope of the best-fit line connecting {Xi, Yi} will be O ( µ1− λ 2 N ) with high probability (for a sufficiently large constant χ > 0). On the other hand, when the algorithm is in the transient state, the difference between ‖Θk+m −Θ0‖ and ‖Θk −Θ0‖ will be O(mµ) since Θk changes by O(µ) from one time slot to the next (see Lemma 3 in Appendix ?? for more details). Using this fact, the slope of the best-fit line through N consecutive points in the transient state can be shown to be O (µ), similar to (23). Since we choose N ≥ χ µ λ 2 , the slope of the best-fit line in steady state, i.e., O ( µ1− λ 2 N ) will be lower than the slope of the best-fit line in the transient phase, i.e., O (µ) (for a sufficiently large χ). We use this fact as a diagnostic test to determine whether or not the algorithm has entered steady-state. If the diagnostic test returns true, we update the learning rate (see Algorithm 1). Algorithm 1 Adaptive Learning Rate Rule Hyperparameters: ρ, σ, ξ,N Initialize µ = ρ, ψN = 2σµ1− λ 2 , Θ0, Θini = Θ0. for i = 1, 2, ... do Do two time-scale algorithm update. Compute ψN = Slope ( {k, ‖Θi−k −Θini‖}N−1k=0 ) . if ψN < σµ 1−λ 2 N then µ = µξ . Θini = Θi. end if end for We note that our adaptive learning rate rule will also work for single time-scale reinforcement learning algorithms such as TD(λ) since our expressions for the mean-square error, when specialized to the case of a single time-scale, will recover the result in [Srikant and Ying, 2019] (see [Gupta et al., 2019] for more details). 
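For readers who prefer code, the following is a minimal sketch of Algorithm 1's slope diagnostic and learning-rate reduction. The two time-scale update itself is left abstract, the slope is computed over the last N points in forward time (matching the scaling in (23)), and all function names are ours rather than the paper's.

```python
import numpy as np

def best_fit_slope(ys):
    """Slope of the least-squares line through (1, ys[0]), ..., (N, ys[-1])."""
    xs = np.arange(1, len(ys) + 1)
    return np.polyfit(xs, ys, 1)[0]

def adaptive_two_time_scale(update, theta0, rho, sigma, xi, N, lam, num_iters):
    """Sketch of Algorithm 1: run the update with learning rate mu, track
    ||Theta_i - Theta_ini||, and divide mu by xi whenever the best-fit slope over
    the last N points drops below sigma * mu**(1 - lam/2) / N, i.e., the iterate
    appears to have reached steady state for the current mu."""
    mu = rho
    theta = theta0.copy()
    theta_ini = theta0.copy()
    history = []
    for _ in range(num_iters):
        theta = update(theta, mu, lam)   # one step with step sizes mu**lam (slow), mu (fast)
        history.append(np.linalg.norm(theta - theta_ini))
        if len(history) >= N:
            slope = best_fit_slope(history[-N:])
            if slope < sigma * mu ** (1 - lam / 2) / N:
                mu /= xi                 # reduce the learning rate (xi > 1)
                theta_ini = theta.copy() # restart distance tracking from the new reference
                history = []
    return theta
```

In the experiments below, the analogous rate update is applied at the episode level rather than per iteration.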
Therefore, an interesting question that arises from (19) is whether one can optimize the rate of convergence with respect to the time-scale ratio λ? Since the RHS in (19) depends on a variety of problem-dependent parameters, it is difficult to optimize it over λ. An interesting direction of further research is to investigate if practical adaptive strategies for λ can be developed in order to improve the rate of convergence further. 5 Experiments We implemented our adaptive learning rate schedule on two popular classic control problems in reinforcement learning - Mountain Car and Inverted Pendulum, and compared its performance with the optimal polynomial decay learning rate rule suggested in [Dalal et al., 2017b] (described in the next subsection). See Appendix ?? for more details on the Mountain Car and Inverted Pendulum problems. We evaluated the following policies using the two time-scale TDC algorithm (see [Sutton et al., 2009] for more details regarding TDC): • Mountain Car - At each time step, choose a random action ∈ {0, 2}, i.e., accelerate randomly to the left or right. • Inverted Pendulum - At each time step, choose a random action in the entire action space, i.e., apply a random torque ∈ [−2.0, 2.0] at the pivot point. Since the true value of Θ∗ is not known in both the problems we consider, to quantify the performance of the TDC algorithm, we used the error metric known as the norm of the expected TD update (NEU, see [Sutton et al., 2009] for more details). For both problems, we used a O(3) Fourier basis (see [Konidaris et al., 2011] for more details) to approximate the value function and used 0.95 as the discount factor. 5.1 Learning Rate Rules and Tuning 1. The optimal polynomial decay rule suggested in [Dalal et al., 2017b] is the following: at time step k, choose αk = 1 (k+1)α and β k = 1 (k+1)β , where α → 1 and β → 23 . For our experiments, we chose α = 0.99 and β = 0.66. This implies λ = αβ = 1.5. Since the problems we considered require smaller initial step-sizes for convergence, we let αk = ρ0 (k+1)α and β k = ρ0 (k+1)β and did a grid search to determine the best ρ0, i.e., the best initial learning rate. The following values for ρ0 were found to be the best: Mountain Car - ρ0 = 0.2, Inverted Pendulum - ρ0 = 0.2. 2. For our proposed adaptive learning rate rule, we fixed ξ = 1.2, N = 200 in both problems since we did not want the decay in the learning rate to be too aggressive and the resource consumption for slope computation to be high. We also set λ = 1.5 as in the polynomial decay case to have a fair comparison. We then fixed ρ and conducted a grid search to find the best σ. Subsequently, we conducted a grid search over ρ. Interestingly, the adaptive learning rate rule was reasonably robust to the value of ρ. We used ρ = 0.05 in Inverted Pendulum and ρ = 0.1 in Mountain Car. Effectively, the only hyperparameter that affected the rule’s performance significantly was σ. The following values for σ were found to be the best: Mountain Car - σ = 0.001, Inverted Pendulum - σ = 0.01. 5.2 Results For each experiment, one run involved the following: 10, 000 episodes with the number of iterations in each episode being 50 and 200 for Inverted Pendulum and Mountain Car respectively. After every 1, 000 episodes, training/learning was paused and the NEU was computed by averaging over 1, 000 test episodes. We initialized Θ0 = 0. For Mountain Car, 50 such runs were conducted and the results were computed by averaging over these runs. 
For Inverted Pendulum, 100 runs were conducted and the results were computed by averaging over these runs. Note that the learning rate for each adaptive strategy was adapted at the episodic level due to the episodic nature of the problems. The results are reported in Figures 2a and 2b. As is clear from the figures, our proposed adaptive learning rate rule significantly outperforms the optimal polynomial decay rule. 6 Conclusion We have presented finite-time bounds quantifying the performance of two time-scale linear stochastic approximation algorithms. The bounds give insight into how the different time-scale and learning rate parameters affect the rate of convergence. We utilized these insights and designed an adaptive learning rate selection rule. We implemented our rule on popular classical control problems in reinforcement learning and showed that the proposed rule significantly outperforms the optimal polynomial decay strategy suggested in literature. Acknowledgements Research supported by ONR Grant N00014-19-1-2566, NSF Grants CPS ECCS 1739189, NeTS 1718203, CMMI 1562276, ECCS 16-09370, and NSF/USDA Grant AG 2018-67007-28379. Lei Ying’s work supported by NSF grants CNS 1618768, ECCS 1609202, IIS 1715385, ECCS 1739344, CNS 1824393 and CNS 1813392.
1. What is the originality of the paper's contributions, particularly in comparison to previous works such as Srikant & Ying (2019)? 2. What are the strengths of the paper regarding its quality, clarity, and significance? 3. Are there any minor issues or typos in the paper that can be improved? 4. How does the reviewer assess the novelty and impact of the paper's second contribution, specifically the adaptive learning rate scheduling algorithm? 5. Have the authors adequately addressed the reviewer's concerns in their rebuttal, and what is the final recommendation regarding the paper's acceptance?
Review
Review
Originality: Though it seems that the proof of the bounds obtained in the paper follows along the lines of [Srikant & Ying, 2019], I don't think the extension is trivial. The key difference is the construction of a Lyapunov function that is motivated by singular perturbation theory. The second contribution, an algorithm that adaptively schedules the learning rate, is novel and not something I have seen before.
Quality and Clarity: Except for minor typos, the paper was well-written and easy to read. Specifically, the intuition for Lemma 1 obtained by studying the associated ODEs was very useful.
Significance: As stated above, the paper provides good foundations as well as directions for future research.
** After reading the rebuttal ** The authors addressed my concerns well, and I would still like to recommend acceptance of the paper.