ICLR
Title: Kernel Deformed Exponential Families for Sparse Continuous Attention

Abstract: Attention mechanisms take an expectation of a data representation with respect to probability weights. This creates summary statistics that focus on important features. Recently, Martins et al. (2020; 2021) proposed continuous attention mechanisms, focusing on unimodal attention densities from the exponential and deformed exponential families: the latter has sparse support. Farinhas et al. (2021) extended this to use Gaussian mixture attention densities, a flexible class with dense support. In this paper, we extend this to two general flexible classes: kernel exponential families (Canu & Smola, 2006) and our new sparse counterpart, kernel deformed exponential families. Theoretically, we show new existence results for both kernel exponential and deformed exponential families, and that the deformed case has approximation capabilities similar to those of kernel exponential families. Experiments show that kernel deformed exponential families can attend to multiple compact regions of the data domain.

1 INTRODUCTION

Attention mechanisms take weighted averages of data representations (Bahdanau et al., 2015), where the weights are a function of input objects. These are then used as inputs for prediction. Discrete attention 1) cannot easily handle data where observations are irregularly spaced, and 2) may produce scattered attention maps that lack focus. Martins et al. (2020; 2021) extended attention to continuous settings, showing that attention densities maximize the regularized expectation of a function of the data location (e.g. time). Special case solutions lead to exponential and deformed exponential families: the latter has sparse support. They form a continuous data representation and take expectations with respect to attention densities. Using measure theory to unify discrete and continuous approaches, they show that transformer self-attention (Vaswani et al., 2017) is a special case of their formulation.

Martins et al. (2020; 2021) explored unimodal attention densities: these give high importance to only one region of the data. Farinhas et al. (2021) extended this to multimodal mixture of Gaussian attention densities. However, 1) mixtures of Gaussians do not lie in either the exponential or deformed exponential families, and are difficult to study in the framework of Martins et al. (2020; 2021), and 2) they have dense support. Sparse support can express that certain regions of data do not matter: a region of time has no effect on class probabilities, or a region of an image is not part of some object. We would like to use multimodal exponential and deformed exponential family attention densities, and to understand how Farinhas et al. (2021) relates to the optimization problem of Martins et al. (2020; 2021).

This paper makes three contributions: 1) we introduce kernel deformed exponential families, a multimodal class of densities with sparse support, and apply them along with the multimodal kernel exponential families (Canu & Smola, 2006) to attention mechanisms; the latter have been used for density estimation, but not for weighting data importance; 2) we theoretically analyze normalization for both kernel exponential and deformed exponential families in terms of a base density and kernel, and show approximation properties for the latter; 3) we apply them to real world datasets and show that kernel deformed exponential families learn flexible continuous attention densities with sparse support.
Approximation properties for the kernel deformed case are challenging: the analogous kernel exponential family results (Sriperumbudur et al., 2017) relied on standard exponential and logarithm properties to bound the difference of the log-partition functional at two functions, and these do not hold for the deformed analogues. We provide similar bounds via the functional mean value theorem along with bounding the Frechet derivative of the deformed log-partition functional.

The paper is organized as follows: we review continuous attention (Martins et al., 2020; 2021). We then describe how mixture of Gaussian attention densities, used in Farinhas et al. (2021), solve a different optimization problem. We next describe kernel exponential families and give a novel normalization condition relating the kernel growth to the base density's tail decay. We then propose kernel deformed exponential families, which can have support over disjoint regions. We describe normalization and prove approximation capabilities. Next we describe the use of these densities for continuous attention, including experiments where we show that the kernel deformed case learns multimodal attention densities with sparse support. We conclude with limitations and future work.

2 RELATED WORK

Attention Mechanisms: Closely related are Martins et al. (2020; 2021); Farinhas et al. (2021); Tsai et al. (2019); Shukla & Marlin (2021). Martins et al. (2020; 2021) frame continuous attention as an expectation of a value function over the domain with respect to a density, where the density solves an optimization problem. They only used unimodal exponential and deformed exponential family densities: we extend this to the multimodal setting by leveraging kernel exponential families and proposing a deformed counterpart. Farinhas et al. (2021) proposed a multimodal continuous attention mechanism via a mixture of Gaussians approach. We show that this solves a slightly different optimization problem from Martins et al. (2020; 2021), and extend to two further general density classes. Shukla & Marlin (2021) provide an attention mechanism for irregularly sampled time series via a continuous-time kernel regression framework, but do not actually take an expectation of a data representation over time with respect to a continuous pdf: they evaluate the kernel regression model at a fixed set of time points to obtain a discrete representation. This describes the importance of data at a set of points rather than over regions. Other papers connect attention and kernels, but focus on the discrete attention setting (Tsai et al., 2019; Choromanski et al., 2020). Also relevant are temporal transformer papers, including Xu et al. (2019); Li et al. (2019; 2020); Song et al. (2018). However, none have continuous attention densities.

Kernel Exponential Families: Canu & Smola (2006) proposed kernel exponential families; Sriperumbudur et al. (2017) analyzed theory for density estimation. Wenliang et al. (2019) parametrized the kernel with a deep neural network. Other density estimation papers include Arbel & Gretton (2018); Dai et al. (2019); Sutherland et al. (2018). We apply kernel exponential families as attention densities to weight a value function which represents the data, rather than to estimate the data density, and extend similar ideas to kernel deformed exponential families with sparse support. Wenliang et al. (2019) showed a condition for an unnormalized kernel exponential family density to have a finite normalizer. However, they used exponential power base densities.
We instead relate kernel growth rates to the base density tail decay, allowing non-symmetric base densities. To summarize our theoretical contributions: 1) introducing kernel deformed exponential families with approximation and normalization analysis; 2) improved kernel exponential family normalization results.

3 CONTINUOUS ATTENTION MECHANISMS

An attention mechanism involves: 1) the value function, which approximates a data representation (this may be the original data or a learned representation); 2) the attention density, which is chosen to be 'similar' to another data representation, encoding it into a density; 3) the context, which combines the two, taking an expectation of the value function with respect to the attention density. Formally, the context is

c = E_{T∼p}[V(T)]. (1)

Here the value function V(t) approximates a data representation, T ∼ p(t) is the random variable or vector for locations (temporal, spatial, etc.), and p(t) is the attention density. To choose the attention density p, one takes a data representation f and finds p 'similar' to f and thus to a data representation, while regularizing p. Martins et al. (2020; 2021) did this, providing a rigorous formulation of attention mechanisms. Given a probability space (S, A, Q), let M¹₊(S) be the set of densities with respect to Q. Assume that Q is dominated by a measure ν (e.g. Lebesgue) and that it has density q₀ = dQ/dν with respect to ν. Let S ⊆ R^D, F be a function class, and Ω : M¹₊(S) → R be a lower semi-continuous, proper, strictly convex functional. Given f ∈ F, an attention density (Martins et al., 2020) p̂ : F → R_{≥0} solves

p̂[f] = argmax_{p ∈ M¹₊(S)} ⟨p, f⟩_{L²(Q)} − Ω(p). (2)

This maximizes regularized L² similarity between p and a data representation f. If Ω(p) is the negative differential entropy, the attention density is Boltzmann-Gibbs,

p̂[f](t) = exp(f(t) − A(f)), (3)

where A(f) ensures ∫_S p̂[f](t)dQ = 1. If f(t) = θᵀφ(t) for parameters θ ∈ R^M and statistics φ(t) ∈ R^M, Eqn. 3 becomes an exponential family density. For f in a reproducing kernel Hilbert space H, it becomes a kernel exponential family density (Canu & Smola, 2006), which we propose to use as an alternative attention density. One desirable class would be heavy or thin tailed exponential family-like densities. In exponential families, the support, or non-negative region of the density, is controlled by the measure Q. Letting Ω(p) be the α-Tsallis negative entropy Ω_α(p) (Tsallis, 1988),

Ω_α(p) = (1/(α(α−1))) (∫_S p(t)^α dQ − 1) if α ≠ 1;  ∫_S p(t) log p(t) dQ if α = 1,

then p̂[f] for f(t) = θᵀφ(t) lies in the deformed exponential family (Tsallis, 1988; Naudts, 2004)

p̂_{Ω_α}[f](t) = exp_{2−α}(θᵀφ(t) − A_α(f)), (4)

where A_α(f) again ensures normalization and the density uses the β-exponential

exp_β(t) = [1 + (1−β)t]_+^{1/(1−β)} if β ≠ 1;  exp(t) if β = 1. (5)

For β < 1, Eqn. 5, and thus deformed exponential family densities for 1 < α ≤ 2, can return 0 values. Values α > 1 (and thus β < 1) give thinner tails than the exponential family, while α < 1 gives fatter tails. Setting β = 0 is called sparsemax (Martins & Astudillo, 2016). In this paper, we assume 1 < α ≤ 2, which is the sparse case studied in Martins et al. (2020). We again propose to replace f(t) = θᵀφ(t) with f ∈ H, which leads to the novel kernel deformed exponential families. Computing Eqn. 1's context vector requires parametrizing V(t). Martins et al.
(2020) obtain a value function V : S → R^D parametrized by B ∈ R^{D×N} by applying regularized multivariate linear regression to estimate V(t; B) = BΨ(t), where Ψ = {ψ_n}_{n=1}^N is a set of basis functions. Let L be the number of observation locations (times in a temporal setting), O be the observation dimension, and N be the number of basis functions. This involves regressing the observation matrix H ∈ R^{O×L} on a matrix F ∈ R^{N×L} of basis functions {ψ_n}_{n=1}^N evaluated at observation locations {t_l}_{l=1}^L:

B* = argmin_B ‖BF − H‖²_F + λ‖B‖²_F. (6)

3.1 GAUSSIAN MIXTURE MODEL

Farinhas et al. (2021) used mixture of Gaussian attention densities, but did not relate this to the optimization definition of attention densities in Martins et al. (2020; 2021). In fact their attention densities solve a related but different optimization problem. Martins et al. (2020; 2021) show that exponential family attention densities maximize a regularized linear predictor of the expected sufficient statistics of locations. In contrast, Farinhas et al. (2021) find a joint density over locations and latent states, and maximize a regularized linear predictor of the expected joint sufficient statistics. They then take the marginal location densities to be the attention densities. Let Ω(p) be Shannon entropy and consider two optimization problems:

argmax_{p ∈ M¹₊(S)} ⟨θ, E_p[φ(T)]⟩_{l²} − Ω(p)
argmax_{p ∈ M¹₊(S)} ⟨θ, E_p[φ(T, Z)]⟩_{l²} − Ω(p)

The first is Eqn. 2 with f = θᵀφ(t), rewritten to emphasize expected sufficient statistics. If one solves the second, with latent variables Z, we recover an exponential family joint density p̂_{Ω_α}[f](t, z) = exp(θᵀφ(t, z) − A(θ)). This encourages the joint density of T, Z to be similar to a complete data representation θᵀφ(t, z) of both location variables T and latent variables Z, instead of encouraging the density of T to be similar to an observed data representation θᵀφ(t). The latter optimization is equivalent to

argmax_{p ∈ M¹₊(S)} Ω(p)  s.t.  E_{p(T,Z)}[φ_m(T, Z)] = c_m, m = 1, · · · , M.

The constraint terms c_m are determined by θ. Thus, this maximizes the joint entropy of Z and T, subject to constraints on the expected joint sufficient statistics. To recover EM-learned Gaussian mixture densities, one must select φ_m so that the marginal distribution of T will be a mixture of Gaussians, and relate c_m to the EM algorithm used to learn the mixture model parameters. For the first, assume that Z is a multinomial random variable taking |Z| possible values and let

φ(t, z) = (z₁, z₂, · · · , z_{|Z|−1}, I(z = 1)t, I(z = 1)t², · · · , I(z = |Z|)t, I(z = |Z|)t²).

These are multinomial sufficient statistics, followed by the sufficient statistics of |Z| Gaussians multiplied by indicators for each z. Then p(T|Z) will be Gaussian, p(Z) will be multinomial, and p(T) will be a Gaussian mixture. For constraints, Farinhas et al. (2021) have

E_{p(T,Z)}[φ_m(T, Z)] = Σ_{l=1}^L w_l Σ_{z=1}^{|Z|} p_old(z|t_l) φ_m(t_l, z), m = 1, · · · , M (7)

at each EM iteration. Here p_old(z|t_l) is the previous iteration's latent state density conditional on the observation value, w_l are discrete attention weights, and t_l is a discrete attention location. That EM has this constraint was shown in Wang et al. (2012). Intuitively, this matches the expected joint sufficient statistics to those implied by discrete attention over locations, taking into account the dependence between z and t_l given by old model parameters. An alternative is simply to let θ be the output of a neural network. While the constraints lack the intuition of Eqn. 7, this avoids the need to select an initialization. We focus on this case and use it for our baselines: both approaches are valid.
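Returning briefly to the value function of Eqn. 6: since that objective is standard ridge regression, B* has the closed form B* = HFᵀ(FFᵀ + λI)⁻¹. Below is a minimal NumPy sketch of this fit; the Gaussian-bump basis functions, bandwidth, and shapes are our own illustrative assumptions, not the authors' choices.

```python
import numpy as np

def fit_value_function(H, F, lam=1e-3):
    """Closed-form ridge solution of Eqn. 6: B* = H F^T (F F^T + lam * I)^{-1}.

    H:   (O, L) observations at L locations.
    F:   (N, L) basis functions evaluated at the L observation locations.
    lam: ridge penalty lambda.
    Returns B of shape (O, N) so that V(t; B) = B @ psi(t).
    """
    N = F.shape[0]
    return H @ F.T @ np.linalg.inv(F @ F.T + lam * np.eye(N))

# Hypothetical usage with Gaussian bumps as the basis functions psi_n:
L, O, N = 50, 3, 16
t_obs = np.sort(np.random.rand(L))                 # irregular observation times in [0, 1]
centers = np.linspace(0, 1, N)
F = np.exp(-0.5 * ((centers[:, None] - t_obs[None, :]) / 0.05) ** 2)  # (N, L)
H = np.random.randn(O, L)                          # stand-in multivariate time series
B = fit_value_function(H, F)
V = lambda t: B @ np.exp(-0.5 * ((centers - t) / 0.05) ** 2)  # V(t; B) in R^O
```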
4 KERNEL EXPONENTIAL AND DEFORMED EXPONENTIAL FAMILIES

We now use kernel exponential families and a new deformed counterpart to obtain flexible attention densities solving Eqn. 2 with the same regularizers. We first review kernel exponential families. We then give a novel theoretical result describing when an unnormalized kernel exponential family density can be normalized. Next we introduce kernel deformed exponential families, extending kernel exponential families to have either sparse support or fatter tails: we focus on the former. These can attend to multiple non-overlapping time intervals or spatial regions. We show similar normalization results based on the choice of kernel and base density. Following this we show approximation theory. We conclude by showing how to compute attention densities in practice.

Kernel exponential families (Canu & Smola, 2006) extend exponential family distributions, replacing f(t) = θᵀφ(t) with f in a reproducing kernel Hilbert space H (Aronszajn, 1950) with kernel k : S × S → R. Densities can be written as

p(t) = exp(f(t) − A(f)) = exp(⟨f, k(·, t)⟩_H − A(f)),

where the second equality follows from the reproducing property. A challenge is to choose H, Q so that a normalizing constant exists, i.e., ∫_S exp(f(t))dQ < ∞. Kernel exponential family distributions can approximate any continuous density over a compact domain arbitrarily well in KL divergence, Hellinger, and L^p distance (Sriperumbudur et al., 2017). However, relevant integrals including the normalizing constant generally require numerical integration. To avoid infinite dimensionality one generally assumes a representation of the form

f = Σ_{i=1}^I γ_i k(·, t_i),

where for density estimation (Sriperumbudur et al., 2017) the t_i are the observation locations. However, this requires using one parameter per observation value. This level of model complexity may not be necessary, and often one chooses a set of inducing points (Titsias, 2009) {t_i}_{i=1}^I where I is less than the number of observation locations. For a given pair H, k, how can we choose Q to ensure that the normalization constant exists? We first give a simple example of H, f and Q where the normalizing constant does not exist.

Example 1. Let Q be the law of a N(0, 1) distribution and S = R. Let H = span{t³, t⁴} with k(x, y) = x³y³ + x⁴y⁴ and f(t) = t³ + t⁴ = k(t, 1). Then

∫_S exp(f(t))dQ = (1/√(2π)) ∫_R exp(−t²/2 + t³ + t⁴)dt, (8)

where the integral diverges since the t⁴ term dominates the Gaussian factor.

4.1 THEORY FOR KERNEL EXPONENTIAL FAMILIES

We provide sufficient conditions on Q and H so that the log-partition function A(f) exists. We relate H's kernel growth rate to the tail decay of the random variable or vector T_Q with law Q.

Proposition 1. Let p̃(t) = exp(f(t)) where f ∈ H, an RKHS with kernel k. Assume k(t, t) ≤ L_k‖t‖₂^ξ + C_k for constants L_k, C_k, ξ > 0. Let Q be the law of a random vector T_Q, so that Q(A) = P(T_Q ∈ A). Assume that ∀u s.t. ‖u‖₂ = 1,

P(|uᵀT_Q| ≥ z) ≤ C_Q exp(−vz^η) (9)

for some constants η > ξ/2, C_Q, v > 0. Then ∫_S p̃(t)dQ < ∞.

Proof. See Appendix A.1.

Based on k(t, t)'s growth, we can vary what tail decay rate for T_Q ensures we can normalize p̃(t). Wenliang et al. (2019) also proved normalization conditions, but focused on random variables with exponential power density for a specific growth rate of k(t, t) rather than relating tail decay to growth rate. By focusing on tail decay, our result can be applied to non-symmetric base densities.
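To make the construction concrete, here is a minimal NumPy sketch of a kernel exponential family attention density with inducing points and a numerically computed normalizer. A Gaussian kernel is bounded, so k(t, t) ≤ C_k (the ξ = 0 case of Corollary 1 below) and any base density works; the kernel, bandwidth, inducing points, and grid are our own illustrative assumptions.

```python
import numpy as np

def gauss_kernel(x, y, bw=0.1):
    """Bounded Gaussian RKHS kernel k(x, y)."""
    return np.exp(-0.5 * ((x - y) / bw) ** 2)

def kef_density(grid, gammas, inducing, bw=0.1):
    """Kernel exponential family density p(t) = exp(f(t) - A(f)) q0(t) on a grid,
    with f(t) = sum_i gamma_i k(t, t_i) and a N(0, 1) base density q0.
    The normalizer Z = exp(A(f)) is computed by numerical integration."""
    f = sum(g * gauss_kernel(grid, ti, bw) for g, ti in zip(gammas, inducing))
    q0 = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)
    p_tilde = np.exp(f) * q0
    Z = np.sum(p_tilde) * (grid[1] - grid[0])   # Riemann-sum normalizer
    return p_tilde / Z, Z

# Hypothetical usage: a bimodal attention density from I = 2 inducing points.
grid = np.linspace(-4, 4, 2001)
p, Z = kef_density(grid, gammas=[3.0, 3.0], inducing=[-1.5, 1.5])
print(Z)   # finite, as Proposition 1 guarantees for a bounded kernel
```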
Specific kernel growth rate exponents ξ allow different tail decay rates.

Corollary 1. For ξ = 4, T_Q can be any sub-Gaussian random vector. For ξ = 2 it can be any sub-exponential random vector. For ξ = 0 it can have any density.

Proof. See Appendix A.2.

4.2 KERNEL DEFORMED EXPONENTIAL FAMILIES

We now propose kernel deformed exponential families: flexible sparse non-parametric distributions. These take deformed exponential families and extend them to use kernels in the deformed exponential term. This mirrors kernel exponential families. We write

p(t) = exp_{2−α}(f(t) − A_α(f)),

where f ∈ H with kernel k. Fig. 1b shows that they can have support over disjoint intervals.

4.2.1 NORMALIZATION THEORY

We construct a valid kernel deformed exponential family density from Q and f ∈ H. We first discuss the deformed log-normalizer. In exponential family densities, the log-normalizer is the log of the normalizer. For deformed exponentials, the following holds.

Lemma 1. Let Z > 0 be a constant. Then for 1 < α ≤ 2,

(1/Z) exp_{2−α}(Z^{α−1} f(t)) = exp_{2−α}(f(t) − log_α Z),

where log_β t = (t^{1−β} − 1)/(1 − β) if t > 0, β ≠ 1; log(t) if t > 0, β = 1; and is undefined if t ≤ 0.

Proof. See Appendix B.1.

We describe a normalization sufficient condition analogous to Proposition 1 for the sparse deformed kernel exponential family. With Lemma 1, we can take an unnormalized exp_{2−α}(f̃(t)) and derive a valid normalized kernel deformed exponential family density. We only require that an affine function of the terms in the deformed exponential is negative for large magnitude t.

Proposition 2. For 1 < α ≤ 2 assume p̃(t) = exp_{2−α}(f̃(t)) with f̃ ∈ H, where H is an RKHS with kernel k. If ∃ C_t > 0 s.t. for ‖t‖₂ > C_t, (α − 1)f̃(t) + 1 ≤ 0, and k(t, t) ≤ L_k‖t‖₂^ξ + C_k for some ξ > 0, then ∫_S exp_{2−α}(f̃(t))dQ < ∞.

Proof. See Appendix B.2.

We now construct a valid kernel deformed exponential family density using the finite integral.

Corollary 2. Under the conditions of Proposition 2, assume exp_{2−α}(f̃(t)) > 0 on a set A ⊆ S such that Q(A) > 0. Then ∃ constants Z > 0, A_α(f) ∈ R such that for f(t) = (1/Z^{α−1}) f̃(t), the following holds:

∫_S exp_{2−α}(f(t) − A_α(f))dQ = 1.

Proof. See Appendix B.3.

We thus estimate f̃(t) = Z^{α−1} f(t) and normalize to obtain a density of the desired form.
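A minimal NumPy sketch of this construction follows: form f̃ from kernel weights, compute Z = ∫ exp_{2−α}(f̃)dQ numerically, and normalize, which by Lemma 1 yields exp_{2−α}(f − log_α Z). We assume a uniform base density q₀ on [0, 1] (so dQ = dt on the grid); the kernel, weights, and inducing points are illustrative, not the authors' settings.

```python
import numpy as np

def exp_beta(x, beta):
    """beta-exponential of Eqn. 5: [1 + (1 - beta) * x]_+^{1/(1 - beta)}."""
    if beta == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - beta) * x, 0.0) ** (1.0 / (1.0 - beta))

def kdef_density(grid, gammas, inducing, alpha=2.0, bw=0.1):
    """Normalized kernel deformed exponential family on a grid (Corollary 2):
    p~(t) = exp_{2-alpha}(f~(t)) with f~ = sum_i gamma~_i k(., t_i), then
    p = p~ / Z = exp_{2-alpha}(f - log_alpha Z) with Z computed numerically."""
    k = lambda x, y: np.exp(-0.5 * ((x - y) / bw) ** 2)   # Gaussian kernel
    f_tilde = sum(g * k(grid, ti) for g, ti in zip(gammas, inducing))
    p_tilde = exp_beta(f_tilde, 2.0 - alpha)              # uniform q0 on [0, 1]
    Z = np.sum(p_tilde) * (grid[1] - grid[0])             # numerical normalizer
    return p_tilde / Z

# Hypothetical usage: alpha = 2 (the sparsemax case); a negative weight at the
# middle inducing point drives the density to exactly zero between two bumps.
grid = np.linspace(0.0, 1.0, 2001)
p = kdef_density(grid, gammas=[4.0, 4.0, -6.0], inducing=[0.2, 0.8, 0.5])
print((p == 0).any())   # True: sparse support over disjoint intervals
```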
4.2.2 APPROXIMATION THEORY

Under certain kernel conditions, kernel deformed exponential family densities can approximate densities of a similar form where the RKHS function is replaced with a C₀(S) function (a continuous function on domain S vanishing at infinity).

Proposition 3. Define P₀ = {π_f(t) = exp_{2−α}(f(t) − A_α(f)), t ∈ S : f ∈ C₀(S)}, where S ⊆ R^d. Suppose k(x, ·) ∈ C₀(S) ∀x ∈ S and

∫∫ k(x, y)dµ(x)dµ(y) > 0, ∀µ ∈ M_b(S)\{0}, (10)

where M_b(S) is the space of bounded measures over S. Then the set of kernel deformed exponential families is dense in P₀ w.r.t. L^r(Q) norm and Hellinger distance.

Proof. See Appendix B.4.

We apply this to approximate fairly general densities with kernel deformed exponential families.

Theorem 1. Let q₀ ∈ C(S) be such that q₀(t) > 0 for all t ∈ S, where S ⊆ R^d is locally compact Hausdorff and q₀(t) is the density of Q with respect to a dominating measure ν. Suppose there exists l > 0 such that for any ε > 0, ∃ R > 0 satisfying |p(t) − l| ≤ ε for any t with ‖t‖₂ > R. Define

P_c = {p ∈ C(S) : ∫_S p(t)dQ = 1, p(t) ≥ 0 ∀t ∈ S, and p − l ∈ C₀(S)}.

Suppose k(t, ·) ∈ C₀(S) ∀t ∈ S and the kernel integration condition (Eqn. 10) holds. Then kernel deformed exponential families are dense in P_c w.r.t. L^r norm, Hellinger distance, and Bregman divergence for the α-Tsallis negative entropy functional.

Proof. See Appendix B.5.

For uniform q₀, kernel deformed exponential families can thus approximate continuous densities on compact domains arbitrarily well. Our Bregman divergence result is analogous to the KL divergence result in Sriperumbudur et al. (2017). KL divergence is Bregman divergence with the Shannon entropy functional: we show the same for Tsallis entropy. The Bregman divergence here describes how close the uncertainty in a density is to its first order approximation evaluated at another density. Using the Tsallis entropy functional here is appropriate for deformed exponential families: they maximize it given expected sufficient statistics (Naudts, 2004). These results extend Sriperumbudur et al. (2017)'s approximation results to the deformed setting, where standard log and exponential rules cannot be applied. The Bregman divergence case requires bounding Frechet derivatives and applying the functional mean value theorem.

4.3 USING KERNELS FOR CONTINUOUS ATTENTION

We apply kernel exponential and deformed exponential families to attention. The forward pass computes attention densities and the context vector. The backward pass uses automatic differentiation. We assume a vector representation v ∈ R^{|v|} computed from the locations we take an expectation over. For kernel exponential families we compute kernel weights {γ_i}_{i=1}^I for f(t) = Σ_{i=1}^I γ_i k(t, t_i) via γ_i = w_iᵀ v, and compute Z = ∫_S exp(f(t))dQ numerically. For the deformed case we compute γ̃_i = w_iᵀ v and f̃(t) = Z^{α−1} f(t) = Σ_{i=1}^I γ̃_i k(t, t_i), followed by Z = ∫_S exp_{2−α}(f̃(t))dQ. The context c = E_{T∼p}[V(T)] = B E_p[Ψ(T)] requires taking the expectation of Ψ(T) with respect to a (possibly deformed) kernel exponential family density p. Unlike Martins et al. (2020; 2021), who obtained closed form expectations, difficult normalizing constants prevent us from doing so. We thus use numerical integration for the forward pass and automatic differentiation for the backward pass. Algorithm 1 shows how to compute a continuous attention mechanism for a kernel deformed exponential family attention density. The kernel exponential family case is similar.

Algorithm 1: Continuous Attention Mechanism via Kernel Deformed Exponential Families
  Choose: base density q₀(t), kernel k, and inducing point locations {t_i}_{i=1}^I.
  Input: vector representation v of the input object (e.g. a document representation).
  Parameters: weight vectors {w_i}_{i=1}^I giving {γ̃_i}_{i=1}^I, the weights for f̃(t) = Z^{α−1} f(t) = Σ_{i=1}^I γ̃_i k(t, t_i); matrix B for the basis.
  1. Compute γ̃_i = w_iᵀ v and form f̃(t) = Σ_{i=1}^I γ̃_i k(t, t_i).
  2. Compute Z = ∫_S exp_{2−α}(f̃(t))dQ by numerical integration.
  3. Form the attention density p(t) = (1/Z) exp_{2−α}(f̃(t)) = exp_{2−α}(f(t) − log_α Z).
  4. Return the context c = B E_p[Ψ(T)], computing E_p[ψ_n(T)] by numerical integration.
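The sketch below shows one head of Algorithm 1 in PyTorch (the paper's implementation framework), using grid-based numerical integration in the forward pass so that autograd handles the backward pass. We again assume a uniform base density q₀ on [0, 1]; the Gaussian kernel, shapes, and basis functions are our own assumptions rather than the authors' released code.

```python
import torch

def deformed_attention_context(v, W, B, inducing, grid, psi, alpha=2.0, bw=0.1):
    """One head of Algorithm 1 (a sketch under our own shape conventions).

    v:        (dv,) input representation.
    W:        (I, dv) rows w_i, so gamma~_i = w_i^T v.
    B:        (D, N) value-function coefficients, V(t) = B psi(t).
    inducing: (I,) inducing point locations t_i.
    grid:     (G,) integration grid over S = [0, 1] with uniform q0.
    psi:      callable, psi(grid) -> (N, G) basis functions.
    Returns the context c = B E_p[psi(T)] in R^D.
    """
    gammas = W @ v                                                # (I,)
    k = torch.exp(-0.5 * ((grid[None, :] - inducing[:, None]) / bw) ** 2)  # (I, G)
    f_tilde = gammas @ k                                          # (G,)
    # exp_{2-alpha}(x) = [1 + (alpha - 1) x]_+^{1/(alpha - 1)}
    p_tilde = torch.clamp(1 + (alpha - 1) * f_tilde, min=0) ** (1 / (alpha - 1))
    Z = torch.trapz(p_tilde, grid)                                # numerical normalizer
    p = p_tilde / Z                                               # attention density on grid
    Epsi = torch.trapz(psi(grid) * p[None, :], grid, dim=-1)      # E_p[psi_n(T)], (N,)
    return B @ Epsi                                               # backward pass via autograd

# Hypothetical usage with Gaussian-bump basis functions:
dv, I, N, D, G = 8, 10, 16, 4, 512
grid = torch.linspace(0, 1, G)
centers = torch.linspace(0, 1, N)
psi = lambda g: torch.exp(-0.5 * ((centers[:, None] - g[None, :]) / 0.05) ** 2)
c = deformed_attention_context(torch.randn(dv), torch.randn(I, dv),
                               torch.randn(D, N), torch.linspace(0, 1, I), grid, psi)
```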
5 EXPERIMENTS

For document classification, we follow Martins et al. (2020)'s architecture. For the remaining tasks, architectures have four parts: 1) an encoder takes a discrete representation of a time series and outputs attention density parameters; 2) the value function takes a time series representation (original or after passing through a neural network) and does (potentially multivariate) linear regression to obtain parameters B for a function V(t; B); these are combined to compute 3) the context c = E_p[V(T)], which is used in 4) a classifier. Fig. 2 in the Appendices visualizes this.

5.1 DOCUMENT CLASSIFICATION

We extend Martins et al. (2020)'s code (https://github.com/deep-spin/quati) for the IMDB sentiment classification dataset (Maas et al., 2011). This starts with a document representation v computed from a convolutional neural network and uses an LSTM attention model. We use a Gaussian base density and kernel, and divide the interval [0, 1] into I = 10 inducing points where we evaluate the kernel in f(t) = Σ_{i=1}^I γ_i k(t, t_i). We set the bandwidth to 0.01 for I = 10. Table 1 shows results. On average, the kernel exponential and deformed exponential families slightly outperform the continuous softmax and sparsemax, although the results are essentially the same. The continuous softmax/sparsemax results are from their code.

5.2 UWAVE DATASET

We analyze uWave (Liu et al., 2009): accelerometer time series with eight gesture classes. We follow Li & Marlin (2016)'s split into 3,582 training observations and 896 test observations; sequences have length 945. We synthetically induce irregular sampling by uniformly sampling 10% of the observations. Table 2 shows results. Our highest accuracy is 94.26%, the unimodal case's best is 74.69%, and the mixture's best is 81.13%. Since this dataset is small, we report ±1.96 standard deviations from 10 runs. Fig. 1 shows that attention densities have support over non-overlapping time intervals. This cannot be done with Gaussian mixtures, and the intervals would be the same for each density in the exponential family case. Appendix C describes additional details.

6 ECG HEARTBEAT CLASSIFICATION

We use the MIT-BIH Arrhythmia Database (Goldberger et al., 2000) via Kaggle (https://www.kaggle.com/mondejar/mitbih-database). The task is to detect abnormal heart beats from ECG signals. The five classes are {Normal, Supraventricular premature, Premature ventricular contraction, Fusion of ventricular and normal, Unclassifiable}. There are 87,553 training samples and 21,891 test samples. Our value function is trained using the output of repeated convolutional layers: the final layer has 256 dimensions and 23 time points. Our encoder is a feedforward neural network with the original data as input, and our classifier is a feedforward network. Table 3 shows results. All accuracies are very similar, but the F1 score of kernel sparsemax is drastically higher. Additional details are in Appendix D.

7 DISCUSSION

In this paper we extend continuous attention mechanisms to use kernel exponential and deformed exponential family attention densities. The latter is a new flexible class of non-parametric densities with sparse support. We show novel existence properties for both kernel exponential and deformed exponential families, and prove approximation properties for the latter. We then apply these to the framework described in Martins et al. (2020; 2021) for continuous attention. We show results on three datasets: sentiment classification, gesture recognition, and arrhythmia classification. In the first case performance is similar to unimodal attention, in the second it is drastically better, and in the third it is similar in the dense case and drastically better in the sparse case.

7.1 LIMITATIONS AND FUTURE WORK

A limitation of this work is the use of numerical integration, which scales poorly with the dimensionality of the locations. Because of this we restricted our applications to temporal and text data. This still allows for multiple observation dimensions at a given location. A future direction would be to use variance-reduced Monte Carlo to approximate the integral, as well as studying how to choose the number of basis functions in the value function and the number of inducing points.

A PROOF RELATED TO PROPOSITION 1

A.1 PROOF OF PROPOSITION 1

Proof. This proof has several parts. We first bound the RKHS function f and use the general tail bound we assumed to give a tail bound for the one dimensional marginals T_Qd of T_Q.
Using the RKHS function bound, we then bound the integral of the unnormalized density in terms of expectations with respect to these finite dimensional marginals. We then express these expectations over finite dimensional marginals as infinite series of integrals. For each integral within the infinite series, we use the finite dimensional marginal tail bound to bound it, and then use the ratio test to show that the infinite series converges. This gives us that the original unnormalized density has a finite integral. We first note, following Wenliang et al. (2019), that the bound on the kernel in the assumption allows us to bound f in terms of two constants and the absolute value of the point at which it is evaluated. |f(t)| = |〈f, k(t, ·)〉H| reproducing property ≤ ‖f‖H‖k(t, ·)‖H Cauchy Schwarz = ‖f‖H √ 〈k(t, ·), k(t, ·)〉H = ‖f‖H √ k(t, t) ≤ ‖f‖H √ Lk‖t‖ξ + Ck by assumption ≤ C0 + C1‖t‖|ξ|/2 for some C1, C2 > 0. We can write TQ = (TQ1, · · · , TQD). Let ed be a standard Euclidean basis vector. Then by the assumption and setting u = ed we have P (|TQd| ≥ z) ≤ Cq exp(−vzη) Letting Qd be the marginal law,∫ S exp(f(t))dQ ≤ ∫ S exp(C0 + C1‖t‖ξ/2)dQ = exp(C0) ∫ S exp(C1‖t‖ξ/2)dQ = exp(C0)E exp(C1‖TQ‖ξ/2) ≤ exp(C0)E exp(C1( √ d max d=1,··· ,D |TQd|)ξ/2) ≤ exp(C0) D∑ d=1 E exp(C2|TQd|ξ/2) which will be finite if each E exp(C2|TQd|ξ/2) < ∞. Now letting Sd be the relevant dimension of S, E exp(C2|TQd|ξ/2) = ∫ Sd exp(C2|td|ξ/2)dQd ≤ −1∑ j=−∞ ∫ j+1 j exp(C2|td|ξ/2)dQd + ∞∑ j=0 ∫ j+1 j exp(C2|td|ξ/2)dQd where the inequality follows since Sd ⊆ R, exp is a non-negative function and probability measures are monotonic. We will show that the second sum converges. Similar techniques can be shown for the first sum. Note that for j ≥ 0 Qd([j, j + 1)) = P (Td ≥ j)− P (Td ≥ j + 1) ≤ P (Td ≥ j) ≤ Cq exp(−vjη) by assumption Then ∞∑ j=0 ∫ j+1 j exp(C2|td|ξ/2)dQd ≤ ∞∑ j=0 exp(C2|j|ξ/2)Qd([j, j + 1)) ≤ ∞∑ j=0 CQ exp(C2|j|ξ/2 − vjη) Let aj = exp(C2|j|ξ/2 − viη). We will use the ratio test to show that the RHS converges. We have∣∣∣∣aj+1aj ∣∣∣∣ = exp(C2((j + 1)ξ/2 − jξ/2)− v[(j + 1)η − jη]). (11) We want this ratio to be < 1 for large j. We thus need to select η so that for sufficiently large j, we have C1 v ((j + 1)ξ/2 − jξ/2) < [(j + 1)η − jη]. Assume that η > ξ2 . Then (j + 1)η − jη (j + 1)ξ/2 − jξ/2 = jη[(1 + 1j ) η − 1] jξ/2[(1 + 1j ) ξ/2 − 1] ≥ jη−ξ/2. Since the RHS is unbounded for η > ξ2 , we have that Eqn. 11 holds for sufficiently large j. By the ratio test Eqd(t) exp(C2|Td|ξ/2) = ∑−1 j=−∞ ∫ j+1 j exp(C2|td|ξ/2)dQd +∑∞ j=0 ∫ j+1 j exp(C2|td|ξ/2)dQd is finite. Thus putting everything together we have∫ S exp(f(t))dQ ≤ ∫ S exp(C0 + C1‖t‖ξ/2)dQ < exp(C0) D∑ d=1 E exp(C2|TQd|ξ/2) <∞ and p̃(t) can be normalized. A.2 PROOF OF COROLLARY 1 Proof. Let ξ = 4. Then η > 2 and P (|uTT | > t) ≤ P (|uTT | ≥ t) monotonicity ≤ CQ exp(−vtη) < CQ exp(−vt2). The second case is similar. For the uniformly bounded kernel,∫ S exp(〈f, k(·, t)〉H)dQ ≤ exp(‖f‖H √ Ck) ∫ S dQ = exp(‖f‖H √ Ck) <∞. The first line follows from Cauchy Schwarz and ξ = 0 B PROOFS RELATED TO KERNEL DEFORMED EXPONENTIAL FAMILY B.1 PROOF OF LEMMA 1 Proof. The high level idea is to express a term inside the deformed exponential family that becomes 1/Z once outside. 
exp2−α(f(t)− logα(Z)) = [1 + (α− 1)(f(t)− logα Z)] 1 α−1 + = [1 + (α− 1)f(t)− (α− 1)Z 1−α − 1 1− α ] 1 α−1 + = [1 + (α− 1)f(t) + Z1−α − 1] 1 α−1 + = [(α− 1)f(t) + Z1−α] 1 α−1 + = [(α− 1)f(t)Z α−1 Zα−1 + Z1−α)] 1 α−1 + = 1 Z [(α− 1)f(t) 1 Zα−1 + 1] 1 α−1 + = 1 Z exp2−α(Z α−1f(t)) B.2 PROOF OF PROPOSITION 2 Proof. ∫ S exp2−α(f̃(t))dQ = ∫ S [1 + (α− 1)f̃(t)] 1 α−1 + dQ = ∫ ‖t‖≤Ct [1 + (α− 1)f̃(t)] 1 α−1 + dQ ≤ ∫ ‖t‖≤Ct [1 + (α− 1)(C0 + C1|Ct|ξ/2)] 1 α−1 + dQ <∞ B.3 PROOF OF COROLLARY 2 Proof. From proposition 2 and the assumption,∫ S exp2−α(f̃(t))dQ = Z for some Z > 0. Then ∫ S 1 Z exp2−α(Z α−1f(t))dQ = 1∫ S exp2−α(f(t)− logα Z)dQ = 1 where the second line follows from lemma 1. Set Aα(f) = logα(Z) and we are done. B.4 PROOF OF PROPOSITION 3 Proof. The kernel integration condition tells us that H is dense in C0(S) with respect to L∞ norm. This was shown in Sriperumbudur et al. (2011). For the Lr norm, we apply ‖pf − pg‖Lr ≤ 2Mexp‖f − g‖∞ from Lemma 5 with f ∈ C0(S), g ∈ H, and f0 = f . L1 convergence implies Hellinger convergence. B.5 PROOF OF THEOREM 1 Proof. For any p ∈ Pc, define pδ = p+δ1+δ . Then ‖p− pδ‖r = δ 1 + δ ‖p‖r → 0 for 1 ≤ r ≤ ∞. Thus for any > 0,∃δ > 0 such that for any 0 < θ < δ , we have ‖p− pθ‖r ≤ , where pθ(t) > 0 for all t ∈ S. Define f = ( 1+θ l+θ )1−α log2−α pθ 1+θ l+θ . Since p ∈ C(S), so is f . Fix any η > 0 and note that f(t) ≥ η( 1 + θ l + θ )1−α log2−α pθ 1 + θ l + θ ≥ η log2−α pθ 1 + θ l + θ ≥ ( 1 + θ l + θ )α−1 η pθ 1 + θ l + θ ≥ exp2−α (( 1 + θ l + θ )α−1 η ) pθ ≥ l + θ 1 + θ exp2−α (( 1 + θ l + θ )α−1 η ) p− l ≥ (l + θ) ( exp2−α (( 1 + θ l + θ )α−1 η ) − 1 ) Thus A = {t : f(t) ≥ η} = { p− l ≥ (l + θ) ( exp2−α (( 1 + θ l + θ )α−1 η ) − 1 )} Since p − l ∈ C0(S) the set on the second line is bounded. Thus A is bounded so that f ∈ C0(S). Further, by Lemma 1 pθ = exp2−α ( f − logα 1 + θ l + θ ) giving us pθ ∈ P0. By Proposition 3 there is some pg in the deformed kernel exponential family so that ‖pθ − pg‖Lr(S) ≤ . Thus ‖p − pg‖r ≤ 2 for any 1 ≤ r ≤ ∞. To show convergence in Helinger distance, note H2(p, pg) = 1 2 ∫ S ( √ p−√pg)2dQ = 1 2 ∫ S (p− 2√ppg + pg)dQ ≤ 1 2 ∫ S (p− 2 min(p, pg) + pg)dQ = 1 2 ∫ S |p− pg|dQ = 1 2 ‖p− pg‖1 4actually an equality, see https://www2.cs.uic.edu/ zhangx/teaching/bregman.pdf for proof so that L1(S) convergence, which we showed, implies Hellinger convergence. Let us consider the Bregman divergence. Note the generalized triangle inequality4 for Bregman divergence BΩα(p, pg) = BΩα(p, pθ)︸ ︷︷ ︸ I +BΩα(pθ, pg)︸ ︷︷ ︸ II −〈p− pθ,∇Ωα(pθ)−∇Ωα(pg)〉2︸ ︷︷ ︸ III (12) Term I BΩα(p, pθ) = 1 α(α− 1) ∫ S (pα − pαθ )dQ− 〈∇Ωα(pθ), p− pθ〉 = 1 α(α− 1) ∫ S (pα − pαθ )dQ− 1 α− 1 ∫ pα−1θ (p− pθ)dQ ≤ 1 α(α− 1) ∫ S (pα − pαθ )dQ+ 1 α− 1 ‖pα−1θ ‖1‖‖p− pθ‖∞ The first term on the rhs clearly vanishes as θ → 0. For the second term, we already showed that ‖p− pθ‖∞ → 0. Term II Fix θ. Then term II converges to 0 by Lemma 5. Term III For term III , 〈p− pθ,∇Ωα(pθ)−∇Ωα(pg)〉2 ≤ ‖p− pθ‖∞‖∇Ωα(pθ)−∇Ωα(pg)‖1 Clearly the first term on the rhs converges by Lr convergence. The L1 term for the gradient is given by ‖∇Ωα(pθ)−∇Ωα(pg)‖1 = 1 α− 1 ∫ |pθ(t)α−1 − pg(t)α−1|dQ ≤ ∫ (‖pθ‖∞ + ‖pθ − pg‖∞)α−2‖pθ − pg‖∞dQ Eqn. 17 = (‖pθ‖∞ + ‖pθ − pg‖∞)α−2‖pθ − pg‖∞ so that the inner product terms are bounded as |〈p− pθ,∇Ωα(pθ)−∇Ωα(pg)〉2| ≤ (‖pθ‖∞ + ‖pθ − pg‖∞)α−2‖pθ − pg‖∞‖p− pθ‖∞ Lemma 2. (Functional Taylor’s Theorem) Let F : X → R where X is a Banach space. Let f, g ∈ X and let F be k times Gateaux differentiable. 
Then we can write F (g) = k−1∑ i=0 F i(f)(g − f)i + F k(f + c(g − f))(g − f)k for some c ∈ [0, 1]. Proof. This is simply a consequence of expressing a functional as a function of an ∈ [0, 1], which restricts the functional to take input functions between two functions. We then apply Taylor’s theorem to the function and apply the chain rule for Gateaux derivatives to obtain the resulting Taylor remainder theorem for functionals. Let G(η) = F (f + η(g − f)). By Taylor’s theorem we have G(1) = G(0) +G′(0) + · · ·+Gk(c) and applying the chain rule gives us F (g) = k−1∑ i=0 F i(f)(g − f)i + F k(f + c(g − f))(g − f)k Lemma 3. (Functional Mean Value Theorem) Let F : X → R be a functional where f, g ∈ X some Banach space with norm ‖ · ‖. Then |F (f)− F (g)| ≤ ‖F ′(h)‖op‖f − g‖ where h = g + c(f − g) for some c ∈ [0, 1], F ′(h) is the Gateaux derivative of F , and ‖ · ‖op is the operator norm ‖A‖op = inf{c > 0 : ‖Ax‖ ≤ c‖x‖∀x ∈ X}. Proof. Consider G(η) = F (g + η(f − g)). Apply the ordinary mean value theorem to obtain G(1)−G(0) = G′(c), c ∈ [0, 1] = F ′(g + c(f − g)) · (f − g) and thus |F (f)− F (g)| ≤ ‖F ′(h)‖op‖f − g‖ Claim 1. Consider P∞ = {pf = exp2−α(f − Aα(f)) : f ∈ L∞(S)}. Then for pf ∈ P∞, Aα(f) ≤ ‖f‖∞. Proof. pf (t) = exp2−α(f(t)−Aα(f)) ≤ exp2−α(‖f‖∞ −Aα(f)) for 1 < α ≤ 2∫ S pf (t)dQ ≤ ∫ S exp2−α(‖f‖∞ −Aα(f))dQ 1 ≤ exp2−α(‖f‖∞ −Aα(f)) log2−α 1 ≤ ‖f‖∞ −Aα(f) Aα(f) ≤ ‖f‖∞ where for the second line recall that we assumed that throughout the paper 1 < α ≤ 2. Lemma 4. ConsiderP∞ = {pf = exp2−α(f−Aα(f)) : f ∈ L∞(S)}. Then the Frechet derivative of Aα : L∞ → R exists. It is given by the map A′(f)(g) = Ep̃2−αf (g(T )) = ∫ p2−αf (t)g(t)dQ∫ p2−αf (t)dQ Proof. This proof has several parts. We first derive the Gateaux differential of pf in a direction ψ ∈ L∞ and as it depends on the Gateaux differential of Aα(f) in that direction, we can rearrange terms to recover the latter. We then show that it exists for any f, ψ ∈ L∞. Next we show that the second Gateaux differential of Aα(f) exists, and use that along with a functional Taylor expansion to prove that the first Gateaux derivative is in fact a Frechet derivative. In Martins et al. (2020) they show how to compute the gradient of Aα(θ) for the finite dimensional case: we extend this to the Gateaux differential. We start by computing the Gateaux differential of pf . d dη pf+ηψ(t) = d dη exp2−α(f(t) + ηψ(t)−Aα(f + ηψ)) = d dη [1 + (α− 1)(f(t) + ηψ(t)−Aα(f + ηψ))]1/(α−1)+ = [1 + (α− 1)(f(t) + ηψ(t)−Aα(f + ηψ))](2−α)/(α−1)+ ( ψ(t)− d dη Aα(f + ηψ) ) = p2−αf+ηψ(t) ( ψ(t)− d dη Aα(f + ηψ) ) evaluating at η = 0 gives us dp(f ;ψ)(t) = p2−αf (ψ(t) + dAα(f ;ψ)) Note that by claim 1 we have pf+ηψ(t) = exp2−α(f(t) + ηψ(t)−Aα(f + ηψ(t)) ≤ exp2−α(2‖f‖∞ + 2η‖ψ‖∞) ≤ exp2−α(2(‖f‖∞ + ‖ψ‖∞)) We can thus apply the dominated convergence theorem to pull a derivative with respect to η under an integral. We can then recover the Gateaux diferential of Aα via 0 = d dη ∣∣∣∣ η=0 ∫ pf+ηψ(t)dQ = ∫ dp(f ;ψ)(t)dQ = ∫ pf (t) 2−α(ψ(t)− dAα(f ;ψ))dQ dAα(f ;ψ) = Ep̃2−αf (ψ(T )) <∞ where the last line follows as ψ ∈ L∞. Thus the Gateaux derivative exists in L∞ directions. The derivative at f maps ψ :→ Ep̃2−αf (ψ(T )) i.e. A ′ α(f)(ψ) = Ep̃2−αf (ψ(T )). We need to show that this is a Frechet derivative. To do so, we will take take second derivatives of pf+ηψ(t) with respect to η in order to obtain second derivatives of Aα(f + ηψ) with respect to η. We will then construct a functional second order Taylor expansion. 
By showing that the second order terms converge sufficiently quickly, we will prove that the map ψ :→ Ep̃2−αf (ψ(T )) is a Frechet derivative. d2 dη2 pf+ηψ(t) = d dη pf+ηψ(t) 2−α ( ψ(t)− d dη Aα(f + ηψ) ) = ( d dη pf+ηψ(t) 2−α )( ψ(t)− d dη Aα(f + ηψ) ) − pf+ηψ(t)2−α d2 dη2 Aα(f + ηψ) = (2− α)pf+ηψ(t)(ψ(t)− d dη Aα(f + ηψ)) d dη pf+ηψ(t) − pf+ηψ(t)2−α d2 dη2 Aα(f + ηψ) = (2− α)p3−2αf+ηψ(ψ(t)− d dη Aα(f + ηψ)) 2 − pf+ηψ(t)2−α d2 dη2 Aα(f + ηψ) We need to show that we can again pull the second derivative under the integral. We already showed that we can pull the derivative under once (for the first derivative) and we now need to do it again. We need to show an integrable function that dominates pf+ηψ(t)2−α(ψ(t)− Ep̃2−αf+ηψψ(T )). |pf+ηψ(t)2−α(ψ(t)− Ep̃2−αf+ηψψ(T ))| ≤ pf+ηψ(t) 2−α2‖ψ‖∞ ≤ exp2−α(2(‖f‖∞ + ‖ψ‖∞))2‖ψ‖∞ which is in L1(Q). Now applying the dominated convergence theorem 0 = ∫ d2 d 2 pf+ ψ(t)dQ = ∫ [ (2− α)p3−2αf+ ψ(ψ(t)− d d Aα(f + ψ)) 2 − pf+ ψ(t)2−α d2 d 2 Aα(f + ψ) ] dQ and rearranging gives d2 d 2 Aα(f + ψ) = (2− α) ∫ p3−2αf+ ψ(ψ(t)− d d Aα(f + ψ)) 2dQ∫ pf+ ψ(t)2−αdQ d2 d 2 Aα(f) ∣∣∣∣ =0 = (2− α) ∫ p3−2αf (ψ(t)− Ep̃2−αf [ψ(T )]) 2dQ∫ pf (t)2−αdQ since f, ψ ∈ L∞. For the functional Taylor expansion, we have from Lemma 2 Aα(f + ψ) = Aα(f) +A ′ α(f)(ψ) + 1 2 A′′α(f + ψ)(ψ) 2 for some ∈ [0, 1]. We thus need to show that for ∈ [0, 1], (2− α) 1 ‖ψ‖∞ ∫ p3−2αf+ ψ(ψ(t)− Ep̃2−αf+ ψ [ψ(T )]) 2dQ∫ pf+ ψ(t)2−αdQ ψ→0→ 0 It suffices to show that the numerator tends to 0 as ψ → 0.∣∣∣∣ 1‖ψ‖∞ (ψ(t)− Ep̃2−αf+ ψ [ψ(T )])2 ∣∣∣∣ = ∣∣∣∣∣ ψ(t)‖ψ‖∞ψ(t)− ψ(t)‖ψ‖∞ 2Ep̃2−αf+ ψ [ψ(T )] + Ep̃2−αf+ ψ [ψ(T )] ‖ψ‖∞ Ep̃2−αf+ ψ [ψ(T )] ∣∣∣∣∣ ≤ ∣∣∣∣ ψ(t)‖ψ‖∞ ∣∣∣∣ ∣∣∣ψ(t)− 2Ep̃2−αf+ ψ [ψ(T )]∣∣∣ + ∣∣∣∣Ep̃2−αf+ ψ ψ(T )‖ψ‖∞ ∣∣∣∣ ∣∣∣Ep̃2−αf+ ψ [ψ(T )]∣∣∣ ≤ ∣∣∣ψ(t)− 2Ep̃2−αf+ ψ [ψ(T )]∣∣∣+ ‖pf+ ψ‖2−α2−α ∣∣∣Ep̃2−αf+ ψ [ψ(T )]∣∣∣ → 0 as ψ → 0 and plugging this in we obtain the desired result. Thus the Frechet derivative of Aα(f) exists. Lemma 5. Define P∞ = {pf = exp2−α(f −Aα(f)) : f ∈ L∞(S)} where L∞(S) is the space of almost surely bounded measurable functions with domain S. Fix f0 ∈ L∞. Then for any fixed > 0 and pg, pf ∈ P∞ such that f, g ∈ B ∞ (f0) the L ∞ closed ball around f0, there exists constant Mexp > 0 depending only on f0 such that ‖pf − pg‖Lr ≤ 2Mexp‖f − g‖∞ Further BΩα(pf , pg) ≤ 1 α− 1 ‖pf − pg‖∞[(‖pf‖∞ + ‖pf − pg‖∞)α−1 + exp2−α(2‖g‖∞)] Proof. This Lemma mirrors Lemma A.1 in Sriperumbudur et al. (2017), but the proof is very different as they rely on the property that exp(x + y) = exp(x) exp(y), which does not hold for β-exponentials. We thus had to strengthen the assumption to include that f and g lie in a closed ball, and then use the functional mean value theorem Lemma 3 as the main technique to achieve our result. Consider that by functional mean value inequality, ‖pf − pg‖Lr = ‖ expβ(f −Aα(f))− expβ(g −Aα(g))‖Lr ≤ ‖ expβ(h−Aα(h))2−α‖∞(‖f − g‖∞ + |Aα(f)−Aα(g)|) (13) where h = cf + (1− c)g for some c ∈ [0, 1]. We need to bound expβ(h− Aα(h)) and ‖Aα(f)− Aα(g)‖∞. We can show a bound on ‖h‖∞ ‖h‖∞ = ‖cf + (1− c)g − f0 + f0‖∞ ≤ ‖c(f − f0) + (1− c)(g − f0) + f0‖∞ ≤ c‖f − f0‖∞ + (1− c)‖g − f0‖∞ + ‖f0‖∞ ≤ + ‖f0‖∞ so that h is bounded. Now we previously showed in claim 1 that |Aα(h)| ≤ ‖h‖∞ ≤ + ‖f0‖∞. Since h,Aα(h) are both bounded expβ(h−Aα(h))2−α is also. Now note that by Lemma 3, |Aα(f)−Aα(g)| ≤ ‖A′α(h)‖op‖f − g‖∞ We need to show that ‖A′α(h)‖op is bounded for f, g ∈ B (f0). Note that in Lemma 4 we showed that |A′α(f)(g)| = |Ep2−αf [g(T )]| ≤ ‖g‖∞ Thus ‖A′α‖op = sup{|A′α(h)(m)| : ‖m‖∞ = 1} ≤ 1. 
Let M_exp be the bound on exp_β(h − A_α(h)). Then putting everything together we have the desired result:

‖p_f − p_g‖_{L^r} ≤ 2 M_exp ‖f − g‖_∞.

Now

B_{Ω_α}(p_f, p_g) = Ω_α(p_f) − Ω_α(p_g) − ⟨∇Ω_α(p_g), p_f − p_g⟩₂. (14)

For the inner product term, first note that following Martins et al. (2020) the gradient is given by

∇Ω_α(p_g)(t) = p_g(t)^{α−1} / (α − 1). (15)

Thus

|⟨∇Ω_α(p_g), p_f − p_g⟩₂| ≤ ‖∇Ω_α(p_g)‖₁ ‖p_f − p_g‖_∞ = (1/(α−1)) ∫_S exp_{2−α}(g(t) − A(g))dQ · ‖p_f − p_g‖_∞ ≤ (1/(α−1)) exp_{2−α}(2‖g‖_∞) ‖p_f − p_g‖_∞,

where the last inequality follows from Claim 1. Further note that by Taylor's theorem,

y^α = x^α + α z^{α−1}(y − x) (16)

for some z between x and y. Then letting y = p_f(t) and x = p_g(t), we have for some z = h(t) lying between p_f(t) and p_g(t) that

p_f(t)^α = p_g(t)^α + α h(t)^{α−1}(p_f(t) − p_g(t)).

Since f ∈ L^∞, applying Claim 1 gives that each of p_f, p_g ∈ L^∞, and thus h is too. Then

|p_f(t)^α − p_g(t)^α| = α|h(t)|^{α−1}|p_f(t) − p_g(t)| ≤ α‖h‖_∞^{α−1} ‖p_f − p_g‖_∞ ≤ α max{‖p_f‖_∞, ‖p_g‖_∞}^{α−1} ‖p_f − p_g‖_∞ ≤ α(‖p_f‖_∞ + ‖p_f − p_g‖_∞)^{α−1} ‖p_f − p_g‖_∞, (17)

so that

|Ω_α(p_f) − Ω_α(p_g)| = |(1/(α(α−1))) ∫ (p_f(t)^α − p_g(t)^α)dQ| ≤ (1/(α−1)) (‖p_f‖_∞ + ‖p_f − p_g‖_∞)^{α−1} ‖p_f − p_g‖_∞.

Putting it all together, we obtain

B_{Ω_α}(p_f, p_g) ≤ (1/(α−1)) (‖p_f‖_∞ + ‖p_f − p_g‖_∞)^{α−1} ‖p_f − p_g‖_∞ + (1/(α−1)) exp_{2−α}(2‖g‖_∞) ‖p_f − p_g‖_∞ = (1/(α−1)) ‖p_f − p_g‖_∞ [(‖p_f‖_∞ + ‖p_f − p_g‖_∞)^{α−1} + exp_{2−α}(2‖g‖_∞)].

C UWAVE EXPERIMENTS: ADDITIONAL DETAILS

We experiment with N = 64, 128, and 256 basis functions, and use a learning rate of 1e−4. We use H = 100 attention mechanisms, or heads. Unlike Vaswani et al. (2017), our use of multiple heads is slightly different, as we use the same value function for each head and only vary the attention densities. Additional architectural details are given below.

C.1 VALUE FUNCTION

The value function uses regularized linear regression on the original time series observed at random observation times (which do not depend on the data) to obtain an approximation V(t; B) = BΨ(t) ≈ X(t). The H in Eqn. 6 is the original time series.

C.1.1 ENCODER

In the encoder, we use the value function to interpolate the irregularly sampled time series at the original points. This is then passed through a convolutional layer with 4 filters and filter size 5, followed by a max pooling layer with pool size 2. This is followed by one hidden layer with 256 units and an output v of size 256. The attention density parameters for each head h = 1, · · · , H are then

µ_h = w_{h,1}ᵀ v,  σ_h = softplus(w_{h,2}ᵀ v),  γ_h = W^{(h)} v

for vectors w_{h,1}, w_{h,2} and matrices W^{(h)}, h = 1, · · · , H.

C.1.2 ATTENTION MECHANISM

After forming densities and normalizing, we have densities p₁(t), · · · , p_H(t), which we use to compute context scalars c_h = E_{p_h}[V(T)]. We compute these expectations using numerical integration to compute basis function expectations E_{p_h}[ψ_n(T)] and a parametrized value function V(t) = BΨ(t), as described in Section 3.

C.1.3 CLASSIFIER

The classifier takes as input the concatenated context scalars as a vector. A linear layer is then followed by a softmax activation to output class probabilities.

D MIT BIH: ADDITIONAL DETAILS

Our architecture takes some inspiration for the H used in our value function from a GitHub repository (https://github.com/CVxTz/ECG_Heartbeat_Classification), although they used TensorFlow and we implemented our method in PyTorch.

D.1 VALUE FUNCTION

The value function regresses the output of repeated convolutional and max pooling layers on basis functions, where the original time series is the input to these convolutional/max pooling layers. All max pool layers have pool size 2. There are multiple sets of two convolutional layers followed by a max pooling layer.
The first set of convolutional layers has 16 filters of size 5. The second and third sets each have 32 filters of size 3. The fourth has one layer with 32 filters and one with 256, each of size 3. The final output has 256 dimensions of length 23. This is then used as our H matrix in Eqn. 6.

D.2 ENCODER

The encoder takes the original time series as input. It has one hidden layer with a ReLU activation function and 64 hidden units. It outputs the attention density parameters.

D.3 ATTENTION MECHANISM

The attention mechanism takes the parameters from the encoder and forms an attention density. It then computes

c = E_p[V(T)] (18)

for input to the classifier.

D.4 CLASSIFIER

The classifier has two hidden layers with ReLU activations and outputs class probabilities. Each hidden layer has 64 hidden units.
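To illustrate the multi-head parametrization of Appendix C.1.1 (µ_h = w_{h,1}ᵀv, σ_h = softplus(w_{h,2}ᵀv), γ_h = W^{(h)}v), here is a minimal PyTorch sketch. The module name, layer structure, and sizes are our own assumptions, not the authors' released code; each head then defines its own attention density (e.g. via the kernel deformed construction above) and contributes a context c_h = E_{p_h}[V(T)] that the classifier consumes.

```python
import torch
import torch.nn as nn

class AttentionHeads(nn.Module):
    """Per-head attention density parameters, a sketch of Appendix C.1.1."""

    def __init__(self, dv=256, n_heads=100, n_inducing=10):
        super().__init__()
        self.mu = nn.Linear(dv, n_heads, bias=False)         # stacks the w_{h,1}
        self.sigma = nn.Linear(dv, n_heads, bias=False)      # stacks the w_{h,2}
        self.gamma = nn.Linear(dv, n_heads * n_inducing, bias=False)  # stacks W^{(h)}
        self.n_heads, self.n_inducing = n_heads, n_inducing

    def forward(self, v):
        mu = self.mu(v)                                      # (H,) unimodal means
        sigma = nn.functional.softplus(self.sigma(v))        # (H,) positive scales
        gammas = self.gamma(v).view(self.n_heads, self.n_inducing)  # (H, I) kernel weights
        return mu, sigma, gammas

# Hypothetical usage: one encoder output v of size 256 yields all head parameters.
mu, sigma, gammas = AttentionHeads()(torch.randn(256))
```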
1. What is the main contribution of the paper regarding attention mechanisms?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the significance and practicality of the theoretical development presented in the paper?
4. What are the limitations of the experimental section, and what improvements could be made to demonstrate the effectiveness and efficiency of the proposed method?
5. Are there any concerns or suggestions regarding the computational cost and implementation of the proposed attention mechanism?
Summary Of The Paper

The manuscript considers an extension of the attention mechanism framework developed by others in recent work (by Martins et al.). Specifically, this framework allows one to break free from the discrete nature of attention, which typically consists of a weighted average of a finite set of vectors. This is realized by estimating a probability mass function (PMF) over the finite collection of vectors, and then computing the expected value. The generalization allows one to extend the finite collection to a continuum. This is then handled by using a probability density function (pdf) over the collection to compute an expected value. In recent work, various authors considered the probability distribution used in defining the attention mechanism to belong to either a unimodal exponential family, a "deformed" exponential family (deformed versions having possibly finite support), or a mixture of Gaussians. The traditional exponential family comprises pdfs that possess a finite set of sufficient statistics. In contrast, the kernel exponential family comprises pdfs that have essentially infinitely many sufficient statistics through the use of a kernel. The current manuscript proposes to employ a kernel exponential family, and a deformed kernel exponential family. This allows them to work with multimodal probability distributions, and/or distributions with compact support. The authors lay out conditions under which the kernel versions of the (deformed) exponential family are defined. They also apply the new attention schemes to several datasets.

Review

Overall, I think the subject that the manuscript aims to extend is of high current interest. But I think the authors have focused on an overly dry/technical aspect of the problem. The main theoretical contribution seems to be the statement of conditions under which a kernel exponential distribution exists (i.e., the normalization constant is finite). I don't have any technical objections to this development. But an attention mechanism is a means for obtaining improved performance, and I would have expected a clearer demonstration that the theoretical development is worth considering from a practical standpoint (i.e., is computationally light enough to be part of a deep network, and improves performance significantly). In that respect, I found the experiments to be thin in terms of demonstrating how and why one should consider the proposed alternative over existing ones. Instead of considering three very short experiments, it would be more useful to focus on one experiment with a clear explanation of the computational load, and the steps taken in order to compute the attention vector. The improvement obtained by using the proposed mechanism was also not clear to me -- for instance, in the IMDB example, the results are close to those obtained when the attention mechanism is continuous sparsemax (from previous work). For uWave, the proposed attention mechanism does better than the alternatives, but the accuracy obtained is around that reported in the original paper (from 2009, i.e., the pre-deep-learning era). I would suggest the authors focus more on the experimental section. A discussion on computational cost is also welcome. The authors do mention the intention of replacing the (undesired, in my opinion) numerical integration, but even with Monte Carlo techniques, wouldn't this be a bottleneck?
ICLR
Title Kernel Deformed Exponential Families for Sparse Continuous Attention Abstract Attention mechanisms take an expectation of a data representation with respect to probability weights. This creates summary statistics that focus on important features. Recently, Martins et al. (2020; 2021) proposed continuous attention mechanisms, focusing on unimodal attention densities from the exponential and deformed exponential families: the latter has sparse support. Farinhas et al. (2021) extended this to use Gaussian mixture attention densities, which are a flexible class with dense support. In this paper, we extend this to two general flexible classes: kernel exponential families (Canu & Smola, 2006) and our new sparse counterpart kernel deformed exponential families. Theoretically, we show new existence results for both kernel exponential and deformed exponential families, and that the deformed case has similar approximation capabilities to kernel exponential families. Experiments show that kernel deformed exponential families can attend to multiple compact regions of the data domain. 1 INTRODUCTION Attention mechanisms take weighted averages of data representations (Bahdanau et al., 2015), where the weights are a function of input objects. These are then used as inputs for prediction. Discrete attention 1) cannot easily handle data where observations are irregularly spaced 2) attention maps may be scattered, lacking focus. Martins et al. (2020; 2021) extended attention to continuous settings, showing that attention densities maximize the regularized expectation of a function of the data location (i.e. time). Special case solutions lead to exponential and deformed exponential families: the latter has sparse support. They form a continuous data representation and take expectations with respect to attention densities. Using measure theory to unify discrete and continuous approaches, they show transformer self-attention (Vaswani et al., 2017) is a special case of their formulation. Martins et al. (2020; 2021) explored unimodal attention densities: these only give high importance to one region of data. Farinhas et al. (2021) extended this to use multi-modal mixture of Gaussian attention densities. However 1) mixture of Gaussians do not lie in either the exponential or deformed exponential families, and are difficult to study in the context of Martins et al. (2020; 2021) 2) they have dense support. Sparse support can say that certain regions of data do not matter: a region of time has no effect on class probabilities, or a region of an image is not some object. We would like to use multimodal exponential and deformed exponential family attention densities, and understand how Farinhas et al. (2021) relates to the optimization problem of Martins et al. (2020; 2021). This paper makes three contributions: 1) we introduce kernel deformed exponential families, a multimodal class of densities with sparse support and apply it along with the multimodal kernel exponential families (Canu & Smola, 2006) to attention mechanisms. The latter have been used for density estimation, but not weighting data importance 2) we theoretically analyze normalization for both kernel exponential and deformed exponential families in terms of a base density and kernel, and show approximation properties for the latter 3) we apply them to real world datasets and show that kernel deformed exponential families learn flexible continuous attention densities with sparse support. 
Approximation properties for the kernel deformed are challenging: similar kernel exponential family results (Sriperumbudur et al., 2017) relied on standard exponential and logarithm properties to bound the difference of the log-partition functional at two functions: these do not hold for deformed analogues. We provide similar bounds via the functional mean value theorem along with bounding the Frechet derivative of the deformed log-partition functional. The paper is organized as follows: we review continuous attention (Martins et al., 2020; 2021). We then describe how mixture of Gaussian attention densities, used in Farinhas et al. (2021), solve a different optimization problem. We next describe kernel exponential families and give novel normalization condition relating the kernel growth to the base density’s tail decay. We then propose kernel deformed exponential families, which can have support over disjoint regions. We describe normalization and prove approximation capabilities. Next we describe use of these densities for continuous attention, including experiments where we show that the kernel deformed case learns multimodal attention densities with sparse support. We conclude with limitations and future work. 2 RELATED WORK Attention Mechanisms closely related are Martins et al. (2020; 2021); Farinhas et al. (2021); Tsai et al. (2019); Shukla & Marlin (2021). Martins et al. (2020; 2021) frame continuous attention as an expectation of a value function over the domain with respect to a density, where the density solves an optimization problem. They only used unimodal exponential and deformed exponential family densities: we extend this to the multimodal setting by leveraging kernel exponential families and proposing a deformed counterpart. Farinhas et al. (2021) proposed a multi-modal continuous attention mechanism via a mixture of Gaussians approach. We show that this solves a slightly different optimization problem from Martins et al. (2020; 2021), and extend to two further general density classes. Shukla & Marlin (2021) provide an attention mechanism for irregularly sampled time series by use of a continuous-time kernel regression framework, but do not actually take an expectation of a data representation over time with respect to a continuous pdf, evaluating the kernel regression model at a fixed set of time points to obtain a discrete representation. This describes importance of data at a set of points rather than over regions. Other papers connect attention and kernels, but focus on the discrete attention setting (Tsai et al., 2019; Choromanski et al., 2020). Also relevant are temporal transformer papers, including Xu et al. (2019); Li et al. (2019; 2020); Song et al. (2018). However none have continuous attention densities. Kernel Exponential Families Canu & Smola (2006) proposed kernel exponential families: Sriperumbudur et al. (2017) analyzed theory for density estimation. Wenliang et al. (2019) parametrized the kernel with a deep neural network. Other density estimation papers include Arbel & Gretton (2018); Dai et al. (2019); Sutherland et al. (2018). We apply kernel exponential families as attention densities to weight a value function which represents the data, rather than to estimate the data density, and extend similar ideas to kernel deformed exponential families with sparse support. Wenliang et al. (2019) showed a condition for an unnormalized kernel exponential family density to have a finite normalizer. However, they used exponential power base densities. 
We instead relate kernel growth rates to the base density's tail decay, allowing non-symmetric base densities. To summarize our theoretical contributions: 1) we introduce kernel deformed exponential families with approximation and normalization analysis; 2) we improve kernel exponential family normalization results.

3 CONTINUOUS ATTENTION MECHANISMS

An attention mechanism involves three components: 1) the value function, which approximates a data representation (the original data or a learned representation); 2) the attention density, which is chosen to be 'similar' to another data representation, encoding it into a density; 3) the context, which combines the two by taking an expectation of the value function with respect to the attention density. Formally, the context is
$$c = \mathbb{E}_{T\sim p}[V(T)]. \quad (1)$$
Here the value function $V(t)$ approximates a data representation, $T \sim p(t)$ is the random variable or vector for locations (temporal, spatial, etc.), and $p(t)$ is the attention density. To choose the attention density $p$, one takes a data representation $f$ and finds $p$ 'similar' to $f$, and thus to a data representation, while regularizing $p$. Martins et al. (2020; 2021) did this, providing a rigorous formulation of attention mechanisms. Given a probability space $(S, \mathcal{A}, Q)$, let $\mathcal{M}_+^1(S)$ be the set of densities with respect to $Q$. Assume that $Q$ is dominated by a measure $\nu$ (e.g., Lebesgue) and that it has density $q_0 = \frac{dQ}{d\nu}$ with respect to $\nu$. Let $S \subseteq \mathbb{R}^D$, let $\mathcal{F}$ be a function class, and let $\Omega : \mathcal{M}_+^1(S) \to \mathbb{R}$ be a lower semi-continuous, proper, strictly convex functional. Given $f \in \mathcal{F}$, an attention density (Martins et al., 2020) solves
$$\hat{p}[f] = \arg\max_{p \in \mathcal{M}_+^1(S)} \langle p, f \rangle_{L^2(Q)} - \Omega(p). \quad (2)$$
This maximizes the regularized $L^2$ similarity between $p$ and a data representation $f$. If $\Omega(p)$ is the negative differential entropy, the attention density is Boltzmann-Gibbs:
$$\hat{p}[f](t) = \exp(f(t) - A(f)), \quad (3)$$
where $A(f)$ ensures $\int_S \hat{p}[f](t)\,dQ = 1$. If $f(t) = \theta^T\phi(t)$ for parameters $\theta \in \mathbb{R}^M$ and statistics $\phi(t) \in \mathbb{R}^M$, Eqn. 3 becomes an exponential family density. For $f$ in a reproducing kernel Hilbert space $\mathcal{H}$, it becomes a kernel exponential family density (Canu & Smola, 2006), which we propose to use as an alternative attention density. One desirable class would be heavy- or thin-tailed exponential family-like densities. In exponential families, the support, or non-negative region of the density, is controlled by the measure $Q$. Letting $\Omega(p)$ be the $\alpha$-Tsallis negative entropy $\Omega_\alpha(p)$ (Tsallis, 1988),
$$\Omega_\alpha(p) = \begin{cases} \frac{1}{\alpha(\alpha-1)}\left(\int_S p(t)^\alpha\,dQ - 1\right), & \alpha \neq 1; \\ \int_S p(t)\log p(t)\,dQ, & \alpha = 1, \end{cases}$$
then $\hat{p}[f]$ for $f(t) = \theta^T\phi(t)$ lies in the deformed exponential family (Tsallis, 1988; Naudts, 2004)
$$\hat{p}_{\Omega_\alpha}[f](t) = \exp_{2-\alpha}(\theta^T\phi(t) - A_\alpha(f)), \quad (4)$$
where $A_\alpha(f)$ again ensures normalization and the density uses the $\beta$-exponential
$$\exp_\beta(t) = \begin{cases} [1 + (1-\beta)t]_+^{1/(1-\beta)}, & \beta \neq 1; \\ \exp(t), & \beta = 1. \end{cases} \quad (5)$$
For $\beta < 1$, Eqn. 5, and thus deformed exponential family densities for $1 < \alpha \leq 2$, can return 0 values. Values $\alpha > 1$ (and thus $\beta < 1$) give thinner tails than the exponential family, while $\alpha < 1$ gives fatter tails. Setting $\beta = 0$ is called sparsemax (Martins & Astudillo, 2016). In this paper, we assume $1 < \alpha \leq 2$, which is the sparse case studied in Martins et al. (2020). We again propose to replace $f(t) = \theta^T\phi(t)$ with $f \in \mathcal{H}$, which leads to the novel kernel deformed exponential families.
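The $\beta$-exponential of Eqn. 5 is simple to implement. The following minimal NumPy sketch (our illustration, not the paper's released code) shows how $\beta < 1$ (i.e., $1 < \alpha \leq 2$) produces exact zeros and hence sparse support, while $\beta = 1$ recovers the ordinary exponential:

```python
import numpy as np

def exp_beta(t, beta):
    """beta-exponential of Eqn. 5: [1 + (1 - beta) t]_+^{1/(1 - beta)}."""
    if beta == 1.0:
        return np.exp(t)
    base = np.maximum(1.0 + (1.0 - beta) * t, 0.0)  # truncation at zero
    return base ** (1.0 / (1.0 - beta))

t = np.linspace(-3.0, 3.0, 7)
print(exp_beta(t, beta=0.0))  # sparsemax case: zeros for t <= -1
print(exp_beta(t, beta=1.0))  # ordinary exponential: strictly positive
```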
Computing Eqn. 1's context vector requires parametrizing $V(t)$. Martins et al. (2020) obtain a value function $V : S \to \mathbb{R}^D$ parametrized by $\mathbf{B} \in \mathbb{R}^{D\times N}$ by applying regularized multivariate linear regression to estimate $V(t; \mathbf{B}) = \mathbf{B}\Psi(t)$, where $\Psi = \{\psi_n\}_{n=1}^N$ is a set of basis functions. Let $L$ be the number of observation locations (times in a temporal setting), $O$ the observation dimension, and $N$ the number of basis functions. This involves regressing the observation matrix $\mathbf{H} \in \mathbb{R}^{O\times L}$ on a matrix $\mathbf{F} \in \mathbb{R}^{N\times L}$ of basis functions $\{\psi_n\}_{n=1}^N$ evaluated at observation locations $\{t_l\}_{l=1}^L$:
$$\mathbf{B}^* = \arg\min_{\mathbf{B}} \|\mathbf{B}\mathbf{F} - \mathbf{H}\|_F^2 + \lambda\|\mathbf{B}\|_F^2. \quad (6)$$

3.1 GAUSSIAN MIXTURE MODEL

Farinhas et al. (2021) used mixture of Gaussian attention densities, but did not relate this to the optimization definition of attention densities in Martins et al. (2020; 2021). In fact, their attention densities solve a related but different optimization problem. Martins et al. (2020; 2021) show that exponential family attention densities maximize a regularized linear predictor of the expected sufficient statistics of locations. In contrast, Farinhas et al. (2021) find a joint density over locations and latent states, and maximize a regularized linear predictor of the expected joint sufficient statistics. They then take the marginal location densities to be the attention densities. Let $\Omega(p)$ be Shannon entropy and consider two optimization problems:
$$\arg\max_{p\in\mathcal{M}_+^1(S)} \langle\theta, \mathbb{E}_p[\phi(T)]\rangle_{l^2} - \Omega(p), \qquad \arg\max_{p\in\mathcal{M}_+^1(S)} \langle\theta, \mathbb{E}_p[\phi(T, Z)]\rangle_{l^2} - \Omega(p).$$
The first is Eqn. 2 with $f = \theta^T\phi(t)$, rewritten to emphasize expected sufficient statistics. If one solves the second with latent variables $Z$, we recover an exponential family joint density
$$\hat{p}[f](t, z) = \exp(\theta^T\phi(t, z) - A(\theta)).$$
This encourages the joint density of $T, Z$ to be similar to a complete data representation $\theta^T\phi(t, z)$ of both location variables $T$ and latent variables $Z$, instead of encouraging the density of $T$ to be similar to an observed data representation $\theta^T\phi(t)$. The latter optimization is equivalent to
$$\arg\max_{p\in\mathcal{M}_+^1(S)} \Omega(p) \quad \text{s.t.} \quad \mathbb{E}_{p(T,Z)}[\phi_m(T, Z)] = c_m,\ m = 1, \cdots, M.$$
The constraint terms $c_m$ are determined by $\theta$. Thus, this maximizes the joint entropy of $Z$ and $T$, subject to constraints on the expected joint sufficient statistics. To recover EM-learned Gaussian mixture densities, one must select $\phi_m$ so that the marginal distribution of $T$ will be a mixture of Gaussians, and relate $c_m$ to the EM algorithm used to learn the mixture model parameters. For the first, assume that $Z$ is a multinomial random variable taking $|Z|$ possible values and let
$$\phi(t, z) = (z_1, z_2, \cdots, z_{|Z|-1}, \mathbb{I}(z=1)t, \mathbb{I}(z=1)t^2, \cdots, \mathbb{I}(z=|Z|)t, \mathbb{I}(z=|Z|)t^2).$$
These are multinomial sufficient statistics, followed by the sufficient statistics of $|Z|$ Gaussians multiplied by indicators for each $z$. Then $p(T|Z)$ will be Gaussian, $p(Z)$ will be multinomial, and $p(T)$ will be a Gaussian mixture. For the constraints, Farinhas et al. (2021) have
$$\mathbb{E}_{p(T,Z)}[\phi_m(T, Z)] = \sum_{l=1}^L w_l \sum_{z=1}^{|Z|} p_{\text{old}}(z|t_l)\,\phi_m(t_l, z),\ m = 1, \cdots, M \quad (7)$$
at each EM iteration. Here $p_{\text{old}}(z|t_l)$ is the previous iteration's latent state density conditional on the observation value, $w_l$ are discrete attention weights, and $t_l$ is a discrete attention location. That EM has this constraint was shown in Wang et al. (2012). Intuitively, this matches the expected joint sufficient statistics to those implied by discrete attention over locations, taking into account the dependence between $z$ and $t_l$ given by old model parameters. An alternative is simply to let $\theta$ be the output of a neural network. While the constraints lack the intuition of Eqn. 7, this avoids the need to select an initialization. We focus on this case and use it for our baselines: both approaches are valid.
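Returning to the value function of Eqn. 6, the ridge problem has the closed-form solution $\mathbf{B}^* = \mathbf{H}\mathbf{F}^T(\mathbf{F}\mathbf{F}^T + \lambda\mathbf{I})^{-1}$. Below is a minimal sketch (ours); the Gaussian-bump basis, bandwidth, and regularization strength are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fit_value_function(H, F, lam=1e-3):
    """Closed-form ridge solution of Eqn. 6: B* = H F^T (F F^T + lam I)^{-1}.

    H: (O, L) observations at L locations; F: (N, L) basis functions
    evaluated at those locations. Returns B of shape (O, N)."""
    N = F.shape[0]
    return H @ F.T @ np.linalg.inv(F @ F.T + lam * np.eye(N))

# Toy usage with Gaussian bumps as basis functions (illustrative choice).
L, O, N = 50, 3, 8
t_obs = np.sort(np.random.rand(L))
centers = np.linspace(0.0, 1.0, N)
psi = lambda t: np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 0.1) ** 2)
F = psi(t_obs)               # (N, L) basis evaluations
H = np.random.randn(O, L)    # stand-in for the observed series
B = fit_value_function(H, F) # then V(t; B) = B psi(t)
```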
4 KERNEL EXPONENTIAL AND DEFORMED EXPONENTIAL FAMILIES

We now use kernel exponential families and a new deformed counterpart to obtain flexible attention densities solving Eqn. 2 with the same regularizers. We first review kernel exponential families. We then give a novel theoretical result describing when an unnormalized kernel exponential family density can be normalized. Next we introduce kernel deformed exponential families, extending kernel exponential families to have either sparse support or fatter tails; we focus on the former. These can attend to multiple non-overlapping time intervals or spatial regions. We show similar normalization results based on the choice of kernel and base density. Following this we show approximation theory. We conclude by showing how to compute attention densities in practice.

Kernel exponential families (Canu & Smola, 2006) extend exponential family distributions, replacing $f(t) = \theta^T\phi(t)$ with $f$ in a reproducing kernel Hilbert space $\mathcal{H}$ (Aronszajn, 1950) with kernel $k : S \times S \to \mathbb{R}$. Densities can be written as
$$p(t) = \exp(f(t) - A(f)) = \exp(\langle f, k(\cdot, t)\rangle_{\mathcal{H}} - A(f)),$$
where the second equality follows from the reproducing property. A challenge is to choose $\mathcal{H}, Q$ so that a normalizing constant exists, i.e., $\int_S \exp(f(t))\,dQ < \infty$. Kernel exponential family distributions can approximate any continuous density over a compact domain arbitrarily well in KL divergence, Hellinger, and $L^p$ distance (Sriperumbudur et al., 2017). However, relevant integrals including the normalizing constant generally require numerical integration. To avoid infinite dimensionality one generally assumes a representation of the form
$$f = \sum_{i=1}^I \gamma_i k(\cdot, t_i),$$
where for density estimation (Sriperumbudur et al., 2017) the $t_i$ are the observation locations. However, this requires using one parameter per observation value. This level of model complexity may not be necessary, and often one chooses a set of inducing points (Titsias, 2009) $\{t_i\}_{i=1}^I$, where $I$ is less than the number of observation locations. For a given pair $\mathcal{H}, k$, how can we choose $Q$ to ensure that the normalization constant exists? We first give a simple example of $\mathcal{H}, f$ and $Q$ where the normalizing constant does not exist.

Example 1. Let $Q$ be the law of a $N(0, 1)$ distribution and $S = \mathbb{R}$. Let $\mathcal{H} = \mathrm{span}\{t^3, t^4\}$ with $k(x, y) = x^3y^3 + x^4y^4$ and $f(t) = t^3 + t^4 = k(t, 1)$. Then
$$\int_S \exp(f(t))\,dQ = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \exp\left(-\frac{t^2}{2} + t^3 + t^4\right)dt, \quad (8)$$
where the integral diverges since the quartic term dominates.

4.1 THEORY FOR KERNEL EXPONENTIAL FAMILIES

We provide sufficient conditions on $Q$ and $\mathcal{H}$ so that the log-partition function $A(f)$ exists. We relate $\mathcal{H}$'s kernel growth rate to the tail decay of the random variable or vector $T_Q$ with law $Q$.

Proposition 1. Let $\tilde{p}(t) = \exp(f(t))$, where $f \in \mathcal{H}$, an RKHS with kernel $k$. Assume $k(t, t) \leq L_k\|t\|_2^\xi + C_k$ for constants $L_k, C_k, \xi > 0$. Let $Q$ be the law of a random vector $T_Q$, so that $Q(A) = P(T_Q \in A)$. Assume that for all $u$ such that $\|u\|_2 = 1$,
$$P(|u^T T_Q| \geq z) \leq C_q\exp(-vz^\eta) \quad (9)$$
for some constants $\eta > \frac{\xi}{2}$ and $C_q, v > 0$. Then
$$\int_S \tilde{p}(t)\,dQ < \infty.$$
Proof. See Appendix A.1.

Based on the growth of $k(t, t)$, we can vary what tail decay rate for $T_Q$ ensures we can normalize $\tilde{p}(t)$. Wenliang et al. (2019) also proved normalization conditions, but focused on random variables with exponential power density for a specific growth rate of $k(t, t)$, rather than relating tail decay to growth rate. By focusing on tail decay, our result can be applied to non-symmetric base densities.
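For a bounded kernel such as the RBF, $k(t, t)$ is constant, so Proposition 1 applies with $\xi = 0$ and any base density yields a finite normalizer (this is the $\xi = 0$ case formalized in Corollary 1 below). A minimal sketch (ours) verifying this numerically; the inducing points, weights, and bandwidth are assumed toy values:

```python
import numpy as np
from scipy import integrate, stats

# RBF kernel: k(t, t) = 1, i.e., bounded, so xi = 0 in Proposition 1.
k = lambda s, t, h=0.1: np.exp(-0.5 * (s - t) ** 2 / h ** 2)

t_ind = np.array([-1.0, 0.0, 2.0])   # inducing points (assumed)
gamma = np.array([3.0, -1.0, 2.0])   # kernel weights (assumed)
f = lambda t: sum(g * k(t, ti) for g, ti in zip(gamma, t_ind))

q0 = stats.norm(loc=0.0, scale=1.0).pdf  # base density of Q
Z, _ = integrate.quad(lambda t: np.exp(f(t)) * q0(t), -np.inf, np.inf)
print(Z)  # finite, so p(t) = exp(f(t) - log Z) q0(t) is a valid density
```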
Specific kernel bound growth rates $\xi$ allow different tail decay rates.

Corollary 1. For $\xi = 4$, $T_Q$ can be any sub-Gaussian random vector. For $\xi = 2$ it can be any sub-exponential random vector. For $\xi = 0$ it can have any density. Proof. See Appendix A.2.

4.2 KERNEL DEFORMED EXPONENTIAL FAMILIES

We now propose kernel deformed exponential families: flexible sparse non-parametric distributions. These take deformed exponential families and extend them to use kernels in the deformed exponential term, mirroring kernel exponential families. We write
$$p(t) = \exp_{2-\alpha}(f(t) - A_\alpha(f)),$$
where $f \in \mathcal{H}$ with kernel $k$. Fig. 1b shows that they can have support over disjoint intervals.

4.2.1 NORMALIZATION THEORY

We construct a valid kernel deformed exponential family density from $Q$ and $f \in \mathcal{H}$. We first discuss the deformed log-normalizer. In exponential family densities, the log-normalizer is the log of the normalizer. For deformed exponentials, the following holds.

Lemma 1. Let $Z > 0$ be a constant. Then for $1 < \alpha \leq 2$,
$$\frac{1}{Z}\exp_{2-\alpha}(Z^{\alpha-1}f(t)) = \exp_{2-\alpha}(f(t) - \log_\alpha Z),$$
where
$$\log_\beta t = \begin{cases} \frac{t^{1-\beta}-1}{1-\beta}, & t > 0, \beta \neq 1; \\ \log(t), & t > 0, \beta = 1; \\ \text{undefined}, & t \leq 0. \end{cases}$$
Proof. See Appendix B.1.

We describe a normalization sufficient condition analogous to Proposition 1 for the sparse deformed kernel exponential family. With Lemma 1, we can take an unnormalized $\exp_{2-\alpha}(\tilde{f}(t))$ and derive a valid normalized kernel deformed exponential family density. We only require that an affine function of the terms in the deformed exponential is negative for large-magnitude $t$.

Proposition 2. For $1 < \alpha \leq 2$, assume $\tilde{p}(t) = \exp_{2-\alpha}(\tilde{f}(t))$ with $\tilde{f} \in \mathcal{H}$, where $\mathcal{H}$ is an RKHS with kernel $k$. If there exists $C_t > 0$ such that for $\|t\|_2 > C_t$, $(\alpha-1)\tilde{f}(t) + 1 \leq 0$, and $k(t, t) \leq L_k\|t\|_2^\xi + C_k$ for some $\xi > 0$, then
$$\int_S \exp_{2-\alpha}(\tilde{f}(t))\,dQ < \infty.$$
Proof. See Appendix B.2.

We now construct a valid kernel deformed exponential family density using the finite integral.

Corollary 2. Under the conditions of Proposition 2, assume $\exp_{2-\alpha}(\tilde{f}(t)) > 0$ on a set $A \subseteq S$ such that $Q(A) > 0$. Then there exist constants $Z > 0$ and $A_\alpha(f) \in \mathbb{R}$ such that for $f(t) = \frac{1}{Z^{\alpha-1}}\tilde{f}(t)$, the following holds:
$$\int_S \exp_{2-\alpha}(f(t) - A_\alpha(f))\,dQ = 1.$$
Proof. See Appendix B.3.

We thus estimate $\tilde{f}(t) = Z^{\alpha-1}f(t)$ and normalize to obtain a density of the desired form.

4.2.2 APPROXIMATION THEORY

Under certain kernel conditions, kernel deformed exponential family densities can approximate densities of a similar form where the RKHS function is replaced with a $C_0(S)$¹ function. (¹A continuous function on domain $S$ vanishing at infinity.)

Proposition 3. Define $\mathcal{P}_0 = \{\pi_f(t) = \exp_{2-\alpha}(f(t) - A_\alpha(f)), t \in S : f \in C_0(S)\}$, where $S \subseteq \mathbb{R}^d$. Suppose $k(x, \cdot) \in C_0(S)$ for all $x \in S$ and
$$\int\int k(x, y)\,d\mu(x)\,d\mu(y) > 0, \quad \forall \mu \in \mathcal{M}_b(S)\setminus\{0\}, \quad (10)$$
where $\mathcal{M}_b(S)$ is the space of bounded measures over $S$. Then the set of kernel deformed exponential families is dense in $\mathcal{P}_0$ with respect to the $L^r(Q)$ norm and Hellinger distance. Proof. See Appendix B.4.

We apply this to approximate fairly general densities with kernel deformed exponential families.

Theorem 1. Let $q_0 \in C(S)$ with $q_0(t) > 0$ for all $t \in S$, where $S \subseteq \mathbb{R}^d$ is locally compact Hausdorff and $q_0(t)$ is the density of $Q$ with respect to a dominating measure $\nu$. Suppose there exists $l > 0$ such that for any $\epsilon > 0$ there exists $R_\epsilon > 0$ satisfying $|p(t) - l| \leq \epsilon$ for any $t$ with $\|t\|_2 > R_\epsilon$. Define $\mathcal{P}_c = \{p \in C(S) : \int_S p(t)\,dQ = 1,\ p(t) \geq 0\ \forall t \in S,\ \text{and}\ p - l \in C_0(S)\}$. Suppose $k(t, \cdot) \in C_0(S)$ for all $t \in S$ and the kernel integration condition (Eqn. 10) holds. Then kernel deformed exponential families are dense in $\mathcal{P}_c$ with respect to the $L^r$ norm, Hellinger distance, and Bregman divergence for the $\alpha$-Tsallis negative entropy functional. Proof. See Appendix B.5.
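A minimal numerical sketch (ours) of the construction in Corollary 2 and the identity in Lemma 1, using the sparsemax case $\alpha = 2$; the kernel, inducing points, weights, offset, and uniform base density are assumed toy values:

```python
import numpy as np
from scipy import integrate, stats

alpha = 2.0  # sparsemax case (beta = 2 - alpha = 0); paper assumes 1 < alpha <= 2
exp2a = lambda t: np.maximum(1.0 + (alpha - 1.0) * t, 0.0) ** (1.0 / (alpha - 1.0))
log_a = lambda z: (z ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

k = lambda s, t, h=0.05: np.exp(-0.5 * (s - t) ** 2 / h ** 2)
t_ind, gamma = np.array([0.2, 0.8]), np.array([4.0, 3.0])   # assumed values
f_tilde = lambda t: sum(g * k(t, ti) for g, ti in zip(gamma, t_ind)) - 0.5

q0 = stats.uniform(0, 1).pdf
Z, _ = integrate.quad(lambda t: exp2a(f_tilde(t)) * q0(t), 0.0, 1.0)

# Lemma 1 with f = f_tilde / Z**(alpha - 1): both normalized forms agree.
p1 = lambda t: exp2a(f_tilde(t)) / Z
p2 = lambda t: exp2a(f_tilde(t) / Z ** (alpha - 1.0) - log_a(Z))
ts = np.linspace(0.0, 1.0, 5)
print(np.allclose(p1(ts), p2(ts)))  # True; density is zero away from 0.2, 0.8
```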
For uniform $q_0$, kernel deformed exponential families can thus approximate continuous densities on compact domains arbitrarily well. Our Bregman divergence result is analogous to the KL divergence result in Sriperumbudur et al. (2017): KL divergence is the Bregman divergence with the Shannon entropy functional, and we show the same for Tsallis entropy. The Bregman divergence here describes how close the uncertainty in a density is to its first-order approximation evaluated at another density. Using the Tsallis entropy functional here is appropriate for deformed exponential families: they maximize it given expected sufficient statistics (Naudts, 2004). These results extend Sriperumbudur et al. (2017)'s approximation results to the deformed setting, where standard log and exponential rules cannot be applied. The Bregman divergence case requires bounding Frechet derivatives and applying the functional mean value theorem.

4.3 USING KERNELS FOR CONTINUOUS ATTENTION

We apply kernel exponential and deformed exponential families to attention. The forward pass computes attention densities and the context vector; the backward pass uses automatic differentiation. We assume a vector representation $v \in \mathbb{R}^{|v|}$ computed from the locations we take an expectation over. For kernel exponential families we compute kernel weights $\{\gamma_i\}_{i=1}^I$ for $f(t) = \sum_{i=1}^I \gamma_i k(t, t_i)$ via $\gamma_i = w_i^T v$, and compute $Z = \int_S \exp(f(t))\,dQ$ numerically. For the deformed case we compute $\tilde{\gamma}_i = w_i^T v$ and $\tilde{f}(t) = Z^{\alpha-1}f(t) = \sum_{i=1}^I \tilde{\gamma}_i k(t, t_i)$, followed by $Z = \int_S \exp_{2-\alpha}(\tilde{f}(t))\,dQ$. The context $c = \mathbb{E}_{T\sim p}[V(T)] = \mathbf{B}\mathbb{E}_p[\Psi(T)]$ requires taking the expectation of $\Psi(T)$ with respect to a (possibly deformed) kernel exponential family density $p$. Unlike Martins et al. (2020; 2021), who obtained closed-form expectations, difficult normalizing constants prevent us from doing so. We thus use numerical integration for the forward pass and automatic differentiation for the backward pass. Algorithm 1 shows how to compute a continuous attention mechanism for a kernel deformed exponential family attention density; the kernel exponential family case is similar.

Algorithm 1: Continuous Attention Mechanism via Kernel Deformed Exponential Families. Choose: base density $q_0(t)$, kernel $k$, and inducing point locations $\{t_i\}_{i=1}^I$. Input: vector representation $v$ of the input object (e.g., a document representation). Parameters: weights $\{\tilde{\gamma}_i\}_{i=1}^I$ for $\tilde{f}(t) = Z^{\alpha-1}f(t) = \sum_{i=1}^I \tilde{\gamma}_i k(t, t_i)$, and the matrix $\mathbf{B}$ for the basis.

5 EXPERIMENTS

For document classification, we follow Martins et al. (2020)'s architecture. For the remaining tasks, architectures have four parts: 1) an encoder takes a discrete representation of a time series and outputs attention density parameters; 2) the value function takes a time series representation (original or after passing through a neural network) and performs (potentially multivariate) linear regression to obtain parameters $\mathbf{B}$ for a function $V(t; \mathbf{B})$; these are combined to compute 3) the context $c = \mathbb{E}_p[V(T)]$, which is used in 4) a classifier. Fig. 2 in the Appendices visualizes this.

5.1 DOCUMENT CLASSIFICATION

We extend Martins et al. (2020)'s code² for the IMDB sentiment classification dataset (Maas et al., 2011). This starts with a document representation $v$ computed from a convolutional neural network and uses an LSTM attention model. We use a Gaussian base density and kernel, and divide the interval $[0, 1]$ into $I = 10$ inducing points where we evaluate the kernel in $f(t) = \sum_{i=1}^I \gamma_i k(t, t_i)$. We set the bandwidth to 0.01 for $I = 10$. Table 1 shows results.
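Before discussing results, here is a minimal sketch (ours) of the forward pass from Section 4.3 on a fixed quadrature grid. The trapezoid rule and grid are our illustrative choices; a fixed grid keeps the computation differentiable when ported to an autodiff framework:

```python
import numpy as np
from scipy.integrate import trapezoid

def deformed_density(f_tilde_vals, q0_vals, ts, alpha=2.0):
    """Normalize exp_{2-alpha}(f_tilde) against base density q0 on grid ts
    (assumes 1 < alpha <= 2 as in the paper)."""
    unnorm = np.maximum(1.0 + (alpha - 1.0) * f_tilde_vals, 0.0) ** (1.0 / (alpha - 1.0))
    Z = trapezoid(unnorm * q0_vals, ts)
    return unnorm * q0_vals / Z   # attention density wrt Lebesgue measure

def context(B, Psi_vals, p_vals, ts):
    """c = B E_p[Psi(T)] via the trapezoid rule.  B: (O, N); Psi_vals: (N, T)
    basis functions evaluated on grid ts; p_vals: (T,) attention density."""
    Epsi = trapezoid(Psi_vals * p_vals[None, :], ts, axis=1)
    return B @ Epsi
```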
On average, kernel exponential and deformed exponential families slightly outperform continuous softmax and sparsemax, although the results are essentially the same. The continuous softmax/sparsemax results are from their code. (²Martins et al. (2020)'s repository for this dataset is https://github.com/deep-spin/quati)

5.2 UWAVE DATASET

We analyze uWave (Liu et al., 2009): accelerometer time series with eight gesture classes. We follow Li & Marlin (2016)'s split into 3,582 training observations and 896 test observations; sequences have length 945. We perform synthetic irregular sampling, uniformly sampling 10% of the observations. Table 2 shows results. Our highest accuracy is 94.26%, the unimodal case's best is 74.69%, and the mixture's best is 81.13%. Since this dataset is small, we report ±1.96 standard deviations from 10 runs. Fig. 1 shows that attention densities have support over non-overlapping time intervals. This cannot be done with Gaussian mixtures, and the intervals would be the same for each density in the exponential family case. Appendix C describes additional details.

6 ECG HEARTBEAT CLASSIFICATION

We use the Kaggle version³ of the MIT-BIH Arrhythmia Database (Goldberger et al., 2000). The task is to detect abnormal heartbeats from ECG signals. The five classes are {Normal, Supraventricular premature, Premature ventricular contraction, Fusion of ventricular and normal, Unclassifiable}. There are 87,553 training samples and 21,891 test samples. Our value function is trained using the output of repeated convolutional layers: the final layer has 256 dimensions and 23 time points. Our encoder is a feedforward neural network with the original data as input, and our classifier is a feedforward network. Table 3 shows results. All accuracies are very similar, but the F1 score of kernel sparsemax is drastically higher. Additional details are in Appendix D.

7 DISCUSSION

In this paper we extend continuous attention mechanisms to use kernel exponential and deformed exponential family attention densities. The latter is a new flexible class of non-parametric densities with sparse support. We show novel existence properties for both kernel exponential and deformed exponential families, and prove approximation properties for the latter. We then apply these within the framework described in Martins et al. (2020; 2021) for continuous attention. We show results on three datasets: sentiment classification, gesture recognition, and arrhythmia classification. In the first case performance is similar to unimodal attention, in the second it is drastically better, and in the third it is similar in the dense case and drastically better in the sparse case.

7.1 LIMITATIONS AND FUTURE WORK

A limitation of this work was the use of numerical integration, which scales poorly with the dimensionality of the locations. Because of this we restricted our applications to temporal and text data; this still allows multiple observation dimensions at a given location. A future direction would be to use variance-reduced Monte Carlo to approximate the integral, as well as studying how to choose the number of basis functions in the value function and the number of inducing points. (³https://www.kaggle.com/mondejar/mitbih-database)

A PROOFS RELATED TO PROPOSITION 1

A.1 PROOF OF PROPOSITION 1

Proof. This proof has several parts. We first bound the RKHS function $f$ and use the assumed general tail bound to give a tail bound for the one-dimensional marginals $T_{Qd}$ of $T_Q$.
Using the RKHS function bound, we then bound the integral of the unnormalized density in terms of expectations with respect to these one-dimensional marginals. We then express these expectations as infinite series of integrals, bound each integral within the series using the marginal tail bound, and use the ratio test to show that each series converges. This gives us that the original unnormalized density has a finite integral. We first note, following Wenliang et al. (2019), that the assumed bound on the kernel allows us to bound $f$ in terms of two constants and the norm of the point at which it is evaluated:
$$|f(t)| = |\langle f, k(t,\cdot)\rangle_{\mathcal{H}}| \leq \|f\|_{\mathcal{H}}\|k(t,\cdot)\|_{\mathcal{H}} = \|f\|_{\mathcal{H}}\sqrt{k(t,t)} \leq \|f\|_{\mathcal{H}}\sqrt{L_k\|t\|^{\xi} + C_k} \leq C_0 + C_1\|t\|^{\xi/2}$$
for some $C_0, C_1 > 0$, where the first equality is the reproducing property and the first inequality is Cauchy-Schwarz. We can write $T_Q = (T_{Q1}, \cdots, T_{QD})$. Let $e_d$ be a standard Euclidean basis vector. Then by the assumption, setting $u = e_d$, we have $P(|T_{Qd}| \geq z) \leq C_q\exp(-vz^\eta)$. Letting $Q_d$ be the marginal law,
$$\int_S \exp(f(t))\,dQ \leq \exp(C_0)\,\mathbb{E}\exp(C_1\|T_Q\|^{\xi/2}) \leq \exp(C_0)\,\mathbb{E}\exp\Big(C_1\big(\sqrt{D}\max_{d=1,\cdots,D}|T_{Qd}|\big)^{\xi/2}\Big) \leq \exp(C_0)\sum_{d=1}^D \mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}),$$
where $C_2 = C_1 D^{\xi/4}$; the left-hand side will be finite if each $\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}) < \infty$. Now, letting $S_d$ be the relevant dimension of $S$,
$$\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}) = \int_{S_d}\exp(C_2|t_d|^{\xi/2})\,dQ_d \leq \sum_{j=-\infty}^{-1}\int_j^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d + \sum_{j=0}^{\infty}\int_j^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d,$$
where the inequality follows since $S_d \subseteq \mathbb{R}$, $\exp$ is a non-negative function, and probability measures are monotonic. We show that the second sum converges; the first follows by a similar argument. Note that for $j \geq 0$,
$$Q_d([j, j+1)) = P(T_{Qd} \geq j) - P(T_{Qd} \geq j+1) \leq P(T_{Qd} \geq j) \leq C_q\exp(-vj^\eta)$$
by assumption. Then
$$\sum_{j=0}^{\infty}\int_j^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d \leq \sum_{j=0}^{\infty}\exp(C_2(j+1)^{\xi/2})\,Q_d([j, j+1)) \leq \sum_{j=0}^{\infty}C_q\exp(C_2(j+1)^{\xi/2} - vj^\eta).$$
Let $a_j = \exp(C_2(j+1)^{\xi/2} - vj^\eta)$. We use the ratio test to show that the right-hand side converges. We have
$$\left|\frac{a_{j+1}}{a_j}\right| = \exp\left(C_2\big((j+2)^{\xi/2} - (j+1)^{\xi/2}\big) - v\big[(j+1)^\eta - j^\eta\big]\right). \quad (11)$$
We want this ratio to be $< 1$ for large $j$, so we need $\frac{C_2}{v}\big((j+2)^{\xi/2} - (j+1)^{\xi/2}\big) < (j+1)^\eta - j^\eta$ for sufficiently large $j$. Assume that $\eta > \frac{\xi}{2}$. Since $(j+1)^\eta - j^\eta = \Theta(j^{\eta-1})$ and $(j+2)^{\xi/2} - (j+1)^{\xi/2} = \Theta(j^{\xi/2-1})$, we have
$$\frac{(j+1)^\eta - j^\eta}{(j+2)^{\xi/2} - (j+1)^{\xi/2}} \geq c\,j^{\eta - \xi/2}$$
for some constant $c > 0$ and sufficiently large $j$. Since the right-hand side is unbounded for $\eta > \frac{\xi}{2}$, the ratio in Eqn. 11 is eventually $< 1$. By the ratio test, $\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2})$ is finite. Putting everything together,
$$\int_S \exp(f(t))\,dQ \leq \exp(C_0)\sum_{d=1}^D\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}) < \infty,$$
and $\tilde{p}(t)$ can be normalized.

A.2 PROOF OF COROLLARY 1

Proof. Let $\xi = 4$. Then $\eta > 2$, and for $t > 1$,
$$P(|u^T T| > t) \leq P(|u^T T| \geq t) \leq C_q\exp(-vt^\eta) < C_q\exp(-vt^2),$$
where the first inequality is monotonicity; this is the sub-Gaussian condition. The sub-exponential case ($\xi = 2$, $\eta > 1$) is similar. For a uniformly bounded kernel ($\xi = 0$),
$$\int_S \exp(\langle f, k(\cdot, t)\rangle_{\mathcal{H}})\,dQ \leq \exp(\|f\|_{\mathcal{H}}\sqrt{C_k})\int_S dQ = \exp(\|f\|_{\mathcal{H}}\sqrt{C_k}) < \infty,$$
where the first inequality follows from Cauchy-Schwarz and $\xi = 0$.

B PROOFS RELATED TO KERNEL DEFORMED EXPONENTIAL FAMILIES

B.1 PROOF OF LEMMA 1

Proof. The high-level idea is to express a term inside the deformed exponential that becomes $1/Z$ once moved outside.
$$\exp_{2-\alpha}(f(t) - \log_\alpha Z) = \Big[1 + (\alpha-1)f(t) - (\alpha-1)\frac{Z^{1-\alpha}-1}{1-\alpha}\Big]_+^{\frac{1}{\alpha-1}} = \big[(\alpha-1)f(t) + Z^{1-\alpha}\big]_+^{\frac{1}{\alpha-1}} = \Big[Z^{1-\alpha}\big(1 + (\alpha-1)Z^{\alpha-1}f(t)\big)\Big]_+^{\frac{1}{\alpha-1}} = \frac{1}{Z}\exp_{2-\alpha}(Z^{\alpha-1}f(t)).$$

B.2 PROOF OF PROPOSITION 2

Proof.
$$\int_S \exp_{2-\alpha}(\tilde{f}(t))\,dQ = \int_S [1 + (\alpha-1)\tilde{f}(t)]_+^{\frac{1}{\alpha-1}}\,dQ = \int_{\|t\|_2\leq C_t} [1 + (\alpha-1)\tilde{f}(t)]_+^{\frac{1}{\alpha-1}}\,dQ \leq \int_{\|t\|_2\leq C_t} \big[1 + (\alpha-1)(C_0 + C_1 C_t^{\xi/2})\big]_+^{\frac{1}{\alpha-1}}\,dQ < \infty,$$
where the second equality holds because the truncation makes the integrand vanish for $\|t\|_2 > C_t$, and the inequality uses the RKHS bound $|\tilde{f}(t)| \leq C_0 + C_1\|t\|^{\xi/2}$ from the proof of Proposition 1.

B.3 PROOF OF COROLLARY 2

Proof. From Proposition 2 and the assumption, $\int_S \exp_{2-\alpha}(\tilde{f}(t))\,dQ = Z$ for some $Z > 0$. Then
$$\int_S \frac{1}{Z}\exp_{2-\alpha}(Z^{\alpha-1}f(t))\,dQ = 1, \qquad \int_S \exp_{2-\alpha}(f(t) - \log_\alpha Z)\,dQ = 1,$$
where the second equality follows from Lemma 1. Set $A_\alpha(f) = \log_\alpha(Z)$ and we are done.

B.4 PROOF OF PROPOSITION 3

Proof. The kernel integration condition implies that $\mathcal{H}$ is dense in $C_0(S)$ with respect to the $L^\infty$ norm; this was shown in Sriperumbudur et al. (2011). For the $L^r$ norm, we apply $\|p_f - p_g\|_{L^r} \leq 2M_{\exp}\|f - g\|_\infty$ from Lemma 5 with $f \in C_0(S)$, $g \in \mathcal{H}$, and $f_0 = f$. $L^1$ convergence implies Hellinger convergence.

B.5 PROOF OF THEOREM 1

Proof. For any $p \in \mathcal{P}_c$, define $p_\delta = \frac{p+\delta}{1+\delta}$. Then
$$\|p - p_\delta\|_r = \frac{\delta}{1+\delta}\|p - 1\|_r \to 0$$
for $1 \leq r \leq \infty$. Thus for any $\epsilon > 0$ there exists $\delta_\epsilon > 0$ such that for any $0 < \theta < \delta_\epsilon$, we have $\|p - p_\theta\|_r \leq \epsilon$, where $p_\theta(t) > 0$ for all $t \in S$. Define
$$f = \left(\frac{1+\theta}{l+\theta}\right)^{1-\alpha}\log_{2-\alpha}\left(p_\theta\,\frac{1+\theta}{l+\theta}\right).$$
Since $p \in C(S)$, so is $f$. Fix any $\eta > 0$ and note the chain of equivalences
$$f(t) \geq \eta \iff \log_{2-\alpha}\left(p_\theta\,\frac{1+\theta}{l+\theta}\right) \geq \left(\frac{1+\theta}{l+\theta}\right)^{\alpha-1}\eta \iff p_\theta \geq \frac{l+\theta}{1+\theta}\exp_{2-\alpha}\left(\left(\frac{1+\theta}{l+\theta}\right)^{\alpha-1}\eta\right) \iff p - l \geq (l+\theta)\left(\exp_{2-\alpha}\left(\left(\frac{1+\theta}{l+\theta}\right)^{\alpha-1}\eta\right) - 1\right).$$
Thus $A = \{t : f(t) \geq \eta\}$ equals the set on the right-hand side. Since $p - l \in C_0(S)$ and the threshold is positive, this set is bounded; an analogous argument bounds $\{t : f(t) \leq -\eta\}$, so $f \in C_0(S)$. Further, by Lemma 1,
$$p_\theta = \exp_{2-\alpha}\left(f - \log_\alpha\frac{1+\theta}{l+\theta}\right),$$
giving us $p_\theta \in \mathcal{P}_0$. By Proposition 3 there is some $p_g$ in the kernel deformed exponential family with $\|p_\theta - p_g\|_{L^r(S)} \leq \epsilon$. Thus $\|p - p_g\|_r \leq 2\epsilon$ for any $1 \leq r \leq \infty$. To show convergence in Hellinger distance, note
$$H^2(p, p_g) = \frac{1}{2}\int_S(\sqrt{p} - \sqrt{p_g})^2\,dQ = \frac{1}{2}\int_S(p - 2\sqrt{p\,p_g} + p_g)\,dQ \leq \frac{1}{2}\int_S(p - 2\min(p, p_g) + p_g)\,dQ = \frac{1}{2}\|p - p_g\|_1,$$
so that $L^1(S)$ convergence, which we showed, implies Hellinger convergence. Now consider the Bregman divergence. Note the generalized triangle inequality⁴ for Bregman divergence:
$$B_{\Omega_\alpha}(p, p_g) = \underbrace{B_{\Omega_\alpha}(p, p_\theta)}_{I} + \underbrace{B_{\Omega_\alpha}(p_\theta, p_g)}_{II} - \underbrace{\langle p - p_\theta, \nabla\Omega_\alpha(p_\theta) - \nabla\Omega_\alpha(p_g)\rangle_2}_{III}. \quad (12)$$
(⁴Actually an equality; see https://www2.cs.uic.edu/~zhangx/teaching/bregman.pdf for a proof.)

Term I:
$$B_{\Omega_\alpha}(p, p_\theta) = \frac{1}{\alpha(\alpha-1)}\int_S(p^\alpha - p_\theta^\alpha)\,dQ - \frac{1}{\alpha-1}\int p_\theta^{\alpha-1}(p - p_\theta)\,dQ \leq \frac{1}{\alpha(\alpha-1)}\int_S(p^\alpha - p_\theta^\alpha)\,dQ + \frac{1}{\alpha-1}\|p_\theta^{\alpha-1}\|_1\|p - p_\theta\|_\infty.$$
The first term on the right-hand side clearly vanishes as $\theta \to 0$; for the second, we already showed that $\|p - p_\theta\|_\infty \to 0$.

Term II: Fix $\theta$. Then term II converges to 0 by Lemma 5.

Term III:
$$\langle p - p_\theta, \nabla\Omega_\alpha(p_\theta) - \nabla\Omega_\alpha(p_g)\rangle_2 \leq \|p - p_\theta\|_\infty\|\nabla\Omega_\alpha(p_\theta) - \nabla\Omega_\alpha(p_g)\|_1.$$
The first factor converges by $L^r$ convergence. The $L^1$ term for the gradient is
$$\|\nabla\Omega_\alpha(p_\theta) - \nabla\Omega_\alpha(p_g)\|_1 = \frac{1}{\alpha-1}\int|p_\theta(t)^{\alpha-1} - p_g(t)^{\alpha-1}|\,dQ \leq (\|p_\theta\|_\infty + \|p_\theta - p_g\|_\infty)^{\alpha-2}\|p_\theta - p_g\|_\infty$$
by Eqn. 17 (applied with exponent $\alpha - 1$), so that the inner product term is bounded as
$$|\langle p - p_\theta, \nabla\Omega_\alpha(p_\theta) - \nabla\Omega_\alpha(p_g)\rangle_2| \leq (\|p_\theta\|_\infty + \|p_\theta - p_g\|_\infty)^{\alpha-2}\|p_\theta - p_g\|_\infty\|p - p_\theta\|_\infty.$$

Lemma 2 (Functional Taylor's Theorem). Let $F : X \to \mathbb{R}$, where $X$ is a Banach space. Let $f, g \in X$ and let $F$ be $k$ times Gateaux differentiable.
Then we can write
$$F(g) = \sum_{i=0}^{k-1}\frac{1}{i!}F^{(i)}(f)(g-f)^i + \frac{1}{k!}F^{(k)}(f + c(g-f))(g-f)^k$$
for some $c \in [0, 1]$.

Proof. This is a consequence of expressing the functional as a function of $\eta \in [0, 1]$, which restricts the functional to input functions between $f$ and $g$. We then apply Taylor's theorem to this function and the chain rule for Gateaux derivatives to obtain the Taylor remainder theorem for functionals. Let $G(\eta) = F(f + \eta(g-f))$. By Taylor's theorem,
$$G(1) = \sum_{i=0}^{k-1}\frac{G^{(i)}(0)}{i!} + \frac{G^{(k)}(c)}{k!},$$
and applying the chain rule gives the claimed expansion of $F(g)$.

Lemma 3 (Functional Mean Value Theorem). Let $F : X \to \mathbb{R}$ be a functional, where $f, g \in X$, a Banach space with norm $\|\cdot\|$. Then
$$|F(f) - F(g)| \leq \|F'(h)\|_{op}\|f - g\|,$$
where $h = g + c(f - g)$ for some $c \in [0, 1]$, $F'(h)$ is the Gateaux derivative of $F$, and $\|\cdot\|_{op}$ is the operator norm $\|A\|_{op} = \inf\{c > 0 : \|Ax\| \leq c\|x\|\ \forall x \in X\}$.

Proof. Consider $G(\eta) = F(g + \eta(f-g))$. Applying the ordinary mean value theorem,
$$G(1) - G(0) = G'(c) = F'(g + c(f-g))\cdot(f - g), \quad c \in [0, 1],$$
and thus $|F(f) - F(g)| \leq \|F'(h)\|_{op}\|f - g\|$.

Claim 1. Consider $\mathcal{P}_\infty = \{p_f = \exp_{2-\alpha}(f - A_\alpha(f)) : f \in L^\infty(S)\}$. Then for $p_f \in \mathcal{P}_\infty$, $A_\alpha(f) \leq \|f\|_\infty$.

Proof. Since $1 < \alpha \leq 2$ throughout the paper, $\exp_{2-\alpha}$ is non-decreasing, so
$$p_f(t) = \exp_{2-\alpha}(f(t) - A_\alpha(f)) \leq \exp_{2-\alpha}(\|f\|_\infty - A_\alpha(f)).$$
Integrating, $1 = \int_S p_f(t)\,dQ \leq \exp_{2-\alpha}(\|f\|_\infty - A_\alpha(f))$, hence $0 = \log_{2-\alpha}1 \leq \|f\|_\infty - A_\alpha(f)$ and $A_\alpha(f) \leq \|f\|_\infty$.

Lemma 4. Consider $\mathcal{P}_\infty = \{p_f = \exp_{2-\alpha}(f - A_\alpha(f)) : f \in L^\infty(S)\}$. Then the Frechet derivative of $A_\alpha : L^\infty \to \mathbb{R}$ exists. It is given by the map
$$A'_\alpha(f)(g) = \mathbb{E}_{\tilde{p}_f^{2-\alpha}}[g(T)] = \frac{\int p_f^{2-\alpha}(t)g(t)\,dQ}{\int p_f^{2-\alpha}(t)\,dQ},$$
where $\tilde{p}_f^{2-\alpha}$ denotes the normalized density proportional to $p_f^{2-\alpha}$.

Proof. This proof has several parts. We first derive the Gateaux differential of $p_f$ in a direction $\psi \in L^\infty$, and since it depends on the Gateaux differential of $A_\alpha(f)$ in that direction, we rearrange terms to recover the latter. We then show that it exists for any $f, \psi \in L^\infty$. Next we show that the second Gateaux differential of $A_\alpha(f)$ exists, and use it along with a functional Taylor expansion to prove that the first Gateaux derivative is in fact a Frechet derivative. Martins et al. (2020) show how to compute the gradient of $A_\alpha(\theta)$ in the finite-dimensional case; we extend this to the Gateaux differential. We start by computing the Gateaux differential of $p_f$:
$$\frac{d}{d\eta}p_{f+\eta\psi}(t) = \frac{d}{d\eta}\big[1 + (\alpha-1)(f(t) + \eta\psi(t) - A_\alpha(f+\eta\psi))\big]_+^{\frac{1}{\alpha-1}} = \big[1 + (\alpha-1)(f(t) + \eta\psi(t) - A_\alpha(f+\eta\psi))\big]_+^{\frac{2-\alpha}{\alpha-1}}\left(\psi(t) - \frac{d}{d\eta}A_\alpha(f+\eta\psi)\right) = p_{f+\eta\psi}^{2-\alpha}(t)\left(\psi(t) - \frac{d}{d\eta}A_\alpha(f+\eta\psi)\right).$$
Evaluating at $\eta = 0$ gives $dp(f;\psi)(t) = p_f^{2-\alpha}(t)\big(\psi(t) - dA_\alpha(f;\psi)\big)$. Note that by Claim 1,
$$p_{f+\eta\psi}(t) = \exp_{2-\alpha}(f(t) + \eta\psi(t) - A_\alpha(f+\eta\psi)) \leq \exp_{2-\alpha}(2\|f\|_\infty + 2\eta\|\psi\|_\infty) \leq \exp_{2-\alpha}(2(\|f\|_\infty + \|\psi\|_\infty))$$
for $\eta \in [0, 1]$. We can thus apply the dominated convergence theorem to pull the derivative with respect to $\eta$ under the integral, and recover the Gateaux differential of $A_\alpha$ via
$$0 = \frac{d}{d\eta}\bigg|_{\eta=0}\int p_{f+\eta\psi}(t)\,dQ = \int p_f(t)^{2-\alpha}\big(\psi(t) - dA_\alpha(f;\psi)\big)\,dQ, \qquad dA_\alpha(f;\psi) = \mathbb{E}_{\tilde{p}_f^{2-\alpha}}[\psi(T)] < \infty,$$
where finiteness follows since $\psi \in L^\infty$. Thus the Gateaux derivative exists in $L^\infty$ directions, and at $f$ it is the map $\psi \mapsto \mathbb{E}_{\tilde{p}_f^{2-\alpha}}[\psi(T)]$, i.e., $A'_\alpha(f)(\psi) = \mathbb{E}_{\tilde{p}_f^{2-\alpha}}[\psi(T)]$. We need to show that this is a Frechet derivative. To do so, we take second derivatives of $p_{f+\eta\psi}(t)$ with respect to $\eta$ in order to obtain second derivatives of $A_\alpha(f+\eta\psi)$, and then construct a functional second-order Taylor expansion.
By showing that the second-order terms converge sufficiently quickly, we will prove that the map $\psi \mapsto \mathbb{E}_{\tilde{p}_f^{2-\alpha}}[\psi(T)]$ is a Frechet derivative. Differentiating again,
$$\frac{d^2}{d\eta^2}p_{f+\eta\psi}(t) = \left(\frac{d}{d\eta}p_{f+\eta\psi}(t)^{2-\alpha}\right)\left(\psi(t) - \frac{d}{d\eta}A_\alpha(f+\eta\psi)\right) - p_{f+\eta\psi}(t)^{2-\alpha}\frac{d^2}{d\eta^2}A_\alpha(f+\eta\psi) = (2-\alpha)\,p_{f+\eta\psi}^{3-2\alpha}(t)\left(\psi(t) - \frac{d}{d\eta}A_\alpha(f+\eta\psi)\right)^2 - p_{f+\eta\psi}(t)^{2-\alpha}\frac{d^2}{d\eta^2}A_\alpha(f+\eta\psi),$$
where we used $\frac{d}{d\eta}p^{2-\alpha} = (2-\alpha)p^{1-\alpha}\frac{dp}{d\eta}$ and the first-derivative formula. We must show that we can again pull the derivative under the integral; we need an integrable function dominating $p_{f+\eta\psi}(t)^{2-\alpha}\big(\psi(t) - \mathbb{E}_{\tilde{p}_{f+\eta\psi}^{2-\alpha}}[\psi(T)]\big)$:
$$\big|p_{f+\eta\psi}(t)^{2-\alpha}\big(\psi(t) - \mathbb{E}_{\tilde{p}_{f+\eta\psi}^{2-\alpha}}[\psi(T)]\big)\big| \leq p_{f+\eta\psi}(t)^{2-\alpha}\cdot 2\|\psi\|_\infty \leq \exp_{2-\alpha}(2(\|f\|_\infty + \|\psi\|_\infty))^{2-\alpha}\cdot 2\|\psi\|_\infty,$$
which is in $L^1(Q)$. Applying the dominated convergence theorem,
$$0 = \int\frac{d^2}{d\eta^2}p_{f+\eta\psi}(t)\,dQ = \int\left[(2-\alpha)p_{f+\eta\psi}^{3-2\alpha}\left(\psi - \frac{d}{d\eta}A_\alpha(f+\eta\psi)\right)^2 - p_{f+\eta\psi}^{2-\alpha}\frac{d^2}{d\eta^2}A_\alpha(f+\eta\psi)\right]dQ,$$
and rearranging gives
$$\frac{d^2}{d\eta^2}A_\alpha(f+\eta\psi) = (2-\alpha)\frac{\int p_{f+\eta\psi}^{3-2\alpha}\big(\psi - \frac{d}{d\eta}A_\alpha(f+\eta\psi)\big)^2\,dQ}{\int p_{f+\eta\psi}^{2-\alpha}\,dQ}, \qquad \frac{d^2}{d\eta^2}A_\alpha(f+\eta\psi)\bigg|_{\eta=0} = (2-\alpha)\frac{\int p_f^{3-2\alpha}\big(\psi - \mathbb{E}_{\tilde{p}_f^{2-\alpha}}[\psi(T)]\big)^2\,dQ}{\int p_f^{2-\alpha}\,dQ},$$
which is finite since $f, \psi \in L^\infty$. For the functional Taylor expansion, Lemma 2 gives
$$A_\alpha(f + \psi) = A_\alpha(f) + A'_\alpha(f)(\psi) + \frac{1}{2}A''_\alpha(f + \epsilon\psi)(\psi)^2$$
for some $\epsilon \in [0, 1]$. We thus need to show that for $\epsilon \in [0, 1]$,
$$(2-\alpha)\frac{1}{\|\psi\|_\infty}\frac{\int p_{f+\epsilon\psi}^{3-2\alpha}\big(\psi - \mathbb{E}_{\tilde{p}_{f+\epsilon\psi}^{2-\alpha}}[\psi(T)]\big)^2\,dQ}{\int p_{f+\epsilon\psi}^{2-\alpha}\,dQ} \to 0 \quad \text{as } \psi \to 0.$$
It suffices to show that the numerator tends to 0 as $\psi \to 0$ (the denominator is bounded away from zero for $f, \psi$ in a bounded $L^\infty$ set). Since $|\mathbb{E}_{\tilde{p}_{f+\epsilon\psi}^{2-\alpha}}[\psi(T)]| \leq \|\psi\|_\infty$, the relevant factor satisfies
$$\frac{1}{\|\psi\|_\infty}\big(\psi(t) - \mathbb{E}_{\tilde{p}_{f+\epsilon\psi}^{2-\alpha}}[\psi(T)]\big)^2 \leq \frac{(2\|\psi\|_\infty)^2}{\|\psi\|_\infty} = 4\|\psi\|_\infty \to 0 \quad \text{as } \psi \to 0,$$
and plugging this in we obtain the desired result. Thus the Frechet derivative of $A_\alpha(f)$ exists.

Lemma 5. Define $\mathcal{P}_\infty = \{p_f = \exp_{2-\alpha}(f - A_\alpha(f)) : f \in L^\infty(S)\}$, where $L^\infty(S)$ is the space of almost surely bounded measurable functions with domain $S$. Fix $f_0 \in L^\infty$. Then for any fixed $\epsilon > 0$ and $p_g, p_f \in \mathcal{P}_\infty$ such that $f, g \in B_\epsilon^\infty(f_0)$, the $L^\infty$ closed ball of radius $\epsilon$ around $f_0$, there exists a constant $M_{\exp} > 0$ depending only on $f_0$ and $\epsilon$ such that
$$\|p_f - p_g\|_{L^r} \leq 2M_{\exp}\|f - g\|_\infty.$$
Further,
$$B_{\Omega_\alpha}(p_f, p_g) \leq \frac{1}{\alpha-1}\|p_f - p_g\|_\infty\big[(\|p_f\|_\infty + \|p_f - p_g\|_\infty)^{\alpha-1} + \exp_{2-\alpha}(2\|g\|_\infty)\big].$$

Proof. This Lemma mirrors Lemma A.1 in Sriperumbudur et al. (2017), but the proof is very different, as they rely on the property $\exp(x+y) = \exp(x)\exp(y)$, which does not hold for $\beta$-exponentials. We thus strengthen the assumption so that $f$ and $g$ lie in a closed ball, and then use the functional mean value theorem (Lemma 3) as the main technique. By the functional mean value inequality,
$$\|p_f - p_g\|_{L^r} = \|\exp_{2-\alpha}(f - A_\alpha(f)) - \exp_{2-\alpha}(g - A_\alpha(g))\|_{L^r} \leq \|\exp_{2-\alpha}(h - A_\alpha(h))^{2-\alpha}\|_\infty\big(\|f - g\|_\infty + |A_\alpha(f) - A_\alpha(g)|\big), \quad (13)$$
where $h = cf + (1-c)g$ for some $c \in [0, 1]$. We need to bound $\exp_{2-\alpha}(h - A_\alpha(h))^{2-\alpha}$ and $|A_\alpha(f) - A_\alpha(g)|$. We can bound $\|h\|_\infty$:
$$\|h\|_\infty = \|c(f - f_0) + (1-c)(g - f_0) + f_0\|_\infty \leq c\|f - f_0\|_\infty + (1-c)\|g - f_0\|_\infty + \|f_0\|_\infty \leq \epsilon + \|f_0\|_\infty,$$
so $h$ is bounded. We showed in Claim 1 that $|A_\alpha(h)| \leq \|h\|_\infty \leq \epsilon + \|f_0\|_\infty$. Since $h$ and $A_\alpha(h)$ are both bounded, $\exp_{2-\alpha}(h - A_\alpha(h))^{2-\alpha}$ is also bounded. Now note that by Lemma 3,
$$|A_\alpha(f) - A_\alpha(g)| \leq \|A'_\alpha(h)\|_{op}\|f - g\|_\infty.$$
We need $\|A'_\alpha(h)\|_{op}$ bounded for $f, g \in B_\epsilon^\infty(f_0)$. In Lemma 4 we showed $|A'_\alpha(f)(g)| = |\mathbb{E}_{\tilde{p}_f^{2-\alpha}}[g(T)]| \leq \|g\|_\infty$. Thus $\|A'_\alpha\|_{op} = \sup\{|A'_\alpha(h)(m)| : \|m\|_\infty = 1\} \leq 1$.
Let $M_{\exp}$ be the bound on $\exp_{2-\alpha}(h - A_\alpha(h))^{2-\alpha}$. Putting everything together, we have the desired result:
$$\|p_f - p_g\|_{L^r} \leq 2M_{\exp}\|f - g\|_\infty.$$
Now
$$B_{\Omega_\alpha}(p_f, p_g) = \Omega_\alpha(p_f) - \Omega_\alpha(p_g) - \langle\nabla\Omega_\alpha(p_g), p_f - p_g\rangle_2. \quad (14)$$
For the inner product term, first note that following Martins et al. (2020) the gradient is given by
$$\nabla\Omega_\alpha(p_g)(t) = \frac{p_g(t)^{\alpha-1}}{\alpha-1}. \quad (15)$$
Thus
$$|\langle\nabla\Omega_\alpha(p_g), p_f - p_g\rangle_2| \leq \|\nabla\Omega_\alpha(p_g)\|_1\|p_f - p_g\|_\infty = \frac{1}{\alpha-1}\int_S p_g(t)^{\alpha-1}\,dQ\cdot\|p_f - p_g\|_\infty \leq \frac{1}{\alpha-1}\exp_{2-\alpha}(2\|g\|_\infty)\|p_f - p_g\|_\infty,$$
where the last step uses $\|p_g\|_\infty \leq \exp_{2-\alpha}(2\|g\|_\infty)$ (via Claim 1) together with $\exp_{2-\alpha}(2\|g\|_\infty) \geq 1$. Further, by Taylor's theorem,
$$y^\alpha = x^\alpha + \alpha z^{\alpha-1}(y - x) \quad (16)$$
for some $z$ between $x$ and $y$. Letting $y = p_f(t)$ and $x = p_g(t)$, there is $z = h(t)$ lying between $p_f(t)$ and $p_g(t)$ such that $p_f(t)^\alpha = p_g(t)^\alpha + \alpha h(t)^{\alpha-1}(p_f(t) - p_g(t))$. Since $f \in L^\infty$, applying Claim 1 gives $p_f, p_g \in L^\infty$, and thus $h$ is as well. Then
$$|p_f(t)^\alpha - p_g(t)^\alpha| = \alpha|h(t)|^{\alpha-1}|p_f(t) - p_g(t)| \leq \alpha\max\{\|p_f\|_\infty, \|p_g\|_\infty\}^{\alpha-1}\|p_f - p_g\|_\infty \leq \alpha(\|p_f\|_\infty + \|p_f - p_g\|_\infty)^{\alpha-1}\|p_f - p_g\|_\infty, \quad (17)$$
so that
$$|\Omega_\alpha(p_f) - \Omega_\alpha(p_g)| = \left|\frac{1}{\alpha(\alpha-1)}\int(p_f(t)^\alpha - p_g(t)^\alpha)\,dQ\right| \leq \frac{1}{\alpha-1}(\|p_f\|_\infty + \|p_f - p_g\|_\infty)^{\alpha-1}\|p_f - p_g\|_\infty.$$
Putting it all together, we obtain
$$B_{\Omega_\alpha}(p_f, p_g) \leq \frac{1}{\alpha-1}\|p_f - p_g\|_\infty\big[(\|p_f\|_\infty + \|p_f - p_g\|_\infty)^{\alpha-1} + \exp_{2-\alpha}(2\|g\|_\infty)\big].$$

C UWAVE EXPERIMENTS: ADDITIONAL DETAILS

We experiment with $N = 64$, 128 and 256 basis functions, and use a learning rate of 1e-4. We use $H = 100$ attention mechanisms, or heads. Our use of multiple heads differs slightly from Vaswani et al. (2017): we use the same value function for each head and only vary the attention densities. Additional architectural details are given below.

C.1 VALUE FUNCTION

The value function uses regularized linear regression on the original time series, observed at random observation times (which do not depend on the data), to obtain an approximation $V(t; \mathbf{B}) = \mathbf{B}\Psi(t) \approx X(t)$. The $\mathbf{H}$ in Eqn. 6 is the original time series.

C.1.1 ENCODER

In the encoder, we use the value function to interpolate the irregularly sampled time series at the original points. This is then passed through a convolutional layer with 4 filters and filter size 5, followed by a max pooling layer with pool size 2. This is followed by one hidden layer with 256 units and an output $v$ of size 256. The attention density parameters for each head $h = 1, \cdots, H$ are then
$$\mu_h = w_{h,1}^T v, \qquad \sigma_h = \mathrm{softplus}(w_{h,2}^T v), \qquad \gamma_h = W^{(h)} v$$
for vectors $w_{h,1}, w_{h,2}$ and matrices $W^{(h)}$, $h = 1, \cdots, H$.

C.1.2 ATTENTION MECHANISM

After forming densities and normalizing, we have densities $p_1(t), \cdots, p_H(t)$, which we use to compute context scalars $c_h = \mathbb{E}_{p_h}[V(T)]$. We compute these expectations using numerical integration to compute basis function expectations $\mathbb{E}_{p_h}[\psi_n(T)]$ and a parametrized value function $V(t) = \mathbf{B}\Psi(t)$ as described in Section 3.

C.1.3 CLASSIFIER

The classifier takes as input the concatenated context scalars as a vector. A linear layer is then followed by a softmax activation to output class probabilities.

D MIT-BIH: ADDITIONAL DETAILS

Note that our architecture takes some inspiration for the $\mathbf{H}$ used in our value function from a GitHub repository⁵, although they used TensorFlow and we implemented our method in PyTorch.

D.1 VALUE FUNCTION

The value function regresses the output of repeated convolutional and max pooling layers on basis functions, where the original time series was the input to these convolutional/max pooling layers. All max pool layers have pool size 2. There are multiple sets of two convolutional layers followed by a max pooling layer.
The first set of convolutional layers has 16 filters of filter size 5. The second and third sets each have 32 filters of size 3. The fourth set has one layer with 32 filters and one with 256, each of size 3. The final output has 256 dimensions of length 23. This is then used as our $\mathbf{H}$ matrix in Eqn. 6.

D.2 ENCODER

The encoder takes the original time series as input. It has one hidden layer with a ReLU activation function and 64 hidden units. It outputs the attention density parameters.

D.3 ATTENTION MECHANISM

The attention mechanism takes the parameters from the encoder and forms an attention density. It then computes
$$c = \mathbb{E}_p[V(T)] \quad (18)$$
for input to the classifier.

D.4 CLASSIFIER

The classifier has two hidden layers with ReLU activations and outputs class probabilities. Each hidden layer has 64 hidden units.

⁵https://github.com/CVxTz/ECG_Heartbeat_Classification
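To summarize the per-head parameterization of Appendix C.1.1, here is a minimal NumPy sketch (ours, not the authors' code); the shapes and weight names are assumptions for illustration:

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def attention_params(v, W1, W2, Ws):
    """Per-head density parameters from encoder output v (App. C.1.1):
    mu_h = w_{h,1}^T v, sigma_h = softplus(w_{h,2}^T v), gamma_h = W^(h) v.

    v: (dv,); W1, W2: (H, dv) stacked w_{h,1}, w_{h,2}; Ws: (H, I, dv)."""
    mu = W1 @ v                            # (H,) location per head
    sigma = softplus(W2 @ v)               # (H,) positive scale per head
    gamma = np.einsum('hid,d->hi', Ws, v)  # (H, I) kernel weights per head
    return mu, sigma, gamma
```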
1. What is the focus of the paper regarding continuous attention mechanisms? 2. What are the strengths of the proposed approach, particularly in terms of theoretical conditions? 3. What are the weaknesses of the paper compared to prior works? 4. How can the parameter alpha in the deformed exponential family be chosen effectively? 5. Is there an efficient approach to computing the context c, such as a resampling method?
Summary Of The Paper Review
Summary Of The Paper The paper studies the continuous attention mechanism using the kernel exponential family and its deformed variant. This is an extension of existing works based on finite-dimensional exponential families. The authors investigated theoretical conditions under which the RKHS function defines a valid probability density. Numerical experiments showed that the proposed method works efficiently for some datasets. Review The paper is clearly written. However, the results are rather a straightforward extension of the existing work by Martins et al. (2020; 2021), in which finite-dimensional exponential families and their deformed variants are used to formulate the continuous attention mechanism. In Section 4, some conditions for the finiteness of integration are introduced for the kernel-based model. These results may be important as a fundamental property of kernel-based models. However, this paper does not reveal a significant advantage of kernel-based modeling as an ingredient of the continuous attention mechanism. As the authors pointed out, the numerical integration required in the proposed method is a problem that should be resolved. The current form of the proposed method is far from practical usage. Some questions: How can one choose the parameter alpha in the deformed exponential family? It would be nice to show a data-dependent method of selecting the deformation parameter. To compute the context c, is a resampling method such as the Metropolis-Hastings algorithm an efficient approach?
ICLR
Title Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction Abstract Learning on graphs has attracted significant attention in the learning community due to numerous real-world applications. In particular, graph neural networks (GNNs), which take numerical node features and graph structure as inputs, have been shown to achieve state-of-the-art performance on various graph-related learning tasks. Recent works exploring the correlation between numerical node features and graph structure via self-supervised learning have paved the way for further performance improvements of GNNs. However, methods used for extracting numerical node features from raw data are still graph-agnostic within standard GNN pipelines. This practice is sub-optimal as it prevents one from fully utilizing potential correlations between graph topology and node attributes. To mitigate this issue, we propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT). GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information, and scales to large datasets. We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework. We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets: For example, we improve the accuracy of the top-ranked method GAMLP from 68.25% to 69.67%, SGC from 63.29% to 66.10% and MLP from 47.24% to 61.10% on the ogbn-papers100M dataset by leveraging GIANT. Our implementation is publicly available¹.

1 INTRODUCTION

The ubiquity of graph-structured data and its importance in solving various real-world problems such as node and graph classification have made graph-centered machine learning an important research area (Lü & Zhou, 2011; Shervashidze et al., 2011; Zhu, 2005). Graph neural networks (GNNs) offer state-of-the-art performance on many graph learning tasks and have by now become a standard methodology in the field (Kipf & Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018; Chien et al., 2020). In most such studies, GNNs take graphs with numerical node attributes as inputs and train them with task-specific labels. Recent research has shown that self-supervised learning (SSL) leads to performance improvements in many applications, including graph learning, natural language processing and computer vision. (∗This work was done during Eli Chien's internship at Amazon, USA. ¹https://github.com/amzn/pecos/tree/mainline/examples/giant-xrt) Several SSL approaches have also been successfully used with GNNs (Hu et al., 2020b; You et al., 2018; 2020; Hu et al., 2020c; Velickovic et al., 2019; Kipf & Welling, 2016; Deng et al., 2020). The common idea behind these works is to explore the correlated information provided by the numerical node features and graph topology, which can lead to improved node representations and GNN initialization. However, one critical yet neglected issue in the current graph learning literature is how to actually obtain the numerical node features from raw data such as text, images and audio signals.
As an example, when dealing with raw text features, the standard approach is to apply graph-agnostic methods such as bag-of-words, word2vec (Mikolov et al., 2013) or pre-trained BERT (Devlin et al., 2019). (As a further example, raw texts of product descriptions are used to construct node features via the bag-of-words model for benchmarking GNNs on the ogbn-products dataset (Hu et al., 2020a; Chiang et al., 2019).) The pre-trained BERT language model, as well as convolutional neural networks (CNNs) (Goyal et al., 2019; Kolesnikov et al., 2019), produce numerical features that can significantly improve the performance of various downstream learners (Devlin et al., 2019). Still, none of these works leverage graph information for actual self-supervision. Clearly, using graph-agnostic methods to extract numerical features is sub-optimal, as correlations between the graph topology and raw features are ignored. Motivated by the recent success of SSL approaches for GNNs, we propose GIANT, an SSL framework that resolves the aforementioned issue of graph-agnostic feature extraction in the standard GNN learning pipeline. Our framework takes raw node attributes and generates numerical node features with graph-structured self-supervision. To integrate the graph topology information into language models such as BERT, we also propose a novel SSL task termed neighborhood prediction, which works for both homophilous and heterophilous graphs, and establish connections between neighborhood prediction and the eXtreme Multi-label Classification (XMC) problem (Shen et al., 2020; Yu et al., 2022; Chang et al., 2020b). Roughly speaking, the neighborhood of each node can be encoded using binary multi-labels (indicating whether a node is a neighbor or not) and the BERT model is fine-tuned by successively improving the predicted neighborhoods. This approach allows us not only to leverage the advanced solvers for the XMC problem and address the issue of graph-agnostic feature extraction, but also to perform a theoretical study of the XMC problem and determine its importance in the context of graph-guided SSL. Throughout the work, we focus on raw texts as these are the most common data used for large-scale graph benchmarking. Examples include titles/abstracts in citation networks and product descriptions in co-purchase networks. To solve our proposed self-supervised XMC task, we adopt the state-of-the-art XR-Transformer method (Zhang et al., 2021a). By using the encoder from the XR-Transformer pre-trained with GIANT, we obtain informative numerical node features which consistently boost the performance of GNNs on downstream tasks. Notably, GIANT significantly improves state-of-the-art methods for node classification tasks described on the Open Graph Benchmark (OGB) (Hu et al., 2020a) leaderboard on three large-scale graph datasets, with absolute improvements in accuracy of roughly 1.5% for the first-ranked methods, 3% for standard GNNs and 14% for the multilayer perceptron (MLP). GIANT coupled with XR-Transformer is also highly scalable and can be combined with other downstream learning methods. Our contributions may be summarized as follows. 1. We identify the issue of graph-agnostic feature extraction in standard GNN pipelines and propose a new GIANT self-supervised framework as a solution to the problem. 2. We introduce a new approach to extract numerical features guided by graph information, based on the idea of neighborhood prediction.
The gist of the approach is to use neighborhood prediction within a language model such as BERT to guide the process of fine-tuning the features. Unlike link prediction, neighborhood prediction resolves problems associated with heterophilic graphs. 3. We establish pertinent connections between neighborhood prediction and the XMC problem by noting that neighborhoods of individual nodes can be encoded by binary vectors which may be interpreted as multi-labels. This allows for performing neighborhood prediction via XR-Transformers, especially designed to solve XMC problems at scale. 4. We demonstrate through extensive experiments that GIANT consistently improves the performance of tested GNNs on downstream tasks by large margins. We also report new state-of-the-art results on the OGB leaderboard, including absolute improvements in accuracy of roughly 1.5% compared to the top-ranked method, 3% for standard GNNs and 14% for the multilayer perceptron (MLP). More precisely, we improve the accuracy of the top-ranked method GAMLP (Zhang et al., 2021b) from 68.25% to 69.67%, SGC (Wu et al., 2019) from 63.29% to 66.10% and MLP from 47.24% to 61.10% on the ogbn-papers100M dataset. 5. We present a new theoretical analysis that verifies the benefits of key components in XR-Transformers on our neighborhood prediction task. This analysis also further improves our understanding of XR-Transformers and the XMC problem. Due to space limitations, all proofs are deferred to the Appendix.

2 BACKGROUND AND RELATED WORK

General notation. Throughout the paper, we use bold capital letters such as $\mathbf{A}$ to denote matrices. We use $\mathbf{A}_i$ for the $i$-th row of the matrix and $\mathbf{A}_{ij}$ for its entry in row $i$ and column $j$. We reserve bold lowercase letters such as $\mathbf{a}$ for vectors. The symbol $\mathbf{I}$ denotes the identity matrix, while $\mathbf{1}$ denotes the all-ones vector. We use $o(\cdot), O(\cdot), \omega(\cdot), \Theta(\cdot)$ in the standard manner.

SSL in GNNs. SSL is a topic of substantial interest due to its potential for improving the performance of GNNs on various tasks. Exploiting the correlation between node features and the graph structure is known to lead to better node representations or GNN initialization (Hu et al., 2020b; You et al., 2018; 2020; Hu et al., 2020c). Several methods have been proposed for improving node representations, including (variational) graph autoencoders (Kipf & Welling, 2016), Deep Graph Infomax (Velickovic et al., 2019) and GraphZoom (Deng et al., 2020). For more information, the interested reader is referred to a survey of SSL GNNs (Xie et al., 2021). While these methods can be used as SSL modules in GNNs (Figure 1), it is clear that they do not solve the described issue of graph-agnostic feature extraction in the standard GNN pipeline. Furthermore, as the above-described SSL GNN modules and other pre-processing and post-processing methods for GNNs such as C&S (Huang et al., 2021) and FLAG (Kong et al., 2020) in general improve graph learners, it is worth pointing out that they can naturally be integrated into the GIANT framework. This topic is left as future work.

The XMC problem, PECOS and XR-Transformer. The XMC problem can be succinctly formulated as follows: We are given a training set $\{T_i, \mathbf{y}_i\}_{i=1}^n$, where $T_i \in \mathcal{D}$ is the $i$-th input text instance and $\mathbf{y}_i \in \{0, 1\}^L$ is the target multi-label from an extremely large collection of labels. The goal is to learn a function $f : \mathcal{D} \times [L] \to \mathbb{R}$, where $f(T, l)$ captures the relevance between the input text $T$ and the label $l$.
The XMC problem is of importance in many real-world applications (Jiang et al., 2021; Ye et al., 2020): For example, in E-commerce dynamic search advertising, XMC arises when trying to find a "good" mapping from items to bid queries on the market (Prabhu et al., 2018; Prabhu & Varma, 2014). In open-domain question answering, XMC problems arise when trying to map questions to "evidence" passages containing the answers (Chang et al., 2020a; Lee et al., 2019). Many methods for the XMC problem leverage hierarchical clustering approaches for labels (Prabhu et al., 2018; You et al., 2019). This organizational structure allows one to handle potentially enormous numbers of labels, as done in PECOS (Yu et al., 2022). The key is to take advantage of the correlations among labels within the hierarchical clustering. In our approach, we observe that the multi-labels correspond to neighborhoods of nodes in the given graph. Neighborhoods have to be predicted using the textual information in order to best match the a priori given graph topology. We use the state-of-the-art XR-Transformer (Zhang et al., 2021a) method for solving the XMC problem to achieve this goal. The high-level idea is to first cluster the output labels, and then learn the instance-to-cluster "matchers" (please refer to Figure 2). Note that many other methods have used PECOS (including XR-Transformers) for solving large-scale real-world learning problems (Etter et al., 2022; Liu et al., 2021; Chang et al., 2020b; Baharav et al., 2021; Chang et al., 2021; Yadav et al., 2021; Sen et al., 2021), but not in the context of self-supervised numerical feature extraction as done in our work.

GNNs with raw text data. It is conceptually possible to jointly train BERT and GNNs in an end-to-end fashion, which could potentially resolve the issue of graph-agnosticism in the standard pipeline. However, the excessive model complexity of BERT makes such a combination practically prohibitive due to GPU memory limitations. Furthermore, it is nontrivial to train this combination of methods with arbitrary mini-batch sizes (Chiang et al., 2019; Zeng et al., 2020). In contrast, the XR-Transformer architecture naturally supports mini-batch training and scales well (Jiang et al., 2021). Hence, our GIANT method uses XR-Transformers instead of combinations of BERT and GNNs. To the best of our knowledge, we are aware of only one prior work that uses raw text inputs for the node classification problem (Zhang et al., 2020), but it still follows the standard pipeline described in Figure 1. Some other works apply GNNs on texts and for document classification, where the actual graphs are constructed based on the raw text; this is clearly not the focus of this work (Yao et al., 2019; Huang et al., 2019; Zhang & Zhang, 2020; Liu et al., 2020).

3 METHODS

Our goal is to resolve the issue of graph-agnostic numerical feature extraction for standard GNN learning pipelines. Although our interest lies in raw text data, as already pointed out, the proposed methodology can be easily extended to account for other types of raw data and corresponding feature extraction methods. To this end, consider a large-scale graph $G$ with node set $\mathcal{V} = \{1, 2, \ldots, n\}$ and adjacency matrix $\mathbf{A} \in \{0, 1\}^{n\times n}$. Each node $i$ is associated with some raw text, which we denote by $T_i$. The language model is treated as an encoder $\Phi$ that maps the raw text $T_i$ to a numerical node feature $X_i \in \mathbb{R}^d$. Key to our SSL approach is the task of neighborhood prediction, which aims to determine the neighborhood $\mathbf{A}_i$ from $T_i$.
The neighborhood vector $\mathbf{A}_i$ can be viewed as a target multi-label $\mathbf{y}_i$ for node $i$, where we have $L = n$ labels. Hence, neighborhood prediction represents an instance of the XMC problem, which we solve by leveraging XR-Transformers. The trained encoder in an XR-Transformer generates informative numerical node features, which can then be used in downstream tasks, in the SSL GNN module, and for GNN pre-training.

Detailed description of the use of XR-Transformers for neighborhood prediction. The most straightforward instance of the XMC problem is the one-versus-all (OVA) model, which can be formalized as
$$f(T, l) = w_l^T\Phi(T), \quad l \in [L],$$
where $W = [w_1, \ldots, w_L] \in \mathbb{R}^{d\times L}$ are weight vectors and $\Phi : \mathcal{D} \to \mathbb{R}^d$ is the encoder that maps $T$ to a $d$-dimensional feature vector. The encoder $\Phi$ can be a deterministic model such as bag-of-words or the Term Frequency-Inverse Document Frequency (TFIDF) model, or some other model with learnable parameters, such as XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019b). We choose to work with pre-trained BERT (Devlin et al., 2019). Also, one can change $\Phi$ according to the type of input data format (e.g., CNNs for images). Despite the simple formulation, it is known (Chang et al., 2020b) that fine-tuning transformer models directly on large output spaces can be prohibitively complex. For neighborhood prediction, $L = n$, and the graphs encountered may have millions of nodes. Hence, we need a more scalable approach to training Transformers. As part of an XR-Transformer, one builds a hierarchical label clustering tree based on the label features $Z \in \mathbb{R}^{L\times d}$; $Z$ is based on Positive Instance Feature Aggregation (PIFA):
$$Z_l = \frac{v_l}{\|v_l\|}, \quad \text{where } v_l = \sum_{i: y_{i,l}=1}\Psi(T_i), \quad \forall l \in [L]. \quad (1)$$
Note that for neighborhood prediction, the above expression represents exactly one step of a graph convolution with node features $\Psi(T_i)$, followed by norm normalization; here, $\Psi(\cdot)$ denotes some text vectorizer such as bag-of-words or TFIDF. In the next step, XR-Transformer uses balanced k-means to recursively partition label sets and generate the hierarchical label cluster tree in a top-down fashion. This step corresponds to Step 1 in Figure 2. At each intermediate level, it learns a matcher to find the most relevant clusters, as illustrated in Step 2 of Figure 2. By leveraging the label hierarchy defined by the cluster tree, the XR-Transformer can train the model on multi-resolution objectives. Multi-resolution learning has been used in many different contexts, including computer vision (Lai et al., 2017; Karras et al., 2018; 2019; Pedersoli et al., 2015) and meta-learning (Liu et al., 2019a), but has only recently been applied to the XMC problem as part of PECOS and XR-Transformers. For neighborhood prediction, multi-resolution amounts to generating a hierarchy of coarse-to-fine views of neighborhoods. The only line of work in self-supervised graph learning that somewhat resembles this approach is GraphZoom (Deng et al., 2020), in so far that it applies SSL on coarsened graphs. Nevertheless, the way in which we perform coarsening is substantially different; furthermore, GraphZoom still falls into the standard GNN pipeline category depicted in Figure 1.
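A minimal sketch (ours, not the GIANT codebase) of how neighborhood prediction is cast as XMC and how the PIFA embeddings of Eqn. (1) are formed; the toy 4-cycle graph and random TFIDF-like features are assumptions for illustration:

```python
import numpy as np
import scipy.sparse as sp

# Each node's raw text T_i is an XMC input and row i of the adjacency
# matrix is its binary multi-label target y_i, so there are L = n labels.
n, d = 4, 16
A = sp.csr_matrix(np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
Psi = np.random.rand(n, d)  # stand-in for TFIDF features Psi(T_i)

def pifa(Y, Psi):
    """PIFA label embeddings (Eqn. 1): v_l = sum_{i: Y[i,l]=1} Psi(T_i),
    followed by row normalization.  With Y = A this is one graph
    convolution of the text features."""
    V = Y.T @ Psi
    norms = np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    return V / norms

Z = pifa(A, Psi)  # (L, d) inputs to balanced k-means for the cluster tree
```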
4 THEORETICAL ANALYSIS

We also provide theoretical evidence in support of using each component of our proposed learning framework. First, we show that self-supervised neighborhood prediction is better suited to the task at hand than standard link prediction. More specifically, we show that the standard design criteria in self-supervised link prediction tasks are biased towards graph homophily assumptions (McPherson et al., 2001; Klicpera et al., 2018). In contrast, our self-supervised neighborhood prediction model works for both homophilic and heterophilic graphs. This universality property is crucial for the robustness of graph learning methods, especially in relationship to GNNs (Chien et al., 2020). Second, we demonstrate the benefits of using PIFA embeddings and clustering in XR-Transformers for graph-guided numerical feature extraction. Our analysis is based on the contextual stochastic block model (cSBM) (Deshpande et al., 2018), which was also used in Chien et al. (2020) for testing the GPR-GNN framework and in Baranwal et al. (2021) for establishing the utility of graph convolutions for node classification.

Link versus neighborhood prediction. One standard SSL task on graphs is link prediction, which aims to model an entry of the adjacency matrix according to

$$P(A_{ij} = 1) \propto \mathrm{Similarity}\big(\Phi(T_i), \Phi(T_j)\big). \qquad (2)$$

Here, the function $\mathrm{Similarity}(\mathbf{x}, \mathbf{y})$ is a measure of similarity of two vectors $\mathbf{x}$ and $\mathbf{y}$. The most frequently used choice for this function is the inner product of the two input vectors followed by a sigmoid. However, this type of design implicitly relies on the homophily assumption: Nodes with similar node representations are more likely to have links. It has been shown in Pei et al. (2020); Chien et al. (2020); Zhu et al. (2020); Lim et al. (2021) that there are real-world graph datasets that violate the homophily assumption and on which many GNN architectures fail. A simple example that shows how SSL link prediction may fail is presented in Figure 3. Nodes of the same color share the same features (for simplicity, these are represented as numerical values). Clearly, no matter what encoder $\Phi$ we use, the similarity of node features is highest for nodes of the same color. However, there is no edge between nodes of the same color; hence the standard methodology of link prediction based on the homophily assumption fails to work even for this simple heterophilous graph (a small numerical sketch below makes this failure mode concrete). In order to fix this issue, we use a different modeling assumption, stated below.

Assumption 4.1. Nodes with similar node features have similar “structural roles” in the graph.

In our study, we equate “structure” with the 1-hop neighborhood of a node (i.e., the row of the adjacency matrix indexed by the underlying node). The above assumption is in alignment with our XMC problem assumptions, where nodes with a small perturbation in their raw text should be mapped to a similar multi-label. Our assumption is more general than the standard homophily assumption; it is also clear that there exists a perfect mapping from node features to their neighborhoods for the example in Figure 3. Hence, neighborhood prediction appears to be a more suitable SSL approach than SSL link prediction for graph-guided feature extraction.

Analysis of key components in XR-Transformers. In the original XR-Transformer work (Zhang et al., 2021a), the authors argued that one needs to perform clustering of the multi-label space in order to resolve scarce training instances in XMC. They also empirically showed that directly fine-tuning language models on extremely large output spaces is prohibitive. Furthermore, they empirically established that constructing clusters based on the PIFA embedding with TFIDF features gives the best performance. However, no theoretical evidence was given in support of this approach to solving the XMC problem. We next leverage recent advances in graph learning to analytically characterize the benefits of using XR-Transformers.
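The heterophilous failure case of Figure 3 can be reproduced numerically. The following is a minimal sketch of our own (the toy features and graph are illustrative assumptions): Eq. (2)-style scores rank same-feature pairs highest even though those pairs are never linked, while the neighborhood targets remain perfectly consistent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy heterophilous graph: nodes 0,1 ("red") share features, nodes 2,3
# ("blue") share features, and edges only connect different colors.
X = np.array([[1.0, 0.0],   # node 0 (red)
              [1.0, 0.0],   # node 1 (red)
              [0.0, 1.0],   # node 2 (blue)
              [0.0, 1.0]])  # node 3 (blue)
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]])

S = sigmoid(X @ X.T)      # Eq. (2)-style link scores
print(S[0, 1], A[0, 1])   # same color: highest score (~0.73) but no edge
print(S[0, 2], A[0, 2])   # different color: lowest score (0.5) yet linked

# Neighborhood prediction sidesteps the issue: A_0 == A_1, so identical
# inputs are mapped to identical multi-label targets, a consistent mapping.
print(np.array_equal(A[0], A[1]))  # True
```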
Description of the cSBM. Using our Assumption 4.1, we analyze the case where the graph and node features are generated according to a cSBM (Deshpande et al., 2018) (see Figure 4); a short simulation sketch is provided at the end of this section. For simplicity, we use the most straightforward two-cluster cSBM. Let $\{y_i\}_{i=1}^{n} \in \{0, 1\}$ be the labels of the nodes in a graph. We denote the size of class $j \in \{0, 1\}$ by $C_j = |\{i \in [n] : y_i = j\}|$. We also assume that the classes are balanced, i.e., $C_0 = C_1 = \frac{n}{2}$. The node features $\{X_i\}_{i=1}^{n} \in \mathbb{R}^d$ are independent $d$-dimensional Gaussian random vectors, such that $X_i \sim \mathcal{N}(\frac{r}{\sqrt{d}}\mathbf{1}, \frac{\sigma^2}{d}I)$ if $y_i = 0$ and $X_i \sim \mathcal{N}(-\frac{r}{\sqrt{d}}\mathbf{1}, \frac{\sigma^2}{d}I)$ if $y_i = 1$. The adjacency matrix of the cSBM is denoted by $A$, and is clearly symmetric. All edges are drawn according to independent Bernoulli random variables, so that $A_{ij} \sim \mathrm{Ber}(p)$ if $y_i = y_j$ and $A_{ij} \sim \mathrm{Ber}(q)$ if $y_i \neq y_j$. Our analysis is inspired by Baranwal et al. (2021) and Li et al. (2019), albeit their definitions of graph convolutions and random walks differ from those in PIFA. For our subsequent analysis, we also make use of the following standard assumption and define the notion of effect size.

Assumption 4.2. $p, q = \omega\big(\sqrt{\frac{\log n}{n}}\big)$. $\frac{|p - q|}{p + q} = \Theta(1)$. $d = o(n)$. $0 < r, \sigma = \Theta(1)$.

Note that Baranwal et al. (2021) also poses constraints on $p$, $q$, $\frac{|p-q|}{p+q}$ and $d$. In contrast, we do not require $p - q > 0$ to hold (Baranwal et al., 2021; Li et al., 2019), so that we can address graph structures that are either homophilic or heterophilic. Due to the difference between PIFA and standard graph convolution, we require $p, q$ to be larger compared to the corresponding values used in Baranwal et al. (2021).

Definition 4.3. For the cSBM, the effect size of the two centroids of the node features $X$ of the two different classes is defined as

$$\frac{\|\mathbb{E}X_i - \mathbb{E}X_j\|}{\sqrt{\mathbb{E}\|X_i - \mathbb{E}X_i\|^2} + \sqrt{\mathbb{E}\|X_j - \mathbb{E}X_j\|^2}}, \quad \text{where } y_i \neq y_j. \qquad (3)$$

In the standard definition of effect size, the mean difference is divided by the standard deviation of a single class, as the latter is assumed to be the same for both classes. We use the sum of both standard deviations to prevent any ambiguity in our definition. Note that in the case of isotropic Gaussian distributions, the larger the effect size, the larger the separation between the two classes.

Theoretical results. We are now ready to state our main theoretical result, which asserts that the effect size of the centroids of the PIFA embeddings $Z$ is asymptotically larger than that obtained from the original node features. Our Theorem 4.4 provides strong evidence that using PIFA in XR-Transformers offers improved clustering results and, consequently, better feature quality.

Theorem 4.4. For the cSBM and under Assumption 4.2, the effect size of the two centroids of the node features $X$ of the two different classes is $\frac{r}{\sigma} = \Theta(1)$. Moreover, the effect size of the two centroids of the PIFA embedding $Z$ of the two different classes, conditioned on an event of probability at least $1 - O(\frac{1}{n^c})$ for some constant $c > 0$, is $\omega(1)$.

We see that although two nodes $i, j$ from the same class have the same neighborhood vectors in expectation, $\mathbb{E}A_i = \mathbb{E}A_j$, their Hamming distance can be large in practice. This finding is formally characterized in Proposition 4.5.

Proposition 4.5. For the cSBM and under Assumption 4.2, the Hamming distance between $A_i$ and $A_j$ with $y_i = y_j$ is $\omega(\sqrt{n \log n})$ with probability at least $1 - O(\frac{1}{n^c})$, for some $c > 0$.

Hence, directly using neighborhood vectors for self-supervision is not advisable. Our result also agrees with findings from the XMC literature (Chang et al., 2020b). It is also intuitively clear that averaging neighborhood vectors from the same class can reduce the variance, which is approximately what clustering based on node representations (in our case, via a PIFA embedding) accomplishes. This result establishes the importance of clustering within the XR-Transformer approach and for the SSL neighborhood prediction task.
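To build intuition for the analysis, the cSBM above is easy to simulate. Below is a minimal sampler of our own; the function name and parameter values are illustrative assumptions (the sketch is reused in the sanity check following the proof of Theorem 4.4 in Appendix B).

```python
import numpy as np

def sample_csbm(n, d, r, sigma, p, q, rng=None):
    """Sample a balanced two-class cSBM: features N(+/- (r/sqrt(d)) 1,
    (sigma^2/d) I) per class, edges Ber(p) within and Ber(q) across
    classes, with a symmetric adjacency matrix and no self-loops."""
    rng = np.random.default_rng(rng)
    y = np.repeat([0, 1], n // 2)
    mu = (r / np.sqrt(d)) * np.ones(d)
    X = rng.normal(0.0, sigma / np.sqrt(d), size=(n, d))
    X += np.where((y == 0)[:, None], mu, -mu)
    same_class = (y[:, None] == y[None, :])
    probs = np.where(same_class, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(np.int8)
    return X, A, y

# Example draw; p > q gives a homophilic graph, p < q a heterophilic one.
X, A, y = sample_csbm(n=1000, d=50, r=1.0, sigma=1.0, p=0.15, q=0.05, rng=0)
```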
5 EXPERIMENTS

Evaluation Datasets. We consider node classification as our downstream task and evaluate GIANT on three large-scale OGB datasets (Hu et al., 2020a) with available raw text: ogbn-arxiv, ogbn-products, and ogbn-papers100M. The parameters of these datasets are given in Table 1 and detailed descriptions are available in Appendix E.1. Following the OGB benchmarking protocol, we report the average test accuracy and the corresponding standard deviation across 3 runs of each downstream GNN model.

Evaluation Protocol. We refer to our actual implementation as GIANT-XRT, since the multi-scale neighborhood prediction task in the proposed GIANT framework is solved by an XR-Transformer. In the pre-training stage, GIANT-XRT learns a raw text encoder by optimizing the self-supervised neighborhood prediction objective, and generates fine-tuned node embeddings for the later stages. For the node classification downstream tasks, we input the node embeddings from GIANT-XRT into several different models. One is the multi-layer perceptron (MLP), which does not use graph information. Two other methods are GraphSAGE (Hamilton et al., 2017), which we applied to ogbn-arxiv, and GraphSAINT (Zeng et al., 2020), which we used for ogbn-products as it allows for mini-batch training. Due to scalability issues, we used Simple Graph Convolution (SGC) (Wu et al., 2019) for ogbn-papers100M. We also tested the state-of-the-art GNN for each dataset. At the time we conducted the main experiments (07/01/2021), the top-ranked model for ogbn-arxiv was RevGAT (Li et al., 2021) and the top-ranked model for ogbn-products was SAGN (Sun & Wu, 2021). When we conducted the experiment on ogbn-papers100M (09/10/2021), the top-ranked model for ogbn-papers100M was GAMLP (Zhang et al., 2021b). (Since then, the highest reported accuracy has improved by 0.05% for ogbn-arxiv and 0.31% for ogbn-products; both of these improvements fall short of those offered by GIANT.) For all evaluations, we use publicly available implementations of the GNNs. For RevGAT, we report the performance of the model with and without self-knowledge distillation; the former setting is henceforth referred to as +SelfKD. For SAGN, we report results with the self-label-enhanced (SLE) feature, and denote them by SAGN+SLE. For GAMLP, we report results with and without Reliable Label Utilization (RLU); the former is denoted by GAMLP+RLU.

SSL GNN Competing Methods. We compare GIANT-XRT to methods that rely on graph-agnostic feature inputs and use node embeddings generated by various SSL GNNs modules. The graph-agnostic features are either the default features available from the OGB datasets (denoted by OGB-feat) or plain BERT embeddings (without fine-tuning) generated from raw text (denoted by BERT*). For OGB-feat combined with downstream GNN methods, we report the results from the OGB leaderboard (and denote them by †).
For the SSL GNNs modules, we test three frequently-used methods: (Variational) Graph Auto-Encoders (Kipf & Welling, 2016) (denoted by (V)GAE); Deep Graph Infomax (Velickovic et al., 2019) (denoted by DGI); and GraphZoom (Deng et al., 2020) (denoted by GZ). The hyper-parameters of the SSL GNNs modules are given in Appendix E.2. For all reported results, we use $X_{\mathrm{plain}}$, $X_{\mathrm{SSL\text{-}GNN}}$ and $X_{\mathrm{GIANT}}$ (c.f. Figure 1) to denote which framework the method belongs to. Note that $X_{\mathrm{GIANT}}$ refers to our approach. The implementation details and hyper-parameters of GIANT-XRT can be found in Appendix E.3.

5.1 MAIN RESULTS

The results for the ogbn-arxiv and ogbn-products datasets are listed in Table 2. Our GIANT-XRT approach gives the best results for both datasets and all downstream models. It improves the accuracy of the top-ranked OGB leaderboard models by a large margin: 1.86% on ogbn-arxiv and 1.19% on ogbn-products. (We use the following public implementations of the state-of-the-art models: RevGAT: https://github.com/lightaime/deep_gcns_torch/tree/master/examples/ogb_eff/ogbn_arxiv_dgl; SAGN: https://github.com/skepsun/SAGN_with_SLE; GAMLP: https://github.com/PKU-DAIR/GAMLP.) Using graph-agnostic BERT embeddings does not necessarily lead to good results (see the first two rows in Table 2). This shows that the improvement of our method is not merely due to the use of a more powerful language model, and establishes the need for self-supervision governed by graph information. Another observation is that among the possible combinations involving a standard GNN pipeline with an SSL module, BERT+(V)GAE offers the best performance. This can be attributed to exploiting the correlation between numerical node features and the graph structure, albeit in a two-stage approach within the standard GNN pipeline. The most important finding is that using node features generated by GIANT-XRT leads to consistent and significant improvements in the accuracy of all tested methods when compared to the standard GNN pipeline: In particular, on ogbn-arxiv, the improvement equals 17.58% for MLP and 3.1% for GraphSAGE; on ogbn-products, the improvement equals 18.76% for MLP and 5.32% for GraphSAINT. Figure 5 in Appendix E.6 further illustrates the gain obtained by GIANT-XRT over SOTA methods on the OGB leaderboard. Another important observation is that GIANT-XRT is highly scalable, which can be clearly seen on the ogbn-papers100M dataset, for which the results are shown in Table 3. In particular, GIANT-XRT improves the accuracy of the top-ranked model, GAMLP+RLU, by a margin of 1.42%. Furthermore, GIANT-XRT again consistently improves all tested downstream methods on the ogbn-papers100M dataset. As a final remark, we find, to our surprise, that combining MLP with GIANT-XRT greatly improves the performance of the former learner on all datasets. It becomes only slightly worse than GIANT-XRT+GNNs and can even outperform the GraphSAGE and GraphSAINT methods with default OGB features on the ogbn-arxiv and ogbn-products datasets. This is yet another positive property of GIANT, since MLPs are low-complexity and more easily implementable than GNNs.

5.2 ABLATION STUDY

We also conduct an ablation study of the GIANT framework to determine the relevance of each module involved. The first step is to consider alternatives to the proposed multi-scale neighborhood prediction task: In this case, we fine-tune BERT with an SSL link prediction approach, which we for simplicity refer to as BERT+LP.
In addition, we examine how the PIFA embedding affects the performance of GIANT-XRT and how more informative node features (TFIDF) can improve the clustering steps. First, recall that in GIANT-XRT, we use TFIDF features from raw text to construct PIFA embeddings. We subsequently use the term “NO TFIDF” to indicate that we replaced the TFIDF feature matrix with an identity matrix, which contains no raw text information. The term “TFIDF+NO PIFA” refers to the setting where only raw text information (node attributes) is used to perform hierarchical clustering. Similarly, “NO TFIDF+PIFA” indicates that we only use normalized neighborhood vectors (graph structure) to construct the hierarchical clustering. If both node attributes and graph structure are ignored, the result is a random clustering. Nevertheless, we keep the same cluster sizes at each level of the hierarchical clustering. The results of the ablation study are listed under the rows indexed by $X_{\mathrm{GIANT}}$ in Table 2 for the ogbn-arxiv and ogbn-products datasets. They once again confirm that GIANT-XRT consistently outperforms the other tested methods. For BERT+LP, we find that it performs better on ogbn-arxiv than the standard GNN pipeline but worse on ogbn-products. This shows that using link prediction to fine-tune BERT in a self-supervised manner is not robust in general, and further strengthens the case for using neighborhood instead of link prediction. With respect to the ablation study of GIANT-XRT, we see that NO TFIDF+NO PIFA indeed gives the worst results. Using node attributes (TFIDF features) or graph information (PIFA) to construct the hierarchical clustering in GIANT-XRT leads to the performance improvements seen in Table 2. Nevertheless, combining both, as done in GIANT-XRT, gives the best results. Moreover, one can observe that using PIFA always gives better results than not using it. This aligns with our theoretical analysis, which shows that PIFA embeddings lead to better hierarchical clusterings.

ACKNOWLEDGEMENT

The authors thank Amazon for its support and the Amazon Post Internship Fellowship. Cho-Jui Hsieh is partially supported by NSF under IIS-2008173 and IIS-2048280. This work was funded in part by the NSF grant 1956384.

6 ETHICS STATEMENT

We are not aware of any potential ethical issues regarding our work.

7 REPRODUCIBILITY STATEMENT

We provide our code in the supplementary material along with an easy-to-follow description and package dependencies for reproducibility. Our experimental setting is stated in Section 5, and details pertaining to hyperparameters and the computational environment are described in the Appendix. All tested methods are integrated in our code: https://github.com/amzn/pecos/tree/mainline/examples/giant-xrt.

A CONCLUSIONS

We introduced a novel self-supervised learning framework for graph-guided numerical node feature extraction from raw data, and evaluated it within multiple GNN pipelines. Our method, termed GIANT, for the first time successfully resolves the issue of graph-agnostic numerical feature extraction. We also described a new SSL task, neighborhood prediction, established a connection between this task and the XMC problem, and solved it using XR-Transformers. We further examined the theoretical properties of GIANT in order to evaluate the advantages of neighborhood prediction over standard link prediction, and to assess the benefits of using XR-Transformers. Our extensive numerical experiments, which showed that GIANT consistently improves state-of-the-art GNN models, were supplemented with an ablation study that aligns with our theoretical analysis.
B PROOF OF THEOREM 4.4

Throughout the proof, we use $\boldsymbol{\mu} = \frac{r}{\sqrt{d}}\mathbf{1}$ to denote the mean of the node features from class 0 and $\boldsymbol{\nu} = -\frac{r}{\sqrt{d}}\mathbf{1}$ for class 1. We keep this notation to demonstrate that our choice of means can easily be generalized. The choice of the high-probability events will become clear during the proof. The result for the effect size of the centroids of the node features $X$ follows directly from Definition 4.3. By plugging in the means and standard deviations, we have

$$\frac{2r}{2\sigma} = \Theta(1). \qquad (4)$$

The last equality is due to our assumption that $r, \sigma > 0$ are both constants. To prove the result for the effect size of the centroids of the PIFA embedding $Z$, we first need to introduce some standard concentration results for sums of Bernoulli and Gaussian random variables.

Lemma B.1 (Hoeffding’s inequality (Hoeffding, 1994)). Let $S_n = \sum_{i=1}^{n} X_i$, where the $X_i$ are i.i.d. Bernoulli random variables with parameter $p$. Then for any $t > 0$, we have

$$P(|S_n - np| \geq t) \leq 2\exp\Big(\frac{-2t^2}{n}\Big). \qquad (5)$$

Lemma B.2 (Concentration of a sum of Gaussians). Let $S_n = \sum_{i=1}^{n} X_i$, where the $X_i$ are i.i.d. Gaussian random variables with zero mean and standard deviation $\sigma$. Then for some constant $c > 0$, we have

$$P\big(|S_n| \geq c\sigma\sqrt{n \log n}\big) \leq 2\exp\Big(-\frac{c^2}{2}\log n\Big). \qquad (6)$$

Now we are ready to prove our claim. Recall that the PIFA embedding $Z$ is defined as follows:

$$Z_i = \frac{\mathbf{v}_i}{\|\mathbf{v}_i\|}, \quad \text{where } \mathbf{v}_i = \sum_{j: A_{ij} = 1} X_j = [AX]_i. \qquad (7)$$

We first focus on analyzing the vector $\mathbf{v}_i$. We denote $N_{iy} = |\{j \in [n] : y_j = y \wedge A_{ij} = 1\}|$ and $N_i = N_{i0} + N_{i1}$. Without loss of generality, we assume $y_i = 0$. The conditional expectation of $\mathbf{v}_i$ is

$$\mathbb{E}[\mathbf{v}_i \mid A] = N_{i0}\boldsymbol{\mu} + N_{i1}\boldsymbol{\nu}. \qquad (8)$$

Next, by leveraging Lemma B.1, we have

$$P\Big(\Big|N_{i1} - \frac{nq}{2}\Big| \geq t\Big) \leq 2\exp\Big(-\frac{4t^2}{n}\Big). \qquad (9)$$

By choosing $t = c_1\sqrt{n \log n}$ for some constant $c_1 > 1$, we know that with probability at least $1 - O(\frac{1}{n^{c_1}})$, $N_{i1} \in [\frac{nq}{2} - c_1\sqrt{n \log n}, \frac{nq}{2} + c_1\sqrt{n \log n}]$. Finally, by our Assumption 4.2, we know that $nq = \omega(\sqrt{n \log n})$. Thus, we arrive at the conclusion that with probability at least $1 - O(\frac{1}{n^{c_1}})$, $N_{i1} = \frac{nq}{2}(1 \pm o(1))$. Following the same analysis, we can prove that with probability at least $1 - O(\frac{1}{n^{c_1}})$, $N_{i0} = \frac{np}{2}(1 \pm o(1))$. The only difference is that we are dealing with $\frac{n}{2} - 1$ random variables in this case, as there is no self-loop. Nevertheless, $\frac{n}{2} - 1 = \frac{n}{2}(1 - o(1))$, so the result is the same. Note that we need to apply a union bound over $2n$ error events ($\forall i \in [n]$, with 2 cases for $N_{i0}$ and $N_{i1}$, respectively). Together, we know that the error probability is upper bounded by $O(\frac{1}{n^{c_2}})$ for some new constant $c_2 = c_1 - 1 > 0$. Hence, we have characterized the mean of $\mathbf{v}_i$ on a high-probability event $B_1$. Next, we need to analyze its norm. By direct analysis, conditioned on $A$, we have

$$\|\mathbf{v}_i\|^2 \stackrel{d}{=} \sum_{k=1}^{d}\Big(\mu_k N_{i0} + \nu_k N_{i1} + \sum_{j=1}^{n} A_{ij} G_{jk}\Big)^2, \qquad (10)$$

where the $G_{jk}$ are i.i.d. Gaussian random variables with zero mean and standard deviation $\frac{\sigma}{\sqrt{d}}$, and $\stackrel{d}{=}$ stands for equality in distribution. Then, by Lemma B.2, we know that with probability at least $1 - O(\frac{1}{N_i^{c_3}})$ for some constant $c_3 > 2 + c_2$,

$$\Big|\sum_{j=1}^{n} A_{ij} G_{jk}\Big| = O\Big(\frac{\sigma}{\sqrt{d}}\sqrt{N_i \log N_i}\Big). \qquad (11)$$

This is because, conditioned on $A$, we are summing over $N_i$ Gaussian random variables. Recall that, conditioned on our high-probability event $B_1$, $N_i = \frac{n}{2}(p + q)(1 + o(1)) = \omega(\sqrt{n \log n}) \leq n$. Thus, we know that for some $c_2 > 0$, with probability at least $1 - O(\frac{1}{n^{c_2}} + \frac{1}{n^{c_3}}) = 1 - O(\frac{1}{n^{c_2}})$, we have $|\sum_{j=1}^{n} A_{ij} G_{jk}| = O(\frac{\sigma}{\sqrt{d}}\sqrt{n \log n})$.
Again, recall that both $N_{i0}, N_{i1} = \omega(\sqrt{n \log n})$; thus, together we have

$$v_{ik}^2 = \frac{n^2}{4}(\mu_k p + \nu_k q)^2 (1 + o(1)), \qquad (12)$$

$$\Rightarrow \quad \|\mathbf{v}_i\| = \frac{n}{2}\sqrt{\sum_{k=1}^{d}(\mu_k p + \nu_k q)^2}\,(1 + o(1)) \qquad (13)$$

$$= \frac{n}{2}\,r\,|p - q|\,(1 + o(1)). \qquad (14)$$

Again, we need to apply a union bound over $nd$ error events, which results in the error probability $O(\frac{1}{n^{c_2}})$, since $c_3 > 2 + c_2$ and $d = o(n)$ by our Assumption 4.2. We denote the corresponding high-probability event by $B_2$. Note that the same analysis can be applied to the case $y_i = 1$, where the result for the norm is the same and the result for $\mathbf{v}_i$ is obtained by simply swapping $p$ and $q$. Combining all of the above, we know that with probability at least $1 - O(\frac{1}{n^{c_2}})$ for some $c_2 > 0$, the PIFA embedding $Z_i$ equals

$$Z_i = \frac{\frac{n}{2}(\boldsymbol{\mu} p + \boldsymbol{\nu} q)(1 + o(1))}{\frac{n}{2}\,r\,|p - q|\,(1 + o(1))} = \frac{(\boldsymbol{\mu} p + \boldsymbol{\nu} q)(1 + o(1))}{r\,|p - q|} \quad \text{if } y_i = 0, \qquad (15)$$

$$Z_i = \frac{\frac{n}{2}(\boldsymbol{\mu} q + \boldsymbol{\nu} p)(1 + o(1))}{\frac{n}{2}\,r\,|p - q|\,(1 + o(1))} = \frac{(\boldsymbol{\mu} q + \boldsymbol{\nu} p)(1 + o(1))}{r\,|p - q|} \quad \text{if } y_i = 1. \qquad (16)$$

Hence, the centroid distance is

$$\Big\|\frac{\big(\boldsymbol{\mu}(p - q) + \boldsymbol{\nu}(q - p)\big)(1 + o(1))}{r\,|p - q|\,(1 + o(1))}\Big\| \qquad (17)$$

$$= \Big\|\frac{(\boldsymbol{\mu} - \boldsymbol{\nu})(p - q)(1 + o(1))}{r\,|p - q|\,(1 + o(1))}\Big\| \qquad (18)$$

$$= \frac{\|\boldsymbol{\mu} - \boldsymbol{\nu}\|}{r}(1 + o(1)) = 2(1 + o(1)). \qquad (19)$$

Now we turn to the standard deviation part. Specifically, we will characterize the following quantity (again, recall that we assume $y_i = 0$ w.l.o.g.):

$$\Big\|Z_i - \frac{\boldsymbol{\mu} p + \boldsymbol{\nu} q}{r\,|p - q|}\Big\|. \qquad (20)$$

Recall that the latter term is the centroid for nodes with label 0. Hence, by characterizing this quantity, we can understand the deviation of the PIFA embedding around its centroid. From the analysis above, we know that, given $A$, we have

$$\Big\|Z_i - \frac{\boldsymbol{\mu} p + \boldsymbol{\nu} q}{r\,|p - q|}\Big\| \leq \Big\|\frac{N_{i0}\boldsymbol{\mu} + N_{i1}\boldsymbol{\nu}}{\|\mathbf{v}_i\|} - \frac{\boldsymbol{\mu} p + \boldsymbol{\nu} q}{r\,|p - q|}\Big\| + \Big\|\frac{\sum_{j=1}^{n} A_{ij}\mathbf{G}_j}{\|\mathbf{v}_i\|}\Big\|. \qquad (21)$$

For the terms $\|\mathbf{v}_i\|$, $N_{i0}$, $N_{i1}$ and $\|\sum_{j=1}^{n} A_{ij}\mathbf{G}_j\|$, we have already derived the corresponding concentration results above. Plugging in those results, the first term becomes

$$\Big\|\frac{N_{i0}\boldsymbol{\mu} + N_{i1}\boldsymbol{\nu}}{\|\mathbf{v}_i\|} - \frac{\boldsymbol{\mu} p + \boldsymbol{\nu} q}{r\,|p - q|}\Big\| \qquad (22)$$

$$= \Big\|\frac{\frac{n}{2}\big(\boldsymbol{\mu} p(1 + o(1)) + \boldsymbol{\nu} q(1 + o(1))\big)}{\frac{n}{2}\,r\,|p - q|\,(1 + o(1))} - \frac{\boldsymbol{\mu} p + \boldsymbol{\nu} q}{r\,|p - q|}\Big\| \qquad (23)$$

$$= \Big\|\frac{\boldsymbol{\mu} p\,o(1) + \boldsymbol{\nu} q\,o(1)}{r\,|p - q|\,(1 + o(1))}\Big\| = \frac{r\,|p - q|\,o(1)}{r\,|p - q|\,(1 + o(1))} = o(1). \qquad (24)$$

The second term becomes

$$\Big\|\frac{\sum_{j=1}^{n} A_{ij}\mathbf{G}_j}{\|\mathbf{v}_i\|}\Big\| = \frac{O(\sigma\sqrt{n \log n})}{\frac{n}{2}\,r\,|p - q|\,(1 + o(1))} \qquad (25)$$

$$= \frac{O(\sigma\sqrt{n \log n})}{\omega(r\sqrt{n \log n})} = o(1), \qquad (26)$$

where the last equality follows from our Assumption 4.2, by which $r$ and $\sigma$ are constants. Together, we have shown that the deviation of the nodes from their centroid is of scale $o(1)$. A similar result holds for the case $y_i = 1$. Hence, the standard deviation of $Z_i$ is $o(1)$ on the high-probability event $B_1 \cap B_2$. Consequently, the effect size for the PIFA embedding is $\omega(1)$ with probability at least $1 - O(\frac{1}{n^{c_2}})$ for some constant $c_2 > 0$, which implies that the PIFA embedding gives better-clustered node representations. Thus, it is preferable to use the PIFA embedding, and we complete the proof.
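As a quick numerical sanity check on Theorem 4.4 (ours, not part of the paper's experiments), one can estimate both effect sizes by simulation, assuming the `sample_csbm` sketch from Section 4 is in scope; the parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def effect_size(F, y):
    """Empirical analogue of Definition 4.3 for embeddings F and labels y."""
    F0, F1 = F[y == 0], F[y == 1]
    gap = np.linalg.norm(F0.mean(axis=0) - F1.mean(axis=0))
    spread = (np.sqrt((np.linalg.norm(F0 - F0.mean(axis=0), axis=1) ** 2).mean())
              + np.sqrt((np.linalg.norm(F1 - F1.mean(axis=0), axis=1) ** 2).mean()))
    return gap / spread

X, A, y = sample_csbm(n=4000, d=50, r=1.0, sigma=1.0, p=0.15, q=0.05, rng=0)

V = A @ X                                          # v_i = [AX]_i, Eq. (7)
Z = V / np.linalg.norm(V, axis=1, keepdims=True)   # PIFA embedding

print("effect size of X:", effect_size(X, y))  # close to r / sigma = 1
print("effect size of Z:", effect_size(Z, y))  # much larger, matching omega(1)
```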
C PROOF OF PROPOSITION 4.5

Note that under the cSBM setting and Assumption 4.2, the Hamming distance between $A_i$ and $A_j$ for $y_i = y_j$ is a Poisson-Binomial random variable. More precisely, note that

$$\forall k \in [n] \setminus \{i, j\} \text{ s.t. } y_k = y_i: \quad |A_{ik} - A_{jk}| \sim \mathrm{Ber}\big(2p(1 - p)\big), \qquad (27)$$

$$\forall k \in [n] \setminus \{i, j\} \text{ s.t. } y_k \neq y_i: \quad |A_{ik} - A_{jk}| \sim \mathrm{Ber}\big(2q(1 - q)\big), \qquad (28)$$

where all of these random variables are independent. Hence, we have

$$\mathrm{Hamming}(A_i, A_j) \sim \mathrm{Bin}\Big(\frac{n}{2} - 2,\ 2p(1 - p)\Big) + \mathrm{Bin}\Big(\frac{n}{2},\ 2q(1 - q)\Big), \qquad (29)$$

where $\mathrm{Hamming}(A_i, A_j)$ denotes the Hamming distance between $A_i$ and $A_j$, and $\mathrm{Bin}(a, b)$ stands for a Binomial random variable with $a$ trials and success probability $b$. By leveraging Lemma B.1, we know that for a random variable $X \sim \mathrm{Bin}(\frac{n}{2}, 2q(1 - q))$, we have

$$P\big(|X - nq(1 - q)| \geq t\big) \leq 2\exp\Big(-\frac{4t^2}{n}\Big). \qquad (30)$$

Note that the function $q(1 - q)$ is monotonically increasing for $q \in [0, \frac{1}{2}]$ and attains its maximum at $q = \frac{1}{2}$. Combined with Assumption 4.2, we know that $nq(1 - q) = \omega(\sqrt{n \log n})$. Hence, by choosing $t = \sqrt{\frac{c n \log n}{2}}$ for some constant $c > 0$, with probability at least $1 - O(\frac{1}{n^c})$ we have

$$X \geq nq(1 - q) - \sqrt{\frac{c n \log n}{2}} = \omega(\sqrt{n \log n}). \qquad (31)$$

Finally, note that $\mathrm{Bin}(\frac{n}{2} - 2, 2p(1 - p)) \geq 0$ with probability 1. Hence, showing that $X \sim \mathrm{Bin}(\frac{n}{2}, 2q(1 - q))$ is of order $\omega(\sqrt{n \log n})$ with probability at least $1 - O(\frac{1}{n^c})$ implies that the Hamming distance between $A_i$ and $A_j$ is of order $\omega(\sqrt{n \log n})$ with at least the same probability. This completes the proof.

D PROOF OF LEMMA B.2

By the Chernoff bound, we have

$$P(S_n \geq a) \leq \min_{t > 0} \exp(-ta)\,\mathbb{E}\exp(t S_n). \qquad (32)$$

By the i.i.d. assumption, we have

$$\min_{t > 0} \exp(-ta)\,\mathbb{E}\exp(t S_n) = \min_{t > 0} \exp(-ta)\,\big(\mathbb{E}\exp(t X_1)\big)^n. \qquad (33)$$

Note that the moment generating function (MGF) of a zero-mean Gaussian random variable with standard deviation $\sigma$ is $\exp(\frac{\sigma^2 t^2}{2})$. Hence, we have

$$\min_{t > 0} \exp(-ta)\,\big(\mathbb{E}\exp(t X_1)\big)^n = \min_{t > 0} \exp\Big(\frac{1}{2} n \sigma^2 t^2 - ta\Big). \qquad (34)$$

Minimizing the upper bound with respect to $t$, we choose $t = \frac{a}{n\sigma^2}$. Plugging in this choice of $t$, we have

$$P(S_n \geq a) \leq \exp\Big(\frac{-a^2}{2 n \sigma^2}\Big). \qquad (35)$$

Finally, by choosing $a = c\sigma\sqrt{n \log n}$ for some $c > 0$, applying the same bound on the other side, and using the union bound, we complete the proof.

E EXPERIMENTAL DETAILS

E.1 DATASETS

In this work, we choose node classification as the downstream task of focus. We conduct experiments on three large-scale datasets, ogbn-arxiv, ogbn-products and ogbn-papers100M, as these are the only three datasets with raw text available in OGB. Ogbn-arxiv and ogbn-papers100M (Wang et al., 2020; Hu et al., 2020a) are citation networks where each node represents a paper. The corresponding input raw text consists of titles and abstracts, and the node labels are the primary categories of the papers. Ogbn-products (Chiang et al., 2019; Hu et al., 2020a) is an Amazon co-purchase network where each node represents a product. The corresponding input raw text consists of the titles and descriptions of products. The node labels are the categories of the products.

E.2 HYPER-PARAMETERS OF SSL GNN MODULES

The implementations of (V)GAE and DGI are taken from the PyTorch Geometric library (Fey & Lenssen, 2019). Note that due to GPU memory constraints, we adopt GraphSAINT (Zeng et al., 2020) sampling for (V)GAE and DGI on ogbn-products. GraphZoom is taken directly from the official repository (https://github.com/cornell-zhang/GraphZoom). For all downstream GNNs in the experiments, we average the results over three independent runs. The only exception is OGB-feat + downstream GNNs, where we directly take the results from the OGB leaderboard. Note that we also attempted to reproduce the OGB-feat + downstream GNNs experiments, obtaining accuracies similar to the ones reported on the leaderboard; to prevent confusion, we decided to take the results from the OGB leaderboard for comparison. For the BERT model used throughout the paper, we use “bert-base-uncased” downloaded from HuggingFace (https://huggingface.co/bert-base-uncased). For the methods used in the SSL GNNs module, we try our best to follow their default settings. We slightly optimize some hyperparameters (such as the learning rate and the maximum number of epochs) to ensure that the training process converges. To ensure a fair comparison, we fix the output dimension of all SSL GNNs to 768, which matches that of bert-base-uncased and the XR-Transformer.

E.3 HYPER-PARAMETERS OF GIANT-XRT AND BERT+LP

Pre-training of GIANT-XRT. In Table 4, we outline the pre-training hyper-parameters of GIANT-XRT for all three OGB benchmark datasets.
We mostly follow the conventions of the XR-Transformer (Zhang et al., 2021a) when setting the hyper-parameters. For the ogbn-arxiv and ogbn-products datasets, we use the full graph adjacency matrix as the XMC instance-to-label matrix $Y \in \{0, 1\}^{n \times n}$, where $n$ is the number of nodes in the graph. For ogbn-papers100M, we sub-sample the 50M (out of 111M) most important nodes based on the PageRank scores of the bipartite graph (He et al., 2016). The resulting XMC instance-to-label matrix $Y$ has 50.0M rows, 49.9M columns, and 2.5B nonzero entries (edges). The PIFA node embedding for hierarchical clustering is constructed by aggregating the TFIDF features of a node's neighbors. Specifically, the PIFA node embedding is a 4.2M-dimensional sparse vector, consisting of 1M word unigram, 3M word bigram, and 200K character trigram features. Finally, for ogbn-arxiv and ogbn-products, we use TFN+MAN negative sampling to pre-train the XR-Transformer, where the model-aware negatives (MAN) are selected from the model's top-20 predictions. Because of the extreme scale of ogbn-papers100M, we consider TFN only, to avoid excessive CPU memory consumption on the GPU machine.

Pre-training of BERT+LP. To verify the effectiveness of the multi-scale neighborhood prediction loss, we consider learning a Siamese BERT encoder with the alternative link prediction loss for pre-training, hence the name BERT+LP. We implement the BERT+LP baseline with the triplet loss (Balntas et al., 2016), as we empirically observed that it performs better than other loss functions for the link prediction task (a minimal sketch of this objective is given at the end of this appendix). We sample one positive pair and one negative pair per node as training samples for each epoch, and train the model until the loss converges.

E.4 HYPER-PARAMETERS OF DOWNSTREAM METHODS

For the downstream models, we optimize the learning rate within {0.01, 0.001} for all models. For MLP, GraphSAGE and GraphSAINT, we optimize the number of layers within {1, 3}. For RevGAT, we keep the default hyperparameter choices. For SAGN, we also optimize the number of layers within {1, 3}. For GAMLP, we directly adopt the settings from the official implementation. Note that all hyperparameter tuning applies to all pre-trained node features ($X_{\mathrm{plain}}$, $X_{\mathrm{SSL\text{-}GNN}}$ and $X_{\mathrm{GIANT}}$).

E.5 COMPUTATIONAL ENVIRONMENT

All experiments are conducted on an AWS p3dn.24xlarge instance, consisting of 96 Intel Xeon CPUs with 768 GB of RAM and 8 Nvidia V100 GPUs with 32 GB of memory each.

E.6 ILLUSTRATION OF THE IMPROVEMENT OF GIANT-XRT

See Figure 5.
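To complement the description of BERT+LP in Appendix E.3, here is a minimal PyTorch sketch of a triplet link-prediction objective. It is our own generic rendering under a stand-in encoder and a default margin, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

# Stand-in encoder over pre-computed features; in BERT+LP this is a Siamese
# BERT applied to the raw text of the anchor/positive/negative nodes.
encoder = torch.nn.Linear(16, 8)

# Per epoch: for each anchor node, one positive pair (an actual neighbor)
# and one negative pair (a non-neighbor), as described in Appendix E.3.
anchor, positive, negative = (torch.randn(4, 16) for _ in range(3))

# Triplet loss (Balntas et al., 2016): pull the positive toward the anchor
# and push the negative away, up to the margin.
loss = F.triplet_margin_loss(
    encoder(anchor), encoder(positive), encoder(negative), margin=1.0)
loss.backward()
print(float(loss))
```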
1. What is the focus and contribution of the paper on enhancing language models with graph information? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its novelty and impact? 3. Do you have any questions or concerns about the experimental results and their interpretation? 4. Are there any issues with the clarity and completeness of the figures and captions? 5. What is the reviewer trying to understand or validate regarding the split ratio, class number, and pre-training dataset for the experiments?
Summary Of The Paper Review
Summary Of The Paper This paper presents a new self-supervised learning framework to enhance language models based on graph information. Review Strengths: The proposed method is simple and reasonable. The experimental studies are extensive. Weaknesses: (1) The novelty is limited. In my view, masked language modeling aims to predict the masked tokens given the context, and in this paper, neighborhood prediction aims to predict the relation given the context. Relation-prediction-based objectives have been widely applied in knowledge/entity-oriented pre-training, such as K-ADAPTER (Wang et al.) and ERICA (Qin et al.). In addition, the impact of the proposed method could be rather minor given the experimental results. For example, in Table 1, on the ogbn-arxiv dataset, based on GIANT-XRT, GraphSAGE, RevGAT, and RevGAT+SelfKD attain accuracies of 74.59%, 75.96%, and 76.12%, respectively, while based on TFIDF+NO PIFA they attain accuracies of 74.09%, 75.56%, and 75.85%. With such small gaps, it would be important to know whether the difference is actually statistically significant. (2) Some details in Figure 1 are not clear, e.g., the meanings of A and Y, and the full term for XMC. To make Figure 1 self-contained, its caption should provide the necessary information. (3) The ideas are verified on three node classification datasets, but some details are missing. “Split ratio” in Table 1 is confusing: I cannot figure out what the train/test/development ratios are. How many classes are there for each dataset? It would be better to show some real instances. (4) What dataset is used for pre-training GIANT?
The XMC problem is of importance in many real-world applications (Jiang et al., 2021; Ye et al., 2020): For example, in E-commerce dynamic search advertising, XMC arises when trying to find a “good” mapping from items to bid queries on the market (Prabhu et al., 2018; Prabhu & Varma, 2014). In open-domain question answering, XMC problems arise when trying to map ques- tions to “evidence” passages containing the answers (Chang et al., 2020a; Lee et al., 2019). Many methods for the XMC problem leverage hierarchical clustering approaches for labels (Prabhu et al., 2018; You et al., 2019). This organizational structure allows one to handle potentially enormous numbers of labels, such as used by PECOS (Yu et al., 2022). The key is to take advantage of the correlations among labels within the hierarchical clustering. In our approach, we observe that the multi-labels correspond to neighborhoods of nodes in the given graph. Neighborhoods have to be predicted using the textual information in order to best match the a priori given graph topology. We use the state-of-the-art XR-Transformer (Zhang et al., 2021a) method for solving the XMC problem to achieve this goal. The high-level idea is to first cluster the output labels, and then learn the instance-to-cluster “matchers” (please refer to Figure 2). Note that many other methods have used PECOS (including XR-Transformers) for solving large-scale real-world learning problems (Etter et al., 2022; Liu et al., 2021; Chang et al., 2020b; Baharav et al., 2021; Chang et al., 2021; Yadav et al., 2021; Sen et al., 2021), but not in the context of self-supervised numerical feature extraction as done in our work. GNNs with raw text data. It is conceptually possible to jointly train BERT and GNNs in an end-to-end fashion, which could potentially resolve the issue of being graph agnostic in the standard pipeline. However, the excessive model complexity of BERT makes such a combination practically prohibitive due to GPU memory limitations. Furthermore, it is nontrivial to train this combination of methods with arbitrary mini-batch sizes (Chiang et al., 2019; Zeng et al., 2020). In contrast, the XRTransformer architecture naturally supports mini-batch training and scales well (Jiang et al., 2021). Hence, our GIANT method uses XR-Transformers instead of combinations of BERT and GNNs. To the best of our knowledge, we are aware of only one prior work that uses raw text inputs for node classification problem (Zhang et al., 2020), but it still follows the standard pipeline described in Figure 1. Some other works apply GNNs on texts and for document classification, where the actual graphs are constructed based on the raw text. This is clearly not the focus of this work (Yao et al., 2019; Huang et al., 2019; Zhang & Zhang, 2020; Liu et al., 2020). 3 METHODS Our goal is to resolve the issue of graph-agnostic numerical feature extraction for standard GNN learning pipelines. Although our interest lies in raw text data, as already pointed out, the proposed methodology can be easily extended to account for other types of raw data and corresponding feature extraction methods. To this end, consider a large-scale graph G with node set V = {1, 2, . . . , n} and adjacency matrix A ∈ {0, 1}n×n. Each node i is associated with some raw text, which we denote by Ti. The language model is treated as an encoder Φ that maps the raw text Ti to numerical node feature Xi ∈ Rd. Key to our SSL approach is the task of neighborhood prediction, which aims to determine the neighborhood Ai from Ti. 
The neighborhood vector Ai can be viewed as a target multi-label yi for node i, where we have L = n labels. Hence, neighborhood prediction represents an instance of the XMC problem, which we solve by leveraging XR-Transformers. The trained encoder in an XR-Transformer generates informative numerical node features, which can then be used further in downstream tasks, the SSL GNNs module and for GNN pre-training. Detailed description regarding the use of XR-Transformers for Neighborhood Prediction. The most straightforward instance of the XMC problem is the one-versus-all (OVA) model, which can be formalized as f(T, l) = wTl Φ(T ); l ∈ [L], where W = [w1, . . . ,wL] ∈ Rd×L are weight vectors and Φ : D 7→ Rd is the encoder that maps T to a d-dimensional feature vector. OVA can be a deterministic model such as bag-of-words, the Term Frequency-Inverse Document Frequency (TFIDF) model or some other model with learnable parameters, such as XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019b). We choose to work with pre-trained BERT (Devlin et al., 2019). Also, one can change Φ according to the type of input data format (i.e., CNNs for images). Despite their simple formulation, it is known Chang et al. (2020b) that fine-tuning transformer models directly on large output spaces can be prohibitively complex. For neighborhood prediction, L = n, and the graphs encountered may have millions of nodes. Hence, we need a more scalable approach to training Transformers. As part of an XR-Transformer, one builds a hierarchical label clustering tree based on the label features Z ∈ RL×d; Z is based on Positive Instance Feature Aggregation (PIFA): Zl = vl ‖vl‖ , where vl = ∑ i:yi,l=1 Ψ(Ti), ∀l ∈ [L]. (1) Note that for neighborhood prediction, the above expression represents exactly one step of a graph convolution with node features Ψ(Ti), followed by a norm normalization; here, Ψ(·) denotes some text vectorizer such as bag-of-words or TFIDF. In the next step, XR-Transformer uses balanced kmeans to recursively partition label sets and generate the hierarchical label cluster tree in a top-down fashion. This step corresponds to Step 1 in Figure 2. Note that at each intermediate level, it learns a matcher to find the most relevant clusters, as illustrated in Step 2 of Figure 2. By leveraging the label hierarchy defined by the cluster tree, the XR-Transformer can train the model on multi-resolution objectives. Multi-resolution learning has been used in many different contexts, including computer vision (Lai et al., 2017; Karras et al., 2018; 2019; Pedersoli et al., 2015), meta-learning (Liu et al., 2019a), but has only recently been applied to the XMC problem as part of PECOS and XRTransformers. For neighborhood prediction, multi-resolution amounts to generating a hierarchy of coarse-to-fine views of neighborhoods. The only line of work in self-supervised graph learning that somewhat resembles this approach is GraphZoom (Deng et al., 2020), in so far that it applies SSL on coarsened graphs. Nevertheless, the way in which we perform coarsening is substantially different; furthermore, GraphZoom still falls into the standard GNN pipeline category depicted in Figure 1. 4 THEORETICAL ANALYSIS We also provide theoretical evidence in support of using each component of our proposed learning framework. First, we show that self-supervised neighborhood prediction is better suited to the task at hand than standard link prediction. 
More specifically, we show that the standard design criteria in self-supervised link prediction tasks are biased towards graph homophily assumptions (McPherson et al., 2001; Klicpera et al., 2018). In contrast, our self-supervised neighborhood prediction model works for both homophilic and heterophilic graphs. This universality property is crucial for the robustness of graph learning methods, especially in relationship to GNNs (Chien et al., 2020). Second, we demonstrate the benefits of using PIFA embeddings and clustering in XR-Transformers for graph-guided numerical feature extraction. Our analysis is based on the contextual stochastic block model (cSBM) (Deshpande et al., 2018), which was also used in Chien et al. (2020) for testing the GPR-GNN framework and in Baranwal et al. (2021) for establishing the utility of graph convolutions for node classification. Link versus neighborhood prediction. One standard SSL task on graphs is link prediction, which aims to find an entry in the adjacency matrix according to P (Aij = 1) ∝ Similarity (Φ(Ti),Φ(Tj)) . (2) Here, the function Similarity(x,y) is a measure of similarity of two vectors, x and y. The most frequently used choice for the function is the inner product of two input vectors followed by a sigmoid function. However, this type of design implicitly relies on the homophily assumption: Nodes with similar node representations are more likely to have links. It has been shown in Pei et al. (2020); Chien et al. (2020); Zhu et al. (2020); Lim et al. (2021) that there are real-world graph datasets that violate the homophily assumption and on which many GNN architectures fail. A simple example that shows how SSL link prediction may fail is presented in Figure 3. Nodes of the same color share the same features (these are for simplicity represented as numerical values). Clearly, no matter what encoder Φ we have, the similarity of node features for nodes of the same color is the highest. However, there is no edge between nodes of the same color, hence the standard methodology of link prediction based on homophily assumption fails to work for this simple heterophilous graph. In order to fix this issue, we use a different modeling assumption, stated below. Assumption 4.1. Nodes with similar node features have similar “structural roles” in the graph. In our study, we equate “structure” with the 1-hop neighborhood of a node (i.e., the row of the adjacency matrix indexed by the underlying node). The above assumption is in alignment with our XMC problem assumptions, where nodes with a small perturbation in their raw text should be mapped to a similar multi-label. Our assumption is more general then the standard homophily assumption; it is also clear that there exists a perfect mapping from node features to their neighborhoods for the example in Figure 3. Hence, neighborhood prediction appears to be a more suitable SSL approach than SSL link prediction for graph-guided feature extraction. Analysis of key components in XR-Transformers. In the original XR-Transformer work (Zhang et al., 2021a), the authors argued that one needs to perform clustering of the multi-label space in order to resolve scarce training instances in XMC. They also empirically showed that directly finetuning language models on extremely large output spaces is prohibitive. Furthermore, they empirically established that constructing clusters based on PIFA embedding with TFIDF features gives the best performance. 
However, no theoretical evidence was given in support of this approach to solving the XMC problem. We next leverage recent advances in graph learning to analytically characterize the benefits of using XR-Transformers. Description of the cSBM. Using our Assumption 4.1, we analyze the case where the graph and node features are generated according to a cSBM (Deshpande et al., 2018) (see Figure 4). For simplicity, we use the most straightforward two-cluster cSBM. Let {yi}ni=1 ∈ {0, 1} be the labels of nodes in a graph. We denote the size of class j ∈ {0, 1} by Cj = |{i : yi = j,∀i ∈ [n]}|. We also assume that the classes are balanced, i.e., C0 = C1 = n2 . The node features {Xi}ni=1 ∈ Rd are independent d-dimensional Gaussian random vectors, such that Xi ∼ N( r√d1, σ2 d I) if yi = 0 and Xi ∼ N(− r√d1, σ2 d I) if yi = 1. The adjacency matrix of the cSBM is denoted by A, and is clearly symmetric. All edges are drawn according to independent Bernoulli random variables, so that Aij ∼ Ber(p) if yi = yj and Aij ∼ Ber(q) if yi 6= yj . Our analysis is inspired by Baranwal et al. (2021) and Li et al. (2019), albeit their definitions of graph convolutions and random walks differ from those in PIFA. For our subsequent analysis, we also make use of the following standard assumption and define the notion of effect size. Assumption 4.2. p, q = ω( √ logn n ). |p−q| p+q = Θ(1). d = o(n). 0 < r, σ = Θ(1). Note that Baranwal et al. (2021) also poses constraints on p, q, |p−q|p+q and d. In contrast, we do not require p − q > 0 to hold (Baranwal et al., 2021; Li et al., 2019) so that we can address graph structures that are either homophilic or heterophilic. Due to the difference between PIFA and standard graph convolution, we require p, q to be larger compared to the corresponding values used in Baranwal et al. (2021). Definition 4.3. For cSBM, the effect size of the two centroids of the node features X of the two different classes is defined as ‖EXi − EXj‖√ E‖Xi − EXi‖2 + √ E‖Xj − EXj‖2 , where yi 6= yj . (3) In the standard definition of effect size, the mean difference is divided by the standard deviation of a class, as the latter is assumed to be the same for both classes. We use the sum of both standard deviations to prevent any ambiguity in our definition. Note that for the case of isotropic Gaussian distributions, the larger the effect size the larger the separation of the two classes. Theoretical results. We are now ready to state our main theoretical result, which asserts that the effect size of centroids for PIFA embeddings Z is asymptotically larger than that obtained from the original node features. Our Theorem 4.4 provides strong evidence that using PIFA in XRTransformers offers improved clustering results and consequently, better feature quality. Theorem 4.4. For the cSBM and under Assumption 4.2, the effect size of the two centroids of the node features X of the two different classes is rσ = Θ(1). Moreover, the effect size of the two centroids of the PIFA embedding Z of the two different classes, conditioned on an event of probability at least 1−O( 1nc ) for some constant c > 0, is ω(1). We see that although two nodes i, j from the same class have the same neighborhood vectors in expectation, EAi = EAj , their Hamming distance can be large in practice. This finding is formally characterized in Proposition 4.5. Proposition 4.5. For the cSBM and under Assumption 4.2, the Hamming distance between Ai and Aj with yi = yj is ω( √ n log n) with probability at least 1−O( 1nc ), for some c > 0. 
Hence, directly using neighborhood vectors for self-supervision is not advisable. Our result also agrees with findings from the XMC literature (Chang et al., 2020b). It is also intuitively clear that averaging neighborhood vectors from the same class can reduce the variance, which is approximately performed by clustering based on node representations (in our case, via a PIFA embedding). This result establishes the importance of clustering within the XR-Transformer approach and for the SSL neighborhood prediction task. 5 EXPERIMENTS Evaluation Datasets. We consider node classification as our downstream task and evaluate GIANT on three large-scale OGB datasets (Hu et al., 2020a) with available raw text: ogbn-arxiv, ogbn-products, and ogbn-papers100M. The parameters of these datasets are given in Table 1 and detailed descriptions are available in the Appendix E.1. Following the OGB benchmarking protocol, we report the average test accuracy and the corresponding standard deviation by repeating 3 runs of each downstream GNN model. Evaluation Protocol. We refer to our actual implementation as GIANT-XRT since the multi-scale neighborhood prediction task in the proposed GIANT framework is solved by an XR-Transformer. In the pre-training stage, GIANT-XRT learns a raw text encoder by optimizing the self-supervised neighborhood prediction objective, and generates a fine-tuned node embedding for later stages. For the node classification downstream tasks, we input the node embeddings from GIANT-XRT into several different GNN models. One is the multi-layer perceptron (MLP), which does not use graph information. Two other methods are GraphSAGE (Hamilton et al., 2017), which we applied to ogbn-arxiv, and GraphSAINT (Zeng et al., 2020), which we used for ogbn-products as it allows for mini-batch training. Due to scalability issues, we used Simple Graph Convolution (SGC) (Wu et al., 2019) for ogbn-papers100M. We also tested the state-of-the-art GNN for each dataset. At the time we conducted the main experiments (07/01/2021), the top-ranked model for ogbn-arxiv was RevGAT2 (Li et al., 2021) and the top-ranked model for ogbn-products was SAGN3 (Sun & Wu, 2021). When we conducted the experiment on ogbn-papers100M (09/10/2021), the topranked model for ogbn-papers100M was GAMLP4 (Zhang et al., 2021b) (Since then, the highest reported accuracy was improved by 0.05% for ogbn-arxiv and 0.31% for ogbn-products; both of these improvements fall short compared to those offered by GIANT). For all evaluations, we use publicly available implementations of the GNNs. For RevGAT, we report the performance of the model with and without self knowledge distillation; the former setting is henceforth referred to as +SelfKD. For SAGN, we report results with the self-label-enhanced (SLE) feature, and denote them by SAGN+SLE. For GAMLP, we report results with and without Reliable Label Utilization (RLU); the former is denoted as GAMLP+RLU. SSL GNN Competing Methods. We compare GIANT-XRT to methods that rely on graphagnostic feature inputs and use node embeddings generated by various SSL GNNs modules. The graph-agnostic features are either default features available from the OGB datasets (denoted by OGB-feat) or obtained from plain BERT embeddings (without fine-tuning) generated from raw text (denoted by BERT?). For OGB-feat combined with downstream GNN methods, we report the results from the OGB leaderboard (and denote them by †). 
For the SSL GNN modules, we test three frequently-used methods: (Variational) Graph Auto-Encoders (Kipf & Welling, 2016) (denoted by (V)GAE); Deep Graph Infomax (Velickovic et al., 2019) (denoted by DGI); and GraphZoom (Deng et al., 2020) (denoted by GZ). The hyper-parameters of the SSL GNN modules are given in Appendix E.2. For all reported results, we use Xplain, XSSLGNN and XGIANT (c.f. Figure 1) to denote the framework the method belongs to. Note that XGIANT refers to our approach. The implementation details and hyper-parameters of GIANT-XRT can be found in Appendix E.3.

5.1 MAIN RESULTS

The results for the ogbn-arxiv and ogbn-products datasets are listed in Table 2. Our GIANT-XRT approach gives the best results for both datasets and all downstream models. It improves the accuracy of the top-ranked OGB leaderboard models by a large margin: 1.86% on ogbn-arxiv and 1.19% on ogbn-products. Using graph-agnostic BERT embeddings does not necessarily lead to good results (see the first two rows in Table 2). This shows that the improvement of our method is not merely due to the use of a more powerful language model, and establishes the need for self-supervision governed by graph information. Another observation is that among the possible combinations of a standard GNN pipeline with an SSL module, BERT+(V)GAE offers the best performance. This can be attributed to exploiting the correlation between numerical node features and the graph structure, albeit in a two-stage approach within the standard GNN pipeline. The most important finding is that using node features generated by GIANT-XRT leads to consistent and significant improvements in the accuracy of all tested methods when compared to the standard GNN pipeline: In particular, on ogbn-arxiv, the improvement equals 17.58% for MLP and 3.1% for GraphSAGE; on ogbn-products, the improvement equals 18.76% for MLP and 5.32% for GraphSAINT. Figure 5 in Appendix E.6 further illustrates the gain obtained by GIANT-XRT over SOTA methods on the OGB leaderboard. Another important observation is that GIANT-XRT is highly scalable, which can be clearly observed on the ogbn-papers100M dataset, for which the results are shown in Table 3. In particular, GIANT-XRT improves the accuracy of the top-ranked model, GAMLP+RLU, by a margin of 1.42%. Furthermore, GIANT-XRT again consistently improves all tested downstream methods on the ogbn-papers100M dataset. As a final remark, we find, surprisingly, that combining MLP with GIANT-XRT greatly improves the performance of the former learner on all datasets. It becomes only slightly worse than GIANT-XRT+GNNs and can even outperform the GraphSAGE and GraphSAINT methods with default OGB features on the ogbn-arxiv and ogbn-products datasets. This is yet another positive property of GIANT, since MLPs are low-complexity and more easily implementable than other GNNs.

5.2 ABLATION STUDY

We also conduct an ablation study of the GIANT framework to determine the relevance of each module involved. The first step is to consider alternatives to the proposed multi-scale neighborhood prediction task: In this case, we fine-tune BERT with an SSL link prediction approach, which we for simplicity refer to as BERT+LP.
In addition, we examine how the PIFA embedding affects the performance of GIANT-XRT and how more informative node features (TFIDF) can improve the clustering steps. First, recall that in GIANT-XRT, we use TFIDF features from the raw text to construct the PIFA embeddings. We subsequently use the term "NO TFIDF" to indicate that we replaced the TFIDF feature matrix by an identity matrix, which contains no raw text information. The term "TFIDF+NO PIFA" refers to the setting where only raw text information (node attributes) is used to perform the hierarchical clustering. Similarly, "NO TFIDF+PIFA" indicates that we only use normalized neighborhood vectors (graph structure) to construct the hierarchical clustering. If both node attributes and graph structure are ignored, the result is a random clustering. Nevertheless, we keep the same cluster sizes at each level of the hierarchical clustering. The results of the ablation study are listed under the rows indexed by XGIANT in Table 2 for the ogbn-arxiv and ogbn-products datasets. They once again confirm that GIANT-XRT consistently outperforms the other tested methods. For BERT+LP, we find that it has better performance on ogbn-arxiv compared to that of the standard GNN pipeline but worse performance on ogbn-products. This shows that using link prediction to fine-tune BERT in a self-supervised manner is not robust in general, and further strengthens the case for using neighborhood prediction instead of link prediction. With respect to the ablation study of GIANT-XRT, we see that NO TFIDF+NO PIFA indeed gives the worst results. Using node attributes (TFIDF features) or graph information (PIFA) to construct the hierarchical clustering in GIANT-XRT leads to the performance improvements seen in Table 2. Nevertheless, combining both, as done in GIANT-XRT, gives the best results. Moreover, one can observe that using PIFA always gives better results than not using it. This aligns with our theoretical analysis, which shows that PIFA embeddings lead to better hierarchical clusterings.

ACKNOWLEDGEMENT

The authors thank Amazon for its support and the Amazon Post-Internship Fellowship. Cho-Jui Hsieh is partially supported by NSF under IIS-2008173 and IIS-2048280. This work was funded in part by the NSF grant 1956384.

6 ETHICS STATEMENT

We are not aware of any potential ethical issues regarding our work.

7 REPRODUCIBILITY STATEMENT

We provide our code in the supplementary material along with an easy-to-follow description and package dependencies for reproducibility. Our experimental setting is stated in Section 5, and details pertaining to hyperparameters and the computational environment are described in the Appendix. All tested methods are integrated in our code: https://github.com/amzn/pecos/tree/mainline/examples/giant-xrt.

A CONCLUSIONS

We introduced a novel self-supervised learning framework for graph-guided numerical node feature extraction from raw data, and evaluated it within multiple GNN pipelines. Our method, termed GIANT, for the first time successfully resolves the issue of graph-agnostic numerical feature extraction. We also described a new SSL task, neighborhood prediction, established a connection between the task and the XMC problem, and solved it using XR-Transformers. We also examined the theoretical properties of GIANT in order to evaluate the advantages of neighborhood prediction over standard link prediction, and to assess the benefits of using XR-Transformers.
Our extensive numerical experiments, which showed that GIANT consistently improves state-of-the-art GNN models, were supplemented with an ablation study that aligns with our theoretical analysis.

B PROOF OF THEOREM 4.4

Throughout the proof, we use μ = (r/√d)1 to denote the mean of the node features from class 0 and ν = −(r/√d)1 for class 1. We keep this notation to demonstrate that our setting for the means can easily be generalized. The choice of the high-probability events will become clear in the proof. The proof for the effect size of the centroids of the node features X follows directly from Definition 4.3. By plugging in the means and standard deviations, we have

2r/(2σ) = r/σ = Θ(1). (4)

The last equality is due to our assumption that r, σ > 0 are both constants. To prove the result for the effect size of the centroids of the PIFA embedding Z, we first need some standard concentration results for sums of Bernoulli and Gaussian random variables.

Lemma B.1 (Hoeffding's inequality (Hoeffding, 1994)). Let Sn = ∑_{i=1}^n Xi, where the Xi are i.i.d. Bernoulli random variables with parameter p. Then, for any t > 0, we have

P(|Sn − np| ≥ t) ≤ 2 exp(−2t²/n). (5)

Lemma B.2 (Concentration of a sum of Gaussians). Let Sn = ∑_{i=1}^n Xi, where the Xi are i.i.d. Gaussian random variables with zero mean and standard deviation σ. Then, for any constant c > 0, we have

P(|Sn| ≥ cσ√(n log n)) ≤ 2 exp(−(c²/2) log n). (6)

Now we are ready to prove our claim. Recall that the PIFA embedding Z is defined as

Zi = vi/‖vi‖, where vi = ∑_{j: Aij=1} Xj = [AX]i. (7)

We first focus on analyzing the vector vi. We denote Niy = |{j : yj = y ∧ Aij = 1, j ∈ [n]}| and Ni = Ni0 + Ni1. Without loss of generality, we assume yi = 0. The conditional expectation of vi is

E[vi | A] = Ni0 μ + Ni1 ν. (8)

Next, by leveraging Lemma B.1, we have

P(|Ni1 − nq/2| ≥ t) ≤ 2 exp(−4t²/n). (9)

By choosing t = c1√(n log n) for some constant c1 > 1, we know that with probability at least 1 − O(1/n^c1), Ni1 ∈ [nq/2 − c1√(n log n), nq/2 + c1√(n log n)]. By Assumption 4.2, we know that nq = ω(√(n log n)). Thus, we arrive at the conclusion that with probability at least 1 − O(1/n^c1), Ni1 = (nq/2)(1 + o(1)). Following the same analysis, we can prove that with probability at least 1 − O(1/n^c1), Ni0 = (np/2)(1 + o(1)). The only difference is that we are dealing with n/2 − 1 random variables in this case, as there are no self-loops. Nevertheless, n/2 − 1 = (n/2)(1 − o(1)), so the result is the same. Note that we need to apply a union bound over 2n error events (∀i ∈ [n], with 2 cases for Ni0 and Ni1, respectively). Together, we know that the error probability is upper bounded by O(1/n^c2) for a new constant c2 = c1 − 1 > 0. Hence, we have characterized the mean of vi on a high-probability event B1.

Next, we need to analyze its norm. By direct analysis, conditioned on A, we have

‖vi‖² =_d ∑_{k=1}^d (μk Ni0 + νk Ni1 + ∑_{j=1}^n Aij Gjk)², (10)

where the Gjk are i.i.d. Gaussian random variables with zero mean and standard deviation σ/√d, and =_d stands for equality in distribution. Then, by Lemma B.2, we know that with probability at least 1 − O(1/Ni^c3) for some constant c3 > 2 + c2,

|∑_{j=1}^n Aij Gjk| = O((σ/√d)√(Ni log Ni)). (11)

This is because, conditioned on A, we are summing over Ni Gaussian random variables. Recall that, conditioned on our high-probability event B1, Ni = (n/2)(p + q)(1 + o(1)) = ω(√(n log n)) ≤ n. Thus, we know that for some c2 > 0, with probability at least 1 − O(1/n^c2 + 1/n^c3) = 1 − O(1/n^c2), we have |∑_{j=1}^n Aij Gjk| = O((σ/√d)√(n log n)). Again, recall that both Ni0, Ni1 = ω(√(n log n)); thus, together we have

v_{ik}² = (n²/4)(μk p + νk q)²(1 + o(1)) (12)
⇒ ‖vi‖ = (n/2)√(∑_{k=1}^d (μk p + νk q)²)(1 + o(1)) (13)
= (n/2)|p − q| r (1 + o(1)). (14)

Again, we need to apply a union bound over nd error events, which results in an error probability of O(1/n^c2), since c3 > 2 + c2 and d = o(n) by Assumption 4.2. We denote the corresponding high-probability event by B2. Note that the same analysis applies to the case yi = 1, where the result for the norm is the same and the result for vi is obtained by swapping p and q. Combining all the results so far, we know that with probability at least 1 − O(1/n^c2) for some c2 > 0, the PIFA embedding Zi equals

Zi = (n/2)(μp + νq)(1 + o(1)) / ((n/2)|p − q| r (1 + o(1))) = (μp + νq)(1 + o(1)) / (|p − q| r) if yi = 0, (15)
Zi = (n/2)(μq + νp)(1 + o(1)) / ((n/2)|p − q| r (1 + o(1))) = (μq + νp)(1 + o(1)) / (|p − q| r) if yi = 1. (16)

Hence, the centroid distance is

‖(μ(p − q) + ν(q − p))(1 + o(1)) / (r|p − q|(1 + o(1)))‖ (17)
= ‖(μ − ν)(p − q)(1 + o(1)) / (r|p − q|(1 + o(1)))‖ (18)
= (‖μ − ν‖/r)(1 + o(1)) = 2(1 + o(1)). (19)

Now we turn to the standard deviation part. Specifically, we characterize the following quantity (again recalling that we assume yi = 0 w.l.o.g.):

‖Zi − (μp + νq)/(r|p − q|)‖. (20)

The latter term is the centroid of the nodes with label 0; hence, by characterizing this quantity, we can understand the deviation of the PIFA embedding around its centroid. From the analysis above, we know that given A, we have

‖Zi − (μp + νq)/(r|p − q|)‖ ≤ ‖(Ni0 μ + Ni1 ν)/‖vi‖ − (μp + νq)/(r|p − q|)‖ + ‖∑_{j=1}^n Aij Gj‖/‖vi‖. (21)

For the terms ‖vi‖, Ni0, Ni1 and ‖∑_{j=1}^n Aij Gj‖, we have already derived the corresponding concentration results above. Plugging in those results, the first term becomes

‖(Ni0 μ + Ni1 ν)/‖vi‖ − (μp + νq)/(r|p − q|)‖ (22)
= ‖(n/2)(μp(1 + o(1)) + νq(1 + o(1))) / ((n/2) r|p − q|(1 + o(1))) − (μp + νq)/(r|p − q|)‖ (23)
= ‖(μp·o(1) + νq·o(1)) / (r|p − q|(1 + o(1)))‖ = r|p − q|·o(1) / (r|p − q|(1 + o(1))) = o(1). (24)

The second term becomes

‖∑_{j=1}^n Aij Gj‖/‖vi‖ = O(σ√(n log n)) / ((n/2) r|p − q|(1 + o(1))) (25)
= O(σ√(n log n)) / ω(r√(n log n)) = o(1), (26)

where the last equality follows from Assumption 4.2 and the fact that r, σ are constants. Together, this shows that the deviation of the nodes from their centroid is of scale o(1). A similar result holds for the case yi = 1. Hence, we have shown that the standard deviation of Zi is o(1) on the high-probability event B1 ∩ B2, while the centroid distance is Θ(1). The effect size for the PIFA embedding is therefore ω(1) with probability at least 1 − O(1/n^c2) for some constant c2 > 0, which implies that the PIFA embedding gives a better-clustered node representation. Thus, it is preferable to use the PIFA embedding, and we complete the proof.

C PROOF OF PROPOSITION 4.5

Note that under the cSBM setting and Assumption 4.2, the Hamming distance between Ai and Aj for yi = yj is a Poisson-Binomial random variable. More precisely, note that

∀k ∈ [n] \ {i, j} s.t. yk = yi: |Aik − Ajk| ∼ Ber(2p(1 − p)), (27)
∀k ∈ [n] \ {i, j} s.t. yk ≠ yi: |Aik − Ajk| ∼ Ber(2q(1 − q)), (28)

where all of these random variables are independent. Hence, we have

Hamming(Ai, Aj) ∼ Bin(n/2 − 2, 2p(1 − p)) + Bin(n/2, 2q(1 − q)), (29)

where Hamming(Ai, Aj) denotes the Hamming distance between Ai and Aj, and Bin(a, b) stands for a Binomial random variable with a trials and success probability b. By leveraging Lemma B.1, we know that for a random variable X ∼ Bin(n/2, 2q(1 − q)), we have

P(|X − nq(1 − q)| ≥ t) ≤ 2 exp(−4t²/n). (30)
Note that the function q(1 − q) is monotonically increasing for q ∈ [0, 1/2] and attains its maximum at q = 1/2. Combined with Assumption 4.2, this implies that nq(1 − q) = ω(√(n log n)). Hence, by choosing t = √(cn log n)/2 for some constant c > 0, with probability at least 1 − O(1/n^c) we have

X ≥ nq(1 − q) − √(cn log n)/2 = ω(√(n log n)). (31)

Finally, note that with probability 1 we have Bin(n/2 − 2, 2p(1 − p)) ≥ 0. Hence, showing that X ∼ Bin(n/2, 2q(1 − q)) is of order ω(√(n log n)) with probability at least 1 − O(1/n^c) implies that the Hamming distance between Ai and Aj is of order ω(√(n log n)) with at least the same probability. Together, we complete the proof.

D PROOF OF LEMMA B.2

By the Chernoff bound, we have

P(Sn ≥ a) ≤ min_{t>0} exp(−ta) E exp(tSn). (32)

By the i.i.d. assumption, we have

min_{t>0} exp(−ta) E exp(tSn) = min_{t>0} exp(−ta)(E exp(tX1))^n. (33)

Note that the moment generating function (MGF) of a zero-mean Gaussian random variable with standard deviation σ is exp(σ²t²/2). Hence, we have

min_{t>0} exp(−ta)(E exp(tX1))^n = min_{t>0} exp((1/2)nσ²t² − ta). (34)

Minimizing the upper bound with respect to t, we choose t = a/(nσ²). Plugging in this choice of t, we obtain

P(Sn ≥ a) ≤ exp(−a²/(2nσ²)). (35)

Finally, by choosing a = cσ√(n log n) for some c > 0, applying the same bound to the other side, and taking a union bound, we complete the proof.

E EXPERIMENTAL DETAILS

E.1 DATASETS

In this work, we choose node classification as our downstream task. We conduct experiments on three large-scale datasets, ogbn-arxiv, ogbn-products and ogbn-papers100M, as these are the only three OGB datasets with raw text available. Ogbn-arxiv and ogbn-papers100M (Wang et al., 2020; Hu et al., 2020a) are citation networks where each node represents a paper. The corresponding input raw text consists of titles and abstracts, and the node labels are the primary categories of the papers. Ogbn-products (Chiang et al., 2019; Hu et al., 2020a) is an Amazon co-purchase network where each node represents a product. The corresponding input raw text consists of titles and descriptions of products, and the node labels are product categories.

E.2 HYPER-PARAMETERS OF SSL GNN MODULES

The implementations of (V)GAE and DGI are taken from the PyTorch Geometric library (Fey & Lenssen, 2019). Note that due to GPU memory constraints, we apply GraphSAINT (Zeng et al., 2020) sampling to (V)GAE and DGI for ogbn-products. GraphZoom is taken directly from the official repository5. For all downstream GNNs in the experiments, we average the results over three independent runs. The only exception is OGB-feat + downstream GNNs, where we directly take the results from the OGB leaderboard. Note that we also repeated the experiments for OGB-feat + downstream GNNs, and the accuracy is similar to the one reported on the leaderboard; to prevent confusion, we decided to report the OGB leaderboard results for comparison. For the BERT model used throughout the paper, we use "bert-base-uncased" downloaded from HuggingFace6. For the methods in the SSL GNNs module, we try our best to follow the default settings. We slightly optimize some hyperparameters (such as the learning rate and the maximum number of epochs) to ensure that the training process converges. To ensure a fair comparison, we fix the output dimension of all SSL GNNs to 768, which matches that of bert-base-uncased and the XR-Transformer.

5 https://github.com/cornell-zhang/GraphZoom
6 https://huggingface.co/bert-base-uncased

E.3 HYPER-PARAMETERS OF GIANT-XRT AND BERT+LP

Pre-training of GIANT-XRT. In Table 4, we outline the pre-training hyper-parameters of GIANT-XRT for all three OGB benchmark datasets.
We mostly follow the conventions of the XR-Transformer (Zhang et al., 2021a) when setting the hyper-parameters. For the ogbn-arxiv and ogbn-products datasets, we use the full graph adjacency matrix as the XMC instance-to-label matrix Y ∈ {0, 1}^{n×n}, where n is the number of nodes in the graph. For ogbn-papers100M, we sub-sample the 50M (out of 111M) most important nodes based on the PageRank scores of the bipartite graph (He et al., 2016). The resulting XMC instance-to-label matrix Y has 50.0M rows, 49.9M columns, and 2.5B edges. The PIFA node embedding for hierarchical clustering is constructed by aggregating the TF-IDF features of a node's neighbors; a schematic sketch of this construction is given at the end of this appendix. Specifically, the PIFA node embedding is a 4.2M-dimensional sparse vector, consisting of 1M word-unigram, 3M word-bigram, and 200K character-trigram features. Finally, for ogbn-arxiv and ogbn-products, we use TFN+MAN negative sampling to pre-train the XR-Transformer, where the model-aware negatives (MAN) are selected from the model's top-20 predictions. Because of the extreme scale of ogbn-papers100M, we consider TFN only, to avoid excessive CPU memory consumption on the GPU machine.

Pre-training of BERT+LP. To verify the effectiveness of the multi-scale neighborhood prediction loss, we consider learning a Siamese BERT encoder with the alternative link prediction loss for pre-training, hence the name BERT+LP. We implement the BERT+LP baseline with the triplet loss (Balntas et al., 2016), as we empirically observed that it performs better than other loss functions on the link prediction task. We sample one positive pair and one negative pair for each node as training samples in each epoch, and train the model until the loss converges.

E.4 HYPER-PARAMETERS OF DOWNSTREAM METHODS

For the downstream models, we optimize the learning rate over {0.01, 0.001} for all models. For MLP, GraphSAGE and GraphSAINT, we optimize the number of layers over {1, 3}. For RevGAT, we keep the default hyperparameter choices. For SAGN, we also optimize the number of layers over {1, 3}. For GAMLP, we directly adopt the settings from the official implementation. Note that all hyperparameter tuning applies to all pre-trained node features (Xplain, XSSLGNN and XGIANT).

E.5 COMPUTATIONAL ENVIRONMENT

All experiments are conducted on an AWS p3dn.24xlarge instance, consisting of 96 Intel Xeon CPUs with 768 GB of RAM and 8 Nvidia V100 GPUs with 32 GB of memory each.

E.6 ILLUSTRATION OF THE IMPROVEMENT OF GIANT-XRT

See Figure 5.
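As a complement to the description in Appendix E.3, the following is a minimal sketch of how the XMC instance-to-label matrix and the PIFA label embeddings can be assembled from an adjacency matrix and raw text. The toy graph, the placeholder texts, and the use of scikit-learn's TfidfVectorizer are assumptions made for illustration; the actual GIANT-XRT pipeline relies on the PECOS/XR-Transformer implementation.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

# Toy raw text and a toy symmetric adjacency (placeholders for illustration).
raw_text = [
    "graph neural networks for node classification",
    "self-supervised learning on graphs",
    "extreme multi-label text classification",
    "language models for text representations",
]
rows, cols = [0, 1, 0, 2, 1, 3], [1, 0, 2, 0, 3, 1]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

# The XMC instance-to-label matrix is the adjacency itself: each node's
# neighborhood vector A_i serves as its multi-label y_i.
Y = A

# Sparse TF-IDF features Psi(T_i) extracted from the raw text.
T = TfidfVectorizer().fit_transform(raw_text)

# PIFA label embeddings: aggregate the TF-IDF features of the positive
# instances of each label, then L2-normalize each row (cf. Eq. (1)).
Z = normalize(Y.T @ T, norm="l2", axis=1)

# Z is the input to balanced k-means for building the hierarchical label tree.
print(Z.shape)  # (number of labels, number of TF-IDF features)
```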
1. What is the focus and contribution of the paper on self-supervised learning for node feature learning?
2. What are the strengths of the proposed approach, particularly in its novelty and experimental results?
3. What are the weaknesses of the paper, especially regarding its theoretical analysis?
4. Do you have any concerns about the connection between neighborhood prediction and the XMC problem?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

The paper proposes a self-supervised learning framework for learning node features by exploring the correlation between the node features and the graph structure, leveraging graph information through neighborhood prediction. Specifically, the proposed GIANT approach is combined with the pre-trained language model BERT and incorporates the XMC formalism via the XR-Transformer. A partial theoretical analysis is also presented. Experiments conducted on three large benchmark datasets show promising improvements.

Review

Strengths: Introducing the idea of neighborhood prediction to guide self-supervised node feature learning is interesting and somewhat novel. Connecting neighborhood prediction with the XMC problem is novel. Extensive experiments are conducted on OGB and show new state-of-the-art results.

Weaknesses: The reviewer has some concerns about the provided theoretical analysis based on the cSBM (Deshpande et al., 2018), which seems misleading and incomplete. The theoretical analysis could be deduced from the analysis in Baranwal et al. (2021) with a few changes. In Baranwal et al. (2021), the cSBM is used to analyze the effect of the graph convolution operation on linear separability. The established theoretical results show that if the distance between the means of the two Gaussian mixture components is not larger than a threshold, the results after graph convolution are not guaranteed, with high probability, to improve linear separability. However, the statement of Theorem 4.4 is relatively vague. Note that PIFA is just one step of graph convolution applied to the node features, plus a normalization step. What can we say about the performance of using the PIFA embedding? Without characterizing the node features and the affinity of the graph convolution, it is hard to reach a convincing conclusion. Furthermore, consider the requirement p > q, i.e., that the probability p of having a link between two nodes with the same label yi = yj should be larger than the probability q of having a (wrong) link between two nodes with different labels yi ≠ yj. Is it necessary or not? Why?
However, no theoretical evidence was given in support of this approach to solving the XMC problem. We next leverage recent advances in graph learning to analytically characterize the benefits of using XR-Transformers. Description of the cSBM. Using our Assumption 4.1, we analyze the case where the graph and node features are generated according to a cSBM (Deshpande et al., 2018) (see Figure 4). For simplicity, we use the most straightforward two-cluster cSBM. Let {yi}ni=1 ∈ {0, 1} be the labels of nodes in a graph. We denote the size of class j ∈ {0, 1} by Cj = |{i : yi = j,∀i ∈ [n]}|. We also assume that the classes are balanced, i.e., C0 = C1 = n2 . The node features {Xi}ni=1 ∈ Rd are independent d-dimensional Gaussian random vectors, such that Xi ∼ N( r√d1, σ2 d I) if yi = 0 and Xi ∼ N(− r√d1, σ2 d I) if yi = 1. The adjacency matrix of the cSBM is denoted by A, and is clearly symmetric. All edges are drawn according to independent Bernoulli random variables, so that Aij ∼ Ber(p) if yi = yj and Aij ∼ Ber(q) if yi 6= yj . Our analysis is inspired by Baranwal et al. (2021) and Li et al. (2019), albeit their definitions of graph convolutions and random walks differ from those in PIFA. For our subsequent analysis, we also make use of the following standard assumption and define the notion of effect size. Assumption 4.2. p, q = ω( √ logn n ). |p−q| p+q = Θ(1). d = o(n). 0 < r, σ = Θ(1). Note that Baranwal et al. (2021) also poses constraints on p, q, |p−q|p+q and d. In contrast, we do not require p − q > 0 to hold (Baranwal et al., 2021; Li et al., 2019) so that we can address graph structures that are either homophilic or heterophilic. Due to the difference between PIFA and standard graph convolution, we require p, q to be larger compared to the corresponding values used in Baranwal et al. (2021). Definition 4.3. For cSBM, the effect size of the two centroids of the node features X of the two different classes is defined as ‖EXi − EXj‖√ E‖Xi − EXi‖2 + √ E‖Xj − EXj‖2 , where yi 6= yj . (3) In the standard definition of effect size, the mean difference is divided by the standard deviation of a class, as the latter is assumed to be the same for both classes. We use the sum of both standard deviations to prevent any ambiguity in our definition. Note that for the case of isotropic Gaussian distributions, the larger the effect size the larger the separation of the two classes. Theoretical results. We are now ready to state our main theoretical result, which asserts that the effect size of centroids for PIFA embeddings Z is asymptotically larger than that obtained from the original node features. Our Theorem 4.4 provides strong evidence that using PIFA in XRTransformers offers improved clustering results and consequently, better feature quality. Theorem 4.4. For the cSBM and under Assumption 4.2, the effect size of the two centroids of the node features X of the two different classes is rσ = Θ(1). Moreover, the effect size of the two centroids of the PIFA embedding Z of the two different classes, conditioned on an event of probability at least 1−O( 1nc ) for some constant c > 0, is ω(1). We see that although two nodes i, j from the same class have the same neighborhood vectors in expectation, EAi = EAj , their Hamming distance can be large in practice. This finding is formally characterized in Proposition 4.5. Proposition 4.5. For the cSBM and under Assumption 4.2, the Hamming distance between Ai and Aj with yi = yj is ω( √ n log n) with probability at least 1−O( 1nc ), for some c > 0. 
Hence, directly using neighborhood vectors for self-supervision is not advisable. Our result also agrees with findings from the XMC literature (Chang et al., 2020b). It is also intuitively clear that averaging neighborhood vectors from the same class can reduce the variance, which is approximately performed by clustering based on node representations (in our case, via a PIFA embedding). This result establishes the importance of clustering within the XR-Transformer approach and for the SSL neighborhood prediction task. 5 EXPERIMENTS Evaluation Datasets. We consider node classification as our downstream task and evaluate GIANT on three large-scale OGB datasets (Hu et al., 2020a) with available raw text: ogbn-arxiv, ogbn-products, and ogbn-papers100M. The parameters of these datasets are given in Table 1 and detailed descriptions are available in the Appendix E.1. Following the OGB benchmarking protocol, we report the average test accuracy and the corresponding standard deviation by repeating 3 runs of each downstream GNN model. Evaluation Protocol. We refer to our actual implementation as GIANT-XRT since the multi-scale neighborhood prediction task in the proposed GIANT framework is solved by an XR-Transformer. In the pre-training stage, GIANT-XRT learns a raw text encoder by optimizing the self-supervised neighborhood prediction objective, and generates a fine-tuned node embedding for later stages. For the node classification downstream tasks, we input the node embeddings from GIANT-XRT into several different GNN models. One is the multi-layer perceptron (MLP), which does not use graph information. Two other methods are GraphSAGE (Hamilton et al., 2017), which we applied to ogbn-arxiv, and GraphSAINT (Zeng et al., 2020), which we used for ogbn-products as it allows for mini-batch training. Due to scalability issues, we used Simple Graph Convolution (SGC) (Wu et al., 2019) for ogbn-papers100M. We also tested the state-of-the-art GNN for each dataset. At the time we conducted the main experiments (07/01/2021), the top-ranked model for ogbn-arxiv was RevGAT2 (Li et al., 2021) and the top-ranked model for ogbn-products was SAGN3 (Sun & Wu, 2021). When we conducted the experiment on ogbn-papers100M (09/10/2021), the topranked model for ogbn-papers100M was GAMLP4 (Zhang et al., 2021b) (Since then, the highest reported accuracy was improved by 0.05% for ogbn-arxiv and 0.31% for ogbn-products; both of these improvements fall short compared to those offered by GIANT). For all evaluations, we use publicly available implementations of the GNNs. For RevGAT, we report the performance of the model with and without self knowledge distillation; the former setting is henceforth referred to as +SelfKD. For SAGN, we report results with the self-label-enhanced (SLE) feature, and denote them by SAGN+SLE. For GAMLP, we report results with and without Reliable Label Utilization (RLU); the former is denoted as GAMLP+RLU. SSL GNN Competing Methods. We compare GIANT-XRT to methods that rely on graphagnostic feature inputs and use node embeddings generated by various SSL GNNs modules. The graph-agnostic features are either default features available from the OGB datasets (denoted by OGB-feat) or obtained from plain BERT embeddings (without fine-tuning) generated from raw text (denoted by BERT?). For OGB-feat combined with downstream GNN methods, we report the results from the OGB leaderboard (and denote them by †). 
For the SSL GNNs modules, we test three frequently-used methods: (Variantional) Graph AutoEncoders (Kipf & Welling, 2016) (denoted by (V)GAE); Deep Graph Infomax (Velickovic et al., 2019) (denoted by DGI); and GraphZoom (Deng et al., 2020) (denoted by GZ). The hyper-parameters of SSL GNNs modules are given in the Appendix E.2. For all reported results, we use Xplain, XSSLGNN and XGIANT (c.f. Figure 1) to denote which framework the method belongs to. Note that XGIANT refers to our approach. The implementation details and hyper-parameters of GIANT-XRT can be founded in the Appendix E.3. 5.1 MAIN RESULTS The results for the ogbn-arxiv and ogbn-products datasets are listed in Table 2. Our GIANT-XRT approach gives the best results for both datasets and all downstream models. It improves the accuracy of the top-ranked OGB leaderboard models by a large margin: 1.86% on ogbn-arxiv and 1.19% on ogbn-products. Using graph-agnostic BERT embeddings does not necessarily lead to good results (see the first two rows in Table 2). This shows that the improvement of our method is not merely 2https://github.com/lightaime/deep_gcns_torch/tree/master/examples/ogb_ eff/ogbn_arxiv_dgl 3https://github.com/skepsun/SAGN_with_SLE 4https://github.com/PKU-DAIR/GAMLP due to the use of a more powerful language model, and establishes the need for self-supervision governed by graph information. Another observation is that among possible combinations involving a standard GNN pipeline with a SSL module, BERT+(V)GAE offers the best performance. This can be attributed to exploiting the correlation between numerical node features and the graph structure, albeit in a two-stage approach within the standard GNN pipeline. The most important finding is that using node features generated by GIANT-XRT leads to consistent and significant improvements in the accuracy of all tested methods, when compared to the standard GNN pipeline: In particular, on ogbn-arxiv, the improvement equals 17.58% for MLP and 3.1% for GraphSAGE; on ogbn-products, the improvement equals 18.76% for MLP and 5.32% for GraphSAINT. Figure 5 in the Appendix E.6 further illustrate the gain obtained by our GIANT-XRT over SOTA methods on OGB leaderboard. Another important observation is that GIANT-XRT is highly scalable, which can be clearly observed on the example of the ogbn-papers100M dataset, for which the results are shown in Table 3. In particular, GIANT-XRT improves the accuracy of the top-ranked model, GAMLP-RLU, by a margin of 1.42%. Furthermore, GIANT-XRT again consistently improves all tested downstream methods on the ogbn-papers100M dataset. As a final remark, we surprisingly find that combining MLP with GIANT-XRT greatly improves the performance of the former learner on all datasets. It becomes just slightly worse then GIANT-XRT+GNNs and can even outperform the GraphSAGE and GraphSAINT methods with default OGB features on ogbn-arxiv and ogbn-products datasets. This is yet another positive property of GIANT, since MLPs are low-complexity and more easily implementable than other GNNs. 5.2 ABLATION STUDY We also conduct an ablation study of the GIANT framework to determine the relevance of each module involved. The first step is to consider alternatives to the proposed multi-scale neighborhood prediction task: In this case, we fine-tune BERT with a SSL link prediction approach, which we for simplicity refer to as BERT+LP. 
In addition, we examine how the PIFA embedding affects the performance of GIANT-XRT and how more informative node features (TFIDF) can improve the clustering steps. First, recall that in GIANT-XRT, we use TFIDF features from raw text to construct PIFA embeddings. We subsequently use the term “NO TFIDF” to indicate that we replaced the TFIDF feature matrix by an identity matrix, which contain no raw text information. The term “TFIDF+NO PIFA” is used to refer to the setting where only raw text information (node attributes) is used to perform hierarchical clustering. Similarly, “NO TFIDF+PIFA” indicates that we only use normalized neighborhood vectors (graph structure) to construct the hierarchical clustering. If both node attributes and graph structure are ignore, the result is a random clustering. Nevertheless, we keep the same sizes of clusters at each level in the hierarchical clustering. The results of the ablation study are listed under rows indexed by XGIANT in Table 2 for ogbn-arxiv and ogbn-products datasets. They once again confirm that GIANT-XRT consistently outperforms other tested methods. For BERT+LP, we find that it has better performance on ogbn-arixv compared to that of the standard GNN pipeline but a worse performance on ogbn-products. This shows that using link prediction to fine-tune BERT in a self-supervised manner is not robust in general, and further strengthens the case for using neighborhood instead of link prediction. With respect to the ablation study of GIANT-XRT, we see that NO TFIDF+NO PIFA indeed gives the worst results. Using node attributes (TFIDF features) or graph information (PIFA) to construct the hierarchical clustering in GIANT-XRT leads to performance improvements that can be seen from Table 2. Nevertheless, combining both as done in GIANT-XRT gives the best results. Moreover, one can observe that using PIFA always gives better results compared to the case when PIFA is not used. It aligns with our theoretical analysis, which shows that PIFA embeddings lead to better hierarchical clusterings. ACKNOWLEDGEMENT The authors thank the support from Amazon and the Amazon Post Internship Fellowship. Cho-Jui Hsieh is partially supported by NSF under IIS-2008173 and IIS-2048280. This work was funded in part by the NSF grant 1956384. 6 ETHICS STATEMENT We are not aware of any potential ethical issues regarding our work. 7 REPRODUCIBILITY STATEMENT We provide our code in the supplementary material along with an easy-to-follow description and package dependency for reproducibility. Our experimental setting is stated in Section 5 and details pertaining to hyperparameters and computational environment are described in the Appendix. All tested methods are integrated in our code: https://github.com/amzn/pecos/tree/ mainline/examples/giant-xrt. A CONCLUSIONS We introduced a novel self-supervised learning framework for graph-guided numerical node feature extraction from raw data, and evaluated it within multiple GNN pipelines. Our method, termed GIANT, for the first time successfully resolved the issue of graph-agnostic numerical feature extraction. We also described a new SSL task, neighborhood prediction, established a connection between the task and the XMC problem, and solved it using XR-Transformers. We also examined the theoretical properties of GIANT in order to evaluate the advantages of neighborhood prediction over standard link prediction, and to assess the benefits of using XR-Transformers. 
Our extensive numerical experiments, which showed that GIANT consistently improves state-of-the-art GNN models, were supplemented with an ablation study that aligns with our theoretical analysis.

B PROOF OF THEOREM 4.4

Throughout the proof, we use µ = (r/√d) 1 to denote the mean of the node features from class 0 and ν = −(r/√d) 1 for class 1. We keep this notation to demonstrate that our setting for the means can easily be generalized. The choice of the high probability events will become clear in the proof. The effect size of the centroids for the node features X follows directly from Definition 4.3. By directly plugging in the mean and the standard deviation, we have

2r / (2σ) = Θ(1).    (4)

The last equality is due to our assumption that r, σ > 0 are both constants. To prove the effect size of the centroids for the PIFA embedding Z, we first need to introduce some standard concentration results for sums of Bernoulli and Gaussian random variables.

Lemma B.1 (Hoeffding's inequality (Hoeffding, 1994)). Let S_n = Σ_{i=1}^n X_i, where the X_i are i.i.d. Bernoulli random variables with parameter p. Then for any t > 0, we have

P(|S_n − np| ≥ t) ≤ 2 exp(−2t²/n).    (5)

Lemma B.2 (Concentration of a sum of Gaussians). Let S_n = Σ_{i=1}^n X_i, where the X_i are i.i.d. Gaussian random variables with zero mean and standard deviation σ. Then for some constant c > 0, we have

P(|S_n| ≥ cσ√(n log n)) ≤ 2 exp(−(c²/2) log n).    (6)

Now we are ready to prove our claim. Recall that the PIFA embedding Z is defined as follows:

Z_i = v_i / ‖v_i‖, where v_i = Σ_{j: A_ij = 1} X_j = [AX]_i.    (7)

We first focus on analyzing the vector v_i. We denote N_iy = |{j : y_j = y ∧ A_ij = 1, j ∈ [n]}| and N_i = N_i0 + N_i1. Without loss of generality, we assume y_i = 0. The conditional expectation of v_i is

E[v_i | A] = N_i0 µ + N_i1 ν.    (8)

Next, by leveraging Lemma B.1, we have

P(|N_i1 − nq/2| ≥ t) ≤ 2 exp(−4t²/n).    (9)

By choosing t = c_1 √(n log n) for some constant c_1 > 1, we know that with probability at least 1 − O(1/n^{c_1}), N_i1 ∈ [nq/2 − c_1 √(n log n), nq/2 + c_1 √(n log n)]. Finally, by our Assumption 4.2, we know that nq = ω(√(n log n)). Thus, we arrive at the conclusion that with probability at least 1 − O(1/n^{c_1}), N_i1 = (nq/2)(1 ± o(1)). Following the same analysis, we can prove that with probability at least 1 − O(1/n^{c_1}), N_i0 = (np/2)(1 ± o(1)). The only difference is that we are dealing with n/2 − 1 random variables in this case, as there is no self-loop. Nevertheless, n/2 − 1 = (n/2)(1 − o(1)), so the result is the same. Note that we need to apply a union bound over 2n error events (∀i ∈ [n], with 2 cases for N_i0 and N_i1 respectively). Together, we know that the error probability is upper bounded by O(1/n^{c_2}) for some new constant c_2 = c_1 − 1 > 0. Hence, we have characterized the mean of v_i on a high probability event B_1. Next we need to analyze its norm. By direct analysis, conditioned on A, we have

‖v_i‖² =_d Σ_{k=1}^d (µ_k N_i0 + ν_k N_i1 + Σ_{j=1}^n A_ij G_jk)²,    (10)

where the G_jk are i.i.d. Gaussian random variables with zero mean and standard deviation σ/√d, and =_d stands for equality in distribution. Then, by Lemma B.2, we know that with probability at least 1 − O(1/N_i^{c_3}) for some constant c_3 > 2 + c_2,

|Σ_{j=1}^n A_ij G_jk| = O((σ/√d) √(N_i log N_i)).    (11)

This is because, conditioned on A, we are summing over N_i Gaussian random variables. Recall that, conditioned on our high probability event B_1, N_i = (n/2)(p + q)(1 + o(1)) = ω(√(n log n)) ≤ n. Thus, we know that for some c_2 > 0, with probability at least 1 − O(1/n^{c_2} + 1/n^{c_3}) = 1 − O(1/n^{c_2}), we have |Σ_{j=1}^n A_ij G_jk| = O((σ/√d) √(n log n)).
Again, recall that both N_i0, N_i1 = ω(√(n log n)); thus, together we have

v_ik² = (n²/4)(µ_k p + ν_k q)²(1 + o(1))    (12)

⟹ ‖v_i‖ = (n/2) √(Σ_{k=1}^d (µ_k p + ν_k q)²) (1 + o(1))    (13)

= (n/2) |p − q| r (1 + o(1)).    (14)

Again, we need to apply a union bound over nd error events, which results in an error probability O(1/n^{c_2}), since c_3 > 2 + c_2 and d = o(n) by our Assumption 4.2. We denote the corresponding high probability event by B_2. Note that the same analysis applies to the case y_i = 1, where the result for the norm is the same and the result for v_i is obtained by swapping p and q. Combining all the current results, we know that with probability at least 1 − O(1/n^{c_2}) for some c_2 > 0, the PIFA embedding Z_i equals

Z_i = [(n/2)(µp + νq)(1 + o(1))] / [(n/2)|p − q| r (1 + o(1))] = (µp + νq)(1 + o(1)) / (r|p − q|) if y_i = 0,    (15)

Z_i = [(n/2)(µq + νp)(1 + o(1))] / [(n/2)|p − q| r (1 + o(1))] = (µq + νp)(1 + o(1)) / (r|p − q|) if y_i = 1.    (16)

Hence, the centroid distance is

‖(µ(p − q) + ν(q − p))(1 + o(1)) / (r|p − q|(1 + o(1)))‖    (17)

= ‖(µ − ν)(p − q)(1 + o(1)) / (r|p − q|(1 + o(1)))‖    (18)

= (‖µ − ν‖/r)(1 + o(1)) = 2(1 + o(1)).    (19)

Now we turn to the standard deviation part. Specifically, we will characterize the following quantity (again, recall that we assume y_i = 0 w.l.o.g.):

‖Z_i − (µp + νq)/(r|p − q|)‖.    (20)

Recall that the latter term is the centroid for nodes with label 0. Hence, by characterizing this quantity we can understand the deviation of the PIFA embedding around its centroid. From the analysis above, conditioned on A, we have

‖Z_i − (µp + νq)/(r|p − q|)‖ ≤ ‖(N_i0 µ + N_i1 ν)/‖v_i‖ − (µp + νq)/(r|p − q|)‖ + ‖Σ_{j=1}^n A_ij G_j / ‖v_i‖‖.    (21)

For the terms ‖v_i‖, N_i0, N_i1 and ‖Σ_{j=1}^n A_ij G_j‖, we have already derived the concentration results above. Plugging in those results, the first term becomes

‖(N_i0 µ + N_i1 ν)/‖v_i‖ − (µp + νq)/(r|p − q|)‖    (22)

= ‖(n/2)(µp(1 + o(1)) + νq(1 + o(1))) / ((n/2) r|p − q|(1 + o(1))) − (µp + νq)/(r|p − q|)‖    (23)

= ‖(µp·o(1) + νq·o(1)) / (r|p − q|(1 + o(1)))‖ = r|p − q| o(1) / (r|p − q|(1 + o(1))) = o(1).    (24)

The second term becomes

‖Σ_{j=1}^n A_ij G_j / ‖v_i‖‖ = O(σ√(n log n)) / ((n/2) r|p − q|(1 + o(1)))    (25)

= O(σ√(n log n)) / ω(r√(n log n)) = o(1),    (26)

where the last equality follows from our Assumption 4.2 and the fact that r, σ are constants. Together, we have shown that the deviation of the nodes from their centroid is of scale o(1). A similar result holds for the case y_i = 1. Hence, the standard deviation of Z_i is o(1) on the high probability event B_1 ∩ B_2, so the effect size for the PIFA embedding is ω(1) with probability at least 1 − O(1/n^{c_2}) for some constant c_2 > 0, which implies that the PIFA embedding gives a better-clustered node representation. Thus, it is preferable to use the PIFA embedding, and we complete the proof.

C PROOF OF PROPOSITION 4.5

Note that under the setting of the cSBM and Assumption 4.2, the Hamming distance of A_i, A_j for y_i = y_j is a Poisson-Binomial random variable. More precisely, note that

∀k ∈ [n] \ {i, j} s.t. y_k = y_i: |A_ik − A_jk| ~ Ber(2p(1 − p)),    (27)

∀k ∈ [n] \ {i, j} s.t. y_k ≠ y_i: |A_ik − A_jk| ~ Ber(2q(1 − q)),    (28)

where all of these are independent. Hence, we have

Hamming(A_i, A_j) ~ Bin(n/2 − 2, 2p(1 − p)) + Bin(n/2, 2q(1 − q)),    (29)

where Hamming(A_i, A_j) denotes the Hamming distance of A_i, A_j, and Bin(a, b) stands for a Binomial random variable with a trials and success probability b. By leveraging Lemma B.1, we know that for a random variable X ~ Bin(n/2, 2q(1 − q)), we have

P(|X − nq(1 − q)| ≥ t) ≤ 2 exp(−4t²/n).    (30)
Note that the function q(1 − q) is monotonically increasing for q ∈ [0, 1/2], with its maximum at q = 1/2. Combined with Assumption 4.2, we know that nq(1 − q) = ω(√(n log n)). Hence, by choosing t = (1/2)√(cn log n) for some constant c > 0, with probability at least 1 − O(1/n^c) we have

X ≥ nq(1 − q) − (1/2)√(cn log n) = ω(√(n log n)).    (31)

Finally, note that with probability 1 we have Bin(n/2 − 2, 2p(1 − p)) ≥ 0. Hence, showing that X ~ Bin(n/2, 2q(1 − q)) is of order ω(√(n log n)) with probability at least 1 − O(1/n^c) implies that the Hamming distance of A_i, A_j is of order ω(√(n log n)) with at least the same probability. Together, we complete the proof.

D PROOF OF LEMMA B.2

By the Chernoff bound, we have

P(S_n ≥ a) ≤ min_{t>0} exp(−ta) E exp(tS_n).    (32)

By the i.i.d. assumption, we have

min_{t>0} exp(−ta) E exp(tS_n) = min_{t>0} exp(−ta) (E exp(tX_1))^n.    (33)

Note that the moment generating function (MGF) of a zero-mean Gaussian random variable with standard deviation σ is exp(σ²t²/2). Hence we have

min_{t>0} exp(−ta) (E exp(tX_1))^n = min_{t>0} exp((1/2) nσ²t² − ta).    (34)

Minimizing the upper bound with respect to t, we can choose t = a/(nσ²). Plugging in this choice of t, we have

P(S_n ≥ a) ≤ exp(−a²/(2nσ²)).    (35)

Finally, by choosing a = cσ√(n log n) for some c > 0, applying the same bound for the other side, and taking a union bound, we complete the proof.

E EXPERIMENTAL DETAILS

E.1 DATASETS

In this work, we choose node classification as the downstream task to focus on. We conduct experiments on three large-scale datasets, ogbn-arxiv, ogbn-products and ogbn-papers100M, as these are the only three datasets in OGB with raw text available. Ogbn-arxiv and ogbn-papers100M (Wang et al., 2020; Hu et al., 2020a) are citation networks where each node represents a paper. The corresponding input raw text consists of titles and abstracts, and the node labels are the primary categories of the papers. Ogbn-products (Chiang et al., 2019; Hu et al., 2020a) is an Amazon co-purchase network where each node represents a product. The corresponding input raw text consists of the titles and descriptions of products. The node labels are the categories of products.

E.2 HYPER-PARAMETERS OF SSL GNN MODULES

The implementations of (V)GAE and DGI are taken from the PyTorch Geometric library (Fey & Lenssen, 2019). Note that due to GPU memory constraints, we adopt GraphSAINT (Zeng et al., 2020) sampling for (V)GAE and DGI on ogbn-products. GraphZoom is taken directly from the official repository (https://github.com/cornell-zhang/GraphZoom). For all downstream GNNs in the experiment, we average the results over three independent runs. The only exception is OGB-feat + downstream GNNs, where we directly take the results from the OGB leaderboard. Note that we also repeated the experiment of OGB-feat + downstream GNNs, where the accuracy is similar to the one reported on the leaderboard. To prevent confusion, we decided to take the results from the OGB leaderboard for comparison. For the BERT model used throughout the paper, we use "bert-base-uncased" downloaded from HuggingFace (https://huggingface.co/bert-base-uncased). For the methods used in the SSL GNN module, we try our best to follow the default settings. We slightly optimize some hyperparameters (such as the learning rate, max epochs, etc.) to ensure that the training process converges. To ensure a fair comparison, we fix the output dimension for all SSL GNNs at 768, which is the same as bert-base-uncased and XR-Transformer.

E.3 HYPER-PARAMETERS OF GIANT-XRT AND BERT+LP

Pre-training of GIANT-XRT. In Table 4, we outline the pre-training hyper-parameters of GIANT-XRT for all three OGB benchmark datasets.
We mostly follow the conventions of XR-Transformer (Zhang et al., 2021a) to set the hyper-parameters. For the ogbn-arxiv and ogbn-products datasets, we use the full graph adjacency matrix as the XMC instance-to-label matrix Y ∈ {0, 1}^{n×n}, where n is the number of nodes in the graph. For ogbn-papers100M, we sub-sample the 50M (out of 111M) most important nodes based on the page rank scores of the bipartite graph (He et al., 2016). The resulting XMC instance-to-label matrix Y has 50.0M rows, 49.9M columns, and 2.5B edges. The PIFA node embedding for hierarchical clustering is constructed by aggregating the TF-IDF features of a node's neighbors. Specifically, the PIFA node embedding is a 4.2M-dimensional sparse vector, consisting of 1M word unigram, 3M word bigram, and 200K character trigram features. Finally, for ogbn-arxiv and ogbn-products, we use TFN+MAN negative sampling to pre-train XR-Transformer, where the model-aware negatives (MAN) are selected from the model's top-20 predictions. Because of the extreme scale of ogbn-papers100M, we consider TFN only, to avoid excessive CPU memory consumption on the GPU machine.

Pre-training of BERT+LP. To verify the effectiveness of the multi-scale neighborhood prediction loss, we consider learning a Siamese BERT encoder with the alternative link prediction loss for pre-training, hence the name BERT+LP. We implement the BERT+LP baseline with the triplet loss (Balntas et al., 2016), as we empirically observed that it has better performance than other loss functions for the link prediction task. We sample one positive pair and one negative pair for each node as training samples for each epoch, and train the model until the loss converges.

E.4 HYPER-PARAMETERS OF DOWNSTREAM METHODS

For the downstream models, we optimize the learning rate within {0.01, 0.001} for all models. For MLP, GraphSAGE and GraphSAINT, we optimize the number of layers within {1, 3}. For RevGAT, we keep the default hyperparameter choices. For SAGN, we also optimize the number of layers within {1, 3}. For GAMLP, we directly adopt the settings from the official implementation. Note that all hyperparameter tuning applies to all pre-trained node features (Xplain, XSSLGNN and XGIANT).

E.5 COMPUTATIONAL ENVIRONMENT

All experiments are conducted on an AWS p3dn.24xlarge instance, consisting of 96 Intel Xeon CPUs with 768 GB of RAM and 8 Nvidia V100 GPUs with 32 GB of memory each.

E.6 ILLUSTRATION OF THE IMPROVEMENT OF GIANT-XRT

See Figure 5.
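For concreteness, the PIFA construction used throughout (aggregate the TF-IDF vectors of a node's neighbors, then L2-normalize, i.e. Z_i = [AX]_i / ‖[AX]_i‖, as in the proof of Theorem 4.4) can be sketched in a few lines. This is a minimal illustration assuming SciPy sparse inputs; the function and argument names are ours, and this is not the actual PECOS implementation:

```python
import numpy as np
import scipy.sparse as sp

def pifa_embeddings(A, X):
    """Sketch of PIFA: Z_i = v_i / ||v_i|| with v_i = [A X]_i.

    A: (n, n) sparse adjacency matrix.
    X: (n, d) sparse TF-IDF feature matrix; passing an identity matrix
       reproduces the 'NO TFIDF' ablation variant.
    """
    V = (A @ X).tocsr()                  # aggregate neighbor features
    sq_norms = np.asarray(V.multiply(V).sum(axis=1)).ravel()
    norms = np.sqrt(sq_norms)
    norms[norms == 0.0] = 1.0            # guard against isolated nodes
    return sp.diags(1.0 / norms) @ V     # row-wise L2 normalization
```

Under this sketch, the ablation variants correspond to pifa_embeddings(A, X_tfidf) for GIANT-XRT, pifa_embeddings(A, sp.identity(n, format="csr")) for NO TFIDF+PIFA, and using X_tfidf directly for TFIDF+NO PIFA.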
1. What is the focus of the paper regarding graph neural networks? 2. What are the strengths of the proposed framework, particularly in its applicability and theoretical support? 3. What are the weaknesses of the paper, specifically regarding efficiency and presentation quality?
Summary Of The Paper Review
Summary Of The Paper
This paper develops a self-supervised learning framework to extract node features with the aid of the graph. Connections between neighborhood prediction and the XMC problem are also established. Experiments on large-scale data show the superiority of the proposed method.
Review
Strengths: The problem is well motivated. The proposed framework could be useful in general. Both the theoretical analysis and the experiments are convincing. Weaknesses: The efficiency is not supported with experiments. There are some typos.
ICLR
Title
Object detection deep learning networks for Optical Character Recognition

Abstract
In this article, we show how we applied a simple approach coming from deep learning networks for object detection to the task of optical character recognition, in order to build image features tailored for documents. In contrast to scene text reading in natural images using networks pretrained on ImageNet, our document reading is performed with small networks inspired by the MNIST digit recognition challenge, at a small computational budget and a small stride. Modern object detection frameworks allow direct end-to-end training, with no algorithm other than the deep network and non-max suppression to filter duplicate predictions. The trained weights can be used for higher-level models, such as, for example, document classification or document segmentation.

1 INTRODUCTION
Document images make the use of deep learning networks a complex task, since most deep learning network architectures have been designed and trained for natural images, making them useless for document images, which consist mainly of black and white characters and figures. This is in particular the case for classification networks (VGG, ResNets, ...), object detection networks (Fast RCNN, Faster RCNN, Yolo, SSD, ...), and segmentation networks (FCN, U-Net, SegNet, DeconvNet, Dilated-Net, ParseNet, DeepLab, ...), which cannot be applied directly, even with finetuning. Two challenges arise with deep learning and document data. First, we need to train specific features for this type of data. Second, the available datasets can be smaller than classical computer vision datasets (ImageNet, COCO, ...), in particular when it is required to annotate images for a specific purpose. To reduce the amount of data needed to train high-level descriptions of document images, such as document zones, segments, lines, or elements, the idea is to train a smaller network on OCR data, which exists at massive scale, and to use the weights of this small network as pretrained early layers in bigger networks to perform high-level tasks with less required data. In our experiments, we show that the best performing approaches currently available for object detection on natural images can be used with success for OCR tasks. Code will be released on Github, so that the open research community can bring the best model architectures in terms of accuracy and speed/size efficiency.

2 RELATED WORK
In this section we quickly review the literature on OCR and object detection.

2.1 APPROACHES FOR OCR
Most deep learning approaches using object detection methods for OCR are applied to the task of scene text recognition, also called text spotting, which consists in recognizing image areas of text, such as a sign or a wall plaque. Once the text area is recognized, a reading method is applied inside the zone. Some approaches use weakly supervised training, either with a CTC loss, leaving the alignment between the character positions and the output result to a recurrent network such as a bidirectional LSTM (He et al. (2015), Jaderberg et al. (2014b), Wang et al. (2018), Goodfellow et al. (2013), dro (2017), Liao et al. (2016)), or with a fixed number of softmax classifiers (Jaderberg et al. (2015), Bartz et al. (2017)); some other approaches use guided learning (He et al., 2018).
These approaches are mainly driven by the Street View SVHN, Uber-Text (Zhang et al., 2017), FSNS (Smith et al., 2017), Coco-text (Veit et al., 2016), ICDAR 2003 (Lucas et al., 2003) and 2015 (Karatzas et al., 2015), SVT and IIIT5K (Mishra et al., 2012), and Synth90k (Jaderberg et al., 2014a) datasets. Rather than recognizing at the word or scene-text level, few approaches concern the direct detection of characters in natural images, using a localization network in ST-CNN (Jaderberg et al., 2015), or a modern object detection approach in yolo-digits (Redmon & Farhadi, 2018) to recognize digits in natural images. This work is the first to apply modern object detection deep learning approaches to document data with small convolutional networks, without converting the documents to natural images as in (Gilani et al., 2017). (Tensmeyer & Martinez, 2017) shows that document classification accuracy decreases with deeper networks.

2.2 APPROACHES FOR OBJECT DETECTION
Modern object detection approaches are divided into two classes. The first class yields the highest-accuracy object detectors, such as Fast-RCNN (Girshick, 2015), Faster-RCNN (Ren et al., 2015) and Mask-RCNN (Detectron) (He et al., 2017), and is based on the two-stage approach of R-CNN (Girshick et al., 2013). In the first stage, an algorithm, such as Selective Search, or a deep learning model, generates a set of candidate proposals for object regions. In the second stage, a deep learning network classifies the presence of the object (the objectness) and its class, as well as estimates the precise object bounding box. In the second class of object detectors, the objectness, the class, as well as the bounding box regression, are directly predicted by a single dense deep learning network. These approaches include OverFeat (Rowley et al., 1995), Yolo (Redmon et al. (2015), Redmon & Farhadi (2016), Redmon & Farhadi (2018)) and SSD (Liu et al., 2015).

3 OUR APPROACH TO OCR
In our work, as a first attempt at applying object detection networks to OCR, we design a single-stage object detector, predicting the confidence of an object's presence, its class, and the regression for its bounding box. In order to cope with multiple scales, we use the feature pyramid approach of SSD (Liu et al., 2015).

3.1 ARCHITECTURES
Our 1-scale models are inspired by the LeCun model for digit classification, except that the dense layers have been converted to convolutions (locally linear) in order to compute a prediction at multiple positions in the image, on a grid defined by the stride of the whole network. These models are composed of 2 convolution layers of kernel 3 with 32 and 64 features respectively, followed by a max pooling of stride 2 and 1 convolution layer of kernel 12 with 128 features, so that the receptive field of the network is 28 pixels high and wide. The offset of the model with valid paddings will be 14. We consider the stride of the last convolution as a parameter, stride scale, to adjust the stride of the whole model, which will be 2 × stride scale. On top of these features, 4 stacks of dense layers are used for the objectness, classification, position and scale regressions. We name this 1-scale model CNN_C32_C64_M2_C128_D. Our 2-scale models are composed of 2 convolution layers of kernel 3 with 32 and 64 features respectively, followed by a max pooling of stride 2, and 2 other convolution layers of kernel 3 with 64 features and another max pooling of stride 2. Each max pooling layer is followed by a convolution layer of kernel 11 and 12 respectively, so that the receptive field for each output is 28 and 56 pixels high and wide. The offset for each output is 14 and 28 respectively. We consider the stride of the output convolutions as a variable parameter, stride scale, to adjust the stride of the whole model, which will be 2 × stride scale for the first output, and 4 × stride scale for the second output. On top of these 2 stages of features, 4 stacks of dense layers are used as well. We name this 2-scale model CNN_C32_C64_M2_C64_C64_M2_C128_D_2. Each layer is followed by a ReLU activation, except for the outputs: objectness is computed with a sigmoid, classification with a softmax, position with a hyperbolic tangent and scale with a sigmoid.
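As a concrete reading of this description, the following is a minimal Keras sketch of the 1-scale model. The single 1x1 convolution per head and the 2-dimensional position and scale outputs are simplifying assumptions of ours; the text only specifies "4 stacks of dense layers" without giving their depths:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_1scale_model(stride_scale=6, n_classes=10):
    # Fully convolutional LeCun-style net: two valid 3x3 convs, a
    # stride-2 max pooling and a kernel-12 conv give a 28x28 receptive
    # field; the overall stride is 2 * stride_scale, as described above.
    inp = keras.Input(shape=(None, None, 1))
    x = layers.Conv2D(32, 3, padding="valid", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="valid", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Conv2D(128, 12, strides=stride_scale, padding="valid",
                      activation="relu")(x)
    # Prediction heads (simplified here to one 1x1 conv each), with the
    # output activations given in the text:
    obj = layers.Conv2D(1, 1, activation="sigmoid", name="objectness")(x)
    cls = layers.Conv2D(n_classes, 1, activation="softmax", name="classes")(x)
    pos = layers.Conv2D(2, 1, activation="tanh", name="position")(x)
    scl = layers.Conv2D(2, 1, activation="sigmoid", name="scale")(x)
    return keras.Model(inp, [obj, cls, pos, scl])

model = build_1scale_model()
```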
[Figure: diagrams of the 1-scale and 2-scale models.]

3.2 LOSS
For objectness, we need to account for the abundance of negative positions compared to positive positions. That is why we use the TensorFlow expression of weighted crossentropy, designed to ensure stability and avoid overflow:

(1 − z) × x + l × (log(1 + exp(−|x|)) + max(−x, 0)),

where l = 1 + (q − 1) × z, and x = logits, z = targets, q = pos_weight. We found that a positive weight of 1000 works well on our OCR dataset. The losses for classification and regression are the crossentropy loss and the MSE loss. We found that adding a multiplier of 100 to the regression loss helps convergence.

3.3 COMPUTATION OF AVERAGE PRECISION
It is common to use the mAP score as the final metric for object detection. In our case, we consider all classes as one class, in order to use average precision as a metric measuring the capacity of the models in terms of objectness and not classification. We use the name object mAP to distinguish it from the classical mAP score. The reason for this choice is that we focus on character detection in this work. For full character recognition, early results suggest that two-stage detectors might be a better fit than a one-stage detector, because in our 1-stage setting, classification accuracy drops when the classification network is not well positioned on the character (see the stride experiments on MNIST), and this argument could give an advantage to 2-stage detectors. Later on, we might add a second stage on top of this network, as in Mask RCNN or Faster RCNN, and this network might become a region proposal network. We leave this as future work, whose purpose will be to improve the classification accuracy.

4 DATASETS

4.1 TOY DATASET
We build a toy dataset in order to test our implementations on a simpler task and check that the implementation is correct. For that purpose, we use the MNIST handwritten digits dataset to create pages with handwritten digits, at fixed or variable scales, with or without noise. The number of object classes is 10, the digits ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]. Our MNIST dataset is composed of 1600 images of size 728x448, consisting of 28x28 square digits randomly placed on a 2D grid of stride 28 (a minimal page generator is sketched after the list of variants below). The dataset variants are (example images omitted):
- MNIST (default digit size 28): random digits at scale 28 at different positions on a grid;
- MNIST with white prob .4 (.9): more digits per position;
- MNIST with noise: cluttered with noise added randomly;
- MNIST with digit size in the 14-28 range: random digit scale between 14 and 28, random positions;
- MNIST with digit size 56: random digits at scale 56 at different positions on a grid;
- MNIST with digit size in the 28-56 range: random digit scale between 28 and 56;
- MNIST with 2 digit sizes 28, 56: digits at scales 28 and 56, positions on a grid;
- MNIST with 2 digit ranges 14-28, 28-56: random digit scales between 14 and 56, random positions.
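The following is a minimal sketch of how such a page can be generated; the fill probability, the absence of a noise model and the box format are our assumptions, not the authors' exact generator:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

def make_page(digits, labels, h=728, w=448, cell=28, fill_prob=0.4, rng=None):
    """Place random MNIST digits on a page over a 2D grid of stride
    `cell`, returning the image and the ground-truth boxes."""
    rng = rng or np.random.default_rng()
    page = np.zeros((h, w), dtype=np.float32)
    boxes = []  # (x, y, size, class) per placed digit
    for gy in range(h // cell):
        for gx in range(w // cell):
            if rng.random() > fill_prob:   # cf. the 'white prob' variants
                continue
            k = rng.integers(len(digits))
            page[gy * cell:(gy + 1) * cell, gx * cell:(gx + 1) * cell] = digits[k]
            boxes.append((gx * cell, gy * cell, cell, int(labels[k])))
    return page, boxes

(x_train, y_train), _ = mnist.load_data()
page, boxes = make_page(x_train / 255.0, y_train)
```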
Our MNIST dataset with noise adds random distortions to create a high level of noise on the images and test the robustness of the models. To check the performance of the position prediction, we set a different network stride, for example 12 (setting stride scale to 6), so that the network grid of positions where the model is evaluated does not fall exactly on the grid of characters. That way, some digits will appear cropped, up to 6 pixels off horizontally and vertically, from the viewpoint of the network, i.e. its 28x28 receptive field. To check the performance of the scale prediction, we build an MNIST dataset with digits randomly resized in the [14-28] range. Before adding a layer to our network architecture as in SSD (Liu et al., 2015), we also check larger models at a bigger scale, with an MNIST dataset of 56-pixel-wide digits. Last, we build a two-scale dataset for two-layer predictions as in SSD (Liu et al., 2015), with digits at sizes 28 and 56, and add a middle output to CNN_C32_C64_M2_C64_C64_M2_C128_D to build CNN_C32_C64_M2_C64_C64_M2_C128_D_2, a two-scale network. For a continuous scale between 14 pixels and 56 pixels, we build another two-scale dataset with 2 digit size ranges, 14-28 and 28-56.

4.2 OCR DATA
We build our own dataset of 8000 document pages, split into train (90%) and validation (10%) sets, for a fast check of our approach. Document PDFs are converted to images, with a resolution chosen automatically to obtain normal-sized characters. To have fixed-size image inputs for batched network training, document images are then randomly cropped to a 728x448 area containing characters, giving the same input size as for our MNIST dataset. We consider uppercase and lowercase letters, digits, the two parentheses and the % sign. The number of classes is 65: ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "(", ")", "%"]. Letters are filtered by their size to fall in the range of [14-56] pixels, and we start with the two-scale networks ([14-28] and [28-56]) tested on our MNIST dataset. [Figure: histograms of character widths and character heights.]

5 EXPERIMENTS

5.1 IMPLEMENTATION DETAILS
Code has been developed in Python with the Keras deep learning framework, for the Tensorflow and CNTK compute engines. It is compatible with Python 2.7 and 3.5 and allows multi-GPU training. For training, the batch size is 3, the optimizer is Adam and the learning rate 0.001. Hyperparameters are searched by simple grid search. To create the OCR dataset, we use Tesseract OCR on 10 000 documents.

5.2 TOY DATASET

5.2.1 DIGITS CENTERED IN NETWORK FIELD OF VIEW
On the MNIST toy dataset, digits are always centered on a grid (of 28x28). A 28-pixel-strided LeCun convolutional model offers a class accuracy above 99.2%, since every positive position falls centered on a digit centered on the grid of stride 28. The object mAP score is above 0.99 at 12 epochs with our simple model CNN_C32_C64_M2_C128_D. With noise, the object mAP score with our simple model CNN_C32_C64_M2_C128_D is above 0.98, and classification accuracy drops to 98.7. [Figure: detections on mnist and mnist with noise.]

Command | Obj acc | Class acc | Reg acc | Obj mAP
MNIST | 100 | 99.2827 | 1.60e-10 | 99.93
MNIST with noise | 99.62 | 98.92 | 4.65e-6 | 98.41
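The object mAP reported here and in all following tables merges the classes into a single one before computing average precision. A minimal sketch of that computation follows; the greedy matching and the 0.5 IOU threshold are our assumptions, since the paper does not spell out the matching rule:

```python
import numpy as np

def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def object_ap(preds, gts, iou_thr=0.5):
    """Single-class average precision: predictions (score, box) are
    ranked by objectness and greedily matched to ground-truth boxes."""
    preds = sorted(preds, key=lambda p: -p[0])
    matched = [False] * len(gts)
    hits = []
    for score, box in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            o = iou(box, g)
            if not matched[j] and o > best:
                best, best_j = o, j
        if best >= iou_thr:
            matched[best_j] = True
            hits.append(1.0)
        else:
            hits.append(0.0)
    hits = np.array(hits)
    precision = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    # AP as the precision averaged over the recall points (one per TP)
    return float(np.sum(precision * hits) / max(len(gts), 1))
```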
5.2.2 THE EFFECT OF STRIDE AND IOU ON AVERAGE PRECISION
To test the network's capability to predict position, we need to use a network stride different from the data grid stride. For example, with stride 12 instead of 28, most digits are no longer in the center of the network receptive field (except in the first row and column of our image). Also, most digits will appear cropped in the field of view of the network, and the IOU threshold defines how much cropping is allowed while still considering the position on the grid as positive. In the worst case, the network stride can be so large that some digits do not appear on the output grid at all, equivalent to an Intersection Over Union (IOU) of zero (the intersection area is zero). Usually, the stride is not larger than the receptive field, and under this condition, the worst-case intersection area between a digit and the best-placed network field is 50% × 50% = 0.25 of the digit area, while the union is 1.75, leading to a minimal IOU of 0.14. With a smaller stride, for example 12 as below, the IOU threshold can be set higher without losing any digit for reconstruction:

IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
.15 | 96.37 | 36.25 | 0.010 | 99.97 | 100
.2 | 98.42 | 28.56 | 0.012 | 99.75 | 100
.25 | 97.05 | 36.42 | 0.015 | 99.52 | 100
.3 | 98.35 | 92.78 | 0.0013 | 99.88 | 100
.35 | 98.99 | 83.72 | 0.0069 | 99.22 | 100
.4 | 98.70 | 94.96 | 0.0066 | 98.37 | 100
.5 | 96.71 | 95.46 | 0.0062 | 91.09 | 95.71
.6 | 99.92 | 98.23 | 4.8e-05 | 51.80 | 54.32
.8 | 99.90 | 97.90 | 7.67e-05 | 8.5 | 10.63
.95 | 99.94 | 97.27 | 3.7e-07 | 10.80 | 12.21
.99 | 99.91 | 97.66 | 7.06e-07 | 9.3 | 11.71

The large drop in classification accuracy at low IOU thresholds suggests that classification would benefit from better-localized digits in the receptive field, which would encourage the use of 2-stage detectors. To reduce the impact of the stride, we set a stride margin (see the experiment section on OCR data) on the maximal digit size considered at a layer scale, so that there is always one position on the network grid at which the character is fully seen by the network. Reconstruction of the ground truth from the target data at 100% is only possible up to an IOU threshold of 0.4, after which the network stride should be decreased. With a smaller stride of 4, reconstruction at 100% is possible over most of the IOU range:

IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
.2 | 98.51 | 72.71 | 0.034 | 99.99 | 100
.25 | 98.63 | 78.53 | 0.018 | 100 | 100
.3 | 97.88 | 94.54 | 0.0098 | 99.89 | 100
.4 | 96.85 | 97.41 | 0.0098 | 99.93 | 100
.5 | 94.14 | 98.81 | 0.0099 | 99.61 | 100
.6 | 99.80 | 98.57 | 0.00031 | 99.93 | 100
.7 | 99.64 | 98.21 | 0.0016 | 99.77 | 100
.8 | 100 | 98.19 | 1.7e-8 | 82.24 | 100
.8 (30 epochs) | 99.98 | 99.35 | 1.73e-9 | 91.05 | 100

[Figure: detections on mnist at stride 6 and stride 12.]

The images below show the target for an IOU of .2 for digits at scales between 7 and 14 pixels. The right image shows that with a large stride, small digits cut by the receptive field are dropped because of a too small IOU with the anchor, while with a smaller stride, the IOU threshold does not remove good candidates. A smaller stride enables working at a higher IOU, with better mAP scores. [Figure: with stride 14, the final object mAP is 72.31; with stride 4, the final object mAP is 98.61.]
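The minimal-IOU arithmetic above (and the half-size-digit case worked out in the next subsection) can be checked with a few lines. This sketch is ours, assuming axis-aligned square boxes and a worst-case grid offset of stride/2 per axis:

```python
def worst_case_iou(field=28.0, stride=28.0, digit=28.0):
    """Worst-case IOU between a digit and the best-placed receptive
    field: the digit center can be off by up to stride/2 per axis,
    so the per-axis overlap is min(digit, (field - stride + digit) / 2)."""
    overlap = min(digit, (field - stride + digit) / 2.0)
    inter = overlap ** 2
    union = field ** 2 + digit ** 2 - inter
    return inter / union

print(round(worst_case_iou(28, 28, 28), 3))  # 0.143 -> the minimal IOU of 0.14
print(round(worst_case_iou(28, 28, 14), 3))  # 0.053 -> the 0.052 quoted for half-size digits
print(round(worst_case_iou(28, 12, 28), 3))  # a smaller stride raises the worst-case IOU
```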
5.2.3 DIGIT SCALE PREDICTION
The second reason (after the cropped digits) to use a smaller IOU threshold is to capture small digits. For example, for digits two times smaller, the maximal intersection of a digit with a network receptive field is 0.5 × 0.5 × 0.25 (the maximal intersection area for full-size digits), hence 0.0625, while the union is 1 + 0.5 × 0.5 × (1 − 0.25) = 1.1875, leading to a minimal IOU of 0.052, about 3 times smaller than for the full digit size. With a scale range of 14-28 for the digit sizes, the target object mAP (obtained when we reconstruct the bounding boxes from the target) remains 100% at IOU 0.25 for a stride of 12 pixels. The predicted object mAP is 99.58. The classification accuracy drops to 89.37%.

5.2.4 HIGHER CAPACITY NETWORKS
Let us double the number of kernels to create the CNN_C64_C128_M2_C256_D model. At stride 12 and IOU .3, classification accuracy increases from 92.78 to 96.84, while objectness remains roughly perfect. At stride 4 and IOU .25, it increases from 78.53 to 95.78%.

Parameters | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
Stride 12, IOU .5 | 99.59 | 98.02 | 0.00078 | 92.32 | 94.89
Stride 12, IOU .4 | 99.17 | 97.23 | 0.0047 | 99.79 | 100
Stride 12, IOU .3 | 99.74 | 96.84 | 0.00043 | 100 | 100
Stride 12, IOU .2 | 97.57 | 91.14 | 0.0016 | 99.98 | 100
Stride 12, IOU .15 | 98.02 | 83.85 | 0.0083 | 99.95 | 100
Stride 4, IOU .5 | 99.80 | 98.87 | 0.00053 | 100 | 100
Stride 4, IOU .25 | 99.48 | 95.78 | 0.00054 | 100 | 100
14-28 pixel wide, Stride 12, IOU .25 | 96.58 | 91.42 | 0.0045 | 99.85 | 100

5.2.5 MULTI-STAGE NETWORKS
In order to capture digits in a bigger range than 28 pixels, we try networks with double the receptive field size, adding more layers (the CNN_C32_C64_M2_C64_C64_M2_C128_D model), and possibly multiple outputs at multiple layer stages (the CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model) as in SSD (Liu et al., 2015). First, we check our model with the bigger field, CNN_C32_C64_M2_C64_C64_M2_C128_D, on the MNIST dataset of 56-pixel-wide digits. The object mAP score is 1 while classification accuracy is 99.2% at 12 epochs, meaning this first architecture with a 56x56 receptive field deals well with digits twice as big. Then we add a second output to our network architecture as in SSD (Liu et al., 2015) to build the CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model; on a 2-scale dataset with digits at sizes 28 and 56, object mAP scores remain stable at 99.44 and 99.64 for network strides 12 and 4 respectively. On a 2-scale dataset with digits at size ranges 14-28 and 28-56, the object mAP score with our CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model is 98.82%, and for the double-size CNN_C64_C128_M2_C128_C128_M2_C256_D_2 it is 99.11%.

Model | Digit size | Stride | IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
S | 28-56 | 12 | .25 | 98.99 | 93.92 | 0.0018 | 99.89 | 100
S | 14-28, 28-56 | 12 | .25 | 98.92 / 98.04 | 64.06 / 91.08 | 0.0037 / 0.0056 | 98.82 | 99.90
D | 14-28, 28-56 | 12 | .2 | 98.57 / 97.73 | 58.30 / 79.84 | 0.0058 / 0.0036 | 98.31 | 99.90
D | 14-28, 28-56 | 12 | .25 | 99.10 / 98.16 | 93.64 / 95.28 | 0.0016 / 0.0014 | 98.42 | 99.93
D, 50e | 14-28, 28-56 | 12 | .25 | 99.26 / 98.78 | 93.91 / 94.02 | 0.0010 / 0.0014 | 98.81 | 99.93
D, 50e | 14-28, 28-56 | 12 | .2 | 99.05 / 98.05 | 89.88 / 91.97 | 0.0021 / 0.0022 | 99.11 | 99.97
S | 14-56 | 12 | .02 | 97.58 | 30.17 | 0.10 | 75.07 | 100
S | 14-56 | 12 | .05 | 97.92 | 53.20 | 0.027 | 75.49 | 100
S | 14-56 | 12 | .1 | 97.82 | 58.44 | 0.0057 | 87.45 | 92.67
S | 14-56 | 12 | .2 | 98.82 | 79.23 | 0.0010 | 72.36 | 75.78
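Each output grid of the multi-scale models has its own stride, receptive field and offset, so predictions must be decoded per scale before being merged. The paper specifies the output activations (sigmoid objectness, tanh position, sigmoid scale) but not the exact box encoding; the conventions below are therefore our assumptions, for illustration only:

```python
import numpy as np

def decode_grid(obj, pos, scale, stride, field, offset, obj_thr=0.5):
    """Turn one output grid back into boxes. Assumed conventions:
    pos in [-1, 1] is the center offset in units of stride / 2, and
    scale in [0, 1] is the box side as a fraction of the receptive
    field. obj: (H, W); pos, scale: (H, W, 2)."""
    boxes = []  # (score, x1, y1, x2, y2)
    for y, x in zip(*np.where(obj > obj_thr)):
        cx = offset + x * stride + pos[y, x, 0] * stride / 2.0
        cy = offset + y * stride + pos[y, x, 1] * stride / 2.0
        w = scale[y, x, 0] * field
        h = scale[y, x, 1] * field
        boxes.append((obj[y, x], cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

# For the 2-scale model with stride scale 6, the two outputs would be
# decoded with their respective parameters and concatenated, e.g.
#   decode_grid(o1, p1, s1, stride=12, field=28, offset=14)
# + decode_grid(o2, p2, s2, stride=24, field=56, offset=28)
```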
5.2.6 LOW RESOLUTION
In order to train on full document images rather than 700-pixel-high crops, the resolution has to be smaller to fit in the GPU. For that reason, we look at models that recognize digits at a maximum size of 14 pixels instead of 28. We build a model CNN_C32_C64_C128_D by removing the max pooling layer. The network input field becomes 14 pixels wide, and the stride is divided by 2. With stride 8, after 30 epochs:

Digit size | IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
14 | .3 | 97.12 | 94.50 | 0.012 | 99.91 | 100
7-14 | .2 | 98.58 | 73.07 | 0.0087 | 98.61 | 100
7-14 | .25 | 99.07 | 75.34 | 0.012 | 98.98 | 100

To capture digits in the larger 7-28 pixel range, we remove the 2 max pooling layers from our 56-pixel-wide model, to build CNN_C32_C64_C64_Cd64_C128_D. At stride 3:

Epochs | IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
30 | .1 | 97.47 | 73.70 | 0.010 | 87.19 | 95.45
30 | .2 | 99.08 | 92.84 | 0.0074 | 81.01 | 76.47
50 | .15 | 98.71 | 88.02 | 0.0046 | 87.79 | 84.76
50 | .1 | 97.97 | 79.19 | 0.0096 | 89.17 | 95.24

On the 7-28 pixel digit range, the network sometimes learns a better reconstruction than the target, due to the hard IOU threshold decision in the target computation. [Figure: at stride 8 and IOU 0.2, the target mAP score is 76% while the result mAP score is 80%; at stride 8 and IOU 0.15, the target mAP score is 86% while the result mAP score is 89%.]

5.3 OCR DATASET
[Figure: target (training data); detection results (with, in the top left corner, each layer's receptive field minus the stride margin); results filtered by NMS.]

5.3.1 TARGET
We experiment with different settings to define positives on the grid, and compute the target average precision obtained if we reconstruct the bounding boxes from the target instead of the prediction results. We also compute the final average precision obtained by the trained model in each setting. We consider positive a position on the grid that has a sufficient IOU with the receptive field of the network.

Parameters | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
Stride 4+8, IOU 0.15 | 97.00 / 97.76 | 69.11 / 71.78 | 0.027 / 0.016 | 58.82 | 91.22
Stride 4+8, IOU 0.2 | 97.89 / 98.44 | 75.39 / 72.75 | 0.020 / 0.011 | 68.09 | 84.47
Stride 4+8, IOU 0.25 | 98.19 | 81.43 | 0.014 | 64.69 | 65.40
Stride 6+12, IOU 0.15 | 97.52 / 97.58 | 72.18 / 77.03 | 0.028 / 0.015 | 67.05 | 86.07
Stride 6+12, IOU 0.2 | 98.24 / 98.25 | 79.01 / 79.47 | 0.019 / 0.10 | 66.25 | 78.15
Stride 6+12, IOU 0.25 | 98.60 / 98.90 | 80.17 / 78.93 | 0.015 / 0.0075 | 62.71 | 66.42
Stride 8+16, IOU 0.15 | 97.90 / 97.50 | 72.05 / 74.58 | 0.029 / 0.017 | 62.87 | 89.77
Stride 8+16, IOU 0.2 | 98.42 / 97.99 | 78.35 / 79.15 | 0.021 / 0.012 | 66.30 | 83.94
Stride 8+16, IOU 0.25 | 98.88 / 98.61 | 77.64 / 81.11 | 0.017 / 0.0077 | 60.26 | 69.35
Stride 10+20, IOU 0.15 | 98.47 / 97.36 | 70.94 / 77.87 | 0.031 / 0.018 | 59.33 | 85.87
Stride 10+20, IOU 0.2 | 98.92 / 97.76 | 67.94 / 80.13 | 0.021 / 0.014 | 51.87 | 77.52
Stride 10+20, IOU 0.25 | 99.09 / 98.45 | 70.41 / 83.67 | 0.018 / 0.0097 | 44.59 | 61.57

[Figure: training curves for IOU 0.2, IOU 0.15 and IOU 0.25.]

The target average precision is better when the IOU is low, since the grid then misses no ground-truth boxes; nevertheless, the model possibly learns better at a higher IOU, which also leads to better classification results. We also tried considering as positive any character that falls in the receptive field of the network. The target average precision is then very close to 1, but the final average precision remains below 0.48.

5.3.2 STRIDE MARGIN
Since the network performs a strided analysis of the input image, we consider that characters should fall entirely into the receptive field of the network at one positive position. For that reason, we use a stride margin, i.e. we filter out characters larger than the receptive field dimension minus the stride. When this setting is deactivated, some characters are no longer seen completely by the network, and the prediction of position and scale should be harder to perform; the object mAP score becomes 78.5%. [Figure: training curves with the stride margin and without the stride margin.]
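The duplicate predictions visible before filtering (see the figures at the start of Section 5.3) are removed with non-max suppression. A greedy sketch, reusing the iou helper from the object-mAP sketch above; the 0.3 suppression threshold is our assumption:

```python
def nms(boxes, iou_thr=0.3):
    """Greedy non-max suppression over (score, x1, y1, x2, y2) boxes:
    keep boxes in decreasing score order, dropping any box that
    overlaps an already-kept box by more than iou_thr."""
    kept = []
    for b in sorted(boxes, key=lambda b: -b[0]):
        if all(iou(b[1:], k[1:]) < iou_thr for k in kept):
            kept.append(b)
    return kept
```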
5.3.3 POSITIVE WEIGHT AND LOSS WEIGHTS
[Figure: training curves for pos weight 1000, pos weight 100 and an unlabeled pos weight run.] The best results are obtained with pos weight = 100.

5.3.4 KERNEL WIDTH
To see the influence of the width of the layers, we try doubling the number of filters in each layer, leading to the CNN_C64_C128_M2_C128_C128_M2_C256_D_2 model. A wider architecture does not seem to help much. [Figure: training curves for CNN_C32_C64_M2_C64_C64_M2_C128_D_2 and CNN_C64_C128_M2_C128_C128_M2_C256_D_2.]

Parameters | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
Stride 6+12, IOU 0.2 | 98.45 / 98.66 | 83.27 / 85.42 | 0.018 / 0.0097 | 70.11 | 78.15

5.3.5 FULL DOCUMENT, AT LOW RESOLUTION
Since document images are large, we used a generator for the preprocessing, and do not have ground truth for mAP computation. Results are evaluated qualitatively. The best results with convolutions of 28-pixel-wide receptive fields are obtained with images of max size 1000 pixels, since most characters then fall at the right size to be recognized. At 1000 pixels the main fields are read correctly, while at 1500 pixels the model misses the address. In further results at scale 1000, very small characters (below 7 pixels), as well as too-big characters, are missed, but the main invoice information (amounts, headers, client, ...) is recognized correctly.

6 CONCLUSION
Object detection architectures sound promising for high-quality character reading and the development of document features through end-to-end training. Classical rule-based algorithms can reuse these features to solve higher-level tasks, such as line extraction. This opens the way for a community-driven search for the best model architectures. Future work includes reusing improvements from object detection models, such as multiple anchors, two-stage detection, focal loss, and optimization tuning for larger images, batches or other input resolutions.
1. What is the focus of the paper, and how does it relate to previous works in the field? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to existing methods? 3. How clear and concise is the writing in the paper, and how well are the figures and tables explained? 4. Are there any notable contributions or innovations presented in the paper? 5. How does the reviewer assess the overall quality and impact of the paper?
Review
Review
This paper applies an object detection network, like SSD, to optical character detection and recognition. The paper doesn't offer any new contributions and has no potential value.

Weaknesses:
1. The paper lacks novelty and the motivation is weak. I can't even find any contribution to OCR or object detection.
2. The paper is badly written, so it is hard to follow. In addition, the figures and tables are not always explained in the main body, which makes the experimental results confusing.
3. There are no titles on the figures and tables in this paper.
4. The authors don't confirm the superiority of the proposed method over others.

Minor comments:
1. What is the meaning of Target mAP in the tables?
2. It seems that some figures are cropped from TensorBoard, with some extra shadows.
ICLR
Title Object detection deep learning networks for Optical Character Recognition Abstract In this article, we show how we applied a simple approach coming from deep learning networks for object detection to the task of optical character recognition in order to build image features taylored for documents. In contrast to scene text reading in natural images using networks pretrained on ImageNet, our document reading is performed with small networks inspired by MNIST digit recognition challenge, at a small computational budget and a small stride. The object detection modern frameworks allow a direct end-to-end training, with no other algorithm than the deep learning and the non-max-suppression algorithm to filter the duplicate predictions. The trained weights can be used for higher level models, such as, for example, document classification, or document segmentation. 1 INTRODUCTION Document images make the use of deep learning networks a complex task, since most deep learning network architectures have been designed and trained for natural images, making them useless for document images which are mainly white and black characters and figures. This is in particular the case for classification networks (VGG, ResNets, ...), object detection networks (Fast RCNN, Faster RCNN, Yolo, SSD, ...), segmentation networks (FCN, U-Net, SegNet, DeconvNet, Dilated-Net, ParseNet, DeepLab...) which cannot be applied directly, even with finetuning. Two challenges arise with deep learning and document data. First, we need to train specific features for the type of data. Second, the available datasets can be of smaller size than classical datasets for computer vision (ImageNet, COCO, ...), in particular when it is required to annotate images for a specific purpose. To reduce the amount of data to train high level descriptions of the document images, such as document zones, segments, lines, or elements, the idea is to train a smaller network on OCR data which exists at massive scale, and use the weights of this small network as pretrained early layers in bigger networks to perform high level tasks with less required data. In our experiments, we show that best performing approaches currently available for object detection on natural images can be used with success at OCR tasks. Code will be released on Github, so that the open research community can bring the best model architectures in terms of accuracy and speed/size efficiency. 2 RELATED WORK In this section we quickly review the literature on OCR and object detection. 2.1 APPROACHES FOR OCR Most deep learning approaches using Object Detection methods for OCR are applied to the task of scene text recognition also called text spotting, which consists in recognizing image areas of text, such as a sign or a wall plaque. Once the text area is recognized, a reading method is applied inside the zone. Some approaches use weakly supervised training either using a CTC loss leaving the alignment between the character positions and the output result to a recurrent network such as bidirectionnal LSTM (He et al. (2015), Jaderberg et al. (2014b), Wang et al. (2018), Goodfellow et al. (2013), dro (2017), Liao et al. (2016)) or using a fixed number of softmax classifiers (Jaderberg et al. (2015), Bartz et al. (2017)) ; some other approaches use guided learning (He et al., 2018). 
These approaches are mainly driven by the Street View SVHN, Uber-Text (Zhang et al., 2017), FSNS (Smith et al., 2017), Coco-text (Veit et al., 2016), ICDAR 2003 (Lucas et al., 2003) and 2015 (Karatzas et al., 2015), SVT and IIIT5K (Mishra et al., 2012), Synth90k (Jaderberg et al., 2014a) datasets. Rather than recognizing at word level or scene text level, few approaches concern direct detection of characters in natural images, using a localization network in ST-CNN (Jaderberg et al., 2015), or modern object detection approach in yolo-digits (Redmon & Farhadi, 2018) to recognize digits in natural images. This work is the first to apply modern object detection deep learning approaches to document data with small convolutional networks, without converting them to natural images as in (Gilani et al., 2017). (Tensmeyer & Martinez, 2017) shows that document classification accuracy decreases with deeper networks. 2.2 APPROACHES FOR OBJECT DETECTION Modern object detections approaches are divided into two classes. The first class yields to the highest accuracy object detectors, such as Fast-RCNN (Girshick, 2015), Faster-RCNN (Ren et al., 2015), Mask-RCNN (Detectron) (He et al., 2017), and is based on the twostage approach of R-CNN (Girshick et al., 2013). In the first stage, an algorithm, such as Selective Search, or a deep learning model, generates a set of candidate proposals for object regions. In the second stage, a deep learning network classifies the presence of the object (the objectness), its class, as well as estimates the precise object bounding box. In the second class of object detectors, the objectness, the class as well as the bounding box regression, are directly predicted by a single dense deep learning network. These approaches include OverFeat (Rowley et al., 1995), Yolo (Redmon et al. (2015), Redmon & Farhadi (2016), Redmon & Farhadi (2018)) or SSD (Liu et al., 2015). 3 OUR APPROACH TO OCR In our work, as a first attempt to use object detection networks to OCR, we design a single stage object detector, predicting the confidence of an object presence, the class, and the regression for the bounding box. In order to cope with multiple scales we use the feature pyramid approach of SSD (Liu et al., 2015). 3.1 ARCHITECTURES Our 1-scale models are inspired by the LeCun model for digit classification except that the dense layer have been converted to convolutions (locally linear) in order to compute a prediction at multiple positions in the image on a grid defined by the stride of the whole network. These models are composed of 2 convolution layers of kernel 3 and 32 and 64 features respectively, followed by a max pooling of stride 2 and 1 convolution layers of kernel 12 and 128 features, so that the receptive field of the network is 28 pixel large and wide. Offset of the model with valid paddings will be 14. We consider the stride of the last convolution as a parameter, stride scale, to adjust the stride of the whole model which will be 2 × stride scale. On top of these features, 4 stacks of dense layers are used for objectness, classification, position and scale regressions. We named this 1-scale model CNN_C32_C64_M2_C128_D. Our 2-scale models are composed of 2 convolution layers of kernel 3 and 32 and 64 features respectively, followed by a max pooling of stride 2 and 2 other convolution layers of kernel 3 and 64 features and another max pooling of stride 2. 
Each max pooling layer is followed by a convolution layer of kernel 11 and 12 respectively, so that the receptive field for each output is 28 and 56 pixel large and wide. Offset for each output is 14 and 28 respectively. We consider the stride of the output convolutions as a variable parameter, stride scale, to adjust the stride of the whole model which will be 2 × stride scale for the first output, and 4 × stride scale for the second output. On top of these 2 stages of features, 4 stacks of dense layers are used as well. We name this 2-scale model CNN_C32_C64_M2_C64_C64_M2_C128_D_2. Each layer is followed by a ReLU activation, except for the outputs: objectness is computed with a sigmoid, classification with a softmax, position with hyperbolic tangent and scale with sigmoid. 1-scale model: 2-scale model: 3.2 LOSS For objectness, we need to consider the abundance of negative positions compared to positive positions. That is why we use the Tensorflow expression of weighted crossentropy designed to ensure stability and avoid overflow: (1− z)× x+ l × log(1 + exp(− | x |) + max(−x, 0) where l = (1 + (q − 1) × z) and x = logits, z = targets, q = pos weight. We found that a positive weight of 1000 works well on our OCR dataset. The loss for classification and regression are crossentropy loss and MSE loss. We found that adding a multiplier of 100 to the regression loss help converge faster. 3.3 COMPUTATION OF AVERAGE PRECISION It is common to use the mAP score as the final metric for object detection. In our case, we consider all classes as one class in order to use average precision as metric to measure the capacity of the models in terms of objectness and not classification. We use the name object mAP to distinguish it from classical mAP score. The reason for this choice is that we focus on character detection in this work. For full character recognition, early results suggest that two-stage detectors might be of better fit than a one-stage detector, because in our 1-stage setting, classification accuracy drops when the classification network is not well positioned on the character (see Stride experiments on Mnist), and this argument could give an advantage to 2-stage detectors. Later on, we might add a second stage on top of this network as in Mask RCNN or Faster RCNN and this network might become a region proposal network. We leave this as future work, which purpose will be to improve the classification accuracy. 4 DATASETS 4.1 TOY DATASET We build a toy dataset in order to test our implementations on a simpler task and check that the implementation is correct. For that purpose, we use the MNIST handwritten digits dataset to create pages with handwritten digits, at fixed or variable scales, with or without noise. The number of object classes is 10, the digits [”0”, ”1”, ”2”, ”3”, ”4”, ”5”, ”6”, ”7”, ”8”, ”9”]. Our MNIST dataset is composed of 1600 images of size 728x448, consisting of 28x28 square digits randomly placed on a 2D grid of stride 28. MNIST (default digit size 28) MNIST with white prob .4 ( .9) MNIST with noise MNIST with digit size in 14-28 range random digits at scale 28 at different positions on a grid more digits per positions cluttered with noise added randomly random digit scale between 14 and 28. position is random MNIST digit size 56 MNIST with digit size in 28-56 range MNIST with 2 digit sizes 28,58 MNIST with 2 digit ranges 14-28,28-56 random digits at scale 56 at different positions on a grid random digits scale between 28 and 56. 
random position digits at scales 28 and 56. Positions on a grid random digit scales between 14 and 56. random positions. Our MNIST dataset with noise adds random distortions to create a high level of noise on the images and test the robustness of the models. To check the performance of the position prediction, we set a different network stride, for example 12 (setting stride scale to 6), so that the network grid of positions where the model is evaluated in the convolutions, do not fall exactly on the grid of characters. That way, some digits will appear cropped, up to 6 pixels off horizontally and vertically, in the viewpoint of the network, ie its 28x28 receptive field. To check the performance of the scale prediction, we build a MNIST dataset with digits randomly resized in the [14-28] range. Before adding a layer to our network architecture as in SSD (Liu et al., 2015), we also check larger models at a bigger scale, with a MNIST dataset of 56 pixel wide digits. Last, we build a two-scale dataset for two-layer predictions as in SSD (Liu et al., 2015), with digits at size 28 and 56, and add a middle output to CNN_C32_C64_M2_C64_C64_M2_C128_D to build CNN_C32_C64_M2_C64_C64_M2_C128_D_2, a two-scale network. For a continuous scale between 14 pixels and 56 pixels, we build another two-scale dataset with 2 digit size ranges, 14-28 and 28-56. 4.2 OCR DATA We build our own dataset of 8000 document pages, split into train (90%) and validation (10%) sets, for a fast check of our approach. Document PDF are converted to images with a resolution chosen automatically to have normal sized characters. To have fixed-sized image input for the network batched training, document images are then randomly cropped on a 728x448 area with characters, to have the same sized inputs as our mnist dataset. We consider uppercase and lowercase letters, digits, the two parenthesis and the % sign. The number of classes is 65: [”0”, ”1”, ”2”, ”3”, ”4”, ”5”, ”6”, ”7”, ”8”, ”9”, ”a”, ”b”, ”c”, ”d”, ”e”, ”f”, ”g”, ”h”, ”i”, ”j”, ”k”, ”l”, ”m”, ”n”, ”o”, ”p”, ”q”, ”r”, ”s”, ”t”, ”u”, ”v”, ”w”, ”x”, ”y”, ”z”, ”A”, ”B”, ”C”, ”D”, ”E”, ”F”, ”G”, ”H”, ”I”, ”J”, ”K”, ”L”, ”M”, ”N”, ”O”, ”P”, ”Q”, ”R”, ”S”, ”T”, ”U”, ”V”, ”W”, ”X”, ”Y”, ”Z”, ”(”, ”)”, ”%”] Letters are filtered by their size to fall in the range of [14-56] pixels and we start with two-scale networks ([14-28] and [28-56]) tested on our MNIST dataset. Character widths Character heights 5 EXPERIMENTS 5.1 IMPLEMENTATION DETAILS Code has been developed under Python with Keras deep learning framework, for Tensorflow and CNTK compute engines. It is compatible with Python 2.7 and 3.5 and allows multi-gpu training. For training, batch size is 3, the optimizer is Adam and the learning rate 0.001. Hyperparameters are searched by simple grid search. To create the OCR dataset, we use Tesseract OCR on 10 000 documents. 5.2 TOY DATASET 5.2.1 DIGITS CENTERED IN NETWORK FIELD OF VIEW On the MNIST toy dataset, digits are always centered on a grid (of 28x28). A 28-pixel-strided LeCun convolutional model offers a class accuracy above 99.2% since every positive position falls centered on the digit centered on a grid of stride 28. object mAP score is above 0.99 at 12 epochs with our simple model CNN_C32_C64_M2_C128_D. With noise, object mAP score with our simple model CNN_C32_C64_M2_C128_D is above 0.98. Classification accuracy drops to 98.7. 
mnist mnist noise Command Obj acc Class acc Reg acc Obj mAP MNIST 100 99.2827 1.60e-10 99.93 MNIST with noise 99.62 98.92 4.65e-6 98.41 5.2.2 THE EFFECT OF STRIDE AND IOU ON AVERAGE PRECISION To test the network capability to predict position, we need to use a network stride different than the data grid stride. For example, with stride 12 instead of 28, most digits are not anymore in the center of the network reception field (except first row and column of our image). Also, most digits will appear cropped in the field of view of the network and the IOU threshold defines how much crop ratio will be allowed to still consider the position on the grid as positive. In the worst case, the network stride can be so large that some digits do not appear on the output grid, equivalent to an Intersection Over Union (IOU) of zero (intersection area is zero). Usually, the stride is not larger than the receptive field, and under this condition, the maximal intersection area between any digit and any network field is 50% times 50% = 0.25, while the union is 1.75, leading to a minimal IOU of 0.14. In the case of a smaller stride, for example 12 as below, the IOU threshold can be set higher without losing any digit for reconstruction: IOU Obj acc Class acc Reg acc Obj mAP Target mAP .15 96.37 36.25 0.010 99.97 100 .2 98.42 28.56 0.012 99.75 100 .25 97.05 36.42 0.015 99.52 100 .3 98.35 92.78 0.0013 99.88 100 .35 98.99 83.72 0.0069 99.22 100 .4 98.70 94.96 0.0066 98.37 100 .5 96.71 95.46 0.0062 91.09 95.71 .6 99.92 98.23 4.8e-05 51.80 54.32 .8 99.90 97.90 7.67e-05 8.5 10.63 .95 99.94 97.27 3.7-07 10.80 12.21 .99 99.91 97.66 7.06e-07 9.3 11.71 The large drop in classification accuracy for a small stride suggests that classification would benefit from better localized digits in the receptive field, which would encourage the use of 2-stage detectors. To reduce the impact of the stride, we set a stride margin (see Experiment section on OCR data) on the digit max size to consider at a layer scale so that there is always one position on the network grid for which the character is fully seen by the network. Reconstruction of ground truth from target data at 100% is only possible until an IOU threshold of 0.4, after which the network stride should be decreased. With a smaller stride of 4, reconstruction at 100% is possible at most IOU range: IOU Obj acc Class acc Reg acc Obj mAP Target mAP .2 98.51 72.71 0.034 99.99 100 .25 98.63 78.53 0.018 100 100 .3 97.88 94.54 0.0098 99.89 100 .4 96.85 97.41 0.0098 99.93 100 .5 94.14 98.81 0.0099 99.61 100 .6 99.80 98.57 0.00031 99.93 100 .7 99.64 98.21 0.0016 99.77 100 .8 100 98.19 1.7e-8 82.24 100 .8 -e 30 99.98 99.35 1.73e-9 91.05 100 mnist mnist stride 6 mnist stride 12 The images below show the target for an IOU of .2 for digits at scale between 7 and 14 pixels. The right image shows that with a large stride, small digits cut in the receptive field are dropped because of a too small IOU with the anchor, while with smaller stride, the IOU threshold does remove good candidates. A smaller stride enables to work with higher IOU and better mAP scores. With stride 14, final object mAP is 72.31 With stride 4, final object mAP is 98.61 5.2.3 DIGIT SCALE PREDICTION The second reason (after the cropped digits) to use a smaller IOU threshold is to capture small digits. 
5.2.3 DIGIT SCALE PREDICTION

The second reason (after the cropped digits) to use a smaller IOU threshold is to capture small digits. For example, for digits two times smaller, the maximal intersection of a digit with a network receptive field is 0.5 × 0.5 × 0.25 = 0.0625 (0.25 being the maximal intersection area for full-size digits), while the union is 1 + 0.5 × 0.5 × (1 − 0.25) = 1.1875, leading to a minimal IOU of 0.0625/1.1875 ≈ 0.052, about 3 times smaller than for the full digit size. With a scale range of 14-28 for the digit sizes, the target object mAP (obtained when we reconstruct the bounding boxes from the target) remains 100% at IOU 0.25 for a stride of 12 pixels. The predicted object mAP is 99.58, while the classification accuracy drops down to 89.37%.

5.2.4 HIGHER CAPACITY NETWORKS

Let's double the number of kernels to create the CNN_C64_C128_M2_C256_D model. At stride 12 and IOU .3, classification accuracy increases from 92.78 to 96.84, while objectness remains roughly perfect. At stride 6 and IOU .2, it increases from 78.53 to 95.78%.

Parameters | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
Stride 12, IOU .5 | 99.59 | 98.02 | 0.00078 | 92.32 | 94.89
Stride 12, IOU .4 | 99.17 | 97.23 | 0.0047 | 99.79 | 100
Stride 12, IOU .3 | 99.74 | 96.84 | 0.00043 | 100 | 100
Stride 12, IOU .2 | 97.57 | 91.14 | 0.0016 | 99.98 | 100
Stride 12, IOU .15 | 98.02 | 83.85 | 0.0083 | 99.95 | 100
Stride 4, IOU .5 | 99.80 | 98.87 | 0.00053 | 100 | 100
Stride 4, IOU .25 | 99.48 | 95.78 | 0.00054 | 100 | 100
14-28 pixels wide, Stride 12, IOU .25 | 96.58 | 91.42 | 0.0045 | 99.85 | 100

5.2.5 MULTI-STAGE NETWORKS

In order to capture digits in a bigger range than 28 pixels, we try networks with double receptive field size, adding more layers (CNN_C32_C64_M2_C64_C64_M2_C128_D model), and possibly multiple outputs at multiple layer stages (CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model) as in SSD (Liu et al., 2015). First, we check our model with the bigger field, CNN_C32_C64_M2_C64_C64_M2_C128_D, on the MNIST dataset of 56-pixel-wide digits. The object mAP score is 1 while classification accuracy is 99.2% at 12 epochs, meaning this first architecture with a 56x56 receptive field deals well with digits twice as big. Then we add a second output to our network architecture as in SSD (Liu et al., 2015) to build the CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model; on a 2-scale dataset with digits at sizes 28 and 56, object mAP scores remain stable at 99.44 and 99.64 for network strides 12 and 4 respectively. On a 2-scale dataset with digits at size ranges 14-28 and 28-56, the object mAP score with our CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model is 98.82%, and for the double-width CNN_C64_C128_M2_C128_C128_M2_C256_D_2 it is 99.11%. A sketch of this two-scale architecture is given below.

Model | Digit size | Stride | IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
S | 28-56 | 12 | .25 | 98.99 | 93.92 | 0.0018 | 99.89 | 100
S | 14-28, 28-56 | 12 | .25 | 98.92 / 98.04 | 64.06 / 91.08 | 0.0037 / 0.0056 | 98.82 | 99.90
D | 14-28, 28-56 | 12 | .2 | 98.57 / 97.73 | 58.30 / 79.84 | 0.0058 / 0.0036 | 98.31 | 99.90
D | 14-28, 28-56 | 12 | .25 | 99.10 / 98.16 | 93.64 / 95.28 | 0.0016 / 0.0014 | 98.42 | 99.93
D, 50e | 14-28, 28-56 | 12 | .25 | 99.26 / 98.78 | 93.91 / 94.02 | 0.0010 / 0.0014 | 98.81 | 99.93
D, 50e | 14-28, 28-56 | 12 | .2 | 99.05 / 98.05 | 89.88 / 91.97 | 0.0021 / 0.0022 | 99.11 | 99.97
S | 14-56 | 12 | .02 | 97.58 | 30.17 | 0.10 | 75.07 | 100
S | 14-56 | 12 | .05 | 97.92 | 53.20 | 0.027 | 75.49 | 100
S | 14-56 | 12 | .01 | 97.82 | 58.44 | 0.0057 | 87.45 | 92.67
S | 14-56 | 12 | .2 | 98.82 | 79.23 | 0.0010 | 72.36 | 75.78
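As a concrete reading of the two-scale architecture described in Section 3.1, here is a minimal Keras sketch. It is not the released code: the exact kernel sizes at the two output stages (chosen so the receptive fields come out at 28x28 and 56x56), the feature counts, and the 1x1-convolution prediction heads are our assumptions:

```python
from tensorflow.keras import layers, Model

def two_scale_detector(num_classes=10, stride_scale=6):
    """Sketch of a CNN_C32_C64_M2_C64_C64_M2_C128_D_2-style detector."""
    inp = layers.Input(shape=(None, None, 1))
    x = layers.Conv2D(32, 3, activation='relu')(inp)
    x = layers.Conv2D(64, 3, activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    # First output stage: kernel 12 at feature stride 2 -> 28x28 receptive field,
    # overall stride 2 * stride_scale.
    f1 = layers.Conv2D(128, 12, strides=stride_scale, activation='relu')(x)
    x = layers.Conv2D(64, 3, activation='relu')(x)
    x = layers.Conv2D(64, 3, activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    # Second output stage: kernel 11 at feature stride 4 -> 56x56 receptive field,
    # overall stride 4 * stride_scale.
    f2 = layers.Conv2D(128, 11, strides=stride_scale, activation='relu')(x)
    outputs = []
    for f in (f1, f2):
        outputs += [
            layers.Conv2D(1, 1, activation='sigmoid')(f),            # objectness
            layers.Conv2D(num_classes, 1, activation='softmax')(f),  # class
            layers.Conv2D(2, 1, activation='tanh')(f),               # position
            layers.Conv2D(1, 1, activation='sigmoid')(f),            # scale
        ]
    return Model(inp, outputs)
```

The dense heads of the paper are written here as 1x1 convolutions, consistent with the paper's conversion of dense layers into convolutions for strided evaluation.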
5.2.6 LOW RESOLUTION

In order to train on full document images rather than a 700-pixel-high crop, the resolution has to be smaller to fit in the GPU. For that reason, we look at models that recognize digits at a maximum size of 14 pixels instead of 28. We build a model, CNN_C32_C64_C128_D, by removing the max pooling layer. The network input field becomes 14 pixels wide, and the stride is divided by 2. With stride 8 after 30 epochs:

Digit size | IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
14 | .3 | 97.12 | 94.50 | 0.012 | 99.91 | 100
7-14 | .2 | 98.58 | 73.07 | 0.0087 | 98.61 | 100
7-14 | .25 | 99.07 | 75.34 | 0.012 | 98.98 | 100

To capture digits in the larger 7-28 pixel range, we remove the 2 max pooling layers from our 56-pixel-wide model, to build CNN_C32_C64_C64_Cd64_C128_D. At stride 3:

Epochs | IOU | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
30 | .1 | 97.47 | 73.70 | 0.010 | 87.19 | 95.45
30 | .2 | 99.08 | 92.84 | 0.0074 | 81.01 | 76.47
50 | .15 | 98.71 | 88.02 | 0.0046 | 87.79 | 84.76
50 | .1 | 97.97 | 79.19 | 0.0096 | 89.17 | 95.24

On the 7-28 pixel digit range, the network sometimes learns a better reconstruction than the target, due to the hard IOU threshold decision in the target computation: the target mAP score is 76% at stride 8 and IOU 0.2, while the result mAP score is 80%; the target mAP score is 86% at stride 8 and IOU 0.15, while the result mAP score is 89%.

5.3 OCR DATASET

(Figures: target (training data); detection results, with each layer's receptive field minus the stride margin shown in the top left corner; results filtered by NMS.)

5.3.1 TARGET

We experiment with different settings to define positives on the grid, and compute the target average precision obtained if we reconstruct the bounding boxes from the target instead of the prediction results. We also compute the final average precision obtained by the trained model in each setting. We consider a position on the grid positive when it has a sufficient IOU with the receptive field of the network.

Parameters | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
Stride 4+8, IOU 0.15 | 97.00 / 97.76 | 69.11 / 71.78 | 0.027 / 0.016 | 58.82 | 91.22
Stride 4+8, IOU 0.2 | 97.89 / 98.44 | 75.39 / 72.75 | 0.020 / 0.011 | 68.09 | 84.47
Stride 4+8, IOU 0.25 | 98.19 | 81.43 | 0.014 | 64.69 | 65.40
Stride 6+12, IOU 0.15 | 97.52 / 97.58 | 72.18 / 77.03 | 0.028 / 0.015 | 67.05 | 86.07
Stride 6+12, IOU 0.2 | 98.24 / 98.25 | 79.01 / 79.47 | 0.019 / 0.10 | 66.25 | 78.15
Stride 6+12, IOU 0.25 | 98.60 / 98.90 | 80.17 / 78.93 | 0.015 / 0.0075 | 62.71 | 66.42
Stride 8+16, IOU 0.15 | 97.90 / 97.50 | 72.05 / 74.58 | 0.029 / 0.017 | 62.87 | 89.77
Stride 8+16, IOU 0.2 | 98.42 / 97.99 | 78.35 / 79.15 | 0.021 / 0.012 | 66.30 | 83.94
Stride 8+16, IOU 0.25 | 98.88 / 98.61 | 77.64 / 81.11 | 0.017 / 0.0077 | 60.26 | 69.35
Stride 10+20, IOU 0.15 | 98.47 / 97.36 | 70.94 / 77.87 | 0.031 / 0.018 | 59.33 | 85.87
Stride 10+20, IOU 0.2 | 98.92 / 97.76 | 67.94 / 80.13 | 0.021 / 0.014 | 51.87 | 77.52
Stride 10+20, IOU 0.25 | 99.09 / 98.45 | 70.41 / 83.67 | 0.018 / 0.0097 | 44.59 | 61.57

(Figure: training curves at IOU 0.2, 0.15 and 0.25.)

Target average precision is better when the IOU is low, since the grid then misses no ground-truth boxes; nevertheless, the model possibly learns better at a higher IOU, which also leads to better classification results. We also tried considering as positive any character that falls in the receptive field of the network: target average precision is very close to 1, but the final average precision remains below 0.48.
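The reconstruction of boxes from the output grid, on which these mAP numbers rely, can be sketched as follows. The decoding conventions for the position and scale outputs and the thresholds are assumptions, not the paper's code; `iou` is the helper defined earlier:

```python
import numpy as np

def decode_and_nms(obj, pos, scl, stride, field=28, obj_thresh=0.5, iou_thresh=0.5):
    """Turn grid outputs into boxes, then filter duplicates with greedy NMS.

    obj: (H, W) objectness; pos: (H, W, 2) offsets in [-1, 1];
    scl: (H, W) relative scale in (0, 1)."""
    boxes, scores = [], []
    H, W = obj.shape
    for gy in range(H):
        for gx in range(W):
            if obj[gy, gx] < obj_thresh:
                continue
            size = scl[gy, gx] * field
            # Box center: grid cell center plus a predicted sub-stride offset.
            cx = gx * stride + field / 2 + pos[gy, gx, 0] * stride / 2
            cy = gy * stride + field / 2 + pos[gy, gx, 1] * stride / 2
            boxes.append((cx - size / 2, cy - size / 2, size, size))
            scores.append(float(obj[gy, gx]))
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(boxes[i])
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```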
5.3.2 STRIDE MARGIN

Since the network performs a strided analysis of the input image, we consider that characters should fall entirely into the receptive field of the network at one positive position. For that reason, we apply a stride margin, i.e. we keep only characters whose size is smaller than the receptive field dimension minus the stride. When this setting is deactivated, some characters are no longer seen completely by the network, and the prediction of position and scale should be harder to perform: the object mAP score becomes 78.5%.

(Figure: training curves with and without stride margin.)

5.3.3 POSITIVE WEIGHT AND LOSS WEIGHTS

(Figure: training curves for pos weight 1000, pos weight 100, and the default pos weight.)

Best results are obtained with pos weight = 100.

5.3.4 KERNEL WIDTH

To see the influence of the width of the layers, we try doubling the number of filters of each layer, leading to the CNN_C64_C128_M2_C128_C128_M2_C256_D_2 model. A wider architecture does not seem to help much.

(Figure: training curves for CNN_C32_C64_M2_C64_C64_M2_C128_D_2 and CNN_C64_C128_M2_C128_C128_M2_C256_D_2.)

Parameters | Obj acc | Class acc | Reg acc | Obj mAP | Target mAP
Stride 6+12, IOU 0.2 | 98.45 / 98.66 | 83.27 / 85.42 | 0.018 / 0.0097 | 70.11 | 78.15

5.3.5 FULL DOCUMENT, AT LOW RESOLUTION

Since document images are wide, we use a generator for the preprocessing and do not have ground truth for mAP computation, so results are evaluated qualitatively. Best results with convolutions of 28-pixel-wide receptive fields are obtained with images of max size 1000 pixels, since most characters then fall at the right scale to be recognized. At size 1500 pixels, the address is missed. At scale 1000, very small characters (below 7 pixels) as well as too-big characters are missed, but the main invoice information (amounts, headers, client, ...) is recognized correctly.

6 CONCLUSION

Object detection architectures sound promising for high-quality character reading and the development of document features through end-to-end training. Classical rule-based algorithms can reuse these features to solve higher-level tasks, such as line extraction. This opens the way for a community search for the best model architectures. Future work includes reusing improvements from object detection models, such as multiple anchors, two-stage detection, focal loss, and optimization tuning for larger images, batches or other input resolutions.
1. What is the novelty and contribution of the paper in object detection and image classification?
2. How does the proposed pipeline improve existing OCR engines, and what scenarios can it be applied to?
3. Are there any previous works that have applied deep learning approaches to document data, and how does the current paper compare to those works?
4. What are the strengths and weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
5. How can the presentation of the paper be improved, specifically regarding data plots, tables, figures, and missing content?
Review
This paper lacks any novelty/contribution, as it just applies well-known, standard architectures for object detection (SSD) and image classification (LeNet) trained with standard algorithms and losses. Moreover, I fail to see the purpose of the proposed pipeline, and it is not at all clear how it may help improve existing OCR engines in any particular scenario (handwriting recognition, printed text, historical documents, etc.). No demonstration or comparison with the state of the art is provided. The authors claim "This work is the first to apply modern object detection deep learning approaches to document data", but there are previously published works. For example:

Tuggener, Lukas, et al. "DeepScores -- A Dataset for Segmentation, Detection and Classification of Tiny Objects." ICPR 2018.
Pacha, Alexander, et al. "Handwritten music object detection: Open issues and baseline results." DAS 2018.

Actually, in my opinion, music object detection in musical scores would be a much better test-bed/application for the proposed pipeline than any of the datasets used in this paper. The datasets used in the experimental section seem to be created ad hoc for the proposed pipeline and do not come from any real-world application. Finally, the presentation of the paper is marginal. Data plots have very bad resolution, no table or figure has a caption, and they are not correctly referenced within the text. There also seems to be missing content in the last sections, which makes them impossible to read/understand.
1. What is the main contribution of the paper?
2. Is the contribution novel and significant?
3. Does the paper provide enough evidence and experiments to support its claims?
4. Are there any limitations or weaknesses in the approach or methodology?
5. How does the reviewer assess the quality and impact of the paper's content?
Review
Unfortunately, the work does not introduce new contributions; the point of the paper is stated in the introduction: "In our experiments, we show that best performing approaches currently available for object detection on natural images can be used with success at OCR tasks." The work applies established object detection algorithms to OCR. While it provides a thorough experimental section exploring trade-offs in network hyper-parameters, the application of object detection to the OCR domain does not provide enough novelty to warrant publication.
ICLR
Title Object detection deep learning networks for Optical Character Recognition Abstract In this article, we show how we applied a simple approach coming from deep learning networks for object detection to the task of optical character recognition in order to build image features taylored for documents. In contrast to scene text reading in natural images using networks pretrained on ImageNet, our document reading is performed with small networks inspired by MNIST digit recognition challenge, at a small computational budget and a small stride. The object detection modern frameworks allow a direct end-to-end training, with no other algorithm than the deep learning and the non-max-suppression algorithm to filter the duplicate predictions. The trained weights can be used for higher level models, such as, for example, document classification, or document segmentation. 1 INTRODUCTION Document images make the use of deep learning networks a complex task, since most deep learning network architectures have been designed and trained for natural images, making them useless for document images which are mainly white and black characters and figures. This is in particular the case for classification networks (VGG, ResNets, ...), object detection networks (Fast RCNN, Faster RCNN, Yolo, SSD, ...), segmentation networks (FCN, U-Net, SegNet, DeconvNet, Dilated-Net, ParseNet, DeepLab...) which cannot be applied directly, even with finetuning. Two challenges arise with deep learning and document data. First, we need to train specific features for the type of data. Second, the available datasets can be of smaller size than classical datasets for computer vision (ImageNet, COCO, ...), in particular when it is required to annotate images for a specific purpose. To reduce the amount of data to train high level descriptions of the document images, such as document zones, segments, lines, or elements, the idea is to train a smaller network on OCR data which exists at massive scale, and use the weights of this small network as pretrained early layers in bigger networks to perform high level tasks with less required data. In our experiments, we show that best performing approaches currently available for object detection on natural images can be used with success at OCR tasks. Code will be released on Github, so that the open research community can bring the best model architectures in terms of accuracy and speed/size efficiency. 2 RELATED WORK In this section we quickly review the literature on OCR and object detection. 2.1 APPROACHES FOR OCR Most deep learning approaches using Object Detection methods for OCR are applied to the task of scene text recognition also called text spotting, which consists in recognizing image areas of text, such as a sign or a wall plaque. Once the text area is recognized, a reading method is applied inside the zone. Some approaches use weakly supervised training either using a CTC loss leaving the alignment between the character positions and the output result to a recurrent network such as bidirectionnal LSTM (He et al. (2015), Jaderberg et al. (2014b), Wang et al. (2018), Goodfellow et al. (2013), dro (2017), Liao et al. (2016)) or using a fixed number of softmax classifiers (Jaderberg et al. (2015), Bartz et al. (2017)) ; some other approaches use guided learning (He et al., 2018). 
These approaches are mainly driven by the Street View SVHN, Uber-Text (Zhang et al., 2017), FSNS (Smith et al., 2017), Coco-text (Veit et al., 2016), ICDAR 2003 (Lucas et al., 2003) and 2015 (Karatzas et al., 2015), SVT and IIIT5K (Mishra et al., 2012), Synth90k (Jaderberg et al., 2014a) datasets. Rather than recognizing at word level or scene text level, few approaches concern direct detection of characters in natural images, using a localization network in ST-CNN (Jaderberg et al., 2015), or modern object detection approach in yolo-digits (Redmon & Farhadi, 2018) to recognize digits in natural images. This work is the first to apply modern object detection deep learning approaches to document data with small convolutional networks, without converting them to natural images as in (Gilani et al., 2017). (Tensmeyer & Martinez, 2017) shows that document classification accuracy decreases with deeper networks. 2.2 APPROACHES FOR OBJECT DETECTION Modern object detections approaches are divided into two classes. The first class yields to the highest accuracy object detectors, such as Fast-RCNN (Girshick, 2015), Faster-RCNN (Ren et al., 2015), Mask-RCNN (Detectron) (He et al., 2017), and is based on the twostage approach of R-CNN (Girshick et al., 2013). In the first stage, an algorithm, such as Selective Search, or a deep learning model, generates a set of candidate proposals for object regions. In the second stage, a deep learning network classifies the presence of the object (the objectness), its class, as well as estimates the precise object bounding box. In the second class of object detectors, the objectness, the class as well as the bounding box regression, are directly predicted by a single dense deep learning network. These approaches include OverFeat (Rowley et al., 1995), Yolo (Redmon et al. (2015), Redmon & Farhadi (2016), Redmon & Farhadi (2018)) or SSD (Liu et al., 2015). 3 OUR APPROACH TO OCR In our work, as a first attempt to use object detection networks to OCR, we design a single stage object detector, predicting the confidence of an object presence, the class, and the regression for the bounding box. In order to cope with multiple scales we use the feature pyramid approach of SSD (Liu et al., 2015). 3.1 ARCHITECTURES Our 1-scale models are inspired by the LeCun model for digit classification except that the dense layer have been converted to convolutions (locally linear) in order to compute a prediction at multiple positions in the image on a grid defined by the stride of the whole network. These models are composed of 2 convolution layers of kernel 3 and 32 and 64 features respectively, followed by a max pooling of stride 2 and 1 convolution layers of kernel 12 and 128 features, so that the receptive field of the network is 28 pixel large and wide. Offset of the model with valid paddings will be 14. We consider the stride of the last convolution as a parameter, stride scale, to adjust the stride of the whole model which will be 2 × stride scale. On top of these features, 4 stacks of dense layers are used for objectness, classification, position and scale regressions. We named this 1-scale model CNN_C32_C64_M2_C128_D. Our 2-scale models are composed of 2 convolution layers of kernel 3 and 32 and 64 features respectively, followed by a max pooling of stride 2 and 2 other convolution layers of kernel 3 and 64 features and another max pooling of stride 2. 
Each max pooling layer is followed by a convolution layer of kernel 11 and 12 respectively, so that the receptive field for each output is 28 and 56 pixel large and wide. Offset for each output is 14 and 28 respectively. We consider the stride of the output convolutions as a variable parameter, stride scale, to adjust the stride of the whole model which will be 2 × stride scale for the first output, and 4 × stride scale for the second output. On top of these 2 stages of features, 4 stacks of dense layers are used as well. We name this 2-scale model CNN_C32_C64_M2_C64_C64_M2_C128_D_2. Each layer is followed by a ReLU activation, except for the outputs: objectness is computed with a sigmoid, classification with a softmax, position with hyperbolic tangent and scale with sigmoid. 1-scale model: 2-scale model: 3.2 LOSS For objectness, we need to consider the abundance of negative positions compared to positive positions. That is why we use the Tensorflow expression of weighted crossentropy designed to ensure stability and avoid overflow: (1− z)× x+ l × log(1 + exp(− | x |) + max(−x, 0) where l = (1 + (q − 1) × z) and x = logits, z = targets, q = pos weight. We found that a positive weight of 1000 works well on our OCR dataset. The loss for classification and regression are crossentropy loss and MSE loss. We found that adding a multiplier of 100 to the regression loss help converge faster. 3.3 COMPUTATION OF AVERAGE PRECISION It is common to use the mAP score as the final metric for object detection. In our case, we consider all classes as one class in order to use average precision as metric to measure the capacity of the models in terms of objectness and not classification. We use the name object mAP to distinguish it from classical mAP score. The reason for this choice is that we focus on character detection in this work. For full character recognition, early results suggest that two-stage detectors might be of better fit than a one-stage detector, because in our 1-stage setting, classification accuracy drops when the classification network is not well positioned on the character (see Stride experiments on Mnist), and this argument could give an advantage to 2-stage detectors. Later on, we might add a second stage on top of this network as in Mask RCNN or Faster RCNN and this network might become a region proposal network. We leave this as future work, which purpose will be to improve the classification accuracy. 4 DATASETS 4.1 TOY DATASET We build a toy dataset in order to test our implementations on a simpler task and check that the implementation is correct. For that purpose, we use the MNIST handwritten digits dataset to create pages with handwritten digits, at fixed or variable scales, with or without noise. The number of object classes is 10, the digits [”0”, ”1”, ”2”, ”3”, ”4”, ”5”, ”6”, ”7”, ”8”, ”9”]. Our MNIST dataset is composed of 1600 images of size 728x448, consisting of 28x28 square digits randomly placed on a 2D grid of stride 28. MNIST (default digit size 28) MNIST with white prob .4 ( .9) MNIST with noise MNIST with digit size in 14-28 range random digits at scale 28 at different positions on a grid more digits per positions cluttered with noise added randomly random digit scale between 14 and 28. position is random MNIST digit size 56 MNIST with digit size in 28-56 range MNIST with 2 digit sizes 28,58 MNIST with 2 digit ranges 14-28,28-56 random digits at scale 56 at different positions on a grid random digits scale between 28 and 56. 
random position digits at scales 28 and 56. Positions on a grid random digit scales between 14 and 56. random positions. Our MNIST dataset with noise adds random distortions to create a high level of noise on the images and test the robustness of the models. To check the performance of the position prediction, we set a different network stride, for example 12 (setting stride scale to 6), so that the network grid of positions where the model is evaluated in the convolutions, do not fall exactly on the grid of characters. That way, some digits will appear cropped, up to 6 pixels off horizontally and vertically, in the viewpoint of the network, ie its 28x28 receptive field. To check the performance of the scale prediction, we build a MNIST dataset with digits randomly resized in the [14-28] range. Before adding a layer to our network architecture as in SSD (Liu et al., 2015), we also check larger models at a bigger scale, with a MNIST dataset of 56 pixel wide digits. Last, we build a two-scale dataset for two-layer predictions as in SSD (Liu et al., 2015), with digits at size 28 and 56, and add a middle output to CNN_C32_C64_M2_C64_C64_M2_C128_D to build CNN_C32_C64_M2_C64_C64_M2_C128_D_2, a two-scale network. For a continuous scale between 14 pixels and 56 pixels, we build another two-scale dataset with 2 digit size ranges, 14-28 and 28-56. 4.2 OCR DATA We build our own dataset of 8000 document pages, split into train (90%) and validation (10%) sets, for a fast check of our approach. Document PDF are converted to images with a resolution chosen automatically to have normal sized characters. To have fixed-sized image input for the network batched training, document images are then randomly cropped on a 728x448 area with characters, to have the same sized inputs as our mnist dataset. We consider uppercase and lowercase letters, digits, the two parenthesis and the % sign. The number of classes is 65: [”0”, ”1”, ”2”, ”3”, ”4”, ”5”, ”6”, ”7”, ”8”, ”9”, ”a”, ”b”, ”c”, ”d”, ”e”, ”f”, ”g”, ”h”, ”i”, ”j”, ”k”, ”l”, ”m”, ”n”, ”o”, ”p”, ”q”, ”r”, ”s”, ”t”, ”u”, ”v”, ”w”, ”x”, ”y”, ”z”, ”A”, ”B”, ”C”, ”D”, ”E”, ”F”, ”G”, ”H”, ”I”, ”J”, ”K”, ”L”, ”M”, ”N”, ”O”, ”P”, ”Q”, ”R”, ”S”, ”T”, ”U”, ”V”, ”W”, ”X”, ”Y”, ”Z”, ”(”, ”)”, ”%”] Letters are filtered by their size to fall in the range of [14-56] pixels and we start with two-scale networks ([14-28] and [28-56]) tested on our MNIST dataset. Character widths Character heights 5 EXPERIMENTS 5.1 IMPLEMENTATION DETAILS Code has been developed under Python with Keras deep learning framework, for Tensorflow and CNTK compute engines. It is compatible with Python 2.7 and 3.5 and allows multi-gpu training. For training, batch size is 3, the optimizer is Adam and the learning rate 0.001. Hyperparameters are searched by simple grid search. To create the OCR dataset, we use Tesseract OCR on 10 000 documents. 5.2 TOY DATASET 5.2.1 DIGITS CENTERED IN NETWORK FIELD OF VIEW On the MNIST toy dataset, digits are always centered on a grid (of 28x28). A 28-pixel-strided LeCun convolutional model offers a class accuracy above 99.2% since every positive position falls centered on the digit centered on a grid of stride 28. object mAP score is above 0.99 at 12 epochs with our simple model CNN_C32_C64_M2_C128_D. With noise, object mAP score with our simple model CNN_C32_C64_M2_C128_D is above 0.98. Classification accuracy drops to 98.7. 
mnist mnist noise Command Obj acc Class acc Reg acc Obj mAP MNIST 100 99.2827 1.60e-10 99.93 MNIST with noise 99.62 98.92 4.65e-6 98.41 5.2.2 THE EFFECT OF STRIDE AND IOU ON AVERAGE PRECISION To test the network capability to predict position, we need to use a network stride different than the data grid stride. For example, with stride 12 instead of 28, most digits are not anymore in the center of the network reception field (except first row and column of our image). Also, most digits will appear cropped in the field of view of the network and the IOU threshold defines how much crop ratio will be allowed to still consider the position on the grid as positive. In the worst case, the network stride can be so large that some digits do not appear on the output grid, equivalent to an Intersection Over Union (IOU) of zero (intersection area is zero). Usually, the stride is not larger than the receptive field, and under this condition, the maximal intersection area between any digit and any network field is 50% times 50% = 0.25, while the union is 1.75, leading to a minimal IOU of 0.14. In the case of a smaller stride, for example 12 as below, the IOU threshold can be set higher without losing any digit for reconstruction: IOU Obj acc Class acc Reg acc Obj mAP Target mAP .15 96.37 36.25 0.010 99.97 100 .2 98.42 28.56 0.012 99.75 100 .25 97.05 36.42 0.015 99.52 100 .3 98.35 92.78 0.0013 99.88 100 .35 98.99 83.72 0.0069 99.22 100 .4 98.70 94.96 0.0066 98.37 100 .5 96.71 95.46 0.0062 91.09 95.71 .6 99.92 98.23 4.8e-05 51.80 54.32 .8 99.90 97.90 7.67e-05 8.5 10.63 .95 99.94 97.27 3.7-07 10.80 12.21 .99 99.91 97.66 7.06e-07 9.3 11.71 The large drop in classification accuracy for a small stride suggests that classification would benefit from better localized digits in the receptive field, which would encourage the use of 2-stage detectors. To reduce the impact of the stride, we set a stride margin (see Experiment section on OCR data) on the digit max size to consider at a layer scale so that there is always one position on the network grid for which the character is fully seen by the network. Reconstruction of ground truth from target data at 100% is only possible until an IOU threshold of 0.4, after which the network stride should be decreased. With a smaller stride of 4, reconstruction at 100% is possible at most IOU range: IOU Obj acc Class acc Reg acc Obj mAP Target mAP .2 98.51 72.71 0.034 99.99 100 .25 98.63 78.53 0.018 100 100 .3 97.88 94.54 0.0098 99.89 100 .4 96.85 97.41 0.0098 99.93 100 .5 94.14 98.81 0.0099 99.61 100 .6 99.80 98.57 0.00031 99.93 100 .7 99.64 98.21 0.0016 99.77 100 .8 100 98.19 1.7e-8 82.24 100 .8 -e 30 99.98 99.35 1.73e-9 91.05 100 mnist mnist stride 6 mnist stride 12 The images below show the target for an IOU of .2 for digits at scale between 7 and 14 pixels. The right image shows that with a large stride, small digits cut in the receptive field are dropped because of a too small IOU with the anchor, while with smaller stride, the IOU threshold does remove good candidates. A smaller stride enables to work with higher IOU and better mAP scores. With stride 14, final object mAP is 72.31 With stride 4, final object mAP is 98.61 5.2.3 DIGIT SCALE PREDICTION The second reason (after the cropped digits) to use a smaller IOU threshold is to capture small digits. 
For example, for digits two times smaller, the maximal intersection of a digit with a network receptive field is 0.5 times 0.5 times 0.25 (the maximal intersection area for full size digits of 20), hence 0.0625, while the union is 1+ 0.5× 0.5× (1− 0.25) = 1.1875 leading to a minimal IOU of 0.052. About 3 times smaller than for the full digit size. With a range scale of 14-28 for the digit sizes, the target object mAP (obtained when we reconstruct the bounding boxes from the target) remains 100% at IOU 0.25 for a stride of 12 pixels. The predicted object mAP is 99.58. The classification accuracy drops down to 89.37%. 5.2.4 HIGHER CAPACITY NETWORKS Lets double the number of kernels to create CNN_C64_C128_M2_C256_D model. At stride 12 and IOU .3, classification accuracy increases from 92.78 to 96.84, while objectness remains roughly perfect. At stride 6 and IOU .2, it increases from 78.53 to 95.78%. Parameters Obj acc Class acc Reg acc Obj mAP Target mAP Stride 12, IOU .5 99.59 98.02 0.00078 92.32 94.89 Stride 12, IOU .4 99.17 97.23 0.0047 99.79 100 Stride 12, IOU .3 99.74 96.84 0.00043 100 100 Stride 12, IOU .2 97.57 91.14 0.0016 99.98 100 Stride 12, IOU .15 98.02 83.85 0.0083 99.95 100 Stride 4, IOU .5 99.80 98.87 0.00053 100 100 Stride 4, IOU .25 99.48 95.78 0.00054 100 100 14-28 pixel wide, Stride 12, IOU .25 96.58 91.42 0.0045 99.85 100 5.2.5 MULTI-STAGE NETWORKS In order to capture digits in a bigger range than 28 pixels, we try networks with double reception field size, adding more layers (CNN_C32_C64_M2_C64_C64_M2_C128_D model), and possibly, multiple outputs at multiple layer stages (CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model) as in SSD (Liu et al., 2015). First, we check our model with bigger field, CNN_C32_C64_M2_C64_C64_M2_C128_D model, on the MNIST dataset of 56 pixel wide digits. object mAP score is 1 while classification accuracy is 99.2% at 12 epochs, meaning this first architecture 56x56 receptive field deals well with digits twice big. Then we add a second output to our network architecture as in SSD (Liu et al., 2015) to build CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model, and on a 2-scale dataset with digits at size 28 and 56, object mAP scores remain stable at 99.44 and 99.64 for network strides 12 and 4 respectively. On a 2-scale dataset with digits at size ranges 14-28 and 28-56, object mAP score with our CNN_C32_C64_M2_C64_C64_M2_C128_D_2 model is 98.82% and for the double size CNN_C64_C128_M2_C128_C128_M2_C256_D_2 is 99.11%. Model Digit size Stride IOU Obj acc Class acc Reg acc Obj mAP Target mAP S 28-56 12 .25 98.99 93.92 0.0018 99.89 100 S 14-28, 28-56 12 .25 98.92 / 98.04 64.06 / 91.08 0.0037 / 0.0056 98.82 99.90 D 14-28, 28-56 12 .2 98.57 / 97.73 58.30 / 79.84 0.0058 / 0.0036 98.31 99.90 D 14-28, 28-56 12 .25 99.10 / 98.16 93.64 / 95.28 0.0016 / 0.0014 98.42 99.93 D, 50e 14-28, 28-56 12 .25 99.26 / 98.78 93.91 / 94.02 0.0010 / 0.0014 98.81 99.93 D, 50e 14-28, 28-56 12 .2 99.05 / 98.05 89.88 / 91.97 0.0021 / 0.0022 99.11 99.97 S 14-56 12 .02 97.58 30.17 0.10 75.07 100 S 14-56 12 .05 97.92 53.20 0.027 75.49 100 S 14-56 12 .01 97.82 58.44 0.0057 87.45 92.67 S 14-56 12 .2 98.82 79.23 0.0010 72.36 75.78 5.2.6 LOW RESOLUTION In order to train full document image rather than a 700 pixel high crop of the images, resolution has to be smaller to fit in the GPU. For that reason, we look at models to recognize digits at a maximum size of 14 pixels instead of 28. We build a model CNN_C32_C64_C128_D by removing the max pooling layer. 
5.2.6 LOW RESOLUTION

In order to train on full document images rather than a 700-pixel-high crop of the images, the resolution has to be smaller to fit in the GPU. For that reason, we look at models that recognize digits at a maximum size of 14 pixels instead of 28. We build a CNN_C32_C64_C128_D model by removing the max pooling layer. The network input field becomes 14 pixels wide, and the stride is divided by 2. With stride 8, after 30 epochs:

Digit size  IOU  Obj acc  Class acc  Reg acc  Obj mAP  Target mAP
14          .3   97.12    94.50      0.012    99.91    100
7-14        .2   98.58    73.07      0.0087   98.61    100
7-14        .25  99.07    75.34      0.012    98.98    100

To capture digits in a larger range, 7-28 pixels wide, we remove the 2 max pooling layers from our 56-pixel-wide model to build CNN_C32_C64_C64_Cd64_C128_D. At stride 3:

Epochs  IOU  Obj acc  Class acc  Reg acc  Obj mAP  Target mAP
30      .1   97.47    73.70      0.010    87.19    95.45
30      .2   99.08    92.84      0.0074   81.01    76.47
50      .15  98.71    88.02      0.0046   87.79    84.76
50      .1   97.97    79.19      0.0096   89.17    95.24

On the 7-28 pixel digit range, the network sometimes learns a better reconstruction than the target, due to the hard IOU threshold decision in the target computation: the target mAP score is 76% at stride 8 and IOU 0.2 while the result mAP score is 80%, and the target mAP score is 86% at stride 8 and IOU 0.15 while the result mAP score is 89%.

5.3 OCR DATASET

[Figure: target (training data) | detection results (with, in the top left corner, each layer's receptive field minus the stride margin) | results filtered by NMS]

5.3.1 TARGET

We experiment with different settings to define positives on the grid, and compute the target average precision obtained if we reconstruct the bounding boxes from the target instead of from the prediction results. We also compute the final average precision obtained by the trained model under each setting. We consider positive a position on the grid that has a sufficient IOU with the receptive field of the network.

Parameters              Obj acc        Class acc      Reg acc         Obj mAP  Target mAP
Stride 4+8, IOU 0.15    97.00 / 97.76  69.11 / 71.78  0.027 / 0.016   58.82    91.22
Stride 4+8, IOU 0.2     97.89 / 98.44  75.39 / 72.75  0.020 / 0.011   68.09    84.47
Stride 4+8, IOU 0.25    98.19          81.43          0.014           64.69    65.40
Stride 6+12, IOU 0.15   97.52 / 97.58  72.18 / 77.03  0.028 / 0.015   67.05    86.07
Stride 6+12, IOU 0.2    98.24 / 98.25  79.01 / 79.47  0.019 / 0.10    66.25    78.15
Stride 6+12, IOU 0.25   98.60 / 98.90  80.17 / 78.93  0.015 / 0.0075  62.71    66.42
Stride 8+16, IOU 0.15   97.90 / 97.50  72.05 / 74.58  0.029 / 0.017   62.87    89.77
Stride 8+16, IOU 0.2    98.42 / 97.99  78.35 / 79.15  0.021 / 0.012   66.30    83.94
Stride 8+16, IOU 0.25   98.88 / 98.61  77.64 / 81.11  0.017 / 0.0077  60.26    69.35
Stride 10+20, IOU 0.15  98.47 / 97.36  70.94 / 77.87  0.031 / 0.018   59.33    85.87
Stride 10+20, IOU 0.2   98.92 / 97.76  67.94 / 80.13  0.021 / 0.014   51.87    77.52
Stride 10+20, IOU 0.25  99.09 / 98.45  70.41 / 83.67  0.018 / 0.0097  44.59    61.57

[Figure: training runs IOU 0.2 (ce7562), IOU 0.15 (98cfe6), IOU 0.25 (de7ca2)]

Target average precision is better when the IOU is low, since the grid then misses no ground-truth boxes; nevertheless, the model possibly learns better at a higher IOU, which also leads to better classification results. We also tried considering as positive any character that falls in the receptive field of the network. The target average precision is then very close to 1, but the final average precision remains below 0.48.

5.3.2 STRIDE MARGIN

Since the network performs a strided analysis of the input image, we consider that characters should fall entirely into the receptive field of the network at at least one positive position. For that reason, we use a stride margin, i.e. we keep only characters whose size is smaller than the receptive field dimension minus the stride. When this setting is deactivated, some characters are no longer seen completely by the network, and predicting position and scale becomes harder: the object mAP score drops to 78.5%.

[Figure: with stride margin (d0644c) | without stride margin (6aa6c8)]
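The detection results above are filtered by non-maximum suppression (NMS), which the text otherwise leaves unspecified. For reference, here is a generic greedy NMS sketch, reusing the iou helper from the target-computation sketch above; it is our own illustration, not the paper's implementation.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop any
    remaining box that overlaps it by more than iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[i], boxes[best]) < iou_thresh]
    return keep
```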
5.3.3 POSITIVE WEIGHT AND LOSS WEIGHTS

[Figure: training runs pos weight 1000 (4ca89a), pos weight 100 (b9bfbf), pos weight (df9c83)]

Best results are obtained with pos weight = 100.

5.3.4 KERNEL WIDTH

To see the influence of the width of the layers, we try doubling the number of filters in each layer, leading to the CNN_C64_C128_M2_C128_C128_M2_C256_D_2 model. A wider architecture does not seem to help much.

[Figure: CNN_C32_C64_M2_C64_C64_M2_C128_D_2 (f0f2f2) | CNN_C64_C128_M2_C128_C128_M2_C256_D_2 (b0d7d3)]

Parameters            Obj acc        Class acc      Reg acc         Obj mAP  Target mAP
Stride 6+12, IOU 0.2  98.45 / 98.66  83.27 / 85.42  0.018 / 0.0097  70.11    78.15

5.3.5 FULL DOCUMENT, AT LOW RESOLUTION

Since document images are wide, we used a generator for the preprocessing, and we do not have ground truth for mAP computation; results are evaluated qualitatively. The best results with 28-pixel-wide receptive fields are obtained with images of max size 1000 pixels, since most characters then fall at the right scale to be recognized.

[Figure: results at max size 1000 pixels]

At size 1500 pixels, the model misses the address:

[Figure: results at size 1500 pixels]

More results at scale 1000: very small characters (below 7 pixels) as well as too-big characters are missed, but the main invoice information (amounts, headers, client, ...) is recognized correctly.

6 CONCLUSION

Object detection architectures sound promising for high-quality character reading and for the development of document features through end-to-end training. Classical rule-based algorithms can reuse these features to solve higher-level tasks, such as line extraction. This opens the way for a community-wide search for the best model architectures. Future work includes reusing improvements from object detection models, such as multiple anchors, two-stage detection, focal loss, and optimization tuning for larger images, batches, or other input resolutions.
1. What is the main contribution of the paper regarding text detection in documents? 2. What are the weaknesses of the paper, particularly in terms of novelty and experimental clarity? 3. Do you have any concerns regarding the network architecture and its components, such as label assignment and NMS? 4. How would you assess the organization and formatting of the experimental section and the overall paper? 5. Are there any questions regarding the ablation study presented in the paper?
Review
Review The authors experiment with building an SSD-like object detection network for text detection in documents, by replacing the usual VGG or ResNet base architecture with a lightweight model inspired by the original digits classification CNN from [LeCun et al 1999]. This paper is a pure technical report with no novel contribution: all the authors do is replace the "body" network in the well-known SSD architecture with a simpler model (taken from the existing literature) and evaluate it on two synthetic benchmarks of their own creation. The idea of employing object detection CNNs for OCR is not novel either, as pointed out in the related works section. Besides the absence of novelty, the paper also suffers from several other serious flaws:
1) One of the main motivations provided by the authors for this work is that existing "classification [...] detection [...] or segmentation networks, cannot be applied directly, even with finetuning". However, no experimental results are reported to justify this claim. In fact, in the experimental section the proposed network is not compared against any existing baseline.
2) The text has serious clarity and formatting issues; in particular:
- most tables and figures have no caption, and the few that have one are not numbered
- the text exceeds both the 8-page limit and the extended 10-page limit allowed in the case of big figures
- the experimental section is very confusing; in particular, the way the authors refer to the various network variants using long code names makes it really hard for the reader to follow the ablation studies
- given the absence of proper captions and numbering, it is quite hard to understand which table refers to which experiment
- most of the graphs seem to be in the form of low-resolution bitmaps, which are quite hard to read even on screen
- many entries in the References section are either missing the venue, or point to an arXiv link even when a proper conference / journal reference would be available
3) Some important details about the network are missing; in particular, the authors do not mention how labels are assigned to the network outputs, and only give a vague indication of the losses being used. Similarly, there is no mention of the use of NMS, which is also an important component of the two architectures (SSD and YOLO) that inspire this work. Assigning labels and performing NMS are actually some of the most crucial components in the training of object proposal / object detection networks, often requiring numerous meta-parameters to be properly configured and tuned, as evidenced by the meticulous descriptions given in previous works (e.g. YOLO and Fast / Faster / Mask R-CNN).
4) The experimental section is very poorly organized and formatted (as mentioned in (2) above), and completely lacks any comparison with other state-of-the-art approaches. A lot of space is devoted to presenting a detailed ablation study which, in my opinion, doesn't contribute much to the overall paper and actually reads more like a report on meta-parameter tuning. Finally, starting from Section 5.3.1 the text seems to be copy-pasted without a second read from some differently formatted document, as entire phrases and possibly tables / figures seem to be missing.
In conclusion, in my opinion this paper does not meet the conference's minimum quality standards and should definitely be rejected.
ICLR
Title Perception Updating Networks: On architectural constraints for interpretable video generative models Abstract We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents "sprites", or their percepts, inferred by maximum likelihood of the scene, and infers their movement independently of their content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network.∗

∗ Companion code repo coming soon.

1 INTRODUCTION The current computer graphics pipelines are the result of efficient implementations required by limited hardware and high-frequency output requirements. These requirements were also achieved with the use of explicit physics and optics constraints and modeling with constantly improving data structures (Shirley et al., 2015). In machine learning, on the other hand, image (Olshausen et al., 1996) and video (Hurri & Hyvärinen, 2003) generative models have long been investigated with statistical approaches that model images down to the pixel level (Simoncelli & Olshausen, 2001), sometimes assuming neighborhood statistical dependencies (Osindero & Hinton, 2008). In video prediction, the current state of the art uses variations of deep convolutional recurrent neural networks (Kalchbrenner et al., 2016) (Lotter et al., 2016) (Finn et al., 2016). In parallel to the classic machine learning approach to image and video interpretation and prediction, there is a growing trend in the deep learning literature of modeling vision as inverse graphics (Kulkarni et al., 2015) (Rezende et al., 2016) (Eslami et al., 2016). These approaches can be divided into two groups: supervised and unsupervised vision as inverse graphics. The supervised approach assumes that during training an image is provided with extra information about its rotation, translation, illumination, etc. The goal of the supervised model is to learn an auto-encoder that explicitly factors out the content of the image and its physical properties. The supervised approach is illustrated by Kulkarni et al. (2015). The unsupervised approach requires extra architectural constraints, similar to those assumed in computer graphics. For example, Reed et al. (2016) modeled the content of a scene with a Generative Adversarial Network (Goodfellow et al., 2014) and its location with Spatial Transformer Networks (Jaderberg et al., 2015). The full model is adapted end-to-end to generate images whose appearance can be changed by independently modifying the "what" and/or "where" variables. A similar approach was applied to video generation with volumetric convolutional neural networks (Vondrick et al., 2016). In two papers by Google DeepMind (Rezende et al., 2016) (Eslami et al., 2016), they improved the "where" representations of the unsupervised approach and modeled the 3D geometry of the scene. This way they explicitly represented object rotation, translation, camera pose, etc. Their approaches were also trained end-to-end with REINFORCE-like stochastic gradients to backpropagate through non-differentiable parts of the graphics pipeline (Rezende et al., 2016) or to count the number of objects in the scene (Eslami et al., 2016). Those papers also used Spatial Transformer Networks to model the position of the objects in the scene, but they extended them to 3D geometry so they could also model rotation and translation in a volumetric space.
Other approaches in machine learning inspired by the graphics pipeline and computer vision geometry use physics constraints to estimate the depth of each pixel in the scene and camera pose movements to predict frames in video (Mahjourian et al., 2016) (Godard et al., 2016). The present paper is closer to the unsupervised approach of vision as inverse graphics. More precisely, here we investigate frame prediction in video. Contrary to the work by Reed et al. (2016), here we first limit ourselves to simple synthetic 2D datasets and to learning models whose representations can be visually interpreted. This way we can investigate exactly what the neural network is learning and validate our statistical assumptions. Also, we investigate the behavior of Spatial Transformer Networks and question them as the default choice when limited compute resources are available and no scale invariance is required. First, in the next Section, we pose a statistical model that is appropriate for machine learning but inspired by the graphics pipeline.

2 A 2D STATISTICAL GRAPHICS PIPELINE

This section starts with a high-level description of the 2D graphics pipeline, followed by a discussion of how to implement it with neural network modules, and finally we define a formal statistical model. The 2D graphics pipeline starts from geometric primitives and follows with modeling transformations, clipping, viewing transformations, and finally scan conversion for generating an image. Here, we will deal with previously rasterized bitmaps, i.e. sprites, and will model the translation transformations, rotation, and clipping with differentiable operations. This way, the steps in the pipeline can be defined as layers of a neural network and the free parameters can be optimized with backpropagation. For our neural network implementation, we assume a finite set of sprites (later we generalize it to infinite sprites) that will be part of the frames in the video. The image generation network selects a sprite, $s$, from a memorized sprite database $S_{i \in \{1,\dots,K\}}$ using an addressing signal $c$:

$$s = \sum_j c_j S_j, \quad \text{where } \sum_j c_j = 1. \qquad (1)$$

For interpretable results it would be optimal to do one-hot memory addressing, where $c_j = 1$ for $S_j = S$ and $c_j = 0$ otherwise. Note that (1) is differentiable w.r.t. both $c_j$ and $S_j$, so we can learn the individual sprites from data. We can force $c_j$ to sum to 1 using the softmax nonlinearity. This approach was inspired by the recent deep learning literature on attention modules (Bahdanau et al., 2014) (Graves et al., 2014). When the number of possible sprites is too large, it is more efficient to use a compressed representation. Instead of using an address value $c$, we use a content-addressable memory where the image generator estimates a code $z$ that is then decoded to the desired sprite with a (possibly nonlinear) function $d(z)$. If we interpret the addressing value $z$ as a latent representation and the content-addressable memory $d(z)$ as a decoder, we can use the recent advances in neural networks for generative models to set up our statistical model. We will revisit this later in this section. The translation transformation can be modeled with a convolution with a delta function or using spatial transformers. Note that the translation of an image $I(x, y)$ can be defined as

$$I(x - \tau_x, y - \tau_y) = I(x, y) \star \delta(x - \tau_x, y - \tau_y), \qquad (2)$$

where $\star$ denotes the image convolution operation. Clipping is naturally handled in such a case: if the output images have finite dimensions and $\delta(x - \tau_x, y - \tau_y)$ is non-zero near its border, the translated image $I(x - \tau_x, y - \tau_y)$ will be clipped.
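As an aside, translation by convolution with a discrete delta map, as in (2), takes only a few lines of Python. This is our own illustrative sketch, not the companion code; it uses SciPy and assumes shifts smaller than half the image size. In the model, the delta map is not built by hand like this but is produced by the network, as in (14) below.

```python
import numpy as np
from scipy.signal import convolve2d

def translate_by_delta(image, tx, ty):
    """Translate `image` by (tx, ty) pixels via convolution with a delta
    map, as in Eq. (2). Content pushed past the border is clipped."""
    h, w = image.shape
    delta = np.zeros_like(image)
    # a delta at the kernel centre is the identity; offsetting it by
    # (ty, tx) shifts the whole image by the same amount
    delta[h // 2 + ty, w // 2 + tx] = 1.0
    return convolve2d(image, delta, mode='same')
```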
Another way of implementing the translation operation is using Spatial Transformer Networks (STN) (Jaderberg et al., 2015). An implementation of STN can be defined in two steps: resampling and bilinear interpolation. Resampling is defined by moving the position of the pixels $(x, y)$ in the original image using a linear transform to new positions $(\tilde{x}, \tilde{y})$ as

$$\begin{bmatrix} \tilde{x} \\ \tilde{y} \end{bmatrix} = A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \quad \text{where } A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{bmatrix}. \qquad (3)$$

We assume the coordinates in the original image are integers $0 \le x < M$ and $0 \le y < N$, where $M \times N$ is the size of the image $I$. Once the new coordinates are defined, we can calculate the values of the pixels in the new image $\tilde{I}$ using bilinear interpolation:

$$\tilde{I}(\tilde{x}, \tilde{y}) = w_{x_1,y_1} I(x_1, y_1) + w_{x_1,y_2} I(x_1, y_2) + w_{x_2,y_1} I(x_2, y_1) + w_{x_2,y_2} I(x_2, y_2), \qquad (4)$$

where $(x_1, x_2, y_1, y_2)$ are integers with $x_1 = \lfloor \tilde{x} \rfloor$, $y_1 = \lfloor \tilde{y} \rfloor$, $x_1 \le \tilde{x} < x_2$, $y_1 \le \tilde{y} < y_2$, and

$$\begin{aligned} w_{x_1,y_1} &= (\lfloor \tilde{x} \rfloor + 1 - \tilde{x})(\lfloor \tilde{y} \rfloor + 1 - \tilde{y}) \\ w_{x_1,y_2} &= (\lfloor \tilde{x} \rfloor + 1 - \tilde{x})(\tilde{y} - \lfloor \tilde{y} \rfloor) \\ w_{x_2,y_1} &= (\tilde{x} - \lfloor \tilde{x} \rfloor)(\lfloor \tilde{y} \rfloor + 1 - \tilde{y}) \\ w_{x_2,y_2} &= (\tilde{x} - \lfloor \tilde{x} \rfloor)(\tilde{y} - \lfloor \tilde{y} \rfloor) \end{aligned} \qquad (5)$$

To avoid sampling from outside the image we clip the values $\lfloor \tilde{x} \rfloor$ and $\lfloor \tilde{x} \rfloor + 1$ between $0$ and $M$, and the values $\lfloor \tilde{y} \rfloor$ and $\lfloor \tilde{y} \rfloor + 1$ between $0$ and $N$. We omitted that in (5) for conciseness. Note that (4) is piecewise differentiable w.r.t. $I$. We can define translation through operations with

$$A = \begin{bmatrix} 1 & 0 & \tau_x \\ 0 & 1 & \tau_y \end{bmatrix}. \qquad (6)$$

Also, we can rotate the image $\rho$ radians counterclockwise with

$$A = \begin{bmatrix} \cos\rho & \sin\rho & 0 \\ -\sin\rho & \cos\rho & 0 \end{bmatrix}. \qquad (7)$$

Image rescaling is achieved in that framework by rescaling the square submatrix $A_{1:2,1:2}$. We illustrate in Fig. 1 how to get similar results using convolutions with a delta function and spatial transformers.
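For concreteness, the two STN steps, affine resampling (3) followed by bilinear interpolation (4)-(5), can be sketched in NumPy as below. This is our own illustration, not the authors' implementation; it treats array indices directly as the (x, y) coordinates and replicates edge pixels when a sample falls outside the image.

```python
import numpy as np

def stn(image, A):
    """Spatial transformer: resample `image` (M x N) with the 2x3 affine
    matrix A, using bilinear interpolation as in Eqs. (3)-(5)."""
    M, N = image.shape
    out = np.zeros_like(image)
    for x in range(M):
        for y in range(N):
            # Eq. (3): map output coordinates to source coordinates
            xt, yt = A @ np.array([x, y, 1.0])
            x1, y1 = int(np.floor(xt)), int(np.floor(yt))
            # clip neighbour indices so we never sample outside the image
            xs = np.clip([x1, x1 + 1], 0, M - 1)
            ys = np.clip([y1, y1 + 1], 0, N - 1)
            # Eq. (5): bilinear weights
            wx2, wy2 = xt - x1, yt - y1
            wx1, wy1 = 1.0 - wx2, 1.0 - wy2
            # Eq. (4): weighted sum of the four neighbours
            out[x, y] = (wx1 * wy1 * image[xs[0], ys[0]] +
                         wx1 * wy2 * image[xs[0], ys[1]] +
                         wx2 * wy1 * image[xs[1], ys[0]] +
                         wx2 * wy2 * image[xs[1], ys[1]])
    return out

# translation as in Eq. (6), e.g. A = np.array([[1, 0, 2.5], [0, 1, -1.0]])
```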
Considering the tools defined above, we can define a statistical model of 2D images that explicitly represents sprites and their positions in the scene. We can use the free energy of this statistical model to optimize a neural network. Let us start with a static single-frame model and later generalize it to video. Let an image $I \sim p_\theta(I)$ be composed of a sprite $s \sim p_\theta(s)$ centered at the $(x, y)$ coordinates of the larger image $I$. Denote these coordinates by a random variable $\delta_{xy} \sim p_\theta(\delta_{xy})$, where $\theta$ are the model parameters. $p_\theta(\delta_{xy})$ can be factored into two marginal categorical distributions $\mathrm{Cat}(\delta_x)$ and $\mathrm{Cat}(\delta_y)$ that model the probability of each coordinate of the sprite independently. For the finite sprite dataset, $p_\theta(s)$ is also a categorical distribution conditioned on the true sprites. For this finite case the generative model can be factored as

$$p_\theta(I, s, \delta) = p_\theta(s)\, p_\theta(\delta_{xy})\, p(I \mid s, \delta_{xy}), \qquad (8)$$

assuming that "what", $s$, and "where", $\delta_{xy}$, are statistically independent. Also, in such a case the posterior

$$p_\theta(s, \delta \mid I) = p_\theta(s \mid I)\, p(\delta_{xy} \mid I) \qquad (9)$$

is tractable. One could use, for instance, Expectation-Maximization or greedy approaches like Matching Pursuit to alternate between the search for the position and fitting the best-matching shape. For the case of an infinite number of sprites, we assume that there is a hidden variable $z$ from which the sprites are generated as $p(s, z) = p_\theta(z)\, p_\theta(s \mid z)$. In such a case our full posterior becomes

$$p_\theta(z, s, \delta \mid I) = p_\theta(z, s \mid I)\, p(\delta_{xy} \mid I) = p_\theta(z \mid I)\, p_\theta(s \mid I, z)\, p(\delta_{xy} \mid I). \qquad (10)$$

We can simplify (10) by assuming $p_\theta(z \mid s) = p_\theta(z \mid I)$ for simple images without ambiguity and no sprite occlusion. For scalable inference in the case of unknown $\theta$ and $z$ and intractable $p_\theta(z \mid s)$, we can use the auto-encoding variational Bayes (VAE) approach proposed by Kingma & Welling (2013). Using VAE we define an approximate recognition model $q_\phi(z \mid s)$. In such a case, the log-likelihood of the i.i.d. images $I$ is $\log p_\theta(I_1, \dots, I_T) = \sum_{i=1}^{T} \log p_\theta(I_i)$ and

$$\log p_\theta(I_i) = D_{KL}(q_\phi(z \mid s_i) \,\|\, p_\theta(z \mid s_i)) + D_{KL}(p_\theta(z \mid s_i) \,\|\, p_\theta(z \mid I_i)) + \mathcal{L}(\theta, \phi, \delta_{xy}, I_i). \qquad (11)$$

Again, assuming the approximation $p_\theta(z \mid s) = p_\theta(z \mid I)$, we have $D_{KL}(p_\theta(z \mid s_i) \,\|\, p_\theta(z \mid I_i)) = 0$ and the free energy (or variational lower bound) term equal to

$$\mathcal{L}(\theta, \phi, \delta, I) = -D_{KL}(q_\phi(z \mid s_i) \,\|\, p_\theta(z)) + \mathbb{E}_{q_\phi(z \mid s, \delta)\, p_\theta(\delta \mid I)}\left[\log p_\theta(I \mid z, \delta)\right], \qquad (12)$$

where we dropped the subindices $xy$ and $i$ to avoid clutter. Here we would like to train our model by maximizing the lower bound (12), again inspired by VAE. We can do so using the reparametrization trick, assuming $q_\phi(z \mid s)$ and the prior $p_\theta(z)$ to be Gaussian and sampling

$$z = m_\phi(I) + v_\phi(I) \cdot \xi, \qquad (13)$$

where $\xi \sim \mathcal{N}(0, \sigma I)$, $I$ is the identity matrix, and the functions $m_\phi(I)$ and $v_\phi(I)$ are deep neural networks learned from data. One can argue that, given $z$ and a good approximation to the posterior $q_\phi$, estimating $\delta$ is still tractable. Nevertheless, we preemptively avoid Expectation-Maximization or other search approaches and instead use neural network layers $l_x$ and $l_y$:

$$\delta_{xy} = \mathrm{softmax}(l_x(I)) \otimes \mathrm{softmax}(l_y(I)), \qquad (14)$$

with $\otimes$ denoting the outer product of marginals. We also experiment using STNs. Such amortized inference is also faster at training and test time than EM, and will also cover the case where $I$ is itself a learned low-dimensional or latent representation instead of an observable image. Bear this in mind while we use this approach even in simple experiments such as those with moving shapes in the Experiments Section. This will help us to understand what can be learned from this model. We extend the model above to videos, i.e. sequences of images $I(t) = \{I(0), I(1), \dots\}$, assuming that the conditional log-likelihood $\log p_\theta(I_t \mid H_{I_t}) = \log p_\theta(I_t \mid H_{\delta_t}, H_{z_t})$ follows (11), where $H_{I_t}$ is the history of video frames prior to time point $t$, and $H_{\delta_t}$ and $H_{z_t}$ are the history of position coordinates and the history of latent variables of the sprites, respectively. We should observe that one can make the assumption that the sprites don't change within a given video $I(t)$ and only estimate one sprite $s_{t=0}$ or hidden variable $z_{t=0}$. This assumption can be useful for long-term predictions, but requires that the main object moving in the scene doesn't change. In the next section, we propose a neural network architecture for maximizing our approximate variational lower bound for 2D videos.

3 PERCEPTION UPDATING NETWORKS

This Section proposes a family of neural architectures for optimizing the lower bound (12). A schematic diagram is represented in Fig. 2. The core of our method is a Recurrent Neural Network (RNN) augmented with task-specific modules, namely a sprite-addressable memory and modeling transformation layers. RNNs augmented with task-specific units were popularized by Graves et al. (2014) in the context of learning simple differentiable algorithms and served as inspiration for us as well. Since we explicitly model the perceived sprites as $s$ or $z$, and update them and their location and/or rotation through time, we decided to call our method simply Perception Updating Networks. An input frame at time $t$, $I_t$, is fed to the RNN, which emits 2 signals: a memory address that selects a relevant sprite, and transformation parameters. If we are doing the translation transformation using convolutions and delta functions, this output is equal to (14). If using STN, the translation operation returns the matrix $A$ used in (3).
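Equations (13) and (14) translate almost directly into code. The following sketch is our own illustration: the callables lx, ly, m_phi, and v_phi stand in for the corresponding network layers, applied here to the RNN state as in Algorithm 1 below, and lx(h), ly(h) are assumed to return 1-D arrays.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def location_and_latent(h, lx, ly, m_phi, v_phi, rng):
    """Compute the location map (14) and a reparametrised latent code (13)."""
    # Eq. (14): outer product of two 1-D softmax maps gives a 2-D map that
    # sums to one and acts as a soft delta for the sprite location
    delta_xy = np.outer(softmax(lx(h)), softmax(ly(h)))
    # Eq. (13): reparametrised sample of the latent sprite code
    mu, v = m_phi(h), v_phi(h)
    z = mu + v * rng.standard_normal(mu.shape)
    return delta_xy, z
```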
Note that we could use both, letting convolutions with $\delta$ do the translation while constraining $A$ as in (7) to do rotation transformations only. We describe the general case where both $\delta_{xy}$ and STNs are used in Algorithm 1. Beyond deciding between STNs and $\delta_{xy}$, a few other free parameters of our method are the type of RNN (e.g. vanilla RNN, LSTM, GRU, ConvRNN, etc.), the number of neurons in the hidden state of the RNN, and the neural network architectures that infer the correct sprite and the modeling transformation parameters. Our hyperparameter choices are investigated separately in each experiment in the next Section.

Data: input videos $I_t$, $t \in \{0, 1, 2, \dots\}$, initial RNN state $h_0$, neural network layers $m_\phi, v_\phi, d, l, f$
Result: video predictions $I_t$, $t \in \{1, 2, 3, \dots\}$
for $t \in \{0, 1, \dots\}$ do
  $h_t \leftarrow \mathrm{RNN}(I_t, h_{t-1})$
  $\delta_{xy} = \mathrm{softmax}(l_x(h_t)) \otimes \mathrm{softmax}(l_y(h_t))$
  $\rho = f(h_t)$
  $A = \begin{bmatrix} \cos\rho & \sin\rho & 0 \\ -\sin\rho & \cos\rho & 0 \end{bmatrix}$
  $\xi \sim p_\theta(z)$
  $z_t = m_\phi(h_t) + v_\phi(h_t) \cdot \xi$
  $s_t = d(z_t)$
  $a_t = \mathrm{STN}(s_t, A)$
  $\tilde{I}_{t+1} = a_t \star \delta_{xy}$
  $I_{t+1} = \mu \tilde{I}_{t+1} + (1 - \mu) B$
end
Algorithm 1: Perception Updating Networks. STN denotes the spatial transformer operator (3)-(4) and $\star$ denotes convolution.

We experimented with several variations of this algorithm, mainly changing whether and how the "where" modules $\delta_{xy}$ and STN are used, changing how the sprite $s_t$ is calculated, and not using a background $B$ when it is not necessary. In the next section we present experiments with the proposed architecture on synthetic datasets.

4 EXPERIMENTS

In this section we experiment with several implementations of the proposed Perception Updating Networks. We start with a simple synthetic dataset made of videos where one of 3 shapes moves with constant speed, bouncing off the edges of the image. This illustrates the working of the finite memory and the addressing scheme in (1). Afterwards we show results on the moving MNIST dataset (Srivastava et al., 2015), commonly used in the literature on generative neural network models of videos.

4.1 BOUNCING SHAPES

In this first experiment we generate videos of one of three shapes moving on a non-zero background. The shapes are a square, a triangle, and a cross. The image size is 20×20 pixels and the shapes are 8×8 pixels. The pixel values are between 0 and 1. The shapes are picked with equal probability, and they move at a constant speed of 1 pixel per frame. The shapes start from random initial positions and start moving in random directions as well. We tested two implementations of the proposed architecture: one using only convolutions, referred to as convolutional PUN in the figures, and another using spatial transformers, called spatial transformer PUN. For the convolutional PUN, the RNN used was a Long Short-Term Memory (LSTM) with 100 cells. The RNN in the spatial transformer PUN had 256 cells. In the convolutional PUN, the location layers used to calculate $\delta_{xy}$, $l_x$ and $l_y$, output vectors of size 20 pixels, and we used the finite addressable memory described in (1). The background is also learned from data as weights of a neural network. This background served to make the task more difficult and to force the network to avoid just exploiting any non-zero value. After the convolutional composition $I_t = s_t \star \delta_{xy}$, we add the background to form a new image using $\tilde{I}_t = \mu \cdot I_t + (1 - \mu) B$, where $\mu$ is a differentiable mask that accounts for the "transparency" of the image $I_t$, and $B$ is the learned 20×20-pixel background image.
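A single step of the convolutional variant of Algorithm 1, reusing location_and_latent from the sketch above and omitting the STN rotation, could then look as follows. Again, this is our own illustration: rnn and decode are assumed to be given callables, and mu_mask is the transparency mask described in the text.

```python
from scipy.signal import convolve2d

def pun_step(I_t, h_prev, rnn, lx, ly, m_phi, v_phi, decode, mu_mask, B, rng):
    """One convolutional Perception Updating Network step: returns the
    predicted next frame and the new RNN state."""
    h_t = rnn(I_t, h_prev)
    delta_xy, z_t = location_and_latent(h_t, lx, ly, m_phi, v_phi, rng)
    s_t = decode(z_t)                                  # sprite, e.g. 8x8
    placed = convolve2d(delta_xy, s_t, mode='same')    # s_t * delta_xy
    I_next = mu_mask * placed + (1.0 - mu_mask) * B    # composite on background
    return I_next, h_t
```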
For complex shapes, this mask could be calculated as another module in the network, similarly to the approach in Vondrick et al. (2016). In the following experiments, the training videos were 10 frames long. At test time the network is fed the first 10 frames of a video and asked to predict the next 10. Results for the compared methods are shown in Fig. ??. For the baseline method, we did a hyperparameter search on conventional LSTMs with a single linear output layer until we found one that had comparable results at test time. That network had 256 hidden cells. Also, note that although the scale of the mean squared error is the same, the results from our proposed architecture look smoother than those learned by the LSTM, as shown in Fig. 3. Given such a simple experiment, it is elucidating to visualize the values learned by each piece of the network. As expected, the sprite memory learned the 3 investigated shapes in transposed order, since they are reverted by the convolution operation when composing the frame. We also experimented with choosing the size of the learned sprites $s_t$ smaller and larger than the true shapes. We observed that for larger sprites, such as 10×10, the sprites converge to the correct shapes, using only part of their pixels. For smaller sprites, such as 6×6 pixels, instead of learning a part of the correct shape, the convolutional Perception Updating Network learned to compensate for the lack of enough pixels with more than one non-zero value in the location operation $\delta_{xy}$ (see Fig. 3). This allows us to suggest to the interested practitioner that, in order to get interpretable results, it is better to use sprites larger than the expected size rather than smaller. For the spatial transformer PUN, the image is calculated as (see Algorithm 1 for context)

$$A = f(h_t), \quad I_{t+1} = \mathrm{STN}(s_t, A). \qquad (15)$$

We noticed that the spatial transformer PUN was not able to learn the training videos using an architecture equivalent to the convolutional PUN one. We had to use multiple layers to define the function $f(h_t)$. In other words, in the convolution-based method $\delta_{xy}$ can be estimated by a single affine transformation of the state $h_t$, but $A$ cannot. We also had to use smaller learning rates to guarantee convergence: 0.0001 for STN, while the $\delta_{xy}$-based model worked with a value 10 times larger. If we don't use the softmax nonlinearity to construct $\delta_{xy}$, the representations learned by the convolutional PUN are no longer visually interpretable. It is interesting to conclude that under this framework the "what" and "where" can only be distinguished if we impose architectural constraints, the reason being the commutative property of the convolution operation. As a note on rotation, we ran experiments where the sprites are rotated by a random angle before being placed in the image. This new type of video cannot be learned using only convolution-based Perception Updating Networks unless we increase the number of sprites proportionally to the number of possible angles. Spatial transformer based Perception Updating Networks can handle this new type of video naturally. Nevertheless, if the number of rotation angles is finite or can be discretized, we found that we could learn to generate the videos faster if we combined the convolutional approach with a mechanism to select the appropriate angle from a set of possibilities. Results on this experiment are not shown in this paper due to space constraints, but they can be reproduced with the companion code.
4.2 MOVING MNIST

The Moving MNIST benchmark uses videos generated by moving 28×28-pixel images of handwritten digits on a 64×64-pixel canvas. Just like in the Bouncing Shapes dataset, the digits move with different speeds in different directions and can bounce off the walls. Unlike the Bouncing Shapes dataset, there are 60000 different sprites for training and 10000 for test, making it impractical to use a discrete memory module. Instead, we use the memory representation denoted by (13), followed by $s_t = d(z_t)$ as written in Algorithm 1. We trained a convolutional Perception Updating Network using 2-layer LSTMs, each one with 1024 cells, for 200 epochs, with 10000 gradient updates per epoch. The latent variable $z$ had 100 dimensions, and the decoder $d(\cdot)$ was a single-hidden-layer MLP with 1000 hidden neurons and a softplus activation function. The output layer of this MLP has 784 neurons, which is the size of an MNIST image, and a sigmoid activation function. On the test set we obtained a negative log-likelihood of 239 nats with the proposed architecture, while a 2-layer LSTM baseline had 250 nats. Note that our method was optimized by maximizing the lower bound (12), not the likelihood alone. These results are not as good as those obtained by the Video Pixel Networks (Kalchbrenner et al., 2016), which obtained 87 nats on the test set. Nevertheless, both approaches are not mutually exclusive, and instead of a fully connected decoder we could use a similar PixelCNN decoder to generate sprites with higher likelihood. In this first paper we decided instead to focus on defining the statistical framework and the interpretable "what" and "where" decoupling. When running the proposed method in rollout mode, feeding the outputs back as next-time-step inputs, we were able to generate high-likelihood frames for more time steps than with a baseline LSTM. Also, since the sprite to be generated and its position in the frame are decoupled, in rollout mode we can fix the sprite and only use the $\delta_{xy}$ coming from the network. This way we can generate realistic-looking frames for even longer, but after a few frames we observed that the digits stopped moving or moved in the wrong direction (see the video in the companion code repository). This means that the LSTM RNN was not able to maintain its internal dynamics for too long; thus, there is still room for improvement in the proposed architecture. In Fig. 5 we show sample rollout videos. The network was fed 10 frames and asked to generate 10 more, getting its own outputs back as inputs; see the companion code repository for an animated version of this figure. This experiment also suggests several improvements to the proposed architecture. For example, we assumed that the internal RNN has to calculate a sprite at every time step, which is inefficient when the sprites don't change in the video. We should improve the architecture with an extra memory unit that snapshots the sprites and avoids the burden of recalculating the sprites at every step. We believe this would be a possible way to free up representation power that the internal RNN could use to model the movement dynamics for even more time steps.
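For reference, a per-frame estimate of the negative of the lower bound (12), under the Gaussian posterior of (13), a standard normal prior, and a Bernoulli pixel likelihood matching the sigmoid decoder described above, could be sketched as follows. This is our own illustration; the paper does not spell out this loss, so the prior and likelihood choices here are assumptions.

```python
import numpy as np

def neg_elbo(I, I_hat, mu, log_var):
    """Negative lower bound (12): closed-form Gaussian KL against a
    standard normal prior, plus a Bernoulli reconstruction term (in nats)."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    eps = 1e-7  # numerical stability for the logs
    rec = -np.sum(I * np.log(I_hat + eps) + (1 - I) * np.log(1 - I_hat + eps))
    return kl + rec
```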
5 CONCLUSIONS

This paper introduced a statistical framework for modeling videos of 2D scenes, inspired by graphics pipelines and variational auto-encoding Bayes. From this statistical framework we derived a variational lower bound that decouples sprites and their dynamics in a video. To optimize this lower bound, we suggested a family of architectures called Perception Updating Networks that can take advantage of this decoupled representation by memorizing sprites, or their percepts, and updating their location in the scene independently. We showed that this architecture can generate videos that are interpretable, and that it is better suited than baseline RNNs for long video generation.

ACKNOWLEDGMENTS

We thank Ryan Burt for several suggestions on the first draft. This work was partially funded by the University of Florida Graduate Student Fellowship and ONR N00014-14-1-0542.
1. What is the focus of the paper, and what are the proposed contributions? 2. What are the strengths of the approach, particularly regarding the optimization method? 3. What are the weaknesses of the paper, especially regarding its experimental section? 4. Do you have any suggestions for improving the paper or expanding on the research?
Review
Review This paper proposes a generative model of videos composed of a background and a set of 2D objects (sprites). Optimization is performed under a VAE framework. The authors' proposal of an outer product of softmaxed vectors (resulting in a 2D map that is delta-like), composed with a convolution, is a very interesting way to achieve translation of an image with differentiable parameters. It seems to be an attractive alternative to more complicated differentiable resamplers (such as those used by STNs) when only translation is needed. Below I have made some comments regarding parts of the text, especially the experiments, that are not clear. The experimental section in particular seems rushed, with some results only alluded to but not given, not even in the appendix. For an extremely novel and exotic proposal, showing only synthetic experiments could be excused. However, though there is some novelty in the method, it is disappointing that there isn't even an attempt at trying to tackle a problem with real data. I suggest as an example aerial videos (such as those taken from drone platforms), since the planar assumption that the authors make would most probably hold in that case. I also suggest that the authors do another pass at proof-reading the paper. There are missing references ("Fig. ??"), unfinished sentences (caption of Fig. 5), and the aforementioned issues with the experimental exposition.
ICLR
Title Perception Updating Networks: On architectural constraints for interpretable video generative models Abstract We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents “sprites” or its percepts inferred from maximum likelihood of the scene and infers its movement independently of its content. We impose architectural constraints that forces resulting architecture to behave as a recurrent what-where prediction network. 1 INTRODUCTION The current computer graphics pipelines are the result of efficient implementations required by limited hardware and high frequency output requirements. These requirements were also achieved with the use of explicit physics and optic constraints and modeling with constantly improving data structures (Shirley et al., 2015). In machine learning on the other hand, for a long time image (Olshausen et al., 1996) and video (Hurri & Hyvärinen, 2003) generative models had been investigated with statistical approaches that model images down to the pixel level (Simoncelli & Olshausen, 2001), sometimes assuming neighborhood statistical dependencies (Osindero & Hinton, 2008). In video prediction, the current state of the art uses variations of deep convolutional recurrent neural networks (Kalchbrenner et al., 2016) (Lotter et al., 2016) (Finn et al., 2016). As a parallel to the classic machine learning approach to image and video interpretation and prediction is a growing trend in the deep learning literature for modeling vision as inverse graphics (Kulkarni et al., 2015)(Rezende et al., 2016)(Eslami et al., 2016). These approaches can be interpreted into two groups: supervised and unsupervised vision as inverse graphics. The supervised approach assumes that during training an image is provided with extra information about its rotation, translation, illumination, etc. The goal of the supervised model is to learn an auto-encoder that explicitly factors out the content of the image and its physical properties. The supervised approach is illustrated by Kulkarni et al. (2015). The unsupervised approach requires extra architectural constraints, similar to those assumed in computer graphics. For example, Reed et al. (2016) modeled the content of a scene with a Generative Adversarial Network (Goodfellow et al., 2014) and its location with Spatial Transformer Networks (Jaderberg et al., 2015). The full model is adapted end-to-end to generate images whose appearance can be changed by independently modifying the ”what” and/or ”where” variables. A similar approach was applied to video generation with volumetric convolutional neural networks (Vondrick et al., 2016).In two papers by Google DeepMind (Rezende et al., 2016) (Eslami et al., 2016) they improved the ”where” representations of the unsupervised approach and modeled the 3D geometry of the scene. This way they explicitly represented object rotation, translation, camera pose, etc. Their approaches were also trained end-to-end with REINFORCE-like stochastic gradients to backpropagate through non-differentiable parts of the graphics pipeline (Rezende et al., 2016) or to count ∗Companion code repo coming soon. the number of objects in the scene (Eslami et al., 2016). Those papers also used Spatial Transformer Networks to model the position of the objects in the scene, but they extended it to 3D geometry so it could also model rotation and translation in a volumetric space. 
Other approaches inspired by the graphics pipeline and computer vision geometry in machine learning uses the physics constraints to estimate the depth of each pixel in the scene and camera pose movements to predict frames in video (Mahjourian et al., 2016) (Godard et al., 2016). The present paper is closer to the unsupervised approach of vision as inverse graphics. More precisely, here we investigate frame prediction in video. Contrary to the work by Reed et al. (2016) here we first limit ourselves to simple synthetic 2D datasets and learning models whose representations can be visually interpreted. This way we can investigate exactly what the neural network is learning and validate our statistical assumptions. Also, we investigate the behavior of Spatial Transformer Networks and question it as the default choice when limited compute resources are available and no scale invariance is required. First in the next Section we will pose a statistical model that is appropriate for machine learning but inspired by the graphics pipeline. 2 A 2D STATISTICAL GRAPHICS PIPELINE This section starts with a high level description of the 2D graphics pipeline, followed by a discussion of how to implement it with neural network modules, and finally we define a formal statistical model. The 2D graphics pipeline starts from geometric primitives and follows with modeling transformations, clipping, viewing transformations and finally scan conversion for generating an image. Here, we will deal with previously rasterized bitmaps, i.e. sprites, and will model the translation transformations, rotation and clipping with differential operations. This way, the steps in the pipeline can be defined as layers of a neural network and the free parameters can be optimized with backpropagation. For our neural network implementation, we assume a finite set of sprites (later we generalize it to infinite sprites) that will be part of the frames in the video. The image generation network selects a sprite, s, from a memorized sprite database Si∈{1,...,K} using an addressing signal c: s = ∑ j cjSj , where∑ j cj = 1. (1) For interpretable results it would be optimal to do one-hot memory addressing where cj = 1 for Sj = S and cj = 0 otherwise. Note that (1) is differentiable w.r.t to both cj and Sj so we can learn the individual sprites from data. We can for cj to sum to 1 using the softmax nonlinearity. This approach was inspired by the recent deep learning literature on attention modules (Bahdanau et al., 2014) (Graves et al., 2014). When the number of possible sprites is too large it is more efficient to do a compressed representation. Instead of using an address value c we use a content addressable memory where the image generator estimates a code z that is then decoded to the desired sprite with a (possibly nonlinear) function d(z). If we interpret the addressing value z as a latent representation and the content addressable memory d(z) as a decoder, we can use the recent advances in neural networks for generative models to setup our statistical model. We will revisit this later in this section. The translation transformation can be modeled with a convolution with a Delta function or using spatial transformers. Note that the translation of an image I(x, y) can be defined as I(x− τx, y − τy) = I(x, y) ? δ(x− τx, y − τy), (2) where ? denotes the image convolution operation. Clipping is naturally handled in such a case. 
If the output images have finite dimensions and δ(x−τx, y−τy) is non-zero near its border, the translated image I(x− τx, y − τy) will be clipped. Another way of implementing the translation operation is using Spatial Transformer Networks (STN) (Jaderberg et al., 2015). An implementation of STN can be defined in two steps: resampling and bilinear interpolation. Resampling is defined by moving the position of the pixels (x, y) in the original image using a linear transform to new positions (x̃, ỹ) as [ x̃ ỹ ] = A [ x y 1 ] , where A = [ A11 A12 A13 A21 A22 A23 ] . (3) We assume the coordinates in the original image are integers 0 ≤ x < M and 0 ≤ y < N , where M ×N is the size of the image I . Once the new coordinates are defined, we can calculate the values of the pixels in the new image Ĩ using bilinear interpolation: Ĩ(x̃, ỹ) = wx1,y1I(x1, y1) + wx1,y2I(x1, y2)+ wx2,y1I(x2, y1) + wx2,y2I(x2, y2) (4) where (x1, x2, y1, y2) are integers, x1 ≤ x̃ < x2, y1 ≤ ỹ < y2 and wx1,y1 = (bx̃c − x̃)(bỹc − x̃) wx1,y2 = (bx̃c − x̃)(bỹc+ 1− ỹ) wx2,y1 = (bx̃c+ 1− x̃)(bỹc − ỹ) wx2,y2 = (bx̃c − x̃)(bỹc+ 1− ỹ) (5) To avoid sampling from outside the image we clip the values bx̃c and bx̃c+1 between 0 and M and the values bỹc and bỹc + 1 between 0 and N . We omitted that in (5) for conciseness. Note that (4) is piecewise differentiable w.r.t I . We can define translation through operations with A = [ 1 0 τx 0 1 τy ] . (6) Also, we can rotate the image ρ radians counter clockwise with A = [ cos ρ sin ρ 0 − sin ρ cosρ 0 ] . (7) Image rescaling is achieved on that framework by rescaling in the right square submatrix A1:2,1:2. We illustrate in Fig. 1 how to get similar results using convolutions with a delta-function and spatial transformers. Considering the tools defined above, we can define a statistical model of 2D images the explicitly represents sprites and their positions in the scene. We can use the free energy of this statistical model to optimize a neural network. Let us start with a static single frame model and later generalize it to video. Let an image I ∼ pθ(I) be composed of sprite s ∼ pθ(s) centered in the (x, y) coordinates in the larger image I . Denote these coordinates as a random variable δxy ∼ pθ, where θ are the model parameters. pθ(δxy) can be factored in two marginal categorical distributions Cat(δx) and Cat(δy) that models the probability of each coordinate of the sprite independently. For the finite sprite dataset, pθ(s) is also a categorical distribution conditioned on the true sprites. For this finite case the generative model can be factored as pθ(I, s, δ) = pθ(s)pθ(δxy)p(I|s, δxy), (8) assuming that “what”, s, and “where”, δxy , are statistically independent. Also, in such case the posterior pθ(s, δ|I) = pθ(s|I)p(δxy|I) (9) is tractable. One could use for instance Expectation-Maximization or greedy approaches like Matching Pursuit to alternate between the search for the position and fitting the best matching shape. For the infinite number of sprites case, we assume that there is a hidden variable z from which the sprites are generated as p(s, z) = pθ(z)pθ(s|z). In such case our full posterior becomes pθ(z, s, δ|I) = pθ(z, s|I)p(δxy|I) = pθ(z|I)pθ(s|I, z)p(δxy|I). (10) We can simplify (10) assuming pθ(z|s) = pθ(z|I) for simple images without ambiguity and no sprite occlusion. For a scalable inference in the case of unknown θ and z and intractable pθ(z|s) we can use the auto-encoding variational Bayes (VAE) approach proposed by Kingma & Welling (2013). 
Using VAE we define an approximate recognition model qφ(z|s). In such case, the loglikelihood of the i.i.d images I is log pθ(I1, . . . , IT ) = ∑T i log pθ(Ii) and log pθ(Ii) = DKL(qφ(z|si)||pθ(z|si))+ DKL(pθ(z|si)||pθ(z|Ii))+ L(θ, φ, δxy, Ii). (11) Again, assume that the approximation pθ(z|s) = pθ(z|I) we have DKL(pθ(z|si)||pθ(z|Ii)) = 0 and the free energy (or variational lower bound) term equal to L(θ, φ, δ, I) = −DKL(qφ(z|si)||pθ(z))+ Eqφ(z|s,δ)pθ(δ|I)[log pθ(I|z, δ)], (12) where we dropped the subindices xy and i to avoid clutter. Here we would like to train our model by maximizing the lower bound (12), again inspired by VAE. We can do so using the reparametrization trick assuming qφ(z|s) and the prior pθ(z) to be Gaussian and sampling z = mφ(I) + vφ(I) · ξ, (13) where ξ ∼ N (0, σI), I is the identity matrix, the functions m(I) and v(I) are deep neural networks learned from data. One can argue that given z and a good approximation to the posterior qφ, estimating δ is still tractable. Nevertheless, we preemptively avoid Expectation-Maximization or other search approaches and use instead neural network layers lx and ly: δxy = softmax(lx(I))⊗ softmax(ly(I)), (14) with ⊗ denoting the outer product of marginals. We also experiment using STNs. Such amortized inference is also faster in training and test time than EM and will also cover the case where I is itself a learned low dimensional or latent representation instead of an observable image. Bear this in mind while we use this approach even in simple experiments such as those with moving shapes in the Experiments Section. This will help us to understand what can be learned from this model. We extend the model above to videos, i.e. sequences of images I(t) = {I(0), I(1), . . .}, assuming that the conditional log-likelihood log pθ(It|HIt) = logpθ(It|Hδt , Hzt) follows (11), where HIt is the history of video frames prior to time point t. Also Hδt and Hzt are the history of position coordinates and the history of latent variables of the sprites respectively. We should observe that one can make the assumption that the sprites don’t change for a given video I(t) and only estimate one sprite st=0 or hidden variable zt=0. This assumption can be useful for long term predictions, but requires that the main object moving in the scene doesn’t change. In the next section, we propose a neural network architecture for maximizing our approximate variational lower bound 2D videos. 3 PERCEPTION UPDATING NETWORKS This Section proposes a family of neural architectures for optimizing the lower bound (12). A schematic diagram is represented in Fig. (2). The core of our method is a Recurrent Neural Network (RNN) augmented with task specific modules, namely a sprite addressable memory and modeling transformations layers. RNNs augmented with task specific units were popularized by Graves et al. (2014) in the context of learning simple differentiable algorithms and served as inspiration for us as well. Here since we explicitly model the perceived sprites as s or z and update it and its location and/or rotation though time we decided to call our method simply Perception Updating Networks. Here an input frame at time t, It, is fed to the RNN that emits 2 signals: a memory address that selects a relevant sprite and transformation parameters. If we are doing the translation transformation using convolutions and delta functions this output is equal to (14). If using STN, the translation operation returns the matrix A used in (3). 
Note that we could use both, letting convolutions with δ to the translation is constraining A as in (7) to do rotation transformations only. We describe the general case where both δxy and STNs are used in Algorithm 1. Beyond deciding between STNs vs δxy , a few other free parameters of our method are the type of RNN (e.g. vanilla RNN, LSTM, GRU, ConvRNN, etc), the number of neurons in the hidden state of the RNN and neural network architectures that infer the correct sprite and modeling transformation parameters. Our hyperparameter choices are investigated separately in each experiment in the next Section. Data: input videos It, t ∈ {0, 1, 2, . . .}, initial RNN state h0, neural network layers mφ, vφ, d, l, f Result: video predictions It, t ∈ {1, 2, 3, . . .} for t ∈ {0, 1, . . .} do ht ← RNN(It, ht−1) δxy = softmax(lx(ht))⊗ softmax(ly(ht)) ρ = f(ht) A = [ cos ρ sin ρ 0 − sin ρ cos ρ 0 ] ξ ∼ pθ(z) zt = mφ(ht) + vφ(ht) · ξ st = d(zt) at = STN(st, A) Ĩt+1 = at ? δxy It+1 = µĨt+1 + (1− µ)B end Algorithm 1: Perception Updating Networks. STN denotes spatial transformer operator (3)-(4) and ? denotes convolution. We experimented with several variations of this algorithm, mainly changing if and how the “where” modules δxy and STN are used. Also changing how the sprite st is calculated and not using a background B when not necessary. In the next section we present experiments with the proposed architecture on synthetic datasets. 4 EXPERIMENTS In this section we experiment with several implementations of the proposed Perception Updating Networks. We start with a simple synthetic dataset made of videos where one of 3 moving shapes moves with constant speed bouncing in the edges of an image. This illustrates the working of the finite memory and the addressing scheme in (1). Afterwards we show results on the moving MNIST dataset (Srivastava et al., 2015) commonly used in the literature of generative neural network models of videos. 4.1 BOUNCING SHAPES In this first experiment we generate videos of one of three shapes moving on a non-zero background. The shapes are a square, triangle and cross. The image size is 20×20 pixels and the shapes are 8×8 pixels. The pixel values are between 0 and 1. The shapes are picked with equal probability and they move at constant speed of 1 pixel per frame. The shapes start from random initial positions with and start moving in random directions as well. We tested two implementations of the proposed architecture: one using only convolutions, referred to as convolutional PUN in the figures, and another using using spatial transformers, called spatial transformer PUN. For the parameters of the convolutional PUN the RNN used was a Long Short Term Memory (LSTM) with 100 cells. The RNN in the Spatial Transformer PUN had 256 cells. In the convolutional PUN, the location layers used to calculate δxy , lx and ly , output vectors of size 20 pixels and we used the finite addressable memory described in (1). The background is also learned from data as weights of neural network. This background served to make the task more difficult and force the network to avoid just exploiting any non-zero value. After the convolutional composition It = st ? δxy , we added the background to form a new image using Ĩt = µ · It + (1− µ)B, where µ is a differentiable mask that accounts for the “transparency” of the image It. B is the learned 20 × 20 pixels background image. 
For complex shapes this mask shape could be calculated as another module in the network, similarly to the approach in Vondrick et al. (2016). In the following experiments, the training videos were 10 frames long. At test time the network is fed the first 10 frames of a video and asked to predict the next 10. Results for the compared methods are shown in Fig. ??. For the baseline method, we did a hyperparameter search on conventional LSTMs with a single linear output layer until we found one that had comparable results at test time. That network had 256 hidden cells. Also, note that although the scale of the mean square error is the same, the results from our proposed architecture look smoother than those learned by the LSTM as shown in Fig. 3. Given such a simple experiment, it is elucidating to visualize values learned by each piece of the network. As expected the sprite memory learned the 3 investigated shapes in transposed order since they are reverted by the convolution operation to compose the frame. We also experimented with choosing the size of the learned sprites st smaller and larger than the true shapes. We observed that for larger shapes such as 10 × 10 the sprites converge to the correct shapes but just using part of the pixels. For smaller shapes such as 6 × 6 pixels, instead of learning a part of the correct shape, the convolutional Perception Updating Network learned to compensate for the lack of enough pixels with more than one non-zero value in the location operation δxy (see Fig. 3). This allow us to suggest to the interested practitioner that in order to get interpretable results it is better to use sprites larger than the expected size than smaller. For the spatial transformer PUN the image is calculated as (see Algorithm 1 for context): A = f(ht), It+1 = STN(st, A). (15) We noticed that the spatial transformer PUN was not able to learn the training videos using an equivalent architecture to the convolutional PUN one. We had to use multiple layers to define the function f(ht). In other words, in the convolution based method δxy can be estimated by a single affine transformation of the state ht but A cannot. We also had to use smaller learning rates to guarantee convergence: 0.0001 for STN while the δxy-based model worked with a value 10 times larger. If we don’t use the softmax nonlinearity to construct δxy the representations learned by the convolutional PUN are no longer visually interpretable. It is interesting to conclude that under this framework the “what” and “where” can only be distinguished if we impose architectural constraints. The reason is the commutative property of the convolution operation. As a note on rotation, we ran experiments where the sprite are rotated by a random angle before being placed in the image. This new type of videos cannot be learned using only convolutional based Perception Updating Networks unless we increase the number of sprites proportionally to the number of possible angles. Spatial transformer based Perception Updating Networks can handle this new type of video naturally. Nevertheless, if the number of rotation angles is finite or can be discretized we found that we could learn to generate the videos faster if we combined the convolutional approach with a mechanism to select the appropriate angle from a set of possibilities. Results on this experiment are not shown in this paper due to space constraints but they can be reproduced with the companion code. 
4.2 MOVING MNIST

The Moving MNIST benchmark uses videos generated by moving 28×28 pixel images of handwritten digits on a 64×64 pixels canvas. Just like in the Bouncing Shapes dataset, the digits move with different speeds in different directions and can bounce off the walls. Unlike the Bouncing Shapes dataset, there are 60000 different sprites for training and 10000 for test, making it impractical to use a discrete memory module. Instead, we use the memory representation denoted by (13) followed by st = d(zt), as written in Algorithm 1.

We trained a convolutional Perception Updating Network using 2 layer LSTMs, each one with 1024 cells, for 200 epochs with 10000 gradient updates per epoch. The latent variable z had 100 dimensions and the decoder d(·) was a single hidden layer MLP with 1000 hidden neurons and softplus activation function. The output layer of this MLP has 784 neurons, which is the size of an MNIST image, and sigmoid activation function. On the test set we obtained a negative log-likelihood of 239 nats with the proposed architecture, while a 2 layer LSTM baseline had 250 nats. Note that our method was optimized on the lower bound (12), not directly on the negative log-likelihood. These results are not as good as those obtained by the Video Pixel Networks (Kalchbrenner et al., 2016), which obtained 87 nats on the test set. Nevertheless, the two approaches are not mutually exclusive, and instead of a fully connected decoder we could use a similar PixelCNN decoder to generate sprites with higher likelihood. In this first paper we decided instead to focus on defining the statistical framework and the interpretable “what” and “where” decoupling.

When running the proposed method in rollout mode, feeding the outputs back as next time step inputs, we were able to generate high likelihood frames for more time steps than with a baseline LSTM. Also, since the sprite to be generated and its position in the frame are decoupled, in rollout mode we can fix the sprite and only use the δxy coming from the network. This way we can generate realistic looking frames for even longer, but after a few frames we observed that the digits stopped moving or moved in the wrong direction (see video in the companion code repository). This means that the LSTM was not able to maintain its internal dynamics for too long; thus, there is still room for improvement in the proposed architecture. In Fig. 5 we show sample rollout videos. The network was fed 10 frames and asked to generate 10 more, getting its own outputs back as inputs; see the companion code repository for an animated version of this figure.

This experiment also suggests several improvements to the proposed architecture. For example, we assumed that the internal RNN has to calculate a sprite at every time step, which is inefficient when the sprites don’t change in the video. We could improve the architecture with an extra memory unit that snapshots the sprites and avoids the burden of recalculating the sprites at every step. We believe this would be a possible way to free up representational power that the internal RNN could use to model the movement dynamics for even more time steps.
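Before concluding, here is a minimal sketch of the infinite-sprite pathway used in this experiment, i.e. the reparametrized latent of (13) followed by the MLP decoder d(·). The layer sizes follow the text above, while the class and attribute names are our own.

import torch
import torch.nn as nn

class SpriteDecoder(nn.Module):
    def __init__(self, h_dim=1024, z_dim=100):
        super().__init__()
        self.m = nn.Linear(h_dim, z_dim)          # mean network m_phi
        self.v = nn.Linear(h_dim, z_dim)          # scale network v_phi
        self.d = nn.Sequential(                    # decoder d(.)
            nn.Linear(z_dim, 1000), nn.Softplus(),
            nn.Linear(1000, 784), nn.Sigmoid())    # 784 = 28 x 28 MNIST sprite

    def forward(self, h):
        mean = self.m(h)
        xi = torch.randn_like(mean)                # xi ~ N(0, I)
        z = mean + self.v(h) * xi                  # reparametrization trick, eq. (13)
        return self.d(z).view(-1, 1, 28, 28)       # sprite s_t = d(z_t)

sprites = SpriteDecoder()(torch.randn(4, 1024))    # four sprites from LSTM states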
5 CONCLUSIONS

This paper introduced a statistical framework for modeling videos of 2D scenes inspired by graphics pipelines and variational auto-encoding Bayes. From this statistical framework we derived a variational lower bound that decouples sprites and their dynamics in a video. To optimize this lower bound, we suggested a family of architectures called Perception Updating Networks that can take advantage of this decoupled representation by memorizing sprites or their percepts and updating their location in the scene independently. We showed that this architecture can generate videos that are interpretable and better suited than baseline RNNs for long video generation.

ACKNOWLEDGMENTS

We thank Ryan Burt for several suggestions to the first draft. This work was partially funded by the University of Florida Graduate Student Fellowship and ONR N00014-14-1-0542.
1. What is the focus of the paper in terms of video modeling?
2. What is the proposed approach in the paper, and how does it differ from other methods?
3. What are the limitations of the paper, specifically regarding experimentation?
4. How does the reviewer assess the potential impact of the paper's contributions to the field of computer vision?
Review
This paper presents an approach to modeling videos based on a decomposition into a background + 2d sprites with a latent hidden state. The exposition is OK, and I think the approach is sensible, but the main issue with this paper is that it is lacking experiments on non-synthetic datasets. As such, while I find the graphics inspired questions the paper is investigating interesting, I don't think it is clear that this work introduces useful machinery for modeling more general videos. I think this paper is more appropriate as a workshop contribution in its current form.
1. What is the main contribution of the paper regarding generative models for video sequences?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other works in the field?
3. Do you have any concerns about the assumptions made in the paper, especially regarding its applicability to real-world videos?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
This paper presents a generative model of video sequence data where the frames are assumed to be generated by a static background with a 2d sprite composited onto it at each timestep. The sprite itself is allowed to dynamically change its appearance and location within the image from frame to frame. This paper follows the VAE (Variational Autoencoder) approach, where a recognition/inference network allows them to recover the latent state at each timestep. Some results are presented on simple synthetic data (such as a moving rectangle on a black background or the “Moving MNIST” data). However, the results are preliminary and I suspect that the assumptions used in the paper are far too strong to be useful in real videos. On the Moving MNIST data, the numerical results are not competitive with state of the art numbers. The model itself is also not particularly novel and the work currently misses some relevant citations. The form of the forward model, for example, could be viewed as a variation on the DRAW paper by Gregor et al. (ICML 2015). Efficient Inference in Occlusion-Aware Generative Models of Images by Huang & Murphy (ICLR) is another relevant work, which used a variational auto-encoder with a spatial transformer and an RNN-like sequence model to model the appearance of multiple sprites on a background. Finally, the exposition in this paper is short on many details and I don’t believe that the paper is reproducible from the text alone. For example, it is not clear what the form of the recognition model is… Low-level details (which are very important) are also not presented, such as the initialization strategy.
ICLR
Title Privacy-preserving Task-Agnostic Vision Transformer for Image Processing

Abstract Distributed collaborative learning approaches such as federated and split learning have attracted significant attention lately due to their ability to train neural networks using data from multiple sources without sharing data. However, they are not usually suitable in applications where each client carries out different tasks with its own data. Recently, Vision Transformer (ViT) has been widely explored in computer vision applications due to its capability to learn a common representation through global attention over the embedded input sequence. By leveraging the advantages of ViT, here we present a new distributed learning framework for image processing tasks, allowing clients to learn multiple tasks with their private data. The key idea arises from a disentangled representation of local and non-local features using a task-agnostic Vision Transformer and a task-specific head/tail. By connecting task-specific heads and tails at the client sides to a task-agnostic Transformer body at the server side, each client learns a translation from its own task to a common representation, while the Transformer body learns global attention between the features embedded in the representation. To enable decomposition between the task-specific and common representations, we propose an alternating training strategy in which task-specific learning for the heads and tails is run on the clients by fixing the Transformer, which alternates with task-agnostic learning for the Transformer on the server by freezing the heads and tails. Once the Transformer body is fully trained with a sufficient number of tasks and clients, additional training of the Transformer body is no longer required when a new client is added with a new task, and all that is required is the training of the client-specific head and tail. Experimental results on multi-task learning for various low-level and high-level computer vision tasks, including medical image data, show that our method synergistically improves the performance of the task-specific network of each client while maintaining privacy.

1 INTRODUCTION

Deep learning approaches have demonstrated state-of-the-art performance and fast inference times in computer vision tasks (Ronneberger et al., 2015; Zhang et al., 2017a; Wang et al., 2017). In particular, convolutional neural networks (CNN) can learn a hierarchy of complex image features, so a variety of CNN-based methods have been developed for denoising (Zhang et al., 2017b; Chang et al., 2020), deraining (Wei et al., 2019; Ren et al., 2019), deblurring (Nah et al., 2017; Kupyn et al., 2019), deblocking (Li et al., 2020b; Maleki et al., 2018), etc. However, the performance of CNNs typically depends on a large amount of training data (Chervenak et al., 2000; Krizhevsky et al., 2017), and it is often difficult to collect data from various entities due to privacy and regulation issues (Price & Cohen, 2019). Since the amount of data from a single source may not be enough, a deep learning framework that can leverage many datasets without violating privacy is required in real-world applications. To address this, distributed collaborative learning (DCL) approaches, which jointly train a single network on multiple systems or devices without revealing distributed data to a central entity or to each device, have been investigated (Konečnỳ et al., 2016; McMahan et al., 2017a; Gupta & Raskar, 2018).
For example, federated learning (FL) (McMahan et al., 2017a; Li et al., 2020c) has been studied to aggregate knowledge learned from all distributed data at a central server under privacy constraints. Thanks to the parallel communication with each client, FL enables fast training of the network across multiple clients. Also, split learning (SL) (Gupta & Raskar, 2018; Vepakomma et al., 2018) was developed as an enhanced privacy-preserving model that splits a network between clients and a server so that each client does not share all network parameters but only trains a part of the network. Building on the advantages of FL and SL, a combination of split and federated learning, named SplitFed learning (SFL) (Thapa et al., 2020), has recently been proposed to provide efficient training and a high level of privacy with a lower computational burden. However, with the existing CNN-based methods it is difficult to determine the proper layer at which to split the network. Also, although training data are distributed across clients, all clients usually consider a common learning task.

Meanwhile, in many practical image processing applications, it is unlikely that all the clients are interested in the same applications. For example, some of the clients may be interested in image denoising (Zhang et al., 2017b), whereas the other clients are focused on image deblurring (Nah et al., 2017), deraining (Wei et al., 2019), deblocking (Li et al., 2020b), etc. As each task is different from the others, the existing distributed learning frameworks may not work. That said, these image processing tasks still require an understanding of common image representations, so one may wonder whether there is any systematic way of synergistically learning multiple image processing tasks in a privacy-preserving manner.

One of the most important contributions of this work is to show that the Task-agnostic Vision Transformer (TAViT), composed of CNN-based heads and tails and a ViT-based body, is nicely fit for this purpose. Specifically, the head and tail are placed on each client to learn specific image processing tasks, while the body is stored and trained on a server to learn a common representation across all tasks of the clients. In contrast to the existing SL framework where the network split is arbitrary, TAViT provides a systematic way of splitting neural networks between clients and a server for privacy-preserving training without losing any performance. Furthermore, TAViT allows clients to use a common Transformer body model to learn multiple image processing tasks and synergistically improve the performance of their task-specific networks.

One may think that the proposed method is similar to the image processing transformer (IPT) (Chen et al., 2020), which consists of CNN-based heads and tails and a Transformer body. However, IPT requires centralized data and large computation resources for both pretraining and task-specific fine-tuning of the whole model. Also, the Transformer in IPT has an encoder-decoder architecture which needs an explicit conditioning vector to adapt the Transformer to a specific task. Thus, to our best knowledge, IPT is not suitable for distributed learning. In contrast, the body of TAViT is an encoder-only Transformer architecture that learns global embedding features of multiple tasks without any conditioning. Besides, by imposing the computation of this Transformer body on the server rather than on the clients, our framework enables clients to reduce their computational burden while maintaining the overall performance for specific image processing tasks.
In addition, TAViT views the heads and tails at the clients and the body at the server as two players and updates them alternately. Specifically, our training procedure is composed of task-specific learning and task-agnostic learning: the former trains the client-side heads and tails to learn each client's task, while the latter trains the server-side Transformer body to learn a general feature interpretation over multiple tasks. When there is more than one client for a single task, the parameters of their heads and tails can be aggregated through FL. Accordingly, TAViT offers a seamless integration of SL and FL approaches to protect privacy.

Recall that one of the unique advantages of the Transformer body is to convert “unattended” input features into “attended” output features by learning global attention and non-local interactions between the input features. Accordingly, with the help of the aforementioned alternating training scheme, the task-specific heads/tails can be trained to learn only task-specific local features, whereas the global features are learned through the Transformer. In fact, this disentangled representation of local and non-local features has been pursued throughout the development of deep networks (Ye et al., 2018; Zhang et al., 2019b; Wang et al., 2018). Thus, the proposed Transformer-based approach is considered one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance and at the same time enables privacy-preserving split learning.

We validate the performance of TAViT on multiple image processing tasks. Experimental results show that our multi-task distributed learning framework using the alternating training strategy outperforms end-to-end learning of each individual task thanks to the decomposition into the task-agnostic Transformer body and the task-specific networks. This suggests that our framework is a promising approach for learning multiple tasks with distributed privacy-sensitive data. In sum, our contributions are summarized as follows:
• We propose a novel distributed learning framework, TAViT, that carries out multiple image processing tasks using distributed data.
• The proposed method consists of task-specific heads and tails on clients and a task-agnostic Transformer body on a server, which reduces the computational cost of clients and does not require centralized data for multi-task learning.
• An alternating training strategy between the task-specific and task-agnostic learning of the split networks yields a synergistic performance improvement, which is demonstrated by experimental results on multiple tasks.

2 RELATED WORKS

Federated learning In the FL setting, multiple clients learn from locally stored data while one server aggregates the information of the clients by various methods including FedAvg (McMahan et al., 2017a). For an efficient implementation of FL, practical challenges such as unstable networks, hardware capacity differences, and statistical heterogeneity of data distributions (Li et al., 2020c; Smith et al., 2017; Li et al., 2018) have been actively studied. Corinzia et al. (2019) performs FL with multiple classification tasks, and He et al. (2020) loads a huge network on a server and small CNNs on clients and trains them by knowledge distillation. Yao et al. (2019) presents an unbiased gradient aggregation for FL and meta updating of the model. In contrast, our method is presented for effectively learning under task heterogeneity using distributed data.
Although Li et al. (2020a) presents a task-agnostic FL method based on a feature extractor, each client trains its task-specific network independently, while our model can learn multiple tasks simultaneously for synergistic performance improvement.

Split learning Split learning (SL) is designed to train networks over distributed data by splitting networks into two parts, updating the client-part and server-part networks sequentially (Gupta & Raskar, 2018). Extending this idea, Vepakomma et al. (2018) presents several ways to use SL, and Abuadbba et al. (2020) applies SL to 1D CNN models. However, the existing SL methods are designed using CNNs, and to our best knowledge, there is no principled way of splitting the network for the best performance. In particular, Thapa et al. (2020) proposes a combination of FL and SL, but the server requires labels from clients to update the split networks, which may compromise data privacy. Also, since outputs are generated from a shared network on the server when there are multiple clients, these methods can only be applied to a single task. In contrast, our model presents a Transformer-based shared body that enables multi-task learning of clients without sharing data.

Vision Transformer for image processing Recently, inspired by the success of the Transformer in natural language processing (Vaswani et al., 2017; Devlin et al., 2018), Transformer-based image processing methods have been extensively explored (He et al., 2021; Han et al., 2021). In particular, Dosovitskiy et al. (2020) proposes a Vision Transformer (ViT) with an encoder-only architecture to learn image recognition tasks. Also, Chen et al. (2020) presents an image processing Transformer (IPT) that learns low-level vision tasks by pretraining and task-specific fine-tuning. However, to the best of our knowledge, there are no existing works that exploit the ViT architecture for distributed learning applications.

3 PRIVACY-PRESERVING TASK-AGNOSTIC VISION TRANSFORMER

3.1 SUBSCRIPTION-BASED SERVICE MODEL

As illustrated in Figure 1(a), TAViT is designed for subscription-based services. Specifically, a client subscribes to a task-agnostic Transformer model on the server side that has learned global attention over image features from other datasets. Then, the client can build the head and tail appropriate to its own image processing task and connect them to the Transformer body at the server. At subscription time, there may already be multiple clients that subscribe to the same Transformer body. Accordingly, each client can train its own head and tail using its local data, whereas the common Transformer body is regularly updated using embedding features from all subscribers through the alternating training strategy shown in Figure 1(b), or even fixed if training has been performed with a sufficient number of tasks and clients. As a result, the latest version of the Transformer body, trained using more training data, can be maintained on the server side so that it can be offered to new clients at the next subscription. Since the local data are not centralized on one device and are not shared with other clients, our framework can preserve data privacy. In the proposed framework, we consider the features from the head as a sequence of tokens, similar to natural language processing.
Specifically, as shown in Figure 1(c), we reshape the features f of size Y × X × D into a sequence of patches f = {f_1, f_2, . . . , f_S}, where X, Y, D denote the width, height, and channel dimension of the image features, respectively, S is the number of patches, i.e. S = XY/p^2 for patch size p, and f_s denotes the s-th patch of the features with size p^2 × D. Then, these reshaped features f are fed into the Transformer body as an input sequence, to which learnable positional embeddings are added to keep the position information of each feature patch. The Transformer body consists of several encoder layers as proposed in Vaswani et al. (2017), so that the encoded features pass through a multi-head self-attention module and a feed-forward module in each layer. The body output of transformed features is then reshaped into the original shape of the features f to be used as input for the tail CNN. Here, for the Transformer body, we employ an encoder-only architecture as the task-agnostic network, in contrast to IPT (Chen et al., 2020) which uses both the encoder and decoder of the Transformer. The encoder-only Transformer learns the global relationship between features in the input corpus, and that global attention may be all we need for better performance in vision tasks, as demonstrated by ViT. Therefore, the body of our framework can be trained to translate the input embedding features into globally self-attended features independently of specific tasks. Moreover, the heads are guided to learn the task-specific embedding from the input images to the common feature representation, and the tails are trained to interpret the attended features for the specific image processing tasks. This architectural modification makes the framework suitable for multi-task distributed learning.
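As a rough illustration of this body, here is a self-contained PyTorch sketch that patchifies head features, adds learnable positional embeddings, runs an encoder-only Transformer, and restores the original feature shape. The linear projection layers, the 8×8 patch size, and the 64-channel features are our assumptions for illustration; the defaults loosely mirror the paper's configuration of 8 encoder layers, 256 tokens, and 512 embedding dimensions.

import torch
import torch.nn as nn

class TransformerBody(nn.Module):
    def __init__(self, p=8, d_feat=64, d_model=512, n_layers=8, n_tokens=256):
        super().__init__()
        self.p = p
        self.proj_in = nn.Linear(p * p * d_feat, d_model)   # patch -> token embedding
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.proj_out = nn.Linear(d_model, p * p * d_feat)  # token -> patch features

    def forward(self, f):                       # f: (B, D, Y, X) features from a head
        B, D, Y, X = f.shape
        p = self.p
        # reshape into S = XY / p^2 patches of size p^2 * D
        tokens = f.unfold(2, p, p).unfold(3, p, p)           # (B, D, Y/p, X/p, p, p)
        tokens = tokens.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, D * p * p)
        z = self.encoder(self.proj_in(tokens) + self.pos[:, : tokens.shape[1]])
        out = self.proj_out(z)                                # back to patch features
        out = out.reshape(B, Y // p, X // p, D, p, p).permute(0, 3, 1, 4, 2, 5)
        return out.reshape(B, D, Y, X)                        # original feature shape

body = TransformerBody()
attended = body(torch.randn(2, 64, 128, 128))                 # 256 tokens per image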
3.2 TRAINING SCHEME

For distributed datasets of different tasks, we apply the alternating training strategy between the clients and the server by considering them as two players. Specifically, as shown in Figure 2, TAViT trains the client-side task-specific head and tail networks and the server-side task-agnostic body network in an alternating manner. In the task-specific learning, the clients train their own heads and tails with fixed body weights, in parallel, using locally stored datasets. In contrast, in the task-agnostic learning, the server trains the Transformer body with the fixed head and tail of a randomly chosen client at each iteration. The overall procedure is given in Algorithm 1. More details are as follows.

Algorithm 1 TAViT: C = {C1, C2, . . . , CK} is a group of client sets with mutually different tasks. Is and Ia denote the number of optimization iterations for the task-specific and task-agnostic steps in one cycle. Hc and Tc are the head and the tail of a client c, and B is the Transformer body on the server.
Initialization: H, T to all clients, B to a body
for i in [1, num_cycles] do
    for is in [1, Is] do    // task-specific learning (heads & tails)
        for each client c ∈ Ck ⊂ C in parallel do
            update Hc, Tc with fixed B
        end
        if is is an aggregation step then    // for the case of multiple clients with one task
            for each client subset Ck ⊂ C, s.t. |Ck| > 1 do
                unify Hc and Tc of the clients c ∈ Ck (e.g. FedAvg)
            end
        end
    end
    for ia in [1, Ia] do    // task-agnostic learning (body)
        k ← randomly selected task
        update B with fixed Hc, Tc, s.t. c ∈ Ck
    end
end
Output: H, T, B

3.2.1 TASK-SPECIFIC LEARNING

Let C = ⋃_{k=1}^{K} Ck be a group of client sets participating in different image processing tasks, where K denotes the number of tasks, and Ck has one or more clients with different datasets for the k-th task, i.e. Ck = {c^k_1, c^k_2, . . . , c^k_{Nk}} with Nk ≥ 1. Each client c ∈ Ck has its own task-specific network architecture for a head Hc and a tail Tc, which are connected to the Transformer body B on the server. In the task-specific learning, given the frozen Transformer B at the server and the local training data {(x_c^(i), y_c^(i))}_{i=1}^{Nc}, the client c trains the neural networks Hc and Tc by solving the following optimization problem:

min_{Hc,Tc} Σ_{i=1}^{Nc} ℓc(y_c^(i), Tc(B(Hc(x_c^(i))))),  (1)

where ℓc(y, ŷ) refers to the client-specific loss between the target y and the estimate ŷ. The parameters of Hc and Tc are iteratively updated using ∂ℓc/∂Tc and ∂ℓc/∂Hc. These gradients are calculated by back-propagation through the entire model, which can be expressed by the chain rule:

∂ℓc/∂Tc = (∂ℓc/∂ŷ) · (∂ŷ/∂Tc),   ∂ℓc/∂Hc = (∂ℓc/∂fH) · (∂fH/∂Hc) = (∂ℓc/∂fB) · (∂fB/∂fH) · (∂fH/∂Hc),  (2)

where fH = Hc(x_c^(i)), fB = B(fH), and ŷ = Tc(fB). This implies that, to update the head Hc and the tail Tc, the gradient ∂ℓc/∂fB is transmitted to the server after back-propagation through the tail, and ∂ℓc/∂fH, computed from back-propagation through the body, is transported back to each client.

Federated learning In the task-specific learning, when there are multiple clients for the same task k (i.e. Nk > 1), their heads and tails can be trained in parallel. Suppose that c^k_i has a training dataset of size |Di| and the total size of the datasets in Ck is Σ_i |Di| = |D|. In this case, the back-propagation and optimization process are the same as in the single client case, but FedAvg (McMahan et al., 2017a) is additionally applied to the parameters Hc and Tc of the clients c ∈ Ck at every assigned period, which is written as:

(Hc_j, Tc_j) ← ( Σ_{i=1}^{Nk} (|Di|/|D|) Hc_i ,  Σ_{i=1}^{Nk} (|Di|/|D|) Tc_i ),  where 1 ≤ j ≤ Nk.  (3)

The period of the weight aggregation is adjustable (50 epochs in our experiments). Through this federated learning, the clients corresponding to the k-th task share the same parameters at the end of the task-specific learning, as shown in Figure 2.

3.2.2 TASK-AGNOSTIC LEARNING

Once the heads and tails of the multiple clients are trained, the Transformer body is trained by fixing the heads and tails at the clients. To train the Transformer body to learn the common representation in a task-agnostic manner, we construct a subset of clients CB by selecting one client from each task:

CB = {c^1_{n1}, c^2_{n2}, . . . , c^K_{nK}},  c^k_{nk} ∈ Ck.  (4)

Then, the training data {(x_c^(i), y_c^(i))}_{i=1}^{Nc} corresponding to the task of each selected client are also chosen, and the Transformer body on the server is updated by solving the following optimization problem:

min_B Σ_{c∈CB} Σ_{i=1}^{Nc} ℓc(y_c^(i), Tc(B(Hc(x_c^(i))))).  (5)

Similar to the task-specific learning, the parameters of B are updated using ∂ℓc/∂B, where the client c is randomly chosen from CB at each optimization step. The required gradients also come from back-propagation as follows:

∂ℓc/∂B = (∂ℓc/∂fB) · (∂fB/∂B),  where  ∂ℓc/∂fB = (∂ℓc/∂ŷ) · (∂ŷ/∂fB),  (6)

where fB = B(fH) and ŷ = Tc(fB). Here, only the gradient ∂ℓc/∂fB is transported to the server after back-propagation through the tail. Through this task-agnostic learning, the Transformer body on the server learns a global embedding representation and provides task-agnostic self-attended features for various image processing tasks. The pseudocode of the overall TAViT is described in Algorithm 1, with more details in Appendix A.
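To make the gradient exchange in (2) concrete, below is a conceptual PyTorch sketch of one task-specific step, where the transmissions between client and server are mimicked by detaching tensors and re-attaching gradients. Here head, body, tail, and loss_fn are placeholders for Hc, B, Tc, and ℓc, and the body parameters are assumed to be frozen (requires_grad=False) in this phase; the function name is ours.

import torch

def task_specific_step(head, body, tail, loss_fn, x, y, opt_client):
    # client side: forward through the head, "send" features to the server
    f_H = head(x)
    f_H_server = f_H.detach().requires_grad_()
    # server side: forward through the (frozen) body, "send" output to the client
    f_B = body(f_H_server)
    f_B_client = f_B.detach().requires_grad_()
    # client side: tail forward, loss, and backward up to dL/df_B
    loss = loss_fn(tail(f_B_client), y)
    loss.backward()                         # fills tail grads and f_B_client.grad
    # server side: backprop dL/df_B through the body to obtain dL/df_H
    f_B.backward(f_B_client.grad)
    # client side: finish backprop into the head and update H_c, T_c only
    f_H.backward(f_H_server.grad)
    opt_client.step()
    opt_client.zero_grad()
    return loss.item()

The task-agnostic step of (6) mirrors this, except that only ∂ℓc/∂fB travels to the server and the server-side optimizer updates B while the client networks stay fixed.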
3.3 COMMUNICATION COST AND PRIVACY PRESERVATION BY TAVIT

Given that gradients have to be transmitted two-way or one-way for training the head/tail and body parts of the architecture, one may wonder whether the additional communication overhead is significant. However, since the Transformer body is a shared model on the server that does not perform any weight aggregation, our model has a much smaller cost for one communication round between the client and the server in the task-agnostic learning. This comes from the small size of the transported features and gradients for the heads and tails. If we sample clients during network training, the communication cost can be further controlled. Therefore, up to a certain epoch size, our model is more communication bandwidth efficient than classical FL, and the advantage increases if a bigger Transformer body is used for a better representation of global attention. For a detailed analysis, see Appendix D.4.

The proposed TAViT is designed to use distributed local data for distinct tasks without sharing the data with the other clients or any central device. Although a privacy attack on the transported features between the server and clients can occur, yet another powerful and unique mechanism for maintaining privacy in TAViT arises when the client-side network of the proposed method has a skip connection between the head and the tail. In this case, the transported features can contain very lossy information about the original data, and one cannot reconstruct the data only from the transmitted hidden features of the proposed method, as detailed in Appendix D.1.

4 EXPERIMENTAL RESULTS

We examine the performance of TAViT on the following image processing tasks: deblocking (JPEG artifact removal), denoising, deraining, and deblurring. Additional experiments on image inpainting and medical data are also performed to investigate its performance on high-level computer vision tasks and different domain data, respectively; these can be found in Section D.5 of the Appendix. With a single server, we set two clients to carry out FL on the deblocking task and one client for each of the other tasks, so the total number of clients is five in our experiments. We evaluate results using the two metrics of PSNR and SSIM.

Datasets The public datasets we used are as follows. For deblocking and denoising, we use 10,582 images from PASCAL VOC 2012 (Everingham et al., 2010) and the Segmentation Boundaries Dataset (SBD) (Hariharan et al., 2011). In particular, for FL on the deblocking, we split the data into two sets of 5,291 images and distribute them to each client. Deblocking results are evaluated on the Berkeley Segmentation Database (BSD500) (Martin et al., 2001b), which provides 200 test images. For the denoising, we apply random Gaussian noise with a level of σ = 30 to the images. The denoising model is evaluated on CBSD68, which contains 68 test images extracted from the BSD500. For deraining, following the experimental setting of Jiang et al. (2020), we use data from Rain14000 (Fu et al., 2017b), Rain1800 (Yang et al., 2017), Rain800 (Zhang et al., 2019a), and Rain12 (Li et al., 2016), which provide 13,711 pairs of clean and synthetic rain images. Deraining results are evaluated on Rain100H and Rain100L (Yang et al., 2017), each of which has 100 synthetic rainy images. Deblurring is performed using the GoPro dataset (Nah et al., 2017), which contains 2,103 and 1,111 pairs of sharp and blurry images for the training and test sets, respectively.
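As a small illustration of the denoising data synthesis mentioned above, the following sketch adds Gaussian noise with σ = 30; the assumption that images are stored as floats in the [0, 255] range is ours (σ = 30 is most naturally read against that scale).

import torch

def add_gaussian_noise(img, sigma=30.0):
    # img: float tensor in [0, 255]; returns the noisy counterpart for training pairs
    return (img + sigma * torch.randn_like(img)).clamp(0.0, 255.0)

noisy = add_gaussian_noise(255.0 * torch.rand(1, 3, 64, 64))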
Implementation details To implement our TAViT, we use the encoder and decoder of DDPM (Ho et al., 2020) with three stages as the backbone of each head and tail at the client. For the Transformer body on the server, we use 8 encoder layers of the vanilla Transformer (Vaswani et al., 2017) with 256 words and 512 embedding dimensions. The total number of parameters of the networks at each client and at the server is about 28M and 17M, respectively. Using 4 Nvidia Quadro RTX 6000 cards and 2 Nvidia Geforce GTX 1080Ti cards, we train the networks using the Adam optimizer with a learning rate of 3 × 10−5. We initialize the parameters of the networks with those of a model pre-trained by an autoencoder scheme. For data augmentation, we apply random horizontal and vertical flipping, rotation by 90 degrees, and cropping to a patch size of 64 × 64 × 3. For three cycles, with the batch size set to 20, we perform the task-specific learning for 200, 400, 400, and 2000 epochs on deblocking, denoising, deraining, and deblurring, respectively. Also, we perform the task-agnostic learning for 1000 epochs with 1/4 of the data for each task. We implement our TAViT using the PyTorch library under a BSD-style license, using the Flower federated learning protocol (Beutel et al., 2020) under the Apache 2.0 License. The details of the datasets and implementation are described in Appendix B.

4.1 RESULTS

Convergence of TAViT for multi-task distributed learning We evaluated the results of the proposed TAViT for multi-task distributed learning with all participating clients and one common Transformer body on the server. Figure 3 shows the gradual progression of the quality metrics through the alternating training scheme. The performance on all tasks improved as the task-specific learning and the task-agnostic learning continued. This demonstrates the synergistic improvement of the task-specific heads/tails and the task-agnostic body: the heads and tails learn a more accurate feature embedding for the given tasks, and the common body learns the global attention general to multiple image processing tasks by looking at various datasets. Although there were some tasks in which the score of a step was slightly lower than that of the previous step due to the interaction of different task datasets, the overall performance of TAViT improved as the cycles progressed. Detailed quantitative results for each cycle are described in Appendix C.

Comparison of TAViT to other strategies We compared our TAViT with other distributed learning strategies: SL and FL. We conducted both SL and FL with the two clients assigned to the deblocking task. For SL, as designed in Vepakomma et al. (2018), we placed the head and tail networks on the clients and the body on the server, and trained those split networks without weight aggregation for the head and tail. For FL, we placed the entire model composed of the head, body, and tail on each client, and trained the network in parallel by carrying out the aggregation with FedAvg (McMahan et al., 2017a) using a common server. Figure 4 shows these scenarios, where C1 and C2 are the clients for the deblocking, C3, C4, and C5 are the clients for the denoising, deraining, and deblurring, and S is the server. As reported in Table 1, the proposed method achieves better performance compared to the other strategies, even though ours learns multiple tasks.
Comparison of TAViT to learning each task separately To verify the capability of the task-agnostic Transformer body to learn from multiple tasks, we compared TAViT with models trained independently on each individual task. Under the setting of centralized data for each task, we implemented this study in two ways: end-to-end learning (EL) and single-task learning (STL). For EL, we trained the whole network on one device with an end-to-end optimization scheme. For STL, we distributed the decomposable head, body, and tail to a client and a server as in the proposed method, and trained the networks with the alternating training strategy for one cycle. Table 2 reports the results on the benchmark datasets for each task. It shows that TAViT, trained on multiple tasks simultaneously, outperforms both EL and STL, which suggests that the task-agnostic body of our framework is not degraded by task heterogeneity but rather enhances the performance on the various tasks.

Comparison of TAViT to CNN-based models To compare the performance of TAViT with CNN-based deep learning models, we tested several existing methods on the benchmark datasets for each task. Table 2 and Figure 5 show the quantitative and visual comparison results, respectively. For deblocking, compared with DnCNN (Zhang et al., 2017a), AR-CNN (Dong et al., 2015), and QCN (Li et al., 2020b), the proposed method performs better at both quantization quality levels 10 and 50. Visual comparisons also show that the proposed method removes block artifacts more cleanly than the others. For denoising, we compared our method with CBM3D (Dabov et al., 2007), DnCNN (Zhang et al., 2017a), FFDNet (Zhang et al., 2018b), IRCNN (Zhang et al., 2017b), DHDN (Park et al., 2019), and SADNet (Chang et al., 2020). The results show that our TAViT achieves better PSNR/SSIM scores and also provides more clearly denoised images, preserving structure and texture details better than the comparison methods. For deraining, we compared our model with DerainNet (Fu et al., 2017a), SEMI (Wei et al., 2019), UMRL (Yasarla & Patel, 2019), PreNet (Ren et al., 2019), and MSPFN (Jiang et al., 2020). Following Jiang et al. (2020), we used the Y channel of the YCbCr color space for the evaluation. As a result, our model outperforms the comparative methods on both Rain100H and Rain100L. The images restored by our method are also closer to the references, removing rain streaks better than the others. For deblurring, we employed DeblurGAN (Kupyn et al., 2018), Nah et al. (2017), Zhang et al. (2018a), and DeblurGANv2 (Kupyn et al., 2019) for comparison. The results show that the proposed method achieves performance comparable to the existing approaches. Visual results show that TAViT restores blurry images with sharp edges, while the others still contain blurry artifacts or position shifts of objects relative to the references.

5 CONCLUSION

In this work, we presented a multi-task distributed learning framework called TAViT. In TAViT, the task-specific head CNN and tail CNN are distributed to clients that have their own data, and are connected to a common Transformer body placed on the server. With an alternating training scheme, the heads and tails on the client sides are trained by task-specific learning, while the body is trained by task-agnostic learning.
We conduct experiments on four different image processing tasks, which show the success of the task-agnostic learning of the Transformer body and its synergistic improvement with the task-specific heads and tails. With our model, the participating clients can design and train their own networks for their tasks using local data in parallel. We expect that the proposed TAViT can be used efficiently in cases where sharing data with other institutions is sensitive, such as in medical fields.

Ethics statement As our work uses distributed learning models, similar to the existing FL and SL, our method may be vulnerable to privacy attacks against the server such as inversion attacks (Yin et al., 2021). Although the proposed framework encodes the feature maps and gradients under the Flower protocol, which makes it difficult for attackers to restore the original data, the hidden features may still leak the raw data to some degree. Thus, privacy-related techniques such as differential privacy (McMahan et al., 2017b) and authenticated encryption of data (Rogaway, 2002) should be analyzed for practical applications.

Reproducibility statement The source code and our trained models to reproduce the proposed method are available at https://github.com/TAViT2022/TAViT. For detailed pseudocode, refer to Appendix A. The data processing steps for the datasets used in the experiments are provided in Appendix B.

A DETAILS OF TAVIT WITH PSEUDOCODE

As described in the main paper, the task-specific heads and tails at the clients and the Transformer body at the server are trained in an alternating manner between the proposed task-specific learning and task-agnostic learning. In the following, we describe each step in more detail in terms of its implementation.

Pseudocode for the task-specific learning Algorithm 2 shows the pseudocode for the task-specific learning. Given K image processing tasks, the task-specific learning updates the heads H and the tails T at each client with the body B fixed. The server first initializes the global weights of the heads and the tails and sends them to all clients in C_k, where C_k is the set of clients with different datasets for the k-th task.

Algorithm 2 Task-specific learning of TAViT. I_s denotes the number of iterations of the task-specific learning in one cycle. (H_c, T_c) denote the head and tail of a client c ∈ C_k for the k-th task, and B denotes the Transformer body on the common server. (H_Ck, T_Ck) are the global weights of the heads and tails for task k. (f_H, f_B) are the output feature maps of the head and the body, and ŷ is the output of the tail. ℓ_c is the task-specific loss of client c. |D_j| is the size of the training data at c_j, and |D| = Σ_j |D_j| is the total size of the training data in C_k.

/* run on the server (with fixed B) */
Initialize H_Ck, T_Ck
Send H_Ck, T_Ck to all clients c ∈ C_k
for i_s in [1, I_s] do
    for each client c ∈ C_k, k ∈ {1, 2, . . . , K}, in parallel do
        f_H ← ClientPhase1()
        f_B ← B(f_H)                                  // body output
        ∂ℓ_c/∂f_B ← ClientPhase2(f_B)
        ∂ℓ_c/∂f_H ← (∂ℓ_c/∂f_B) · (∂f_B/∂f_H)         // backpropagation through body
        ClientUpdate(∂ℓ_c/∂f_H)
    end
    if i_s is a weight aggregation step and |C_k| > 1 then
        Get (H_cj, T_cj) from each client c_j, j ∈ {1, 2, . . . , N_k}
        H_Ck ← Σ_{j=1}^{N_k} (|D_j| / |D|) H_cj        // FedAvg for heads
        T_Ck ← Σ_{j=1}^{N_k} (|D_j| / |D|) T_cj        // FedAvg for tails
        Send (H_Ck, T_Ck) to all clients c ∈ C_k
    end
end

/* run on client c */
Function ClientPhase1()
    x, y ← set current data and label
    f_H ← H_c(x)                                       // head output
    return f_H

/* run on client c */
Function ClientPhase2(f_B)
    ŷ ← T_c(f_B)                                       // tail output
    ℓ_c ← Loss(y, ŷ)
    ∂ℓ_c/∂T_c ← (∂ℓ_c/∂ŷ) · (∂ŷ/∂T_c)                  // computation of tail gradients
    ∂ℓ_c/∂f_B ← (∂ℓ_c/∂T_c) · (∂T_c/∂f_B)
    return ∂ℓ_c/∂f_B

/* run on client c */
Function ClientUpdate(∂ℓ_c/∂f_H)
    ∂ℓ_c/∂H_c ← (∂ℓ_c/∂f_H) · (∂f_H/∂H_c)              // computation of head gradients
    update H_c, T_c using ∂ℓ_c/∂H_c and ∂ℓ_c/∂T_c with an optimizer, e.g. Adam

When each client c ∈ C_k takes local training data x and provides the feature map f_H from its head H_c to the server (ClientPhase1), the server-side Transformer body takes f_H as an input embedding and estimates the self-attended features f_B, which are independent of the specific tasks. Once f_B is sent back to the client, the client computes the task-specific loss ℓ_c between the label y and the tail output ŷ (ClientPhase2). The gradient of the tail ∂ℓ_c/∂T_c is also computed at the client in ClientPhase2 and is used to compute ∂ℓ_c/∂f_B, which is transported to the server. Then, on the server, ∂ℓ_c/∂f_H is calculated by backpropagation through the body and sent to the client, so that the client can compute the head gradient ∂ℓ_c/∂H_c and finally update H_c and T_c (ClientUpdate) using a single optimizer. When there are multiple clients for the k-th task, i.e. |C_k| > 1, we apply federated learning to the heads and tails of these clients, as described in Algorithm 2. The heads and tails are trained in parallel, and their weights are aggregated by FedAvg (McMahan et al., 2017a) on the server side at every weight aggregation period. The updated global weights of the heads H_Ck and tails T_Ck are then transmitted to all clients in C_k, so that the clients continue training their own head and tail from the new global weights in the next step.

Pseudocode for the task-agnostic learning In the task-agnostic learning, the Transformer body on the server is updated with the heads and tails of the clients fixed. Algorithm 3 shows the pseudocode for the task-agnostic learning of TAViT.

Algorithm 3 Task-agnostic learning of TAViT. I_a denotes the number of optimization iterations of the task-agnostic learning in one cycle. (H_c, T_c) denote the head and tail of a client c ∈ C_k for the k-th task, and B denotes the Transformer body on the common server. (f_H, f_B) are the output feature maps of the head and the body, and ŷ is the output of the tail. ℓ_c is the task-specific loss of client c.

/* run on the server */
Initialize C_B = {c¹_{n1}, c²_{n2}, . . . , c^K_{nK}}, where c^k_{nk} ∈ C_k
for i_a in [1, I_a] do
    c ← c^k_{nk} ∈ C_B                                 // random selection of a client with task k
    f_H ← ClientPhase1()
    f_B ← B(f_H)                                       // body output
    ∂ℓ_c/∂f_B ← ClientPhase2(f_B)
    ∂ℓ_c/∂B ← (∂ℓ_c/∂f_B) · (∂f_B/∂B)                  // computation of body gradients
    update B using ∂ℓ_c/∂B with an optimizer, e.g. Adam
end

/* run on client c (with fixed H_c, T_c) */
Function ClientPhase1()
    x, y ← set current data and label
    f_H ← H_c(x)                                       // head output
    return f_H

/* run on client c (with fixed H_c, T_c) */
Function ClientPhase2(f_B)
    ŷ ← T_c(f_B)                                       // tail output
    ℓ_c ← Loss(y, ŷ)
    ∂ℓ_c/∂f_B ← (∂ℓ_c/∂ŷ) · (∂ŷ/∂f_B)                  // backpropagation through tail
    return ∂ℓ_c/∂f_B

Given the subset of clients C_B, constructed by selecting one client from C_k for each task, a client c ∈ C_B is randomly chosen at every iteration. Compared with the task-specific learning, the task-agnostic learning is implemented similarly but does not need the ClientUpdate process of Algorithm 2. In other words, after the gradient ∂ℓ_c/∂f_B is computed on the client side in ClientPhase2 and transmitted to the server, the server updates the Transformer body by computing the body gradient ∂ℓ_c/∂B, which is the final step of each iteration.
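To make the feature/gradient handoff of Algorithms 2 and 3 concrete, below is a minimal single-process PyTorch sketch of one task-specific update. In the real system the detached tensors are transmitted between client and server (e.g. via the Flower protocol), and the task-agnostic step reuses the same handoff while updating only the body; the function and variable names are illustrative, not the authors' code.

```python
# Single-process simulation of one task-specific step: the detach() calls
# mark the points where tensors cross the client/server boundary. Body
# parameters are assumed frozen (requires_grad=False) during this phase.
import torch

def task_specific_step(head, body, tail, opt_client, x, y, loss_fn):
    # Client: head forward; the feature f_H is "sent" to the server.
    f_H = head(x)
    f_H_server = f_H.detach().requires_grad_(True)
    # Server: body forward with the frozen body.
    f_B = body(f_H_server)
    f_B_client = f_B.detach().requires_grad_(True)   # sent back to the client
    # Client: tail forward, task-specific loss, gradient w.r.t. body output.
    loss = loss_fn(tail(f_B_client), y)
    loss.backward()               # fills tail gradients and f_B_client.grad
    # Server: backpropagate through the body to get the gradient w.r.t. f_H.
    f_B.backward(f_B_client.grad)
    # Client: backpropagate through the head, then update head and tail only.
    f_H.backward(f_H_server.grad)
    opt_client.step()
    opt_client.zero_grad()
    return loss.item()
```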
B DETAILS OF DATASETS AND IMPLEMENTATION

B.1 LICENSE/SOURCE FOR EACH DATASET

In our experiments, we use public datasets for the image deblocking, denoising, deraining, and deblurring tasks. Here, we give the specific information for each dataset, such as its license and source link.

PASCAL VOC 2012 The PASCAL VOC dataset (Everingham et al., 2010) is publicly available and includes images obtained from the "flickr" website under SmugMug or its third-party licensors. The data are protected by United States and international intellectual property laws. The data source is the URL: http://host.robots.ox.ac.uk/pascal/VOC/.

BSDS500 and CBSD68 The Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) (Arbeláez et al., 2011) is an extended version of BSDS300 (Martin et al., 2001b), a public dataset originally provided for image segmentation and boundary detection by the Berkeley Computer Vision Group. This dataset is widely used for measuring image restoration performance. The color BSD68 dataset (CBSD68) is extracted from BSDS500. BSDS500 can be downloaded at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html.

Synthetic rainy images The synthetic rainy training data are collected from Rain14000, synthesized by Fu et al. (2017b), Rain1800 by Yang et al. (2017), Rain800 by Zhang et al. (2019a), and Rain12 by Li et al. (2016). We test our method on the synthetic rainy datasets Rain100H and Rain100L, both by Yang et al. (2017). All these datasets are publicly available and can be downloaded at the following links:
- Rain14000: https://xueyangfu.github.io/projects/cvpr2017.html
- Rain1800: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html
- Rain800: https://github.com/hezhangsprinter/ID-CGAN
- Rain12: https://yu-li.github.io/
- Rain100L & Rain100H: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html

GoPro The GoPro dataset (Nah et al., 2017) provides training and test sets for deblurring. The data are available at https://seungjunnah.github.io/Datasets/gopro.html.

B.2 DATA PROCESSING

All datasets used in our experiments provide natural images with three RGB channels and pixel values in the range [0, 255]. On these datasets, we performed the following data processing according to the image processing task. For the image deblocking task, we quantized the images following JPEG compression. We first transformed each RGB image into the YUV color space using the following equations:

Y = 0.257R + 0.504G + 0.098B + 16 (7)
U = −0.148R − 0.291G + 0.439B + 128 (8)
V = 0.439R + 0.368G − 0.071B + 128 (9)

Then, we split the image into non-overlapping 8 × 8 blocks and applied the Discrete Cosine Transform (DCT) to each block. According to the quantization quality level, we divided each element of the DCT coefficients by a predefined quantization matrix. After that, we applied the inverse DCT, aggregated all blocks into an image, and transformed the image from YUV back to the RGB color space:

R = 1.164(Y − 16) + 1.596(V − 128) (10)
G = 1.164(Y − 16) − 0.392(U − 128) − 0.813(V − 128) (11)
B = 1.164(Y − 16) + 2.017(U − 128) (12)

For the denoising task, we added Gaussian noise to the clean images. Specifically, we applied random Gaussian noise with level (µ, σ) = (0, 30) to the images and clipped the values to [0, 255]. For the other tasks, the Rain# and GoPro datasets provide synthetic rainy images and blurry images, respectively; since we used these datasets for deraining and deblurring, we did not perform any preprocessing such as synthesizing rain artifacts or blur effects. After the above data processing for all tasks, we randomly cropped the images with a patch size of 64 × 64 × 3 and applied data augmentation using random flipping and rotation by 90 degrees. Finally, we normalized the pixel values from [0, 255] to [−1, 1] to obtain the final inputs to the model.
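As a concrete illustration of the deblocking data synthesis above, below is a minimal sketch of the blockwise DCT quantization for a single channel. The standard JPEG luminance table used as the "predefined matrix" and the use of scipy are assumptions for illustration; the paper's quality-dependent matrices are not specified here.

```python
# Illustrative sketch of the 8x8 blockwise DCT quantization used to create
# deblocking training pairs (quality-dependent scaling of Q omitted).
import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def jpeg_quantize_channel(ch, Q=Q):
    """Apply DCT -> quantize -> dequantize -> inverse DCT on 8x8 blocks."""
    out = ch.astype(np.float64).copy()
    h, w = ch.shape
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            block = out[i:i+8, j:j+8] - 128.0
            coef = np.round(dctn(block, norm='ortho') / Q) * Q
            out[i:i+8, j:j+8] = idctn(coef, norm='ortho') + 128.0
    return np.clip(out, 0, 255)
```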
B.3 NETWORK ARCHITECTURES

For the task-specific head and tail of each task, we use the network architecture of DDPM (Ho et al., 2020), which is composed of residual blocks and attention modules. We set the number of two-fold downsampling/upsampling stages to 3, with channel sizes of 128, 256, and 512, respectively. Accordingly, given an input image x ∈ R^{64×64×3}, the head provides a feature map f_H ∈ R^{16×16×512} that passes through the body, and the tail generates an output of the same size as the input. For the Transformer body, we use the encoder part of the vanilla Transformer (Vaswani et al., 2017). As described in the main paper, the Transformer body takes a sequence of patches f, obtained by reshaping the feature map f_H, as word embeddings. In our experiments, the length of the input sequence is 256 (with a patch size of 1) and the sequence dimension is 512. Once the input sequence is added to learnable positional encodings, the encoded features h pass through L encoder layers (L = 8 in our experiments). Table 3 shows the structure of each encoder layer of the body.

Model sizes Table 4 shows the sizes of the task-specific head and tail and of the Transformer body we used. Comparing the numbers of parameters and the network sizes, we observe that the client-side networks composed of the head and the tail are larger than the task-agnostic Transformer body. Considering the experimental results in the main paper, this suggests that the body estimates task-agnostic self-attended features that provide the synergy effect of the task-specific and task-agnostic learning, even though the body is smaller than the head and tail combined.

C EXPERIMENTAL RESULTS

C.1 TAVIT ON MULTIPLE IMAGE PROCESSING TASKS

Evaluation results of TAViT Table 5 reports the quantitative evaluation results of TAViT on the multiple image processing tasks, visualized as score curves over the cycles in the main paper. Figure 6 shows the qualitative results of TAViT. The performance on each task improves over the cycles alternating between the task-specific and task-agnostic learning.

Table 5: Quantitative results of TAViT according to the cycles, which are visualized with graphs in the main paper. The best results are highlighted in bold.
Cycle | Deblocking, BSDS500 (Q10) | Deblocking, BSDS500 (Q50) | Denoising, CBSD68 (σ = 30) | Deraining, Rain100H | Deraining, Rain100L | Deblurring, GoPro (each cell: PSNR / SSIM)
0.5 | 27.53 / 0.781 | 32.92 / 0.921 | 30.57 / 0.868 | 28.24 / 0.860 | 33.17 / 0.939 | 28.94 / 0.871
1.0 | 27.57 / 0.782 | 33.01 / 0.922 | 30.62 / 0.869 | 28.75 / 0.862 | 32.69 / 0.936 | 29.09 / 0.873
1.5 | 27.61 / 0.784 | 33.05 / 0.923 | 30.57 / 0.870 | 28.57 / 0.869 | 33.58 / 0.945 | 29.63 / 0.885
2.0 | 27.65 / 0.785 | 33.14 / 0.924 | 30.66 / 0.870 | 28.79 / 0.867 | 33.50 / 0.944 | 29.72 / 0.887
2.5 | 27.64 / 0.785 | 33.14 / 0.924 | 30.62 / 0.870 | 29.25 / 0.875 | 34.30 / 0.949 | 29.96 / 0.893
3.0 | 27.69 / 0.786 | 33.21 / 0.924 | 30.69 / 0.871 | 29.35 / 0.875 | 33.88 / 0.947 | 30.06 / 0.894

Figure 6: Qualitative results of TAViT according to the cycles. From the left to the right columns: the deblocking results on images with quantization quality 10 and 50, the denoising results, the deraining results on Rain100H and Rain100L, and the deblurring results. The yellow value is the PSNR, and each inset box is a magnified view of the red rectangle.

Qualitative comparisons Besides the results presented in the main paper, here we show more visual comparisons of our TAViT to the existing methods. Figures 7, 8, 9, and 10 display the deblocking, denoising, deraining, and deblurring results, respectively. All these results verify that our TAViT, as a distributed learning method for multiple image processing tasks, outperforms the comparison methods.

C.2 ABLATION STUDY OF TAVIT

Study on the amount of data for each task in the task-agnostic learning In the main paper, we used 1/4 of the dataset for each task in the task-agnostic learning. To verify that this amount of data is sufficient, we performed an ablation study with a different data ratio of 1/2 for each of the deblocking, denoising, deraining, and deblurring tasks. Table 6 and Figure 11 show the quantitative results of TAViT on the multiple tasks using 1/2 of the data in the task-agnostic learning. Similar to the results with the 1/4 data ratio, the PSNR and SSIM scores tend to increase as the cycles continue. Comparing the best results from the 1/4 and 1/2 data ratios, we observe that the performance on each task with even 1/4 of the data is comparable to or better than with 1/2 of the data. This suggests that using 1/4 of the data for each task in the task-agnostic learning is sufficient to train the Transformer body and obtain high performance.

Study on the weight aggregation period In the main paper, we conducted the TAViT experiment by applying FL to the deblocking task, which has two clients with their own data. In FL, the network weights of each client are averaged on the server at every weight aggregation period, which is given as a hyperparameter. Since this period can influence learning performance, as the clients and the common server communicate to aggregate the network weights, we performed an ablation study on the weight aggregation period used for training the client-side networks. As reported in Table 7, for the deblocking task, we trained the model with aggregation periods of 20, 50, and 100 epochs. Evaluating the deblocking results, weight aggregation every 50 epochs provides the best performance, with 27.53dB/0.781 and 32.92dB/0.921 PSNR/SSIM for quality 10 and 50, respectively. This verifies that the weight aggregation period of 50 epochs used in the main paper is appropriate for training and evaluating the proposed TAViT in our experiments.
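For reference, the FedAvg aggregation applied at each aggregation period above can be sketched as follows; the use of PyTorch state_dicts and the function name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of FedAvg over client-side heads/tails (Eq. (3) of the
# main paper): a weighted average with weights proportional to data size.
import torch

def fedavg(state_dicts, dataset_sizes):
    """Return the weighted average of a list of model state_dicts."""
    total = float(sum(dataset_sizes))
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(sd[key].float() * (n / total)
                       for sd, n in zip(state_dicts, dataset_sizes))
    return avg

# e.g. every 50 epochs (the aggregation period used in the paper):
# new_head = fedavg([head1.state_dict(), head2.state_dict()], [5291, 5291])
```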
D DISCUSSION

D.1 SKIP-CONNECTION OF HEAD AND TAIL FOR PRIVACY PRESERVATION

When configuring the task-specific heads and tails with skip-connections, our model can avoid privacy attacks to some degree while maintaining the encoding information needed by the tail to generate outputs. This is because the skip-connected features are isolated on each client and are not transported to the server. Accordingly, the features transported between the clients and the server may contain far less information about the original data. Figure 12 shows examples of the outputs with and without skip-connections. The network output without skip-connections barely retains the properties of the original data, which indicates that one may not be able to reconstruct the original data from the transmitted hidden features of the proposed method.

D.2 EFFECT OF TASK-AGNOSTIC TRANSFORMER BODY

As described in the main paper, the reason for building our model from CNN-based heads/tails and a Transformer-based body is to take advantage of each network. In particular, a Transformer learns global attention over the input sequence through self-attention modules and has recently been studied extensively for various computer vision tasks. One of the most unique advantages of the Transformer is that it converts "unattended" input feature vectors into "attended" output feature vectors by learning global attention and non-local interactions between the input features. Accordingly, the task-specific head/tail can be trained to learn only task-specific local features, whereas the global features are learned by the Transformer body. Such a disentangled representation of local and non-local features has been pursued throughout the development of deep networks, and the proposed Transformer-based approach is one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance while also enabling the privacy-preserving split-learning architecture. To show that this design is appropriate for multi-task distributed learning, we additionally conducted an experiment replacing the Transformer body with a CNN model. Specifically, we configured the CNN body from CBR blocks, where C is a convolutional layer with a constant channel size of 512, B is a batch normalization layer, and R is a ReLU activation layer. For a fair comparison, we stacked 7 CBR blocks so as to have almost the same number of learnable parameters as the Transformer body we used (16,522,240 for the CNN body vs. 16,953,344 for the Transformer body). Using this CNN body, we ran the proposed task-specific and task-agnostic learning for one cycle on the multiple image processing tasks, as in the main paper. As shown in Table 8, our model with the Transformer body achieves higher performance in both the task-specific and task-agnostic learning. This indicates that the Transformer can serve as a general task-agnostic body for multi-task learning.

D.3 SAMPLING STRATEGY OF CLIENTS

When there are multiple clients for one task in the task-specific learning, the task-specific networks of the clients are aggregated following the sampling strategy of FedAvg. In the task-agnostic learning of the proposed TAViT, on the other hand, one client is sampled per iteration. Since the networks of the clients for the same task are aggregated before the task-agnostic learning, we can readily sample one client for each task. Then, choosing one client for the subset of Eq. (4)
can be viewed as sampling one task, which naturally reduces the communication cost. In fact, the performance of TAViT is not affected by the number of sampled clients in the task-agnostic learning, since the task-agnostic body is updated for a sufficient number of iterations. To demonstrate this, we performed the task-agnostic learning for the four tasks in our experiments while varying the sampling strategy. Table 9 shows the results after training our model for one cycle according to the number of sampled clients in the task-agnostic learning. Sampling one client achieves comparable or higher performance on all tasks compared to sampling more than one client. This supports that our sampling strategy is an efficient way to train the Transformer body with less communication cost, even as the number of clients increases.

D.4 COMMUNICATION COST BETWEEN CLIENTS AND SERVER

In the proposed TAViT, features and gradients of the networks are transported between the clients and the server, so one may wonder how much additional communication cost this incurs. To compute the communication cost of our method, we assume that the cost is proportional to the number of transported elements. Also, since the features and gradients sent from the clients to the server have the same size as those sent from the server to the clients, we only consider one direction, from clients to server. We then computed the maximum cost of one communication to update our model in each of the task-specific and task-agnostic learning phases, and compared it to the cost of FL (McMahan et al., 2017a). Specifically, when there are N_k clients for the k-th task, let P_H, P_B, and P_T be the numbers of parameters of the head, body, and tail, respectively. For FL, which aggregates the whole model composed of the head, body, and tail, the cost per communication is

Cost_FL = N_k (P_H + P_B + P_T). (13)

In contrast, our model does not require transporting learnable parameters except at the aggregation steps of the task-specific learning. Thus, the communication cost is

Cost_TAViT = N_k (P_H + P_T)  at an aggregation step (task-specific learning),
             N_k (F + G)      at a non-aggregation step (task-specific learning),
             F + G            otherwise (task-agnostic learning), (14)

where F and G are the numbers of elements of the transported features and gradients, respectively. From (14), we see that the communication cost at the network aggregation step of the task-specific learning is smaller than that of FL, which must transport the parameters of the whole model including head, body, and tail. Specifically, instead of aggregating the parameters of the Transformer body, TAViT transports features and gradients that are much smaller than the body parameters, which reduces the cost per communication significantly. For example, the proposed model for the deblocking task contains P_H + P_B + P_T = 44,774,792 parameters, whose memory size is about 207.5MB. Suppose that 10 clients participate in FL to train this model. Then 447.7M elements are transported from the clients to the server, and the server network must handle more than a 2GB load per communication. In contrast, our model transports P_H + P_T = 27,952,520 parameters, whose memory size is approximately 142.5MB. Thus, even with 10 clients, 279.5M elements are transported, and the server network handles about 60% of the FL load. In addition, since the number of feature and gradient elements is F = G = 20 × 16 × 16 × 512 = 2,621,440, corresponding to 10MB of memory, the number of transported elements per communication for 10 clients is 52.4M, and the server is loaded with only about 200MB per communication. In the task-agnostic learning, the server updates the body with one sampled client and without any weight aggregation, so only the features and gradients are transported from the client to the server. Moreover, for the communication from the server to the client, the server does not need to transport gradients at all, but only the features, so the cost per communication in the task-agnostic learning phase is reduced even further. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage increases further when a bigger Transformer body is used for a better representation of global attention.
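The cost comparison of Eq. (13)-(14) can be reproduced in a few lines; the sketch below uses the element counts quoted in this section and is purely illustrative.

```python
# Element counts per communication, following Eq. (13)-(14) and the
# deblocking example above (batch size 20, 16x16x512 feature maps).
P_H_plus_T = 27_952_520          # head + tail parameters
P_total    = 44_774_792          # head + body + tail parameters
F = G = 20 * 16 * 16 * 512      # feature / gradient elements

def cost_fl(num_clients):
    return num_clients * P_total                      # Eq. (13)

def cost_tavit(num_clients, step):
    if step == "aggregation":                         # task-specific, FedAvg step
        return num_clients * P_H_plus_T
    if step == "non-aggregation":                     # task-specific, normal step
        return num_clients * (F + G)
    return F + G                                      # task-agnostic step

print(cost_fl(10))                        # 447,747,920 elements (~447.7M)
print(cost_tavit(10, "aggregation"))      # 279,525,200 elements (~279.5M)
print(cost_tavit(10, "non-aggregation"))  # 52,428,800 elements (~52.4M)
```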
Scalability Suppose there are K tasks, and let the total number of clients connected to a server be N_all. For a simple scalability analysis, we assume that each communication between the clients and the server takes constant time, and we measure scalability by the time complexity of one communication round between the clients and the server to update the models. For our task-specific learning, one round has time complexity O(N_all) if we update the heads and tails of all clients. This means the communication cost grows with the number of clients, which can limit the scalability of the proposed method. However, if we apply the client sampling strategy of FedAvg, we can control the number of communications, and one round has time complexity Ω(K). This sampling strategy can be readily adapted to our model without significant modification. For the task-agnostic learning phase, on the other hand, one round has time complexity O(K), since the network parameters of the clients for the same task are aggregated before the task-agnostic optimization. Moreover, with the proposed strategy of sampling one task, one round has time complexity Ω(1), as studied in Appendix D.3.

D.5 APPLICATION TO HIGH-LEVEL VISION TASKS AND MEDICAL DATA

In the main paper, the proposed TAViT was demonstrated on multiple low-level computer vision tasks. However, the TAViT framework can also be used for a wide range of high-level computer vision tasks, and even for different data domains such as medical images. To demonstrate this, we additionally conducted experiments on an inpainting task for natural images and a denoising task for X-ray CT images. Here, image inpainting is a higher-level computer vision task that requires more semantic information, and X-ray CT denoising requires domain-specific knowledge about the data. In particular, to show that our task-agnostic Transformer body has a positive effect on the training of new task-specific networks, we performed the task-specific learning only for the client-side heads and tails while subscribing to the pre-trained Transformer body, which was trained on the four natural image processing tasks, without additional fine-tuning. The training details and results are as follows.

Dataset For the image inpainting task, we used the PASCAL VOC 2012 dataset, which contains 10,582 natural images. Information about the license and source of this dataset can be found in Appendix B.
For the preprocessing, we scaled the images from [0, 255] to [−1, 1] and randomly cropped 128 × 128 patches. We then multiplied each image with a zero-box mask whose width and height are drawn at random from 48 to 64, following Yu et al. (2018). For the X-ray CT denoising task, we used the 2016 AAPM Low-dose CT Grand Challenge dataset (McCollough et al., 2020), which provides noisy CT images at quarter dose and clean CT images at routine dose of X-ray. Since the X-ray CT data are measured in Hounsfield units, we divided the intensity by 4,000 and randomly cropped 64 × 64 patches.

Implementation details For the image inpainting task, we employed the network architecture of Yu et al. (2018) and decomposed it into two parts for the task-specific head and tail. We performed the task-specific learning by minimizing the adversarial generative loss for 400 epochs using the Adam optimizer with a learning rate of 1 × 10−4. For the X-ray CT denoising task, we used the same head and tail architecture implemented in this paper and trained the task-specific networks with the fixed task-agnostic body for 400 epochs using the Adam optimization algorithm with a learning rate of 5 × 10−3.

Results To evaluate the performance on image inpainting and medical image denoising, we compared our method to a CNN model that has the same head and tail architectures as ours but no Transformer body. The quantitative evaluation results are shown in Table 10, and visual comparisons are shown in Figure 13. We can see that the inpainting performance improves when training the client-side networks with our pre-trained Transformer body, even though the body was pre-trained on low-level computer vision tasks. This implies that the proposed method can be extended to various high-level tasks. In addition, our model on the medical image denoising task achieves higher performance than the comparative CNN model and provides clean images, even though the Transformer body was trained on the natural image domain. These results confirm that our task-agnostic Transformer body has the capability to bridge the domain gap between different data sources, and suggest that clients do not need to train the server-side body from scratch when they subscribe to the body for other tasks.
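For concreteness, the zero-box masking used in the inpainting preprocessing above can be sketched as follows; the function name and the uniform placement of the box are illustrative assumptions.

```python
# Sketch of the random zero-box masking for inpainting data synthesis:
# one rectangle with side lengths in [48, 64] is zeroed out per image.
import numpy as np

def apply_box_mask(img, rng=np.random):
    """img: HxWxC array in [-1, 1]; returns a copy with one zeroed box."""
    h, w = img.shape[:2]
    bh, bw = rng.randint(48, 65), rng.randint(48, 65)   # 48..64 inclusive
    top, left = rng.randint(0, h - bh + 1), rng.randint(0, w - bw + 1)
    masked = img.copy()
    masked[top:top + bh, left:left + bw] = 0.0
    return masked
```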
1. What is the main contribution of the paper regarding privacy preservation in distributed training?
2. How does the proposed approach differ from previous methods such as Splitfed?
3. What are the strengths and weaknesses of the experimental results presented in the paper?
4. How does the reviewer assess the novelty and significance of the proposed method's use of a transformer-based task-agnostic backbone on the server side?
5. Does the reviewer have any concerns about the design choice of using CNNs as heads/tails and a ViT as backbone?
6. How does the reviewer evaluate the scalability and practicality of the proposed method in terms of the number of clients used in the experiments?
Summary Of The Paper
This work is aimed at distributed "privacy preserving" training of neural networks for image processing tasks like deblocking, denoising, deraining, and deblurring. "One of the most important contributions" [pg 2] is breaking the neural network model into task-specific convolutional heads and tails (trained on "clients") and a common, task-shared Transformer-based feature backbone, which is trained on the server. The heads/tails and the Transformer backbone are trained in an alternating manner, each assuming the other model is fixed. The proposed method is similar to "SplitFed" (Thapa et al., 2020) but is extended to different tasks (as described above). Experimental results demonstrate: (i) successful training of the neural network models with the proposed method; (ii) better/comparable performance to prior works on distributed/privacy-preserving methods; (iii) better performance using the ViT backbone compared to CNN backbones, and also with the proposed multi-task vs. single-task setting.

Review
Strengths:
(i) The key idea of the paper is communicated effectively. The method is an extension of "SplitFed" (Thapa et al., 2020) to multi-task learning with a shared backbone on the server side.
(ii) The experiments validate that the proposed method is viable and achieves comparable performance to, or marginal improvements over, existing privacy-preserving training methods on the tasks considered in this work.
(iii) Some synergy can be seen in the performance on various tasks due to joint multi-task training, whereby the proposed method outperforms end-to-end and single-task distributed training (Table 2).

Weaknesses:
(i) The key weakness is that the "core contribution" of using a Transformer (ViT) based task-agnostic backbone on the server has incremental novelty over more standard prior works (specifically, Thapa et al.). The key novelty is the joint training of a shared backbone on different tasks, which is a natural extension of the prior method.
(ii) It is unclear why the specific decision of using CNNs as heads/tails (on the clients) and a ViT as the backbone (on the server) is so crucial; could the roles be swapped, i.e., ViT-based heads/tails and a CNN backbone? Overall, this particular design choice has not been shown to be critical for the operation or performance of privacy-preserving distributed learning. Any differentiable neural network can be placed at the server/client side as long as the two can be "plumbed" together.
(iii) The number of clients in the "distributed" experiments is small (perhaps this is prevalent in this research community). For example, only 5 clients are used for 4 tasks (Section 4). Hence, it is difficult to gauge the practical deployment of this method, as common distributed learning issues like clients dropping out and asynchronous communication are not tested.
ICLR
Title
Privacy-preserving Task-Agnostic Vision Transformer for Image Processing
Abstract
Distributed collaborative learning approaches such as federated and split learning have attracted significant attention lately due to their ability to train neural networks using data from multiple sources without sharing data. However, they are usually not suitable for applications where each client carries out a different task with its own data. Recently, the Vision Transformer (ViT) has been widely explored in computer vision applications due to its capability to learn a common representation through global attention over the embedded input sequence. By leveraging the advantages of ViT, here we present a new distributed learning framework for image processing tasks that allows clients to learn multiple tasks with their private data. The key idea arises from a disentangled representation of local and non-local features using a task-agnostic Vision Transformer and task-specific heads/tails. By connecting task-specific heads and tails at the client sides to a task-agnostic Transformer body at the server side, each client learns a translation from its own task to a common representation, while the Transformer body learns global attention between the features embedded in that representation. To enable the decomposition between the task-specific and common representations, we propose an alternating training strategy in which task-specific learning for the heads and tails is run on the clients with the Transformer fixed, alternating with task-agnostic learning for the Transformer on the server with the heads and tails frozen. Once the Transformer body is fully trained with a sufficient number of tasks and clients, additional training of the body is no longer required when a new client is added with a new task; all that is required is training of the client-specific head and tail. Experimental results on multi-task learning for various low-level and high-level computer vision tasks, including medical image data, show that our method synergistically improves the performance of each client's task-specific network while maintaining privacy.
1 INTRODUCTION
Deep learning approaches have demonstrated state-of-the-art performance and fast inference in computer vision tasks (Ronneberger et al., 2015; Zhang et al., 2017a; Wang et al., 2017). In particular, convolutional neural networks (CNNs) can learn a hierarchy of complex image features, and a variety of CNN-based methods have accordingly been developed for denoising (Zhang et al., 2017b; Chang et al., 2020), deraining (Wei et al., 2019; Ren et al., 2019), deblurring (Nah et al., 2017; Kupyn et al., 2019), deblocking (Li et al., 2020b; Maleki et al., 2018), etc. However, the performance of CNNs typically depends on a large amount of training data (Chervenak et al., 2000; Krizhevsky et al., 2017), and it is often difficult to collect data from various entities due to privacy and regulation issues (Price & Cohen, 2019). Since the amount of data from a single source may not be enough, a deep learning framework that can leverage many datasets without violating privacy is required in real-world applications. To address this, distributed collaborative learning (DCL) approaches, which jointly train a single network on multiple systems or devices without revealing the distributed data to a central entity or to other devices, have been investigated (Konečnỳ et al., 2016; McMahan et al., 2017a; Gupta & Raskar, 2018).
For example, federated learning (FL) (McMahan et al., 2017a; Li et al., 2020c) aggregates information from all data at a center under privacy constraints. Thanks to the parallel communication with each client, FL enables fast training of a network across multiple clients. Split learning (SL) (Gupta & Raskar, 2018; Vepakomma et al., 2018) was developed as an enhanced privacy-preserving model that splits a network between clients and a server, so that each client does not share all network parameters but only trains part of the network. Combining the advantages of FL and SL, SplitFed learning (SFL) (Thapa et al., 2020) has recently been proposed to provide efficient training and a high level of privacy with a smaller computational burden. However, with the existing CNN-based methods it is difficult to determine the proper layer at which to split the network. Also, although training data are distributed across clients, all clients usually consider a common learning task. Meanwhile, in many practical image processing applications, it is unlikely that all clients are interested in the same task. For example, some clients may be interested in image denoising (Zhang et al., 2017b), whereas other clients focus on image deblurring (Nah et al., 2017), deraining (Wei et al., 2019), deblocking (Li et al., 2020b), etc. As each task is different from the others, the existing distributed learning frameworks may not work. That said, these image processing tasks still require an understanding of a common image representation, so one may wonder whether there is a systematic way of synergistically learning multiple image processing tasks in a privacy-preserving manner. One of the most important contributions of this work is to show that the Task-agnostic Vision Transformer (TAViT), composed of CNN-based heads and tails and a ViT-based body, is well suited to this purpose. Specifically, the head and tail are placed on each client to learn a specific image processing task, while the body is stored and trained on a server to learn a common representation across all client tasks. In contrast to the existing SL framework, where the network split is arbitrary, TAViT provides a systematic way of splitting neural networks between clients and servers for privacy-preserving training without losing performance. Furthermore, TAViT allows clients to use a common Transformer body model to learn multiple image processing tasks and synergistically improve the performance of their task-specific networks. One may think that the proposed method is similar to the image processing transformer (IPT) (Chen et al., 2020), which also consists of CNN-based heads and tails and a Transformer body. However, IPT requires centralized data and large computational resources for both pretraining and task-specific fine-tuning of the whole model. Moreover, the Transformer in IPT has an encoder-decoder architecture that needs an explicit conditioning vector to adapt the Transformer to a specific task. Thus, to our best knowledge, IPT is not suitable for distributed learning. In contrast, the body of TAViT is an encoder-only Transformer architecture that learns global embedding features of multiple tasks without any conditioning. Besides, by placing the computation of this Transformer body on the server rather than the clients, our framework enables clients to reduce their computational burden while maintaining the overall performance on specific image processing tasks.
In addition, TAViT views the heads and tails at the clients and the body at the server as two players and updates them alternately. Specifically, our training procedure is composed of task-specific learning and task-agnostic learning: the former trains the client-side heads and tails to learn each client's task, while the latter trains the server-side Transformer body to learn a general feature interpretation over multiple tasks. When there are two or more clients for a single task, the parameters of their heads and tails can be aggregated through FL. Accordingly, TAViT offers a seamless integration of the SL and FL approaches to protect privacy. Recall that one of the most unique advantages of the Transformer body is to convert "unattended" input features into "attended" output features by learning global attention and non-local interactions between the input features. Accordingly, with the help of the aforementioned alternating training scheme, the task-specific head/tail can be trained to learn only task-specific local features, whereas the global features are learned by the Transformer. In fact, this disentangled representation of local and non-local features has been pursued throughout the development of deep networks (Ye et al., 2018; Zhang et al., 2019b; Wang et al., 2018). Thus, the proposed Transformer-based approach is considered one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance and at the same time yields a privacy-preserving split-learning architecture. We validate the performance of TAViT on multiple image processing tasks. Experimental results show that our multi-task distributed learning framework with the alternating training strategy outperforms end-to-end learning of each individual task, thanks to the decomposition into a task-agnostic Transformer body and task-specific networks. This suggests that our framework is a promising approach for learning multiple tasks from distributed privacy-sensitive data. In sum, our contributions are summarized as follows:
• We propose a novel distributed learning framework, TAViT, that carries out multiple image processing tasks using distributed data.
• The proposed method consists of task-specific heads and tails on clients and a task-agnostic Transformer body on a server, which reduces the computational cost of clients and does not require centralized data for multi-task learning.
• An alternating training strategy between the task-specific and task-agnostic learning of the split networks shows a synergistic performance improvement, which is demonstrated by experimental results on multiple tasks.

2 RELATED WORKS

Federated learning In the FL setting, multiple clients train on locally stored data while one server aggregates the information of the clients by various methods, including FedAvg (McMahan et al., 2017a). For the efficient implementation of FL, practical challenges such as unstable networks, hardware capacity differences, and statistical heterogeneity of data distributions (Li et al., 2020c; Smith et al., 2017; Li et al., 2018) have been actively studied. Corinzia et al. (2019) performs FL with multiple classification tasks, and He et al. (2020) places a large network on a server and small CNNs on clients and trains them by knowledge distillation. Yao et al. (2019) presents an unbiased gradient aggregation for FL and meta-updating of the model. In contrast, our method is presented for effective learning under task heterogeneity using distributed data.
Although Li et al. (2020a) presents a task-agnostic FL method based on a feature extractor, each client trains its task-specific network independently, whereas our model can learn multiple tasks simultaneously for synergistic performance improvement.

Split learning Split learning (SL) is designed to train networks over distributed data by splitting the networks into two parts, updating the client-part and server-part networks sequentially (Gupta & Raskar, 2018). Extending this idea, Vepakomma et al. (2018) presents several ways to use SL, and Abuadbba et al. (2020) applies SL to 1D CNN models. However, the existing SL methods are designed for CNNs and, to our best knowledge, there is no principled way of splitting the network for the best performance. In particular, Thapa et al. (2020) proposes a combination of FL and SL, but the server requires labels from the clients to update the split networks, which may compromise data privacy. Also, since outputs are generated from a shared network on the server when there are multiple clients, these methods apply only narrowly to a single task. In contrast, our model presents a Transformer-based shared body that enables multi-task learning across clients without sharing data.

Vision Transformer for image processing Recently, inspired by the success of the Transformer in natural language processing (Vaswani et al., 2017; Devlin et al., 2018), Transformer-based image processing methods have been extensively explored (He et al., 2021; Han et al., 2021). In particular, Dosovitskiy et al. (2020) proposes the Vision Transformer (ViT) with an encoder-only architecture for image recognition tasks. Also, Chen et al. (2020) presents the image processing transformer (IPT) that learns low-level vision tasks by pretraining and task-specific fine-tuning. However, to the best of our knowledge, there are no existing works that exploit the ViT architecture for distributed learning applications.

3 PRIVACY-PRESERVING TASK-AGNOSTIC VISION TRANSFORMER

3.1 SUBSCRIPTION-BASED SERVICE MODEL

As illustrated in Figure 1(a), TAViT is designed for subscription-based services. Specifically, a client subscribes to a task-agnostic Transformer model at the server side that has learned global attention over image features from other datasets. The client can then build a head and tail appropriate to its own image processing task and connect them to the Transformer body at the server. At subscription time, there may already be multiple clients subscribing to the same Transformer body. Accordingly, each client can train its own head and tail using its local data, whereas the common Transformer body is regularly updated using embedding features from all subscribers through the alternating training strategy shown in Figure 1(b), or even kept fixed if training has already been performed with a sufficient number of tasks and clients. As a result, the latest version of the Transformer body, trained with more training data, can be maintained at the server side and offered to new clients at the next subscription. Since the local data are neither centralized on one device nor shared with other clients, our framework preserves data privacy. In the proposed framework, we consider the features from the head as a sequence of tokens, similar to natural language processing. Specifically, as shown in Figure 1(c), we reshape the features f of size Y × X × D into a sequence of patches f = {f_1, f_2, . . . , f_S}, where X, Y, and D denote the width, height, and channel dimension of the image features, respectively, S = YX/p² is the number of patches for patch size p, and f_s denotes the s-th patch of the features, of size p² × D. These reshaped features f are fed into the Transformer body as an input sequence, to which learnable positional embeddings are added to keep the position information of each feature patch. The Transformer body consists of several encoder layers proposed in Vaswani et al. (2017), so that the encoded features pass through multi-head self-attention modules and feed-forward modules in each layer. The body output of transformed features is then reshaped back into the original shape of f to be used as input to the tail CNN.
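To illustrate the feature-to-sequence reshaping just described, below is a minimal sketch; the function name is illustrative, and the patch size p = 1 matches the setting in Appendix B (a 16 × 16 × 512 feature map becomes a 256-token sequence of dimension 512).

```python
# Sketch of reshaping a Y x X x D feature map into S = YX/p^2 patches of
# size p^2 * D, as done before the features enter the Transformer body.
import torch

def to_sequence(f: torch.Tensor, p: int = 1) -> torch.Tensor:
    Y, X, D = f.shape
    f = f.reshape(Y // p, p, X // p, p, D).permute(0, 2, 1, 3, 4)
    return f.reshape((Y // p) * (X // p), p * p * D)   # (S, p^2 * D)

seq = to_sequence(torch.randn(16, 16, 512))            # -> torch.Size([256, 512])
```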
Here, for the Transformer body, we employ an encoder-only architecture as the task-agnostic network, in contrast to IPT (Chen et al., 2020), which uses both the encoder and decoder of the Transformer. The encoder-only Transformer learns the global relationships between features in the input corpus, and that global attention may be all that is needed for better performance in vision tasks, as demonstrated by ViT. Therefore, the body of our framework can be trained to translate the input embedding features into globally self-attended features independent of specific tasks. Moreover, the heads are guided to learn the task-specific embedding from the input images to the common feature representation, and the tails are trained to decode the attended features for the specific image processing tasks. This architectural modification makes the framework suitable for multi-task distributed learning.

3.2 TRAINING SCHEME

For distributed datasets of different tasks, we apply an alternating training strategy between the clients and the server, treating them as two players. Specifically, as shown in Figure 2, TAViT trains the client-side task-specific head and tail networks and the server-side task-agnostic body network in an alternating manner. In the task-specific learning, the clients train their own heads and tails in parallel using their locally stored datasets, with the body weights fixed. In contrast, in the task-agnostic learning, the server trains the Transformer body with the fixed head and tail of a randomly chosen client at each iteration. More details are given below.

Algorithm 1 TAViT. C = {C_1, C_2, . . . , C_K} is a group of client sets with mutually different tasks. I_s and I_a denote the numbers of optimization iterations for the task-specific and task-agnostic steps in one cycle. H_c and T_c are the head and tail of a client c, and B is the Transformer body on the server.

Initialization: H, T to all clients, B to the body
for i in [1, num_cycles] do
    for i_s in [1, I_s] do                          // task-specific learning (heads & tails)
        for each client c ∈ C_k ⊂ C in parallel do
            update H_c, T_c with fixed B
        end
        if i_s is an aggregation step then          // for the case of multiple clients with one task
            for each client subset C_k ⊂ C such that |C_k| > 1 do
                unify H_c and T_c of all clients c ∈ C_k (e.g. FedAvg)
            end
        end
    end
    for i_a in [1, I_a] do                          // task-agnostic learning (body)
        k ← randomly selected task
        update B with fixed H_c, T_c for c ∈ C_k
    end
end
Output: H, T, B

3.2.1 TASK-SPECIFIC LEARNING

Let C = ∪_{k=1}^{K} C_k be the group of client sets participating in different image processing tasks, where K denotes the number of tasks and C_k contains one or more clients with different datasets for the k-th task, i.e. C_k = {c_1^k, c_2^k, . . . , c_{N_k}^k} with N_k ≥ 1.
Each client c ∈ C_k has its own task-specific network architecture for a head H_c and a tail T_c, which are connected to the Transformer body B on the server. In the task-specific learning, given the frozen Transformer B on the server and the local training data {(x_c^(i), y_c^(i))}_{i=1}^{N_c}, the c-th client trains the neural networks H_c and T_c by solving the following optimization problem:

min_{H_c, T_c} Σ_{i=1}^{N_c} ℓ_c(y_c^(i), T_c(B(H_c(x_c^(i))))), (1)

where ℓ_c(y, ŷ) is the c-th client's task-specific loss between the target y and the estimate ŷ. The parameters of H_c and T_c are iteratively updated using ∂ℓ_c/∂T_c and ∂ℓ_c/∂H_c. These gradients are calculated by backpropagation through the entire model, which can be expressed by the chain rule:

∂ℓ_c/∂T_c = (∂ℓ_c/∂ŷ) · (∂ŷ/∂T_c),   ∂ℓ_c/∂H_c = (∂ℓ_c/∂f_H) · (∂f_H/∂H_c) = (∂ℓ_c/∂f_B) · (∂f_B/∂f_H) · (∂f_H/∂H_c), (2)

where f_H = H_c(x_c^(i)), f_B = B(f_H), and ŷ = T_c(f_B). This implies that, to update the head H_c and the tail T_c, the gradient ∂ℓ_c/∂f_B is transmitted to the server after backpropagation through the tail, and ∂ℓ_c/∂f_H, computed by backpropagation through the body, is transported back to the client.

Federated learning In the task-specific learning, when there are multiple clients for the same task k (i.e. N_k > 1), their heads and tails can be trained in parallel. Suppose that client c_i^k has a training dataset of size |D_i| and that the total dataset size in C_k is Σ_i |D_i| = |D|. In this case, the backpropagation and optimization are the same as in the single-client case, except that FedAvg (McMahan et al., 2017a) is additionally applied to the parameters H_c and T_c of the clients c ∈ C_k at every assigned period:

(H_{c_j}, T_{c_j}) ← (Σ_{i=1}^{N_k} (|D_i|/|D|) H_{c_i}, Σ_{i=1}^{N_k} (|D_i|/|D|) T_{c_i}), where 1 ≤ j ≤ N_k. (3)

The weight aggregation period is adjustable (50 epochs in our experiments). Through this federated learning, the clients corresponding to the k-th task share the same parameters at the end of the task-specific learning, as shown in Figure 2.

3.2.2 TASK-AGNOSTIC LEARNING

Once the heads and tails of multiple clients are trained, the Transformer body is trained with the heads and tails at the clients fixed. To train the Transformer body to learn the common representation in a task-agnostic manner, we construct a subset of clients C_B by selecting one client from each task:

C_B = {c_{n_1}^1, c_{n_2}^2, . . . , c_{n_K}^K}, c_{n_k}^k ∈ C_k. (4)

Then, with the training data {(x_c^(i), y_c^(i))}_{i=1}^{N_c} corresponding to the task of each selected client, the Transformer body on the server is updated by solving the following optimization problem:

min_B Σ_{c∈C_B} Σ_{i=1}^{N_c} ℓ_c(y_c^(i), T_c(B(H_c(x_c^(i))))). (5)

Similar to the task-specific learning, the parameters of B are updated using ∂ℓ_c/∂B, where the client c is randomly chosen from C_B at each optimization step. The required gradients also come from backpropagation:

∂ℓ_c/∂B = (∂ℓ_c/∂f_B) · (∂f_B/∂B), where ∂ℓ_c/∂f_B = (∂ℓ_c/∂ŷ) · (∂ŷ/∂f_B), (6)

where f_B = B(f_H) and ŷ = T_c(f_B). Here, only the gradient ∂ℓ_c/∂f_B is transported to the server after backpropagation through the tail. Through this task-agnostic learning, the Transformer body on the server learns a global embedding representation and provides task-agnostic self-attended features for various image processing tasks. The pseudocode of the overall TAViT procedure is described in Algorithm 1, with more details in Appendix A.
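For orientation, a compact sketch of the alternating cycle of Algorithm 1 is given below. The three callbacks stand in for the client/server procedures of the paper and are assumptions, not the authors' API.

```python
# High-level sketch of the alternating training loop of Algorithm 1
# (communication between clients and server is abstracted into callbacks).
import random

def train_tavit(clients_by_task, update_head_tail, fedavg_group, update_body,
                num_cycles, I_s, I_a, agg_period=50):
    for _ in range(num_cycles):
        # Task-specific learning: heads/tails updated, body frozen.
        for i_s in range(1, I_s + 1):
            for group in clients_by_task:
                for c in group:                        # in parallel in practice
                    update_head_tail(c)
                if i_s % agg_period == 0 and len(group) > 1:
                    fedavg_group(group)                # Eq. (3)
        # Task-agnostic learning: body updated, heads/tails frozen.
        for _ in range(I_a):
            group = random.choice(clients_by_task)     # one task, as in Eq. (4)
            update_body(random.choice(group))          # gradients per Eq. (5)-(6)
```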
3.3 COMMUNICATION COST AND PRIVACY PRESERVATION BY TAVIT

Given that gradients have to be transmitted two-way or one-way for training the head/tail and body parts of the architecture, one may wonder whether the additional communication overhead is significant. However, since the Transformer body is a shared model on the server that does not perform any weight aggregation, our model has a much smaller cost per communication between the client and the server in the task-agnostic learning. This comes from the small size of the transported features and gradients for the heads and tails. If we sample clients during network training, the communication cost can be controlled further. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage increases if a bigger Transformer body is used for better representation of global attention. For a detailed analysis, see Appendix D.4. The proposed TAViT is designed to use distributed local data for distinct tasks without sharing the data with other clients or any central device. Although privacy attacks on the features transported between the server and clients can occur, another powerful and unique mechanism for maintaining privacy in TAViT arises when the client-side network has a skip connection between the head and the tail. In this case, the transported features carry only very lossy information about the original data, and one cannot reconstruct the data using only the transmitted hidden features of the proposed method, as detailed in Appendix D.1.

4 EXPERIMENTAL RESULTS

We examine the performance of TAViT on the following image processing tasks: deblocking (JPEG artifact removal), denoising, deraining, and deblurring. Additional experiments on image inpainting and medical data are also performed to investigate its performance on high-level computer vision tasks and on data from a different domain, respectively; these can be found in Appendix D.5. With a single server, we set two clients to carry out FL on the deblocking task and one client for each of the other tasks, so the total number of clients is five in our experiments. We evaluate results using two metrics, PSNR and SSIM.

Datasets
The public datasets we used are as follows. For deblocking and denoising, we use 10,582 images from PASCAL VOC 2012 (Everingham et al., 2010) and the Segmentation Boundaries Dataset (SBD) (Hariharan et al., 2011). In particular, for FL on the deblocking task, we split the data into two sets of 5,291 images each and distribute one to each client. Deblocking results are evaluated on the Berkeley Segmentation Database (BSD500) (Martin et al., 2001b), which provides 200 test images. For denoising, we apply random Gaussian noise with level $\sigma = 30$ to the images (a short synthesis sketch follows this paragraph). The denoising model is evaluated on CBSD68, which contains 68 test images extracted from the BSD500. For deraining, following the experimental setting of Jiang et al. (2020), we use data from Rain14000 (Fu et al., 2017b), Rain1800 (Yang et al., 2017), Rain800 (Zhang et al., 2019a), and Rain12 (Li et al., 2016), which provide 13,711 pairs of clean and synthetic rainy images. Deraining results are evaluated on Rain100H and Rain100L (Yang et al., 2017), each of which has 100 synthetic rainy images. Deblurring is performed using the GoPro dataset (Nah et al., 2017), which contains 2,103 and 1,111 pairs of sharp and blurry images for the training and test sets, respectively.
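As a small illustration of the noisy-input synthesis used for the denoising task above ($\sigma = 30$ additive Gaussian noise, with the clipping described in Appendix B.2), here is a NumPy sketch; the function name and seeding are our own assumptions.

    import numpy as np

    def add_gaussian_noise(img_uint8, sigma=30.0, seed=0):
        """Clean uint8 RGB image -> noisy training input with sigma = 30,
        clipped back to the valid [0, 255] range."""
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, sigma, img_uint8.shape)
        noisy = img_uint8.astype(np.float64) + noise
        return np.clip(noisy, 0, 255).astype(np.uint8)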
Implementation details
To implement our TAViT, we use the encoder and decoder of DDPM (Ho et al., 2020) with three stages as the backbones of the head and tail, respectively, at each client. For the Transformer body in the server, we use 8 encoder layers of the vanilla Transformer (Vaswani et al., 2017) with a sequence length of 256 words and an embedding dimension of 512. The total number of parameters of the networks at each client and at the server is about 28M and 17M, respectively. Using 4 Nvidia Quadro RTX 6000 cards and 2 Nvidia Geforce GTX 1080Ti cards, we train the networks using the Adam optimizer with a learning rate of $3 \times 10^{-5}$. We initialize the network parameters with those of a model pre-trained in an autoencoder scheme. For data augmentation, we apply random horizontal and vertical flipping, rotation by 90 degrees, and cropping to a patch size of $64 \times 64 \times 3$. For three cycles, with the batch size set to 20, we perform the task-specific learning for 200, 400, 400, and 2000 epochs on deblocking, denoising, deraining, and deblurring, respectively. Also, we perform the task-agnostic learning for 1000 epochs with 1/4 of the data for each task. We implement our TAViT using the PyTorch library (BSD-style license) and the Flower federated learning protocol (Beutel et al., 2020) (Apache 2.0 License). The details of datasets and implementation are described in Appendix B.

4.1 RESULTS

Convergence of TAViT for multi-task distributed learning
We evaluated the results of the proposed TAViT for multi-task distributed learning with all participating clients and one common Transformer body on the server. Figure 3 shows the gradual progression of the quality metrics through the alternating training scheme. The performance on all tasks improved as the task-specific and task-agnostic learning continued. This demonstrates the synergistic improvement of the task-specific heads/tails and the task-agnostic body: the heads and tails learn more accurate feature embeddings for the given tasks, and the common body learns the global attention general to multiple image processing tasks by seeing various datasets. Although for some tasks the score at a given step was slightly lower than at the previous step, due to the interaction of different task datasets, the overall performance of TAViT improved as the cycles progressed. Detailed quantitative results for each cycle are described in Appendix C.

Comparison of TAViT to other strategies
We compared our TAViT with other distributed learning strategies: SL and FL. We conducted both SL and FL with the two clients assigned to the deblocking task. For SL, as designed in Vepakomma et al. (2018), we placed the head and tail networks on clients and the body on the server, and trained those split networks without weight aggregation for the head and tail. For FL, we placed the entire model composed of the head, body, and tail on each client and trained the networks in parallel, carrying out aggregation with FedAvg (McMahan et al., 2017a) on a common server. Figure 4 shows these scenarios, where C1 and C2 are the clients for deblocking; C3, C4, and C5 are the clients for denoising, deraining, and deblurring, respectively; and S is the server. As reported in Table 1, the proposed method achieves better performance compared to the other strategies, even though ours learns multiple tasks.
Comparison of TAViT to learning each separate task
To verify the capability of the task-agnostic Transformer body to learn from multiple tasks, we compared TAViT with models trained independently on each individual task. Under the setting of centralized data for each task, we implemented this study in two ways: end-to-end learning (EL) and single-task learning (STL). For EL, we trained the whole network on one device through an end-to-end optimization scheme. For STL, we distributed the decomposed head, body, and tail to a client and a server as in the proposed method and trained the networks with the alternating training strategy for one cycle. Table 2 reports the results on benchmark datasets for each task. It shows that our TAViT, trained on multiple tasks simultaneously, outperforms both EL and STL, which suggests that the task-agnostic body of our framework does not degrade the model under task heterogeneity but rather enhances performance across various tasks.

Comparison of TAViT to CNN-based models
To compare the performance of TAViT with CNN-based deep learning models, we tested several existing methods on benchmark datasets for each task. Table 2 and Figure 5 show the quantitative and visual comparison results, respectively. For deblocking, when compared with DnCNN (Zhang et al., 2017a), AR-CNN (Dong et al., 2015), and QCN (Li et al., 2020b), the proposed method outperforms them for both quantization quality levels 10 and 50. Visual comparisons also show that the proposed method removes block artifacts more cleanly than the others. For denoising, we compared our method with CBM3D (Dabov et al., 2007), DnCNN (Zhang et al., 2017a), FFDNet (Zhang et al., 2018b), IRCNN (Zhang et al., 2017b), DHDN (Park et al., 2019), and SADNet (Chang et al., 2020). The results show that our TAViT achieves better PSNR/SSIM scores and also provides more cleanly denoised images, preserving structure and texture details better than the comparison methods. For deraining, we tested our model against DerainNet (Fu et al., 2017a), SEMI (Wei et al., 2019), UMRL (Yasarla & Patel, 2019), PreNet (Ren et al., 2019), and MSPFN (Jiang et al., 2020). We used the Y channel in YCbCr color space, following Jiang et al. (2020), for the evaluation. As a result, our model outperforms the comparison methods on both Rain100H and Rain100L. Also, by removing rain streaks, the images restored by our model are closer to the references than those of the others. For deblurring, we employed DeblurGAN (Kupyn et al., 2018), Nah et al. (2017), Zhang et al. (2018a), and DeblurGANv2 (Kupyn et al., 2019) for comparison. The results show that the proposed method achieves performance comparable to the existing approaches. Visual results show that our TAViT restores blurry images with sharp edges, while the others still contain blurry artifacts or position shifting of objects compared to the references.

5 CONCLUSION

In this work, we present a multi-task distributed learning framework called TAViT. In TAViT, the task-specific head CNN and tail CNN are distributed to clients that hold their own data and are connected to a common Transformer body placed on the server. With an alternating training scheme, the heads and tails on the client side are trained by task-specific learning, while the body is trained by task-agnostic learning.
We conduct experiments on four different image processing tasks, which show the success of the task-agnostic learning of the Transformer body and its synergistic improvement with the task-specific heads and tails. Through our model, the participating clients can design and train their own networks, depending on the task, using local data in parallel. We expect that the proposed TAViT can be used efficiently in cases where sharing data with other institutions is sensitive, such as in medical fields.

Ethics statement
As our work utilizes distributed learning models, similar to the existing FL and SL, our method may be vulnerable to privacy attacks against the server, such as inversion attacks (Yin et al., 2021). Although the proposed framework is designed to encode the feature maps and gradients under the Flower protocol, which makes it difficult for attackers to restore the original data, the hidden features may leak the raw data to some degree. Thus, privacy-related techniques such as differential privacy (McMahan et al., 2017b) and authenticated encryption of data (Rogaway, 2002) should be analyzed for practical applications.

Reproducibility statement
The source code and our trained models to reproduce the proposed method are available at https://github.com/TAViT2022/TAViT. For the detailed pseudocode, refer to Appendix A. Also, the data processing steps for the datasets used in the experiments are provided in Appendix B.

A DETAILS OF TAVIT WITH PSEUDOCODE

As described in the main paper, the task-specific heads and tails on the clients and the Transformer body on the server are trained in an alternating manner between the proposed task-specific learning and task-agnostic learning. In the following, we describe each step in more detail in terms of its implementation.

Pseudocode for the task-specific learning
Algorithm 2 shows the pseudocode for the task-specific learning. Given K image processing tasks, the task-specific learning updates the heads H and the tails T in each client with the fixed body B. The server first initializes global weights of the heads and the tails and sends them to all clients in $C_k$, where $C_k$ is a set of clients with different datasets for the $k$-th task. When each client $c \in C_k$ takes local training data $x$ and provides a feature map $f_H$ from the head $H_c$ to the server (ClientPhase1), the server-side Transformer body takes the feature map $f_H$ as an input embedding and estimates the self-attended features $f_B$, which are independent of specific tasks. Once $f_B$ is sent from the server to the client, the client computes the task-specific loss $\ell_c$ between the label $y$ and the tail output $\hat{y}$ (in ClientPhase2). The gradient of the tail, $\partial\ell_c/\partial T_c$, is also computed on the client in ClientPhase2 and used to compute $\partial\ell_c/\partial f_B$, which is transported to the server. Then, on the server, $\partial\ell_c/\partial f_H$ is calculated by back-propagation through the body and sent to the client so as to compute the head gradient $\partial\ell_c/\partial H_c$ and finally update $H_c$ and $T_c$ (in ClientUpdate) using a single optimizer. Here, when there are multiple clients for the $k$-th task, i.e., $|C_k| > 1$, we apply federated learning to the heads and tails of those clients, as described in the aggregation branch of Algorithm 2. The heads and tails are trained in parallel, and their weights are aggregated by FedAvg (McMahan et al., 2017a) on the server side at every weight aggregation period. Then, the updated global weights of the heads $H_{C_k}$ and the tails $T_{C_k}$ are transmitted to all clients in $C_k$ so that the clients train their own head and tail using the new global weights from the next step onward.

Algorithm 2 Task-specific learning of TAViT: $I_s$ denotes the number of iterations for the task-specific learning in one cycle. $(H_c, T_c)$ denote the head and the tail in a client $c \in C_k$ for the $k$-th task, and $B$ denotes the Transformer body in a common server. $(H_{C_k}, T_{C_k})$ are global weights of the heads and the tails for task $k$. $(f_H, f_B)$ are output feature maps from the head and the body, and $\hat{y}$ is the output of the tail. $\ell_c$ is the task-specific loss of the client $c$. $|D_j|$ is the size of the training data at $c_j$, and $|D|$ is the total size of the training data at $C_k$, i.e., $|D| = \sum_j |D_j|$.

/* run on the server (with fixed B) */
Initialize $H_{C_k}, T_{C_k}$
Send $H_{C_k}, T_{C_k}$ to all clients $c \in C_k$
for $i_s$ in $[1, I_s]$ do
    for each client $c \in C_k$, where $k \in \{1, 2, \dots, K\}$, in parallel do
        $f_H \leftarrow$ ClientPhase1()
        $f_B \leftarrow B(f_H)$                                // body output
        $\partial\ell_c/\partial f_B \leftarrow$ ClientPhase2($f_B$)
        $\partial\ell_c/\partial f_H \leftarrow \partial\ell_c/\partial f_B \cdot \partial f_B/\partial f_H$   // backpropagation through body
        ClientUpdate($\partial\ell_c/\partial f_H$)
    end
    if $i_s$ is a weight aggregation step for $|C_k| > 1$ then
        Get $(H_{c_j}, T_{c_j})$ from client $c_j$, where $j \in \{1, 2, \dots, N_k\}$
        $H_{C_k} \leftarrow \sum_{j=1}^{N_k} (|D_j|/|D|) \, H_{c_j}$   // FedAvg for heads
        $T_{C_k} \leftarrow \sum_{j=1}^{N_k} (|D_j|/|D|) \, T_{c_j}$   // FedAvg for tails
        Send $(H_{C_k}, T_{C_k})$ to all clients $c \in C_k$
    end
end

/* run on client c */
Function ClientPhase1()
    $x, y \leftarrow$ set current data and label
    $f_H \leftarrow H_c(x)$                                   // head output
    return $f_H$

/* run on client c */
Function ClientPhase2($f_B$)
    $\hat{y} \leftarrow T_c(f_B)$                             // tail output
    $\ell_c \leftarrow$ Loss($y, \hat{y}$)
    $\partial\ell_c/\partial T_c \leftarrow \partial\ell_c/\partial\hat{y} \cdot \partial\hat{y}/\partial T_c$   // computation of tail gradients
    $\partial\ell_c/\partial f_B \leftarrow \partial\ell_c/\partial T_c \cdot \partial T_c/\partial f_B$
    return $\partial\ell_c/\partial f_B$

/* run on client c */
Function ClientUpdate($\partial\ell_c/\partial f_H$)
    $\partial\ell_c/\partial H_c \leftarrow \partial\ell_c/\partial f_H \cdot \partial f_H/\partial H_c$   // computation of head gradients
    update $H_c, T_c$ using $\partial\ell_c/\partial H_c$ and $\partial\ell_c/\partial T_c$ with an optimizer, e.g., Adam

Pseudocode for the task-agnostic learning
In the task-agnostic learning, the Transformer body in the server is updated with the heads and tails of the clients fixed. Algorithm 3 shows the pseudocode for the task-agnostic learning of TAViT.

Algorithm 3 Task-agnostic learning of TAViT: $I_a$ denotes the number of optimization iterations for the task-agnostic learning in one cycle. $(H_c, T_c)$ denote the head and tail in a client $c \in C_k$ for the $k$-th task, and $B$ denotes the Transformer body in a common server. $(f_H, f_B)$ are output feature maps from the head and the body, and $\hat{y}$ is the output of the tail. $\ell_c$ is the task-specific loss of the client $c$.

/* run on the server */
Initialize $C_B = \{c^1_{n_1}, c^2_{n_2}, \dots, c^K_{n_K}\}$, where $c^k_{n_k} \in C_k$
for $i_a$ in $[1, I_a]$ do
    $c \leftarrow c^k_{n_k} \in C_B$                          // random selection of a client with task k
    $f_H \leftarrow$ ClientPhase1()
    $f_B \leftarrow B(f_H)$                                   // body output
    $\partial\ell_c/\partial f_B \leftarrow$ ClientPhase2($f_B$)
    $\partial\ell_c/\partial B \leftarrow \partial\ell_c/\partial f_B \cdot \partial f_B/\partial B$   // computation of body gradients
    update $B$ using $\partial\ell_c/\partial B$ with an optimizer, e.g., Adam
end

/* run on client c (with fixed Hc, Tc) */
Function ClientPhase1()
    $x, y \leftarrow$ set current data and label
    $f_H \leftarrow H_c(x)$                                   // head output
    return $f_H$

/* run on client c (with fixed Hc, Tc) */
Function ClientPhase2($f_B$)
    $\hat{y} \leftarrow T_c(f_B)$                             // tail output
    $\ell_c \leftarrow$ Loss($y, \hat{y}$)
    $\partial\ell_c/\partial f_B \leftarrow \partial\ell_c/\partial\hat{y} \cdot \partial\hat{y}/\partial f_B$   // backpropagation through tail
    return $\partial\ell_c/\partial f_B$
Given the subset of clients $C_B$, constructed by selecting one client from $C_k$ for each task, a client $c \in C_B$ is randomly chosen at every iteration. The implementation of the task-agnostic learning is then conducted similarly to the task-specific learning, but it does not need the ClientUpdate process of Algorithm 2. In other words, after the gradient $\partial\ell_c/\partial f_B$ is computed on the client side in ClientPhase2 and transmitted to the server, the server updates the Transformer body by computing the body gradients $\partial\ell_c/\partial B$, which is the final step of each iteration.

B DETAILS OF DATASETS AND IMPLEMENTATION

B.1 LICENSE/SOURCE FOR EACH DATASET

In our experiments, we use public datasets for the image deblocking, denoising, deraining, and deblurring tasks. Here, we describe the specific information of each dataset, such as its license and source link.

PASCAL VOC 2012
The PASCAL VOC dataset (Everingham et al., 2010) is publicly available and includes images obtained from the "flickr" website under SmugMug or its third-party licensors. The data are protected by United States and international intellectual property laws. The data source is the URL: http://host.robots.ox.ac.uk/pascal/VOC/.

BSDS500 and CBSD68
The Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) (Arbeláez et al., 2011) is an extended version of BSDS300 (Martin et al., 2001b), a public dataset originally provided for image segmentation and boundary detection by the Berkeley Computer Vision Group. This dataset is widely used for measuring image restoration performance. The color BSD68 dataset (CBSD68) is extracted from the BSDS500. The BSDS500 can be downloaded at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html.

Synthetic rainy images
The synthetic rainy dataset for training is collected from Rain14000, synthesized by Fu et al. (2017b), Rain1800, authored by Yang et al. (2017), Rain800, created by Zhang et al. (2019a), and Rain12, made by Li et al. (2016). We test our method on the synthetic rainy datasets Rain100H and Rain100L, both of which are authored by Yang et al. (2017). All of these datasets are publicly available and can be downloaded at the following links:
- Rain14000: https://xueyangfu.github.io/projects/cvpr2017.html
- Rain1800: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html
- Rain800: https://github.com/hezhangsprinter/ID-CGAN
- Rain12: https://yu-li.github.io/
- Rain100L & Rain100H: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html

GoPro
The GoPro dataset (Nah et al., 2017) provides training and test sets for deblurring. The data are available at https://seungjunnah.github.io/Datasets/gopro.html.

B.2 DATA PROCESSING

All datasets used in our experiments provide natural images with three RGB channels and pixel values in the range [0, 255]. On these datasets, we performed the following data processing according to the image processing tasks. For the image deblocking task, we quantized the images following JPEG compression. We first transformed each RGB image into YUV color space using the following equations:
$$Y = 0.257R + 0.504G + 0.098B + 16 \qquad (7)$$
$$U = -0.148R - 0.291G + 0.439B + 128 \qquad (8)$$
$$V = 0.439R + 0.368G - 0.071B + 128 \qquad (9)$$
Then, we split the image into non-overlapping 8x8 blocks and applied the Discrete Cosine Transform (DCT) to each block. According to the level of quantization quality, we divided each element of the DCT coefficients by the corresponding entries of predefined quantization matrices.
After that, we apply the inverse DCT, aggregate all blocks into an image, and then transform the image from YUV back to RGB color space:
$$R = 1.164(Y - 16) + 1.596(V - 128) \qquad (10)$$
$$G = 1.164(Y - 16) - 0.392(U - 128) - 0.813(V - 128) \qquad (11)$$
$$B = 1.164(Y - 16) + 2.017(U - 128) \qquad (12)$$
For the denoising task, we added Gaussian noise to the clean images. Specifically, we applied random Gaussian noise with level $(\mu, \sigma) = (0, 30)$ to the images and then clipped the values to [0, 255]. For the other tasks, the datasets named Rain# and GoPro provide synthetic rainy images and blurry images, respectively. Since we used these datasets for the deraining and deblurring tasks, we did not perform any preprocessing such as the synthesis of rain artifacts or blurring effects. After the above data processing for all tasks, we randomly cropped the images to a patch size of $64 \times 64 \times 3$. Also, we applied data augmentation using random flipping and rotation by 90 degrees. Then, we normalized the images by rescaling the pixel values from [0, 255] to [-1, 1]; these are the final inputs to the model.

B.3 NETWORK ARCHITECTURES

For the task-specific head and tail of each task, we use the network architecture of DDPM (Ho et al., 2020), which is composed of residual blocks and attention modules. We set the number of two-times downsampling/upsampling stages to 3. For each stage, the channel size is set to 128, 256, and 512, respectively. Accordingly, given an input image $x \in \mathbb{R}^{64 \times 64 \times 3}$, the head provides a feature map $f_H \in \mathbb{R}^{16 \times 16 \times 512}$ that passes through the body, and the tail generates an output of the same size as the input. On the other hand, for the Transformer body, we use the encoder part of the vanilla Transformer (Vaswani et al., 2017). As described in the main paper, the Transformer body takes a sequence of patches $\mathbf{f}$, obtained by reshaping the feature map $f_H$, as an embedding of the words. In the experiments, the length of the input sequence is 256 by setting the patch size to 1, and the sequence dimension is 512. Then, once the input sequence has been added to learnable positional encodings, the encoded features $h$ pass through $L$ encoder layers ($L = 8$ in our experiments). Table 3 shows the structure of each encoder layer of the body.

Model sizes
Table 4 shows the model sizes of the task-specific head and tail and the Transformer body we used. When comparing the number of parameters and the size of the networks, we can observe that the client-side network composed of the head and the tail is larger than the task-agnostic Transformer body. Considering the experimental results in the main paper, this suggests that the body estimates task-agnostic self-attended features that provide the synergy effect in the task-specific and task-agnostic learning, even though the body is smaller than the head and tail combined.

C EXPERIMENTAL RESULTS

C.1 TAVIT ON MULTIPLE IMAGE PROCESSING TASKS

Evaluation results of TAViT
Table 5 reports the quantitative evaluation results of TAViT on multiple image processing tasks, which are visualized with graphs of the scores over the cycles in the main paper. Figure 6 shows the qualitative results of TAViT. This shows that the performance on each task improves over the cycles alternating between the task-specific and task-agnostic learning.

Table 5: Quantitative results of TAViT according to the cycles, which are visualized with graphs in the main paper. The best results are highlighted in bold.
Task:             Deblocking                    Denoising        Deraining                     Deblurring
Dataset:   BSDS500 (Q10)  BSDS500 (Q50)  CBSD68 (σ = 30)  Rain100H       Rain100L       GoPro
Cycle      PSNR / SSIM    PSNR / SSIM    PSNR / SSIM      PSNR / SSIM    PSNR / SSIM    PSNR / SSIM
0.5        27.53 / 0.781  32.92 / 0.921  30.57 / 0.868    28.24 / 0.860  33.17 / 0.939  28.94 / 0.871
1.0        27.57 / 0.782  33.01 / 0.922  30.62 / 0.869    28.75 / 0.862  32.69 / 0.936  29.09 / 0.873
1.5        27.61 / 0.784  33.05 / 0.923  30.57 / 0.870    28.57 / 0.869  33.58 / 0.945  29.63 / 0.885
2.0        27.65 / 0.785  33.14 / 0.924  30.66 / 0.870    28.79 / 0.867  33.50 / 0.944  29.72 / 0.887
2.5        27.64 / 0.785  33.14 / 0.924  30.62 / 0.870    29.25 / 0.875  34.30 / 0.949  29.96 / 0.893
3.0        27.69 / 0.786  33.21 / 0.924  30.69 / 0.871    29.35 / 0.875  33.88 / 0.947  30.06 / 0.894

Figure 6: Qualitative results of TAViT according to the cycles. From the left to the right columns: the deblocking results on images with quantization quality 10 and 50, the denoising results, the deraining results on Rain100H and Rain100L, and the deblurring results. The yellow value is PSNR, and an inset box is a magnified view of the red rectangle.

Qualitative comparisons
Besides the results presented in the main paper, here we show more visual comparisons of our TAViT against the existing methods. Figures 7, 8, 9, and 10 display the deblocking, denoising, deraining, and deblurring results, respectively. All these results verify that our TAViT, as a distributed learning framework for multiple image processing tasks, outperforms the comparison methods.

C.2 ABLATION STUDY OF TAVIT

Study on the amount of data for each task in the task-agnostic learning
In the main paper, we implemented our method using 1/4 of the dataset for each task in the task-agnostic learning. To verify that this amount of data is enough for the task-agnostic learning, we performed an ablation study using a different amount of data, with a 1/2 ratio for each of the deblocking, denoising, deraining, and deblurring tasks. Table 6 and Figure 11 show the quantitative results of TAViT on the multiple tasks using 1/2 of the data in the task-agnostic learning. Similar to the results with the 1/4 data ratio, the PSNR and SSIM scores tend to increase as the cycles continue. When comparing the best results from the 1/4 and 1/2 data ratios, we can observe that the performance for each task using even 1/4 of the data is comparable to or better than that using 1/2 of the data. This suggests that using 1/4 of the data for each task in the task-agnostic learning is sufficient to train the Transformer body and obtain high performance.

Study on the weight aggregation period
In the main paper, we conducted the experiment of TAViT by applying FL to the deblocking task, which has two clients with their own data. In FL, the weights of the network in each client are averaged on the server at every weight aggregation period, which is given as a hyperparameter. Since this period can influence learning performance, as the clients and the common server communicate to aggregate network weights, we performed an ablation study on the weight aggregation period for training the client-side networks. As reported in Table 7, for the deblocking task, we trained the model with aggregation periods of 20, 50, and 100 epochs. When we evaluated the deblocking results, weight aggregation every 50 epochs provided better performance, with 27.53dB/0.781 and 32.92dB/0.921 of PSNR/SSIM for quality 10 and 50, respectively, compared to the other periods. This verifies that the weight aggregation period of 50 epochs presented in the main paper is appropriate for training and evaluating the proposed TAViT in our experiments.
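For reference, the dataset-size-weighted aggregation of Eq. (3), which the period ablation above applies every 20/50/100 epochs, can be sketched as follows. This is a PyTorch sketch with illustrative names; the actual implementation runs under the Flower protocol.

    import torch

    def fedavg(state_dicts, data_sizes):
        """Weighted average of client head (or tail) parameters, Eq. (3):
        theta <- sum_i (|D_i| / |D|) * theta_i."""
        total = float(sum(data_sizes))
        averaged = {}
        for name, ref in state_dicts[0].items():
            acc = torch.zeros_like(ref, dtype=torch.float64)
            for sd, n in zip(state_dicts, data_sizes):
                acc += (n / total) * sd[name].to(torch.float64)
            averaged[name] = acc.to(ref.dtype)
        return averaged

After averaging, the server would send the result back so that every client in $C_k$ continues training from the same weights, as in Algorithm 2.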
D DISCUSSION

D.1 SKIP-CONNECTION OF HEAD AND TAIL FOR PRIVACY PRESERVATION

When the task-specific heads and tails are configured with skip-connections, our model can avoid privacy attacks to some degree while maintaining the encoding information the tail needs to generate outputs. This is because the skip-connected features are isolated on each client and not transported to the server. Accordingly, the features transported between the clients and the server may contain far less information about the original data. Figure 12 shows examples of the outputs with and without skip-connections. It shows that the network output without skip-connections barely retains the properties of the original data, which indicates that one may not be able to reconstruct the original data using the transmitted hidden features of the proposed method.

D.2 EFFECT OF TASK-AGNOSTIC TRANSFORMER BODY

As described in the main paper, the reason for developing our model with CNN-based heads/tails and a Transformer-based body is to take advantage of each network. In particular, the Transformer learns the global attention of the input sequence through self-attention modules and has recently been extensively studied for various computer vision tasks. One of the most unique advantages of the Transformer is to convert "unattended" input feature vectors into "attended" output feature vectors by learning global attention and non-local interactions between the input features. Accordingly, the task-specific head/tail can be trained to learn only task-specific local features, whereas the global features can be learned through the Transformer body. This disentangled representation of local and non-local features has been pursued throughout the development of deep networks. Thus, the proposed Transformer-based approach is considered one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance and at the same time yields a privacy-preserving split-learning architecture. To show that this design is appropriate for multi-task distributed learning, we additionally conducted an experiment replacing the Transformer body with a CNN model. Specifically, we configured the CNN body with CBR blocks, where C is a convolutional layer with a constant channel size of 512, B is a batch normalization layer, and R is a ReLU activation layer. For a fair comparison, we set the number of CBR blocks to 7 so as to have almost the same number of learnable parameters as the Transformer body we used (16,522,240 for the CNN body vs. 16,953,344 for the Transformer body). Then, using this CNN body, we implemented our proposed task-specific and task-agnostic learning for one cycle on the multiple image processing tasks, as in the main paper. As a result, Table 8 shows that our model with the Transformer body achieves higher performance in both the task-specific and the task-agnostic learning. This indicates that the Transformer can be used as a general task-agnostic body for multi-task learning.

D.3 SAMPLING STRATEGY OF CLIENTS

When there are multiple clients for one task in the task-specific learning, the task-specific networks of the clients are aggregated through the sampling strategy of FedAvg. On the other hand, in the task-agnostic learning of the proposed TAViT, one client is sampled for each iteration. Since the networks of clients for the same task are aggregated before the task-agnostic learning, we can readily sample one client for each task. Then, choosing one client for the subset of Eq.
(4) can be viewed as sampling one task, which naturally reduces the communication cost. In fact, the performance of TAViT is not affected by the number of sampled clients in the task-agnostic learning, since the task-agnostic body is updated for a sufficient number of iterations. To demonstrate this, we performed the task-agnostic learning for the four tasks in our experiments while varying the sampling strategy. Table 9 shows the results after training our model for one cycle according to the number of sampled clients in the task-agnostic learning. The results show that sampling one client achieves comparable or higher performance on all tasks than sampling more than one client. This supports that our sampling strategy is an efficient way to train the Transformer body at lower computational cost, even when the number of clients increases.

D.4 COMMUNICATION COST BETWEEN CLIENTS AND SERVER

In the proposed TAViT, the features and gradients of the networks are transported between clients and server, so one may wonder how much additional communication cost occurs. To compute the communication cost of our method, we assume that the cost is proportional to the number of transported elements. Also, since the sizes of the features and gradients sent from clients to the server are the same as those sent from the server to clients, we only consider the direction from clients to the server. Then, we computed the maximum cost of one communication to update our model in each of the task-specific and task-agnostic learning phases, and compared our cost to that of FL (McMahan et al., 2017a). Specifically, when there are $N_k$ clients for the $k$-th task, let $P_H$, $P_B$, and $P_T$ be the number of parameters of the head, body, and tail, respectively. In the case of FL, which aggregates the whole model composed of the head, body, and tail, the cost per communication can be represented as:
$$\text{Cost}_{FL} = N_k(P_H + P_B + P_T). \qquad (13)$$
On the other hand, our model does not require the transportation of learnable parameters except at the aggregation step of the task-specific learning. Thus, the communication cost can be computed as follows:
$$\text{Cost}_{TAViT} = \begin{cases} N_k(P_H + P_T), & \text{if an aggregation step (task-specific learning)} \\ N_k(F + G), & \text{else if a non-aggregation step (task-specific learning)} \\ F + G, & \text{otherwise (task-agnostic learning),} \end{cases} \qquad (14)$$
where $F$ and $G$ are the numbers of elements of the transported features and gradients, respectively. From (14), we can see that the communication cost at the network aggregation step in the task-specific learning of the proposed method is smaller than that of FL, which needs to transport the parameters of the whole model, including the head, body, and tail. Specifically, instead of aggregating parameters of the Transformer body, TAViT transports features and gradients that are much smaller than the body parameters, which can reduce the cost per communication significantly. For example, the proposed model for the deblocking task contains $P_H + P_B + P_T = 44{,}774{,}792$ parameters, whose memory size is about 207.5MB. Suppose that 10 clients participate in FL to train this model. Then, 447.7M elements are transported from the clients to the server, and the network of the server must handle more than a 2GB load per communication. In contrast, our model transports $P_H + P_T = 27{,}952{,}520$ parameters, whose memory size is approximately 142.5MB. Thus, even with 10 clients, 279.5M elements are transported, and the network of the server handles about 60% of the FL load.
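The element counts in this worked example follow directly from Eqs. (13) and (14); a small sketch reproducing them is given below. The parameter and element counts are taken from this appendix, while the byte figures quoted in the text depend on storage details that we do not model here.

    P_FULL = 44_774_792        # P_H + P_B + P_T for the deblocking model
    P_HEAD_TAIL = 27_952_520   # P_H + P_T
    F = G = 2_621_440          # transported feature / gradient elements per step

    def cost_fl(n_clients):                 # Eq. (13)
        return n_clients * P_FULL

    def cost_tavit(step, n_clients):        # Eq. (14)
        if step == "aggregation":           # task-specific, FedAvg step
            return n_clients * P_HEAD_TAIL
        if step == "task_specific":         # task-specific, ordinary step
            return n_clients * (F + G)
        return F + G                        # task-agnostic step

    print(cost_fl(10))                      # 447,747,920  (~447.7M elements)
    print(cost_tavit("aggregation", 10))    # 279,525,200  (~279.5M elements)
    print(cost_tavit("task_specific", 10))  # 52,428,800   (~52.4M elements)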
In addition, since the number of feature and gradient elements is $F = G = 20 \times 16 \times 16 \times 512 = 2{,}621{,}440$, which corresponds to 10MB of memory, the number of transported elements per communication for 10 clients is 52.4M, and the server bears only about a 200MB load per communication. On the other hand, in the task-agnostic learning, the server updates the body with the sampled client without any weight aggregation. Accordingly, only the features and gradients are transported from the client to the server. In particular, for the communication from the server to the client, the server does not need to transport the gradients to the client, but only transmits the features. Thus, the cost per communication in the task-agnostic learning phase is significantly reduced. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage increases further if a bigger Transformer body is used for better representation of global attention.

Scalability
Suppose that there are $K$ tasks, and let the total number of clients connected to a server be $N_{all}$. For a simple analysis of scalability, we assume that each communication between clients and the server takes constant time. The scalability is assessed via the time complexity of one communication round between clients and the server to update the models. For our task-specific learning, one communication round has time complexity $O(N_{all})$ if we update the heads and tails of all clients. This means that the communication cost would increase with the number of clients, which can limit the scalability of the proposed method. However, if we apply the client sampling strategy of FedAvg, we can control the number of communications, and one communication round will have time complexity $\Omega(K)$. This sampling strategy can be readily adapted to our model without significant modification. On the other hand, for the task-agnostic learning phase, one communication round has time complexity $O(K)$, since the network parameters of clients for the same task are aggregated before the task-agnostic optimization. Also, under the proposed strategy of sampling one task, one communication round has time complexity $\Omega(1)$, which is studied in Appendix D.3.

D.5 APPLICATION TO HIGH-LEVEL VISION TASKS AND MEDICAL DATA

In the main paper, the proposed TAViT was demonstrated on multiple low-level computer vision tasks. However, the TAViT framework can also be used for a wide range of high-level computer vision tasks, and even with different data domains such as medical images. To demonstrate this, we additionally conduct experiments on an inpainting task for natural images and a denoising task for X-ray CT images. Here, image inpainting is a higher-level computer vision task that requires more semantic information, and the denoising of X-ray CT requires domain-specific knowledge about the data. In particular, to show that our task-agnostic Transformer body has a positive effect on the training of new task-specific networks, we performed the task-specific learning only for the client-side heads and tails by subscribing to the pre-trained Transformer body, which was trained on the four natural image processing tasks, without additional fine-tuning. The details of the training and results are as follows.

Dataset
For the image inpainting task, we used the PASCAL VOC2012 dataset, which contains 10,582 natural images. Information about the license and source of this dataset can be found in Appendix B.
For the preprocessing, we scaled the images from [0, 255] to [-1, 1] and randomly cropped $128 \times 128$ patches. Then we multiplied each image with a zero-box mask whose width and height are randomly chosen from 48 to 64, following Yu et al. (2018) (a short masking sketch is given at the end of this appendix). For the X-ray CT denoising task, we used the 2016 AAPM Low-dose CT Grand Challenge dataset (McCollough et al., 2020), which provides noisy CT images acquired with a quarter dose and clean CT images acquired with a routine dose of X-ray. Since the X-ray CT data are measured in Hounsfield units, we divided the intensity by 4,000 and randomly cropped $64 \times 64$ patches.

Implementation details
For the image inpainting task, we employed the network architecture of Yu et al. (2018) and decomposed it into two parts for the task-specific head and tail. We performed the task-specific learning by minimizing the adversarial generative loss for 400 epochs using the Adam optimizer with learning rate $1 \times 10^{-4}$. For the X-ray CT denoising task, we used the same head and tail architecture implemented in this paper. We trained the task-specific networks with the fixed task-agnostic body for 400 epochs using the Adam optimization algorithm with learning rate $5 \times 10^{-3}$.

Results
To evaluate the performance of image inpainting and medical image denoising, we compared our method to a CNN model that has the same head and tail architectures as ours but no Transformer body. The quantitative evaluation results are shown in Table 10, and the visual comparisons are shown in Figure 13. We can see that the inpainting performance is improved when training the client-side networks with our pre-trained Transformer body, even though the body was pre-trained on low-level computer vision tasks. This implies that the proposed method can be extended to various high-level tasks. In addition, we can observe that our model achieves higher performance on the medical image denoising task than the comparative CNN model and provides clean images, although the Transformer body was trained on the natural image domain. From these results, we can confirm that our task-agnostic Transformer body has the capability to bridge the domain gap across different data sources. Also, this suggests that clients do not need to train the server-side body from scratch when they subscribe to the body for other tasks.
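As a final illustration, the zero-box masking used in the inpainting preprocessing of Appendix D.5 can be sketched as follows, assuming NumPy; the function name and RNG handling are our own, while the 128x128 patch size and the [48, 64] box range follow the text.

    import numpy as np

    def random_box_mask(h=128, w=128, lo=48, hi=64, rng=None):
        """Binary mask that zeros out one box of random size in [lo, hi]."""
        if rng is None:
            rng = np.random.default_rng()
        bh = int(rng.integers(lo, hi + 1))
        bw = int(rng.integers(lo, hi + 1))
        top = int(rng.integers(0, h - bh + 1))
        left = int(rng.integers(0, w - bw + 1))
        mask = np.ones((h, w, 1), dtype=np.float32)
        mask[top:top + bh, left:left + bw] = 0.0
        return mask  # masked input = mask * patch (patch scaled to [-1, 1])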
1. What is the main contribution of the paper regarding image processing tasks?
2. What are the strengths and weaknesses of the proposed split architecture between client(s) and server?
3. Does the paper provide sufficient discussion on privacy guarantees and communication cost in a federated setting?
4. Are the choices of CNN and transformer architectures sufficiently motivated?
5. How does the sampling strategy scale in the face of several orders of magnitude more clients?
6. Is the focus of the paper on positioning TAViT as a general distributed multi-task learning framework, or on presenting this particular architecture as a viable means to do distributed multi-task learning for the presented tasks?
7. Do the experimental results convincingly show the effectiveness of the proposed approach?
Summary Of The Paper Review
Summary Of The Paper
The paper presents an architecture for image processing tasks that splits a network into three subsequent parts: head, body, and tail. The head and tail parts are CNN-based and can be trained on multiple client devices using federated learning (FedAvg), while the body part of the architecture is transformer-based and is trained on a central server. The head and tail parts are trained for specific tasks, while the body part is trained in a task-agnostic manner by selecting clients from each task for loss optimization. Experimental results show benchmark and convergence results that are comparable or favorable to non-distributed models, as well as comparison results to purely FL and SL approaches with a very small number of clients.

Review
The submission is overall clearly written and presents an embodiment of a split architecture between client(s) and server that facilitates multi-task learning, which could be adopted by further work in the future. Code and models are available, which greatly eases adoption. However, while most of the pages are spent on architecture description and experimental results, there are several key omissions of discussion points which I deem essential for a paper in the distributed learning space:
- There is no mention of any sort of privacy guarantees given the proposed architecture, despite "privacy-preserving" being part of the title. Privacy preservation is a strong claim that needs to be backed up rigorously. Moreover, in their ethics statement, the authors correctly lay out that transmitted hidden features "may leak the raw data to some degree."
- There is no discussion of communication cost in a federated setting. The number of clients used in experiments is exceedingly small, which might serve as a proof of concept. But given that gradients have to be transmitted two-way or one-way for training the head/tail and body parts of the architecture, a discussion on communication cost and scalability is necessary.
- Several key choices are not sufficiently motivated, including the choice of both CNN and transformer architectures, or the sampling of exactly one client from each task in Eq. 5. What makes CNN architectures less suitable for the body part in a more general framework beyond the particular architecture embodiment presented here? How would the presented sampling strategy scale in the face of several orders of magnitude more clients?
- If the focus of the paper is to position TAViT as a general distributed multi-task learning framework, then the diversity of presented experiments for validation could have been expanded. On the other hand, if the focus is to present this particular architecture as a viable means to do distributed multi-task learning for the presented tasks, then the results of Table 2 remain unconvincing, in the sense that differences in results might come from essentially incomparable architectures, as opposed to contributions to multi-task learning or distributed learning.

Post-rebuttal: I thank the authors for their valuable comments on my review and their revision. The revision has improved the submission substantially. My comments were addressed mainly by the addition of Section 3.3 and the Appendix D referenced there. I believe that most of the other reviewers' comments were also addressed, and I can now recommend the submission for acceptance. This is reflected by my adjusted score.

Minor comment: The "Federated Learning" paragraph on pg. 3 contains an unresolved reference due to a typo.
ICLR
Title Privacy-preserving Task-Agnostic Vision Transformer for Image Processing

Abstract
Distributed collaborative learning approaches such as federated and split learning have attracted significant attention lately due to their ability to train neural networks using data from multiple sources without sharing data. However, they are not usually suitable in applications where each client carries out a different task with its own data. Recently, the Vision Transformer (ViT) has been widely explored in computer vision applications due to its capability to learn a common representation through global attention over the embedded input sequence. By leveraging the advantages of ViT, here we present a new distributed learning framework for image processing tasks, allowing clients to learn multiple tasks with their private data. The key idea arises from a disentangled representation of local and non-local features using a task-agnostic Vision Transformer and a task-specific head/tail. By connecting task-specific heads and tails on the client side to a task-agnostic Transformer body on the server side, each client learns a translation from its own task to a common representation, while the Transformer body learns global attention between the features embedded in the representation. To enable decomposition between the task-specific and common representations, we propose an alternating training strategy in which task-specific learning for the heads and tails is run on the clients with the Transformer fixed, alternating with task-agnostic learning for the Transformer on the server with the heads and tails frozen. Once the Transformer body is fully trained with a sufficient number of tasks and clients, additional training of the Transformer body is no longer required when a new client is added with a new task; all that is required is the training of the client-specific head and tail. Experimental results on multi-task learning for various low-level and high-level computer vision tasks, including medical image data, show that our method synergistically improves the performance of the task-specific network of each client while maintaining privacy.

1 INTRODUCTION

Deep learning approaches have demonstrated state-of-the-art performance and fast inference times in computer vision tasks (Ronneberger et al., 2015; Zhang et al., 2017a; Wang et al., 2017). In particular, convolutional neural networks (CNNs) can learn a hierarchy of complex image features, so a variety of CNN-based methods have been developed for denoising (Zhang et al., 2017b; Chang et al., 2020), deraining (Wei et al., 2019; Ren et al., 2019), deblurring (Nah et al., 2017; Kupyn et al., 2019), deblocking (Li et al., 2020b; Maleki et al., 2018), etc. However, the performance of a CNN typically depends on a large amount of training data (Chervenak et al., 2000; Krizhevsky et al., 2017), and it is often difficult to collect data from various entities due to privacy and regulation issues (Price & Cohen, 2019). Since the amount of data from a single source may not be enough, a deep learning framework that can leverage many datasets without violating privacy is required in real-world applications. To address this, distributed collaborative learning (DCL) approaches, which jointly train a single network on multiple systems or devices without revealing distributed data to a central entity or to each device, have been investigated (Konečnỳ et al., 2016; McMahan et al., 2017a; Gupta & Raskar, 2018).
For example, federated learning (FL) (McMahan et al., 2017a; Li et al., 2020c) has been studied to train a shared network without aggregating all data at the center, under privacy constraints. Thanks to the parallel communication with each client, FL enables fast training of the network across multiple clients. Also, split learning (SL) (Gupta & Raskar, 2018; Vepakomma et al., 2018) was developed as an enhanced privacy-preserving model that splits a network between clients and a server, so that each client does not share all the network parameters but only trains a part of the network. Building on the advantages of FL and SL, a combination of split and federated learning, named SplitFed learning (SFL) (Thapa et al., 2020), has recently been proposed to provide efficient training and a high level of privacy with a lower computational burden. However, for the existing CNN-based methods it is difficult to determine the proper layer at which to split the network. Also, although training data are distributed across clients, all clients usually consider a common learning task. Meanwhile, in many practical image processing applications, it is unlikely that all the clients are interested in the same application. For example, some clients may be interested in image denoising (Zhang et al., 2017b), whereas other clients focus on image deblurring (Nah et al., 2017), deraining (Wei et al., 2019), deblocking (Li et al., 2020b), etc. As each task is different from the others, the existing distributed learning frameworks may not work. That said, these image processing tasks still require an understanding of a common image representation, so one may wonder whether there is any systematic way of synergistically learning multiple image processing tasks in a privacy-preserving manner. One of the most important contributions of this work is to show that the Task-agnostic Vision Transformer (TAViT), composed of a CNN-based head and tail and a ViT-based body, is nicely suited to this purpose. Specifically, the head and tail are placed on each client to learn specific image processing tasks, while the body is stored and trained on a server to learn a common representation across all tasks of the clients. In contrast to the existing SL framework, where the network split is arbitrary, TAViT provides a systematic way of splitting neural networks between clients and servers for privacy-preserving training without losing any performance. Furthermore, TAViT allows clients to use a common Transformer body model to learn multiple image processing tasks and synergistically improve the performance of their task-specific networks. One may think that the proposed method is similar to the image processing transformer (IPT) (Chen et al., 2020), which consists of CNN-based heads and tails and a Transformer body. However, IPT requires centralized data and large computation resources for both pretraining and task-specific fine-tuning of the whole model. Also, the Transformer in IPT has an encoder-decoder architecture that needs an explicit conditioning vector to convert the Transformer for a specific task. Thus, to the best of our knowledge, IPT is not suitable for distributed learning. In contrast, the body of TAViT is made of an encoder-only Transformer architecture to learn global embedding features of multiple tasks without any conditioning. Besides, by placing the computation of this Transformer body on the server rather than the clients, our framework enables clients to reduce their computational burden while maintaining the overall performance on specific image processing tasks.
In addition, TAViT views the heads and tails at the clients and the body at the server as two players and updates them alternately. Specifically, our training procedure is composed of task-specific learning and task-agnostic learning: the former trains the client-side heads and tails to learn each client's task, while the latter trains the server-side Transformer body to learn a general feature interpretation over multiple tasks. When there is more than one client for any single task, the parameters of their heads and tails can be aggregated through FL. Accordingly, TAViT offers seamless integration of the SL and FL approaches to protect privacy. Recall that one of the most unique advantages of the Transformer body is to convert "unattended" input features into "attended" output features by learning global attention and non-local interactions between the input features. Accordingly, with the help of the aforementioned alternating training scheme, the task-specific head/tail can be trained to learn only task-specific local features, whereas the global features can be learned through the Transformer. In fact, this disentangled representation of local and non-local features has been pursued throughout the development of deep networks (Ye et al., 2018; Zhang et al., 2019b; Wang et al., 2018). Thus, the proposed Transformer-based approach is considered one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance and at the same time yields privacy-preserving split learning. We validate the performance of TAViT on multiple image processing tasks. Experimental results show that our multi-task distributed learning framework using the alternating training strategy outperforms end-to-end learning of each individual task, thanks to the decomposition into a task-agnostic Transformer body and task-specific networks. This suggests that our framework is a promising approach for learning multiple tasks with distributed privacy-sensitive data. In sum, our contributions are summarized as follows:
• We propose a novel distributed learning framework, TAViT, that carries out multiple image processing tasks using distributed data.
• The proposed method consists of task-specific heads and tails on clients and a task-agnostic Transformer body on a server, which reduces the computational cost of clients and does not require centralized data for multi-task learning.
• An alternating training strategy between the task-specific and task-agnostic learning of the split networks shows a synergistic improvement in performance, which is demonstrated by experimental results on multiple tasks.

2 RELATED WORKS

Federated learning
In the FL setting, multiple clients learn from locally stored data while one server aggregates client information by various methods, including FedAvg (McMahan et al., 2017a). For the efficient implementation of FL, the practical challenges of unstable networks, hardware capacity differences, and statistical heterogeneity of data distributions (Li et al., 2020c; Smith et al., 2017; Li et al., 2018) have been actively studied. Corinzia et al. (2019) performs FL with multiple classification tasks, and He et al. (2020) places a huge network on a server and small CNNs on clients and trains them by knowledge distillation. Yao et al. (2019) presents an unbiased gradient aggregation for FL and meta updating of the model. In contrast, our method is presented for effective learning under task heterogeneity using distributed data.
Although Li et al. (2020a) presents a task-agnostic FL method based on a feature extractor, each client trains its task-specific network independently, while our model can learn multiple tasks simultaneously for a synergistic performance improvement.

Split learning
Split learning (SL) is designed to train networks over distributed data by splitting the networks into two parts and updating the client-side and server-side networks sequentially (Gupta & Raskar, 2018). Extending this idea, Vepakomma et al. (2018) presents several ways to use SL, and Abuadbba et al. (2020) applies SL to 1D CNN models. However, the existing SL methods are designed using CNNs, and to the best of our knowledge, there is no principled way of splitting the network for the best performance. In particular, Thapa et al. (2020) proposes a combination of FL and SL, but the server requires labels from clients to update the split networks, which may compromise data privacy. Also, since outputs are generated from a shared network on the server when there are multiple clients, these methods apply narrowly to a single task. In contrast, our model presents a Transformer-based shared body that enables multi-task learning across clients without sharing data.

Vision Transformer for image processing
Recently, inspired by the success of the Transformer in natural language processing (Vaswani et al., 2017; Devlin et al., 2018), Transformer-based image processing methods have been extensively explored (He et al., 2021; Han et al., 2021). In particular, Dosovitskiy et al. (2020) proposes a Vision Transformer (ViT) with an encoder-only architecture to learn image recognition tasks. Also, Chen et al. (2020) presents an image processing transformer (IPT) that learns low-level vision tasks by pretraining and task-specific fine-tuning. However, to the best of our knowledge, there are no existing works that exploit the ViT architecture for distributed learning applications.

3 PRIVACY-PRESERVING TASK-AGNOSTIC VISION TRANSFORMER

3.1 SUBSCRIPTION-BASED SERVICE MODEL

As illustrated in Figure 1(a), TAViT is designed for subscription-based services. Specifically, a client subscribes to a task-agnostic Transformer model on the server side that has learned global attention over image features from other datasets. Then, the client can build the head and tail appropriate to its own image processing task and connect them to the Transformer body at the server. At subscription time, there may already be multiple clients subscribing to the same Transformer body. Accordingly, each client can train its own head and tail using its local data, whereas the common Transformer body is regularly updated using embedding features from all subscribers through the alternating training strategy shown in Figure 1(b), or even kept fixed if training has already been performed with a sufficient number of tasks and clients. As a result, the latest version of the Transformer body, trained using more training data, can be maintained on the server side so that it can be offered to new clients at the next subscription. Since the local data are not centralized on one device and are not shared with other clients, our framework can preserve data privacy. In the proposed framework, we consider the features from the head as a sequence of tokens, similar to natural language processing. Specifically, as shown in Figure 1(c), we reshape the features $f$ of size $Y \times X \times D$ into a sequence of patches $\mathbf{f} = \{f_1, f_2, \dots
f = {f1, f2, . . . , fS}, where X, Y, and D denote the width, height, and channel dimension of the image features, respectively, S is the number of patches, i.e. S = YX/p^2 for patch size p, and fs denotes the s-th patch of the features with size p^2 × D. Then, these reshaped features f are fed into the Transformer body as an input sequence, to which learnable positional embeddings are added to keep the position information of each feature patch. The Transformer body consists of several encoder layers proposed in Vaswani et al. (2017), so the encoded features pass through several multi-head self-attention modules and feed-forward modules in each layer. Then, the body output of transformed features is reshaped into the original shape of the features f to be used as input for the tail CNN. Here, for the Transformer body, we employ an encoder-only architecture as a task-agnostic network, in contrast to IPT (Chen et al., 2020), which uses both the encoder and decoder of the Transformer. The encoder-only Transformer learns the global relationship between features in the input corpus, and that global attention may be all we need for better performance in vision tasks, as demonstrated in ViT. Therefore, the body of our framework can be trained to translate the input embedding features into globally self-attended features independent of specific tasks. Moreover, the heads are guided to learn the task-specific embedding from the input images to the common feature representation, and the tails are trained to decode the attended features for the specific image processing tasks. This architectural modification makes the framework suitable for multi-task distributed learning.

3.2 TRAINING SCHEME

For distributed datasets of different tasks, we apply the alternating training strategy between the clients and the server by considering them as two players. Specifically, as shown in Figure 2, TAViT trains the client-side task-specific head and tail networks and the server-side task-agnostic body network in an alternating manner. In the task-specific learning, clients train their own heads and tails with the fixed body weights in parallel using locally stored datasets. In contrast, in the task-agnostic learning, the server trains the Transformer body with the fixed head and tail of a randomly chosen client at each iteration. More details are as follows.

Algorithm 1 TAViT: C = {C1, C2, . . . , CK} is a group of client sets with different tasks from each other. Is and Ia denote the number of optimization iterations for the task-specific and task-agnostic steps in one cycle. Hc and Tc are the head and the tail of a client c, and B is the Transformer body on the server.
Initialization: H, T to all clients, B to a body
for i in [1, num_cycles] do
    for is in [1, Is] do // task-specific learning (heads & tails)
        for each client c ∈ Ck ⊂ C in parallel do
            update Hc, Tc with fixed B
        end
        if is is an aggregation step then // for the case of multiple clients with one task
            for each client subset Ck ⊂ C, s.t. |Ck| > 1 do
                unify Hc and Tc of clients c ∈ Ck (e.g. FedAvg)
            end
        end
    end
    for ia in [1, Ia] do // task-agnostic learning (body)
        k ← randomly selected task
        update B with fixed Hc, Tc, s.t. c ∈ Ck
    end
end
Output: H, T, B

3.2.1 TASK-SPECIFIC LEARNING

Let C = ∪_{k=1}^{K} Ck be a group of client sets participating in different image processing tasks, where K denotes the number of tasks, and Ck has one or more clients with different datasets for the k-th task, i.e. Ck = {c^k_1, c^k_2, . . . , c^k_{Nk}} with Nk ≥ 1.
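Before detailing the two training steps, a minimal PyTorch-style sketch of the token reshaping and encoder-only forward pass of Section 3.1 may be helpful. This is a sketch under assumptions, not the authors' exact code: the class name TransformerBody, the zero-initialized positional embeddings, and the choice of 8 attention heads are illustrative; the patch size p = 1 and the dimensions (8 layers, 256 tokens, 512 channels) follow the experimental settings reported later.

import torch
import torch.nn as nn

class TransformerBody(nn.Module):
    """Sketch of the task-agnostic, encoder-only body (assumed layout).

    Takes a head feature map of shape (B, D, Y, X), flattens it into
    S = Y*X tokens (patch size p = 1, as in the experiments), adds
    learnable positional embeddings, and applies standard Transformer
    encoder layers (Vaswani et al., 2017).
    """
    def __init__(self, d_model=512, num_layers=8, seq_len=256):
        super().__init__()
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, f):
        b, d, y, x = f.shape
        tokens = f.flatten(2).transpose(1, 2)         # (B, S, D) token sequence
        h = self.encoder(tokens + self.pos_emb)       # globally self-attended features
        return h.transpose(1, 2).reshape(b, d, y, x)  # back to a feature map for the tail

With the reported head output fH of size 16 × 16 × 512, this yields exactly 256 tokens of dimension 512, and the output can be consumed by the tail CNN unchanged.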
Each client c ∈ Ck has its own task-specific network architecture for a head Hc and a tail Tc, which are connected to the Transformer body B on the server. In the task-specific learning, given the frozen Transformer B at the server and the local training data {(xc^(i), yc^(i))}_{i=1}^{Nc}, the client c trains the neural networks Hc and Tc by solving the following optimization problem:

min_{Hc,Tc} Σ_{i=1}^{Nc} ℓc(yc^(i), Tc(B(Hc(xc^(i))))), (1)

where ℓc(y, ŷ) refers to the c-th client-specific loss between the target y and the estimate ŷ. The parameters of Hc and Tc are iteratively updated using ∂ℓc/∂Tc and ∂ℓc/∂Hc. These gradients are calculated by back-propagation through the entire model, which can be expressed by the chain rule:

∂ℓc/∂Tc = (∂ℓc/∂ŷ)·(∂ŷ/∂Tc), ∂ℓc/∂Hc = (∂ℓc/∂fH)·(∂fH/∂Hc) = (∂ℓc/∂fB)·(∂fB/∂fH)·(∂fH/∂Hc), (2)

where fH = Hc(xc^(i)), fB = B(fH), and ŷ = Tc(fB). This implies that, to update the head Hc and the tail Tc, the gradient ∂ℓc/∂fB is transmitted to the server after back-propagation through the tail, and ∂ℓc/∂fH, computed by back-propagation through the body, is transported back to each client.

Federated learning In the task-specific learning, when there are multiple clients for the same task k (i.e. Nk > 1), their heads and tails can be trained in parallel. Suppose that c^k_i has a training dataset of size |Di| and the total size of the datasets in Ck is Σ|Di| = |D|. In this case, the back-propagation and optimization processes are the same as in the single-client case, but FedAvg (McMahan et al., 2017a) is additionally applied to the parameters Hc and Tc of c ∈ Ck at every assigned period, which is written as:

(Hcj, Tcj) ← ( Σ_{i=1}^{Nk} (|Di|/|D|) Hci , Σ_{i=1}^{Nk} (|Di|/|D|) Tci ), where 1 ≤ j ≤ Nk. (3)

The period of the weight aggregation is adjustable (50 epochs in our experiments). Through this federated learning, the clients corresponding to the k-th task share the same parameters at the end of the task-specific learning, as shown in Figure 2.

3.2.2 TASK-AGNOSTIC LEARNING

Once the heads and tails of multiple clients are trained, the Transformer body is trained by fixing the heads and tails at the clients. To train the Transformer body to learn the common representation in a task-agnostic manner, we construct a subset of clients CB by selecting one client from each task:

CB = {c^1_{n1}, c^2_{n2}, . . . , c^K_{nK}}, c^k_{nk} ∈ Ck. (4)

Then, the training data {(xc^(i), yc^(i))}_{i=1}^{Nc} corresponding to the task of the client are also selected, and the Transformer body on the server is updated by solving the following optimization problem:

min_B Σ_{c∈CB} Σ_{i=1}^{Nc} ℓc(yc^(i), Tc(B(Hc(xc^(i))))). (5)

Similar to the task-specific learning, the parameters of B are updated using ∂ℓc/∂B, where the client c is randomly chosen from CB at each optimization step. The required gradients also come from back-propagation as follows:

∂ℓc/∂B = (∂ℓc/∂fB)·(∂fB/∂B), where ∂ℓc/∂fB = (∂ℓc/∂ŷ)·(∂ŷ/∂fB), (6)

where fB = B(fH) and ŷ = Tc(fB). Here, only the gradient ∂ℓc/∂fB is transported to the server after back-propagation through the tail. Through this task-agnostic learning, the Transformer body on the server learns a global embedding representation and provides task-agnostic self-attended features for various image processing tasks. The pseudocode of the overall TAViT is described in Algorithm 1, with more details in Appendix A.
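To illustrate how Eqs. (1)-(3) decompose across the split, below is a minimal single-process PyTorch sketch of one task-specific step plus the FedAvg aggregation. This is a simulation sketch, not the authors' distributed Flower implementation: the explicit detach()/backward() hand-offs only mimic where features and gradients would cross the client-server boundary, and all names are illustrative assumptions.

import torch

def task_specific_step(head, tail, body, x, y, loss_fn, opt_client):
    """One task-specific update (Eq. 1): train H_c and T_c with B frozen.

    The detach() calls mark the points where f_H, f_B, and their gradients
    would be transported between client and server (Eq. 2)."""
    for p in body.parameters():
        p.requires_grad_(False)                  # body is frozen in this phase
    f_H = head(x)                                # client -> server: send f_H
    f_H_srv = f_H.detach().requires_grad_(True)  # server-side copy of f_H
    f_B = body(f_H_srv)                          # server computes f_B = B(f_H)
    f_B_cli = f_B.detach().requires_grad_(True)  # server -> client: send f_B
    loss = loss_fn(tail(f_B_cli), y)             # client-side loss l_c
    loss.backward()                              # grads for T_c and dl_c/df_B
    f_B.backward(f_B_cli.grad)                   # client -> server: grad; backprop through B
    f_H.backward(f_H_srv.grad)                   # server -> client: grad; grads for H_c
    opt_client.step()                            # update H_c and T_c only
    opt_client.zero_grad()
    return loss.item()

def fedavg(state_dicts, sizes):
    """Eq. (3): dataset-size-weighted average of client head/tail weights."""
    total = float(sum(sizes))
    return {k: sum(sd[k].float() * (n / total) for sd, n in zip(state_dicts, sizes))
            for k in state_dicts[0]}

The same step with opt_client replaced by an optimizer over the body's parameters, and the final f_H.backward() omitted, corresponds to the task-agnostic update of Eqs. (5)-(6).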
3.3 COMMUNICATION COST AND PRIVACY PRESERVATION BY TAVIT

Given that gradients have to be transmitted two-way or one-way for training the head/tail and body parts of the architecture, one may wonder whether the additional communication overhead is significant. However, since the Transformer body is a shared model on the server that does not perform any weight aggregation, our model has a much smaller cost per communication between the client and the server in the task-agnostic learning. This comes from the small size of the transported features and gradients for the heads and tails. If we sample clients during network training, the communication cost can be controlled further. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage increases if a bigger Transformer body is used for a better representation of global attention. For a detailed analysis, see Appendix D.4. The proposed TAViT is designed to use distributed local data for distinct tasks without sharing the data with other clients or any central device. Although privacy attacks on the features transported between the server and clients can occur, yet another powerful and unique mechanism for maintaining privacy in TAViT arises when the client-side network of the proposed method has a skip connection between the head and the tail. In this case, the transported features contain only very lossy information about the original data, and one cannot reconstruct the data using only the transmitted hidden features of the proposed method, as detailed in Appendix D.1.

4 EXPERIMENTAL RESULTS

We examine the performance of TAViT on the following image processing tasks: deblocking (JPEG artifact removal), denoising, deraining, and deblurring. Additional experiments on image inpainting and medical data are also performed to investigate its performance on high-level computer vision tasks and different domain data, respectively, which can be found in Section D.5 of the Appendix. With a single server, we set two clients to carry out FL on the deblocking task and one client for each of the other tasks, so the total number of clients is five in our experiments. We evaluate results using two metrics, PSNR and SSIM.

Datasets The public datasets we used are as follows. For deblocking and denoising, we use 10,582 images from PASCAL VOC 2012 (Everingham et al., 2010) and the Segmentation Boundaries Dataset (SBD) (Hariharan et al., 2011). In particular, for FL on deblocking, we split the data into two sets of 5,291 images and distribute one to each client. Deblocking results are evaluated on the Berkeley Segmentation Database (BSD500) (Martin et al., 2001b), which provides 200 test images. For denoising, we apply random Gaussian noise with level σ = 30 to the images. The denoising model is evaluated on CBSD68, which contains 68 test images extracted from BSD500. For deraining, following the experimental setting of Jiang et al. (2020), we use data from Rain14000 (Fu et al., 2017b), Rain1800 (Yang et al., 2017), Rain800 (Zhang et al., 2019a), and Rain12 (Li et al., 2016), which provide 13,711 pairs of clean and synthetic rainy images. Deraining results are evaluated on Rain100H and Rain100L (Yang et al., 2017), each of which has 100 synthetic rainy images. Deblurring is performed using the GoPro dataset (Nah et al., 2017), which contains 2,103 and 1,111 pairs of sharp and blurry images for the training and test sets, respectively.
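For instance, the noisy inputs for the denoising task could be synthesized as follows; this is a minimal sketch matching the σ = 30 setting and the [0, 255] clipping described in Appendix B.2, with an illustrative function name:

import numpy as np

def add_gaussian_noise(img, sigma=30.0, seed=None):
    """Synthesize a noisy denoising input from a clean HxWx3 uint8 image."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)  # clip back to [0, 255]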
Implementation details To implement TAViT, we use the encoder and decoder of DDPM (Ho et al., 2020) with three stages as the backbone of each head and tail at the client. For the Transformer body on the server, we use 8 encoder layers of the vanilla Transformer (Vaswani et al., 2017) with a sequence length of 256 tokens and an embedding dimension of 512. The total number of network parameters at each client and at the server is about 28M and 17M, respectively. Using 4 Nvidia Quadro RTX 6000 cards and 2 Nvidia Geforce GTX 1080Ti cards, we train the networks using the Adam optimizer with a learning rate of 3 × 10^-5. We initialize the parameters of the networks with those of a model pre-trained with an autoencoder scheme. For data augmentation, we apply random horizontal and vertical flipping, rotation by 90 degrees, and cropping to a patch size of 64 × 64 × 3. With a batch size of 20, we run three cycles, performing the task-specific learning for 200, 400, 400, and 2000 epochs on deblocking, denoising, deraining, and deblurring, respectively, and the task-agnostic learning for 1000 epochs with 1/4 of the data for each task. We implement TAViT using the PyTorch library under a BSD-style license with the Flower federated learning protocol (Beutel et al., 2020) under the Apache 2.0 License. The details of the datasets and implementation are described in Appendix B.

4.1 RESULTS

Convergence of TAViT for multi-task distributed learning We evaluated the results of the proposed TAViT for multi-task distributed learning with all participating clients and one common Transformer body on the server. Figure 3 shows the gradual progression of the quality metrics through the alternating training scheme. The performance on all tasks increased as the task-specific and task-agnostic learning continued. This demonstrates the synergistic improvement of the task-specific heads/tails and the task-agnostic body: the heads and tails learn more accurate feature embeddings for the given tasks, and the common body learns the global attention general to multiple image processing tasks by seeing various datasets. Although for some tasks the score of a step was slightly lower than that of the previous step due to the interaction of the different task datasets, the overall performance of TAViT improved as the cycles progressed. Detailed quantitative results for each cycle are given in Appendix C.

Comparison of TAViT to other strategies We compared TAViT with other distributed learning strategies: SL and FL. We conducted both SL and FL with the two clients assigned to the deblocking task. For SL, as designed in Vepakomma et al. (2018), we placed the head and tail networks on the clients and the body on the server, and trained these split networks without weight aggregation for the head and tail. For FL, we placed the entire model composed of the head, body, and tail on each client, and trained the networks in parallel by carrying out aggregation with FedAvg (McMahan et al., 2017a) using a common server. Figure 4 shows these scenarios, where C1 and C2 are clients for deblocking, C3, C4, and C5 are clients for denoising, deraining, and deblurring, respectively, and S is the server. As reported in Table 1, the proposed method achieves better performance than the other strategies even though it learns multiple tasks.
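As a side note, the augmentation pipeline from the implementation details above admits a simple sketch (assuming torchvision-style transforms; the exact composition used by the authors may differ):

import random
import torchvision.transforms.functional as TF
from torchvision import transforms

def augment(img):
    """Random 64x64 crop, flips, and a 90-degree rotation, each with prob. 0.5."""
    img = transforms.RandomCrop(64)(img)
    if random.random() < 0.5:
        img = TF.hflip(img)
    if random.random() < 0.5:
        img = TF.vflip(img)
    if random.random() < 0.5:
        img = TF.rotate(img, 90)
    return img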
Comparison of TAViT to learning each separate task To verify the capability of the task-agnostic Transformer body to learn from multiple tasks, we compared TAViT with models independently trained on each individual task. Under the setting of centralized data for each task, we implemented this study in two ways: end-to-end learning (EL) and single-task learning (STL). For EL, we trained the whole network on one device through an end-to-end optimization scheme. For STL, we distributed the decomposable head, body, and tail to a client and a server as in the proposed method, and trained the networks with the alternating training strategy for one cycle. Table 2 reports the results on the benchmark datasets for each task. It shows that TAViT, trained on multiple tasks simultaneously, outperforms both EL and STL, which suggests that the task-agnostic body of our framework is not degraded by task heterogeneity but rather enhances the performance on the various tasks.

Comparison of TAViT to CNN-based models To compare the performance of TAViT with CNN-based deep learning models, we tested several existing methods on the benchmark datasets for each task. Table 2 and Figure 5 show the quantitative and visual comparison results, respectively. For deblocking, when comparing with DnCNN (Zhang et al., 2017a), AR-CNN (Dong et al., 2015), and QCN (Li et al., 2020b), the proposed method outperforms them for both the 10 and 50 levels of quantization quality. Visual comparisons also show that the proposed method removes block artifacts more cleanly than the others. For denoising, we compared our method with CBM3D (Dabov et al., 2007), DnCNN (Zhang et al., 2017a), FFDNet (Zhang et al., 2018b), IRCNN (Zhang et al., 2017b), DHDN (Park et al., 2019), and SADNet (Chang et al., 2020). The results show that TAViT achieves better PSNR/SSIM scores and also provides more clearly denoised images, preserving structure and texture details better than the comparison methods. For deraining, we tested our model against DerainNet (Fu et al., 2017a), SEMI (Wei et al., 2019), UMRL (Yasarla & Patel, 2019), PreNet (Ren et al., 2019), and MSPFN (Jiang et al., 2020). Following Jiang et al. (2020), we used the Y channel in the YCbCr color space for evaluation. As a result, our model outperforms the comparative methods on both Rain100H and Rain100L. Also, the images restored by our method are closer to the references, removing rain streaks better than the others. For deblurring, we employed DeblurGAN (Kupyn et al., 2018), Nah et al. (2017), Zhang et al. (2018a), and DeblurGANv2 (Kupyn et al., 2019) for comparison. The results show that the proposed method achieves performance comparable to the existing approaches. Visual results show that TAViT restores blurry images with sharp edges, while the others still contain blurry artifacts or position shifts of objects compared to the references.

5 CONCLUSION

In this work, we present a multi-task distributed learning framework called TAViT. In TAViT, the task-specific head CNN and tail CNN are distributed to clients that have their own data, and are connected to a common Transformer body placed on the server. With an alternating training scheme, the heads and tails on the client side are trained by task-specific learning, while the body is trained by task-agnostic learning.
We conduct experiments on four different image processing tasks, which show the success of the task-agnostic learning of the Transformer body and its synergistic improvement with the task-specific heads and tails. Through our model, participating clients can design and train their own networks, depending on the task, using local data in parallel. We expect that the proposed TAViT can be efficiently used in cases where sharing data with other institutions is sensitive, such as in medical fields.

Ethics statement As our work utilizes distributed learning models, similar to existing FL and SL, our method may be vulnerable to privacy attacks against the server, such as inversion attacks (Yin et al., 2021). Although the proposed framework is designed by encoding the feature maps and gradients under the Flower protocol, which makes it difficult for attackers to restore the original data, the hidden features may still leak the raw data to some degree. Thus, privacy-related techniques such as differential privacy (McMahan et al., 2017b) and authenticated encryption of data (Rogaway, 2002) should be analyzed for practical applications.

Reproducibility statement The source code and our trained models to reproduce the proposed method are available at https://github.com/TAViT2022/TAViT. For the detailed pseudocode, refer to Appendix A. Also, the data processing steps for the datasets used in the experiments are provided in Appendix B.

A DETAILS OF TAVIT WITH PSEUDOCODE

As described in the main paper, the task-specific heads and tails at the clients and the Transformer body in the server are trained in an alternating manner between the proposed task-specific learning and the task-agnostic learning. In the following, we describe each step in more detail in terms of its implementation.

Pseudocode for the task-specific learning Algorithm 2 shows the pseudocode for the task-specific learning. Given K image processing tasks, the task-specific learning updates the heads H and the tails T in each client with the fixed body B. The server first initializes the global weights of the heads and the tails and sends them to all clients in Ck, where Ck is a set of clients with different datasets for the k-th task. When each client c ∈ Ck takes local training data x and provides a feature map fH from the head Hc to the server (line 5 with ClientPhase1), the server-side Transformer body takes the feature map fH as an input embedding and estimates the self-attended features fB that are independent of specific tasks. Once fB is sent from the server to the client, the client computes the task-specific loss ℓc between the label y and the tail output ŷ (line 24 in ClientPhase2). The gradient of the tail ∂ℓc/∂Tc is also computed in the client in ClientPhase2 (line 25), which is used to compute ∂ℓc/∂fB that is transported to the server. Then, in the server, ∂ℓc/∂fH is calculated by back-propagation through the body and is sent to the client so as to compute the head gradient ∂ℓc/∂Hc and finally update Hc and Tc (lines 28-30 of ClientUpdate) using a single optimizer. Here, when there are multiple clients for the k-th task, i.e. |Ck| > 1, we apply federated learning to the heads and the tails of those clients, as described in lines 11-16 of Algorithm 2. The heads and the tails are trained in parallel, and their weights are aggregated by FedAvg (McMahan et al., 2017a) on the server side at every weight aggregation period. Then, these updated global weights of the heads HCk and the tails TCk are transmitted to all clients in Ck so that the clients train their own heads and tails using the new global weights from the next step.

Algorithm 2 Task-specific learning of TAViT: Is denotes the number of iterations for the task-specific learning in one cycle. (Hc, Tc) denote the head and the tail in a client c ∈ Ck for the k-th task, and B denotes the Transformer body in a common server. (HCk, TCk) are the global weights of the heads and the tails for task k. (fH, fB) are the output feature maps from the head and the body, and ŷ is the output of the tail. ℓc is the task-specific loss of the client c. |Dj| is the size of the training data at cj, and |D| is the total size of the training data at Ck, i.e. |D| = Σ_j |Dj|.
/* run on the server (with fixed B) */
1: Initialize HCk, TCk
2: Send HCk, TCk to all clients c ∈ Ck
3: for is in [1, Is] do
4:     for each client c ∈ Ck, where k = {1, 2, . . . , K}, in parallel do
5:         fH ← ClientPhase1()
6:         fB ← B(fH) // body output
7:         ∂ℓc/∂fB ← ClientPhase2(fB)
8:         ∂ℓc/∂fH ← (∂ℓc/∂fB)·(∂fB/∂fH) // backpropagation through body
9:         ClientUpdate(∂ℓc/∂fH)
10:     end
11:     if is is a weight aggregation step for |Ck| > 1 then
12:         Get (Hcj, Tcj) from client cj, where j ∈ {1, 2, . . . , Nk}
13:         HCk ← Σ_{j=1}^{Nk} (|Dj|/|D|) Hcj // FedAvg for heads
14:         TCk ← Σ_{j=1}^{Nk} (|Dj|/|D|) Tcj // FedAvg for tails
15:         Send (HCk, TCk) to all clients c ∈ Ck
16:     end
17: end
/* run on client c */
18: Function ClientPhase1()
19:     x, y ← set current data and label
20:     fH ← Hc(x) // head output
21:     return fH
/* run on client c */
22: Function ClientPhase2(fB)
23:     ŷ ← Tc(fB) // tail output
24:     ℓc ← Loss(y, ŷ)
25:     ∂ℓc/∂Tc ← (∂ℓc/∂ŷ)·(∂ŷ/∂Tc) // computation of tail gradients
26:     ∂ℓc/∂fB ← (∂ℓc/∂Tc)·(∂Tc/∂fB)
27:     return ∂ℓc/∂fB
/* run on client c */
28: Function ClientUpdate(∂ℓc/∂fH)
29:     ∂ℓc/∂Hc ← (∂ℓc/∂fH)·(∂fH/∂Hc) // computation of head gradients
30:     update Hc, Tc using ∂ℓc/∂Hc and ∂ℓc/∂Tc by an optimizer, e.g. Adam

Pseudocode for the task-agnostic learning In the task-agnostic learning, the Transformer body in the server is updated with the fixed heads and tails of the clients. Algorithm 3 shows the pseudocode for the task-agnostic learning of TAViT.

Algorithm 3 Task-agnostic learning of TAViT: Ia denotes the number of optimization iterations for the task-agnostic learning in one cycle. (Hc, Tc) denote the head and the tail in a client c ∈ Ck for the k-th task, and B denotes the Transformer body in a common server. (fH, fB) are the output feature maps from the head and the body, and ŷ is the output of the tail. ℓc is the task-specific loss of the client c.
/* run on the server */
1: Initialize CB = {c^1_{n1}, c^2_{n2}, . . . , c^K_{nK}} where c^k_{nk} ∈ Ck
2: for ia in [1, Ia] do
3:     c ← c^k_{nk} ∈ CB // random selection of a client with task k
4:     fH ← ClientPhase1()
5:     fB ← B(fH) // body output
6:     ∂ℓc/∂fB ← ClientPhase2(fB)
7:     ∂ℓc/∂B ← (∂ℓc/∂fB)·(∂fB/∂B) // computation of body gradients
8:     update B using ∂ℓc/∂B by an optimizer, e.g. Adam
9: end
/* run on client c (with fixed Hc, Tc) */
10: Function ClientPhase1()
11:     x, y ← set current data and label
12:     fH ← Hc(x) // head output
13:     return fH
/* run on client c (with fixed Hc, Tc) */
14: Function ClientPhase2(fB)
15:     ŷ ← Tc(fB) // tail output
16:     ℓc ← Loss(y, ŷ)
17:     ∂ℓc/∂fB ← (∂ℓc/∂ŷ)·(∂ŷ/∂fB) // backpropagation through tail
18:     return ∂ℓc/∂fB
Given a subset of clients CB, formed by selecting one client from Ck for each task, a client c ∈ CB is randomly chosen at every iteration. Compared to the task-specific learning, the task-agnostic learning is implemented similarly but does not need the ClientUpdate process of Algorithm 2. In other words, after the gradient ∂ℓc/∂fB is computed on the client side in ClientPhase2 (lines 14-18) and transmitted to the server (line 6), the server updates the Transformer body by computing the body gradients ∂ℓc/∂B (lines 7-8), which is the final step of each iteration.

B DETAILS OF DATASETS AND IMPLEMENTATION

B.1 LICENSE/SOURCE FOR EACH DATASET

In our experiments, we use public datasets for the image deblocking, denoising, deraining, and deblurring tasks. Here, we describe specific information for each dataset, such as its license and source link.

PASCAL VOC 2012 The PASCAL VOC dataset (Everingham et al., 2010) is publicly available and includes images obtained from the "flickr" website under SmugMug or its third-party licensors. The data are protected by United States and international intellectual property laws. The data source is the URL: http://host.robots.ox.ac.uk/pascal/VOC/.

BSDS500 and CBSD68 The Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) (Arbeláez et al., 2011) is an extended version of BSDS300 (Martin et al., 2001b), a public dataset originally provided for image segmentation and boundary detection by the Berkeley Computer Vision Group. This dataset is widely used for measuring image restoration performance. The color BSD68 dataset (CBSD68) is extracted from BSDS500. BSDS500 can be downloaded at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html.

Synthetic rainy images The synthetic rainy dataset for training is collected from Rain14000, synthesized by Fu et al. (2017b), Rain1800, authored by Yang et al. (2017), Rain800, created by Zhang et al. (2019a), and Rain12, made by Li et al. (2016). We test our method on the synthetic rainy datasets Rain100H and Rain100L, both authored by Yang et al. (2017). All these datasets are publicly available and can be downloaded at the following links:
- Rain14000: https://xueyangfu.github.io/projects/cvpr2017.html
- Rain1800: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html
- Rain800: https://github.com/hezhangsprinter/ID-CGAN
- Rain12: https://yu-li.github.io/
- Rain100L & Rain100H: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html

GoPro The GoPro dataset (Nah et al., 2017) provides training and test sets for deblurring. The data are available at https://seungjunnah.github.io/Datasets/gopro.html.

B.2 DATA PROCESSING

All datasets used in the experiments provide natural images with three RGB channels and pixel values in the range [0, 255]. On these datasets, we performed the following data processing according to the image processing task. For the image deblocking task, we quantized the images following JPEG compression. We first transformed RGB images into the YUV color space using the following equations:

Y = 0.257R + 0.504G + 0.098B + 16 (7)
U = −0.148R − 0.291G + 0.439B + 128 (8)
V = 0.439R + 0.368G − 0.071B + 128 (9)

Then, we split the image into 8 × 8 blocks without overlap and applied the Discrete Cosine Transform (DCT) to each block. According to the level of quantization quality, we divided each element of the DCT coefficients by predefined quantization matrices.
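A minimal NumPy/SciPy sketch of this deblocking data generation (including the inverse steps described in the next paragraph) might look as follows. The quantization matrix Q is an assumed quality-dependent table (e.g., the standard JPEG luminance table scaled for quality 10 or 50), and the 128-level shift before the DCT is a standard-JPEG assumption rather than a detail stated by the authors; image dimensions are assumed to be multiples of 8.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def jpeg_degrade_channel(ch, Q):
    """Blockwise 8x8 DCT quantization of one YUV channel.

    Eqs. (7)-(12) handle the RGB <-> YUV conversion around this step."""
    h, w = ch.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = ch[i:i+8, j:j+8].astype(np.float64) - 128.0
            coeff = np.round(dct2(block) / Q) * Q   # quantize DCT coefficients
            out[i:i+8, j:j+8] = idct2(coeff) + 128.0
    return out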
After that, we applied the inverse DCT, aggregated all blocks into an image, and then transformed the image from the YUV back to the RGB color space:

R = 1.164(Y − 16) + 1.596(V − 128) (10)
G = 1.164(Y − 16) − 0.392(U − 128) − 0.813(V − 128) (11)
B = 1.164(Y − 16) + 2.017(U − 128) (12)

For the denoising task, we added Gaussian noise to the clean images. Specifically, we applied random Gaussian noise with level (µ, σ) = (0, 30) to the images, and then clipped the values to [0, 255]. For the other tasks, the Rain# and GoPro datasets provide synthetic rainy images and blurry images, respectively. Since we used these datasets for the deraining and deblurring tasks, we did not perform any preprocessing such as the synthesis of rain artifacts or blurring effects. After the above data processing for all tasks, we randomly cropped the images to a patch size of 64 × 64 × 3. Also, we applied data augmentation using random flipping and rotation by 90 degrees. Then, we normalized the images from the pixel-value range [0, 255] to [-1, 1], which are the final inputs to the model.

B.3 NETWORK ARCHITECTURES

For the task-specific head and tail of each task, we use the network architecture of DDPM (Ho et al., 2020), which is composed of residual blocks and attention modules. We set the number of 2× downsampling/upsampling stages to 3. For each stage, the channel size is set to 128, 256, and 512, respectively. Accordingly, given an input image x ∈ R^{64×64×3}, the head provides a feature map fH ∈ R^{16×16×512} that passes through the body, and the tail generates an output of the same size as the input. For the Transformer body, we use the encoder part of the vanilla Transformer (Vaswani et al., 2017). As described in the main paper, the Transformer body takes a sequence of patches f, obtained by reshaping the feature map fH, as a word embedding. In the experiments, the length of the input sequence is 256, with a patch size of 1, and the sequence dimension is 512. Once the input sequence is added to learnable positional encodings, the encoded features h pass through L encoder layers (L = 8 in our experiments). Table 3 shows the structure of each encoder layer of the body.

Model sizes Table 4 shows the model sizes of the task-specific head and tail and the Transformer body. Comparing the number of parameters and the size of the networks, we can observe that the client-side networks, composed of the head and the tail, are larger than the task-agnostic Transformer body. Considering the experimental results in the main paper, this suggests that the body estimates task-agnostic self-attended features that provide the synergy effect of the task-specific and task-agnostic learning, even though the body is smaller than the head and tail combined.

C EXPERIMENTAL RESULTS

C.1 TAVIT ON MULTIPLE IMAGE PROCESSING TASKS

Evaluation results of TAViT Table 5 reports the quantitative evaluation results of TAViT on multiple image processing tasks, visualized with the score graphs over cycles in the main paper. Figure 6 shows the qualitative results of TAViT. They show that the performance of each task improves over the cycles of alternating task-specific and task-agnostic learning.

Table 5: Quantitative results of TAViT according to the cycles, which are visualized with graphs in the main paper. The best results are highlighted in bold.
Cycle | BSDS500 (Q10) | BSDS500 (Q50) | CBSD68 (σ = 30) | Rain100H | Rain100L | GoPro
(each cell: PSNR/SSIM)
0.5 | 27.53/0.781 | 32.92/0.921 | 30.57/0.868 | 28.24/0.860 | 33.17/0.939 | 28.94/0.871
1.0 | 27.57/0.782 | 33.01/0.922 | 30.62/0.869 | 28.75/0.862 | 32.69/0.936 | 29.09/0.873
1.5 | 27.61/0.784 | 33.05/0.923 | 30.57/0.870 | 28.57/0.869 | 33.58/0.945 | 29.63/0.885
2.0 | 27.65/0.785 | 33.14/0.924 | 30.66/0.870 | 28.79/0.867 | 33.50/0.944 | 29.72/0.887
2.5 | 27.64/0.785 | 33.14/0.924 | 30.62/0.870 | 29.25/0.875 | 34.30/0.949 | 29.96/0.893
3.0 | 27.69/0.786 | 33.21/0.924 | 30.69/0.871 | 29.35/0.875 | 33.88/0.947 | 30.06/0.894

Figure 6: Qualitative results of TAViT according to the cycles. From the left to the right columns: the deblocking results on images with quantization quality 10 and 50, the denoising results, the deraining results on Rain100H and Rain100L, and the deblurring results. The yellow value is PSNR, and the inset box is a magnified view of the red rectangle.

Qualitative comparisons Besides the results presented in the main paper, here we show more visual comparisons of TAViT to the existing methods. Figures 7, 8, 9, and 10 display the deblocking, denoising, deraining, and deblurring results, respectively. All these results verify that TAViT, as a distributed learning framework for multiple image processing tasks, outperforms the comparison methods.

C.2 ABLATION STUDY OF TAVIT

Study on the amount of data for each task in the task-agnostic learning In the main paper, we implemented our method using 1/4 of the dataset for each task in the task-agnostic learning. To verify that this amount of data is enough for the task-agnostic learning, we performed an ablation study using a different amount of data, with a 1/2 ratio, for each of the deblocking, denoising, deraining, and deblurring tasks. Table 6 and Figure 11 show the quantitative results of TAViT on the multiple tasks using 1/2 of the data in the task-agnostic learning. Similar to the results with the 1/4 data ratio, the PSNR and SSIM scores tend to increase as the cycles continue. Comparing the best results from the 1/4 and 1/2 data ratios, we observe that the performance for each task using even 1/4 of the data is comparable to or better than using 1/2 of the data. This suggests that using 1/4 of the data for each task in the task-agnostic learning is sufficient to train the Transformer body and obtain high performance.

Study on the weight aggregation period In the main paper, we conducted the experiment of TAViT by applying FL to the deblocking task, which has two clients with their own data. In FL, the weights of the network in each client are averaged on the server at every weight aggregation period, which is given as a hyperparameter. Since this period can influence learning performance, in that the clients and the common server communicate to aggregate network weights, we performed an ablation study on the weight aggregation period for training the client-side networks. As reported in Table 7, for the deblocking task, we trained the model with aggregation periods of 20, 50, and 100 epochs. Evaluating the deblocking results, weight aggregation every 50 epochs provides better performance, with 27.53dB/0.781 and 32.92dB/0.921 of PSNR/SSIM for quality 10 and 50, respectively, than the other periods. This verifies that the weight aggregation period of 50 epochs used in the main paper is appropriate for training and evaluating the proposed TAViT in our experiments.
D DISCUSSION

D.1 SKIP-CONNECTION OF HEAD AND TAIL FOR PRIVACY PRESERVATION

When configuring the task-specific heads and tails with skip-connections, our model can avoid privacy attacks to some degree while maintaining the encoding information needed for the tail to generate outputs. This is because the skip-connected features are isolated on each client and not transported to the server. Accordingly, the features transported between the clients and the server may contain far less information about the original data. Figure 12 shows examples of the outputs with and without skip-connections. The network output without skip-connections barely retains the properties of the original data, which indicates that one may not be able to reconstruct the original data using the transmitted hidden features of the proposed method.

D.2 EFFECT OF THE TASK-AGNOSTIC TRANSFORMER BODY

As described in the main paper, the reason for building our model with CNN-based heads/tails and a Transformer-based body is to take advantage of each type of network. In particular, the Transformer learns global attention over the input sequence through self-attention modules and has recently been extensively studied for various computer vision tasks. One of the most unique advantages of the Transformer is to convert “unattended” input feature vectors into “attended” output feature vectors by learning global attention and non-local interactions between the input features. Accordingly, the task-specific head/tail can be trained to learn only task-specific local features, whereas the global features can be learned through the Transformer body. This disentangled representation of local and non-local features has been pursued throughout the development of deep networks. Thus, the proposed Transformer-based approach is one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance and at the same time enables the privacy-preserving split-learning architecture. To show that this design is appropriate for multi-task distributed learning, we additionally conducted an experiment replacing the Transformer body with a CNN model. Specifically, we configured the CNN body with CBR blocks, where C is a convolutional layer with a constant channel size of 512, B is a batch normalization layer, and R is a ReLU activation layer. For a fair comparison, we used 7 CBR blocks so as to have almost the same number of learnable parameters as the Transformer body (16,522,240 for the CNN body vs. 16,953,344 for the Transformer body). Then, using this CNN body, we ran the proposed task-specific and task-agnostic learning for one cycle on the multiple image processing tasks, as in the main paper. As a result, Table 8 shows that our model with the Transformer body achieves higher performance in both the task-specific and task-agnostic learning. This indicates that the Transformer can serve as a general task-agnostic body for multi-task learning.

D.3 SAMPLING STRATEGY OF CLIENTS

When there are multiple clients for one task in the task-specific learning, the task-specific networks of the clients are aggregated following the sampling strategy of FedAvg. On the other hand, in the task-agnostic learning of the proposed TAViT, one client is sampled at each iteration. Since the networks of the clients for the same task are aggregated before the task-agnostic learning, we can readily sample one client for each task. Then, choosing one client for the subset in Eq.
(4) can be viewed as sampling one task, which naturally reduces the communication cost. In fact, the performance of TAViT is not affected by the number of sampled clients in the task-agnostic learning, since the task-agnostic body is updated for a sufficient number of iterations. To demonstrate this, we performed the task-agnostic learning for the four tasks in our experiments while varying the sampling strategy. Table 9 shows the results after training our model for one cycle according to the number of clients sampled in the task-agnostic learning. The results show that sampling one client achieves comparable or higher performance on all tasks compared to sampling more than one client. This supports that our sampling strategy is an efficient way to train the Transformer body with less communication cost, even when the number of clients increases.

D.4 COMMUNICATION COST BETWEEN CLIENTS AND SERVER

In the proposed TAViT, the features and gradients of the networks are transported between the clients and the server, so one may wonder how much additional communication cost this incurs. To compute the communication cost of our method, we assume that the cost is proportional to the number of transported elements. Also, since the sizes of the features and gradients sent from the clients to the server are the same as those sent from the server to the clients, we only consider one direction, from the clients to the server. We then computed the maximum cost of one communication to update our model in each of the task-specific and task-agnostic learning phases, and compared our cost to that of FL (McMahan et al., 2017a). Specifically, when there are Nk clients for the k-th task, let PH, PB, and PT be the number of parameters of the head, body, and tail, respectively. In the case of FL, which aggregates the whole model composed of the head, body, and tail, the cost per communication can be represented as:

Cost_FL = Nk(PH + PB + PT). (13)

On the other hand, our model does not require the transportation of learnable parameters except at the aggregation step in the task-specific learning. Thus, the communication cost can be computed as:

Cost_TAViT = Nk(PH + PT) if an aggregation step (task-specific learning),
Cost_TAViT = Nk(F + G) else if a non-aggregation step (task-specific learning),
Cost_TAViT = F + G otherwise (task-agnostic learning), (14)

where F and G are the numbers of elements of the transported features and gradients, respectively. From (14), we can see that the communication cost at the network aggregation step in the task-specific learning of the proposed method is smaller than that of FL, which needs to transport the parameters of the whole model, including the head, body, and tail. Specifically, instead of aggregating the parameters of the Transformer body, TAViT transports features and gradients that are much smaller than the body parameters, which can significantly reduce the cost per communication. For example, the proposed model for the deblocking task contains PH + PB + PT = 44,774,792 parameters, whose memory size is about 207.5MB. Suppose that 10 clients participate in FL to train this model. Then 447.7M elements are transported from the clients to the server, and the server's network must handle more than a 2GB load per communication. In contrast, our model transports PH + PT = 27,952,520 parameters, whose memory size is approximately 142.5MB. Thus, even with 10 clients, 279.5M elements are transported, and the server's network handles about 60% of the FL load.
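To make Eqs. (13)-(14) concrete, a small script reproducing the element counts from this discussion could look as follows; the feature size F = G used below is taken from the next paragraph, and the function names are illustrative:

def cost_fl(n_clients, p_model):
    """Eq. (13): elements transported per communication under FL (whole model)."""
    return n_clients * p_model

def cost_tavit(n_clients, p_head_tail, f_elems, g_elems, phase):
    """Eq. (14): elements transported per communication under TAViT."""
    if phase == "aggregation":        # task-specific learning, FedAvg step
        return n_clients * p_head_tail
    if phase == "non-aggregation":    # task-specific learning, feature/gradient exchange
        return n_clients * (f_elems + g_elems)
    return f_elems + g_elems          # task-agnostic learning

# Deblocking example from the text, with 10 clients:
P_MODEL, P_HEAD_TAIL = 44_774_792, 27_952_520
F = G = 20 * 16 * 16 * 512            # a batch of 20 feature maps of size 16x16x512
print(cost_fl(10, P_MODEL) / 1e6)                                  # ~447.7M elements
print(cost_tavit(10, P_HEAD_TAIL, F, G, "aggregation") / 1e6)      # ~279.5M elements
print(cost_tavit(10, P_HEAD_TAIL, F, G, "non-aggregation") / 1e6)  # ~52.4M elements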
In addition, since the number of elements in the features and gradients is F = G = 20 × 16 × 16 × 512 = 2,621,440, corresponding to 10MB of memory, the number of transported elements per communication for 10 clients is 52.4M, and the server handles only about a 200MB load per communication. On the other hand, in the task-agnostic learning, the server updates the body with the sampled client without any weight aggregation. Accordingly, only the features and gradients are transported from the client to the server. In particular, for the communication from the server to the client, the server does not need to transport gradients to the client, but only transmits the features. Thus, the cost per communication in the task-agnostic learning phase is significantly reduced. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage increases further if a bigger Transformer body is used for a better representation of global attention.

Scalability Suppose that there are K tasks, and let the total number of clients connected to a server be N_all. For a simple analysis of scalability, we assume that each communication between the clients and the server takes constant time. The scalability is computed as the time complexity of one communication round between the clients and the server to update the models. For our task-specific learning, one communication round has time complexity O(N_all) if we update the heads and tails of all clients. This means that the communication cost increases with the number of clients, which can limit the scalability of the proposed method. However, if we apply the client sampling strategy of FedAvg, we can control the number of communications, and one communication round has time complexity Ω(K). This sampling strategy can be readily adapted to our model without significant modification. On the other hand, for the task-agnostic learning phase, one communication round has time complexity O(K), since the network parameters of the clients for the same task are aggregated before the task-agnostic optimization. Moreover, with the proposed strategy of sampling one task, one communication round has time complexity Ω(1), as studied in Appendix D.3.

D.5 APPLICATION TO HIGH-LEVEL VISION TASKS AND MEDICAL DATA

In the main paper, the proposed TAViT was demonstrated on multiple low-level computer vision tasks. However, the TAViT framework can also be used for a wide range of high-level computer vision tasks, and even with different data domains such as medical images. To demonstrate this, we additionally conducted experiments on an inpainting task for natural images and a denoising task for X-ray CT images. Here, image inpainting is a higher-level computer vision task that requires more semantic information, and the denoising of X-ray CT requires domain-specific knowledge about the data. In particular, to show that our task-agnostic Transformer body has a positive effect on the training of new task-specific networks, we performed the task-specific learning only for the client-side heads and tails by subscribing to the pre-trained Transformer body, which was trained on the four natural image processing tasks, without additional fine-tuning. The details of training and results are as follows.

Dataset For the image inpainting task, we used the PASCAL VOC2012 dataset, which contains 10,582 natural images. The license and source information for this dataset can be found in Appendix B.
For preprocessing, we scaled the images from [0, 255] to [-1, 1] and randomly cropped 128 × 128 patches. Then, we multiplied each image with a zero-box mask whose width and height are randomly chosen from 48 to 64, following Yu et al. (2018). For the X-ray CT denoising task, we used the 2016 AAPM Low-dose CT Grand Challenge dataset (McCollough et al., 2020), which provides noisy CT images at a quarter dose and clean CT images at the routine dose of X-rays. Since X-ray CT data are measured in Hounsfield units, we divided the intensity by 4,000 and randomly cropped 64 × 64 patches.

Implementation details For the image inpainting task, we employed the network architecture of Yu et al. (2018) and decomposed it into two parts for the task-specific head and tail. We performed the task-specific learning by minimizing the adversarial generative loss for 400 epochs using the Adam optimizer with a learning rate of 1 × 10^-4. For the X-ray CT denoising task, we used the same head and tail architecture implemented in this paper. We trained the task-specific networks with the fixed task-agnostic body for 400 epochs using the Adam optimization algorithm with a learning rate of 5 × 10^-3.

Results To evaluate the performance on image inpainting and medical image denoising, we compared our method to a CNN model that has the same head and tail architectures as ours but does not have the Transformer body. The quantitative evaluation results are shown in Table 10, and visual comparisons are shown in Figure 13. We can see that the inpainting performance improves when training the client-side networks with our pre-trained Transformer body, even though the body was pre-trained on low-level computer vision tasks. This implies that the proposed method can be extended to various high-level tasks. In addition, we observe that our model achieves higher performance on the medical image denoising task than the comparative CNN model and provides clean images, although the Transformer body was trained on the natural image domain. From these results, we can confirm that our task-agnostic Transformer body has the capability to bridge the domain gap across different data sources. Also, this suggests that clients do not need to train the server-side body from scratch when they subscribe to the body for other tasks.
1. What is the main contribution of the paper, and how does it relate to Vision Transformers?
2. Can the proposed method be applied to higher-level tasks, and what are the potential challenges?
3. How does the paper address the domain gap caused by different tasks, and how does it handle data from different sources?
4. What are the differences between this work and prior works on task-agnostic federated learning?
Summary Of The Paper
In this work, the authors present a multi-task distributed learning framework called TAViT. The task-specific head CNN and the tail CNN are distributed to clients, whose data are connected to a common Transformer body placed in the server. With an alternating training scheme, the heads and tails on client sides are trained by task-specific learning, while the body is trained by task-agnostic learning. Experiments on four different image processing tasks show the success of task-agnostic learning of the Transformer body and its synergistic improvement with the task-specific heads and tails.

Review
1. In line 5 of the abstract, what inspiration did the authors draw from Vision Transformers? Why does the core motivation of this paper come from "the success of ViT"? The authors should revise the statement.
2. The proposed method has been verified on four low-level tasks. Can this strategy/framework be applied to higher-level tasks, such as image inpainting, classification, and object detection? Note that those tasks need more semantic understanding during training/inference.
3. In the traditional federated learning framework, each client commonly conducts a local training process and then transfers a model update consisting of the intermediate gradients to the server. However, in this paper, the authors transfer the dataset's features to the server, which may bring about unpredictable challenges. For example, the computational cost of homomorphic encryption for the features may increase rapidly. Does this have any advantages (research or application value)?
4. For the different tasks, the authors use a unified task-agnostic Transformer body; how do the authors bridge the domain gap caused by the other tasks' knowledge? Furthermore, do different data sources (e.g., nature, satellite, and medical images) share the same Transformer body in Figure 1? Does it still work well? The authors should provide more experimental results and in-depth analysis to verify this point.
5. Prior works [*1, *2] have also addressed the task-agnostic problem in federated learning. What is the significant difference between this paper and those works?
[*1] Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
[*2] Task-Agnostic Privacy-Preserving Representation Learning via Federated Learning
ICLR
Title Privacy-preserving Task-Agnostic Vision Transformer for Image Processing Abstract Distributed collaborative learning approaches such as federated and split learning have attracted significant attention lately due to their ability to train neural networks using data from multiple sources without sharing data. However, they are not usually suitable in applications where each client carries out different tasks with its own data. Recently, Vision Transformer (ViT) has been widely explored in computer vision applications due to its capability to learn the common representation through global attention of the embedded input sequence. By leveraging the advantages of ViT, here we present a new distributed learning framework for image processing tasks, allowing clients to learn multiple tasks with their private data. The key idea arises from a disentangled representation of local and non-local features using a task-agnostic Vision Transformer and a task-specific head/tail. By connecting task-specific heads and tails at client sides to a task-agnostic Transformer body at a server side, each client learns a translation from its own task to a common representation, while the Transformer body learns global attention between the features embedded in the representation. To enable decomposition between the task-specific and common representation, we propose an alternating training strategy in which task-specific learning for the heads and tails is run on the clients by fixing the Transformer, which alternates with task-agnostic learning for the Transformer on the server by freezing the heads and tails. Once the Transformer body is fully trained with a sufficient number of tasks and clients, additional training of the Transformer body is no longer required when a new client is added with a new task, and all that is required is the training of customer-specific head and tail. Experimental results on multi-task learning for various low-level and high-level computer vision including medical image data show that our method synergistically improves the performance of the task-specific network of each client while maintaining privacy. 1 INTRODUCTION Deep learning approaches have demonstrated the state-of-the-art performance and fast inference time in computer vision tasks (Ronneberger et al., 2015; Zhang et al., 2017a; Wang et al., 2017). In particular, convolutional neural networks (CNN) can learn the hierarchy of complex image features, so that a variety of CNN-based methods have been developed for denoising (Zhang et al., 2017b; Chang et al., 2020), deraining (Wei et al., 2019; Ren et al., 2019), deblurring (Nah et al., 2017; Kupyn et al., 2019), deblocking (Li et al., 2020b; Maleki et al., 2018), etc. However, the performance of CNN typically depends on a large number of training data (Chervenak et al., 2000; Krizhevsky et al., 2017), and it is often difficult to collect data from various entities due to privacy and regulation issues (Price & Cohen, 2019). Since the amount of data from a single source may not be enough, a deep learning framework that can leverage many datasets without violating privacy is required in real-world applications. To address this, distributed collaborative learning (DCL) approaches, which jointly train a single network on multiple systems or devices without revealing distributed data to a central entity or to each device, have been investigated (Konečnỳ et al., 2016; McMahan et al., 2017a; Gupta & Raskar, 2018). 
For example, federated learning (FL) (McMahan et al., 2017a; Li et al., 2020c) is studied to aggregate all data to the center under privacy constraints. Thanks to the parallel communication between each client, FL enables fast training of the network across multiple clients. Also, split learning (SL) (Gupta & Raskar, 2018; Vepakomma et al., 2018) is developed as an enhanced privacy-preserving model that splits a network into clients and server so that each client does not share all network parameters but only train a part of networks. From the advantages of FL and SL, a combination of split and federated learning, named SplitFed learning (SFL) (Thapa et al., 2020), has been recently proposed to provide efficient training and a high level of privacy with a less computational burden. However, the existing CNN-based methods are difficult to determine the proper layer of the network to split. Also, although training data are distributed across each client, all clients usually consider a common learning task. Meanwhile, in many practical image processing applications, it is unlikely that all the clients are interested in the same applications. For example, some of the clients may be interested in image denoising (Zhang et al., 2017b), whereas the other clients are focused on image deblurring (Nah et al., 2017), deraining (Wei et al., 2019), deblocking (Li et al., 2020b), etc. As each task is different from the others, the existing distributed learning framework may not work. That said, these image processing tasks still require understanding of common image representation, so one may wonder whether there is any systematic way of synergistically learning multiple image processing tasks in a privacy-preserving manner. One of the most important contributions of this work is to show that Task-agnostic Vision Transformer (TAViT), composed of the CNN-based head and tail and ViT-based body, is nicely fit to this purpose. Specifically, the head and tail are placed on each client to learn specific image processing tasks, while the body is stored and trained on a server to learn common representation across all tasks of clients. In contrast to the existing SL framework where the network split is arbitrary, TAViT provides a systematic way of splitting neural networks between clients and servers for privacy-preserving training without losing any performance. Furthermore, TAViT allows clients to use a common Transformer body model to learn multiple image processing tasks and synergistically improve the performance of their task-specific networks. One may think that the proposed method is similar to the image processing transformer (IPT) (Chen et al., 2020), which consists of CNN-based heads and tails and a Transformer body. However, IPT requires centralized data and large computation resources for both pretraining and task-specific fine-tuning the whole model. Also, the Transformer in IPT has an encoder-decoder architecture which needs an explicit conditioning vector to convert the Transformer for a specific task. Thus, to our best knowledge, IPT is not suitable for distributed learning. In contrast, the body of TAViT is made of an encoder-only Transformer architecture to learn global embedding features of multiple tasks without any condition. Besides, by imposing computation of this Transformer body on the server rather than clients, our framework enables clients to reduce the computational burden while maintaining the overall performance for specific image processing tasks. 
In addition, TAViT views the heads and tail at the clients and the body at the server as two-part players and updates them alternately. Specifically, our training step is composed of task-specific learning and task-agnostic learning: the former is to train the client-side heads and tails to learn each task of the client, while the latter is to train the server-side Transformer body to learn general feature interpretation over multiple tasks. When there are more than two clients for any single task, parameters of their heads and tails can be aggregated through FL. Accordingly, TAViT offers seamless integration between SL and FL approaches to protect privacy. Recall that one of the most unique advantages of Transformer body is to convert “unattended ” input features into “attended ” output features by learning global attention and non-local interactions between the input features. Accordingly, with the help of aforementioned alternating training scheme, the task-specific head/tail can be only trained to learn task-specific local features, whereas the global feature can be learned through the Transformer. In fact, this disentangled representation of local and non-local features has been pursued throughout the development of deep networks (Ye et al., 2018; Zhang et al., 2019b; Wang et al., 2018). Thus, the proposed Transformer-based approach is considered to be one of the most advanced architectures for achieving this goal, as it improves synergistically overall performance, and at the same time leads the privacy-preserving split learning. We validate the performance of TAViT on multiple image processing tasks. Experimental results show that our multi-task distributed learning framework using the alternating training strategy outperforms the end-to-end learning of each individual task thanks to the decomposition of the task-agnostic Transformer body and task-specific networks. This suggests that our framework is a promising approach for learning multiple tasks with distributed privacy-sensitive data. In sum, our contributions are summarized as follows: • We propose a novel distributed learning framework, TAViT, that carries out multiple image processing tasks using distributed data. • The proposed method consists of task-specific heads and tails on clients and a task-agnostic Transformer body on a server, which reduces the computational cost of clients and does not require centralized data for multi-task learning. • An alternating training strategy between the task-specific and task-agnostic learning for the split networks shows the synergy effect of performance improvement, which is demonstrated by experimental results on multiple tasks. 2 RELATED WORKS Federated learning In the FL setting, multiple clients learn locally stored data while one server aggregates information of clients by various methods including FedAvg (McMahan et al., 2017a). For the efficient implementation of FL, practical challenges of unstable networks, hardware capacity difference, and statistical heterogeneity of data distributions (Li et al., 2020c; Smith et al., 2017; Li et al., 2018) have been actively studied. Corinzia et al. (2019) performs FL with multiple classification tasks, and He et al. (2020) loads a huge network to a server and small CNNs to clients and trains them by knowledge distillation. Yao et al. (2019) presents an unbiased gradient aggregation for FL and meta updating of the model. In contrast, our method is presented for effectively learning on task heterogeneity using distributed data. 
Although Li et al. (2020a) presents a task-agnostic FL method based on a feature extractor, each client trains its task-specific network independently, while our model can learn multiple tasks simultaneously for a synergistic performance improvement.

Split learning Split learning (SL) is designed to train networks over distributed data by splitting the networks into two parts and updating the client-part and server-part networks sequentially (Gupta & Raskar, 2018). Extending this idea, Vepakomma et al. (2018) presents several ways to use SL, and Abuadbba et al. (2020) applies SL to 1D CNN models. However, the existing SL methods are designed for CNNs, and to the best of our knowledge, there is no principled way of splitting the network for the best performance. In particular, Thapa et al. (2020) proposes a combination of FL and SL, but the server requires labels from the clients to update the split networks, which may compromise data privacy. Also, since outputs are generated from a shared network on the server when there are multiple clients, these methods apply only narrowly to a single task. In contrast, our model presents a Transformer-based shared body that enables multi-task learning across clients without sharing data.

Vision Transformer for image processing Recently, inspired by the success of the Transformer in natural language processing (Vaswani et al., 2017; Devlin et al., 2018), Transformer-based image processing methods have been extensively explored (He et al., 2021; Han et al., 2021). In particular, Dosovitskiy et al. (2020) proposes a Vision Transformer (ViT) with an encoder-only architecture for image recognition tasks. Also, Chen et al. (2020) presents an image processing Transformer (IPT) that learns low-level vision tasks by pretraining and task-specific fine-tuning. However, to the best of our knowledge, there are no existing works that exploit the ViT architecture for distributed learning applications.

3 PRIVACY-PRESERVING TASK-AGNOSTIC VISION TRANSFORMER

3.1 SUBSCRIPTION-BASED SERVICE MODEL

As illustrated in Figure 1(a), TAViT is designed for subscription-based services. Specifically, a client subscribes to a task-agnostic Transformer model on the server side that has learned global attention over image features from other datasets. Then, the client can build the head and tail appropriate to its own image processing task and connect them to the Transformer body on the server. At subscription time, there may already be multiple clients subscribing to the same Transformer body. Accordingly, each client can train its own head and tail using its local data, whereas the common Transformer body is regularly updated using embedding features from all subscribers through the alternating training strategy shown in Figure 1(b), or even kept fixed if training has already been performed with a sufficient number of tasks and clients. As a result, the latest version of the Transformer body, trained on more data, can be maintained on the server side and offered to new clients at the next subscription. Since the local data are not centralized on one device and are not shared with other clients, our framework preserves data privacy.
In the proposed framework, we treat the features from the head as a sequence of tokens, similar to natural language processing. Specifically, as shown in Figure 1(c), we reshape the features f of size Y × X × D into a sequence of patches f = {f_1, f_2, ..., f_S}, where X, Y, D denote the width, height, and channel dimension of the image features, respectively, S is the number of patches, i.e. S = YX/p² for patch size p, and f_s denotes the s-th patch of the features, of size p² × D. These reshaped features f are fed into the Transformer body as an input sequence, to which learnable positional embeddings are added to keep the position information of each feature patch. The Transformer body consists of several encoder layers as proposed in Vaswani et al. (2017), so that the encoded features pass through a multi-head self-attention module and a feed-forward module in each layer. The body output of transformed features is then reshaped into the original shape of f to be used as input for the tail CNN.

For the Transformer body, we employ an encoder-only architecture as the task-agnostic network, in contrast to IPT (Chen et al., 2020), which uses both the encoder and decoder of the Transformer. The encoder-only Transformer learns the global relationship between features in the input corpus, and such global attention may be all we need for better performance on vision tasks, as demonstrated by ViT. Therefore, the body of our framework can be trained to translate input embedding features into globally self-attended features independent of specific tasks. Moreover, the heads are guided to learn task-specific embeddings from the input images into the common feature representation, and the tails are trained to map the attended features to the specific image processing tasks. This architectural design makes the framework suitable for multi-task distributed learning.
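For concreteness, the following is a minimal PyTorch sketch of this tokenization and the encoder-only body, assuming patch size p = 1 (the setting used in the experiments; see Appendix B.3). It is an illustrative sketch, not the released implementation, and all module and argument names are ours:

    import torch
    import torch.nn as nn

    class TransformerBody(nn.Module):
        def __init__(self, d_model=512, n_layers=8, n_heads=8, max_len=256):
            super().__init__()
            # learnable positional embeddings for up to max_len feature tokens
            self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)

        def forward(self, f):                          # f: (B, D, Y, X) head features
            b, d, y, x = f.shape
            tokens = f.flatten(2).transpose(1, 2)      # (B, S, D), S = Y*X for p = 1
            h = self.encoder(tokens + self.pos_emb[:, : tokens.size(1)])
            # reshape the attended tokens back to the feature layout for the tail
            return h.transpose(1, 2).reshape(b, d, y, x)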
3.2 TRAINING SCHEME

For distributed datasets of different tasks, we apply the alternating training strategy between the clients and the server by considering them as two players. Specifically, as shown in Figure 2, TAViT trains the client-side task-specific head and tail networks and the server-side task-agnostic body network in an alternating manner. In the task-specific learning, clients train their own heads and tails in parallel, with the body weights fixed, using locally stored datasets. In contrast, in the task-agnostic learning, the server trains the Transformer body with the fixed head and tail of a randomly chosen client at each iteration. More details are as follows.

Algorithm 1 TAViT: C = {C_1, C_2, ..., C_K} is a group of client sets, one per task. I_s and I_a denote the number of optimization iterations for the task-specific and task-agnostic steps in one cycle. H_c and T_c are the head and tail of a client c, and B is the Transformer body on the server.

    Initialization: H, T to all clients, B to a body
    for i in [1, num_cycles] do
        for i_s in [1, I_s] do                          // task-specific learning (heads & tails)
            for each client c ∈ C_k ⊂ C in parallel do
                update H_c, T_c with fixed B
            end
            if i_s is an aggregation step then          // case of multiple clients with one task
                for each client subset C_k ⊂ C s.t. |C_k| > 1 do
                    unify H_c and T_c of clients c ∈ C_k (e.g. FedAvg)
                end
            end
        end
        for i_a in [1, I_a] do                          // task-agnostic learning (body)
            k ← randomly selected task
            update B with fixed H_c, T_c, s.t. c ∈ C_k
        end
    end
    Output: H, T, B

3.2.1 TASK-SPECIFIC LEARNING

Let C = ∪_{k=1}^{K} C_k be a group of client sets participating in different image processing tasks, where K denotes the number of tasks and C_k contains one or more clients with different datasets for the k-th task, i.e. C_k = {c_1^k, c_2^k, ..., c_{N_k}^k} with N_k ≥ 1. Each client c ∈ C_k has its own task-specific network architecture for a head H_c and a tail T_c, which are connected to the Transformer body B on the server. In the task-specific learning, given the frozen Transformer B on the server and the local training data {(x_c^{(i)}, y_c^{(i)})}_{i=1}^{N_c}, client c trains the neural networks H_c and T_c by solving the following optimization problem:

    \min_{H_c, T_c} \sum_{i=1}^{N_c} \ell_c\big(y_c^{(i)}, T_c(B(H_c(x_c^{(i)})))\big),    (1)

where ℓ_c(y, ŷ) denotes the client-specific loss between the target y and the estimate ŷ. The parameters of H_c and T_c are iteratively updated using ∂ℓ_c/∂T_c and ∂ℓ_c/∂H_c. These gradients are calculated by back-propagation through the entire model, which can be expressed by the chain rule:

    \frac{\partial \ell_c}{\partial T_c} = \frac{\partial \ell_c}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial T_c}, \qquad \frac{\partial \ell_c}{\partial H_c} = \frac{\partial \ell_c}{\partial f_H} \cdot \frac{\partial f_H}{\partial H_c} = \frac{\partial \ell_c}{\partial f_B} \cdot \frac{\partial f_B}{\partial f_H} \cdot \frac{\partial f_H}{\partial H_c},    (2)

where f_H = H_c(x_c^{(i)}), f_B = B(f_H), and ŷ = T_c(f_B). This implies that, to update the head H_c and the tail T_c, the gradient ∂ℓ_c/∂f_B is transmitted to the server after back-propagation through the tail, and ∂ℓ_c/∂f_H, computed by back-propagation through the body, is transported back to each client.

Federated learning In the task-specific learning, when there are multiple clients for the same task k (i.e. N_k > 1), their heads and tails can be trained in parallel. Suppose that c_i^k has a training dataset of size |D_i| and the total dataset size in C_k is Σ_i |D_i| = |D|. In this case, the back-propagation and optimization process is the same as in the single-client case, but FedAvg (McMahan et al., 2017a) is additionally applied to the parameters H_c and T_c of c ∈ C_k at every assigned period, which is written as:

    (H_{c_j}, T_{c_j}) \leftarrow \Big( \sum_{i=1}^{N_k} \frac{|D_i|}{|D|} H_{c_i}, \; \sum_{i=1}^{N_k} \frac{|D_i|}{|D|} T_{c_i} \Big), \quad 1 \le j \le N_k.    (3)

The period of the weight aggregation is adjustable (50 epochs in our experiments). Through this federated learning, the clients for the k-th task share the same parameters at the end of the task-specific learning, as shown in Figure 2.

3.2.2 TASK-AGNOSTIC LEARNING

Once the heads and tails of multiple clients are trained, the Transformer body is trained with the heads and tails at the clients fixed. To train the Transformer body to learn a common representation in a task-agnostic manner, we construct a subset of clients C_B by selecting one client from each task:

    C_B = \{c_{n_1}^1, c_{n_2}^2, \ldots, c_{n_K}^K\}, \quad c_{n_k}^k \in C_k.    (4)

Then, the training data {(x_c^{(i)}, y_c^{(i)})}_{i=1}^{N_c} corresponding to the task of the selected client are used, and the Transformer body on the server is updated by solving the following optimization problem:

    \min_B \sum_{c \in C_B} \sum_{i=1}^{N_c} \ell_c\big(y_c^{(i)}, T_c(B(H_c(x_c^{(i)})))\big).    (5)

Similar to the task-specific learning, the parameters of B are updated using ∂ℓ_c/∂B, where the client c is randomly chosen from C_B at each optimization step. The required gradients also come from back-propagation, as follows:

    \frac{\partial \ell_c}{\partial B} = \frac{\partial \ell_c}{\partial f_B} \cdot \frac{\partial f_B}{\partial B}, \quad \text{where} \quad \frac{\partial \ell_c}{\partial f_B} = \frac{\partial \ell_c}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial f_B},    (6)

where f_B = B(f_H) and ŷ = T_c(f_B). Here, only the gradient ∂ℓ_c/∂f_B is transported to the server after back-propagation through the tail. Through this task-agnostic learning, the Transformer body on the server learns a global embedding representation and provides task-agnostic self-attended features for various image processing tasks. The pseudocode of the overall TAViT is described in Algorithm 1, with more details in Appendix A.
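As a concrete illustration of the aggregation in Eq. (3), below is a minimal sketch over PyTorch state dicts; the function and variable names are ours, serialization and communication are omitted, and buffers are averaged uniformly with the weights for brevity:

    import torch

    def fedavg(state_dicts, data_sizes):
        """Data-size-weighted average of client head/tail parameters (Eq. (3))."""
        total = float(sum(data_sizes))
        avg = {}
        for key in state_dicts[0]:
            avg[key] = sum(sd[key].float() * (n / total)
                           for sd, n in zip(state_dicts, data_sizes))
        return avg

    # after a task-specific round, clients of the same task share the averaged weights:
    # new_head = fedavg([H1.state_dict(), H2.state_dict()], [len(D1), len(D2)])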
3.3 COMMUNICATION COST AND PRIVACY PRESERVATION BY TAVIT

Given that gradients have to be transmitted two-way or one-way to train the head/tail and body parts of the architecture, one may wonder whether the additional communication overhead is significant. However, since the Transformer body is a shared model on the server that does not undergo any weight aggregation, our model incurs a much smaller cost per communication between the client and the server in the task-agnostic learning. This comes from the small size of the transported features and gradients for the heads and tails. If we sample clients during network training, the communication cost can be controlled further. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage grows if a bigger Transformer body is used for a better representation of global attention. For a detailed analysis, see Appendix D.4.

The proposed TAViT is designed to use distributed local data for distinct tasks without sharing the data with other clients or any central device. Although privacy attacks on the features transported between the server and clients can occur, yet another powerful and unique mechanism for maintaining privacy in TAViT arises when the client-side network has a skip connection between the head and the tail. In this case, the transported features contain only very lossy information about the original data, and one cannot reconstruct the data using only the transmitted hidden features of the proposed method, as detailed in Appendix D.1.

4 EXPERIMENTAL RESULTS

We examine the performance of TAViT on the following image processing tasks: deblocking (JPEG artifact removal), denoising, deraining, and deblurring. Additional experiments on image inpainting and medical data are also performed to investigate its performance on higher-level computer vision tasks and data from a different domain, respectively; these can be found in Section D.5 of the Appendix. With a single server, we set two clients to carry out FL on the deblocking task and one client for each of the other tasks, so the total number of clients in our experiments is five. We evaluate results using two metrics, PSNR and SSIM.

Datasets The public datasets we used are as follows. For deblocking and denoising, we use 10,582 images from PASCAL VOC 2012 (Everingham et al., 2010) and the Segmentation Boundaries Dataset (SBD) (Hariharan et al., 2011). For FL on deblocking in particular, we split the data into two sets of 5,291 images and distribute them to the two clients. Deblocking results are evaluated on the Berkeley Segmentation Database (BSD500) (Martin et al., 2001b), which provides 200 test images. For denoising, we apply random Gaussian noise with level σ = 30 to the images. The denoising model is evaluated on CBSD68, which contains 68 test images extracted from BSD500. For deraining, following the experimental setting of Jiang et al. (2020), we use data from Rain14000 (Fu et al., 2017b), Rain1800 (Yang et al., 2017), Rain800 (Zhang et al., 2019a), and Rain12 (Li et al., 2016), which together provide 13,711 pairs of clean and synthetic rainy images. Deraining results are evaluated on Rain100H and Rain100L (Yang et al., 2017), each of which has 100 synthetic rainy images. Deblurring is performed on the GoPro dataset (Nah et al., 2017), which contains 2,103 and 1,111 pairs of sharp and blurry images for the training and test sets, respectively.
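A small helper for the PSNR/SSIM metrics used throughout the experiments is sketched below; it assumes scikit-image and is not the exact evaluation script used in the paper:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(restored, reference):
        """Images as uint8 HxWx3 arrays in [0, 255]; returns (PSNR, SSIM)."""
        psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
        ssim = structural_similarity(reference, restored, data_range=255,
                                     channel_axis=-1)
        return psnr, ssim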
Implementation details To implement TAViT, we use the encoder and decoder of DDPM (Ho et al., 2020) with three stages as the backbone of each head and tail at the clients. For the Transformer body on the server, we use 8 encoder layers of the vanilla Transformer (Vaswani et al., 2017) with a sequence length of 256 tokens and an embedding dimension of 512. The total number of parameters of the networks at each client and at the server is about 28M and 17M, respectively. Using 4 Nvidia Quadro RTX 6000 cards and 2 Nvidia GeForce GTX 1080Ti cards, we train the networks with the Adam optimizer and a learning rate of 3 × 10⁻⁵. We initialize the network parameters with those of a model pre-trained in an autoencoder scheme. For data augmentation, we apply random horizontal and vertical flipping, rotation by 90 degrees, and cropping to patches of size 64 × 64 × 3. Over three cycles, with a batch size of 20, we perform the task-specific learning for 200, 400, 400, and 2000 epochs on deblocking, denoising, deraining, and deblurring, respectively. Also, we perform the task-agnostic learning for 1000 epochs with 1/4 of the data for each task. We implement TAViT using the PyTorch library (BSD-style license) and the Flower federated learning protocol (Beutel et al., 2020) (Apache 2.0 License). Details of the datasets and implementation are described in Appendix B.

4.1 RESULTS

Convergence of TAViT for multi-task distributed learning We evaluated the results of the proposed multi-task distributed learning with all participating clients and one common Transformer body on the server. Figure 3 shows the gradual progression of the quality metrics through the alternating training scheme. Performance on all tasks improved as the task-specific and task-agnostic learning continued. This demonstrates the synergistic improvement between the task-specific heads/tails and the task-agnostic body: the heads and tails learn more accurate feature embeddings for the given tasks, and the common body learns global attention general to multiple image processing tasks by seeing various datasets. Although for some tasks the score at a given step was slightly lower than that of the previous step, due to the interaction of the different task datasets, the overall performance of TAViT improved as the cycles progressed. Detailed quantitative results for each cycle are given in Appendix C.

Comparison of TAViT to other strategies We compared TAViT with other distributed learning strategies: SL and FL. We conducted both SL and FL with the two clients assigned to the deblocking task. For SL, as designed in Vepakomma et al. (2018), we placed the head and tail networks on the clients and the body on the server, and trained these split networks without weight aggregation for the head and tail. For FL, we placed the entire model, composed of the head, body, and tail, on each client, and trained the networks in parallel with FedAvg aggregation (McMahan et al., 2017a) through a common server. Figure 4 shows these scenarios, where C1 and C2 are the clients for deblocking, C3, C4, and C5 are the clients for denoising, deraining, and deblurring, and S is the server. As reported in Table 1, the proposed method achieves better performance than the other strategies even though it learns multiple tasks.
Comparison of TAViT to learning each task separately To verify the ability of the task-agnostic Transformer body to learn from multiple tasks, we compared TAViT with models independently trained on each individual task. Under the setting of centralized data for each task, we implemented this study in two ways: end-to-end learning (EL) and single-task learning (STL). For EL, we trained the whole network on one device through an end-to-end optimization scheme. For STL, we distributed the decomposed head, body, and tail to a client and a server, as in the proposed method, and trained the networks with the alternating training strategy for one cycle. Table 2 reports the results on benchmark datasets for each task. TAViT, trained on multiple tasks simultaneously, outperforms both EL and STL, which suggests that the task-agnostic body of our framework does not degrade the model through task heterogeneity but rather enhances the performance on various tasks.

Comparison of TAViT to CNN-based models To compare the performance of TAViT with CNN-based deep learning models, we tested several existing methods on benchmark datasets for each task. Table 2 and Figure 5 show the quantitative and visual comparison results, respectively. For deblocking, comparing with DnCNN (Zhang et al., 2017a), AR-CNN (Dong et al., 2015), and QCN (Li et al., 2020b), the proposed method performs best for both quantization quality levels 10 and 50. Visual comparisons also show that the proposed method removes block artifacts more cleanly than the others. For denoising, we compared our method with CBM3D (Dabov et al., 2007), DnCNN (Zhang et al., 2017a), FFDNet (Zhang et al., 2018b), IRCNN (Zhang et al., 2017b), DHDN (Park et al., 2019), and SADNet (Chang et al., 2020). The results show that TAViT achieves better PSNR/SSIM scores and also produces more cleanly denoised images while preserving structure and texture details, compared to these methods. For deraining, we tested our model against DerainNet (Fu et al., 2017a), SEMI (Wei et al., 2019), UMRL (Yasarla & Patel, 2019), PreNet (Ren et al., 2019), and MSPFN (Jiang et al., 2020). Following Jiang et al. (2020), we used the Y channel in the YCbCr color space for evaluation. Our model outperforms the comparison methods on both Rain100H and Rain100L, and the images restored by our method are closer to the references, with rain streaks removed more thoroughly. For deblurring, we employed DeblurGAN (Kupyn et al., 2018), Nah et al. (2017), Zhang et al. (2018a), and DeblurGANv2 (Kupyn et al., 2019) for comparison. The results show that the proposed method achieves performance comparable to the existing approaches. Visually, TAViT restores blurry images with sharp edges, while the others still contain blurry artifacts or position shifts of objects relative to the references.
5 CONCLUSION

In this work, we presented a multi-task distributed learning framework called TAViT. In TAViT, task-specific head and tail CNNs are distributed to the clients that own the data and are connected to a common Transformer body placed on the server. With an alternating training scheme, the heads and tails on the client side are trained by task-specific learning, while the body is trained by task-agnostic learning. We conduct experiments on four different image processing tasks, which show the success of the task-agnostic learning of the Transformer body and its synergistic improvement with the task-specific heads and tails. With our model, the participating clients can design and train their own networks for their task using local data in parallel. We expect the proposed TAViT to be useful in cases where sharing data with other institutions is sensitive, such as in medical fields.

Ethics statement As our work utilizes distributed learning models, similar to existing FL and SL, our method may be vulnerable to privacy attacks against the server, such as inversion attacks (Yin et al., 2021). Although the proposed framework encodes the feature maps and gradients under the Flower protocol, which makes it difficult for attackers to restore the original data, the hidden features may still leak the raw data to some degree. Thus, privacy-related techniques such as differential privacy (McMahan et al., 2017b) and authenticated encryption of data (Rogaway, 2002) should be analyzed for practical applications.

Reproducibility statement The source code and trained models to reproduce the proposed method are available at https://github.com/TAViT2022/TAViT. For detailed pseudocode, refer to Appendix A. The data processing steps for the datasets used in the experiments are provided in Appendix B.

A DETAILS OF TAVIT WITH PSEUDOCODE

As described in the main paper, the task-specific heads and tails at the clients and the Transformer body on the server are trained in an alternating manner between the proposed task-specific learning and task-agnostic learning. In the following, we describe each step in more detail in terms of its implementation.

Pseudocode for the task-specific learning Algorithm 2 shows the pseudocode for the task-specific learning. Given K image processing tasks, the task-specific learning updates the heads H and tails T in each client with the body B fixed. The server first initializes the global weights of the heads and tails and sends them to all clients in C_k, where C_k is the set of clients with different datasets for the k-th task. When each client c ∈ C_k takes local training data x and provides a feature map f_H from the head H_c to the server (line 5, via ClientPhase1), the server-side Transformer body takes f_H as an input embedding and estimates the self-attended features f_B, which are independent of specific tasks. Once f_B is sent to the client, the client computes the task-specific loss ℓ_c between the label y and the tail output ŷ (line 24 in ClientPhase2). The tail gradient ∂ℓ_c/∂T_c is also computed on the client in ClientPhase2 (line 25) and is used to compute ∂ℓ_c/∂f_B, which is transported to the server. Then, on the server, ∂ℓ_c/∂f_H is calculated by back-propagation through the body and sent to the client so as to compute the head gradient ∂ℓ_c/∂H_c and finally update H_c and T_c (lines 28-30 of ClientUpdate) using a single optimizer. When there are multiple clients for the k-th task, i.e. |C_k| > 1, we apply federated learning to the heads and tails of those clients, as described in lines 11-16 of Algorithm 2. The heads and tails are trained in parallel, and their weights are aggregated by FedAvg (McMahan et al., 2017a) on the server side at every weight aggregation period. The updated global weights of the heads H_{C_k} and tails T_{C_k} are then transmitted to all clients in C_k, so that the clients continue training their own heads and tails from the new global weights in the next step.

Algorithm 2 Task-specific learning of TAViT: I_s denotes the number of iterations of the task-specific learning in one cycle. (H_c, T_c) denote the head and tail of a client c ∈ C_k for the k-th task, and B denotes the Transformer body on the common server. (H_{C_k}, T_{C_k}) are the global weights of the heads and tails for task k. (f_H, f_B) are the output feature maps of the head and the body, and ŷ is the output of the tail. ℓ_c is the task-specific loss of client c. |D_j| is the size of the training data at c_j, and |D| is the total size of the training data in C_k, i.e. |D| = Σ_j |D_j|.

    /* run on the server (with fixed B) */
    Initialize H_{C_k}, T_{C_k}
    Send H_{C_k}, T_{C_k} to all clients c ∈ C_k
    for i_s in [1, I_s] do
        for each client c ∈ C_k, k ∈ {1, 2, ..., K}, in parallel do
            f_H ← ClientPhase1()
            f_B ← B(f_H)                                   // body output
            ∂ℓ_c/∂f_B ← ClientPhase2(f_B)
            ∂ℓ_c/∂f_H ← ∂ℓ_c/∂f_B · ∂f_B/∂f_H              // backpropagation through body
            ClientUpdate(∂ℓ_c/∂f_H)
        end
        if i_s is a weight aggregation step for |C_k| > 1 then
            Get (H_{c_j}, T_{c_j}) from client c_j, j ∈ {1, 2, ..., N_k}
            H_{C_k} ← Σ_{j=1}^{N_k} (|D_j|/|D|) H_{c_j}    // FedAvg for heads
            T_{C_k} ← Σ_{j=1}^{N_k} (|D_j|/|D|) T_{c_j}    // FedAvg for tails
            Send (H_{C_k}, T_{C_k}) to all clients c ∈ C_k
        end
    end

    /* run on client c */
    Function ClientPhase1()
        x, y ← set current data and label
        f_H ← H_c(x)                                       // head output
        return f_H

    /* run on client c */
    Function ClientPhase2(f_B)
        ŷ ← T_c(f_B)                                       // tail output
        ℓ_c ← Loss(y, ŷ)
        ∂ℓ_c/∂T_c ← ∂ℓ_c/∂ŷ · ∂ŷ/∂T_c                      // tail gradients
        ∂ℓ_c/∂f_B ← ∂ℓ_c/∂T_c · ∂T_c/∂f_B
        return ∂ℓ_c/∂f_B

    /* run on client c */
    Function ClientUpdate(∂ℓ_c/∂f_H)
        ∂ℓ_c/∂H_c ← ∂ℓ_c/∂f_H · ∂f_H/∂H_c                  // head gradients
        update H_c, T_c using ∂ℓ_c/∂H_c and ∂ℓ_c/∂T_c with an optimizer, e.g. Adam
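To complement Algorithm 2, below is a simplified single-process PyTorch sketch of one task-specific iteration; network transport is omitted, the detach()/requires_grad_() calls stand in for the client-server communication boundaries, all names are ours, and the body B is assumed to have been frozen beforehand via requires_grad_(False) on its parameters:

    import torch

    def task_specific_step(H, T, B, x, y, loss_fn, opt_ht):
        # ClientPhase1 (client): head features, "sent" to the server
        f_H = H(x)
        f_H_srv = f_H.detach().requires_grad_()
        # Server: frozen body forward pass
        f_B = B(f_H_srv)
        f_B_cli = f_B.detach().requires_grad_()  # "sent" back to the client
        # ClientPhase2 (client): tail loss; backward fills tail grads and f_B_cli.grad
        loss = loss_fn(T(f_B_cli), y)
        loss.backward()
        # Server: backprop through the body to get the gradient w.r.t. head features
        f_B.backward(f_B_cli.grad)
        # ClientUpdate (client): head gradients, then one optimizer step for H and T
        f_H.backward(f_H_srv.grad)
        opt_ht.step()
        opt_ht.zero_grad()
        return loss.item()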
Pseudocode for the task-agnostic learning In the task-agnostic learning, the Transformer body on the server is updated with the heads and tails of the clients fixed. Algorithm 3 shows the pseudocode for the task-agnostic learning of TAViT. Given the subset of clients C_B, built by selecting one client from C_k for each task, a client c ∈ C_B is randomly chosen at every iteration. Compared to the task-specific learning, the task-agnostic learning is implemented similarly but does not need the ClientUpdate step of Algorithm 2. In other words, after the gradient ∂ℓ_c/∂f_B is computed on the client side in ClientPhase2 (lines 14-18) and transmitted to the server (line 6), the server updates the Transformer body by computing the body gradient ∂ℓ_c/∂B (lines 7-8), which is the final step of each iteration.

Algorithm 3 Task-agnostic learning of TAViT: I_a denotes the number of optimization iterations of the task-agnostic learning in one cycle. (H_c, T_c) denote the head and tail of a client c ∈ C_k for the k-th task, and B denotes the Transformer body on the common server. (f_H, f_B) are the output feature maps of the head and the body, and ŷ is the output of the tail. ℓ_c is the task-specific loss of client c.

    /* run on the server */
    Initialize C_B = {c_{n_1}^1, c_{n_2}^2, ..., c_{n_K}^K}, where c_{n_k}^k ∈ C_k
    for i_a in [1, I_a] do
        c ← c_{n_k}^k ∈ C_B                                // random selection of a client with task k
        f_H ← ClientPhase1()
        f_B ← B(f_H)                                       // body output
        ∂ℓ_c/∂f_B ← ClientPhase2(f_B)
        ∂ℓ_c/∂B ← ∂ℓ_c/∂f_B · ∂f_B/∂B                      // body gradients
        update B using ∂ℓ_c/∂B with an optimizer, e.g. Adam
    end

    /* run on client c (with fixed H_c, T_c) */
    Function ClientPhase1()
        x, y ← set current data and label
        f_H ← H_c(x)                                       // head output
        return f_H

    /* run on client c (with fixed H_c, T_c) */
    Function ClientPhase2(f_B)
        ŷ ← T_c(f_B)                                       // tail output
        ℓ_c ← Loss(y, ŷ)
        ∂ℓ_c/∂f_B ← ∂ℓ_c/∂ŷ · ∂ŷ/∂f_B                      // backpropagation through tail
        return ∂ℓ_c/∂f_B
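A matching PyTorch sketch of one task-agnostic iteration is given below; the client object c (with head, tail, loss_fn, and next_batch) is a hypothetical container of ours, and transport is again elided:

    import random
    import torch

    def task_agnostic_step(clients, B, opt_B):
        c = random.choice(clients)                 # one client, i.e. one sampled task
        x, y = c.next_batch()
        with torch.no_grad():
            f_H = c.head(x)                        # fixed head: no gradient needed
        f_B = B(f_H)                               # body output on the server
        f_B_cli = f_B.detach().requires_grad_()    # "sent" to the client
        loss = c.loss_fn(c.tail(f_B_cli), y)       # fixed tail, kept differentiable
        loss.backward(inputs=[f_B_cli])            # client returns only dL/df_B
        f_B.backward(f_B_cli.grad)                 # server: gradients for B
        opt_B.step()
        opt_B.zero_grad()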
B DETAILS OF DATASETS AND IMPLEMENTATION

B.1 LICENSE/SOURCE FOR EACH DATASET

In our experiments, we use public datasets for the image deblocking, denoising, deraining, and deblurring tasks. Here, we describe the specific information for each dataset, such as its license and source link.

PASCAL VOC 2012 The PASCAL VOC dataset (Everingham et al., 2010) is publicly available and includes images obtained from the "flickr" website under SmugMug or its third-party licensors. The data are protected by United States and international intellectual property laws. The data source is the URL: http://host.robots.ox.ac.uk/pascal/VOC/.

BSDS500 and CBSD68 The Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) (Arbeláez et al., 2011) is an extended version of BSDS300 (Martin et al., 2001b), a public dataset originally provided for image segmentation and boundary detection by the Berkeley Computer Vision Group. This dataset is widely used for measuring image restoration performance. The color BSD68 dataset (CBSD68) is extracted from BSDS500. BSDS500 can be downloaded at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html.

Synthetic rainy images The synthetic rainy dataset for training is collected from Rain14000, synthesized by Fu et al. (2017b), Rain1800 by Yang et al. (2017), Rain800 by Zhang et al. (2019a), and Rain12 by Li et al. (2016). We test our method on the synthetic rainy datasets Rain100H and Rain100L, both by Yang et al. (2017). All of these datasets are publicly available and can be downloaded at the following links:
- Rain14000: https://xueyangfu.github.io/projects/cvpr2017.html
- Rain1800: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html
- Rain800: https://github.com/hezhangsprinter/ID-CGAN
- Rain12: https://yu-li.github.io/
- Rain100L & Rain100H: https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html

GoPro The GoPro dataset (Nah et al., 2017) provides training and test sets for deblurring. The data are available at https://seungjunnah.github.io/Datasets/gopro.html.

B.2 DATA PROCESSING

All datasets used in our experiments provide natural images with three RGB channels and pixel values in the range [0, 255]. On top of these datasets, we performed the following data processing according to the image processing task. For the image deblocking task, we quantized the images following JPEG compression. We first transformed each RGB image into the YUV color space using the following equations:

    Y = 0.257R + 0.504G + 0.098B + 16    (7)
    U = -0.148R - 0.291G + 0.439B + 128    (8)
    V = 0.439R + 0.368G - 0.071B + 128    (9)

Then, we split the image into non-overlapping 8×8 blocks and applied the Discrete Cosine Transform (DCT) to each block. According to the quantization quality level, we divided each element of the DCT coefficients by a predefined quantization matrix. After that, we applied the inverse DCT, aggregated all blocks into an image, and transformed the image back from YUV to RGB color space:

    R = 1.164(Y - 16) + 1.596(V - 128)    (10)
    G = 1.164(Y - 16) - 0.392(U - 128) - 0.813(V - 128)    (11)
    B = 1.164(Y - 16) + 2.017(U - 128)    (12)
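A sketch of this JPEG-style degradation for one YUV channel is shown below, assuming SciPy; the quantization table Q is a placeholder for the predefined matrices, and the round/rescale step and the 128 level shift follow standard JPEG practice rather than being stated in the text:

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):
        return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(b):
        return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

    def jpeg_degrade_channel(ch, Q):
        """ch: HxW float array (one YUV channel), H and W divisible by 8;
        Q: an 8x8 quantization matrix for the chosen quality level."""
        out = np.empty_like(ch)
        for i in range(0, ch.shape[0], 8):
            for j in range(0, ch.shape[1], 8):
                coeffs = dct2(ch[i:i+8, j:j+8] - 128.0)   # level shift, block DCT
                coeffs = np.round(coeffs / Q) * Q          # quantize / dequantize
                out[i:i+8, j:j+8] = idct2(coeffs) + 128.0
        return out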
For the denoising task, we added Gaussian noise to the clean images. Specifically, we applied random Gaussian noise with level (μ, σ) = (0, 30) to the images and then clipped the values to [0, 255]. For the other tasks, the Rain# and GoPro datasets provide synthetic rainy images and blurry images, respectively; since we used these datasets for the deraining and deblurring tasks, we did not perform any preprocessing such as synthesizing rain artifacts or blur effects. After the above data processing for all tasks, we randomly cropped the images to patches of size 64 × 64 × 3. We also applied data augmentation using random flipping and rotation by 90 degrees. Then, we normalized the images from the pixel-value range [0, 255] to [-1, 1], which are the final inputs to the model.
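A compact sketch of the denoising-pair preparation just described (additive Gaussian noise with σ = 30, random 64×64 cropping, and scaling to [-1, 1]), assuming NumPy with illustrative names:

    import numpy as np

    def make_denoising_pair(img, sigma=30, patch=64, rng=None):
        """img: HxWx3 uint8 array; returns (noisy, clean) patches in [-1, 1]."""
        rng = np.random.default_rng() if rng is None else rng
        i = rng.integers(0, img.shape[0] - patch + 1)
        j = rng.integers(0, img.shape[1] - patch + 1)
        clean = img[i:i+patch, j:j+patch].astype(np.float32)
        noisy = np.clip(clean + rng.normal(0, sigma, clean.shape), 0, 255)
        return noisy / 127.5 - 1.0, clean / 127.5 - 1.0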
B.3 NETWORK ARCHITECTURES

For the task-specific head and tail of each task, we use the network architecture of DDPM (Ho et al., 2020), which is composed of residual blocks and attention modules. We set the number of 2× downsampling/upsampling stages to 3, with channel sizes of 128, 256, and 512, respectively. Accordingly, given an input image x ∈ R^{64×64×3}, the head provides a feature map f_H ∈ R^{16×16×512} that passes through the body, and the tail generates an output of the same size as the input. For the Transformer body, we use the encoder part of the vanilla Transformer (Vaswani et al., 2017). As described in the main paper, the Transformer body takes a sequence of patches f, obtained by reshaping the feature map f_H, as word embeddings. In the experiments, the length of the input sequence is 256, by setting the patch size to 1, and the sequence dimension is 512. Once the input sequence is added to learnable positional encodings, the encoded features h pass through L encoder layers (L = 8 in our experiments). Table 3 shows the structure of each encoder layer of the body.

Model sizes Table 4 shows the sizes of the task-specific head and tail and of the Transformer body. Comparing the number of parameters and the network sizes, we observe that the client-side networks, composed of the head and tail, are larger than the task-agnostic Transformer body. Considering the experimental results in the main paper, this suggests that the body estimates task-agnostic self-attended features that provide the synergy effect between the task-specific and task-agnostic learning, even though the body is smaller than the head and tail combined.

C EXPERIMENTAL RESULTS

C.1 TAVIT ON MULTIPLE IMAGE PROCESSING TASKS

Evaluation results of TAViT Table 5 reports the quantitative evaluation results of TAViT on multiple image processing tasks, which are visualized as score graphs over the cycles in the main paper. Figure 6 shows the qualitative results of TAViT. These show that the performance of each task improves over the cycles of alternating task-specific and task-agnostic learning.

Table 5: Quantitative results (PSNR/SSIM) of TAViT according to the cycles, which are visualized with graphs in the main paper. The best results are highlighted in bold.

Cycle | Deblocking BSDS500 (Q10) | Deblocking BSDS500 (Q50) | Denoising CBSD68 (σ=30) | Deraining Rain100H | Deraining Rain100L | Deblurring GoPro
0.5 | 27.53 / 0.781 | 32.92 / 0.921 | 30.57 / 0.868 | 28.24 / 0.860 | 33.17 / 0.939 | 28.94 / 0.871
1.0 | 27.57 / 0.782 | 33.01 / 0.922 | 30.62 / 0.869 | 28.75 / 0.862 | 32.69 / 0.936 | 29.09 / 0.873
1.5 | 27.61 / 0.784 | 33.05 / 0.923 | 30.57 / 0.870 | 28.57 / 0.869 | 33.58 / 0.945 | 29.63 / 0.885
2.0 | 27.65 / 0.785 | 33.14 / 0.924 | 30.66 / 0.870 | 28.79 / 0.867 | 33.50 / 0.944 | 29.72 / 0.887
2.5 | 27.64 / 0.785 | 33.14 / 0.924 | 30.62 / 0.870 | 29.25 / 0.875 | 34.30 / 0.949 | 29.96 / 0.893
3.0 | 27.69 / 0.786 | 33.21 / 0.924 | 30.69 / 0.871 | 29.35 / 0.875 | 33.88 / 0.947 | 30.06 / 0.894

Figure 6: Qualitative results of TAViT according to the cycles. From left to right: deblocking results on images with quantization quality 10 and 50, denoising results, deraining results on Rain100H and Rain100L, and deblurring results. The yellow value is the PSNR, and each inset box is a magnified view of the red rectangle.

Qualitative comparisons Besides the results presented in the main paper, we show more visual comparisons of TAViT to the existing methods. Figures 7, 8, 9, and 10 display the deblocking, denoising, deraining, and deblurring results, respectively. All of these results verify that TAViT, as a distributed learning framework for multiple image processing tasks, outperforms the comparison methods.

C.2 ABLATION STUDY OF TAVIT

Study on the amount of data per task in the task-agnostic learning In the main paper, we implemented our method using 1/4 of the dataset for each task in the task-agnostic learning. To verify that this amount of data is sufficient, we performed an ablation study using a larger amount, a 1/2 ratio, for each of the deblocking, denoising, deraining, and deblurring tasks. Table 6 and Figure 11 show the quantitative results of TAViT on the multiple tasks using 1/2 of the data in the task-agnostic learning. Similar to the results with the 1/4 ratio, the PSNR and SSIM scores tend to increase as the cycles continue. Comparing the best results of the 1/4 and 1/2 ratios, we observe that the performance on each task using even 1/4 of the data is comparable to or better than that using 1/2. This suggests that 1/4 of the data per task is sufficient to train the Transformer body in the task-agnostic learning and obtain high performance.

Study on the weight aggregation period In the main paper, we conducted the TAViT experiment by applying FL to the deblocking task, which has two clients with their own data. In FL, the network weights of the clients are averaged on the server at every weight aggregation period, which is given as a hyperparameter. Since this period can influence learning performance through the communication between the clients and the common server, we performed an ablation study on the weight aggregation period for training the client-side networks. As reported in Table 7, for the deblocking task, we trained the model with aggregation periods of 20, 50, and 100 epochs. Weight aggregation every 50 epochs provides the best performance, with PSNR/SSIM of 27.53dB/0.781 and 32.92dB/0.921 for quality levels 10 and 50, respectively. This verifies that the weight aggregation period of 50 epochs used in the main paper is appropriate for training and evaluating the proposed TAViT in our experiments.
D DISCUSSION

D.1 SKIP-CONNECTION OF HEAD AND TAIL FOR PRIVACY PRESERVATION

When the task-specific heads and tails are configured with skip-connections, our model can avoid privacy attacks to some degree while maintaining the encoding information the tail needs to generate outputs. This is because the skip-connected features are isolated on each client and are not transported to the server. Accordingly, the features transported between the clients and the server may contain far less information about the original data. Figure 12 shows examples of the outputs with and without skip-connections. The network output without skip-connections barely retains properties of the original data, which indicates that one may not be able to reconstruct the original data using the transmitted hidden features of the proposed method.

D.2 EFFECT OF THE TASK-AGNOSTIC TRANSFORMER BODY

As described in the main paper, the reason for building our model from CNN-based heads/tails and a Transformer-based body is to exploit the advantages of each network. In particular, the Transformer learns global attention over the input sequence through self-attention modules and has recently been studied extensively for various computer vision tasks. One of its most unique advantages is converting "unattended" input feature vectors into "attended" output feature vectors by learning global attention and non-local interactions between the input features. Accordingly, the task-specific head/tail need only learn task-specific local features, whereas the global features can be learned through the Transformer body. This disentangled representation of local and non-local features has been pursued throughout the development of deep networks. Thus, the proposed Transformer-based approach is considered one of the most advanced architectures for achieving this goal, as it synergistically improves overall performance and at the same time enables the privacy-preserving split-learning architecture.

To show that this design is appropriate for multi-task distributed learning, we additionally conducted an experiment replacing the Transformer body with a CNN model. Specifically, we configured the CNN body with CBR blocks, where C is a convolutional layer with a constant channel size of 512, B is a batch normalization layer, and R is a ReLU activation layer. For a fair comparison, we used 7 CBR blocks so as to have almost the same number of learnable parameters as the Transformer body (16,522,240 for the CNN body vs. 16,953,344 for the Transformer body). Using this CNN body, we ran the proposed task-specific and task-agnostic learning for one cycle on the multiple image processing tasks, as in the main paper. As a result, Table 8 shows that our model with the Transformer body achieves higher performance in both the task-specific and the task-agnostic learning. This indicates that the Transformer can serve as a general task-agnostic body for multi-task learning.
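As we read the description, the CNN baseline body can be sketched as follows; the kernel size is our assumption, since the paper does not state it:

    import torch.nn as nn

    def cbr_body(channels=512, n_blocks=7):
        # 7 Conv-BatchNorm-ReLU blocks at a constant channel width of 512
        layers = []
        for _ in range(n_blocks):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        return nn.Sequential(*layers)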
D.3 SAMPLING STRATEGY OF CLIENTS

When there are multiple clients for one task in the task-specific learning, the task-specific networks of the clients are aggregated through the sampling strategy of FedAvg. On the other hand, in the task-agnostic learning of the proposed TAViT, one client is sampled at each iteration. Since the networks of clients for the same task are aggregated before the task-agnostic learning, we can readily sample one client for each task. Choosing one client for the subset in Eq. (4) can then be viewed as sampling one task, which naturally reduces the communication cost. In fact, the performance of TAViT is not affected by the number of sampled clients in the task-agnostic learning, since the task-agnostic body is updated for a sufficient number of iterations. To demonstrate this, we performed the task-agnostic learning for the four tasks in our experiments while varying the sampling strategy. Table 9 shows the results after training our model for one cycle according to the number of sampled clients in the task-agnostic learning. Sampling one client achieves comparable or higher performance on all tasks than sampling more than one client. This supports that our sampling strategy is an efficient way to train the Transformer body at lower computational cost, even when the number of clients increases.

D.4 COMMUNICATION COST BETWEEN CLIENTS AND SERVER

In the proposed TAViT, features and gradients of the networks are transported between the clients and the server, so one may wonder how much additional communication cost this incurs. To compute the communication cost of our method, we assume that the cost is proportional to the number of transported elements. Also, since the features and gradients sent from clients to the server have the same size as those sent from the server to clients, we consider only one direction, from clients to the server. We then compute the maximum cost per communication to update our model in the task-specific and task-agnostic learning, and compare it to the cost of FL (McMahan et al., 2017a). Specifically, when there are N_k clients for the k-th task, let P_H, P_B, and P_T be the number of parameters of the head, body, and tail, respectively. In the case of FL, which aggregates the whole model composed of head, body, and tail, the cost per communication is

    \mathrm{Cost}_{FL} = N_k (P_H + P_B + P_T).    (13)

On the other hand, our model does not require the transport of learnable parameters except at the aggregation steps of the task-specific learning. Thus, the communication cost is

    \mathrm{Cost}_{TAViT} = \begin{cases} N_k (P_H + P_T), & \text{at an aggregation step (task-specific learning)} \\ N_k (F + G), & \text{at a non-aggregation step (task-specific learning)} \\ F + G, & \text{otherwise (task-agnostic learning)}, \end{cases}    (14)

where F and G are the numbers of elements of the transported features and gradients, respectively. From (14), we see that the communication cost at a network aggregation step in the task-specific learning of the proposed method is smaller than that of FL, which must transport the parameters of the whole model including head, body, and tail. In particular, instead of aggregating the parameters of the Transformer body, TAViT transports features and gradients that are much smaller than the body parameters, which significantly reduces the cost per communication. For example, the proposed model for the deblocking task contains P_H + P_B + P_T = 44,774,792 parameters, whose memory size is about 207.5MB. Suppose 10 clients participate in FL to train this model. Then 447.7M elements are transported from the clients to the server, and the server network has to handle more than a 2GB load per communication. In contrast, our model transports P_H + P_T = 27,952,520 parameters, whose memory size is approximately 142.5MB; even with 10 clients, 279.5M elements are transported, and the server handles about 60% of the FL load. In addition, since the number of feature and gradient elements is F = G = 20 × 16 × 16 × 512 = 2,621,440, which is 10MB of memory, the number of transported elements per communication for 10 clients is 52.4M, and the server bears only about a 200MB load per communication.
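A small sanity check of these element counts, using the parameter numbers reported above (10 clients, deblocking model):

    # element counts per communication, Eqs. (13)-(14)
    P_H_plus_T = 27_952_520           # head + tail parameters
    P_full     = 44_774_792           # head + body + tail parameters
    F = G = 20 * 16 * 16 * 512        # feature / gradient elements per batch

    n_clients = 10
    print(n_clients * P_full)         # FL: 447,747,920 elements per aggregation
    print(n_clients * P_H_plus_T)     # TAViT, aggregation step: 279,525,200 (~60% of FL)
    print(n_clients * (F + G))        # TAViT, non-aggregation step: 52,428,800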
On the other hand, in the task-agnostic learning, the server updates the body with one sampled client and without any weight aggregation. Accordingly, only the features and gradients are transported from the client to the server. In particular, for the communication from the server to the client, the server does not need to transport gradients but only transmits the features. Thus, the cost per communication in the task-agnostic learning phase is significantly reduced. Therefore, up to a certain epoch size, our model is more communication-bandwidth efficient than classical FL, and the advantage increases further if a bigger Transformer body is used for a better representation of global attention.

Scalability Suppose there are K tasks, and let the total number of clients connected to a server be N_all. For a simple analysis of scalability, we assume that each communication between clients and the server takes constant time. Scalability is assessed in terms of the time complexity of one communication round between the clients and the server to update the models. For our task-specific learning, one communication round has time complexity O(N_all) if we update the heads and tails of all clients. This means the communication cost grows with the number of clients, which can limit the scalability of the proposed method. However, if we apply the client sampling strategy of FedAvg, we can control the number of communications, and one round will have time complexity Ω(K). This sampling strategy can be adopted by our model without significant modification. For the task-agnostic learning phase, one communication round has time complexity O(K), since the network parameters of clients for the same task are aggregated before the task-agnostic optimization. Moreover, with the proposed strategy of sampling one task, one round has time complexity Ω(1), which is studied in Appendix D.3.

D.5 APPLICATION TO HIGH-LEVEL VISION TASKS AND MEDICAL DATA

In the main paper, the proposed TAViT was demonstrated on multiple low-level computer vision tasks. However, the TAViT framework can also be used for a wide range of high-level computer vision tasks, and even with different data domains such as medical images. To demonstrate this, we additionally conduct experiments on an inpainting task for natural images and a denoising task for X-ray CT images. Image inpainting is a higher-level computer vision task that requires more semantic information, and X-ray CT denoising requires domain-specific knowledge about the data. In particular, to show that our task-agnostic Transformer body has a positive effect on the training of new task-specific networks, we performed the task-specific learning only for the client-side heads and tails, subscribing to the pre-trained Transformer body that was trained on the four natural image processing tasks, without additional fine-tuning. The training details and results are as follows.

Dataset For the image inpainting task, we used the PASCAL VOC 2012 dataset, which contains 10,582 natural images. Information about the license and source of this dataset can be found in Appendix B.
For preprocessing, we scaled the images from [0, 255] to [-1, 1] and randomly cropped them into 128 × 128 patches. We then multiplied each image with a zero-box mask whose width and height are drawn randomly from 48 to 64, following Yu et al. (2018). For the X-ray CT denoising task, we used the 2016 AAPM Low-dose CT Grand Challenge dataset (McCollough et al., 2020), which provides noisy CT images at quarter dose and clean CT images at routine dose of X-ray. Since the X-ray CT data are measured in Hounsfield units, we divided the intensity by 4,000 and randomly cropped the images into 64 × 64 patches.

Implementation details For the image inpainting task, we employed the network architecture of Yu et al. (2018) and decomposed it into two parts for the task-specific head and tail. We performed the task-specific learning by minimizing an adversarial generative loss for 400 epochs using the Adam optimizer with a learning rate of 1 × 10⁻⁴. For the X-ray CT denoising task, we used the same head and tail architecture as implemented in this paper. We trained the task-specific networks with the fixed task-agnostic body for 400 epochs using the Adam optimization algorithm with a learning rate of 5 × 10⁻³.

Results To evaluate the performance on image inpainting and medical image denoising, we compared our method to a CNN model that has the same head and tail architectures as ours but no Transformer body. The quantitative evaluation results are shown in Table 10, and the visual comparisons are shown in Figure 13. The inpainting performance improves when training the client-side networks with our pre-trained Transformer body, even though the body was pre-trained on low-level computer vision tasks. This implies that the proposed method can be extended to various high-level tasks. In addition, on the medical image denoising task, our model achieves higher performance than the comparative CNN model and produces clean images, although the Transformer body was trained on the natural image domain. These results confirm that our task-agnostic Transformer body has the capability to bridge the domain gap across different data sources. They also suggest that clients do not need to train the server-side body from scratch when they subscribe to the body for other tasks.
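Finally, a sketch of the zero-box masking used to build the inpainting inputs described in the preprocessing above; the uniformly random box placement is our assumption, following Yu et al. (2018):

    import numpy as np

    def zero_box_mask(img, lo=48, hi=64, rng=None):
        """img: 128x128x3 array in [-1, 1]; returns the masked image and mask."""
        rng = np.random.default_rng() if rng is None else rng
        h, w = rng.integers(lo, hi + 1, size=2)   # box height and width in [48, 64]
        top = rng.integers(0, img.shape[0] - h + 1)
        left = rng.integers(0, img.shape[1] - w + 1)
        mask = np.ones(img.shape[:2], dtype=np.float32)
        mask[top:top+h, left:left+w] = 0.0
        return img * mask[..., None], mask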
1. What is the focus and contribution of the paper on distributed learning for image processing?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle multiple tasks while preserving privacy?
3. What are the weaknesses of the paper, especially regarding the lack of experimental or theoretical validation of the privacy-preserving property?
4. Why did the authors choose to focus on specific image restoration tasks (deblocking, denoising, deraining, and deblurring) in the client-side CNN-based task-specific heads and tails, instead of including other tasks like super-resolution and inpainting?
Summary Of The Paper
This paper presents a new distributed learning framework exploiting the vision transformer for various image processing applications. It gives impressive quantitative and qualitative results on multiple image restoration tasks while preserving privacy. Specifically, it employs a task-agnostic vision transformer to learn a universal representation at the server, and several CNN-based task-specific heads and tails to handle different image restoration tasks at the client side. It also gives a training strategy to learn this model.
Review
Strengths
- It gives a new and practical distributed learning framework for image restoration tasks. It is capable of handling multiple tasks while maintaining privacy.
- It gives state-of-the-art or competitive results on the evaluated restoration tasks.
- The paper is easy to follow.
Weaknesses
- The emphasized privacy-preserving property of the given framework is not experimentally or theoretically validated.
- There is no explanation of why only deblocking, denoising, deraining, and deblurring are chosen for the clients. What about super-resolution and inpainting?
ICLR
Title
Bag of Tricks for Unsupervised Text-to-Speech
Abstract
Unsupervised text-to-speech (TTS) aims to train TTS models for a specific language without any paired speech-text training data in that language. Existing methods either use speech and the corresponding pseudo text generated by an unsupervised automatic speech recognition (ASR) model as training data, or employ the back-translation technique. Though effective, they suffer from low robustness to low-quality data and heavy dependence on the lexicon of a language, which is sometimes unavailable, leading to difficulty in convergence, especially in low-resource language scenarios. In this work, we introduce a bag of tricks to enable effective unsupervised TTS. Specifically, 1) we carefully design a voice conversion model to normalize the variable and noisy information in low-quality speech data while preserving the pronunciation information; 2) we employ a non-autoregressive TTS model to overcome the robustness issue; and 3) we explore several tricks applied in back-translation, including curriculum learning, length augmentation, and an auxiliary supervised loss, to stabilize back-translation and improve its effectiveness. Through experiments, it has been demonstrated that our method achieves better intelligibility and audio quality than all previous methods, and that these tricks are essential to the performance gain.
1 INTRODUCTION
Text to speech (TTS), or speech synthesis, has been a hot research topic (Wang et al., 2017; Shen et al., 2018; Ming et al., 2016; Arik et al., 2017; Ping et al., 2018; Ren et al., 2019a; Li et al., 2018; Ren et al., 2021a; Liu et al., 2021; Ren et al., 2021b) and has broad industrial applications as well. However, TTS has so far been developed predominantly for majority languages like English, Mandarin, or German, and seldom for minority languages and dialects (low-resource languages), as supervised TTS requires hours of single-speaker, high-quality data to retain good performance, but collecting and labeling such data for low-resource languages is very expensive and requires a substantial amount of manpower. Recently, some works have exploited unsupervised (Ni et al., 2022; Liu et al., 2022b) or semi-supervised learning (Tjandra et al., 2017; Ren et al., 2019b; Liu et al., 2020; Xu et al., 2020) to enable speech synthesis for low-resource languages, some of which are summarized in Table 1. Semi-supervised methods rely on a small amount of high-quality paired data in the target language to initialize the model parameters and employ back-translation to leverage the unpaired data. But high-quality paired data in minority languages are usually collected via recording in professional studios or transcription by native speakers, and hence are very costly and sometimes even unaffordable to attain. In contrast, unsupervised methods train an unsupervised automatic speech recognition (ASR) model (Baevski et al., 2021; Liu et al., 2022a) to generate pseudo labels for the unpaired speech data, and then use the pseudo-label and speech pairs to train the TTS model. However, their performance tends to be bounded by the performance of the unsupervised ASR model, which is extremely difficult and unstable to train for some low-resource languages, especially those without a lexicon or grapheme-to-phoneme (G2P) tools (Baevski et al., 2021; Liu et al., 2022a).1
1 Baevski et al. (2021) claimed their method "requires phonemization of the text for the language of interest", and Liu et al. (2022a) claimed "when switching to an entirely letter-based system without a lexicon, the unit error rate increases substantially".
Besides, the unpaired speech samples used in existing unsupervised methods are clean and ready for general TTS model training, such as CSS10 (Park & Mulc, 2019), LibriTTS (Zen et al., 2019), and LJSpeech (Ito, 2017). However, in real low-resource language scenarios, there is no guarantee that enough clean data can be obtained.
In this work, we aim to train an unsupervised TTS model for a low-resource language (the target language) using only unpaired speech and text data, rather than any paired speech-text data, in that language, together with paired data in other rich-resource languages for initialization. Such training data are easily accessible. For example, the unpaired speech and text in the target language can be crawled from video or news websites in the countries using that language, and the paired data in rich-resource languages can be obtained from existing ASR and TTS datasets. Besides, the crawled speech data come from different speakers. Under such a task setting, we need to address the following challenges to achieve our goal. 1) Low-quality multi-speaker data. The speech data used for unsupervised training in our problem are often multi-speaker and low-quality, with much variable and noisy information such as timbre and background noise, hindering model convergence and meaningful speech-text alignment. This significantly increases the difficulty of TTS model training. 2) Back-translation stability. Previous semi-supervised TTS methods (Xu et al., 2020; Ren et al., 2019b) leverage the unpaired data with back-translation, but achieve only limited performance and sometimes fail to converge, especially in unsupervised settings. 3) Robustness. Previous semi-supervised/unsupervised TTS methods (Xu et al., 2020; Ren et al., 2019b; Ni et al., 2022; Liu et al., 2022b) use an autoregressive architecture (Li et al., 2018; Shen et al., 2018), which suffers from word missing and repeating issues, especially when the supervision signal is very weak. 4) Lack of lexicon. For low-resource languages, it is usually difficult to obtain existing lexicons or G2P tools.
We propose several practical tricks to address these issues, enable unsupervised TTS without any paired data in the target language, and bridge the performance gap between unsupervised and supervised TTS. Specifically, 1) we normalize the variable and noisy information in the low-quality training data: we propose a cross-lingual voice conversion model with a flow-based enhanced prior, which converts the timbre of all sentences in different languages to one speaker's voice while preserving the pronunciation information. 2) We explore tricks including curriculum learning, length augmentation, and an auxiliary supervised loss to improve the effectiveness of back-translation. 3) To strengthen model robustness, we employ a non-autoregressive (NAR) TTS model and use the alignment extracted from the ASR model2 in the back-translation process to guide the NAR TTS model training. By applying this bag of tricks, we can successfully train an effective TTS model with noisy, multi-speaker data and without any lexicons. Through experiments, it has been verified that our method achieves both high-quality and high-intelligibility TTS, in terms of MOS, and of word error rate (WER) and character error rate (CER) evaluated by an external ASR model, respectively.
We compare our method to existing unsupervised TTS baselines (Ren et al., 2019b; Xu et al., 2020; Ni et al., 2022; Liu et al., 2022b) and find it significantly outperforms them in both audio quality and intelligibility under the same experimental settings. We also conduct analyses of the proposed tricks, which demonstrate their importance and necessity for achieving state-of-the-art unsupervised TTS. The samples generated by our models can be found at https://unsupertts-tricks.github.io. 2 RELATED WORKS 2.1 SUPERVISED SPEECH SYNTHESIS In the past few years, with the development of deep learning, neural network-based TTS has thrived (Wang et al., 2017; Tachibana et al., 2018; Li et al., 2019; Ren et al., 2019a; 2021a; Łańcucki, 2020), where the text-to-speech mapping is modeled by deep neural networks using encoder-decoder architectures. Early methods by Wang et al. (2017) and Ping et al. (2017) generate the mel-spectrogram autoregressively. However, they suffer from slow inference and low robustness, e.g., word skipping and repeating. To tackle these issues, later works explore non-autoregressive (NAR) speech generation. FastSpeech (Ren et al., 2019a) is the first non-autoregressive TTS architecture, which adopts a duration predictor and a length regulator to bridge the length gap between the speech and text sequences. After that, many methods have been proposed, such as FastSpeech 2 (Ren et al., 2021a), Glow-TTS (Kim et al., 2020) and EATS (Donahue et al., 2021), achieving not only better audio quality but also fast inference and good robustness. Recently, some NAR models leveraging a variational auto-encoder (VAE) to model the variation information in the latent space have been developed, like VITS (Kim et al., 2021b) and PortaSpeech (Ren et al., 2021b), and they have quickly become popular. In this work, we also employ a non-autoregressive architecture and a VAE structure to achieve robustness against low-quality data. 2.2 LOW-RESOURCE SPEECH SYNTHESIS Supervised speech synthesis requires high-quality paired speech and text data for training, which are costly to attain, especially for low-resource languages. To broaden the application scope of TTS systems, several low-resource TTS models have been developed, which need only a few or even no high-quality paired samples. Instead, they use unpaired text and audio data, which are much more straightforward and cheaper to obtain, to train TTS models in a semi-supervised or unsupervised way. Semi-supervised TTS. Ren et al. (2019b) adopt back-translation and pre-training to leverage unpaired data, generating pseudo text/speech samples with ASR/TTS models and training on the augmented data iteratively. However, as a proof of concept, Ren et al. (2019b) only verify the feasibility of semi-supervised TTS on a single-speaker dataset. Later, LRSpeech (Xu et al., 2020) supports multi-speaker and noisy datasets and is closer to real application. However, these semi-supervised methods still require a few pairs of high-quality speech and text data, which are expensive for low-resource languages since they often need to be recorded in professional studios. Unsupervised speech synthesis. Unsupervised speech synthesis does not use any paired training data from the target speaker and language, and has attracted growing attention recently. As the earliest unsupervised TTS works, Liu et al. (2022b) and Ni et al.
(2022) both use an unsupervised ASR model to transcribe the TTS speech data into pseudo text and train on the augmented data to build an unsupervised TTS system. However, they rely heavily on the unsupervised ASR technique, whose training procedure is very unstable and depends heavily on lexicons. Therefore, these methods are difficult to apply to other low-resource languages. Besides, when switching to a multi-speaker setup, the gap between supervised and these unsupervised TTS methods becomes larger than in the single-speaker setup (Liu et al., 2022b). A recent arXiv paper (Lian et al., 2022) trains a non-parallel voice conversion model on unpaired speech data as the acoustic model, together with a specific module that maps the text sequence to the discrete speech representation sequence; however, this module has to be trained on an external paired dataset in the same language as the unpaired dataset. This method is therefore hardly applicable to real low-resource language scenarios due to the difficulty of collecting such a large paired dataset in the target language. 3 PROPOSED BAG OF TRICKS Suppose we have an unpaired speech dataset $S_{low}$ and a text dataset $T_{low}$ in the target low-resource language $L_{low}$, together with a paired speech-text dataset $S_{rich}$ and $T_{rich}$ in another rich-resource language $L_{rich}$ as auxiliary supervised training data. We assume that $L_{low}$ and $L_{rich}$ share some common characters; for example, Indonesian and French both use Latin letters. $S_{low}$ is a multi-speaker speech dataset whose audio quality is extremely low, as it is difficult to obtain enough single-speaker clean data for the low-resource language. As for the auxiliary training data, since there are many public speech recordings available in rich-resource languages, we do not impose any restrictions on the quality of these speech data. Our method aims to train the TTS model in the language $L_{low}$ using the above datasets. Besides, we need another clean speech dataset $S_{ref}$ to provide the target timbre for our voice conversion model; it can be part of $S_{rich}$ or $S_{low}$. In this section, we first describe the overall training pipeline and then introduce the model designs and tricks used in each stage of the pipeline. 3.1 OVERALL TRAINING PIPELINE As shown in Figure 1, the training pipeline of our method consists of 3 stages: voice conversion, supervised warm-up training and unsupervised back-translation training. We put the detailed pseudo-code of our training pipeline in Appendix A and describe each stage in the following paragraphs. Stage 1: Voice conversion. The low-resource speech dataset $S_{low}$ contains many speakers and can be very noisy. We consider the variable and noisy information, e.g., background noise, speaker timbre, accent and some specific prosody, as the text-independent information in speech. Although some variable information is essential for certain TTS tasks like emotional, expressive and personalized TTS, it is an obstacle for unsupervised TTS. The core purpose of unsupervised TTS is to solve the information matching problem between two modalities, i.e., speech and text, which are aligned by the pronunciation (also called content). The variable information in speech may interfere with the cross-modal pronunciation information matching in the unsupervised training stage and make the model struggle to find aligned clues in the text for this variable information.
Therefore, intuitively, if we can reduce the information gap between speech and text, our TTS and ASR models can achieve cross-modal pronunciation information matching faster, and the unsupervised training process can then be stabilized. To this end, we apply cross-lingual voice conversion as the first stage. We train the voice conversion model on the datasets $S_{low}$, $S_{rich}$ and a clean dataset $S_{ref}$, which provides the reference speaker timbre and can be in any language. Then we normalize the variable and noisy speech information of the audios in $S_{low}$ and $S_{rich}$ using the voice conversion model and denote the generated datasets as $S'_{low}$ and $S'_{rich}$. Stage 2: Supervised warm-up training. It is still very difficult to train unsupervised TTS directly from scratch even though we have normalized the variable information in the speech dataset $S_{low}$. To warm up the models for the subsequent unsupervised training, we train a sequence-to-sequence ASR model, which is required in the following back-translation stage, and a non-autoregressive TTS model using the auxiliary paired dataset $S_{rich}$ in a rich-resource language. This stage provides a better initialization for the models since there exist certain commonalities between written and spoken forms across languages (for a low-resource language $L_{low}$, it is usually not difficult to find a rich-resource language $L_{rich}$ that is close to it). Stage 3: Unsupervised back-translation training. Back-translation, originating from neural machine translation, is one of the most effective ways to leverage monolingual data for translation. In unsupervised TTS, back-translation (Sennrich et al., 2016; He et al., 2016; Ren et al., 2019b) leverages the dual nature (He et al., 2016; Qin, 2020) of the TTS and ASR tasks and develops the capabilities of transforming text to speech (TTS) and speech to text (ASR). We transform a speech sequence $s$ into a text sequence $t_{pseudo}$ using the ASR model, and then train the TTS model on the transformed pair $(t_{pseudo}, s)$. Similarly, we also train the ASR model on the transformed pair $(s_{pseudo}, t)$ generated by the TTS model. Back-translation thus has two training directions, and as the training directions alternate, the accuracy of ASR and the performance of TTS are boosted iteratively. We show the performance improvements as back-translation training progresses in Appendix D.1. 3.2 VARIATIONAL VOICE CONVERSION MODEL WITH ENHANCED PRIOR The voice conversion (VC) model in Stage 1 is aimed at normalizing the variable and noisy information in low-quality audios. It is based on self-supervised learning (SSL) audio representations (Polyak et al., 2021; van Niekerk et al., 2022), which have proven very effective in disentangling content and timbre information. As shown in Figure 2a, the overall architecture of our VC model is like an autoencoder. In training, the mel-spectrograms $M_1$ and $M_2$, which are identical at this stage, are fed into several information extraction modules to generate disentangled representations. Specifically, 1) the pre-trained speaker encoder extracts the sentence-level speaker embedding; 2) the pre-trained HuBERT (Hsu et al., 2021) extracts the frame-level SSL discrete representations containing the content (pronunciation) information; and 3) the posterior encoder extracts the residual information.
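To make the decomposition concrete, below is a minimal sketch of the three extraction paths and how their outputs are combined for the decoder. All module names, interfaces, and shapes here are hypothetical stand-ins for illustration, not the actual implementation:

```python
import torch

def vc_forward(mel_timbre, mel_content, speaker_enc, hubert_quantize,
               content_emb, posterior_enc, lang_emb, decoder):
    """One training-style forward pass of the autoencoder VC model (sketch).
    mel_timbre (M1) and mel_content (M2) are identical during training."""
    # 1) sentence-level speaker embedding from the pre-trained encoder
    h_spk = speaker_enc(mel_timbre)                          # [B, C]
    # 2) frame-level discrete content units from pre-trained HuBERT,
    #    embedded through the (bottlenecked) content encoder input layer
    units = hubert_quantize(mel_content)                     # [B, T] int64
    h_ling = content_emb(units)                              # [B, T, C]
    # 3) residual information via the variational posterior encoder
    mu_q, log_sigma_q = posterior_enc(mel_content)           # [B, T, C] each
    z = mu_q + log_sigma_q.exp() * torch.randn_like(mu_q)    # reparameterize
    # sum the representations (plus a language embedding) and decode
    h = h_ling + z + h_spk.unsqueeze(1) + lang_emb.view(1, 1, -1)
    return decoder(h)                                        # converted mel
```

At inference time, per the description above, `mel_timbre` would simply be swapped for a reference utterance from the target speaker.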
After the information decomposition, the speech decoder takes all the representations as input and reconstructs the mel-spectrogram using a mean absolute error (MAE) loss and a multi-length adversarial loss ($\mathcal{L}_{adv}$) following Ye et al. (2022) and Chen et al. (2020). In inference, we replace $M_1$ with a reference speech sample that provides the target speaker timbre. In this way, the generated speech preserves the pronunciation information in $M_2$ while taking the timbre of $M_1$. However, besides the common merits of general VC, i.e., preserving content and converting timbre, our model should also have the properties below to ensure its performance in our pipeline. 1) Our model should be cross-lingual: it should be able to normalize $S_{low}$ and $S_{rich}$, which are in different languages. 2) Our model should generate high-quality results: as the normalized speech is fed to the next two stages as the training target, its quality bounds that of the whole pipeline. 3) Our model should be robust to noisy and low-quality audio. Building on previous SSL-based VC methods, we obtain these properties with several improvements: Multilingual HuBERT. To make the model cross-lingual, we employ a multilingual HuBERT (Lee et al., 2021; Popuri et al., 2022), pre-trained on speech in multiple languages, to extract the SSL discrete representations as the content information. We find it also generalizes well to unseen languages (see Appendix D.2). HuBERT does not need any paired data or speaker information, which is consistent with our task setting. Besides, we add a language ID input to the speech decoder to indicate the language to generate, which can accurately model the pronunciation differences among languages given the same discrete representation and make up for the limited capacity of multilingual representations. Variational encoder with flow-based enhanced prior. To improve model robustness and audio quality, inspired by previous successful work in TTS (Ren et al., 2021b; Kim et al., 2021a), we introduce a variational encoder with a flow-based enhanced prior. In training, this encoder can “store” the residual information, e.g., irregular noises, time-varying timbre and prosody, that cannot be encoded by the other information extraction modules into the posterior distribution $D_q$, and uses a normalizing flow to reshape the prior distribution $D_p$, which needs to be close to $D_q$ in terms of KL-divergence. With normalizing flows, the KL-divergence no longer has a simple closed-form solution, so we estimate it via a Monte-Carlo method as in Ren et al. (2021b) and Kim et al. (2021a). The reason why we need the normalizing flow is that a simple Gaussian prior distribution places strong constraints on the posterior, which pushes the posterior distribution towards the mean and limits diversity, while the distribution shaped by normalizing flows is more flexible and provides the decoder with a stronger prior. Besides, it also provides the sampled random variables with temporal dependency. Information bottleneck. The HuBERT representation is a low-bitrate representation of speech content and does not contain much non-lexical information such as speaker identity and emotion. However, due to its discrete-space bottleneck and training procedure, we still cannot ensure it fully disentangles the timbre information, and the remaining timbre information may degrade the voice conversion quality in the inference stage.
To further erase speaker identity information, we need to choose an appropriate input dimension for the content encoder (i.e., the embedding layer for HuBERT tokens), which can be neither too large nor too small. A large dimension may leak fine-grained identity information through the content encoder, and a small one may lose pronunciation information. We put more details of our VC model in Appendix B.1. 3.3 TTS AND ASR MODELS Previous unsupervised TTS works use an autoregressive (AR) TTS architecture such as Tacotron 2 (Shen et al., 2018) or TransformerTTS (Li et al., 2018), which automatically finds the speech-text alignment. However, such an AR TTS architecture is not robust and is prone to word missing and repeating problems, as stated in Ren et al. (2019a). In this work, as shown in Figure 2b, we adopt a non-autoregressive (NAR) TTS architecture (Ren et al., 2019a; 2021a). We mainly follow PortaSpeech (Ren et al., 2021b), except that we replace the post-net in PortaSpeech with multi-length adversarial training (Ye et al., 2022; Chen et al., 2020) to simplify the training pipeline while keeping the naturalness of the generated mel-spectrogram. Instead of obtaining the ground-truth duration information from the Montreal Forced Aligner (MFA) (McAuliffe et al., 2017) as many non-autoregressive TTS models (Ren et al., 2021a;b; Ye et al., 2022) do, we extract the speech-text alignment from the attention matrix generated by the ASR model, which simplifies the training pipeline in our back-translation stage and removes the dependency on external tools. Specifically, inspired by Glow-TTS (Kim et al., 2020), we extract the speech-text alignment by finding the monotonic path of maximum probability over the attention matrix of our ASR model using Viterbi decoding. To enable the TTS model to generate speech in different languages, we add a language embedding to the decoder, and an extra language ID input is required to specify the language of the target speech. Our ASR model is based on a sequence-to-sequence architecture with an LSTM-based encoder and decoder. To generate more monotonic alignments for TTS training, we employ location-sensitive attention (Shen et al., 2018). Different from the TTS model, our ASR model is universal across languages and does not need any language embedding as input, which lets it generalize to new languages better in our scenario. We put more details and model configurations of the TTS and ASR models in Appendix B.2 and B.3. 3.4 TRICKS IN BACK-TRANSLATION Back-translation is a critical step for unsupervised TTS training to leverage unpaired speech and text data. In this subsection, we describe some back-translation strategies that can significantly improve the effectiveness and efficiency of unsupervised TTS training. Curriculum learning. After warming up the ASR model in Stage 2 using $S_{rich}$ and $T_{rich}$, we can force the ASR model to transcribe the audio in $L_{low}$ into text in $L_{rich}$ by initializing the language embedding of $L_{low}$ with that of $L_{rich}$. Considering that the ASR outputs are taken as the input of TTS, in each round of back-translation we select good transcriptions, whose pronunciation is very similar to the ground truth, for TTS training and discard bad ones. Obviously, we cannot directly calculate the error rate between a transcription and the ground-truth text, since we have no corresponding text for each audio.
Therefore, to filter good recognition results during iterative back-translation, we design a metric called focus rate ($\mathcal{F}$) to evaluate the confidence of ASR results, defined as $\mathcal{F} = \frac{1}{N}\sum_{i=1}^{N} A_{i,P_i}$. Here $N$ denotes the number of mel-spectrogram frames; $A_{i,j}$ is the ASR attention weight at the position of the $i$-th mel-spectrogram frame and the $j$-th text token, satisfying $\sum_j A_{i,j} = 1$; and $P_i$ is the text token index corresponding to the $i$-th mel-spectrogram frame on the monotonic path of maximum probability decoded by the Viterbi algorithm (Forney, 1973). A higher $\mathcal{F}$ means greater probability mass lies on the decoded monotonic path, implying better speech-text monotonic alignment and higher confidence in the transcribed results. In each round of the pseudo text generation process, we use $\mathcal{F}$ to select good ASR results whose $\mathcal{F}$ is greater than a fixed threshold $\mathcal{F}_{thres}$. Besides, we store $\mathcal{F}$ for each pseudo text $t_{pseudo}$ and replace $t_{pseudo}$ with the new result in the next back-translation round only if $\mathcal{F}$ has increased. Length augmentation. At the beginning of training in the low-resource language, short utterances (text and speech) are easier for the TTS and ASR models to fit, so the models are generally better at generating short utterances than long ones. Besides, our curriculum learning strategy approximates the quality of the generated text and filters bad results, causing our model to keep more short utterances than long ones. Consequently, our model becomes biased towards short utterances and may perform very poorly on long sentences. To fix this issue, we introduce a length augmentation strategy. In particular, with some probability $p_{cat}$ we randomly concatenate two utterances $(t^1, t^2)$/$(s^1, s^2)$ and their generated results $(s^1_{pseudo}, s^2_{pseudo})$/$(t^1_{pseudo}, t^2_{pseudo})$, obtaining the pairs $(t^{cat}, s^{cat}_{pseudo})$ and $(s^{cat}, t^{cat}_{pseudo})$ for back-translation training. Length augmentation helps the TTS and ASR models generate long sentences better and become more robust to long text inputs at inference. Auxiliary supervised losses. If we employ only the back-translation loss in Stage 3, the model may fail to find the correct speech-text alignment, leading to unsatisfactory results and unstable training, especially at the beginning. To solve this problem, apart from the back-translation loss in the target low-resource language, we also keep the supervised training losses in the auxiliary rich-resource language, the same as those in Stage 2. We call them “auxiliary supervised losses”. Specifically, during training we intersperse the rich-resource-language supervised training, for both TTS and ASR, into the back-translation steps with some probability $p_{aux}$. (A condensed code sketch of these three tricks follows the dataset description in Section 4.1.) 4 EXPERIMENTS AND RESULTS In this section, we conduct experiments to evaluate the effectiveness of our proposed method for unsupervised TTS. We first describe the experimental settings, then show the results of our method and conduct some analyses. 4.1 EXPERIMENTAL SETUP Datasets. We choose the speech and text data from the CommonVoice dataset (Ardila et al., 2019) for training and English and Indonesian as the target low-resource languages (we choose English as one of the target languages because we can understand English and it is easy to evaluate, although English is not a low-resource language). We use French as the rich-resource language unless otherwise stated. Experimental results using other rich-resource languages are given in Appendix D.2. We split the target language data into two halves, taking unpaired speech data from the first half and text data from the second, so as to guarantee the speech and text data are disjoint.
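The following is the condensed sketch referred to in Section 3.4: a minimal Python rendering of the focus-rate filter, auxiliary supervised sampling, and length augmentation. The data structures and interfaces are hypothetical, chosen only to illustrate the logic:

```python
import random
import numpy as np

def focus_rate(attn, path):
    """F = (1/N) * sum_i A[i, P_i]: mean attention mass on the Viterbi-decoded
    monotonic path. attn: [N_frames, N_tokens]; path: [N_frames] token indices."""
    return float(np.mean(attn[np.arange(len(path)), path]))

def select_pseudo_pairs(candidates, f_thres=0.2, prev_f=None):
    """Curriculum learning (simplified): keep (text, speech) pairs whose focus
    rate exceeds the threshold; replace an earlier round's pair only if F grew."""
    prev_f = prev_f or {}
    kept = {}
    for uid, (text, speech, f) in candidates.items():
        if f > f_thres and f >= prev_f.get(uid, -1.0):
            kept[uid] = (text, speech, f)
    return kept

def sample_tts_example(pseudo, rich, p_aux=0.2, p_cat=0.2):
    """One training example for the TTS direction, with auxiliary supervised
    sampling and length augmentation. `rich` is a list of (text, mel) pairs;
    `pseudo` maps utterance IDs to (text, mel, focus_rate) triples."""
    if random.random() <= p_aux:
        return random.choice(rich)              # supervised (t_rich, s'_rich)
    t, s, _ = random.choice(list(pseudo.values()))
    if random.random() <= p_cat:                # concatenate two utterances
        t2, s2, _ = random.choice(list(pseudo.values()))
        t, s = t + " " + t2, np.concatenate([s, s2], axis=0)
    return t, s
```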
We use LJSpeech (Ito, 2017) as $S_{ref}$ to provide the speaker timbre and suppress the background noise for the voice conversion model. For evaluation, we choose 100 audio/text pairs from LJSpeech for English and 100 audio/text pairs from CommonVoice (Indonesian subset) for Indonesian (we use LJSpeech because it has fewer errors in text-speech pairing, while the data in CommonVoice are very noisy and contain much wrongly labeled text). For the speech data, we convert the raw waveform into mel-spectrograms with 80 ms frame size and 20 ms frame hop, following Hsu et al. (2021). More details are listed in Appendix C.1. Training and evaluation. We train our VC, TTS, and ASR models on 1 NVIDIA A100 GPU with batch size 128. We use the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$, $\varepsilon = 10^{-9}$ and learning rate 2e-4. Training takes nearly 3 days. The output mel-spectrograms are converted to waveforms using a HiFi-GAN (Kong et al., 2020) pre-trained on LJSpeech (Ito, 2017). The focus rate threshold $\mathcal{F}_{thres}$, $N_{steps}$, $p_{cat}$ and $p_{aux}$ in back-translation are set to 0.2, 20k, 0.2 and 0.2, respectively. For evaluation, we mainly use MOS (mean opinion score) for audio quality, and WER (word error rate) and CER (character error rate) for the intelligibility of the voice (French & Steinberg, 1947), to verify whether we can generate a reasonable speech sequence. For the mean opinion score evaluation, we keep the text content consistent among different models so as to exclude other interference factors and examine only audio quality. We randomly choose 20 sentences from the test set, and each audio is listened to by at least 20 testers following Ren et al. (2019a; 2021a), all of whom are native English/Indonesian speakers. For WER and CER, we first transcribe the generated speech using open-source or commercial ASR and then calculate these metrics against the ground-truth text of the test set. We use WeNet (Yao et al., 2021; Zhang et al., 2022) for English for fair comparison with future works, since commercial ASR may change over time; we choose the Azure ASR service (https://azure.microsoft.com/en-us/products/cognitive-services/speech-to-text/) for Indonesian since we cannot find any open-source Indonesian ASR that is accurate enough. In analytical experiments, we also report the CER of our own ASR model, which likewise indicates the performance of our system, since ASR and TTS depend on each other and are boosted iteratively. 4.2 RESULTS AND ANALYSES 4.2.1 PERFORMANCE We compare our method with previous works including Ren et al. (2019b), Xu et al. (2020), Liu et al. (2022b) and Ni et al. (2022). For fair comparison, we make some modifications to all baseline methods, including unifying the training dataset, TTS acoustic model and vocoder (more detailed modifications of each baseline are given in Appendix C.2). The results are shown in Table 2. We also evaluate the outputs of a supervised TTS model trained with paired target-language data for reference; its results can be regarded as the upper bound. From the table, it can be seen that our method achieves the best performance in both speech quality (MOS) and intelligibility (CER and WER) in English and Indonesian. Surprisingly, our method can even approach the performance of the supervised model in Indonesian; a possible reason is that Indonesian is easier to pronounce than English. These observations prove the effectiveness of our proposed tricks for unsupervised TTS.
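As a concrete reference for the intelligibility metrics above: WER and CER are word- and character-level edit distances normalized by the reference length. A minimal example with the open-source jiwer package (shown purely for illustration; not necessarily the toolkit used in our evaluation):

```python
import jiwer

refs = ["the quick brown fox", "printing in the only sense"]
hyps = ["the quick brown fox", "printing in the lonely sense"]

# word error rate: word-level edit distance / number of reference words
print(f"WER: {jiwer.wer(refs, hyps):.3f}")
# character error rate: the same computation at the character level
print(f"CER: {jiwer.cer(refs, hyps):.3f}")
```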
4.2.2 ABLATION STUDY To analyze the effectiveness of each trick and component, we conduct ablation studies on English. In addition to generated speech quality (MOS) and intelligibility (CER and WER), we also analyze the character error rate of our ASR model (CER(ASR)). The results are shown in Table 3. Comparing the other rows with Row 1, which shows our model with all tricks, we have several observations. 1) From Row 2, it can be seen that our model achieves better performance after normalizing the speech variability, including timbre and noise, in our dataset. 2) From Row 3, it can be seen that the NAR TTS architecture improves speech quality and intelligibility by a large margin, as NAR TTS is more robust to noisy speech and reduces bad cases in the generated speech. 3) From Row 4, it can be seen that back-translation is essential to unsupervised TTS, which is consistent with the findings of previous works (Xu et al., 2020; Ren et al., 2019b). 4) From Row 5, we can see that curriculum learning improves training effectiveness, since it filters out bad pseudo transcripts and improves the quality of the training set for back-translation. 5) From Row 6, it can be seen that our length augmentation strategy improves robustness to long text inputs at inference. 6) From Row 7, we find that the auxiliary supervised training loss improves the performance of both ASR and TTS by stabilizing the training. 4.2.3 ANALYSES ON VOICE CONVERSION MODEL Having verified the effectiveness of normalization via the voice conversion model in Section 4.2.2, we conduct more analyses on our proposed voice conversion model, including the effects of different information bottleneck channel sizes and of the flow-based enhanced prior. The results are shown in Table 4. It can be observed that an appropriate size of the bottleneck channels is crucial for the performance of the voice conversion model, with a large bottleneck resulting in timbre information leakage and a small bottleneck leading to pronunciation information loss. Besides, our flow-based enhanced prior improves the quality of the converted speech, since it makes fewer assumptions about the prior distribution, as mentioned in Section 3.2. 5 CONCLUSION In this work, we proposed an unsupervised method for TTS that leverages low-quality, noisy unpaired speech and text data in the target language and paired data in other rich-resource languages. Our method comprises several practical tricks to realize unsupervised text-to-speech, including normalizing the variable and noisy information in speech data, curriculum learning, length augmentation, and auxiliary supervised training. We have also found that the non-autoregressive TTS architecture significantly relieves robustness issues in unsupervised settings. We conducted experiments on the CommonVoice dataset, taking English and Indonesian as the target languages, and found that our method achieves high audio quality in terms of MOS and high intelligibility in terms of WER and CER, demonstrating remarkable effectiveness. Further analyses have verified the importance of each trick in our method. Appendices A TRAINING ALGORITHM The detailed unsupervised training algorithm is shown in Algorithm 1.
Algorithm 1 Unsupervised TTS Training
1: Input: paired dataset in the rich-resource language, $S_{rich}$ and $T_{rich}$; unpaired speech and text data in the low-resource language, $S_{low}$ and $T_{low}$; single-speaker speech dataset $S_{ref}$ containing the reference speaker for voice conversion; pre-trained multilingual HuBERT model $M_h$; pre-trained speaker encoder $M_{spk}$.
2: Initialize: multilingual TTS model $M_{TTS}$ and ASR model $M_{ASR}$; current unsupervised training step $t = 0$; total unsupervised training steps $T_{total}$; number of steps per TTS or ASR stage $N_{steps}$.
3: Train our proposed voice conversion model with $S_{rich}$, $S_{low}$ and $S_{ref}$, using $M_h$ and $M_{spk}$ to extract the HuBERT and speaker representations.
4: Convert the timbre of all speech samples in $S_{rich}$ and $S_{low}$ to that of the speech in $S_{ref}$ and obtain the converted $S'_{rich}$ and $S'_{low}$. {Sec. 3.2}
5: Train $M_{ASR}$ and $M_{TTS}$ using $S'_{rich}$ and $T_{rich}$.
6: repeat
7:   Convert all samples $s$ in $S'_{low}$ to pseudo text $t_{pseudo}$.
8:   Select pseudo training pairs $(t_{pseudo}, s)$ satisfying $\mathcal{F} > \mathcal{F}_{thres}$ and obtain $(T_{pseudo}, S')$. {Curriculum learning in Sec. 3.4}
9:   for $N$ in 0 to $N_{steps}$ do
10:     if Random() ≤ $p_{aux}$ then
11:       Sample $D \leftarrow (t_{rich}, s)$ from $(T_{rich}, S'_{rich})$. {Auxiliary loss in Sec. 3.4}
12:     else
13:       Sample $D \leftarrow (t_{pseudo}, s)$ from $(T_{pseudo}, S')$.
14:       if Random() ≤ $p_{cat}$ then
15:         Sample $D' \leftarrow (t^2_{pseudo}, s^2)$ from $(T_{pseudo}, S')$.
16:         $D \leftarrow (\mathrm{Concat}(t_{pseudo}, t^2_{pseudo}), \mathrm{Concat}(s, s^2))$ {Length augmentation in Sec. 3.4}
17:       end if
18:     end if
19:     Train $M_{TTS}$ using $D$.
20:   end for
21:   for $N$ in 0 to $N_{steps}$ do
22:     Convert all samples $t$ in $T_{low}$ to pseudo speech $s_{pseudo}$ and obtain $(S_{pseudo}, T_{low})$.
23:     if Random() ≤ $p_{aux}$ then
24:       Sample $D \leftarrow (s_{rich}, t)$ from $(S'_{rich}, T_{rich})$. {Auxiliary loss in Sec. 3.4}
25:     else
26:       Sample $D \leftarrow (s_{pseudo}, t)$ from $(S_{pseudo}, T_{low})$.
27:       if Random() ≤ $p_{cat}$ then
28:         Sample $D' \leftarrow (s^2_{pseudo}, t^2)$ from $(S_{pseudo}, T_{low})$.
29:         $D \leftarrow (\mathrm{Concat}(s_{pseudo}, s^2_{pseudo}), \mathrm{Concat}(t, t^2))$ {Length augmentation in Sec. 3.4}
30:       end if
31:     end if
32:     Train $M_{ASR}$ using $D$.
33:   end for
34:   $t \leftarrow t + 1$
35: until $t > T_{total}$
B MODEL DETAILS AND CONFIGURATIONS In this section, we give more details of the models, including the voice conversion (VC), text-to-speech (TTS), and automatic speech recognition (ASR) models, along with the hyper-parameters used in our experiments. B.1 VC MODEL Our proposed VC model takes two mel-spectrograms (timbre reference $M_1$ and content provider $M_2$) as inputs and outputs the converted mel-spectrogram $M_3$. Firstly, $M_1$ is fed into a pre-trained speaker encoder (https://github.com/resemble-ai/Resemblyzer) to extract the speaker embedding $H_{spk}$. Secondly, $M_2$ is fed into a pre-trained multilingual HuBERT (https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/textless_s2st_real_data.md), pre-trained on three languages, to extract the frame-level HuBERT discrete representation $H_{ling}$. Thirdly, $M_2$ is fed into the posterior encoder, which produces a multivariate Gaussian distribution as the posterior of our variational VC model. Instead of directly employing a Gaussian prior, we introduce a small volume-preserving normalizing flow to model the prior distribution. A latent $z$ is sampled from the posterior distribution (in training) or the prior distribution (in inference). Finally, we add $H_{spk}$, $H_{ling}$, $z$ and the language embedding of $M_2$ together (all of them have the same channel size $C = 192$) and feed the resulting hidden states into the speech decoder to generate the target speech. Besides, we introduce a multi-length discriminator to distinguish between the output generated by the model and the ground-truth mel-spectrogram.
The loss terms of the voice conversion model consist of 1) the reconstruction loss of the mel-spectrogram, $\mathcal{L}_{MAE}$: the mean absolute error between the generated and ground-truth mel-spectrograms; 2) the KL-divergence between the posterior and prior distributions: $\mathcal{L}_{KL} = \log q_\phi(z|x) - \log p_{\bar\theta}(z)$, where $z \sim q_\phi(z|x)$; and 3) the adversarial training loss introduced by the multi-length discriminator: $\mathcal{L}_{adv}$. The final weighted total loss is $\mathcal{L}_{total} = \lambda_1 \mathcal{L}_{MAE} + \lambda_2 \mathcal{L}_{KL} + \lambda_3 \mathcal{L}_{adv}$. In our experiments, we set $\lambda_1 = \lambda_2 = \lambda_3 = 1.0$. The detailed structure of each module is introduced in the following subsubsections. B.1.1 MULTILINGUAL HUBERT The multilingual HuBERT (Lee et al., 2021) is trained on the English (En), Spanish (Es), and French (Fr) 100k subsets of the VoxPopuli dataset (Wang et al., 2021). VoxPopuli contains unlabeled speech data for 23 languages, and Lee et al. (2021) use 4.5k hours of unlabeled speech for each of En, Es, and Fr, totaling 13.5k hours. We extract the HuBERT features from the 11th layer of the third-iteration HuBERT model and discretize them using the pre-trained K-means model to obtain the discrete representations $H_{ling}$. B.1.2 POSTERIOR ENCODER The structure of the posterior encoder is similar to the encoder in the variational generator of PortaSpeech (Ren et al., 2021b); it is composed of a 1D convolution with stride 4 followed by ReLU activation (Glorot et al., 2011) and layer normalization (Ba et al., 2016), and a non-causal WaveNet (Van Den Oord et al., 2016), as shown in Figure 3a. The number of encoder layers, WaveNet channel size and kernel size are 8, 192 and 5. The outputs of the posterior encoder are the parameters ($\mu_q$ and $\sigma_q$) of the posterior distribution $N(\mu_q, \sigma_q)$, and the latent $z$, whose size is set to 32, is sampled from $N(\mu_q, \sigma_q)$. B.1.3 VOLUME-PRESERVING (VP) NORMALIZING FLOW Following Kim et al. (2021b) and Ren et al. (2021b), we use a volume-preserving normalizing flow as the prior distribution generator, since it does not need to consider the Jacobian term when calculating the data log-likelihood and is powerful enough for modeling the prior, as shown in Figure 3c. The normalizing flow transforms simple distributions (e.g., a Gaussian) into complex distributions through a series of $K$ invertible mappings, implemented as a stack of WaveNet (van den Oord et al., 2016) residual blocks with dilation 1. We then take the resulting complex distribution as the prior of the speech decoder. When introducing the normalizing flow-based enhanced prior, the optimization objective of the mel-spectrogram generator becomes: $\log p(x) \geq \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \mathrm{KL}(q_\phi(z|x)\,\|\,p_{\bar\theta}(z)) \equiv \mathcal{L}(\phi, \theta, \bar\theta)$, (1) where $\phi$, $\theta$ and $\bar\theta$ denote the model parameters of the posterior encoder, the speech decoder and the normalizing flow-based enhanced prior, respectively. Due to the introduction of normalizing flows, the KL term in Equation 1 no longer has a simple closed-form solution, so we estimate the expectation w.r.t. $q_\phi(z|x)$ via a Monte-Carlo method, rewriting the KL term as: $\mathrm{KL}(q_\phi(z|x)\,\|\,p_{\bar\theta}(z)) = \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x) - \log p_{\bar\theta}(z)]$. (2) In training, the posterior distribution $N(\mu_q, \sigma_q)$ is produced by the posterior encoder. Then $z$ is sampled from the posterior distribution using reparameterization and is passed to the speech decoder. Meanwhile, the posterior sample is fed into the VP normalizing flow, which maps it towards a standard normal distribution (the middle dotted line in Figure 3c).
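A minimal sketch of the Monte-Carlo estimate in Equation 2 follows. It assumes a hypothetical `flow_prior` callable that is volume-preserving and maps a posterior sample to a standard-normal variable, so that $\log p_{\bar\theta}(z)$ is the standard-normal log-density of the flowed sample with no Jacobian correction:

```python
import math
import torch

LOG_2PI = math.log(2.0 * math.pi)

def mc_kl(mu_q, log_sigma_q, flow_prior):
    """One-sample Monte-Carlo estimate of Eq. 2: E_q[log q(z|x) - log p(z)].
    mu_q, log_sigma_q: parameters of the diagonal-Gaussian posterior."""
    eps = torch.randn_like(mu_q)
    z = mu_q + log_sigma_q.exp() * eps                    # z ~ q(z|x)
    # log-density of z under the diagonal-Gaussian posterior
    log_q = (-0.5 * eps ** 2 - log_sigma_q - 0.5 * LOG_2PI).sum(-1)
    # volume-preserving flow: log p(z) = log N(flow_prior(z); 0, I)
    u = flow_prior(z)
    log_p = (-0.5 * u ** 2 - 0.5 * LOG_2PI).sum(-1)
    return (log_q - log_p).mean()
```

The returned value would then be weighted by $\lambda_2$ and added to the MAE and adversarial terms of $\mathcal{L}_{total}$.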
In inference, the VP normalizing flow converts a sample from the standard normal distribution into a latent $z$, which we pass to the speech decoder. Our VP normalizing flow consists of 4 flow steps, each of which has 4 WaveNet layers, whose channel size and kernel size are set to 64 and 3. B.1.4 CONTENT ENCODER The content encoder is a stack of feed-forward Transformer (Vaswani et al., 2017) layers with relative position encoding (Shaw et al., 2018), as shown in Figure 3d. The information bottleneck is located in the first layer of the content encoder (the embedding layer of HuBERT tokens). We set the channel size of each embedding to 16 by default. B.1.5 SPEECH DECODER The speech decoder, as shown in Figure 3b, consists of a non-causal WaveNet and a 1D transposed convolution with stride 4, also followed by ReLU and layer normalization. The number of decoder layers, WaveNet channel size and kernel size are set to 4, 192 and 5. B.1.6 MULTI-LENGTH DISCRIMINATOR Inspired by Ye et al. (2022), our multi-length discriminator is an ensemble of multiple CNN-based discriminators, which evaluate the mel-spectrogram based on random windows of different lengths, as shown in Figure 4. In our experiments, we train three CNN-based discriminators, which observe random mel-spectrogram clips with lengths of 32, 64, and 128 frames. The structure of the CNN-based discriminator is shown in Figure 4b. It consists of $N{+}1$ 2D-convolutional layers, each of which is followed by a Leaky ReLU activation and a dropout layer. The latter $N$ convolutional layers are additionally followed by an instance normalization (Ulyanov et al., 2016) layer. After the convolutional layers, a linear layer projects the hidden states of the mel-spectrogram slice to a scalar, which predicts whether the input mel-spectrogram is real or fake. In our experiments, we set $N=2$ and the channel size of these discriminators to 32. B.2 TTS MODEL Our TTS model architecture follows PortaSpeech (Ren et al., 2021b), except that 1) we replace the post-net in PortaSpeech with multi-length adversarial training, the same as the adversarial training in our voice conversion model in Appendix B.1; 2) we add a language embedding layer to the speech decoder, indicating the language of the speech to be generated; and 3) we use a simple character encoder as in FastSpeech (Ren et al., 2019a; 2021a) instead of the mixed linguistic encoders, for simplicity. The detailed model architecture and hyper-parameters of the posterior encoder, normalizing flow, speech decoder and multi-length discriminators are the same as those in the voice conversion model in Appendix B.1. The structures of the text encoder and duration predictor are the same as those in Ren et al. (2021b), with channel size 192, kernel size 5 and 4 layers. B.3 ASR MODEL We adopt the architecture of Tacotron 2 (Shen et al., 2018) for our ASR model, since its location-sensitive attention can generate close-to-diagonal, monotonic alignments between speech and text. We replace the character/phoneme embedding of the encoder in Tacotron 2 with a CNN-based speech pre-net with stride 4 to enable speech information encoding. On the decoder side, we use a character embedding as the input layer and a softmax as the output layer to adapt the decoder to the character sequence. We set the hidden size of the encoder RNN to 512 and the number of convolution stacks to 5; the hidden sizes of the decoder RNN and attention RNN are both set to 1024; the channel size of the decoder attention is set to 512.
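To illustrate the alignment extraction described in Section 3.3, here is a compact numpy sketch of finding the maximum-probability monotonic path over the ASR attention matrix (the same path that defines $P_i$ in the focus rate). This is an illustrative dynamic-programming rendering under simplified assumptions, not the exact implementation:

```python
import numpy as np

def monotonic_path(log_attn):
    """Maximum-probability monotonic path through an attention matrix
    (log domain). log_attn: [N_frames, N_tokens]. Returns P with
    P[i] = token index aligned to frame i, non-decreasing; assumes the
    path starts at token 0, ends at the last token, and N_frames >= N_tokens."""
    n, m = log_attn.shape
    q = np.full((n, m), -np.inf)
    q[0, 0] = log_attn[0, 0]
    for i in range(1, n):                     # forward DP pass
        for j in range(m):
            stay = q[i - 1, j]                # repeat the same token
            step = q[i - 1, j - 1] if j > 0 else -np.inf  # advance one token
            q[i, j] = log_attn[i, j] + max(stay, step)
    path = np.zeros(n, dtype=np.int64)        # backtrack from the last token
    path[-1] = m - 1
    for i in range(n - 2, -1, -1):
        j = path[i + 1]
        path[i] = j if j == 0 or q[i, j] >= q[i, j - 1] else j - 1
    return path

# per-token durations for the NAR TTS model are then the run lengths:
# durations = np.bincount(path, minlength=log_attn.shape[1])
```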
We also train a Transformer-based (Vaswani et al., 2017) language model and jointly beam-search decode the recognition result $X$ using the ASR model $p_{ASR}(X|S)$ and the language model $p_{LM}(X)$ to maximize $\log p_{ASR}(X|S) + \lambda_{LM} \log p_{LM}(X)$, where $\lambda_{LM}$ is the weight of the language model, set to 0.2 in this work. The beam size of the beam-search decoding is set to 5. C MORE EXPERIMENTAL DETAILS In this section, we describe more experimental details for reproducibility. C.1 DATASETS We select subsets of English, French and Indonesian from the CommonVoice (Ardila et al., 2019) dataset. We choose subsets of about 200k utterances for English and French, and all data (about 20k utterances) for Indonesian. We randomly select 100 utterances in English and Indonesian for validation and another 100 utterances in Indonesian for testing. The test set for English is randomly selected from LJSpeech (Ito, 2017). We split the target language data into two halves according to utterance ID: we take unpaired speech data from utterances with odd IDs and text data from those with even IDs, so as to guarantee the speech and text data are disjoint. C.2 BASELINES For fair comparison, we make some modifications to all baseline methods as follows: • We adopt training data consisting of paired French (as the auxiliary rich-resource language) data and unpaired English/Indonesian (as the target low-resource language) data. Specifically, for Ren et al. (2019b) and Xu et al. (2020), we warm up the ASR and TTS models in these methods using rich-resource language data before back-translation; for Liu et al. (2022b) and Ni et al. (2022), we initialize the unsupervised ASR model using a modified CTC loss (Graves et al., 2006) with rich-resource language data before unsupervised training and also initialize the TTS model with these data. • We directly use character sequences as TTS input without any lexicon or G2P tools. • We extract the speaker embeddings using the same pre-trained speaker encoder (https://github.com/resemble-ai/Resemblyzer) and add them to the TTS model to indicate the speaker information (timbre), since our dataset is multi-speaker. • We replace all baseline TTS models with the same NAR architecture as our method, since the AR architecture is very sensitive to noisy audio and cannot produce any meaningful results in our settings. • We use the same voice conversion model (described in Section 3.2) to convert ground-truth audios and all baselines' outputs to the same speaker's timbre from $S_{ref}$. • We use the same vocoder, HiFi-GAN (Kong et al., 2020), to convert mel-spectrograms to waveforms. D MORE EXPERIMENTAL RESULTS D.1 PERFORMANCE CHANGES IN BACK-TRANSLATION TRAINING To verify that the accuracy of ASR and the performance of TTS are boosted iteratively as the training directions alternate, we plot the accuracy of the TTS and ASR results in Figure 5. From the figure, we can see that as training proceeds (the training directions shift every $N_{steps} = 20000$ steps), the error rates of the ASR and TTS results gradually drop until convergence. D.2 USE OTHER RICH-RESOURCE AUXILIARY LANGUAGES We explore how different rich-resource auxiliary languages affect the target language's performance. In addition to French, we use other languages, including German, Dutch, Spanish, and Portuguese, as the rich-resource auxiliary language to train our unsupervised TTS model. For fair comparison, we use the same training data size for all rich-resource languages (an 80k-pair speech-text subset of each language from CommonVoice). We choose English as the target low-resource language.
The results are shown in Table 5. It can be seen that using German as the rich-resource language achieves the best performance. A possible reason is that the pronunciation distance between English and German is smaller than for the other languages, as they both belong to the West Germanic languages. Though Dutch also belongs to the West Germanic languages, it does not perform very well, which might be due to its poor data quality. We then combine the data from all these languages and find that this achieves very strong results, outperforming all settings that use only one auxiliary language. Besides, we observe that our method performs very well not only in English, French and Spanish, which are used to pre-train the multilingual HuBERT, but also in other unseen languages, which verifies the generalization ability of our voice conversion model and the whole unsupervised TTS pipeline. D.3 ANALYSES ON FOCUS RATE F To verify the effectiveness of the focus rate $\mathcal{F}$ proposed in Section 3.4, we calculate $\mathcal{F}$ and CER on the English test set during model training. We plot the curves to explore the correlation between them in Figure 6. From the figure, we can see that the focus rate $\mathcal{F}$ is negatively correlated with the recognition error rate, which means it is reasonable to use it as an indicator for filtering ASR transcriptions (higher $\mathcal{F}$ indicates lower CER). D.4 USE OTHER TEXT UNPAIRED DATASET We train our model using the audio data from the CommonVoice English subset, the same as in the main experiments, and the text data from the WMT16 (Bojar et al., 2016) English training set, making the domains of the unpaired audio and text very different. We keep the test set the same as in the main experiments (LJSpeech subset). The results are shown in Table 6. From the table, it can be seen that the performance drops a bit (CER and WER increase by ∼0.036 and ∼0.06, respectively) due to the domain gap between the unpaired text and speech data. E POTENTIAL NEGATIVE SOCIETAL IMPACTS Unsupervised TTS lowers the requirements for deploying speech synthesis services (it only needs unpaired speech and text data) and synthesizes high-quality speech, which may cause unemployment for people in related occupations such as broadcasters and radio hosts. In addition, there is the potential for harm from non-consensual voice cloning or the generation of fake media, and the voices of the speakers in the recordings might be used more than they expect.
Summary Of The Paper
This paper presents an unsupervised TTS pipeline that is achieved by assembling several pre-trained models and various methods, along with a few new tricks, to achieve impressive unsupervised TTS results. The main contributions of this paper are examining different aspects of the proposed pipeline and quantifying the contribution of each component, as well as showing that, when properly integrated, this pipeline can achieve impressive results on the setup chosen by the paper.
Strengths And Weaknesses
Strengths:
- The final results are very good, especially for Indonesian.
- There is an extensive ablation study showing the effectiveness of each component.
- There are some innovations, such as the focus rate F.
Weaknesses:
- The method is quite complex, combining several pre-trained models together with complex techniques, though this may be justified given the results achieved.
- Many of these tricks can also be applied to unsupervised ASR (thus improving unsupervised TTS built on those methods), such as voice conversion and access to a high-quality rich-resource language close to the target language, and these drive significant improvements in accuracy, as shown in Table 3. While the authors have made attempts to apply these methods to the baselines (as described in Appendix C.2), it is not very clear how effective these adaptations were without any ablation studies, nor is it clear how much effort the authors put in to make sure the baselines were modified in an optimal way.
Other:
- While using the focus rate F to judge the quality of back-translation results is interesting, is there any study on how well F maps to quality (e.g., using a paired dataset for evaluation of F)?
- In the given setup, speech and text are from the same domain (the CommonVoice dataset). Does this method still work if text and speech are from different domains, e.g., conversational speech + Wikipedia text?
Clarity, Quality, Novelty And Reproducibility
I believe the clarity and quality of the presentation are good and easy to follow. The code is apparently available as well, which should make reproducibility straightforward. In terms of novelty, while this work achieves its results mostly by composing other methods and models, the actual composition needed to achieve the stated results is not trivial and the analysis is valuable.
Nit: the Table 3 caption references LM, which is not in the table, but not Aug, which is in the table.
ICLR
Title Bag of Tricks for Unsupervised Text-to-Speech Abstract Unsupervised text-to-speech (TTS) aims to train TTS models for a specific language without any paired speech-text training data in that language. Existing methods either use speech and corresponding pseudo text generated by an unsupervised automatic speech recognition (ASR) model as training data, or employ the back-translation technique. Though effective, they suffer from low robustness to low-quality data and heavy dependence on the lexicon of a language that is sometimes unavailable, leading to difficulty in convergence, especially in lowresource language scenarios. In this work, we introduce a bag of tricks to enable effective unsupervised TTS. Specifically, 1) we carefully design a voice conversion model to normalize the variable and noisy information in the low-quality speech data while preserving the pronunciation information; 2) we employ the non-autoregressive TTS model to overcome the robustness issue; and 3) we explore several tricks applied in back-translation, including curriculum learning, length augmentation and auxiliary supervised loss to stabilize the back-translation and improve its effectiveness. Through experiments, it has been demonstrated that our method achieves better intelligibility and audio quality than all previous methods, and that these tricks are very essential to the performance gain. 1 INTRODUCTION Text to speech (TTS), or speech synthesis, has been a hot research topic (Wang et al., 2017; Shen et al., 2018; Ming et al., 2016; Arik et al., 2017; Ping et al., 2018; Ren et al., 2019a; Li et al., 2018; Ren et al., 2021a; Liu et al., 2021; Ren et al., 2021b) and has broad industrial applications as well. However, previous TTS has been developed dominantly for majority languages like English, Mandarin or German, while seldom for minority languages and dialects (low-resource languages), as supervised TTS requires hours of single-speaker and high-quality data to retain a good performance, but collecting and labeling such data for low-resource languages are very expensive and need a substantial amount of manpower. Recently, some works exploit unsupervised (Ni et al., 2022; Liu et al., 2022b) or semi-unsupervised learning (Tjandra et al., 2017; Ren et al., 2019b; Liu et al., 2020; Xu et al., 2020) to enable speech synthesis for low-resource languages, some of which are summarized in Table 1. Semi-supervised methods rely on a small amount of high-quality paired data in the target language to initialize the model parameters and employ back-translation to leverage the unpaired data. But high-quality paired data in minor languages are usually collected via recording in professional studios or transcribing by native speakers, and hence very costly and sometimes even unaffordable to attain. In contrast, unsupervised methods train an unsupervised automatic speech recognition model (ASR) (Baevski et al., 2021; Liu et al., 2022a) to generate pseudo labels for the unpaired speech data, and then use the pseudo labels and speech paired data to train the TTS model. However, their performance tends to be bounded by the performance of the unsupervised ASR model, which is extremely difficult and unstable to train on some low-resource languages, especially for those without lexicon or grapheme-to-phoneme (G2P) tools (Baevski et al., 2021; Liu et al., 2022a)1. Besides, 1Baevski et al. (2021) claimed their method “requires phonemization of the text for the language of interest”, and Liu et al. 
(2022a) claimed “when switching to an entirely letter-based system without a lexicon, the unit error rate increases substantially”. the unpaired speech samples used in existing unsupervised methods are clean and ready for general TTS model training, such as CSS10 (Park & Mulc, 2019), LibriTTS (Zen et al., 2019) and LJSpeech (Ito, 2017). However, in real low-resource language scenarios, there is no guarantee that enough clean data can be obtained. In this work, we aim to train an unsupervised TTS model in a low-resource language (the target language) with unpaired data, rather than any paired speech and text data, in that language, and also paired data in other rich-resource languages for initialization. Such training data are easily accessible. For example, the unpaired speech and text in the target language can be crawled from video or news websites in the countries using that language; the paired data in rich-resource languages can be obtained from some ASR and TTS datasets. Besides, these crawled speech data are from different speakers. Under such a task setting, we need to address the following challenges in order to achieve our goal. 1) Low-quality multi-speaker data. The speech data to be used for unsupervised training in our problem are often multi-speaker and low-quality, with much variable and noisy information like timbre, background noise, etc., hindering model convergence and meaningful speech-text alignment. This significantly increases the difficulty of the TTS model training. 2) Back-translation stability. Previous semi-supervised TTS methods (Xu et al., 2020; Ren et al., 2019b) leverage the unpaired data with back-translation, but only achieving limited performance and sometimes difficult to converge, especially in unsupervised settings. 3) Robustness. Previous semi-supervised/unsupervised TTS methods (Xu et al., 2020; Ren et al., 2019b; Ni et al., 2022; Liu et al., 2022b) use an auto-regressive architecture (Li et al., 2018; Shen et al., 2018), which suffers from word missing and repeating issues, especially when the supervision signal is very weak. 4) Lack of lexicon. For low-resource languages, it is usually difficult to obtain existing lexicons or G2P tools. We propose several practical tricks to address these issues and enable unsupervised TTS without any paired data in the target language and bridge the performance gap between the unsupervised and supervised TTS. Specifically, 1) we normalize the variable and noisy information in the low-quality training data. We propose a cross-lingual voice conversion model with flow-based enhanced prior, which converts the timbre of all sentences in different languages to one same speaker’s voice while preserving the pronunciation information. 2) We explore some tricks including curriculum learning, length augmentation and auxiliary supervised loss to improve the effectiveness of back-translation. 3) To strengthen model robustness, we employ the non-autoregressive (NAR) TTS model and use the alignment extracted from the ASR model2 in the back-translation process to guide the NAR TTS model training. By applying such a bag of tricks, we can successfully train an effective TTS model with noisy and multi-speaker data and without any lexicons. Through experiments, it has been verified that our method can achieve both high-quality and highintelligibility TTS, in terms of MOS and of word error rate (WER) and character error rate (CER) evaluated by external ASR, respectively. 
We compare our method to existing unsupervised TTS baselines (Ren et al., 2019b; Xu et al., 2020; Ni et al., 2022; Liu et al., 2022b) and find it significantly outperforms them in both audio quality and intelligibility under the same experimental settings. We conduct some analyses on the proposed tricks, which demonstrate the importance and necessity of 2The ASR model is the byproduct of back-translation, which does not need any extra paired data. these tricks to achieve state-of-the-art unsupervised TTS. The samples generated by our models can be found at https://unsupertts-tricks.github.io. 2 RELATED WORKS 2.1 SUPERVISED SPEECH SYNTHESIS In the past few years, with the development of deep learning, neural network-based TTS has thrived (Wang et al., 2017; Tachibana et al., 2018; Li et al., 2019; Ren et al., 2019a; 2021a; Łańcucki, 2020), where the text-to-speech mapping is modeled by deep neural networks using encoder-decoder architectures. Early methods by Wang et al. (2017) and Ping et al. (2017) generate the melspectrogram autoregressively. However, they suffer from slow inference and low robustness issues, e.g. word skipping and repeating. To tackle these issues, later works explore non-autoregressive (NAR) speech generation. FastSpeech (Ren et al., 2019a) is the first non-autoregressive TTS architecture, which adopts the duration predictor and length regulator to bridge the length gap between the speech and the text sequence. After that, many methods are proposed, such as FastSpeech 2 (Ren et al., 2021a), Glow-TTS (Kim et al., 2020) and EATS (Donahue et al., 2021), achieving not only better audio quality but also fast inference and good robustness. Recently, some NAR models leveraging variational auto-encoder (VAE) to model the variation information in the latent space are developed, like VITS (Kim et al., 2021b) and PortaSpeech (Ren et al., 2021b), and they quickly become popular. In this work, we also employ non-autoregressive architecture and VAE structure to achieve robustness against low-quality data. 2.2 LOW-RESOURCE SPEECH SYNTHESIS Supervised speech synthesis requires high-quality paired speech and text data for training, which are costly to attain, especially for low-resource languages. To broaden the application scope of TTS systems, several low-resource TTS models are developed, which only need a few or even not any high-quality paired data. Instead, they use unpaired text and audio data to train TTS models in a semi-supervised or unsupervised way, which are much straightforward and cheap to obtain. Semi-supervised TTS. Ren et al. (2019b) adopt back-translation and pre-training to leverage unpaired data, generating pseudo text/speech samples with ASR/TTS models and training them with the augmented data iteratively. However, as a proof of concept, Ren et al. (2019b) only verify the feasibility of semi-supervised TTS in a single-speaker dataset. Later, LRSpeech (Xu et al., 2020) supports multi-speaker and noisy datasets and is closer to real application. However, these semi-supervised methods still require a few pairs of high-quality speech and text data, which are expensive for low-resource languages since they often need to be recorded in professional studios. Unsupervised speech synthesis. Unsupervised speech synthesis does not use any paired training data from the target speaker and language, which has attracted growing attention recently. As the earliest unsupervised TTS works, Liu et al. (2022b) and Ni et al. 
(2022) both use an unsupervised ASR model to transcribe the TTS speech data into pseudo text and train on the augmented data to build an unsupervised TTS system. However, they heavily rely on the unsupervised ASR technique, whose training procedure is very unstable and depends heavily on lexicons. Therefore, these methods are difficult to apply to other low-resource languages. Besides, when switching to a multi-speaker setup, the gap between supervised and these unsupervised TTS methods becomes larger than in the single-speaker setup (Liu et al., 2022b). A recent arXiv paper (Lian et al., 2022) trains a non-parallel voice conversion model on unpaired speech data as the acoustic model, together with a dedicated module that maps the text sequence to the discrete speech representation sequence; however, this module has to be trained on an external paired dataset in the same language as the unpaired dataset. Thus this method is hardly applicable to real low-resource language scenarios, due to the difficulty of collecting such a large paired dataset in that language.
3 PROPOSED BAG OF TRICKS
Suppose we have an unpaired speech dataset Slow and a text dataset Tlow in the target low-resource language Llow, together with a paired speech-text dataset Srich and Trich in another rich-resource language Lrich as auxiliary supervised training data. We assume that Llow and Lrich share some common characters; for example, Indonesian and French both use the Latin alphabet. Slow is a multi-speaker speech dataset whose audio quality is extremely low, as it is difficult to obtain enough single-speaker clean data for the low-resource language. As for the auxiliary training data, since many public speech audios are available in rich-resource languages, we do not impose any restrictions on the quality of these speech data. Our method aims to train the TTS model in the language Llow using the above datasets. Besides, we need another clean speech dataset Sref to provide the target timbre for our voice conversion model; it can be part of Srich or Slow. In this section, we first describe the overall training pipeline and then introduce the model designs and tricks used in each stage of the pipeline.
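To keep the notation straight, the following minimal sketch groups these resources into a single container; the class and field names are ours for illustration and do not come from the paper's code.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class UnsupervisedTTSData:
    S_low: List[str]                   # unpaired, noisy, multi-speaker speech in L_low (e.g., audio paths)
    T_low: List[str]                   # unpaired text in L_low
    rich_pairs: List[Tuple[str, str]]  # paired (text, speech) data in L_rich for warm-up
    S_ref: List[str]                   # clean speech of one reference speaker (subset of S_rich or S_low)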
3.1 OVERALL TRAINING PIPELINE
As shown in Figure 1, the training pipeline of our method consists of 3 stages: voice conversion, supervised warm-up training and unsupervised back-translation training. We put the detailed pseudo-code of our training pipeline in Appendix A and describe each stage in the following paragraphs.
Stage 1: Voice conversion. The low-resource speech dataset Slow contains many speakers and can be very noisy. We consider the variable and noisy information, e.g., background noise, speaker timbre, accent and some specific prosody, as the text-independent information in speech. Although some variable information is essential for certain TTS tasks like emotional, expressive and personalized TTS, it is an obstacle for unsupervised TTS. The core purpose of unsupervised TTS is to solve the information matching problem between two modalities, i.e., speech and text, which are actually aligned by the pronunciation (also called content). The variable information in speech may interfere with the cross-modal pronunciation information matching in the unsupervised training stage and make the model struggle to find aligned clues in the text for this variable information. Intuitively, therefore, if we can reduce the information gap between speech and text, our TTS and ASR models can achieve cross-modal pronunciation information matching faster, and the unsupervised training process can then be stabilized. To this end, we apply cross-lingual voice conversion as the first stage. We train the voice conversion model on the datasets Slow, Srich and a clean dataset Sref, which provides the reference speaker timbre and can be in any language. We then normalize the variable and noisy speech information of the audios in Slow and Srich using the voice conversion model and denote the generated datasets as S′low and S′rich.
Stage 2: Supervised warm-up training. It is still very difficult to train unsupervised TTS directly from scratch even after normalizing the variable information in the speech dataset Slow. To warm up the models for the subsequent unsupervised training, we train a sequence-to-sequence ASR model, which is required in the following back-translation stage, and a non-autoregressive TTS model using the auxiliary paired dataset Srich in a rich-resource language. This stage provides a better initialization for the model, since there exist certain commonalities between written and spoken formats across languages (for a low-resource language Llow, it is usually not difficult to find a rich-resource language Lrich that is close to Llow).
Stage 3: Unsupervised back-translation training. Back-translation, originating from neural machine translation, is one of the most effective ways to leverage monolingual data for translation. In unsupervised TTS, back-translation (Sennrich et al., 2016; He et al., 2016; Ren et al., 2019b) leverages the dual nature (He et al., 2016; Qin, 2020) of the TTS and ASR tasks and develops the capability of transforming text to speech (TTS) and speech to text (ASR). We transform a speech sequence s into a text sequence tpseudo using the ASR model, and then train the TTS model on the transformed pair (tpseudo, s). Similarly, we also train the ASR model on the transformed pair (spseudo, t) generated by the TTS model. Back-translation thus has two training directions, and as the training directions shift, the accuracy of ASR and the performance of TTS are boosted iteratively; a minimal sketch of one round follows below. We show the performance improvements as the back-translation training progresses in Appendix D.1.
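To make the two training directions concrete, here is a framework-agnostic Python sketch of one back-translation round, roughly mirroring Algorithm 1 in Appendix A. All callables (transcribe, synthesize, tts_step, asr_step) are assumed interfaces rather than the paper's actual code, and the curriculum filtering of Section 3.4 is omitted here for brevity.

import random

def back_translation_round(transcribe, synthesize, tts_step, asr_step,
                           S_low, T_low, rich_pairs, n_steps,
                           p_aux=0.2, p_cat=0.2):
    """One round of back-translation. transcribe: speech -> pseudo text;
    synthesize: text -> pseudo speech; tts_step/asr_step: one training step
    on a (text, speech) pair. Speech is modeled as a list of frames so that
    concatenation is list addition."""
    # Direction 1: ASR pseudo-labels the speech; TTS trains on (t_pseudo, s).
    pseudo = [(transcribe(s), s) for s in S_low]
    for _ in range(n_steps):
        if random.random() <= p_aux:            # auxiliary supervised loss (Sec. 3.4)
            t, s = random.choice(rich_pairs)
        else:
            t, s = random.choice(pseudo)
            if random.random() <= p_cat:        # length augmentation (Sec. 3.4)
                t2, s2 = random.choice(pseudo)
                t, s = t + " " + t2, s + s2
        tts_step(t, s)
    # Direction 2: TTS synthesizes pseudo speech; ASR trains on (s_pseudo, t).
    pseudo = [(synthesize(t), t) for t in T_low]
    for _ in range(n_steps):
        if random.random() <= p_aux:
            t, s = random.choice(rich_pairs)
        else:
            s, t = random.choice(pseudo)
            if random.random() <= p_cat:
                s2, t2 = random.choice(pseudo)
                s, t = s + s2, t + " " + t2
        asr_step(t, s)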
3.2 VARIATIONAL VOICE CONVERSION MODEL WITH ENHANCED PRIOR
The voice conversion (VC) model in Stage 1 aims to normalize the variable and noisy information in low-quality audios. It is based on self-supervised learning (SSL) audio representations (Polyak et al., 2021; van Niekerk et al., 2022), which have proven very effective for disentangling content and timbre information. As shown in Figure 2a, the overall architecture of our VC model is like an autoencoder. In training, the mel-spectrograms M1 and M2, which are identical at this stage, are fed into several information extraction modules to generate disentangled representations. Specifically, 1) the pre-trained speaker encoder extracts the sentence-level speaker embedding; 2) the pre-trained HuBERT (Hsu et al., 2021) extracts the frame-level SSL discrete representations containing the content (pronunciation) information; and 3) the posterior encoder extracts the residual information. After the information decomposition, the speech decoder takes all the representations as input and reconstructs the mel-spectrogram using a mean absolute error (MAE) loss and a multi-length adversarial loss (Ladv), following Ye et al. (2022) and Chen et al. (2020). In inference, we replace M1 with the reference speech, which provides the target speaker timbre. In this way, the generated speech preserves the pronunciation information of M2 while adopting the timbre of M1. However, besides the common merits of general VC, i.e., preserving content and converting timbre, our model must also have the following properties to ensure its performance in our pipeline. 1) It should be cross-lingual: it must be able to normalize Slow and Srich, which are in different languages. 2) It should generate high-quality results: as the normalized speech is fed to the next two stages as the training target, its quality bounds that of the whole pipeline. 3) It should be robust to noisy and low-quality audio. Building upon previous SSL-based VC methods, we enable these properties with several improvements.
Multilingual HuBERT. To make the model cross-lingual, we employ a multilingual HuBERT (Lee et al., 2021; Popuri et al., 2022), pre-trained on speech in multiple languages, to extract the SSL discrete representations as the content information. We find it also generalizes well to unseen languages (see Appendix D.2). HuBERT does not need any paired data or speaker information, which is consistent with our task setting. Besides, we add a language ID input to the speech decoder to indicate the language to be generated, which can accurately model the pronunciation differences among languages given the same discrete representation and make up for the limited capacity of multilingual representations.
Variational encoder with flow-based enhanced prior. To improve model robustness and audio quality, inspired by previous successful work in TTS (Ren et al., 2021b; Kim et al., 2021a), we introduce a variational encoder with a flow-based enhanced prior. In training, this encoder "stores" the residual information, e.g., irregular noises, time-varying timbre and prosody, that cannot be encoded by the other information extraction modules into the posterior distribution Dq, and uses a normalizing flow to reshape the prior distribution Dp, which needs to be close to Dq in terms of KL-divergence. With normalizing flows, the KL-divergence no longer has a simple closed-form solution, so we estimate it via a Monte-Carlo method as in Ren et al. (2021b) and Kim et al. (2021a). The reason we need the normalizing flow is that a simple Gaussian prior imposes strong constraints on the posterior, pushing the posterior distribution towards the mean and limiting diversity, while the distribution shaped by normalizing flows is more flexible and provides the decoder with a stronger prior. Besides, it also gives the sampled random variables temporal dependency.
Information bottleneck. The HuBERT representation is a low-bitrate representation of speech content and does not contain much non-lexical information such as speaker identity and emotion. However, due to its discrete-space bottleneck and way of training, we still cannot ensure that it fully disentangles the timbre information, and the remaining timbre information may degrade the voice conversion quality at inference. To further erase the speaker identity information, we need to choose an appropriate input dimension for the content encoder (i.e., the embedding layer for HuBERT tokens), which can be neither too large nor too small: a large dimension may lead to leakage of fine-grained identity information from the content encoder, and a small one may result in loss of pronunciation information. We put more details of our VC model in Appendix B.1.
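As an interface-level illustration of how the pieces interact at inference time (content units from the source utterance, timbre from the reference speaker, residual information sampled from the flow prior), consider the following sketch; the vc.* methods are assumed wrappers, not the paper's API.

def convert_voice(vc, src_mel, ref_mel, lang_id):
    """Inference-time voice conversion (cf. Figure 2a)."""
    content = vc.hubert_units(src_mel)       # frame-level discrete content units (M2's role)
    timbre = vc.speaker_embedding(ref_mel)   # sentence-level speaker embedding (M1's role)
    z = vc.sample_flow_prior(len(content))   # residual information from the enhanced prior
    return vc.decode(content, timbre, z, lang_id)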
3.3 TTS AND ASR MODELS
Previous unsupervised TTS works use autoregressive (AR) TTS architectures such as Tacotron 2 (Shen et al., 2018) and TransformerTTS (Li et al., 2018), which automatically learn the speech-text alignment. However, such an AR TTS architecture is not robust and is prone to word-missing and word-repeating problems, as stated in Ren et al. (2019a). In this work, as shown in Figure 2b, we adopt a non-autoregressive (NAR) TTS architecture (Ren et al., 2019a; 2021a). We mainly follow PortaSpeech (Ren et al., 2021b), except that we replace the post-net in PortaSpeech with multi-length adversarial training (Ye et al., 2022; Chen et al., 2020) to simplify the training pipeline while keeping the naturalness of the generated mel-spectrogram. Instead of obtaining the ground-truth duration information from the Montreal Forced Aligner (MFA) (McAuliffe et al., 2017) as many non-autoregressive TTS models (Ren et al., 2021a;b; Ye et al., 2022) do, we extract the speech-text alignment from the attention matrix generated by the ASR model, which simplifies the training pipeline in our back-translation stage and removes the dependency on external tools. Specifically, inspired by Glow-TTS (Kim et al., 2020), we extract the speech-text alignment by finding the monotonic path of maximum probability over the attention matrix of our ASR model using Viterbi decoding, as sketched below. To enable the TTS model to generate speech in different languages, we add a language embedding to the decoder, and an extra language ID input is required to specify the language of the target speech. Our ASR model is based on a sequence-to-sequence architecture with an LSTM-based encoder and decoder. To generate more monotonic alignments for TTS training, we employ location-sensitive attention (Shen et al., 2018). Unlike the TTS model, our ASR model is shared across all languages and does not need any language embedding as input, which lets it generalize better to new languages in our scenario. We put more details and model configurations of the TTS and ASR models in Appendix B.2 and B.3.
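A minimal NumPy sketch of this monotonic path search, under the simplifying assumptions that each mel frame aligns to exactly one token, the path starts at the first token and ends at the last, and each step either stays on the current token or advances by one (so the number of frames N must be at least the number of tokens M):

import numpy as np

def monotonic_alignment(attn):
    """Find the monotonic path of maximum probability through an ASR
    attention matrix attn of shape [N frames, M tokens] via dynamic
    programming; returns P with P[i] = token index aligned to frame i."""
    N, M = attn.shape
    log_a = np.log(attn + 1e-9)
    dp = np.full((N, M), -np.inf)
    dp[0, 0] = log_a[0, 0]
    for i in range(1, N):
        for j in range(min(i + 1, M)):  # token index cannot exceed frame index
            stay = dp[i - 1, j]
            advance = dp[i - 1, j - 1] if j > 0 else -np.inf
            dp[i, j] = log_a[i, j] + max(stay, advance)
    P = np.zeros(N, dtype=int)
    P[-1] = M - 1                       # path must end on the last token
    for i in range(N - 2, -1, -1):      # backtrack: predecessor is j (stay) or j-1 (advance)
        j = P[i + 1]
        P[i] = j if (j == 0 or dp[i, j] >= dp[i, j - 1]) else j - 1
    return P

The per-token durations needed by the NAR TTS model then follow as the number of frames assigned to each token, e.g., np.bincount(P, minlength=M).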
3.4 TRICKS IN BACK-TRANSLATION
Back-translation is a critical step in unsupervised TTS training for leveraging unpaired speech and text data. In this subsection, we describe several back-translation strategies that significantly improve the effectiveness and efficiency of unsupervised TTS training.
Curriculum learning. After warming up the ASR model in Stage 2 using Srich and Trich, we can force the ASR model to transcribe the audio in Llow into text in Lrich by initializing the language embedding of Llow with that of Lrich. Considering that the results of ASR are taken as the input of TTS, we select good transcriptions, whose pronunciation is very similar to the ground truth, for TTS training, and discard bad ones in each round of back-translation. However, we cannot directly calculate the error rate between a transcription and the ground-truth text, since we have no corresponding text for each audio. Therefore, to filter good recognition results during iterative back-translation, we design a metric called the focus rate (F) to evaluate the confidence of ASR results, defined as

F = (1/N) ∑_{i=1}^{N} A_{i,P_i},

where N denotes the number of mel-spectrogram frames; A_{i,j} is the ASR attention weight at the position of the i-th mel-spectrogram frame and the j-th text token, satisfying ∑_j A_{i,j} = 1; and P_i is the text token index corresponding to the i-th mel-spectrogram frame on the monotonic path of maximum probability decoded by the Viterbi algorithm (Forney, 1973). A higher F means that greater probability mass lies on the decoded monotonic path, implying a better speech-text monotonic alignment and higher confidence in the transcribed results. In each round of the pseudo-text generation process, we use F to select good ASR results whose F is greater than a fixed threshold Fthres. Besides, we store F for each pseudo text tpseudo and replace tpseudo with the new result in the next back-translation round only if F has increased.
Length augmentation. At the beginning of training in the low-resource language, short utterances (text and speech) are easier for the TTS and ASR models to fit, so the models are generally better at generating short utterances than long ones. Besides, our curriculum learning strategy approximates the quality of the generated text and filters out bad results, forcing our model to keep more short utterances than long ones. Consequently, our model becomes biased towards short utterances and may perform very poorly on long sentences. To fix this issue, we introduce a length augmentation strategy. In particular, with some probability pcat we randomly concatenate two utterances, t1 with t2 (or s1 with s2), together with their generated results, s1_pseudo with s2_pseudo (or t1_pseudo with t2_pseudo), obtaining the concatenated pairs (t_cat, s_cat_pseudo) and (s_cat, t_cat_pseudo) for back-translation training. Length augmentation helps the TTS and ASR models generate long sentences better and makes them more robust to long text inputs at inference.
Auxiliary supervised losses. If we employ only the back-translation loss in Stage 3, the model may fail to find the correct speech-text alignment, leading to unsatisfactory results and unstable training, especially at the beginning. To solve this problem, apart from the back-translation loss in the target low-resource language, we also keep the supervised training losses in the auxiliary rich-resource language, the same as those in Stage 2; we call them "auxiliary supervised losses". Specifically, during training we intersperse the rich-resource-language supervised training, for both TTS and ASR, into the back-translation steps with some probability paux.
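A small sketch of the focus rate and the curriculum filter described above, reusing the alignment path from Section 3.3; all names are illustrative.

import numpy as np

def focus_rate(attn, path):
    """F = (1/N) * sum_i A[i, P_i]: mean attention mass on the decoded
    monotonic path through attn of shape [N frames, M tokens]."""
    return float(np.mean(attn[np.arange(len(path)), path]))

def curriculum_filter(candidates, f_thres=0.2, best_f=None):
    """Keep pseudo transcripts whose focus rate exceeds f_thres; only replace
    a stored transcript when its focus rate improves over previous rounds.
    candidates: utterance id -> (pseudo_text, attn, path)."""
    best_f = dict(best_f or {})
    selected = {}
    for uid, (text, attn, path) in candidates.items():
        f = focus_rate(attn, path)
        if f > f_thres and f > best_f.get(uid, -np.inf):
            selected[uid] = text
            best_f[uid] = f
    return selected, best_f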
4 EXPERIMENTS AND RESULTS
In this section, we conduct experiments to evaluate the effectiveness of our proposed method for unsupervised TTS. We first describe the experimental settings, then show the results of our method, and finally conduct some analyses.
4.1 EXPERIMENTAL SETUP
Datasets. We choose the speech and text data from the CommonVoice dataset (Ardila et al., 2019) for training, and English and Indonesian as the target low-resource languages (we choose English as one of the target languages because we can understand it and it is easy to evaluate, although English is not actually a low-resource language). We use French as the rich-resource language unless otherwise stated; experimental results with other rich-resource languages are given in Appendix D.2. We split the target language data into two halves, taking unpaired speech data from the first half and text data from the second, so as to guarantee that the speech and text data are disjoint. We use LJSpeech (Ito, 2017) as Sref to provide the speaker timbre and suppress the background noise for the voice conversion model. For evaluation, we choose 100 audio/text pairs from LJSpeech for English (we use LJSpeech because it has fewer errors in text-speech pairing, while the CommonVoice data are very noisy and contain much wrongly labeled text) and 100 audio/text pairs from the CommonVoice Indonesian subset for Indonesian. For the speech data, we convert the raw waveform into mel-spectrograms with 80 ms frame size and 20 ms frame hop, following Hsu et al. (2021). More details are listed in Appendix C.1.
Training and evaluation. We train our VC, TTS, and ASR models on 1 NVIDIA A100 GPU with batch size 128. We use the Adam optimizer with β1 = 0.9, β2 = 0.98, ε = 10^-9 and learning rate 2e-4. The training takes nearly 3 days. The output mel-spectrograms are converted to waveforms using a HiFi-GAN (Kong et al., 2020) pre-trained on LJSpeech (Ito, 2017). The back-translation hyper-parameters Fthres, Nsteps, pcat and paux are set to 0.2, 20k, 0.2 and 0.2, respectively. For evaluation, we mainly use MOS (mean opinion score) for audio quality, and WER (word error rate) and CER (character error rate) for the intelligibility of the voice (French & Steinberg, 1947), to verify whether we can generate a reasonable speech sequence. For the mean opinion score evaluation, we keep the text content consistent among different models so as to exclude other interference factors and examine only the audio quality. We randomly choose 20 sentences from the test set, and each audio is listened to by at least 20 testers, following Ren et al. (2019a; 2021a), all of whom are native English/Indonesian speakers. For WER and CER, we first transcribe the sentences from the generated speech using open-sourced or commercial ASR and calculate these metrics against the ground-truth text of the test set. We use WeNet (Yao et al., 2021; Zhang et al., 2022) for English, for fair comparison with future works, since commercial ASR may change over time; for Indonesian, we choose the Azure ASR service (https://azure.microsoft.com/en-us/products/cognitive-services/speech-to-text/), since we could not find any open-sourced Indonesian ASR that is accurate enough. In the analytical experiments, we also report the CER of our own ASR model, which likewise indicates the performance of our system, since ASR and TTS depend on each other and are boosted iteratively.
4.2 RESULTS AND ANALYSES
4.2.1 PERFORMANCE
We compare our method with previous works, including Ren et al. (2019b), Xu et al. (2020), Liu et al. (2022b) and Ni et al. (2022). For fair comparison, we make some modifications to all baseline methods, including unifying the training dataset, the TTS acoustic model and the vocoder (detailed modifications of each baseline method are given in Appendix C.2). The results are shown in Table 2. We also evaluate the outputs of a supervised TTS model trained with paired target-language data for reference; its results can be regarded as an upper bound. From the table, it can be seen that our method achieves the best performance in both speech quality (MOS) and intelligibility (CER and WER) in English and Indonesian. Surprisingly, our method can even approach the performance of the supervised model in Indonesian; a possible reason is that Indonesian is easier to pronounce than English. These observations prove the effectiveness of our proposed tricks for unsupervised TTS.
4.2.2 ABLATION STUDY
To analyze the effectiveness of each trick and component, we conduct ablation studies on English. In addition to generated speech quality (MOS) and intelligibility (CER and WER), we also analyze the character error rate of our ASR model (CER(ASR)). The results are shown in Table 3. Comparing the other rows with Row 1, which shows our model with all tricks, we have several observations. 1) From Row 2, it can be seen that our model achieves better performance after normalizing the speech variance, including timbre and noise, in our dataset. 2) From Row 3, it can be seen that the NAR TTS architecture improves speech quality and intelligibility by a large margin, as NAR TTS is more robust to noisy speech and reduces bad cases in the generated speech. 3) From Row 4, it can be seen that back-translation is essential to unsupervised TTS, which is consistent with the findings of previous works (Xu et al., 2020; Ren et al., 2019b). 4) From Row 5, we can see that curriculum learning improves training effectiveness, since it filters out bad pseudo transcripts and improves the quality of the training set for back-translation. 5) From Row 6, it can be seen that our length augmentation strategy improves robustness to long text inputs at inference. 6) From Row 7, we find that the auxiliary supervised training loss improves the performance of both ASR and TTS by stabilizing the training.
4.2.3 ANALYSES ON THE VOICE CONVERSION MODEL
Having verified the effectiveness of normalization via the voice conversion model in Section 4.2.2, we conduct further analyses of our proposed voice conversion model, including the effects of different information-bottleneck channel sizes and of the flow-based enhanced prior. The results are shown in Table 4. It can be observed that an appropriate size of the bottleneck channels is crucial for the performance of the voice conversion model, with a large bottleneck resulting in timbre information leakage and a small bottleneck leading to pronunciation information loss. Besides, our flow-based enhanced prior improves the quality of the converted speech, since it makes fewer assumptions about the prior distribution, as mentioned in Section 3.2.
5 CONCLUSION
In this work, we proposed an unsupervised method for TTS that leverages low-quality and noisy unpaired speech and text data in the target language and paired data in other rich-resource languages. Our method comprises several practical tricks for realizing unsupervised text-to-speech, including normalizing the variable and noisy information in the speech data, curriculum learning, length augmentation, and auxiliary supervised training. We also found that the non-autoregressive TTS architecture can significantly relieve robustness issues in unsupervised settings. We conducted experiments on the CommonVoice dataset, taking English and Indonesian as the target languages, and found that our method achieves high audio quality in terms of MOS and high intelligibility in terms of WER and CER, demonstrating remarkable effectiveness. Further analyses verified the importance of each trick of our method.
Appendices
A TRAINING ALGORITHM
The detailed unsupervised training algorithm is shown in Algorithm 1.
Algorithm 1 Unsupervised TTS Training
1: Input: paired dataset in the rich-resource language, Srich and Trich; unpaired speech and text data in the low-resource language, Slow and Tlow; single-speaker speech dataset Sref containing the reference speaker for voice conversion; pre-trained multilingual HuBERT model Mh; pre-trained speaker encoder Mspk.
2: Initialize: multilingual TTS model MTTS and ASR model MASR; current unsupervised training step t = 0; total unsupervised training steps Ttotal; number of steps for each TTS or ASR stage Nsteps.
3: Train our proposed voice conversion model with Srich, Slow and Sref, using Mh and Mspk to extract HuBERT and speaker representations.
4: Convert the timbre of all speech samples in Srich and Slow to that of the speech in Sref and obtain the converted S′rich and S′low. {Sec. 3.2}
5: Train MASR and MTTS using S′rich and Trich.
6: repeat
7:   Convert all samples s in S′low to pseudo text tpseudo.
8:   Select pseudo training pairs (tpseudo, s) satisfying F > Fthres and obtain (Tpseudo, S′). {Curriculum learning in Sec. 3.4}
9:   for N in 0 to Nsteps do
10:     if Random() ≤ paux then
11:       Sample D ← (trich, s) from (Trich, S′rich). {Auxiliary loss in Sec. 3.4}
12:     else
13:       Sample D ← (tpseudo, s) from (Tpseudo, S′).
14:       if Random() ≤ pcat then
15:         Sample D′ ← (t2pseudo, s2) from (Tpseudo, S′).
16:         D ← (Concat(tpseudo, t2pseudo), Concat(s, s2)). {Length augmentation in Sec. 3.4}
17:       end if
18:     end if
19:     Train MTTS using D.
20:   end for
21:   Convert all samples t in Tlow to pseudo speech spseudo and obtain (Spseudo, Tlow).
22:   for N in 0 to Nsteps do
23:     if Random() ≤ paux then
24:       Sample D ← (srich, t) from (S′rich, Trich). {Auxiliary loss in Sec. 3.4}
25:     else
26:       Sample D ← (spseudo, t) from (Spseudo, Tlow).
27:       if Random() ≤ pcat then
28:         Sample D′ ← (s2pseudo, t2) from (Spseudo, Tlow).
29:         D ← (Concat(spseudo, s2pseudo), Concat(t, t2)). {Length augmentation in Sec. 3.4}
30:       end if
31:     end if
32:     Train MASR using D.
33:   end for
34:   t ← t + 1
35: until t > Ttotal
B MODEL DETAILS AND CONFIGURATIONS
In this section, we provide more details of the voice conversion (VC), text-to-speech (TTS), and automatic speech recognition (ASR) models, together with the hyper-parameters used in our experiments.
B.1 VC MODEL
Our proposed VC model takes two mel-spectrograms (timbre reference M1 and content provider M2) as inputs and outputs the converted mel-spectrogram M3. First, M1 is fed into a pre-trained speaker encoder (Resemblyzer: https://github.com/resemble-ai/Resemblyzer) to extract the speaker embedding Hspk. Second, M2 is fed into a pre-trained multilingual HuBERT (https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/textless_s2st_real_data.md), which is pre-trained on three languages, to extract the frame-level HuBERT discrete representation Hling. Third, M2 is passed to the posterior encoder, which generates a multivariate Gaussian distribution as the posterior of our variational VC model. Instead of directly employing a Gaussian distribution, we introduce a small volume-preserving normalizing flow to model the prior distribution. A latent z is sampled from the posterior distribution (in training) or the prior distribution (in inference). Finally, we add Hspk, Hling, z and the language embedding of M2 together (all with the same channel size C = 192) and feed the resulting hidden states into the speech decoder to generate the target speech. Besides, we introduce a multi-length discriminator to distinguish the output generated by the model from the ground-truth mel-spectrogram.
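Before turning to the training losses, here is a compact PyTorch-style sketch of this forward pass with the stated channel sizes (C = 192, a 16-channel token embedding as the information bottleneck, latent size 32). The linear layers stand in for the real Transformer content encoder and WaveNet speech decoder, and the 80 mel bins are our assumption, not a value stated in the paper.

import torch
import torch.nn as nn

class VCForwardSketch(nn.Module):
    def __init__(self, n_units=1000, n_langs=8, C=192, z_dim=32, n_mels=80):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, 16)  # information bottleneck (B.1.4)
        self.content_enc = nn.Linear(16, C)        # stand-in for the Transformer content encoder
        self.lang_emb = nn.Embedding(n_langs, C)
        self.z_proj = nn.Linear(z_dim, C)
        self.decoder = nn.Linear(C, n_mels)        # stand-in for the WaveNet speech decoder

    def forward(self, hubert_tokens, spk_emb, z, lang_id):
        # hubert_tokens: [B, T] discrete units; spk_emb: [B, C]; z: [B, T, z_dim]; lang_id: [B]
        h = self.content_enc(self.unit_emb(hubert_tokens))
        h = h + spk_emb.unsqueeze(1) + self.z_proj(z) + self.lang_emb(lang_id).unsqueeze(1)
        return self.decoder(h)                     # [B, T, n_mels] converted mel-spectrogram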
The loss terms of the voice conversion model consist of 1) the reconstruction loss of the mel-spectrogram, LMAE: the mean absolute error between the generated and ground-truth mel-spectrograms; 2) the KL-divergence between the prior and posterior distributions, LKL = log qϕ(z|x) − log pθ̄(z), where z ∼ qϕ(z|x); and 3) the adversarial training loss introduced by the multi-length discriminator, Ladv. The final weighted total loss is Ltotal = λ1 LMAE + λ2 LKL + λ3 Ladv. In our experiments, we set λ1 = λ2 = λ3 = 1.0. The detailed structure of each module is introduced in the following subsubsections.
B.1.1 MULTILINGUAL HUBERT
The multilingual HuBERT (Lee et al., 2021) is trained on the English (En), Spanish (Es), and French (Fr) 100k subsets of the VoxPopuli dataset (Wang et al., 2021). VoxPopuli contains unlabeled speech data for 23 languages, and Lee et al. (2021) use 4.5k hours of unlabeled speech for each of En, Es, and Fr, totaling 13.5k hours. We extract the HuBERT features from the 11th layer of the third-iteration HuBERT model and discretize them using the pre-trained K-means model to obtain the discrete representations Hling.
B.1.2 POSTERIOR ENCODER
The structure of the posterior encoder is similar to that of the encoder in the variational generator of PortaSpeech (Ren et al., 2021b): a 1D convolution with stride 4, followed by a ReLU activation (Glorot et al., 2011) and layer normalization (Ba et al., 2016), and a non-causal WaveNet (Van Den Oord et al., 2016), as shown in Figure 3a. The number of encoder layers, the WaveNet channel size and the kernel size are 8, 192 and 5. The outputs of the posterior encoder are the parameters (µq and σq) of the posterior distribution N(µq, σq), and the latent z is sampled from N(µq, σq); the latent size is set to 32.
B.1.3 VOLUME-PRESERVING (VP) NORMALIZING FLOW
Following Kim et al. (2021b) and Ren et al. (2021b), we use a volume-preserving normalizing flow as the prior distribution generator, since it does not need to consider the Jacobian term when calculating the data log-likelihood and is powerful enough for modeling the prior, as shown in Figure 3c. The normalizing flow transforms simple distributions (e.g., a Gaussian) into complex distributions through a series of K invertible mappings, implemented as a stack of WaveNet (van den Oord et al., 2016) residual blocks with dilation 1. We then take the complex distribution as the prior of the speech decoder. With the normalizing-flow-based enhanced prior, the optimization objective of the mel-spectrogram generator becomes

log p(x) ≥ E_{qϕ(z|x)}[log pθ(x|z)] − KL(qϕ(z|x) ‖ pθ̄(z)) ≡ L(ϕ, θ, θ̄),   (1)

where ϕ, θ and θ̄ denote the model parameters of the posterior encoder, the speech decoder and the normalizing-flow-based enhanced prior, respectively. Due to the introduction of normalizing flows, the KL term in Equation 1 no longer has a simple closed-form solution, so we estimate the expectation w.r.t. qϕ(z|x) via the Monte-Carlo method by rewriting the KL term:

KL(qϕ(z|x) ‖ pθ̄(z)) = E_{qϕ(z|x)}[log qϕ(z|x) − log pθ̄(z)].   (2)
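A minimal PyTorch sketch of this Monte-Carlo estimate; flow_prior is an assumed object whose log_prob evaluates the flow-shaped prior density.

import torch

def mc_kl(posterior, flow_prior, n_samples=1):
    """Monte-Carlo estimate of Eq. (2): E_q[log q(z|x) - log p(z)].
    posterior: a torch.distributions object supporting rsample/log_prob."""
    z = posterior.rsample((n_samples,))  # reparameterized samples keep gradients flowing
    kl = posterior.log_prob(z) - flow_prior.log_prob(z)
    return kl.mean()

# Example with a diagonal Gaussian posterior:
#   q = torch.distributions.Normal(mu_q, sigma_q)
#   loss_kl = mc_kl(q, flow_prior)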
In training, the posterior distribution N(µq, σq) is produced by the posterior encoder; z is then sampled from the posterior distribution using the reparameterization trick and passed to the speech decoder. Meanwhile, the posterior distribution is fed into the VP normalizing flow, which converts it towards a standard normal distribution (the middle dotted line). In inference, the VP normalizing flow converts a sample from the standard normal distribution into a sample z, which we pass to the speech decoder. Our VP normalizing flow consists of 4 flow steps, each with 4 WaveNet layers whose channel size and kernel size are set to 64 and 3.
B.1.4 CONTENT ENCODER
The content encoder is a stack of feed-forward Transformer (Vaswani et al., 2017) layers with relative position encoding (Shaw et al., 2018), as shown in Figure 3d. The information bottleneck is located at the first layer of the content encoder (the embedding layer of HuBERT tokens). We set the channel size of each embedding to 16 by default.
B.1.5 SPEECH DECODER
The speech decoder, as shown in Figure 3b, consists of a non-causal WaveNet and a 1D transposed convolution with stride 4, also followed by ReLU and layer normalization. The number of decoder layers, the WaveNet channel size and the kernel size are set to 4, 192 and 5.
B.1.6 MULTI-LENGTH DISCRIMINATOR
Inspired by Ye et al. (2022), our multi-length discriminator is an ensemble of multiple CNN-based discriminators, which evaluate the mel-spectrogram based on random windows of different lengths, as shown in Figure 4. In our experiments, we train three CNN-based discriminators that observe random mel-spectrogram clips of 32, 64, and 128 frames. The structure of the CNN-based discriminator is shown in Figure 4b: it consists of N+1 2D convolutional layers, each followed by a Leaky ReLU activation and a dropout layer; the latter N convolutional layers are additionally followed by an instance normalization (Ulyanov et al., 2016) layer. After the convolutional layers, a linear layer projects the hidden states of the mel-spectrogram slice to a scalar predicting whether the input slice is real or fake. In our experiments, we set N = 2 and the channel size of these discriminators to 32.
B.2 TTS MODEL
Our TTS model architecture follows PortaSpeech (Ren et al., 2021b), except that 1) we replace the post-net in PortaSpeech with multi-length adversarial training, the same as in our voice conversion model (Appendix B.1); 2) we add a language embedding layer to the speech decoder, indicating the language of the speech to be generated; and 3) for simplicity, we use a simple character encoder as in FastSpeech (Ren et al., 2019a; 2021a) instead of the mixed linguistic encoders. The detailed architecture and hyper-parameters of the posterior encoder, normalizing flow, speech decoder and multi-length discriminators are the same as those in the voice conversion model in Appendix B.1. The structures of the text encoder and duration predictor are the same as those in Ren et al. (2021b), with channel size 192, kernel size 5 and 4 layers.
B.3 ASR MODEL
We adopt the architecture of Tacotron 2 (Shen et al., 2018) for our ASR model, since its location-sensitive attention can generate close-to-diagonal, monotonic alignments between speech and text. We replace the character/phoneme embedding of the encoder in Tacotron 2 with a CNN-based speech pre-net with stride 4 to enable speech encoding. On the decoder side, we use a character embedding as the input layer and a softmax as the output layer to adapt the decoder to character sequences. We set the hidden size of the encoder RNN to 512 and the number of convolution stacks to 5; the hidden sizes of the decoder RNN and the attention RNN are both set to 1024; the decoder attention channel size is set to 512. We also train a Transformer-based (Vaswani et al., 2017) language model and jointly decode the recognition result X by beam search over the ASR model pASR(X|S) and the language model pLM(X), maximizing log pASR(X|S) + λLM log pLM(X), where the language-model weight λLM is set to 0.2 in this work. The beam size of beam-search decoding is set to 5.
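For illustration, a tiny sketch of this joint scoring, written as hypothesis rescoring for brevity (in practice the fusion is applied while searching); the callables are placeholders, not the paper's decoding code.

def fused_score(log_p_asr, log_p_lm, lm_weight=0.2):
    """Joint decoding objective: log p_ASR(X|S) + lambda_LM * log p_LM(X)."""
    return log_p_asr + lm_weight * log_p_lm

def pick_hypothesis(hypotheses, lm_log_prob, lm_weight=0.2):
    """Re-rank beam hypotheses, given as (text, asr_log_prob) tuples, using an
    external language model's log-probability function."""
    return max(hypotheses,
               key=lambda h: fused_score(h[1], lm_log_prob(h[0]), lm_weight))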
C MORE EXPERIMENTAL DETAILS
In this section, we describe more experimental details for reproducibility.
C.1 DATASETS
We select subsets of English, French and Indonesian from the CommonVoice (Ardila et al., 2019) dataset: about 200k utterances each for English and French, and all available data (about 20k utterances) for Indonesian. We randomly select 100 utterances each in English and Indonesian for validation and another 100 utterances in Indonesian for testing; the English test set is randomly selected from LJSpeech (Ito, 2017). We split the target language data into two halves according to utterance ID: we take unpaired speech data from utterances with odd IDs and text data from those with even IDs, so as to guarantee that the speech and text data are disjoint (a minimal sketch of this split follows the baseline list below).
C.2 BASELINES
For fair comparison, we make the following modifications to all baseline methods:
• We adopt training data consisting of paired French (as the auxiliary rich-resource language) data and unpaired English/Indonesian (as the target low-resource language) data. Specifically, for Ren et al. (2019b) and Xu et al. (2020), we warm up the ASR and TTS models in these methods using rich-resource-language data before back-translation; for Liu et al. (2022b) and Ni et al. (2022), we initialize the unsupervised ASR model using a modified CTC loss (Graves et al., 2006) with rich-resource-language data before unsupervised training, and also initialize the TTS model with these data.
• We directly use character sequences as TTS input, without any lexicon or G2P tools.
• We extract speaker embeddings using the same pre-trained speaker encoder (Resemblyzer: https://github.com/resemble-ai/Resemblyzer) and add them to the TTS model to indicate the speaker information (timbre), since our dataset is multi-speaker.
• We replace all baseline TTS models with the same NAR architecture as our method, since AR architectures are very sensitive to noisy audio and cannot produce any meaningful results in our settings.
• We use the same voice conversion model (described in Section 3.2) to convert the ground-truth audios and all baselines' outputs to the same speaker's timbre from Sref.
• We use the same vocoder, HiFi-GAN (Kong et al., 2020), to convert mel-spectrograms to waveforms.
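As a concrete illustration of the C.1 split, assuming utterances carry integer IDs:

def disjoint_split(utterances):
    """Split by utterance ID parity: odd IDs contribute only speech, even IDs
    only text, so the unpaired halves never overlap.
    utterances: dict mapping integer id -> (audio, text)."""
    speech_only = [audio for uid, (audio, _) in utterances.items() if uid % 2 == 1]
    text_only = [text for uid, (_, text) in utterances.items() if uid % 2 == 0]
    return speech_only, text_only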
D MORE EXPERIMENTAL RESULTS
D.1 PERFORMANCE CHANGES IN BACK-TRANSLATION TRAINING
To verify that the accuracy of ASR and the performance of TTS are boosted iteratively as the training directions shift, we plot the accuracy of the TTS and ASR results in Figure 5. From the figure, we can see that as training proceeds (the training directions shift every Nsteps = 20000 steps), the error rates of the ASR and TTS results gradually drop until convergence.
D.2 USING OTHER RICH-RESOURCE AUXILIARY LANGUAGES
We explore how different rich-resource auxiliary languages affect the target language's performance. In addition to French, we use other languages, including German, Dutch, Spanish, and Portuguese, as the rich-resource auxiliary language to train our unsupervised TTS model. For fair comparison, we use the same training data size for all rich-resource languages (an 80k-pair speech-text subset per language from CommonVoice). We choose English as the target low-resource language. The results are shown in Table 5. It can be seen that using German as the rich-resource language achieves the best performance. A possible reason is that the pronunciation distance between English and German is smaller than for the other languages, as both belong to the West Germanic languages. Though Dutch also belongs to the West Germanic languages, it does not perform very well, which might be due to its poor data quality. We then combine the data from all these languages and find that this achieves very strong results, outperforming all settings that use only one auxiliary language. Besides, we observe that our method performs very well not only in English, French and Spanish, which were used to pre-train the multilingual HuBERT, but also in other unseen languages, which verifies the generalization of our voice conversion model and the whole unsupervised TTS pipeline.
D.3 ANALYSES ON THE FOCUS RATE F
To verify the effectiveness of the focus rate F proposed in Section 3.4, we calculate F and CER on the English test set during model training and plot the curves in Figure 6 to explore the correlation between them. From the figure, we can see that the focus rate F is negatively correlated with the recognition error rate (higher F indicates lower CER), which means it is reasonable to use F as the indicator for filtering ASR transcriptions.
D.4 USING A DIFFERENT UNPAIRED TEXT DATASET
We train our model using the audio data from the CommonVoice English subset, the same as in the original version of the paper, and the text data from the WMT16 (Bojar et al., 2016) English training set, so that the domains of the unpaired audio and text are very different. We keep the test set the same as in the original paper (the LJSpeech subset). The results are shown in Table 6. From the table, it can be seen that the performance drops slightly (CER and WER increase by about 0.036 and 0.06) due to the domain gap between the unpaired text and speech data.
E POTENTIAL NEGATIVE SOCIETAL IMPACTS
Unsupervised TTS lowers the requirements for deploying speech synthesis services (it needs only unpaired speech and text data) while synthesizing high-quality voices, which may cause unemployment for people in related occupations such as broadcasters and radio hosts. In addition, there is potential for harm from non-consensual voice cloning or the generation of fake media, and the voices of the speakers in the recordings might be used more than they expect.
1. What is the focus and contribution of the paper regarding unsupervised speech synthesis?
2. What are the strengths of the proposed approach, particularly in terms of its explanatory capacity and performance?
3. What are the weaknesses of the paper, especially regarding the ablation study?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any typos or minor issues in the review that should be addressed?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper improves upon unsupervised speech synthesis. The motivation of such work is to serve low-resource languages where the supervised approach might not be feasible due to a lack of annotated data. The paper proposes and implements several modifications (a bag of tricks) to existing methods to improve performance. The tricks are: variational normalization to tackle the noisy information; curriculum learning; and non-autoregressive TTS. The main contribution of the paper is that they present a high-quality method for unsupervised TTS that allows training without a lexicon.
Strengths And Weaknesses
Strengths:
- Despite the complexity of the methods for TTS, the paper is able to clearly explain prior methods and the proposed one.
- The motivation of the work is clear and introduced well.
- Judging by the samples, the model performs very well.
Weaknesses:
- I found the ablation study unconvincing: the MOS error bars intersect.
Clarity, Quality, Novelty And Reproducibility
The paper is very clearly written and easy to follow. Nevertheless, due to the large number of moving parts, when reading for the first time I was a bit confused about all the methods applied; not sure how to fix this. Also, some images are hard to read when printed in black and white. The paper seems to be of high quality, and the methods are motivated well. The approach is sound. I am slightly unconvinced by the ablation studies: Table 3 shows that many of the error bars on MOS intersect, making rows 1, 5, 6, 7 identical, as well as rows 2, 3, 4. What are the error bars for the CER and WER? The same is true for Table 4, where it seems that all the error bars intersect. The paper is novel enough: while it builds upon previous work, there are important changes and a clear end goal. From listening to the samples, I am convinced that the proposed method sounds better than the baselines. It should be possible to reproduce the paper, but I still encourage the authors to release the source code.
Typos:
- "Indonesian and French share some Latin(?) alphabets"
- p.4: "variable information"
(2022a) claimed “when switching to an entirely letter-based system without a lexicon, the unit error rate increases substantially”. the unpaired speech samples used in existing unsupervised methods are clean and ready for general TTS model training, such as CSS10 (Park & Mulc, 2019), LibriTTS (Zen et al., 2019) and LJSpeech (Ito, 2017). However, in real low-resource language scenarios, there is no guarantee that enough clean data can be obtained. In this work, we aim to train an unsupervised TTS model in a low-resource language (the target language) with unpaired data, rather than any paired speech and text data, in that language, and also paired data in other rich-resource languages for initialization. Such training data are easily accessible. For example, the unpaired speech and text in the target language can be crawled from video or news websites in the countries using that language; the paired data in rich-resource languages can be obtained from some ASR and TTS datasets. Besides, these crawled speech data are from different speakers. Under such a task setting, we need to address the following challenges in order to achieve our goal. 1) Low-quality multi-speaker data. The speech data to be used for unsupervised training in our problem are often multi-speaker and low-quality, with much variable and noisy information like timbre, background noise, etc., hindering model convergence and meaningful speech-text alignment. This significantly increases the difficulty of the TTS model training. 2) Back-translation stability. Previous semi-supervised TTS methods (Xu et al., 2020; Ren et al., 2019b) leverage the unpaired data with back-translation, but only achieving limited performance and sometimes difficult to converge, especially in unsupervised settings. 3) Robustness. Previous semi-supervised/unsupervised TTS methods (Xu et al., 2020; Ren et al., 2019b; Ni et al., 2022; Liu et al., 2022b) use an auto-regressive architecture (Li et al., 2018; Shen et al., 2018), which suffers from word missing and repeating issues, especially when the supervision signal is very weak. 4) Lack of lexicon. For low-resource languages, it is usually difficult to obtain existing lexicons or G2P tools. We propose several practical tricks to address these issues and enable unsupervised TTS without any paired data in the target language and bridge the performance gap between the unsupervised and supervised TTS. Specifically, 1) we normalize the variable and noisy information in the low-quality training data. We propose a cross-lingual voice conversion model with flow-based enhanced prior, which converts the timbre of all sentences in different languages to one same speaker’s voice while preserving the pronunciation information. 2) We explore some tricks including curriculum learning, length augmentation and auxiliary supervised loss to improve the effectiveness of back-translation. 3) To strengthen model robustness, we employ the non-autoregressive (NAR) TTS model and use the alignment extracted from the ASR model2 in the back-translation process to guide the NAR TTS model training. By applying such a bag of tricks, we can successfully train an effective TTS model with noisy and multi-speaker data and without any lexicons. Through experiments, it has been verified that our method can achieve both high-quality and highintelligibility TTS, in terms of MOS and of word error rate (WER) and character error rate (CER) evaluated by external ASR, respectively. 
We compare our method to existing unsupervised TTS baselines (Ren et al., 2019b; Xu et al., 2020; Ni et al., 2022; Liu et al., 2022b) and find it significantly outperforms them in both audio quality and intelligibility under the same experimental settings. We conduct some analyses on the proposed tricks, which demonstrate the importance and necessity of 2The ASR model is the byproduct of back-translation, which does not need any extra paired data. these tricks to achieve state-of-the-art unsupervised TTS. The samples generated by our models can be found at https://unsupertts-tricks.github.io. 2 RELATED WORKS 2.1 SUPERVISED SPEECH SYNTHESIS In the past few years, with the development of deep learning, neural network-based TTS has thrived (Wang et al., 2017; Tachibana et al., 2018; Li et al., 2019; Ren et al., 2019a; 2021a; Łańcucki, 2020), where the text-to-speech mapping is modeled by deep neural networks using encoder-decoder architectures. Early methods by Wang et al. (2017) and Ping et al. (2017) generate the melspectrogram autoregressively. However, they suffer from slow inference and low robustness issues, e.g. word skipping and repeating. To tackle these issues, later works explore non-autoregressive (NAR) speech generation. FastSpeech (Ren et al., 2019a) is the first non-autoregressive TTS architecture, which adopts the duration predictor and length regulator to bridge the length gap between the speech and the text sequence. After that, many methods are proposed, such as FastSpeech 2 (Ren et al., 2021a), Glow-TTS (Kim et al., 2020) and EATS (Donahue et al., 2021), achieving not only better audio quality but also fast inference and good robustness. Recently, some NAR models leveraging variational auto-encoder (VAE) to model the variation information in the latent space are developed, like VITS (Kim et al., 2021b) and PortaSpeech (Ren et al., 2021b), and they quickly become popular. In this work, we also employ non-autoregressive architecture and VAE structure to achieve robustness against low-quality data. 2.2 LOW-RESOURCE SPEECH SYNTHESIS Supervised speech synthesis requires high-quality paired speech and text data for training, which are costly to attain, especially for low-resource languages. To broaden the application scope of TTS systems, several low-resource TTS models are developed, which only need a few or even not any high-quality paired data. Instead, they use unpaired text and audio data to train TTS models in a semi-supervised or unsupervised way, which are much straightforward and cheap to obtain. Semi-supervised TTS. Ren et al. (2019b) adopt back-translation and pre-training to leverage unpaired data, generating pseudo text/speech samples with ASR/TTS models and training them with the augmented data iteratively. However, as a proof of concept, Ren et al. (2019b) only verify the feasibility of semi-supervised TTS in a single-speaker dataset. Later, LRSpeech (Xu et al., 2020) supports multi-speaker and noisy datasets and is closer to real application. However, these semi-supervised methods still require a few pairs of high-quality speech and text data, which are expensive for low-resource languages since they often need to be recorded in professional studios. Unsupervised speech synthesis. Unsupervised speech synthesis does not use any paired training data from the target speaker and language, which has attracted growing attention recently. As the earliest unsupervised TTS works, Liu et al. (2022b) and Ni et al. 
(2022) both use an unsupervised ASR model to transcribe the TTS speech data to pseudo text and train with the augmented data to build an unsupervised TTS system. However, they heavily rely on the unsupervised ASR technique, whose training procedure is very unstable and heavily relies on lexicons. Therefore, these methods are difficult to apply to other low-resource languages. Besides, when switching to a multi-speaker setup, the gap between supervised and these unsupervised TTS methods becomes larger than singlespeaker setup (Liu et al., 2022b). A recent ArXiv paper (Lian et al., 2022) trains a non-parallel voice conversion model using unpaired speech data as the acoustic model and a specific module to map the text sequence to the speech discrete representation sequence, but this module has to be trained with an external dataset with the same language as in the unpaired dataset. Thus this method is hardly applicable to real low-resource language scenarios due to the difficulty of collecting such a large paired dataset in this language. 3 PROPOSED BAG OF TRICKS Suppose we have an unpaired speech dataset Slow and a text dataset Tlow in the target low-resource language Llow, together with a paired speech-text dataset Srich and Trich in another rich-resource language Lrich as auxiliary supervised training data. We assume that Llow and Lrich share some common characters, such as Indonesian and French share some Latin alphabets. Slow is a multispeaker speech dataset whose audio quality is extremely low, as it is difficult to obtain enough single-speaker clean data for the low-resource language. As for the auxiliary training data, since there are many public speech audios available in rich-resource languages, we do not impose any restrictions on the quality of these speech data. Our method aims to train the TTS model in the language Llow using the above datasets. Besides, we need another clean speech dataset Sref to provide the target timbre in our voice conversion model and it can be part of Srich or Slow. In this section, we first describe the overall training pipeline. Then we introduce model designs and some tricks used in each stage of the pipeline. 3.1 OVERALL TRAINING PIPELINE As shown in Figure 1, the training pipeline of our method consists of 3 stages: voice conversion, supervised warm-up training and unsupervised backtranslation training. We put the detailed pseudo-code algorithm of our training pipeline in Appendix A and describe each stage in the following paragraphs. Stage 1: Voice conversion. The low-resource speech dataset Slow contains many speakers and can be very noisy. We consider the variable and noisy information, e.g., background noise, speaker timbre, accent and some specific prosody, as the text-independent information in speech. Although some variable information is essential for certain TTS tasks like emotional, expressive and personalized TTS, it would be an obstacle for unsupervised TTS. The core purpose of unsupervised TTS is to solve the information matching problem between two modalities, i.e., speech and text, which are actually aligned by the pronunciation (or called content). The variable information in speech may interfere with the crossmodal pronunciation information matching in the unsupervised training stage and make the model struggle to find aligned clues in the text for this variable information. 
Therefore, intuitively, if we can reduce the information gap between speech and text, our TTS and ASR models can achieve cross-modal pronunciation information matching faster, and the unsupervised training process can then be stabilized. To this end, we apply cross-lingual voice conversion as the first stage. We train the voice conversion model on the datasets S_low and S_rich and a clean dataset S_ref, which provides the reference speaker timbre and can be in any language. We can then normalize the variable and noisy speech information of the audios in S_low and S_rich using the voice conversion model, and we denote the generated datasets as S'_low and S'_rich.

Stage 2: Supervised warm-up training. It is still very difficult to train unsupervised TTS directly from scratch, even though we have normalized the variable information in the speech dataset S_low. To warm up the models for the subsequent unsupervised training, we train a sequence-to-sequence-based ASR model, which is required in the following back-translation stage, and a non-autoregressive TTS model using the auxiliary paired dataset S_rich in a rich-resource language. This stage provides a better initialization for the model, since there exist certain commonalities between written and spoken formats in different languages. (Footnote 3: for a low-resource language L_low, it is usually not difficult to find a rich-resource language L_rich that is close to L_low.)

Stage 3: Unsupervised back-translation training. Back-translation, originating from neural machine translation, is one of the most effective ways to leverage monolingual data for translation. In unsupervised TTS, back-translation (Sennrich et al., 2016; He et al., 2016; Ren et al., 2019b) leverages the dual nature (He et al., 2016; Qin, 2020) of the TTS and ASR tasks and develops the capability of transforming text to speech (TTS) and speech to text (ASR). We transform a speech sequence s into a text sequence t_pseudo using the ASR model, and then train the TTS model on the transformed pair (t_pseudo, s). Similarly, we also train the ASR model on the transformed pair (s_pseudo, t) generated by the TTS model. Back-translation thus has two training directions, and as the training directions shift, the accuracy of ASR and the performance of TTS are boosted iteratively. We show the performance improvements as the back-translation training progresses in Appendix D.1.

3.2 VARIATIONAL VOICE CONVERSION MODEL WITH ENHANCED PRIOR

The voice conversion (VC) model in Stage 1 aims to normalize the variable and noisy information in low-quality audios. It is based on self-supervised learning (SSL) audio representations (Polyak et al., 2021; van Niekerk et al., 2022), which have proved very effective in disentangling content and timbre information. As shown in Figure 2a, the overall architecture of our VC model resembles an autoencoder. In training, the mel-spectrograms M1 and M2, which are the same here, are fed into several information extraction modules to generate disentangled representations. Specifically, 1) the pre-trained speaker encoder extracts the sentence-level speaker embedding; 2) the pre-trained HuBERT (Hsu et al., 2021) extracts the frame-level SSL discrete representations containing content (pronunciation) information; and 3) the posterior encoder extracts the residual information.
After the information decomposition, the speech decoder takes all the representations as input and reconstructs the mel-spectrogram using a mean absolute error (MAE) loss and a multi-length adversarial loss (L_adv), following Ye et al. (2022) and Chen et al. (2020). In inference, we replace M1 with the reference speech, which provides the target speaker timbre. In this way, the generated speech preserves the pronunciation information in M2 while taking the timbre from M1. However, besides the common merits of general VC, namely preserving content and converting timbre, our model should also have the following properties to ensure its performance in our pipeline. 1) It should be cross-lingual: it must be able to normalize S_low and S_rich, which are in different languages. 2) It should generate high-quality results: since the normalized speech is fed to the next two stages as the training target, its quality bounds that of the whole pipeline. 3) It should be robust to noisy and low-quality audio. Building on previous SSL-based VC methods, we enable these properties with several improvements:

Multilingual HuBERT. To make the model cross-lingual, we employ a multilingual HuBERT (Lee et al., 2021; Popuri et al., 2022), pre-trained on speech in multiple languages, to extract the SSL discrete representations as the content information. We find it also generalizes well to other unseen languages (see Appendix D.2). HuBERT does not need any paired data or speaker information, which is consistent with our task setting. Besides, we add a language ID input to the speech decoder to indicate the language we need to generate, which can accurately model the pronunciation differences among languages given the same discrete representation and make up for the limited capacity of multilingual representations.

Variational encoder with flow-based enhanced prior. To improve model robustness and audio quality, inspired by previous successful work in TTS (Ren et al., 2021b; Kim et al., 2021a), we introduce a variational encoder with a flow-based enhanced prior. In training, this encoder can "store" in the posterior distribution D_q the residual information, e.g., some irregular noises, time-varying timbre and prosody, that cannot be encoded by the other information extraction modules, and it uses a normalizing flow to shape the prior distribution D_p, which needs to be close to D_q in terms of KL divergence. With normalizing flows, the KL divergence no longer has a simple closed-form solution, so we estimate it with a Monte-Carlo method as in Ren et al. (2021b) and Kim et al. (2021a). The reason we need the normalizing flow is that a simple Gaussian prior places strong constraints on the posterior, pushing the posterior distribution towards the mean and limiting diversity, while the distribution shaped by normalizing flows is more flexible and provides the decoder with a stronger prior. Besides, it also gives the sampled random variables temporal dependency.

Information bottleneck. The HuBERT representation is a low-bitrate representation of speech content and does not contain much non-lexical information such as speaker identity and emotion. However, despite its discrete-space bottleneck and its training objective, we still cannot ensure that it fully disentangles the timbre information, and the remaining timbre information may degrade the voice conversion quality in the inference stage.
To further erase the speaker identity information, we need to choose an appropriate input dimension for the content encoder (i.e., the embedding layer for HuBERT tokens), which should be neither too large nor too small: a large dimension may leak fine-grained identity information through the content encoder, while a small one may lose pronunciation information. We put more details of our VC model in Appendix B.1.

3.3 TTS AND ASR MODELS

Previous unsupervised TTS works use an autoregressive (AR) TTS architecture such as Tacotron 2 (Shen et al., 2018) or TransformerTTS (Li et al., 2018), which automatically finds the speech-text alignment. However, such AR TTS architectures are not robust and are prone to word missing and repeating problems, as stated in Ren et al. (2019a). In this work, as shown in Figure 2b, we adopt a non-autoregressive (NAR) TTS architecture (Ren et al., 2019a; 2021a). We mainly follow PortaSpeech (Ren et al., 2021b), except that we replace the post-net in PortaSpeech with multi-length adversarial training (Ye et al., 2022; Chen et al., 2020) to simplify the training pipeline while keeping the naturalness of the generated mel-spectrogram. Instead of obtaining the ground-truth duration information from the Montreal Forced Aligner (MFA) (McAuliffe et al., 2017), as many non-autoregressive TTS models (Ren et al., 2021a;b; Ye et al., 2022) do, we extract the speech-text alignment from the attention matrix generated by the ASR model, which simplifies the training pipeline in our back-translation stage and removes the dependency on external tools. Specifically, inspired by Glow-TTS (Kim et al., 2020), we extract the speech-text alignment by finding the monotonic path of maximum probability over the attention matrix of our ASR model using Viterbi decoding. To enable the TTS model to generate speech in a different language, we add a language embedding to the decoder, and an extra language ID input is required to specify the language of the target speech.

Our ASR model is based on a sequence-to-sequence architecture with an LSTM-based encoder and decoder. To generate more monotonic alignments for TTS training, we employ location-sensitive attention (Shen et al., 2018). Unlike the TTS model, our ASR model is universal to all languages and does not need any language embedding as input, so it can generalize better to new languages in our scenario. We put more details and model configurations of the TTS and ASR models in Appendix B.2 and B.3.

3.4 TRICKS IN BACK-TRANSLATION

Back-translation is a critical step for unsupervised TTS training to leverage unpaired speech and text data. In this subsection, we describe some back-translation strategies that can significantly improve the effectiveness and efficiency of unsupervised TTS training.

Curriculum learning. After warming up the ASR model in Stage 2 using S_rich and T_rich, we can force the ASR model to transcribe the audio in L_low into text in L_rich by initializing the language embedding of L_low with that of L_rich. Since the results of ASR are taken as the input of TTS, we select good transcriptions, whose pronunciation is very similar to the ground truth, for TTS training and discard bad ones in each round of back-translation. Of course, we cannot directly calculate the error rate between a transcription and the ground-truth text, since we have no corresponding text for each audio.
Therefore, to filter good recognition results during iterative back-translation, we design a metric called the focus rate (F) to evaluate the confidence of ASR results, defined as

F = (1/N) ∑_{i=1}^{N} A_{i,P_i},

where N denotes the number of mel-spectrogram frames; A_{i,j} is the ASR attention weight at the position of the i-th mel-spectrogram frame and the j-th text token, satisfying ∑_j A_{i,j} = 1; and P_i is the text-token index corresponding to the i-th mel-spectrogram frame on the monotonic path of maximum probability decoded by the Viterbi algorithm (Forney, 1973). A higher F means greater probability lies on the decoded monotonic path, implying better speech-text monotonic alignment and higher confidence in the transcribed results. In each round of the pseudo-text generation process, we use F to select good ASR results whose F is greater than a fixed threshold F_thres (see the sketch below). Besides, we also store F for each pseudo text t_pseudo and replace t_pseudo with the new result in the next back-translation round only if F has increased.

Length augmentation. At the beginning of training in the low-resource language, short utterances (text and speech) are easier for the TTS and ASR models to fit, so the models are generally better at generating short utterances than long ones. Besides, our curriculum learning strategy approximates the quality of the generated text and filters out bad results, causing our model to keep more short utterances than long ones. Consequently, our model becomes biased towards short utterances and may perform very poorly on long sentences. To fix this issue, we introduce a length augmentation strategy. In particular, we randomly concatenate two utterances (t^1, t^2)/(s^1, s^2) and their generated results (s^1_pseudo, s^2_pseudo)/(t^1_pseudo, t^2_pseudo) with some probability p_cat, obtaining the generated pairs (t^cat, s^cat_pseudo) and (s^cat, t^cat_pseudo) for back-translation training. Length augmentation helps the TTS and ASR models generate long sentences better and become more robust to long text inputs in inference.

Auxiliary supervised losses. If we only employ the back-translation loss in Stage 3, the model may fail to find the correct speech-text alignment, leading to unsatisfactory results and unstable training, especially at the beginning of training. To solve this problem, apart from the back-translation loss in the target low-resource language, we also keep the supervised training losses in the auxiliary rich-resource language, the same as those in Stage 2. We call these "auxiliary supervised losses". Specifically, during training, we intersperse the rich-resource-language supervised training, for both TTS and ASR, into the back-translation steps with some probability p_aux.

4 EXPERIMENTS AND RESULTS

In this section, we conduct experiments to evaluate the effectiveness of our proposed method for unsupervised TTS. We first describe the experimental settings, then show the results of our method, and finally conduct some analyses.

4.1 EXPERIMENTAL SETUP

Datasets. We choose the speech and text data from the CommonVoice dataset (Ardila et al., 2019) for training, and English and Indonesian as the target low-resource languages. (Footnote 4: we choose English as one of the target languages because we can understand English and it is easy to evaluate, although English is not a low-resource language.) We use French as the rich-resource language unless otherwise stated; experimental results using other rich-resource languages are put in Appendix D.2. We split the target language data into two halves: we take unpaired speech data from the first half and text data from the second, so as to guarantee the speech and text data are disjoint.
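To make the curriculum-learning filter concrete, below is a minimal illustrative sketch (our own, not the authors' released code) of the focus rate F and the filtering step from Section 3.4; the attention matrix and the Viterbi path are assumed to come from the ASR model and the Viterbi decoding described in Section 3.3.

```python
import numpy as np

def focus_rate(attention, viterbi_path):
    """F = (1/N) * sum_i A[i, P_i].

    attention:    (N, T) ASR attention matrix; each row sums to 1 over text tokens.
    viterbi_path: length-N integer array; P_i is the text-token index aligned to
                  frame i on the maximum-probability monotonic path.
    """
    n_frames = attention.shape[0]
    return attention[np.arange(n_frames), viterbi_path].sum() / n_frames

def filter_pseudo_pairs(samples, f_thres=0.2):
    """Curriculum-learning filter: keep pseudo pairs whose focus rate exceeds F_thres."""
    return [(text, speech) for text, speech, attn, path in samples
            if focus_rate(attn, path) > f_thres]
```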
We use LJSpeech (Ito, 2017) as S_ref to provide the speaker timbre and suppress the background noise in the voice conversion model. For evaluation, we choose 100 audio/text pairs from LJSpeech for English and 100 audio/text pairs from CommonVoice (Indonesian subset) for Indonesian. (Footnote 5: we use LJSpeech because it has fewer errors in text/speech pairing, while the data in CommonVoice are very noisy and contain much wrongly labeled text.) For the speech data, we convert the raw waveform into mel-spectrograms with 80 ms frame size and 20 ms frame hop, following Hsu et al. (2021). More details are listed in Appendix C.1.

Training and evaluation. We train our VC, TTS, and ASR models on 1 NVIDIA A100 GPU with batch size 128. We use the Adam optimizer with β1 = 0.9, β2 = 0.98, ε = 10^-9 and learning rate 2e-4. Training takes nearly 3 days. The output mel-spectrograms are converted to waveform using a HiFi-GAN (Kong et al., 2020) pre-trained on LJSpeech (Ito, 2017). The back-translation hyper-parameters F_thres, N_steps, p_cat and p_aux are set to 0.2, 20k, 0.2 and 0.2, respectively. For evaluation, we mainly use MOS (mean opinion score) for audio quality, and WER (word error rate) and CER (character error rate) for the intelligibility of the voice (French & Steinberg, 1947), to verify whether we can generate a reasonable speech sequence. For the mean opinion score evaluation, we keep the text content consistent among different models so as to exclude other interference factors and only examine the audio quality. We randomly choose 20 sentences from the test set and each audio is listened to by at least 20 testers following Ren et al. (2019a; 2021a), all of whom are native English/Indonesian speakers. For WER and CER, we first transcribe the generated speech using open-sourced or commercial ASR and calculate these metrics against the ground-truth text in the test set. We use WeNet (Yao et al., 2021; Zhang et al., 2022) for English, for fair comparison with future works since commercial ASR could change in the future; but we choose the Azure ASR service (https://azure.microsoft.com/en-us/products/cognitive-services/speech-to-text/) for Indonesian, since we cannot find any open-sourced Indonesian ASR that is accurate enough. In analytical experiments, we also show the CER of our own ASR model, which likewise indicates the performance of our system, since ASR and TTS depend on each other and are boosted iteratively.

4.2 RESULTS AND ANALYSES

4.2.1 PERFORMANCE

We compare our method with previous works including Ren et al. (2019b), Xu et al. (2020), Liu et al. (2022b) and Ni et al. (2022). For fair comparison, we make some modifications to all baseline methods, including unifying the training dataset, TTS acoustic model and vocoder (the detailed modifications to each baseline are put in Appendix C.2). The results are shown in Table 2. We also evaluate the outputs of a supervised TTS model trained with paired target-language data for reference; its results can be regarded as an upper bound. From the table, it can be seen that our method achieves the best performance in both speech quality (MOS) and intelligibility (CER and WER) in English and Indonesian. Surprisingly, our method can even approach the performance of the supervised model in Indonesian; a possible reason is that Indonesian is easier to pronounce than English. These observations demonstrate the effectiveness of our proposed tricks for unsupervised TTS.
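For reference, the WER and CER used above are the standard edit-distance-based error rates; a minimal sketch (our own illustration, with hypothetical function names):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences, via dynamic programming."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)]

def wer(ref_text, hyp_text):
    """Word error rate: word-level edit distance normalized by reference length."""
    ref, hyp = ref_text.split(), hyp_text.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

def cer(ref_text, hyp_text):
    """Character error rate: the same computation at the character level."""
    return edit_distance(list(ref_text), list(hyp_text)) / max(len(ref_text), 1)
```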
4.2.2 ABLATION STUDY

To analyze the effectiveness of each trick and component, we conduct ablation studies on English. In addition to generated speech quality (MOS) and intelligibility (CER and WER), we also analyze the character error rate of our ASR model (CER(ASR)). The results are shown in Table 3. Comparing the other rows with Row 1, which shows our model with all tricks, we make several observations.
1) From Row 2, our model achieves better performance after normalizing the speech variance, including timbre and noise, in our dataset.
2) From Row 3, the NAR TTS architecture improves speech quality and intelligibility by a large margin, as NAR TTS is more robust to noisy speech and reduces bad cases in the generated speech.
3) From Row 4, back-translation is essential to unsupervised TTS, which is consistent with the findings of previous works (Xu et al., 2020; Ren et al., 2019b).
4) From Row 5, curriculum learning improves training effectiveness, since it filters out bad pseudo transcripts and improves the quality of the training set for back-translation.
5) From Row 6, our length augmentation strategy improves robustness to long text inputs in inference.
6) From Row 7, the auxiliary supervised training loss improves the performance of both ASR and TTS by stabilizing the training.

4.2.3 ANALYSES ON VOICE CONVERSION MODEL

Having verified the effectiveness of normalization via the voice conversion model in Section 4.2.2, we conduct further analyses of our proposed voice conversion model, including the effects of different information bottleneck channel sizes and of the flow-based enhanced prior. The results are shown in Table 4. It can be observed that an appropriate number of bottleneck channels is crucial for the performance of the voice conversion model, with a large bottleneck resulting in timbre information leakage and a small bottleneck leading to pronunciation information loss. Besides, our flow-based enhanced prior improves the quality of the converted speech, since it makes fewer assumptions about the prior distribution, as mentioned in Section 3.2.

5 CONCLUSION

In this work, we proposed an unsupervised method for TTS that leverages low-quality and noisy unpaired speech and text data in the target language and paired data in other rich-resource languages. Our method comprises several practical tricks to realize unsupervised text-to-speech, including normalizing variable and noisy information in speech data, curriculum learning, length augmentation, and auxiliary supervised training. We also found that the non-autoregressive TTS architecture can significantly relieve robustness issues in unsupervised settings. We conducted experiments on the CommonVoice dataset, taking English and Indonesian as the target languages, and found that our method achieves high audio quality in terms of MOS and high intelligibility in terms of WER and CER, demonstrating remarkable effectiveness. Further analyses verify the importance of each trick of our method.

Appendices

A TRAINING ALGORITHM

The detailed unsupervised training algorithm is shown in Algorithm 1.
Algorithm 1 Unsupervised TTS Training
1: Input: paired dataset in the rich-resource language, S_rich and T_rich; unpaired speech and text data in the low-resource language, S_low and T_low; single-speaker speech dataset S_ref containing the reference speaker for voice conversion; pre-trained multilingual HuBERT model M_h; pre-trained speaker encoder M_spk.
2: Initialize: multilingual TTS model M_TTS and ASR model M_ASR; current unsupervised training step t = 0; total unsupervised training steps T_total; number of steps for each TTS or ASR stage N_steps.
3: Train our proposed voice conversion model with S_rich, S_low and S_ref, using M_h and M_spk to extract HuBERT and speaker representations.
4: Convert the timbre of all speech samples in S_rich and S_low to that of the speech in S_ref and obtain the converted S'_rich and S'_low. {Sec. 3.2}
5: Train M_ASR and M_TTS using S'_rich and T_rich.
6: repeat
7:   Convert all samples s in S'_low to pseudo text t_pseudo.
8:   Select pseudo training pairs (t_pseudo, s) satisfying F > F_thres and obtain (T_pseudo, S'). {Curriculum learning in Sec. 3.4}
9:   for N in 0 to N_steps do
10:    if Random() ≤ p_aux then
11:      Sample D ← (t_rich, s) from (T_rich, S'_rich). {Auxiliary loss in Sec. 3.4}
12:    else
13:      Sample D ← (t_pseudo, s) from (T_pseudo, S').
14:      if Random() ≤ p_cat then
15:        Sample D' ← (t^2_pseudo, s^2) from (T_pseudo, S').
16:        D ← (Concat(t_pseudo, t^2_pseudo), Concat(s, s^2)) {Length augmentation in Sec. 3.4}
17:      end if
18:    end if
19:    Train M_TTS using D.
20:  end for
21:  for N in 0 to N_steps do
22:    Convert all samples t in T_low to pseudo speech s_pseudo and obtain (S_pseudo, T_low).
23:    if Random() ≤ p_aux then
24:      Sample D ← (s_rich, t) from (S'_rich, T_rich). {Auxiliary loss in Sec. 3.4}
25:    else
26:      Sample D ← (s_pseudo, t) from (S_pseudo, T_low).
27:      if Random() ≤ p_cat then
28:        Sample D' ← (s^2_pseudo, t^2) from (S_pseudo, T_low).
29:        D ← (Concat(s_pseudo, s^2_pseudo), Concat(t, t^2)) {Length augmentation in Sec. 3.4}
30:      end if
31:    end if
32:    Train M_ASR using D.
33:  end for
34:  t ← t + 1
35: until t > T_total

B MODEL DETAILS AND CONFIGURATIONS

In this section, we give more details of the models, including the voice conversion (VC), text-to-speech (TTS), and automatic speech recognition (ASR) models, along with the hyper-parameters used in our experiments.

B.1 VC MODEL

Our proposed VC model takes two mel-spectrograms as inputs, a timbre reference M1 and a content provider M2, and outputs the converted mel-spectrogram M3. Firstly, M1 is fed into a pre-trained speaker encoder (https://github.com/resemble-ai/Resemblyzer) to extract the speaker embedding H_spk. Secondly, M2 is fed into a pre-trained multilingual HuBERT (https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/textless_s2st_real_data.md), which is pre-trained on three languages, to extract the frame-level discrete HuBERT representation H_ling. Thirdly, M2 is fed into the posterior encoder, which generates a multivariate Gaussian distribution as the posterior in our variational VC model. Instead of directly employing a Gaussian distribution, we introduce a small volume-preserving normalizing flow to model the prior distribution. A latent z is sampled from the posterior distribution (in training) or the prior distribution (in inference). Finally, we add H_spk, H_ling, z and the language embedding of M2 together (all with the same channel size C = 192) and feed the resulting hidden states into the speech decoder to generate the target speech. Besides, we introduce a multi-length discriminator to distinguish between the output generated by the model and the ground-truth mel-spectrogram.
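As a rough sketch of how these representations are combined (our own illustration; the projections of the speaker embedding and latent z to the common channel size C = 192, and the speaker-embedding dimension, are assumptions the paper does not spell out):

```python
import torch
import torch.nn as nn

C = 192  # common channel size from Appendix B.1

class VCCombiner(nn.Module):
    """Sums the disentangled representations before the speech decoder."""

    def __init__(self, n_langs, spk_dim=256, z_dim=32):
        super().__init__()
        self.spk_proj = nn.Linear(spk_dim, C)   # sentence-level speaker embedding -> C
        self.z_proj = nn.Linear(z_dim, C)       # latent z from posterior/prior -> C
        self.lang_emb = nn.Embedding(n_langs, C)

    def forward(self, h_ling, h_spk, z, lang_id):
        # h_ling: (B, T, C) encoded HuBERT content tokens
        # h_spk:  (B, spk_dim) speaker embedding; z: (B, T, z_dim); lang_id: (B,)
        return (h_ling
                + self.spk_proj(h_spk).unsqueeze(1)      # broadcast over frames
                + self.z_proj(z)
                + self.lang_emb(lang_id).unsqueeze(1))   # fed to the speech decoder
```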
The loss terms of the voice conversion model consist of 1) the reconstruction loss of the mel-spectrogram, L_MAE: the mean absolute error between the generated and ground-truth mel-spectrograms; 2) the KL divergence between the posterior and prior distributions, L_KL = log q_ϕ(z|x) − log p_θ̄(z), where z ∼ q_ϕ(z|x); and 3) the adversarial training loss introduced by the multi-length discriminator, L_adv. The final weighted total loss is L_total = λ_1 L_MAE + λ_2 L_KL + λ_3 L_adv. In our experiments, we set λ_1 = λ_2 = λ_3 = 1.0. The detailed structure of each module is introduced in the following subsubsections.

B.1.1 MULTILINGUAL HUBERT

The multilingual HuBERT (Lee et al., 2021) is trained on the English (En), Spanish (Es), and French (Fr) 100k subsets of the VoxPopuli dataset (Wang et al., 2021). VoxPopuli contains unlabeled speech data for 23 languages; Lee et al. (2021) use 4.5k hours of unlabeled speech for each of En, Es, and Fr, totaling 13.5k hours. We extract HuBERT features from the 11th layer of the third-iteration HuBERT model and discretize them using the pre-trained K-means model to obtain the discrete representations H_ling.

B.1.2 POSTERIOR ENCODER

The structure of the posterior encoder is similar to that of the encoder in the variational generator of PortaSpeech (Ren et al., 2021b): a 1D convolution with stride 4 followed by a ReLU activation (Glorot et al., 2011) and layer normalization (Ba et al., 2016), and a non-causal WaveNet (Van Den Oord et al., 2016), as shown in Figure 3a. The number of encoder layers, the WaveNet channel size and the kernel size are 8, 192 and 5. The outputs of the posterior encoder are the parameters (µ_q and σ_q) of the posterior distribution N(µ_q, σ_q); the latent z, whose size is set to 32, is sampled from N(µ_q, σ_q).

B.1.3 VOLUME-PRESERVING (VP) NORMALIZING FLOW

Following Kim et al. (2021b) and Ren et al. (2021b), we use a volume-preserving normalizing flow as the prior distribution generator, since it does not need a Jacobian term when calculating the data log-likelihood and is powerful enough for modeling the prior, as shown in Figure 3c. The normalizing flow transforms simple distributions (e.g., a Gaussian) into complex distributions through a series of K invertible mappings, implemented as a stack of WaveNet (van den Oord et al.) residual blocks with dilation 1. We then take the complex distribution as the prior of the speech decoder. When introducing the normalizing-flow-based enhanced prior, the optimization objective of the mel-spectrogram generator becomes

log p(x) ≥ E_{q_ϕ(z|x)}[log p_θ(x|z)] − KL(q_ϕ(z|x) ‖ p_θ̄(z)) ≡ L(ϕ, θ, θ̄),   (1)

where ϕ, θ and θ̄ denote the model parameters of the posterior encoder, the speech decoder and the normalizing-flow-based enhanced prior, respectively. Due to the introduction of normalizing flows, the KL term in Equation 1 no longer has a simple closed-form solution, so we estimate the expectation w.r.t. q_ϕ(z|x) via a Monte-Carlo method by rewriting the KL term as

KL(q_ϕ(z|x) ‖ p_θ̄(z)) = E_{q_ϕ(z|x)}[log q_ϕ(z|x) − log p_θ̄(z)].   (2)

In training, the posterior distribution N(µ_q, σ_q) is produced by the posterior encoder; z is then sampled from the posterior distribution using reparameterization and passed to the speech decoder. Meanwhile, the posterior distribution is fed into the VP normalizing flow, which converts it to a standard normal distribution (the middle dotted line).
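A minimal sketch of a single-sample Monte-Carlo estimate of Equation 2 (our own illustration; the `flow` interface is an assumption, and because the flow is volume-preserving it contributes no log-determinant term):

```python
import torch

def mc_kl(mu_q, log_sigma_q, flow):
    """Single-sample Monte-Carlo estimate of KL(q(z|x) || p(z)).

    mu_q, log_sigma_q: posterior parameters, shape (B, T, z_dim).
    flow: invertible, volume-preserving map taking z to standard-normal space.
    """
    sigma_q = log_sigma_q.exp()
    z = mu_q + sigma_q * torch.randn_like(mu_q)        # reparameterization
    log_q = torch.distributions.Normal(mu_q, sigma_q).log_prob(z)
    u = flow(z)                                        # z -> standard-normal space
    log_p = torch.distributions.Normal(0.0, 1.0).log_prob(u)  # |det J| = 1
    return (log_q - log_p).sum(-1).mean()
```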
In inference, the VP normalizing flow converts a sample from the standard normal distribution into a sample z, and we pass z to the speech decoder. Our VP normalizing flow consists of 4 flow steps, each of which has 4 WaveNet layers with channel size 64 and kernel size 3.

B.1.4 CONTENT ENCODER

The content encoder is a stack of feed-forward Transformer (Vaswani et al., 2017) layers with relative position encoding (Shaw et al., 2018), as shown in Figure 3d. The information bottleneck is located in the first layer of the content encoder (the embedding layer for HuBERT tokens). We set the channel size of each embedding to 16 by default.

B.1.5 SPEECH DECODER

The speech decoder, shown in Figure 3b, consists of a non-causal WaveNet and a 1D transposed convolution with stride 4, also followed by ReLU and layer normalization. The number of decoder layers, the WaveNet channel size and the kernel size are set to 4, 192 and 5.

B.1.6 MULTI-LENGTH DISCRIMINATOR

Inspired by Ye et al. (2022), our multi-length discriminator is an ensemble of CNN-based discriminators, which evaluate the mel-spectrogram on random windows of different lengths, as shown in Figure 4. In our experiments, we train three CNN-based discriminators, which observe random mel-spectrogram clips of 32, 64, and 128 frames. The structure of the CNN-based discriminator is shown in Figure 4b. It consists of N+1 2D-convolutional layers, each followed by a Leaky ReLU activation and a dropout layer; the latter N convolutional layers are additionally followed by an instance normalization (Ulyanov et al., 2016) layer. After the convolutional layers, a linear layer projects the hidden states of the mel-spectrogram slice to a scalar, which is the prediction of whether the input mel-spectrogram is real or fake. In our experiments, we set N = 2 and the channel size of these discriminators to 32.

B.2 TTS MODEL

Our TTS model architecture follows PortaSpeech (Ren et al., 2021b), except that 1) we replace the post-net in PortaSpeech with multi-length adversarial training, the same as in our voice conversion model in Appendix B.1; 2) we add a language embedding layer to the speech decoder, indicating the language of the speech to be generated; and 3) we use a simple character encoder as in FastSpeech (Ren et al., 2019a; 2021a) instead of the mixed linguistic encoders, for simplicity. The detailed model architectures and hyper-parameters of the posterior encoder, normalizing flow, speech decoder and multi-length discriminators are the same as those of the voice conversion model in Appendix B.1. The structures of the text encoder and duration predictor are the same as those in Ren et al. (2021b), with channel size 192, kernel size 5 and 4 layers.

B.3 ASR MODEL

We adopt the architecture of Tacotron 2 (Shen et al., 2018) for our ASR model, since its location-sensitive attention can generate close-to-diagonal, monotonic alignments between speech and text. We replace the character/phoneme embedding of the encoder in Tacotron 2 with a CNN-based speech pre-net with stride 4 to enable speech information encoding. On the decoder side, we use a character embedding as the input layer and a softmax as the output layer to adapt the decoder to the character sequence. We set the hidden size of the encoder RNN to 512 and the number of convolution stacks to 5; the hidden sizes of the decoder RNN and the attention RNN are both set to 1024; the channel size of the decoder attention is set to 512.
We also train a Transformer-based (Vaswani et al., 2017) language model and jointly beam-search decode the recognition results X using the ASR model p_ASR(X|S) and the language model p_LM(X), maximizing log p_ASR(X|S) + λ_LM log p_LM(X), where λ_LM is the weight of the language model and is set to 0.2 in this work. The beam size of the beam-search decoding is set to 5.

C MORE EXPERIMENTAL DETAILS

In this section, we describe more experimental details for reproducibility.

C.1 DATASETS

We select subsets of English, French and Indonesian from the CommonVoice (Ardila et al., 2019) dataset: about 200k utterances each for English and French, and all data (about 20k utterances) for Indonesian. We randomly select 100 utterances each in English and Indonesian for validation, and another 100 utterances in Indonesian for testing; the English test set is randomly selected from LJSpeech (Ito, 2017). We split the target language data into two halves according to utterance ID: we take unpaired speech data from utterances with odd IDs and text data from those with even IDs, so as to guarantee the speech and text data are disjoint.

C.2 BASELINES

For fair comparison, we make the following modifications to all baseline methods:
• We adopt training data consisting of paired French (the auxiliary rich-resource language) data and unpaired English/Indonesian (the target low-resource language) data. Specifically, for Ren et al. (2019b) and Xu et al. (2020), we warm up the ASR and TTS models in these methods using rich-resource-language data before back-translation; for Liu et al. (2022b) and Ni et al. (2022), we initialize the unsupervised ASR model using a modified CTC loss (Graves et al., 2006) with rich-resource-language data before unsupervised training and also initialize the TTS model using these data.
• We directly use character sequences as TTS input, without any lexicon or G2P tools.
• We extract speaker embeddings using the same pre-trained speaker encoder (https://github.com/resemble-ai/Resemblyzer) and add them to the TTS model to indicate the speaker information (timbre), since our dataset is multi-speaker.
• We replace all baseline TTS models with the same NAR architecture as our method, since AR architectures are very sensitive to noisy audio and cannot produce any meaningful results in our settings.
• We use the same voice conversion model (described in Section 3.2) to convert ground-truth audios and all baselines' outputs to the same speaker's timbre from S_ref.
• We use the same vocoder, HiFi-GAN (Kong et al., 2020), to convert mel-spectrograms to waveforms.

D MORE EXPERIMENTAL RESULTS

D.1 PERFORMANCE CHANGES IN BACK-TRANSLATION TRAINING

To verify that the accuracy of ASR and the performance of TTS are boosted iteratively as the training directions shift, we plot the accuracy of the TTS and ASR results in Figure 5. From the figure, we can see that as training proceeds (the training directions shift every N_steps = 20000 steps), the error rates of the ASR and TTS results gradually drop until convergence.

D.2 USE OTHER RICH-RESOURCE AUXILIARY LANGUAGES

We explore how different rich-resource auxiliary languages affect the target language's performance. In addition to French, we use other languages, including German, Dutch, Spanish, and Portuguese, as the rich-resource auxiliary language to train our unsupervised TTS model. For fair comparison, we use the same training data size for all rich-resource languages (an 80k-pair speech-text subset of each language from CommonVoice). We choose English as the target low-resource language.
The results are shown in Table 5. It can be seen that using German as the rich-resource language achieves the best performance. A possible reason is that the pronunciation distance between English and German is smaller than for the other languages, as both belong to the West Germanic languages. Although Dutch also belongs to the West Germanic languages, it does not perform very well, which might be due to its poor data quality. We then combine the data from all these languages and find that this achieves very strong results, outperforming every setting that uses only one auxiliary language. Besides, we observe that our method performs very well not only in English, French and Spanish, which are used to pre-train the multilingual HuBERT, but also in other unseen languages, which verifies the generalization of our voice conversion model and of the whole unsupervised TTS pipeline.

D.3 ANALYSES ON FOCUS RATE F

To verify the effectiveness of the focus rate F proposed in Section 3.4, we calculate F and CER on the English test set during model training, and plot the curves to explore the correlation between them in Figure 6. From the figure, we can see that the focus rate F is negatively correlated with the recognition error rate, which means it is reasonable to use it as an indicator for filtering ASR transcriptions (higher F indicates lower CER).

D.4 USE OTHER TEXT UNPAIRED DATASET

We train our model using the audio data from the CommonVoice English subset, the same as in the original version of the paper, and the text data from the WMT16 (Bojar et al., 2016) English training set, making the domains of the unpaired audio and text very different. We keep the test set the same as in the original paper (the LJSpeech subset). The results are shown in Table 6. From the table, it can be seen that performance drops a bit (increases of ∼0.036 in CER and ∼0.06 in WER) due to the domain gap between the unpaired text and speech data.

E POTENTIAL NEGATIVE SOCIETAL IMPACTS

Unsupervised TTS lowers the requirements for deploying speech synthesis services (only unpaired speech and text data are needed) and synthesizes high-quality speech, which may cause unemployment for people in related occupations such as broadcasters and radio hosts. In addition, there is potential for harm from non-consensual voice cloning or the generation of fake media, and the voices of the speakers in the recordings might be used more than they expect.
1. What is the focus of the paper in terms of text-to-speech synthesis?
2. What are the strengths of the proposed approach, particularly in its experimental results and thorough evaluation?
3. What are the weaknesses of the paper regarding its training algorithm and dependence on voice conversion methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper studies the important problem of learning text-to-speech synthesis from noisy unpaired data. To achieve this, the authors propose several practical tricks, including normalizing variable and noisy information in speech data, curriculum learning, length augmentation and auxiliary supervised learning. Experimental results on public datasets confirm the effectiveness of the proposed tricks.

Strengths And Weaknesses
Strengths:
- The main contributions are clearly stated and supported by the experiments.
- Major works on this topic are widely covered and referenced.
- The samples are quite convincing.
- The evaluation is thorough enough to support the arguments, with in-depth ablation studies.

Weaknesses:
- The complicated training algorithm makes the proposed method hard to reproduce. It would be helpful if the authors could provide brief guidelines, such as how to tune the hyper-parameters.
- The performance of the system seems to depend heavily on the performance of the self-supervised voice conversion. It would be better to compare the proposed VC approach with SOTA VC models such as YourTTS, although YourTTS requires text transcripts for training.

Clarity, Quality, Novelty And Reproducibility
The contributions are somewhat new. Aspects of the contributions exist in prior work.
ICLR
Title
Wizard of Wikipedia: Knowledge-Powered Conversational Agents

Abstract
In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically "generate and hope" generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.

1 INTRODUCTION

Arguably, one of the key goals of AI, and the ultimate goal of natural language research, is for humans to be able to talk to machines. To get close to this goal, machines must master a number of skills: to comprehend language, employ memory to retain and recall knowledge, reason about these concepts together, and finally output a response that fulfills functional goals in the conversation while simultaneously being captivating to the human speaking partner. The current state-of-the-art approaches, sequence to sequence models of various kinds (Sutskever et al., 2014; Vinyals & Le, 2015; Serban et al., 2016; Vaswani et al., 2017), attempt to address some of these skills, but generally suffer from an inability to bring memory and knowledge to bear: as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output. To converse intelligently on a given topic, a speaker clearly needs knowledge of that subject, and it is our contention here that more direct knowledge memory mechanisms need to be employed. In this work we consider setups where this can be naturally measured and built.

We consider the task of open-domain dialogue, where two speakers conduct open-ended chit-chat given an initial starting topic, and during the course of the conversation the topic can broaden or focus on related themes. During such conversations, an interlocutor can glean new information and personal points of view from their speaking partner, while providing the same in return. This is a challenging task as it requires several components not found in many standard models. We design a set of architectures specifically for this goal that combine elements of Memory Network architectures (Sukhbaatar et al., 2015) to retrieve knowledge and read and condition on it, and Transformer architectures (Vaswani et al., 2017) to provide state-of-the-art text representations and sequence models for generating outputs, which we term Transformer Memory Networks.
As, to our knowledge, no public domain dataset of requisite scale exists, we build a supervised dataset of human-human conversations using crowd-sourced workers, first crowd-sourcing 1365 diverse discussion topics and then conversations involving 201,999 utterances about them. Each topic is connected to Wikipedia, and one of the humans (the wizard) is asked to link the knowledge they use to sentences from existing articles. In this way, we have both a natural way to train a knowledgeable conversation agent, by employing a memory component that can recall and ground on this existing text, and a natural way to evaluate the models that we build, by assessing their ability at locating and using such knowledge.

Our Transformer Memory Network architectures, in both retrieval and generative versions, are tested in this setup using both automatic metrics and human evaluations. We show their ability to execute engaging knowledgeable conversations with humans, compared to a number of baselines such as standard Memory Networks or Transformers. Our new benchmark, publicly available in ParlAI (http://parl.ai/projects/wizard_of_wikipedia/), aims to encourage and measure further improvements in this important research direction.

2 RELATED WORK

Many existing dialogue tasks do not study the use of knowledge explicitly. For example, popular chit-chat datasets such as Open-Subtitles (Vinyals & Le, 2015), Persona-Chat (Zhang et al., 2018) and Twitter (Sordoni et al., 2015) have tested the ability of sequence-to-sequence models that attend over the recent dialogue history, but do not attempt to recall long-term knowledge beyond encoding it directly into the weights of the feed-forward network. In the area of goal-directed dialogue, separate from open-domain chit-chat, such as airline (El Asri et al., 2017) or restaurant booking (Henderson et al., 2014; Wen et al., 2016; Bordes et al., 2017), knowledge conditioning is typically employed by allowing access to a database through API calls or otherwise. In contrast, our work investigates unstructured knowledge across a large, diverse set of topics potentially spanning all of Wikipedia.

In question answering one does not produce a dialogue response based on a conversation history, but a factual answer based on a question. In that case, it is clear that retrieving and conditioning on knowledge is vital. For example, for SQuAD neural models have been developed that attend to a given paragraph from Wikipedia to answer questions (Rajpurkar et al., 2016), while Open-SQuAD extends this to answering without being given the paragraph, instead performing retrieval over the entirety of Wikipedia (Chen et al., 2017). Recently, the QuAC dataset investigates similar themes, but as a sequence of questions and answers in dialogue form instead (Choi et al., 2018). In this work we do not address question answering, but focus on natural human dialogues which contain a diverse set of utterances, not just questions and answers.

The closest work to ours lies in the area of non-goal-directed dialogue incorporating knowledge. The work of Dodge et al. (2016) employed Memory Networks to perform dialogue discussing movies in terms of recommendation and open-ended discussion from Reddit, conditioning on a structured knowledge base. Zhou et al. (2018) also links Reddit to structured knowledge. Both Parthasarathi & Pineau (2018) and Ghazvininejad et al.
(2018) use unstructured text instead, as we do: the former to discuss news articles using Wikipedia summaries as knowledge, and the latter to discuss local businesses in two-turn dialogues using Foursquare tips as knowledge. Ghazvininejad et al. (2018) use an extended Encoder-Decoder where the decoder is provided with an encoding of the context along with the external knowledge encoding. Neither involves dialogue authored with the given knowledge, so it is unclear when knowledge is useful or not. In contrast, in our task, we know the Wikipedia articles and sentences that ground the crowdworkers' dialogues. Model-wise, Parthasarathi & Pineau (2018) use a Bag-of-Words Memory Network type fact encoder and an RNN decoder. Our work compares Memory Networks (Sukhbaatar et al., 2015) and Transformers, which have been shown to be on par with or superior to RNN encoder-decoders (Vaswani et al., 2017), and develops an architecture that combines these approaches. Concurrently with our work, Moghe et al. (2018) proposed a dataset based on the closed domain of movie chats. Our paper shows models working on full multi-turn dialogue in an open-domain setting, which to our knowledge has not been shown before.

3 WIZARD OF WIKIPEDIA

We consider the following general open-domain dialogue setting: two participants engage in chit-chat, with one of the participants selecting a beginning topic, and during the conversation the topic is allowed to naturally change. The two participants, however, are not quite symmetric: one plays the role of a knowledgeable expert (which we refer to as the wizard) while the other is a curious learner (the apprentice).

Apprentice At each stage of the conversation the apprentice talks to the wizard freely, playing the role of a curious learner, eager to chat. Their goal is to go into depth about a chosen topic that interests themselves or their partner, while keeping the conversation engaging and fun. Note that the instruction to delve deeply into a topic makes this different from more "shallow" chit-chat tasks; here the use of knowledge is emphasized more.

Wizard The wizard is given the following instructions: "You have just met the other person, who seems quite curious, and you are eager to discuss a topic with them!" Their goal is to inform their conversation partner about a topic that one of them will choose. Crucially, the wizard has access to an information retrieval system that shows them paragraphs from Wikipedia possibly relevant to the conversation, which are unobserved by the apprentice. Before each conversation turn the wizard can read these paragraphs and then potentially base their next reply on that observed knowledge. Note, the wizard is particularly instructed not to simply parrot this knowledge, but to use it to craft a relevant reply, and to present any relevant knowledge in a fun and engaging way, if possible.

Conversation Flow The flow of the conversation takes place as follows. 1. Either the wizard or apprentice is picked to choose the topic and speak first. The other player receives the topic information, and the conversation begins. 2. When the apprentice sends the wizard a message, the wizard is shown relevant knowledge (described below), and chooses a relevant sentence in order to construct a response, or else chooses the no sentence used option. 3. The wizard responds to the apprentice, basing their response on their chosen sentence. 4.
The conversation repeats until one of the conversation partners ends the chat (after a minimum of 4 or 5 turns each, randomly chosen beforehand). After collecting data of such wizard-apprentice conversations between humans, the goal is then to replace the human wizard with a learned agent that will speak to a human apprentice instead, similar to the procedure in Wizard of Oz experiments (Bernsen et al., 2012).

Topics We crowd-sourced a set of 1365 natural, open-domain dialogue topics, each linked to a Wikipedia article. These include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger.

Knowledge Retrieval At each step of the dialogue the wizard has access to a set of passages of knowledge which may be relevant to the given dialogue context. While this is a potentially learnable part of the model, we required it to be fixed so that we could present the results to the annotator when collecting the dataset. We thus used exactly the same retriever that is commonly used for the Open-SQuAD dataset in Chen et al. (2017). It uses a simple inverted index lookup followed by term vector model scoring: articles and queries are compared as TF-IDF weighted bag-of-word and n-gram vectors, using the hashing trick. We retrieve the top 7 articles (first paragraph only) for the last two turns of dialogue (by wizard and apprentice) and the article (first 10 sentences only) for the original topic, and present these articles to the wizard as knowledge context, along with their titles. Note that while this system is used to build the dataset, a superior method can in principle be learned and used by a model at test time.

Knowledge Selection and Response Generation During data collection, the wizard can click on any of the retrieved article titles in the dialogue UI to expand that article, at which point they can click on a sentence that is most relevant to the response they want to make (only one article, and one sentence, can be selected on any turn, for simplicity). If they see no relevant article or sentence they can choose no sentence used instead. The wizard then enters their response to the apprentice. An image of the wizard's UI is shown in Appendix A.1.

Final Dialogue Dataset The final dialogue dataset we collect consists of 22,311 dialogues with 201,999 turns, which we divide into 166,787 for train, 17,715 for validation, and 17,497 for test. The test set is split into two subsets, Test Seen and Test Unseen. Test Seen contains 533 topics that overlap with the training set, with new dialogues about those topics. Test Unseen consists of 58 topics never seen before in train or validation. Overall data statistics can be found in Table 1, and further statistics and examples of collected conversations in Appendix A.2. We observe wizards and apprentices both asking and answering questions, and providing each other with a mixture of facts and personal feelings during their general discussion.

4 MODELS

In this work we consider learning dialogue models to replace the wizard in our learning tasks, i.e. the knowledgeable speaker. The dialogue model thus has access to a knowledge source, in this case Wikipedia, to ground the conversation with.
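As a rough sketch of the TF-IDF retrieval step described in Section 3 (our own illustration after Chen et al. (2017); the real system additionally uses an inverted index, and the hyper-parameters here are assumptions):

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

# Hashed unigram+bigram counts ("hashing trick"), TF-IDF weighting, dot-product scoring.
vectorizer = HashingVectorizer(ngram_range=(1, 2), alternate_sign=False)
tfidf = TfidfTransformer()

def build_index(paragraphs):
    """Fit TF-IDF weights and return the weighted document-term matrix."""
    return tfidf.fit_transform(vectorizer.transform(paragraphs))

def retrieve(index, paragraphs, query, k=7):
    """Return the k paragraphs with the highest TF-IDF similarity to the query."""
    q = tfidf.transform(vectorizer.transform([query]))
    scores = (index @ q.T).toarray().ravel()
    return [paragraphs[i] for i in np.argsort(-scores)[:k]]
```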
We thus develop extensions of the Memory Network (Sukhbaatar et al., 2015) and Transformer (Vaswani et al., 2017) models that can (i) retrieve from a large memory relevant information conditioned on the dialogue history, (ii) carefully read and attend over the retrieved set of knowledge, and then (iii) generate the next dialogue utterance. This model is then used consecutively on each turn to form an entire dialogue with a user. We develop two classes of models capable of leveraging knowledge: (i) retrieval models that produce an output from a set of candidate responses (the set of utterances from the training set); and (ii) generative models that generate word by word (using a beam). The input to either model is the same: at each dialogue turn where the model is intended to make a response, it is given the current dialogue context x_1, ..., x_t of t dialogue turns, where x_1 is always the initial starting topic (e.g. "Kurt Cobain"), and the remaining turns swap between the two speakers. The goal at each stage is to output the next utterance x_{t+1}.

Knowledge Retrieval We assume a large knowledge base (memory) m_1, ..., m_N which is hierarchically organized into documents consisting of paragraphs and sentences. As it is infeasible for current neural attention techniques to operate on this scale, we use standard information retrieval (IR) techniques (c = IR(x, m)) as a first step to return a smaller set of candidates m_{c_1}, ..., m_{c_K} for fine-grained selection. In our experiments, we use the IR system provided to the human annotators during dataset creation, detailed in Section 3. The retriever operates on the topic (x_1) and the last two turns (x_t and x_{t-1}) if they exist, effectively calling the IR system three times with three different queries. Empirically, this provided better performance compared to merging into one query, likely because the queries can address quite different topics. We retrieve the top 7 articles (first paragraph only) for each lookup and then flatten all the results into separate sentences (i.e. we remove the organization of sentences belonging to articles), but prepend every sentence with its article title. In this way the candidates m_{c_1}, ..., m_{c_K} given to the neural model in the next stage can be attended to independently, without having to deal with hierarchical issues.

Knowledge Attention We use an attention mechanism to perform fine-grained selection of which knowledge sentences will be used to produce the next turn of dialogue. Each sentence in the memory is independently encoded with a Transformer encoder (Vaswani et al., 2017), and the same Transformer is used to encode the dialogue context x. We then perform standard dot-product attention between the memory candidates and the dialogue context.

Utterance Prediction Given the hidden state derived from the memory attention process described above, the final stage is to predict the output utterance that will form the next dialogue turn. We consider different variants of the two stages above, knowledge attention and utterance prediction, for the retrieval and generative versions of our models. We now detail these in turn.

4.1 RETRIEVAL TRANSFORMER MEMORY NETWORK

This model encodes each knowledge sentence m_{c_1}, ..., m_{c_K} and the dialogue context x with a Transformer, as described above. The final input encoding is calculated by performing dot-product attention over enc(m_{c_1}), ..., enc(m_{c_K}) and adding the resulting weighted sum of these vectors to enc(x), giving the representation rep_LHS(m_{c_1}, ..., m_{c_K}, x).
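A minimal sketch of this step (our own illustration; the Transformer encoders are omitted and the tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def rep_lhs(enc_mem, enc_ctx):
    """Dot-product attention over encoded knowledge, added to the context encoding.

    enc_mem: (K, d) encodings enc(m_{c_1}), ..., enc(m_{c_K}).
    enc_ctx: (d,)   encoding enc(x) of the dialogue context.
    """
    attn = F.softmax(enc_mem @ enc_ctx, dim=0)   # attention over knowledge sentences
    return enc_ctx + attn @ enc_mem              # rep_LHS(m_{c_1}, ..., m_{c_K}, x)
```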
The candidate responses r_1, ..., r_L are encoded with a separate Transformer to get rep_RHS(r_i) for each i. We choose as the response the candidate r_ℓ where

ℓ = argmax_{i ∈ {1,...,L}} [ rep_LHS(m_{c_1}, ..., m_{c_K}, x) / ‖rep_LHS(m_{c_1}, ..., m_{c_K}, x)‖_2 ] · [ rep_RHS(r_i) / ‖rep_RHS(r_i)‖_2 ],

i.e. the candidate whose encoding has the highest cosine similarity with the input representation. The model is trained to minimize the cross-entropy loss, where the negative candidates for each example are the responses to the other examples in the batch (Henderson et al., 2017).

4.2 GENERATIVE TRANSFORMER MEMORY NETWORK

We consider two versions: a Two-stage and an End-to-end version. Both models find the most relevant piece of knowledge m_best, and then perform an encoding step by concatenating it with the dialogue context, allowing the decoder to attend over both the knowledge and the dialogue when formulating its response. We employ a beam search of size 5 to select our best response. All generative models employ BPE encoding (Sennrich et al., 2016), which we found effective at enabling generators to copy rare words from Wikipedia sentences (Fan et al., 2018).

In the End-to-end version, a shared Transformer encoder is used to encode all candidates m_{c_i} and the dialogue history. The encoded candidates are flattened into vectors using the normalization from Cer et al. (2018) (summing, and normalizing by the square root of the sentence length in order to balance short and long sentences) to produce an attention prediction over the memory. The full sequence encoding of the single highest-scoring knowledge candidate m_best is concatenated with the encoding of the dialogue, and passed into a Transformer decoder. An illustration of our End-to-end model is shown in Figure 1. We train the model to minimize the negative log-likelihood of the response utterance. We can add additional supervision by forcing the knowledge selection to choose the same knowledge candidate as the human wizard in the training set, adding an additional cross-entropy loss over the knowledge attention, modulated by a weight λ:

L = (1 − λ) L_NLL + λ L_knowledge.

In the Two-stage version, we employ two separately trained models for these two tasks, knowledge selection and utterance prediction. As the knowledge selection step makes a hard decision that influences the output of the generator, we find maximizing the performance of this component to be vital. We can also improve the performance of the decoder by employing knowledge dropout (K.D.), wherein we artificially prevent the model from attending to knowledge a fraction of the time during training. We find this helps the generator be more resilient to errors at the knowledge selection stage, and makes training faster. K.D. is a novel technique we propose here; however, it is similar to many other dropout techniques, e.g. the feature dropout used in Wu et al. (2017).

5 EXPERIMENTS

We describe each of our experimental setups and results. We first investigate the ability of our models to select knowledge appropriately, and then consider the full task of dialogue with knowledge.

5.1 KNOWLEDGE SELECTION TASK

Before looking at the full Wizard dialogue task, we assess the ability of models to predict the knowledge selected by human wizards in the dataset given the dialogue history. This informs us of the feasibility of this task and of the best models to use in a two-stage architecture. We compare Transformers against various baselines, including a random baseline; an Information Retrieval (IR) baseline, which uses simple word overlap; and a Bag-of-Words Memory Network (Sukhbaatar et al., 2015).
Where noted, the Transformer is pretrained on Reddit data (Mazaré et al., 2018) and fine-tuned for our task. The results are shown in Table 2. Transformers work best, as long as they are pretrained on a large dataset (Reddit), while multi-tasking on SQuAD provides only marginal impact. Further analysis of this task using other models is provided in Appendix B.1. We use the best performing Transformer model reported here for our two-stage generative Memory Network in the full dialogue task. 5.2 FULL TASK: DIALOGUE WITH KNOWLEDGE We evaluate our models on the full task of dialogue generation given knowledge in two settings: given the gold knowledge sentence chosen by a human, or where the model needs to predict which knowledge to use. We separately describe experiments for retrieval and generative models. Retrieval Experiments We use baselines similar to those in the knowledge selection experiments, but now also apply Transformer Memory Networks, which attend over knowledge. Models are evaluated by measuring Recall@1 when ranking the gold response among 99 randomly chosen candidates, and unigram F1 of the model’s prediction with the gold response. The results are shown in Table 3. We find that the addition of knowledge improves all models (improving BoW MemNet from 56 to 71 R@1 and the Transformer MemNet from 79 to 87 R@1) for predicted knowledge. Performance improves dramatically when models are provided gold knowledge, but the relative trends otherwise remain similar. Generative Experiments We compare our generative End-to-end and Two-stage Transformer Memory Network models to two more baselines: repeating the last utterance, and a generative Transformer model trained to respond to dialogue but without access to knowledge. Models are evaluated using perplexity (PPL) of the gold response and unigram F1. The results are given in Table 4. Our experiments show that both the End-to-end and Two-stage models employ the knowledge in their response predictions, as they outperform their counterpart Transformer without knowledge, and demonstrate substantial improvements when provided the gold knowledge. While the Two-stage model produces significantly stronger perplexity and F1 scores using the predicted knowledge, the End-to-end model outperforms the Two-stage model in the gold knowledge experiments. This suggests that the Two-stage model benefits from the strong knowledge selection module (Section 5.1), but that the End-to-end model is better at employing the selected knowledge. Furthermore, we find that the additional knowledge selection supervision (auxiliary loss) in the End-to-end model improves it on every metric, suggesting that tightly integrating these tasks is beneficial. Knowledge dropout (K.D.) also helps (compare the last two rows). More evidence for this is shown in Appendix B.1. Lastly, we note that both Two-stage models give higher F1 scores than any of the retrieval models shown in Table 3. 5.3 HUMAN EVALUATION We perform human evaluation of our models using crowd-sourced workers. Humans are paired with our models and asked to chat about a specific topic (given a choice of 2–3 topics) for 3–5 dialogue turns. Following their conversation, the humans are asked to rate their dialogue partner on a scale of 1–5, with the rating indicating how much they “liked” the conversation (5 is best), which we refer to as the engagingness rating.
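For reference, the unigram F1 used throughout these automatic evaluations (and underlying the Wiki F1 metric defined next) can be sketched as follows; whitespace tokenization is a simplifying assumption on our part.

```python
from collections import Counter

def unigram_f1(prediction: str, gold: str) -> float:
    """Unigram F1 overlap between a model prediction and the gold response."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```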
Using the collected conversations, we also calculate a metric we call the Wiki F1 score: the F1 overlap of the model’s utterances with the first 10 sentences of the Wikipedia page for the chosen topic, as a proxy for how much knowledge the model exhibits. We seek a model that can be simultaneously engaging and knowledgeable, hence we would like to maximize both of these metrics (a model could, for example, display knowledge by copying parts of Wikipedia while not being engaging at all). For comparison, we also collect 100 human-human conversations, with only one human choosing the topic and performing evaluation. In total, we collect 546 conversations with ratings from 464 distinct workers. These results are shown in Table 5. We find that the retrieval models significantly outperform the generative models on the human engagingness evaluation (Student’s t-test, p < .05). The human engagingness differences between retriever models with and without knowledge are not significant, but note they both trend toward use of knowledge due to the candidate sentences retrieved, with the knowledgeable version obtaining significantly higher Wiki F1 scores in both seen and unseen test sets. For the generative models, we find human engagingness ratings are significantly improved by the use of knowledge (p < .01). The significantly higher Wiki F1 scores indicate that (i) these models convey more knowledge than their counterparts without knowledge conditioning; and (ii) on both seen and unseen sets they convey more knowledge than the retrieval models. In particular, on unseen data the gap between retrieval and generative models is larger. This is understandable, as retrieval models are limited to producing a response from the training set, where the unseen topic did not appear. There is still a considerable gap between human ratings of other humans and the ratings of all our models (first row of Table 5). Figure 2 shows example conversations with the retrieval and generative models. Additional analysis and examples can be found in Appendices B.3 and C. 6 CONCLUSION In this work we build dialogue agents which are able to employ large memory systems containing encyclopedic knowledge about the world in order to conduct engaging open-domain conversations. We develop a set of architectures, Transformer Memory Network models, that are capable of retrieving and attending to such knowledge and outputting a response, either in retrieval or generative modes. To train and evaluate such models, we collect the Wizard of Wikipedia dataset, a large collection of open-domain dialogues grounded by Wikipedia knowledge, and demonstrate the effectiveness of our models in automatic and human experiments. Our new publicly available benchmark aims to encourage further model exploration, and we expect such efforts will result in significant advances in this important research direction. There is much future work to be explored using our task and dataset. Some directions include: (i) bridging the gap between the engagingness of retrieval responses and the ability of generative models to work on new knowledge and topics; (ii) learning to retrieve and reason simultaneously rather than using a separate IR component; and (iii) investigating the relationship between knowledge-grounded dialogue and existing QA tasks which also employ such IR systems. The aim is for those strands to come together to obtain an engaging and knowledgeable conversational agent.
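To illustrate the significance testing in Section 5.3, the Student’s t-test on engagingness ratings could be run as in the sketch below; the rating lists are placeholders, not data from the paper.

```python
from scipy.stats import ttest_ind

# Placeholder per-conversation engagingness ratings (1-5), not real data.
retrieval_ratings = [4, 5, 3, 4, 5, 4]
generative_ratings = [3, 2, 4, 3, 3, 2]

t_stat, p_value = ttest_ind(retrieval_ratings, generative_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significance reported at p < .05
```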
A DATASET COLLECTION A.1 HUMAN ANNOTATION INTERFACE (FOR WIZARD) A.2 WIZARD OF WIKIPEDIA EXAMPLES A.3 TOPIC SELECTION To choose topics that are natural, we employed the existing Persona-Chat dataset (Zhang et al., 2018), where crowdworkers were asked to create personas of typical speakers. There are ∼1000 personas, each of which consists of 4-5 sentences describing that person’s interests, e.g. “I love watching Game of Thrones”, “I like to eat cheetos” and “I recently got a cat”. These can thus naturally be seen as topics of interest, and using another set of annotators we mapped each sentence to 1 or more relevant Wikipedia pages, if possible, e.g. “Ariel is my favorite princess” was labeled with the Wikipedia page for The Little Mermaid. As some sentences are harder to connect with Wikipedia, e.g. “I am witty”, they are left unlabeled. We thus obtain 1,431 topics in total to use for our task. We retain the persona topic sets and thus present 2-3 related topic choices as conversation starters per dialogue during data collection. B ADDITIONAL EXPERIMENTS B.1 KNOWLEDGE SELECTION B.2 FULL DIALOGUE WITH RETRIEVAL B.3 HUMAN EXPERIMENTS C ERROR ANALYSIS We perform an analysis of the dialogues produced from the human evaluation experiments detailed in Section 5.3. We sample 20 conversations from each experimental setting, split between seen and unseen. Conversations are re-tokenized and lowercased to reduce superficial differences between models, and then analyzed in a single-blind setup. We note common errors and behaviors exhibited in each of the different conversations. In general, the human-human conversations are starkly different from any of the bot conversations – humans tend to engage in more small talk, or use the topic of discussion as a mere icebreaker, with neither human behaving as a wizard. This is in contrast to human-human conversations from the Wizard dataset itself, where one human has access to Wikipedia, and the conversation becomes more grounded in factual sentences. Similarly, all models attempt to play the role of wizard and produce more factual sentences too. In some rounds, humans treat the bot as a sort of question-answer machine, suggesting that the models could be improved by additionally employing SQuAD-like training data. The retriever without knowledge is particularly prone to non sequiturs, or rapidly changing the subject. During unseen conversations, it is especially likely to discuss something other than the chosen topic. In contrast, the retriever with knowledge tends to stick to the chosen topic strongly, but has difficulty if the human changes the subject. Frequently in unseen topics, the retriever with knowledge produces similar, but factually inaccurate, answers to user queries. For example, when one user asks about parts of Ireland to visit, the model enumerates a list of locations in Greece. Nonetheless, its repertoire of available responses often includes inviting responses, allowing the bot to have a more natural conversational flow. Selected conversations with the retriever with knowledge may be found in Figure 4, for both seen and unseen topics. The generator without knowledge is particularly prone to many of the typical behaviors of seq2seq systems (Li et al., 2016; Vijayakumar et al., 2016), including local repetition (“cookies are made of flour, oil, oil, and oil”), global repetition (producing nearly the same utterance for multiple turns), or inconsistencies in its personality (saying it both likes and dislikes movies).
The generator with knowledge has significantly fewer issues with repetition, as it errs on the side of copying large fragments from the Wikipedia knowledge. The generator with knowledge can also act as a selfish conversationalist, choosing to respond with or detail information without inviting a response. Although it generally produces accurate statements, it sometimes uses an incorrect date, name, or word. It also frequently produces formulaic responses, like “I don’t know, but I do know that [Wikipedia excerpt]”. Nonetheless, we find the generator with knowledge is able to successfully generalize to unseen topics using the knowledge from Wikipedia. Selected conversations with the generator with knowledge may be found in Figure 5.
1. How does the reviewer assess the significance and potential impact of the proposed dataset on the conversational AI community? 2. What are the strengths of the paper regarding its motivation, design, and use of state-of-the-art models? 3. What are the weaknesses of the paper regarding the conversation flow's unnaturalness and the potential accumulation of errors in the framework? 4. Does the reviewer have any suggestions to improve the paper, such as introducing random ungrounded turns, using the REINFORCE algorithm, or dealing with noisy or adversarial apprentices?
Review
Review This work proposes a brand new dataset to fill a vacancy in the current conversational AI community; specifically, the introduced dataset aims at providing a platform for large-scale knowledge-grounded chit-chat. Overall, the dataset is well-motivated and well-designed, and its existence will potentially benefit the community and inspire more effective methods for leveraging external knowledge in dialog systems. Besides, the paper also utilizes many trending models like Transformers, Memory Networks, etc. to ensure state-of-the-art performance. The clear structure and paragraphs also make the paper easy to read and follow. Here are some questions I want to raise about the paper: 1. First of all, although the design of the conversation flow looks reasonable, it is pretty uncommon for a human to ground his/her every sentence on external knowledge. Therefore, it would probably be better to introduce some random ungrounded turns into the conversation to make it more humanlike. 2. Secondly, the whole framework is based on many modules, and every one of them is prone to error. I’m afraid that such cascaded errors will accumulate and lead to compromised performance in the end. Have you thought about using the REINFORCE algorithm to alleviate this issue? 3. Finally, it would be better to introduce some noisy or adversarial apprentices to raise unrelated turns and see how the system reacts. Have you thought about how to deal with such cases?
ICLR
Title Wizard of Wikipedia: Knowledge-Powered Conversational Agents Abstract In open-domain dialogue, intelligent agents should exhibit the use of knowledge; however, there are few convincing demonstrations of this to date. The most popular sequence-to-sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction. 1 INTRODUCTION Arguably, one of the key goals of AI, and the ultimate goal of natural language research, is for humans to be able to talk to machines. In order to get close to this goal, machines must master a number of skills: to comprehend language, to employ memory to retain and recall knowledge, to reason about these concepts together, and finally to output a response that fulfills functional goals in the conversation while simultaneously being captivating to their human speaking partner. The current state-of-the-art approaches, sequence-to-sequence models of various kinds (Sutskever et al., 2014; Vinyals & Le, 2015; Serban et al., 2016; Vaswani et al., 2017), attempt to address some of these skills, but generally suffer from an inability to bring memory and knowledge to bear; as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output. To converse intelligently on a given topic, a speaker clearly needs knowledge of that subject, and it is our contention here that more direct knowledge memory mechanisms need to be employed. In this work we consider setups where this can be naturally measured and built. We consider the task of open-domain dialogue, where two speakers conduct open-ended chit-chat given an initial starting topic, and during the course of the conversation the topic can broaden or focus on related themes. During such conversations, an interlocutor can glean new information and personal points of view from their speaking partner, while providing similarly themselves. This is a challenging task as it requires several components not found in many standard models. We design a set of architectures specifically for this goal that combine elements of Memory Network architectures (Sukhbaatar et al., 2015) to retrieve knowledge and read and condition on it, and Transformer architectures (Vaswani et al., 2017) to provide state-of-the-art text representations and sequence models for generating outputs, which we term Transformer Memory Networks.
As, to our knowledge, no public domain dataset of requisite scale exists, we build a supervised dataset of human-human conversations using crowd-sourced workers, first crowd-sourcing 1365 diverse discussion topics and then conversations involving 201,999 utterances about them. Each topic is connected to Wikipedia, and one of the humans (the wizard) is asked to link the knowledge they use to sentences from existing articles. In this way, we have both a natural way to train a knowledgeable conversation agent, by employing a memory component that can recall and ground on this existing text, and a natural way to evaluate the models that we build, by assessing their ability at locating and using such knowledge. Our Transformer Memory Network architectures, both in retrieval and generative versions, are tested in this setup using both automatic metrics and human evaluations. We show their ability to conduct engaging, knowledgeable conversations with humans, compared to a number of baselines such as standard Memory Networks or Transformers. Our new benchmark, publicly available in ParlAI (http://parl.ai/projects/wizard_of_wikipedia/), aims to encourage and measure further improvements in this important research direction. 2 RELATED WORK Many existing dialogue tasks do not study the use of knowledge explicitly. For example, popular chit-chat datasets such as Open-Subtitles (Vinyals & Le, 2015), Persona-Chat (Zhang et al., 2018) and Twitter (Sordoni et al., 2015) have tested the ability of sequence-to-sequence models that attend over the recent dialogue history, but do not attempt to recall long-term knowledge beyond encoding it directly into the weights of the feed-forward network. In the area of goal-directed dialogue, separate from open-domain chit-chat, such as airline (El Asri et al., 2017) or restaurant booking (Henderson et al., 2014; Wen et al., 2016; Bordes et al., 2017), knowledge conditioning is typically employed by allowing access to a database through API calls or otherwise. In contrast, our work investigates unstructured knowledge across a large, diverse set of topics potentially spanning all of Wikipedia. In question answering one does not produce a dialogue response based on a conversation history, but a factual answer based on a question. In that case, it is clear that retrieving and conditioning on knowledge is vital. For example, in SQuAD neural models have been developed that attend to a given paragraph from Wikipedia to answer questions (Rajpurkar et al., 2016), or Open-SQuAD, which extends this to answering without being given the paragraph, instead performing retrieval over the entirety of Wikipedia (Chen et al., 2017). Recently, the QuAC dataset investigates similar themes, but as a sequence of questions and answers in dialogue form instead (Choi et al., 2018). In this work we do not address question answering, but focus on natural human dialogues which contain a diverse set of utterances, not just questions and answers. The closest work to ours lies in the area of non-goal-directed dialogue incorporating knowledge. The work of Dodge et al. (2016) employed Memory Networks to perform dialogue discussing movies in terms of recommendation and open-ended discussion from Reddit, conditioning on a structured knowledge base. Zhou et al. (2018) also links Reddit to structured knowledge. Both Parthasarathi & Pineau (2018) and Ghazvininejad et al.
(2018) use unstructured text instead, as we do: the former to discuss news articles using Wikipedia summaries as knowledge, and the latter to discuss local businesses in two-turn dialogues using Foursquare tips as knowledge. Ghazvininejad et al. (2018) uses an extended Encoder-Decoder where the decoder is provided with an encoding of the context along with the external knowledge encoding. Neither involves dialogue authored with the given knowledge, so it is unclear when knowledge is useful or not. In contrast, in our task, we know the Wikipedia articles and sentences that ground the crowdworkers’ dialogues. Model-wise, Parthasarathi & Pineau (2018) uses a Bag-of-Words Memory Network type fact encoder and an RNN decoder. Our work compares Memory Networks (Sukhbaatar et al., 2015) and Transformers, which have been shown to be on par with or superior to RNN encoder-decoders (Vaswani et al., 2017), and develops an architecture that combines these approaches. Concurrently with our work, Moghe et al. (2018) proposed a dataset based on the closed domain of movie chats. Our paper shows models working on full multi-turn dialogue in an open-domain setting, which to our knowledge was not shown before. 3 WIZARD OF WIKIPEDIA We consider the following general open-domain dialogue setting: two participants engage in chit-chat, with one of the participants selecting a beginning topic, and during the conversation the topic is allowed to naturally change. The two participants, however, are not quite symmetric: one will play the role of a knowledgeable expert (which we refer to as the wizard) while the other is a curious learner (the apprentice). Apprentice At each stage of the conversation the apprentice talks to the wizard freely, playing the role of a curious learner, eager to chat. Their goal is to go into depth about a chosen topic that interests themselves or their partner, while keeping the conversation engaging and fun. Note that the instruction to delve deeply into a topic makes this different from more “shallow” chit-chat tasks; in this task the use of knowledge is emphasized more. Wizard The wizard is given the following instructions: “You have just met the other person, who seems quite curious, and you are eager to discuss a topic with them!” Their goal is to inform their conversation partner about a topic that one of them will choose. Crucially, the wizard has access to an information retrieval system that shows them paragraphs from Wikipedia possibly relevant to the conversation, which are unobserved by the apprentice. Before each conversation turn the wizard can read these paragraphs and then potentially base their next reply on that observed knowledge. Note, the wizard is particularly instructed not to simply parrot this knowledge, but to use it to craft a relevant reply, and to present any relevant knowledge in a fun and engaging way, if possible. Conversation Flow The flow of the conversation thus takes place as follows. 1. Either the wizard or apprentice is picked to choose the topic and speak first. The other player receives the topic information, and the conversation begins. 2. When the apprentice sends the wizard a message, the wizard is shown relevant knowledge (described below), and chooses a relevant sentence in order to construct a response, or else chooses the no sentence used option. 3. The Wizard responds to the apprentice basing their response on their chosen sentence. 4.
The conversation repeats until one of the conversation partners ends the chat (after a minimum of 4 or 5 turns each, randomly chosen beforehand). After collecting data of such wizard-apprentice conversations between humans, the goal is to then replace the human wizard with a learned agent that will speak to a human apprentice instead, similar to the procedure in Wizard of Oz experiments (Bernsen et al., 2012). Topics We crowd-sourced a set of 1365 natural, open-domain dialogue topics, each linked to a Wikipedia article. These include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger. Knowledge Retrieval At each step of the dialogue the wizard has access to a set of passages of knowledge which may be relevant to the given dialogue context. While this is a potentially learnable part of the model, we required this to be fixed so that we could present the results to the annotator when collecting the dataset. We thus used exactly the same retriever that is commonly used for the Open-SQuAD dataset in Chen et al. (2017). It uses a simple inverted index lookup followed by term vector model scoring. Articles and queries are compared as TF-IDF weighted bag-of-words and n-gram vectors, using the hashing trick. We retrieve the top 7 articles (first paragraph only) for the last two turns of dialogue (by wizard and apprentice) and the article (first 10 sentences only) for the original topic, and present these articles to the wizard as knowledge context, along with their titles. Note that while this system is used to build the dataset, a superior method can in principle be learned and used by a model at test time. Knowledge Selection and Response Generation During data collection, the wizard can click on any of the retrieved article titles in the dialogue UI to expand that article, at which point they can click on a sentence that is most relevant to the response they want to make (only one article, and one sentence, can be selected on any turn, for simplicity). If they see no relevant article or sentence they can choose no sentence used instead. The wizard then enters their response to the apprentice. An image of the Wizard’s UI is shown in Appendix A.1. Final Dialogue Dataset The final dialogue dataset we collect consists of 22,311 dialogues with 201,999 turns, which we divide into 166,787 for train, 17,715 for validation, and 17,497 for test. The test set is split into two subsets, Test Seen and Test Unseen. Test Seen contains 533 topics that overlap with the training set, with new dialogues about those topics. Test Unseen consists of 58 topics never seen before in train or validation. Overall data statistics can be found in Table 1, and further statistics and examples of collected conversations in Appendix A.2. We observe wizards and apprentices both asking and answering questions, and providing each other with a mixture of facts and personal feelings during their general discussion. 4 MODELS In this work we consider learning dialogue models to replace the wizard in our learning tasks, i.e. the knowledgeable speaker. The dialogue model thus has access to a knowledge source, in this case Wikipedia, with which to ground the conversation.
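Before detailing the models, the TF-IDF retriever described above can be approximated with off-the-shelf tools; the sketch below uses scikit-learn’s hashing vectorizer as a stand-in for the Chen et al. (2017) system, and the toy corpus, n-gram range, and library choice are all our assumptions.

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

articles = ["Gouda cheese is a mild yellow cheese ...",
            "Bowling is a sport in which ..."]          # placeholder corpus

# Hashing trick for bag-of-words and n-gram counts, then TF-IDF weighting.
vectorizer = HashingVectorizer(ngram_range=(1, 2), alternate_sign=False)
tfidf = TfidfTransformer()
doc_vecs = tfidf.fit_transform(vectorizer.transform(articles))

query_vecs = tfidf.transform(vectorizer.transform(["tell me about cheese"]))
scores = cosine_similarity(query_vecs, doc_vecs).ravel()
top_articles = scores.argsort()[::-1][:7]                # top 7 article indices
```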
We thus develop extensions of the Memory Network (Sukhbaatar et al., 2015) and Transformer (Vaswani et al., 2017) models that can (i) retrieve from a large memory relevant information conditioned on the dialogue history, (ii) carefully read and attend over the retrieved set of knowledge, and then (iii) generate the next dialogue utterance. This model is then used consecutively on each turn to form an entire dialogue with a user. We develop two classes of models capable of leveraging knowledge: (i) retrieval models that produce an output among a set of candidate responses (the set of utterances from the training set); and (ii) generative models that generate word-by-word (using a beam). The input to either model is the same: at each dialogue turn where the model is intended to make a response, it is given the current dialogue context x_1, ..., x_t of t dialogue turns, where x_1 is always the initial topic (e.g. “Kurt Cobain”), and the remaining turns swap between the two speakers. The goal at each stage is to output the next utterance x_{t+1}. Knowledge Retrieval We assume a large knowledge base (memory) m_1, ..., m_N which is hierarchically organized into documents consisting of paragraphs and sentences. As it is infeasible for current neural attention techniques to operate on this scale, we use standard information retrieval (IR) techniques (c = IR(x, m)) as a first step to return a smaller set of candidates m_{c_1}, ..., m_{c_K} for fine-grained selection. In our experiments, we use the IR system provided to the human annotators during dataset creation, detailed in Section 3. The retriever operates on the topic (x_1) and the last two turns (x_t and x_{t−1}) if they exist, effectively calling the IR system three times with three different queries. Empirically, this provided better performance compared to merging everything into one query, likely because the separate queries can address quite different topics. We retrieve the top 7 articles (first paragraph only) for each lookup and then flatten all the results into separate sentences (i.e. remove the organization of sentences belonging to articles), but prepend every sentence with its article title. In this way the candidates m_{c_1}, ..., m_{c_K} given to the neural model in the next stage can be attended to independently, without having to deal with hierarchical issues. Knowledge Attention We use an attention mechanism to perform fine-grained selection of which knowledge sentences will be used to produce the next turn of dialogue. Each sentence in the memory is independently encoded with a Transformer encoder (Vaswani et al., 2017), and the same Transformer is used to encode the dialogue context x. We then perform standard dot-product attention between the memory candidates and the dialogue context. Utterance Prediction Given the hidden state derived from the memory attention process described above, the final stage is to predict the output utterance that will form the next dialogue turn. We consider different variants of the two stages above, knowledge attention and utterance prediction, when considering retrieval and generative variants of our models. We will now detail these in turn. 4.1 RETRIEVAL TRANSFORMER MEMORY NETWORK This model encodes each knowledge sentence m_{c_1}, ..., m_{c_K} and the dialogue context x with a Transformer, as described above. The final input encoding is calculated by performing dot-product attention over enc(m_{c_1}), ..., enc(m_{c_K}) and adding the resulting weighted sum of these vectors to enc(x) to get the representation
rep_LHS(m_{c_1}, ..., m_{c_K}, x). The candidate responses r_1, ..., r_L are encoded with a separate Transformer to get rep_RHS(r_i) for each i. We choose as a response r_ℓ, where ℓ = argmax_{i ∈ {1, ..., L}} (rep_LHS(m_{c_1}, ..., m_{c_K}, x) / ‖rep_LHS(m_{c_1}, ..., m_{c_K}, x)‖_2) · (rep_RHS(r_i) / ‖rep_RHS(r_i)‖_2). The model is trained to minimize the cross-entropy loss, where the negative candidates for each example are the responses to the other examples in the batch (Henderson et al., 2017). 4.2 GENERATIVE TRANSFORMER MEMORY NETWORK We consider two versions: a Two-stage and an End-to-end version. Both models find the most relevant piece of knowledge m_best, and then perform an encoding step by concatenating it with the dialogue context, allowing the decoder to attend over both the knowledge and dialogue when formulating its response. We employ a beam search of size 5 to select our best response. All generative models employ BPE encoding (Sennrich et al., 2016), which we found effective at enabling generators to copy rare words from Wikipedia sentences (Fan et al., 2018). In the End-to-end version, a shared Transformer encoder is used to encode all candidates m_{c_i} and the dialogue history. The encoded candidates are flattened into vectors using the normalization from Cer et al. (2018) (summing the token representations and normalizing by the square root of the sentence length to balance short and long sentences) to produce an attention prediction over the memory. The full sequence encoding of the single highest-scoring selected knowledge m_best is concatenated with the encoding of the dialogue, and passed into a Transformer decoder. An illustration of our End-to-end model is shown in Figure 1. We train the model to minimize the negative log-likelihood of the response utterance. We can add additional supervision by forcing the knowledge selection to correctly choose the same knowledge candidate as the human wizard in the training set, adding an additional cross-entropy loss over the knowledge attention, modulated by a weight λ: L = (1 − λ) L_NLL + λ L_knowledge. In the Two-stage version, we employ two separately trained models for each of these two tasks, knowledge selection and utterance prediction. As the knowledge selection step creates a hard decision influencing the output of the generator, we find maximizing the performance of this component to be vital. We can also improve performance of the decoder by employing knowledge dropout (K.D.), wherein we artificially prevent the model from attending to knowledge a fraction of the time during training. We find this helps the generator be more resilient to errors at the knowledge selection stage, and makes training faster. K.D. is a novel technique we propose here; however, it is similar to many other dropout techniques, e.g. the feature dropout used in Wu et al. (2017). 5 EXPERIMENTS We describe each of our experimental setups and results. We first investigate the ability of our models to select knowledge appropriately, and then consider the full task of dialogue with knowledge. 5.1 KNOWLEDGE SELECTION TASK Before looking at the full Wizard dialogue task, we assess the ability of models to predict the knowledge selected by human wizards in the dataset, given the dialogue history. This will inform us of the feasibility of this task and the best models to use in a two-stage architecture. We compare Transformers against various baselines, including a random baseline; an Information Retrieval (IR) baseline, which uses simple word overlap; and a Bag-of-Words Memory Network (Sukhbaatar et al., 2015).
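A minimal PyTorch sketch of the retrieval training objective from Section 4.1 above, with in-batch negatives (Henderson et al., 2017), is given below; the tensor shapes, and the assumption that encodings are precomputed, are ours rather than the authors’.

```python
import torch
import torch.nn.functional as F

def retrieval_loss(lhs, rhs):
    """Cross-entropy over cosine scores with in-batch negatives.

    lhs: (B, d) rep_LHS(m_{c_1}, ..., m_{c_K}, x) for each example
    rhs: (B, d) rep_RHS of the gold responses; rhs[j], j != i, act as
         negative candidates for example i.
    """
    lhs = F.normalize(lhs, dim=-1)                 # unit-norm, as in the argmax
    rhs = F.normalize(rhs, dim=-1)
    logits = lhs @ rhs.t()                         # (B, B) cosine scores
    labels = torch.arange(lhs.size(0), device=lhs.device)
    return F.cross_entropy(logits, labels)         # gold pairs on the diagonal

# At test time, the candidate response with the highest cosine score is chosen.
```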
Where noted, the Transformer is pretrained on Reddit data (Mazaré et al., 2018) and fine-tuned for our task. The results are shown in Table 2. Transformers work best, as long as they are pretrained on a large dataset (Reddit), while multi-tasking on SQuAD provides only marginal impact. Further analysis of this task using other models is provided in Appendix B.1. We use the best performing Transformer model reported here for our two-stage generative Memory Network in the full dialogue task. 5.2 FULL TASK: DIALOGUE WITH KNOWLEDGE We evaluate our models on the full task of dialogue generation given knowledge in two settings: given the gold knowledge sentence chosen by a human, or where the model needs to predict which knowledge to use. We separately describe experiments for retrieval and generative models. Retrieval Experiments We use baselines similar to those in the knowledge selection experiments, but now also apply Transformer Memory Networks, which attend over knowledge. Models are evaluated by measuring Recall@1 when ranking the gold response among 99 randomly chosen candidates, and unigram F1 of the model’s prediction with the gold response. The results are shown in Table 3. We find that the addition of knowledge improves all models (improving BoW MemNet from 56 to 71 R@1 and the Transformer MemNet from 79 to 87 R@1) for predicted knowledge. Performance improves dramatically when models are provided gold knowledge, but the relative trends otherwise remain similar. Generative Experiments We compare our generative End-to-end and Two-stage Transformer Memory Network models to two more baselines: repeating the last utterance, and a generative Transformer model trained to respond to dialogue but without access to knowledge. Models are evaluated using perplexity (PPL) of the gold response and unigram F1. The results are given in Table 4. Our experiments show that both the End-to-end and Two-stage models employ the knowledge in their response predictions, as they outperform their counterpart Transformer without knowledge, and demonstrate substantial improvements when provided the gold knowledge. While the Two-stage model produces significantly stronger perplexity and F1 scores using the predicted knowledge, the End-to-end model outperforms the Two-stage model in the gold knowledge experiments. This suggests that the Two-stage model benefits from the strong knowledge selection module (Section 5.1), but that the End-to-end model is better at employing the selected knowledge. Furthermore, we find that the additional knowledge selection supervision (auxiliary loss) in the End-to-end model improves it on every metric, suggesting that tightly integrating these tasks is beneficial. Knowledge dropout (K.D.) also helps (compare the last two rows). More evidence for this is shown in Appendix B.1. Lastly, we note that both Two-stage models give higher F1 scores than any of the retrieval models shown in Table 3. 5.3 HUMAN EVALUATION We perform human evaluation of our models using crowd-sourced workers. Humans are paired with our models and asked to chat about a specific topic (given a choice of 2–3 topics) for 3–5 dialogue turns. Following their conversation, the humans are asked to rate their dialogue partner on a scale of 1–5, with the rating indicating how much they “liked” the conversation (5 is best), which we refer to as the engagingness rating.
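The Recall@1 evaluation used in the retrieval experiments above can be sketched as follows, where `score` is a hypothetical stand-in for the model’s scoring function:

```python
import random

def recall_at_1(score, context, gold_response, candidate_pool, n_distractors=99):
    """1 if the gold response ranks first among itself plus 99 random distractors."""
    distractors = random.sample(candidate_pool, n_distractors)
    candidates = [gold_response] + distractors
    best = max(candidates, key=lambda r: score(context, r))
    return int(best == gold_response)
```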
Using the collected conversations, we also calculate a metric we call the Wiki F1 score: the F1 overlap of the model’s utterances with the first 10 sentences of the Wikipedia page for the chosen topic, as a proxy for how much knowledge the model exhibits. We seek a model that can be simultaneously engaging and knowledgeable, hence we would like to maximize both of these metrics (a model could, for example, display knowledge by copying parts of Wikipedia while not being engaging at all). For comparison, we also collect 100 human-human conversations, with only one human choosing the topic and performing evaluation. In total, we collect 546 conversations with ratings from 464 distinct workers. These results are shown in Table 5. We find that the retrieval models significantly outperform the generative models on the human engagingness evaluation (Student’s t-test, p < .05). The human engagingness differences between retriever models with and without knowledge are not significant, but note they both trend toward use of knowledge due to the candidate sentences retrieved, with the knowledgeable version obtaining significantly higher Wiki F1 scores in both seen and unseen test sets. For the generative models, we find human engagingness ratings are significantly improved by the use of knowledge (p < .01). The significantly higher Wiki F1 scores indicate that (i) these models convey more knowledge than their counterparts without knowledge conditioning; and (ii) on both seen and unseen sets they convey more knowledge than the retrieval models. In particular, on unseen data the gap between retrieval and generative models is larger. This is understandable, as retrieval models are limited to producing a response from the training set, where the unseen topic did not appear. There is still a considerable gap between human ratings of other humans and the ratings of all our models (first row of Table 5). Figure 2 shows example conversations with the retrieval and generative models. Additional analysis and examples can be found in Appendices B.3 and C. 6 CONCLUSION In this work we build dialogue agents which are able to employ large memory systems containing encyclopedic knowledge about the world in order to conduct engaging open-domain conversations. We develop a set of architectures, Transformer Memory Network models, that are capable of retrieving and attending to such knowledge and outputting a response, either in retrieval or generative modes. To train and evaluate such models, we collect the Wizard of Wikipedia dataset, a large collection of open-domain dialogues grounded by Wikipedia knowledge, and demonstrate the effectiveness of our models in automatic and human experiments. Our new publicly available benchmark aims to encourage further model exploration, and we expect such efforts will result in significant advances in this important research direction. There is much future work to be explored using our task and dataset. Some directions include: (i) bridging the gap between the engagingness of retrieval responses and the ability of generative models to work on new knowledge and topics; (ii) learning to retrieve and reason simultaneously rather than using a separate IR component; and (iii) investigating the relationship between knowledge-grounded dialogue and existing QA tasks which also employ such IR systems. The aim is for those strands to come together to obtain an engaging and knowledgeable conversational agent.
A DATASET COLLECTION A.1 HUMAN ANNOTATION INTERFACE (FOR WIZARD) A.2 WIZARD OF WIKIPEDIA EXAMPLES A.3 TOPIC SELECTION To choose topics that are natural, we employed the existing Persona-Chat dataset (Zhang et al., 2018), where crowdworkers were asked to create personas of typical speakers. There are ∼1000 personas, each of which consists of 4-5 sentences describing that person’s interests, e.g. “I love watching Game of Thrones”, “I like to eat cheetos” and “I recently got a cat”. These can thus naturally be seen as topics of interest, and using another set of annotators we mapped each sentence to 1 or more relevant Wikipedia pages, if possible, e.g. “Ariel is my favorite princess” was labeled with the Wikipedia page for The Little Mermaid. As some sentences are harder to connect with Wikipedia, e.g. “I am witty”, they are left unlabeled. We thus obtain 1,431 topics in total to use for our task. We retain the persona topic sets and thus present 2-3 related topic choices as conversation starters per dialogue during data collection. B ADDITIONAL EXPERIMENTS B.1 KNOWLEDGE SELECTION B.2 FULL DIALOGUE WITH RETRIEVAL B.3 HUMAN EXPERIMENTS C ERROR ANALYSIS We perform an analysis of the dialogues produced from the human evaluation experiments detailed in Section 5.3. We sample 20 conversations from each experimental setting, split between seen and unseen. Conversations are re-tokenized and lowercased to reduce superficial differences between models, and then analyzed in a single-blind setup. We note common errors and behaviors exhibited in each of the different conversations. In general, the human-human conversations are starkly different from any of the bot conversations – humans tend to engage in more small talk, or use the topic of discussion as a mere icebreaker, with neither human behaving as a wizard. This is in contrast to human-human conversations from the Wizard dataset itself, where one human has access to Wikipedia, and the conversation becomes more grounded in factual sentences. Similarly, all models attempt to play the role of wizard and produce more factual sentences too. In some rounds, humans treat the bot as a sort of question-answer machine, suggesting that the models could be improved by additionally employing SQuAD-like training data. The retriever without knowledge is particularly prone to non sequiturs, or rapidly changing the subject. During unseen conversations, it is especially likely to discuss something other than the chosen topic. In contrast, the retriever with knowledge tends to stick to the chosen topic strongly, but has difficulty if the human changes the subject. Frequently in unseen topics, the retriever with knowledge produces similar, but factually inaccurate, answers to user queries. For example, when one user asks about parts of Ireland to visit, the model enumerates a list of locations in Greece. Nonetheless, its repertoire of available responses often includes inviting responses, allowing the bot to have a more natural conversational flow. Selected conversations with the retriever with knowledge may be found in Figure 4, for both seen and unseen topics. The generator without knowledge is particularly prone to many of the typical behaviors of seq2seq systems (Li et al., 2016; Vijayakumar et al., 2016), including local repetition (“cookies are made of flour, oil, oil, and oil”), global repetition (producing nearly the same utterance for multiple turns), or inconsistencies in its personality (saying it both likes and dislikes movies).
The generator with knowledge has significantly fewer issues with repetition, as it errs on the side of copying large fragments from the Wikipedia knowledge. The generator with knowledge can also act as a selfish conversationalist, choosing to respond with or detail information without inviting a response. Although it generally produces accurate statements, it sometimes uses an incorrect date, name, or word. It also frequently produces formulaic responses, like “I don’t know, but I do know that [Wikipedia excerpt]”. Nonetheless, we find the generator with knowledge is able to successfully generalize to unseen topics using the knowledge from Wikipedia. Selected conversations with the generator with knowledge may be found in Figure 5.
1. What is the main motivation behind developing a knowledgeable bot, and how could it be applied in real-world scenarios? 2. How can the chatbot effectively handle divergent conversations without a specific goal or restriction? 3. Can the authors provide further analysis of the dataset, such as interesting applications or post-analysis on the types of responses from the apprentices? 4. How did the authors ensure annotator engagement and filter out bad dialogs during data collection? 5. How does the proposed model differ from previous works, and have the authors tried alternative approaches for dealing with the knowledge part? 6. Are there any experiments to demonstrate the effect of different lambda values in the loss of the generative model? 7. Have the authors considered using automatic metrics like BLEU instead of PPL and Unigram-F1 for evaluating the generative model? 8. How can the authors improve the human evaluation metric to better assess the effectiveness of the dialog system? 9. Has the author attempted training a model for the apprentice role and having two models chat with each other?
Review
Review This paper collects a new annotated dataset for the knowledge-grounded dialog task. The proposed models combine two recent neural networks, Memory Net and Transformer, for the purpose of the task. I highly appreciate the efforts to collect such a precious dialog dataset for the community. Also, the setup in data collection actually narrows down the scope of chitchat dialog to a specific topic by grounding it in a set of knowledge. Here are summaries of my concerns and questions about the paper. # applicability of the knowledgeable bot What is the basic motivation of this work? Once you develop a chatbot that can produce a response grounded in knowledge, how could it be applied to real-world applications? Are you trying to teach a student who is looking for more knowledge about a topic? If so, you should be more careful about what knowledge the student (or apprentice in the paper) knows or doesn’t know about the topic and how their knowledge model dynamically changes over the chat. Otherwise, the proposed model seems to be a simple knowledge retrieval model given the dialog context. Would you please provide the motivations of the work? # No explicit goal of a dialog makes the chat divergent and open-ended Without a specific goal given to the annotators or a restriction in the instructions, a dialog in the current setting might diverge beyond the context. For example, if an apprentice states her/his personal opinion about the topic (e.g., I hate Gouda cheese) or a past experience (e.g., I went to a music festival by Michael Jackson 23 years ago), then how do you control the chat between two annotators, or how do you train a model not to pay much attention to out-of-topic utterances? # Lack of further analysis of the dataset The data collection part itself seems to be the biggest contribution of this work. Why don’t you bring one of the real dialog examples in Figure 3 to the main paper and say more about it? For example, what other interesting applications can you develop on this dataset? Compared to the Wizard, the role of the apprentice seems unclear to me. I found from the examples in Figure 3 that most of the apprentices’ responses are a follow-up question about the knowledge, a personal agreement or feeling, or their preference. Do you have any post-hoc analysis on the types of responses from the apprentices, thus highlighting the utility of the dataset in a real application? # Some questions on data collection Do you have any incentive mechanism to make annotators engage more in the dialog? Did you filter out some bad dialogs? If so, how did you measure the quality of a dialog? How do you penalize bad annotators that often use aggressive words or don’t follow the instructions you set up? # A question on the model Compared to previous works such as (Zhang et al., ACL18), the proposed model seems to differ only in the replacement of the encoder with a Transformer and a loss term for knowledge selection. Have you tried another way of dealing with the knowledge part? For example, a ranking loss might be better than the attention. # Questions on the Experiment section Is there any experiment showing the effect of different \lambda values in the loss of the generative model? When you evaluate the generative model, have you also tried other automatic metrics such as BLEU instead of only PPL and Unigram-F1? For this task, though, the possible responses grounded in the topic+knowledge might be too diverse to measure.
Could you possibly add some constraints requiring the annotators to do some clear tasks over the dialog, so you can systematically evaluate the dialog w.r.t. the constraint? Otherwise, evaluation of this task seems to be mostly the same as for chitchat systems. In Table 5, human evaluators only measure how much they liked the dialog, which seems very naive. Why don’t you measure whether the apprentice gains new knowledge that s/he didn’t know before, whether the knowledge provided by the model was informative, whether the dialog was fun and engaging, or more? The current human evaluation seems very weak. This might be an auxiliary question: have you tried to train a model for the apprentice and make two models chat with each other? What does the chat look like then?
ICLR
Title Wizard of Wikipedia: Knowledge-Powered Conversational Agents Abstract In open-domain dialogue, intelligent agents should exhibit the use of knowledge; however, there are few convincing demonstrations of this to date. The most popular sequence-to-sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction. 1 INTRODUCTION Arguably, one of the key goals of AI, and the ultimate goal of natural language research, is for humans to be able to talk to machines. In order to get close to this goal, machines must master a number of skills: to comprehend language, to employ memory to retain and recall knowledge, to reason about these concepts together, and finally to output a response that fulfills functional goals in the conversation while simultaneously being captivating to their human speaking partner. The current state-of-the-art approaches, sequence-to-sequence models of various kinds (Sutskever et al., 2014; Vinyals & Le, 2015; Serban et al., 2016; Vaswani et al., 2017), attempt to address some of these skills, but generally suffer from an inability to bring memory and knowledge to bear; as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output. To converse intelligently on a given topic, a speaker clearly needs knowledge of that subject, and it is our contention here that more direct knowledge memory mechanisms need to be employed. In this work we consider setups where this can be naturally measured and built. We consider the task of open-domain dialogue, where two speakers conduct open-ended chit-chat given an initial starting topic, and during the course of the conversation the topic can broaden or focus on related themes. During such conversations, an interlocutor can glean new information and personal points of view from their speaking partner, while providing similarly themselves. This is a challenging task as it requires several components not found in many standard models. We design a set of architectures specifically for this goal that combine elements of Memory Network architectures (Sukhbaatar et al., 2015) to retrieve knowledge and read and condition on it, and Transformer architectures (Vaswani et al., 2017) to provide state-of-the-art text representations and sequence models for generating outputs, which we term Transformer Memory Networks.
As, to our knowledge, no public domain dataset of requisite scale exists, we build a supervised dataset of human-human conversations using crowd-sourced workers, first crowd-sourcing 1365 diverse discussion topics and then conversations involving 201,999 utterances about them. Each topic is connected to Wikipedia, and one of the humans (the wizard) is asked to link the knowledge they use to sentences from existing articles. In this way, we have both a natural way to train a knowledgeable conversation agent, by employing a memory component that can recall and ground on this existing text, and a natural way to evaluate the models that we build, by assessing their ability at locating and using such knowledge. Our Transformer Memory Network architectures, both in retrieval and generative versions, are tested in this setup using both automatic metrics and human evaluations. We show their ability to conduct engaging, knowledgeable conversations with humans, compared to a number of baselines such as standard Memory Networks or Transformers. Our new benchmark, publicly available in ParlAI (http://parl.ai/projects/wizard_of_wikipedia/), aims to encourage and measure further improvements in this important research direction. 2 RELATED WORK Many existing dialogue tasks do not study the use of knowledge explicitly. For example, popular chit-chat datasets such as Open-Subtitles (Vinyals & Le, 2015), Persona-Chat (Zhang et al., 2018) and Twitter (Sordoni et al., 2015) have tested the ability of sequence-to-sequence models that attend over the recent dialogue history, but do not attempt to recall long-term knowledge beyond encoding it directly into the weights of the feed-forward network. In the area of goal-directed dialogue, separate from open-domain chit-chat, such as airline (El Asri et al., 2017) or restaurant booking (Henderson et al., 2014; Wen et al., 2016; Bordes et al., 2017), knowledge conditioning is typically employed by allowing access to a database through API calls or otherwise. In contrast, our work investigates unstructured knowledge across a large, diverse set of topics potentially spanning all of Wikipedia. In question answering one does not produce a dialogue response based on a conversation history, but a factual answer based on a question. In that case, it is clear that retrieving and conditioning on knowledge is vital. For example, in SQuAD neural models have been developed that attend to a given paragraph from Wikipedia to answer questions (Rajpurkar et al., 2016), or Open-SQuAD, which extends this to answering without being given the paragraph, instead performing retrieval over the entirety of Wikipedia (Chen et al., 2017). Recently, the QuAC dataset investigates similar themes, but as a sequence of questions and answers in dialogue form instead (Choi et al., 2018). In this work we do not address question answering, but focus on natural human dialogues which contain a diverse set of utterances, not just questions and answers. The closest work to ours lies in the area of non-goal-directed dialogue incorporating knowledge. The work of Dodge et al. (2016) employed Memory Networks to perform dialogue discussing movies in terms of recommendation and open-ended discussion from Reddit, conditioning on a structured knowledge base. Zhou et al. (2018) also links Reddit to structured knowledge. Both Parthasarathi & Pineau (2018) and Ghazvininejad et al.
(2018) use unstructured text instead, as we do: the former to discuss news articles using Wikipedia summaries as knowledge, and the latter to discuss local businesses in two-turn dialogues using Foursquare tips as knowledge. Ghazvininejad et al. (2018) uses an extended Encoder-Decoder where the decoder is provided with an encoding of the context along with the external knowledge encoding. Neither involves dialogue authored with the given knowledge, so it is unclear when knowledge is useful or not. In contrast, in our task, we know the Wikipedia articles and sentences that ground the crowdworkers’ dialogues. Model-wise, Parthasarathi & Pineau (2018) uses a Bag-of-Words Memory Network type fact encoder and an RNN decoder. Our work compares Memory Networks (Sukhbaatar et al., 2015) and Transformers, which have been shown to be on par with or superior to RNN encoder-decoders (Vaswani et al., 2017), and develops an architecture that combines these approaches. Concurrently with our work, Moghe et al. (2018) proposed a dataset based on the closed domain of movie chats. Our paper shows models working on full multi-turn dialogue in an open-domain setting, which to our knowledge was not shown before. 3 WIZARD OF WIKIPEDIA We consider the following general open-domain dialogue setting: two participants engage in chit-chat, with one of the participants selecting a beginning topic, and during the conversation the topic is allowed to naturally change. The two participants, however, are not quite symmetric: one will play the role of a knowledgeable expert (which we refer to as the wizard) while the other is a curious learner (the apprentice). Apprentice At each stage of the conversation the apprentice talks to the wizard freely, playing the role of a curious learner, eager to chat. Their goal is to go into depth about a chosen topic that interests themselves or their partner, while keeping the conversation engaging and fun. Note that the instruction to delve deeply into a topic makes this different from more “shallow” chit-chat tasks; in this task the use of knowledge is emphasized more. Wizard The wizard is given the following instructions: “You have just met the other person, who seems quite curious, and you are eager to discuss a topic with them!” Their goal is to inform their conversation partner about a topic that one of them will choose. Crucially, the wizard has access to an information retrieval system that shows them paragraphs from Wikipedia possibly relevant to the conversation, which are unobserved by the apprentice. Before each conversation turn the wizard can read these paragraphs and then potentially base their next reply on that observed knowledge. Note, the wizard is particularly instructed not to simply parrot this knowledge, but to use it to craft a relevant reply, and to present any relevant knowledge in a fun and engaging way, if possible. Conversation Flow The flow of the conversation thus takes place as follows. 1. Either the wizard or apprentice is picked to choose the topic and speak first. The other player receives the topic information, and the conversation begins. 2. When the apprentice sends the wizard a message, the wizard is shown relevant knowledge (described below), and chooses a relevant sentence in order to construct a response, or else chooses the no sentence used option. 3. The Wizard responds to the apprentice basing their response on their chosen sentence. 4.
The conversation repeats until one of the conversation partners ends the chat (after a minimum of 4 or 5 turns each, randomly chosen beforehand). After collecting data of such wizard-apprentice conversations between humans, the goal is to then replace the human wizard with a learned agent that will speak to a human apprentice instead, similar to the procedure in Wizard of Oz experiments (Bernsen et al., 2012). Topics We crowd-sourced a set of 1365 natural, open-domain dialogue topics, each linked to a Wikipedia article. These include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger. Knowledge Retrieval At each step of the dialogue the wizard has access to a set of passages of knowledge which may be relevant to the given dialogue context. While this is a potentially learnable part of the model, we required this to be fixed so that we could present the results to the annotator when collecting the dataset. We thus used exactly the same retriever that is commonly used for the Open-SQuAD dataset in Chen et al. (2017). It uses a simple inverted index lookup followed by term vector model scoring. Articles and queries are compared as TF-IDF weighted bag-of-words and n-gram vectors, using the hashing trick. We retrieve the top 7 articles (first paragraph only) for the last two turns of dialogue (by wizard and apprentice) and the article (first 10 sentences only) for the original topic, and present these articles to the wizard as knowledge context, along with their titles. Note that while this system is used to build the dataset, a superior method can in principle be learned and used by a model at test time. Knowledge Selection and Response Generation During data collection, the wizard can click on any of the retrieved article titles in the dialogue UI to expand that article, at which point they can click on a sentence that is most relevant to the response they want to make (only one article, and one sentence, can be selected on any turn, for simplicity). If they see no relevant article or sentence they can choose no sentence used instead. The wizard then enters their response to the apprentice. An image of the Wizard's UI is shown in Appendix A.1. Final Dialogue Dataset The final dialogue dataset we collect consists of 22,311 dialogues with 201,999 turns, which we divide into 166,787 for train, 17,715 for validation, and 17,497 for test. The test set is split into two subsets, Test Seen and Test Unseen. Test Seen contains 533 topics that overlap with the training set, with new dialogues about those topics. Test Unseen consists of 58 topics never seen before in train or validation. Overall data statistics can be found in Table 1, and further statistics and examples of collected conversations in Appendix A.2. We observe wizards and apprentices both asking and answering questions, and providing each other with a mixture of facts and personal feelings during their general discussion.
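For concreteness, the retrieval step described above can be approximated in a few lines. The sketch below is an illustrative stand-in for the Chen et al. (2017) retriever, not the actual system: the class name, the feature dimension and the use of scikit-learn are our assumptions, and the real retriever has its own inverted index and n-gram hashing.

import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.metrics.pairwise import linear_kernel

class TfidfRetriever:
    """Toy stand-in for the Open-SQuAD retriever: hashed unigram+bigram
    counts, TF-IDF weighting, and dot-product scoring over articles."""
    def __init__(self, articles, n_features=2**20):
        self.articles = articles  # list of (title, first_paragraph) pairs
        self.hasher = HashingVectorizer(ngram_range=(1, 2),
                                        alternate_sign=False,
                                        n_features=n_features)
        counts = self.hasher.transform([p for _, p in articles])
        self.tfidf = TfidfTransformer().fit(counts)
        self.index = self.tfidf.transform(counts)  # articles x features

    def retrieve(self, query, k=7):
        q = self.tfidf.transform(self.hasher.transform([query]))
        scores = linear_kernel(q, self.index).ravel()
        top = np.argsort(-scores)[:k]
        return [self.articles[i] for i in top]

# One call per query: the chosen topic, the last wizard turn,
# and the last apprentice turn, as described in the text above.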
4 MODELS In this work we consider learning dialogue models to replace the wizard in our learning tasks, i.e. the knowledgeable speaker. The dialogue model thus has access to a knowledge source, in this case Wikipedia, to ground the conversation. We thus develop extensions of the Memory Network (Sukhbaatar et al., 2015) and Transformer (Vaswani et al., 2017) models that can (i) retrieve from a large memory relevant information conditioned on the dialogue history, (ii) carefully read and attend over the retrieved set of knowledge, and then (iii) generate the next dialogue utterance. This model is then used consecutively on each turn to form an entire dialogue with a user. We develop two classes of models capable of leveraging knowledge: (i) retrieval models that produce an output among a set of candidate responses (the set of utterances from the training set); and (ii) generative models that generate word-by-word (using a beam). The input to either model is the same: at each dialogue turn where the model is intended to make a response, it is given the current dialogue context x_1, ..., x_t of t dialogue turns, where x_1 is always the initial starting topic (e.g. "Kurt Cobain"), and the remaining turns swap between the two speakers. The goal at each stage is to output the next utterance x_{t+1}. Knowledge Retrieval We assume a large knowledge base (memory) m_1, ..., m_N which is hierarchically organized into documents consisting of paragraphs and sentences. As it is infeasible for current neural attention techniques to operate on this scale, we use standard information retrieval (IR) techniques (c = IR(x, m)) as a first step to return a smaller set of candidates m_{c_1}, ..., m_{c_K} for fine-grained selection. In our experiments, we use the IR system provided to the human annotators during dataset creation, detailed in Section 3. The retriever operates on the topic (x_1) and the last two turns (x_t and x_{t-1}) if they exist, effectively calling the IR system three times with three different queries. Empirically, this provided better performance compared to merging into one query, likely because it can address quite different topics. We retrieve the top 7 articles (first paragraph only) for each lookup and then flatten all the results into separate sentences (i.e. remove the organization of sentences belonging to articles), but prepend every sentence with its article title. In this way the candidates m_{c_1}, ..., m_{c_K} given to the neural model in the next stage can be attended to independently without having to deal with hierarchical issues. Knowledge Attention We use an attention mechanism to perform fine-grained selection of which knowledge sentences will be used to produce the next turn of dialogue. Each sentence in the memory is independently encoded with a Transformer encoder (Vaswani et al., 2017), and the same Transformer is used to encode the dialogue context x. We then perform standard dot-product attention between the memory candidates and the dialogue context. Utterance Prediction Given the hidden state derived from the memory attention process described above, the final stage is to predict the output utterance that will form the next dialogue turn. We consider different variants of the two stages above, knowledge attention and utterance prediction, when considering retrieval and generative variants of our models. We will now detail these in turn. 4.1 RETRIEVAL TRANSFORMER MEMORY NETWORK This model encodes each knowledge sentence m_{c_1}, ..., m_{c_K} and the dialogue context x with a Transformer, as described above. The final input encoding is calculated by performing dot-product attention over enc(m_{c_1}), ..., enc(m_{c_K}) and adding the resulting weighted sum of these vectors to enc(x) to get the representation rep_LHS(m_{c_1}, ..., m_{c_K}, x).
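A minimal sketch of the dot-product knowledge attention and the resulting input representation just described; the softmax normalization of the attention weights and the encoder enc(.) are assumptions filled in around what the text specifies.

import torch
import torch.nn.functional as F

def attend_over_knowledge(enc_x, enc_mem):
    """enc_x: (d,) dialogue-context encoding; enc_mem: (K, d) encodings of
    the K candidate knowledge sentences (same Transformer encoder for both).
    Returns rep_LHS = enc(x) + attention-weighted sum of memory encodings."""
    attn = F.softmax(enc_mem @ enc_x, dim=0)   # (K,) dot-product attention
    weighted_sum = attn @ enc_mem              # (d,) weighted sum of memories
    return enc_x + weighted_sum                # rep_LHS(m_{c_1..c_K}, x)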
The candidate responses r_1, ..., r_L are encoded with a separate Transformer to get rep_RHS(r_i) for each i. We choose as a response the candidate r_l whose normalized encoding has the highest dot product (cosine similarity) with the normalized input encoding:

l = argmax_{i in {1,...,L}} [rep_LHS(m_{c_1}, ..., m_{c_K}, x) / ||rep_LHS(m_{c_1}, ..., m_{c_K}, x)||_2] . [rep_RHS(r_i) / ||rep_RHS(r_i)||_2].

The model is trained to minimize the cross-entropy loss, where the negative candidates for each example are the responses to the other examples in the batch (Henderson et al., 2017). 4.2 GENERATIVE TRANSFORMER MEMORY NETWORK We consider two versions: a Two-stage and an End-to-end version. Both models find the most relevant piece of knowledge m_best, and then perform an encoding step by concatenating it with the dialogue context, allowing the decoder to attend over both the knowledge and dialogue when formulating its response. We employ a beam search of size 5 to select our best response. All generative models employ BPE encoding (Sennrich et al., 2016), which we found effective at enabling generators to copy rare words from Wikipedia sentences (Fan et al., 2018). In the End-to-end version, a shared Transformer encoder is used to encode all candidates m_{c_i} and the dialogue history. The encoded candidates are flattened into vectors using the normalization from Cer et al. (2018) (summing, and normalizing by the square root of the sentence length in order to balance short and long sentences) to produce an attention prediction over the memory. The full sequence encoding of the single highest-scoring selected knowledge m_best is concatenated with the encoding of the dialogue, and passed into a Transformer decoder. An illustration of our End-to-end model is shown in Figure 1. We train the model to minimize the negative log-likelihood of the response utterance. We can add additional supervision by forcing the knowledge selection to correctly choose the same knowledge candidate as the human wizard in the training set, by adding an additional cross-entropy loss over the knowledge attention, modulated by a weight λ: L = (1 − λ) L_NLL + λ L_knowledge. In the Two-stage version, we employ two separately trained models for each of these two tasks, knowledge selection and utterance prediction. As the knowledge selection step creates a hard decision influencing the output of the generator, we find maximizing the performance of this component to be vital. We can also improve performance of the decoder by employing knowledge dropout (K.D.), wherein we artificially prevent the model from attending to knowledge a fraction of the time during training. We find this helps the generator be more resilient to errors at the knowledge selection stage, and makes training faster. K.D. is a novel technique we propose here; however, it is similar to many other dropout techniques, e.g. the feature dropout used in Wu et al. (2017).
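A minimal sketch of the End-to-end training objective and of knowledge dropout follows; the tensor shapes, the default λ, and the dropout rate p are our assumptions, since the text above does not fix them.

import torch
import torch.nn.functional as F

def end_to_end_loss(gen_logits, target_tokens, knowledge_scores,
                    gold_knowledge_idx, lam=0.5):
    """L = (1 - lam) * L_NLL + lam * L_knowledge, both cross-entropies.
    gen_logits: (T, V) decoder logits over the vocabulary;
    knowledge_scores: (K,) attention logits over candidate sentences;
    gold_knowledge_idx: index of the sentence the human wizard chose."""
    l_nll = F.cross_entropy(gen_logits, target_tokens)
    l_know = F.cross_entropy(knowledge_scores.unsqueeze(0),
                             torch.tensor([gold_knowledge_idx]))
    return (1 - lam) * l_nll + lam * l_know

def knowledge_dropout(enc_mem, p=0.3, training=True):
    """K.D.: with probability p (our guess), hide the knowledge for this
    example so the decoder learns to survive knowledge-selection errors."""
    if training and torch.rand(()) < p:
        return torch.zeros_like(enc_mem)
    return enc_mem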
5 EXPERIMENTS We describe each of our experimental setups and results. We first investigate the ability of our models to select knowledge appropriately, and then consider the full task of dialogue with knowledge. 5.1 KNOWLEDGE SELECTION TASK Before looking at the full Wizard dialogue task, we assess the ability of models to predict the knowledge selected by human wizards in the dataset given the dialogue history. This will inform us of the feasibility of this task and the best models to use in a two-stage architecture. We compare Transformers against various baselines including a random baseline; an Information Retrieval (IR) baseline, which uses simple word overlap; and a Bag-of-Words Memory Network (Sukhbaatar et al., 2015). Where noted, the Transformer is pretrained on Reddit data (Mazaré et al., 2018), and fine-tuned for our task. The results are shown in Table 2. Transformers work best, as long as they are pretrained on a large dataset (Reddit), while multi-tasking on SQuAD provides marginal impact. Further analysis of this task using other models is provided in Appendix B.1. We use the best performing Transformer model reported here for our two-stage generative Memory Network in the full dialogue task. 5.2 FULL TASK: DIALOGUE WITH KNOWLEDGE We evaluate our models on the full task of dialogue generation given knowledge in two settings: given the gold knowledge sentence chosen by a human, or where the model needs to predict which knowledge to use. We separately describe experiments for retrieval and generative models. Retrieval Experiments We use similar baselines as in the knowledge selection experiments, but now also apply Transformer Memory Networks, which attend over knowledge. Models are evaluated measuring Recall@1 when ranking the gold response among 99 randomly chosen candidates, and unigram F1 of the model's prediction with the gold response. The results are shown in Table 3. We find that the addition of knowledge improves all models (improving BoW MemNet from 56 to 71 R@1 and the Transformer MemNet from 79 to 87 R@1) for predicted knowledge. Performance improves dramatically when models are provided gold knowledge, but otherwise retains similar trends. Generative Experiments We compare our generative End-to-end and Two-stage Transformer Memory Network models to two more baselines: repeating the last utterance, and a generative Transformer model trained to respond to dialogue but without access to knowledge. Models are evaluated using perplexity (PPL) of the gold response and unigram F1. The results are given in Table 4. Our experiments show that both the End-to-end and Two-stage models employ the knowledge in their response predictions, as they outperform their counterpart Transformer without knowledge, and demonstrate substantial improvements when provided the gold knowledge. While the Two-stage model produces significantly stronger perplexity and F1 scores using the predicted knowledge, the End-to-end model outperforms the Two-stage model in the gold knowledge experiments. This suggests that the Two-stage model benefits from the strong knowledge selection module (Section 5.1), but that the End-to-end model is better at employing the selected knowledge. Furthermore, we find that the additional knowledge selection supervision (auxiliary loss) in the End-to-end model improves it on every metric, suggesting that tightly integrating these tasks is beneficial. Knowledge dropout (K.D.) also helps (compare the last two rows). More evidence for this is shown in Appendix B.1. Lastly, we note that both Two-stage models give higher F1 scores than any of the retrieval models shown in Table 3. 5.3 HUMAN EVALUATION We perform human evaluation of our models using crowd-sourced workers. Humans are paired with our models and asked to chat about a specific topic (given a choice of 2–3 topics) for 3–5 dialogue turns. Following their conversation, the humans are asked to rate their dialogue partner on a scale of 1–5, with the rating indicating how much they "liked" the conversation (5 is best), which we refer to as the engagingness rating.
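As a concrete reference for the unigram F1 used in the automatic evaluations above (and in the Wiki F1 variant defined next), here is a minimal sketch of the standard token-overlap F1; the lowercasing and whitespace tokenization are our assumptions about the normalization.

from collections import Counter

def unigram_f1(prediction, reference):
    """Unigram F1 between a predicted and a gold utterance."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)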
Using the collected conversations, we also calculate a metric we call the Wiki F1 score: the F1 overlap of the model's utterances with the first 10 sentences of the Wikipedia page for the chosen topic, as a proxy for how much knowledge the model exhibits. We seek a model that can be simultaneously engaging and knowledgeable, hence we would like to maximize both of these metrics (a model could, for example, display knowledge by simply copying parts of Wikipedia while not being engaging at all). For comparison, we also collect 100 human-human conversations, with only one human choosing the topic and performing evaluation. In total, we collect 546 conversations with ratings from 464 distinct workers. These results are shown in Table 5. We find that the retrieval models significantly outperform the generative models on the human engagingness evaluation (Student's t-test, p < .05). The human engagingness differences between retriever models with and without knowledge are not significant, but note that both trend toward use of knowledge due to the candidate sentences retrieved, with the knowledgeable version obtaining significantly higher Wiki F1 scores in both seen and unseen test sets. For the generative models, we find human engagingness ratings are significantly improved by the use of knowledge (p < .01). The significantly higher Wiki F1 scores indicate that (i) these models convey more knowledge than their counterparts without knowledge conditioning; and (ii) on both seen and unseen sets they convey more knowledge than the retrieval models. In particular, on unseen data the gap between retrieval and generative models is larger. This is understandable, as retrieval models are limited to producing a response from the training set, where the unseen topic did not appear. There is still a considerable gap to human ratings of other humans compared to all our models (first row of Table 5). Figure 2 shows example conversations with the retrieval and generative models. Additional analysis and examples can be found in Appendices B.3 and C. 6 CONCLUSION In this work we build dialogue agents which are able to employ large memory systems containing encyclopedic knowledge about the world in order to conduct engaging open-domain conversations. We develop a set of architectures, Transformer Memory Network models, that are capable of retrieving and attending to such knowledge and outputting a response, either in retrieval or generative modes. To train and evaluate such models, we collect the Wizard of Wikipedia dataset, a large collection of open-domain dialogues grounded by Wikipedia knowledge, and demonstrate the effectiveness of our models in automatic and human experiments. Our new publicly available benchmark aims to encourage further model exploration, and we expect such efforts will result in significant advances in this important research direction. There is much future work to be explored using our task and dataset. Some directions include: (i) bridging the gap between the engagingness of retrieval responses and the ability of generative models to work on new knowledge and topics; (ii) learning to retrieve and reason simultaneously rather than using a separate IR component; and (iii) investigating the relationship between knowledge-grounded dialogue and existing QA tasks which also employ such IR systems. The aim is for those strands to come together to obtain an engaging and knowledgeable conversational agent.
A DATASET COLLECTION A.1 HUMAN ANNOTATION INTERFACE (FOR WIZARD) A.2 WIZARD OF WIKIPEDIA EXAMPLES A.3 TOPIC SELECTION To choose topics that are natural we employed the existing Persona-Chat dataset (Zhang et al., 2018), where crowdworkers were asked to create personas of typical speakers. There are ∼1000 personas, each of which consists of 4-5 sentences describing that person's interests, e.g. "I love watching Game of Thrones", "I like to eat cheetos" and "I recently got a cat". These can thus naturally be seen as topics of interest, and using another set of annotators we mapped each sentence to 1 or more relevant Wikipedia pages, if possible, e.g. "Ariel is my favorite princess" was labeled with the Wikipedia page for The Little Mermaid. As some sentences are harder to connect with Wikipedia, e.g. "I am witty", they are left unlabeled. We thus obtain 1,431 topics in total to use for our task. We retain the persona topic sets and thus present 2-3 related topic choices as conversation starters per dialogue during data collection. B ADDITIONAL EXPERIMENTS B.1 KNOWLEDGE SELECTION B.2 FULL DIALOGUE WITH RETRIEVAL B.3 HUMAN EXPERIMENTS C ERROR ANALYSIS We perform an analysis of the dialogues produced from the human evaluation experiments detailed in Section 5.3. We sample 20 conversations from each experimental setting, split between seen and unseen. Conversations are re-tokenized and lowercased to reduce superficial differences between models, and then analyzed in a single-blind setup. We take note of common errors and behaviors exhibited in each of the different conversations. In general, the human-human conversations are starkly different from any of the bot conversations – humans tend to have more small talk, or use the topic of discussion as a mere icebreaker, with neither human behaving as a wizard. This is in contrast to human-human conversations from the Wizard dataset itself, where one human has access to Wikipedia, and the conversation becomes more grounded in factual sentences. Similarly, all models attempt to play the role of wizard and produce more factual sentences too. In some rounds, humans treat the bot as a sort of question-answer machine, suggesting that the models could be improved by additionally employing SQuAD-like training data. The retriever without knowledge is particularly prone to non sequiturs, or rapidly changing the subject. During unseen conversations, it is especially likely to discuss something other than the chosen topic. In contrast, the retriever with knowledge tends to stick to the chosen topic strongly, but has difficulty if the human changes the subject. Frequently in unseen topics, the retriever with knowledge produces similar, but factually inaccurate answers to user queries. For example, when one user asks about parts of Ireland to visit, the model enumerates a list of locations in Greece. Nonetheless, its repertoire of available responses often includes inviting responses, allowing the bot to have a more natural conversational flow. Selected conversations with the retriever with knowledge may be found in Figure 4, for both seen and unseen topics. The generator without knowledge is particularly prone to many of the typical behaviors of seq2seq systems (Li et al., 2016; Vijayakumar et al., 2016), including local repetition ("cookies are made of flour, oil, oil, and oil"), global repetition (producing nearly the same utterance for multiple turns), or inconsistencies in its personality (saying it both likes and dislikes movies).
The generator with knowledge has significantly fewer issues with repetition, as it errs on the side of copying large fragments from the Wikipedia knowledge. The generator with knowledge can also act as a selfish conversationalist, choosing to respond or detail information without inviting a response. Although it generally produces accurate statements, it sometimes produces statements using an incorrect date, name or word. It also frequently produces formulaic responses, like “I don’t know, but I do know that [Wikipedia excerpt]”. Nonetheless, we find the generator with knowledge is able to successfully generalize to unseen topics using the knowledge from Wikipedia. Selected conversations with the generator with knowledge may be found in Figure 5.
1. What is the main contribution of the paper regarding dialogue systems and external unstructured knowledge?
2. What are the strengths and weaknesses of the proposed dataset and experimental setup?
3. How does the paper differ from other works on non-goal directed dialogue systems?
4. What is the significance of the paper and its potential impact on the dialogue systems community?
5. Are there any missing references or unclear descriptions in the paper?
6. How does the paper address the issue of co-reference problems in knowledge selection and response generation?
7. What are some potential future directions for improving the use of external knowledge in chatbots?
Review
Review
This paper introduces a new dataset and method for chatbots. In contrast to previous work, this paper specifically probes how well a dialogue system can use external unstructured knowledge.

Quality: Overall, this is a very high-quality paper. The dataset is developed well, the experimental setup is well thought-through and the authors perform many ablation studies to test different model variants. The main criticism I have would be that the human evaluation is rather simple (rating 1-5), I would have expected more fine-grained categories, especially ones that relate to how much knowledge the system uses (I appreciate the "Wiki F1" metric, but that is an automatic metric). As it is, the human evaluation shows that most of their contributions are not appreciated by human annotators. Further, the paper ends a bit abruptly, I would have expected a more in-depth discussion of next steps.

Clarity: The description of the work is clear in most places. I particularly like the abstract and introduction, which set up the rest of the paper nicely. In some places, perhaps due to space restrictions, method descriptions are a bit too short.

Originality: The paper is fairly original, especially the aspect about specifically using external knowledge. The authors could have been more clear on how the work differs from other work on non-goal directed dialogue though (last paragraph of related work section).

Significance: The dataset is really well-developed, hence I believe many working in the dialogue systems community will re-use the developed benchmark and build on this paper.

More detailed comments:
- Missing reference for goal-oriented dialogue datasets: Wen et al. 2017, A Network-based End-to-End Trainable Task-oriented Dialogue System, https://arxiv.org/abs/1604.04562
- How does the proposed dataset differ from the Reddit and Wikipedia datasets discussed in the last paragraph of the related work section? This should be explained.
- Page 3, paragraph "Conversational Flow": what is the maximum number of turns, if the minimum is 5?
- Page 3, paragraph "Knowledge Retrieval": how were the top 7 articles and first 10 sentences choices made? This seems arbitrary. Also, why wasn't the whole text used?
- Page 3, paragraph "Knowledge Selection and Response Generation": how do you deal with co-reference problems if you only ever select one sentence at a time? The same goes for the "Knowledge Attention" model described in Section 4.
- Page 3, paragraph "Knowledge Selection and Response Generation": how often do annotators choose "no sentence selected"? It would be interesting to see more such statistics about the dataset.
- Section 4.2: did you run experiments for BPE encoding? Would be good to see as this is a bit of a non-standard choice.
- Section 4.2: it would be good to explain the Cer et al. 2018 method directly in the paper.
- Section 4.2: is there a reference for knowledge dropout? Also, it would be good to show ablation results for this.
- Section 5.1: why did you choose to pre-train on the Reddit data? There should be some more in-depth description of the Reddit dataset to motivate this choice.
- Section 5.1: what is the setup you use for multi-task learning on SQuAD? Is it just a hard parameter sharing model, or?
- Section 5.3: as stated above, the human evaluation is a little bit underwhelming, both in terms of setup and results. I'd expect a more fine-grained way of assessing conversations by humans, and also an explanation of why the retrieval transformer without knowledge was assessed as being on par with the retrieval transformer memnet.
- Section 5.3: I assume higher = better for the human scores? This should be made explicit.
- Section 5.3: Have others used the "F1 overlap score"? If so, cite.
- Section 5.3: I don't understand the argument that the human evaluation shows that humans prefer more natural responses. How does it show that?
- Section 5.3: The Wiki F1 score is kind of interesting because it shows to what degree the model uses knowledge. But the side-by-side comparison with the human scores shows that humans don't necessarily prefer chatbot models that use a lot of knowledge. I'd expect this to be discussed, and suggestions for future work to be made accordingly.
- Section 6: The paper ends a bit abruptly. It'd be nice to suggest future areas of improvement.
ICLR
Title And the Bit Goes Down: Revisiting the Quantization of Neural Networks Abstract In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss reconstruction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using byte-aligned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5 MB (20× compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet object classification and by compressing a Mask R-CNN with a 26× factor.1 1 INTRODUCTION There is a growing need for compressing the best convolutional networks (or ConvNets) to support embedded devices for applications like robotics and virtual/augmented reality. Indeed, the performance of ConvNets on image classification has steadily improved since the introduction of AlexNet (Krizhevsky et al., 2012). This progress has been fueled by deeper and richer architectures such as the ResNets (He et al., 2015) and their variants ResNeXts (Xie et al., 2017) or DenseNets (Huang et al., 2017). Those models particularly benefit from the recent progress made with weak supervision (Mahajan et al., 2018; Yalniz et al., 2019; Berthelot et al., 2019). Compression of ConvNets has been an active research topic in recent years, leading to networks with a 71% top-1 accuracy on ImageNet object classification that fit in 1 MB (Wang et al., 2018b). In this work, we propose a compression method particularly adapted to ResNet-like architectures. Our approach takes advantage of the high correlation in the convolutions by the use of a structured quantization algorithm, Product Quantization (PQ) (Jégou et al., 2011). More precisely, we exploit the spatial redundancy of information inherent to standard convolution filters (Denton et al., 2014). Besides reducing the memory footprint, we also produce compressed networks allowing efficient inference on CPU by using byte-aligned indexes, as opposed to entropy decoders (Han et al., 2016). Our approach departs from traditional scalar quantizers (Han et al., 2016) and vector quantizers (Gong et al., 2014; Carreira-Perpiñán & Idelbayev, 2017) by focusing on the accuracy of the activations rather than the weights. This is achieved by leveraging a weighted k-means technique. To our knowledge this strategy (see Section 3) is novel in this context. The closest work we are aware of is the one by Choi et al. (2016), but the authors use a different objective (their weighted term is derived from second-order information) along with a different quantization technique (scalar quantization). Our method targets a better in-domain reconstruction, as depicted by Figure 1. Finally, we compress the network sequentially to account for the dependency of our method on the activations at each layer. To prevent the accumulation of errors across layers, we guide this compression with the activations of the uncompressed network on unlabelled data: training by distillation (Hinton et al., 2014) allows for both an efficient layer-by-layer compression procedure and a global fine-tuning of the codewords. Thus, we only need a set of unlabelled images to adjust the codewords.
As opposed to recent works by Mishra & Marr (2017) or Lopes et al. (2017), our distillation scheme is sequential and the underlying compression method is different (PQ vs. scalar). 1Code and compressed models: https://github.com/facebookresearch/kill-the-bits. We show that applying our approach to the semi-supervised ResNet-50 of Yalniz et al. (Yalniz et al., 2019) leads to a 5 MB memory footprint and a 76.1% top-1 accuracy on ImageNet object classification (hence 20× compression vs. the original model). Moreover, our approach generalizes to other tasks such as image detection. As shown in Section 4.3, we compress a Mask R-CNN (He et al., 2017) with a size budget around 6 MB (26× compression factor) while maintaining a competitive performance. 2 RELATED WORK There is a large body of literature on network compression. We review the works closest to ours and refer the reader to two recent surveys (Guo, 2018; Cheng et al., 2017) for a comprehensive overview. Low-precision training. Since early works like those of Courbariaux et al. (2015), researchers have developed various approaches to train networks with low precision weights. Those approaches include training with binary or ternary weights (Shayer et al., 2017; Zhu et al., 2016; Li & Liu, 2016; Rastegari et al., 2016; McDonnell, 2018), learning a combination of binary bases (Lin et al., 2017) and quantizing the activations (Zhou et al., 2016; 2017; Mishra et al., 2017). Some of these methods assume the possibility to employ specialized hardware that speeds up inference and improves power efficiency by replacing most arithmetic operations with bit-wise operations. However, the back-propagation has to be adapted to the case where the weights are discrete. Quantization. Vector Quantization (VQ) and Product Quantization (PQ) have been extensively studied in the context of nearest-neighbor search (Jegou et al., 2011; Ge et al., 2014; Norouzi & Fleet, 2013). The idea is to decompose the original high-dimensional space into a Cartesian product of subspaces that are quantized separately with a joint codebook. To our knowledge, Gong et al. (2014) were the first to introduce these stronger quantizers for neural network quantization, followed by Carreira-Perpiñán & Idelbayev (2017). As we will see in the remainder of this paper, employing this discretization off-the-shelf does not optimize the right objective function, and leads to a catastrophic drift of performance for deep networks. Pruning. Network pruning amounts to removing connections according to an importance criterion (typically the magnitude of the weight associated with this connection) until the desired model size/accuracy tradeoff is reached (LeCun et al., 1990). A natural extension of this work is to prune structural components of the network, for instance by enforcing channel-level (Liu et al., 2017) or filter-level (Luo et al., 2017) sparsity. However, these methods alternate between pruning and re-training steps and thus typically require a long training time. Dedicated architectures. Architectures such as SqueezeNet (Iandola et al., 2016), NASNet (Zoph et al., 2017), ShuffleNet (Zhang et al., 2017; Ma et al., 2018), MobileNets (Sandler et al., 2018) and EfficientNets (Tan & Le, 2019) are designed to be memory efficient. As they typically rely on a combination of depth-wise and point-wise convolutional filters, sometimes along with channel shuffling, they are less prone than ResNets to structured quantization techniques such as PQ.
These architectures are either designed by hand or using the framework of architecture search (Howard et al., 2019). For instance, the respective model size and test top-1 accuracy on ImageNet of a MobileNet are 13.4 MB for 71.9%, to be compared with a vanilla ResNet-50 with size 97.5 MB for a top-1 of 76.2%. Moreover, larger models such as ResNets can benefit from large-scale weakly- or semi-supervised learning to reach better performance (Mahajan et al., 2018; Yalniz et al., 2019). Combining some of the mentioned approaches yields high compression factors, as demonstrated by Han et al. with Deep Compression (DC) (Han et al., 2016) or more recently by Tung & Mori (Tung & Mori, 2018). Moreover, and from a practical point of view, the process of compressing networks depends on the type of hardware on which the networks will run. Recent work directly quantizes to optimize energy-efficiency and latency time on a specific hardware (Wang et al., 2018a). Finally, the memory overhead of storing the full activations is negligible compared to the storage of the weights, for two reasons. First, in realistic real-time inference setups, the batch size is almost always equal to one. Second, a forward pass only requires storing the activations of the current layer – which are often smaller than the size of the input – and not the whole activations of the network. 3 OUR APPROACH In this section, we describe our strategy for network compression and we show how to extend our approach to quantize a modern ConvNet architecture. The specificity of our approach is that it aims at a small reconstruction error for the outputs of the layer rather than the layer weights themselves. We first describe how we quantize a single fully-connected and convolutional layer. Then we describe how we quantize a full pre-trained network and finetune it. 3.1 QUANTIZATION OF A FULLY-CONNECTED LAYER We consider a fully-connected layer with weights W ∈ R^{C_in×C_out} and, without loss of generality, we omit the bias since it does not impact the reconstruction error. Product Quantization (PQ). Applying the PQ algorithm to the columns of W consists in evenly splitting each column into m contiguous subvectors and learning a codebook on the resulting m·C_out subvectors. Then, a column of W is quantized by mapping each of its subvectors to its nearest codeword in the codebook. For simplicity, we assume that C_in is a multiple of m, i.e., that all the subvectors have the same dimension d = C_in/m. More formally, the codebook C = {c_1, ..., c_k} contains k codewords of dimension d. Any column w_j of W is mapped to its quantized version q(w_j) = (c_{i_1}, ..., c_{i_m}), where i_1 denotes the index of the codeword assigned to the first subvector of w_j, and so forth. The codebook is then learned by minimizing the following objective function:

||W − Ŵ||_2^2 = Σ_j ||w_j − q(w_j)||_2^2,   (1)

where Ŵ denotes the quantized weights. This objective can be efficiently minimized with k-means. When m is set to 1, PQ is equivalent to vector quantization (VQ), and when m is equal to C_in, it is the scalar k-means algorithm. The main benefit of PQ is its expressivity: each column w_j is mapped to a vector in the product C = C × ... × C, thus PQ generates an implicit codebook of size k^m.
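To fix ideas, here is a minimal sketch of this standard PQ baseline of Equation (1), using scikit-learn's k-means; the function name and the choice of library are our own.

import numpy as np
from sklearn.cluster import KMeans

def pq_quantize_weights(W, m, k):
    """Standard PQ of Eq. (1): split each column of W (C_in x C_out) into
    m subvectors of size d = C_in / m and run k-means on all subvectors."""
    c_in, c_out = W.shape
    d = c_in // m
    subvectors = W.T.reshape(c_out * m, d)     # one row per subvector
    km = KMeans(n_clusters=k, n_init=1).fit(subvectors)
    codebook = km.cluster_centers_             # (k, d)
    assignments = km.labels_                   # (c_out * m,) codeword indexes
    W_hat = codebook[assignments].reshape(c_out, c_in).T
    return codebook, assignments, W_hat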
Our algorithm. PQ quantizes the weight matrix of the fully-connected layer. However, in practice, we are interested in preserving the output of the layer, not its weights. This is illustrated in the case of a non-linear classifier in Figure 1: preserving the weights of a layer does not necessarily guarantee preserving its output. In other words, the Frobenius approximation of the weights of a layer is not guaranteed to be the best approximation of the output over some arbitrary domain (in particular for in-domain inputs). We thus propose an alternative to PQ that directly minimizes the reconstruction error on the output activations obtained by applying the layer to in-domain inputs. More precisely, given a batch of B input activations x ∈ R^{B×C_in}, we are interested in learning a codebook C that minimizes the difference between the output activations and their reconstructions:

||y − ŷ||_2^2 = Σ_j ||x(w_j − q(w_j))||_2^2,   (2)

where y = xW is the output and ŷ = xŴ its reconstruction. Our objective is a re-weighting of the objective in Equation (1). We can thus learn our codebook with a weighted k-means algorithm. First, we unroll x of size B × C_in into x̃ of size (B × m) × d, i.e., we split each row of x into m subvectors of size d and stack these subvectors. Next, we adapt the EM algorithm as follows. (1) E-step (cluster assignment). Recall that every column w_j is divided into m subvectors of dimension d. Each subvector v is assigned to the codeword c_j such that

c_j = argmin_{c∈C} ||x̃(c − v)||_2^2.   (3)

This step is performed by exhaustive exploration. Our implementation relies on broadcasting to be computationally efficient. (2) M-step (codeword update). Let us consider a codeword c ∈ C. We denote by (v_p)_{p∈I_c} the subvectors that are currently assigned to c. Then, we update c ← c*, where

c* = argmin_{c∈R^d} Σ_{p∈I_c} ||x̃(c − v_p)||_2^2.   (4)

This step explicitly computes the solution of the least-squares problem2. Our implementation performs the computation of the pseudo-inverse of x̃ before alternating between the Expectation and Minimization steps, as it does not depend on the learned codebook C. We initialize the codebook C by uniformly sampling k vectors among those we wish to quantize. After performing the E-step, some clusters may be empty. To resolve this issue, we iteratively perform the following additional steps for each empty cluster of index i: (1) find the codeword c_0 corresponding to the most populated cluster; (2) define new codewords c'_0 = c_0 + e and c'_i = c_0 − e, where e ∼ N(0, εI); and (3) perform the E-step again. We proceed to the M-step after all the empty clusters are resolved. We set ε = 1e-8 and we observe that it generally takes no more than 1 or 2 E-M iterations to resolve all the empty clusters. Note that the quality of the resulting compression is sensitive to the choice of x.
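A minimal sketch of this weighted EM procedure (Equations (2)–(4)) follows; the empty-cluster handling is slightly simplified relative to the three-step procedure above, and the dense pairwise distance computation is written for clarity rather than efficiency.

import numpy as np

def weighted_kmeans(x_tilde, V, k, n_iter=100, eps=1e-8, seed=None):
    """Activation-aware k-means of Eqs. (2)-(4). x_tilde: (n, d) unrolled
    in-domain activations; V: (N, d) subvectors of W to quantize."""
    rng = np.random.default_rng(seed)
    C = V[rng.choice(len(V), size=k, replace=False)].copy()  # init codebook
    G = x_tilde.T @ x_tilde                       # Gram matrix, (d, d)
    P = np.linalg.pinv(x_tilde) @ x_tilde         # x~+ x~, fixed, (d, d)
    for _ in range(n_iter):
        # E-step: ||x~(c - v)||^2 = (c - v)^T G (c - v) for every pair (v, c)
        diff = V[:, None, :] - C[None, :, :]      # (N, k, d)
        dists = np.einsum('nkd,de,nke->nk', diff, G, diff)
        assign = dists.argmin(axis=1)             # (N,)
        # simplified empty-cluster fix: split the most populated cluster
        for i in np.setdiff1d(np.arange(k), assign):
            j = np.bincount(assign, minlength=k).argmax()
            e = rng.normal(scale=np.sqrt(eps), size=C.shape[1])
            C[i], C[j] = C[j] - e, C[j] + e
        # M-step: c* = (1/|I_c|) x~+ x~ (sum of assigned subvectors), Eq. (4)
        for i in range(k):
            members = V[assign == i]
            if len(members):
                C[i] = P @ members.mean(axis=0)
    return C, assign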
3.2 CONVOLUTIONAL LAYERS Despite being presented in the case of a fully-connected layer, our approach works on any set of vectors. As a consequence, our approach can be applied to a convolutional layer if we split the associated 4D weight matrix into a set of vectors. There are many ways to split a 4D matrix into a set of vectors, and we are aiming for one that maximizes the correlation between the vectors, since vector quantization based methods work best when the vectors are highly correlated. Given a convolutional layer, we have C_out filters of size K × K × C_in, leading to an overall 4D weight matrix W ∈ R^{C_out×C_in×K×K}. The dimensions along the output and input coordinates have no particular reason to be correlated. On the other hand, the spatial dimensions related to the filter size are by nature very correlated: nearby patches or pixels likely share information. As depicted in Figure 2, we thus reshape the weight matrix in a way that leads to spatially coherent quantization. More precisely, we quantize W spatially into subvectors of size d = K × K using the following procedure. We first reshape W into a 2D matrix of size (C_in × K × K) × C_out. Column j of the reshaped matrix W_r corresponds to the j-th filter of W and is divided into C_in subvectors of size K × K. Similarly, we reshape the input activations x accordingly to x_r, so that reshaping back the matrix x_r W_r yields the same result as x ∗ W. In other words, we adopt a dual approach to the one using bi-level Toeplitz matrices to represent the weights. Then, we apply our method described in Section 3.1 to quantize each column of W_r into m = C_in subvectors of size d = K × K with k codewords, using x_r as input activations in (2). As a natural extension, we also quantize with larger subvectors, for example subvectors of size d = 2 × K × K; see Section 4 for details. 2Denoting by x̃⁺ the Moore-Penrose pseudoinverse of x̃, we obtain c* = (1/|I_c|) x̃⁺ x̃ (Σ_{p∈I_c} v_p). In our implementation, we adapt the reshaping of W and x to various types of convolutions. We account for the padding, the stride, the number of groups (for depthwise convolutions and in particular for pointwise convolutions) and the kernel size. We refer the reader to the code for more details. 3.3 NETWORK QUANTIZATION In this section, we describe our approach for quantizing a neural network. We quantize the network sequentially starting from the lowest layer to the highest layer. We guide the compression of the student network by the non-compressed teacher network, as detailed below. Learning the codebook. We recover the current input activations of the layer, i.e. the input activations obtained by forwarding a batch of images through the quantized lower layers, and we quantize the current layer using those activations. Experimentally, we observed a drift in both the reconstruction and classification errors when using the activations of the non-compressed network rather than the current activations. Finetuning the codebook. We finetune the codewords by distillation (Hinton et al., 2014) using the non-compressed network as the teacher network and the compressed network (up to the current layer) as the student network. Denoting y_t (resp. y_s) the output probabilities of the teacher (resp. student) network, the loss we optimize is the Kullback-Leibler divergence L = KL(y_s, y_t). Finetuning on codewords is done by averaging the gradients of each subvector assigned to a given codeword. More formally, after the quantization step, we fix the assignments once and for all. Then, denoting (b_p)_{p∈I_c} the subvectors that are assigned to codeword c, we perform the SGD update with a learning rate η:

c ← c − η · (1/|I_c|) Σ_{p∈I_c} ∂L/∂b_p.   (5)

Experimentally, we find the approach to perform better than finetuning on the target of the images, as demonstrated in Table 3. Moreover, this approach does not require any labelled data.
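A minimal sketch of one such distillation step, with the per-codeword gradient averaging of Equation (5); the optimizer details (no momentum or weight decay here) and the way the codebook is shared across the quantized layers are simplifications of ours.

import torch
import torch.nn.functional as F

def finetune_codebook_step(student, teacher, codebook, counts, x, lr=0.01):
    """codebook: (k, d) tensor with requires_grad=True, shared by the
    quantized student layers; counts[c] = |I_c|, the number of subvectors
    assigned to codeword c. Autograd sums gradients over all uses of a
    codeword; dividing by counts turns that sum into the average of Eq. (5).
    The teacher is the uncompressed network."""
    with torch.no_grad():
        y_t = F.log_softmax(teacher(x), dim=-1)   # teacher log-probs
    y_s = F.log_softmax(student(x), dim=-1)       # student log-probs
    loss = F.kl_div(y_s, y_t, reduction='batchmean', log_target=True)
    loss.backward()
    with torch.no_grad():
        codebook -= lr * codebook.grad / counts.unsqueeze(1)
        codebook.grad.zero_()
    return loss.item()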
3.4 GLOBAL FINETUNING In a final step, we globally finetune the codebooks of all the layers to reduce any residual drifts and we update the running statistics of the BatchNorm layers: we empirically find it beneficial to finetune all the centroids after the whole network is quantized. The finetuning procedure is exactly the same as described in Section 3.3, except that we additionally switch the BatchNorms to training mode, meaning that the learnt coefficients stay fixed but the batch statistics (running mean and variance) are still updated with the standard moving average procedure. We perform the global finetuning using the standard ImageNet training set for 9 epochs with an initial learning rate of 0.01, a weight decay of 10^-4 and a momentum of 0.9. The learning rate is decayed by a factor 10 every 3 epochs. As demonstrated in the ablation study in Table 3, finetuning on the true labels performs worse than finetuning by distillation. A possible explanation is that the supervision signal coming from the teacher network is richer than the one-hot vector used as a traditional learning signal in supervised learning (Hinton et al., 2014). 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP We quantize vanilla ResNet-18 and ResNet-50 architectures pretrained on the ImageNet dataset (Deng et al., 2009). Unless explicitly mentioned otherwise, the pretrained models are taken from the PyTorch model zoo3. We run our method on a 16 GB Volta V100 GPU. Quantizing a ResNet-50 with our method (including all finetuning steps) takes about one day on 1 GPU. We detail our experimental setup below. Our code and the compressed models are open-sourced. Compression regimes. We explore a small block size (resp. large block size) compression regime by setting the subvector size of regular 3×3 convolutions to d = 9 (resp. d = 18) and the subvector size of pointwise convolutions to d = 4 (resp. d = 8). For ResNet-18, the block size of pointwise convolutions is always equal to 4. The number of codewords or centroids is set to k ∈ {256, 512, 1024, 2048} for each compression regime. Note that we clamp the number of centroids to min(k, C_out × m/4) for stability. For instance, the first layer of the first stage of the ResNet-50 has size 64 × 64 × 1 × 1, thus we always use k = 128 centroids with a block size d = 8. For a given number of centroids k, small blocks lead to a lower compression ratio than large blocks. Sampling the input activations. Before quantizing each layer, we randomly sample a batch of 1024 training images to obtain the input activations of the current layer and reshape them as described in Section 3.2. Then, before each iteration (E+M step) of our method, we randomly sample 10,000 rows from those reshaped input activations. Hyperparameters. We quantize each layer while performing 100 steps of our method (sufficient for convergence in practice). We finetune the centroids of each layer on the standard ImageNet training set during 2,500 iterations with a batch size of 128 (resp. 64) for the ResNet-18 (resp. ResNet-50), with a learning rate of 0.01, a weight decay of 10^-4 and a momentum of 0.9. For accuracy and memory reasons, the classifier is always quantized with a block size d = 4 and k = 2048 (resp. k = 1024) centroids for the ResNet-18 (resp. ResNet-50). Moreover, the first convolutional layer of size 7 × 7 is not quantized, as it represents less than 0.1% (resp. 0.05%) of the weights of a ResNet-18 (resp. ResNet-50). Metrics. We focus on the tradeoff between accuracy and memory. The accuracy is the top-1 error on the standard validation set of ImageNet. The memory footprint is calculated as the indexing cost (number of bits per weight) plus the overhead of storing the centroids in float16.
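To make the footprint computation concrete, a small sketch of this bookkeeping (the prose example that follows works through the same numbers):

import math

def layer_footprint_kb(c_out, c_in, k_h, k_w, d, k):
    """Indexing cost (log2(k) bits per block) plus float16 codebook overhead."""
    n_weights = c_out * c_in * k_h * k_w
    m = n_weights // d                       # number of blocks (subvectors)
    index_bytes = m * math.log2(k) / 8       # bits per block -> bytes
    codebook_bytes = k * d * 2               # float16 = 2 bytes per entry
    return (index_bytes + codebook_bytes) / 1024

# layer_footprint_kb(128, 128, 3, 3, d=9, k=256):
# 16,384 one-byte indexes (16 kB) + 256*9*2 bytes of centroids (4.5 kB).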
As an example, quantizing a layer of size 128 × 128 × 3 × 3 with k = 256 centroids (1 byte per subvector) and a block size of d = 9 leads to an indexing cost of 16 kB for m = 16,384 blocks, plus the cost of storing the centroids of 4.5 kB. 4.2 IMAGE CLASSIFICATION RESULTS We report below the results of our method applied to various ResNet models. First, we compare our method with the state of the art on the standard ResNet-18 and ResNet-50 architectures. Next, we show the potential of our approach on a competitive ResNet-50. Finally, an ablation study validates the pertinence of our method. Vanilla ResNet-18 and ResNet-50. We evaluate our method on the ImageNet benchmark for ResNet-18 and ResNet-50 architectures and compare our results to the following methods: Trained Ternary Quantization (TTQ) (Zhu et al., 2016), LR-Net (Shayer et al., 2017), ABC-Net (Lin et al., 2017), Binary Weight Network (XNOR-Net or BWN) (Rastegari et al., 2016), Deep Compression (DC) (Han et al., 2016) and Hardware-Aware Automated Quantization (HAQ) (Wang et al., 2018a). We report the accuracies and compression factors given in the original papers and/or in the two surveys (Guo, 2018; Cheng et al., 2017) for a given architecture when the result is available. We do not compare our method to DoReFa-Net (Zhou et al., 2016) and WRPN (Mishra et al., 2017), as those approaches also use low-precision activations and hence get lower accuracies, e.g., 51.2% top-1 accuracy for an XNOR-Net with ResNet-18. The results are presented in Figure 4.2. For better readability, some results for our method are also displayed in Table 1. We report the average accuracy and standard deviation over 3 runs. Our method significantly outperforms the state of the art for various operating points. 3https://pytorch.org/docs/stable/torchvision/models For instance, for a ResNet-18, our method with large blocks and k = 512 centroids reaches a larger accuracy than ABC-Net (M = 2) with a compression ratio that is 2× larger. Similarly, on the ResNet-50, our compressed model with k = 256 centroids in the large blocks setup yields a comparable accuracy to DC (2 bits) with a compression ratio that is 2× larger. The work by Tung & Mori (Tung & Mori, 2018) is likely the only one that remains competitive with ours, with a 6.8 MB network after compression, using a technique that prunes the network and therefore implicitly changes the architecture. The authors report the delta accuracy, for which we have no directly comparable top-1 accuracy, but their method is arguably complementary to ours. Semi-supervised ResNet-50. Recent works (Mahajan et al., 2018; Yalniz et al., 2019) have demonstrated the possibility to leverage a large collection of unlabelled images to improve the performance of a given architecture. In particular, Yalniz et al. (Yalniz et al., 2019) use the publicly available YFCC-100M dataset (Thomee et al., 2015) to train a ResNet-50 that reaches 79.1% top-1 accuracy on the standard validation set of ImageNet. In the following, we use this particular model and refer to it as the semi-supervised ResNet-50. In the low compression regime (block sizes of 4 and 9), with k = 256 centroids (practical for implementation), our compressed semi-supervised ResNet-50 reaches 76.12% top-1 accuracy. In other words, the model compressed to 5 MB attains the performance of a vanilla, non-compressed ResNet-50 (vs. 97.5 MB for the non-compressed model). Comparison for a given size budget.
To ensure a fair comparison, we compare our method for a given model size budget against the reference methods in Table 2. It should be noted that our method can further benefit from advances in semi-supervised learning to boost the performance of the non-compressed, and hence of the compressed, network. Ablation study. We perform an ablation study on the vanilla ResNet-18 to study the respective effects of quantizing using the activations and finetuning by distillation (here, finetuning refers both to the per-layer finetuning and to the global finetuning after quantization described in Section 3). We refer to our method as Act + Distill. First, we still finetune by distillation but change the quantization: instead of quantizing using our method (see Equation (2)), we quantize using the standard PQ algorithm and do not take the activations into account, see Equation (1). We refer to this method as No act + Distill. Second, we quantize using our method but perform a standard finetuning using the image labels (Act + Labels). The results are displayed in Table 3. Our approach consistently yields significantly better results. As a side note, quantizing all the layers of a ResNet-18 with the standard PQ algorithm and without any finetuning leads to top-1 accuracies below 25% for all operating points, which illustrates the drift in accuracy occurring when compressing deep networks with standard methods (as opposed to our method). 4.3 IMAGE DETECTION RESULTS To demonstrate the generality of our method, we compress the Mask R-CNN architecture used for image detection in many real-life applications (He et al., 2017). We compress the backbone (ResNet-50 FPN) in the small blocks compression regime and refer the reader to the open-sourced compressed model for the block sizes used in the various heads of the network. We use k = 256 centroids for every layer. We perform the finetuning (layer-wise and global) using distributed training on 8 V100 GPUs. Results are displayed in Table 4. We argue that this provides an interesting point of comparison for future work aiming at compressing such architectures for various applications. 5 CONCLUSION We presented a quantization method based on Product Quantization that gives state-of-the-art results on ResNet architectures and that generalizes to other architectures such as Mask R-CNN. Our compression scheme does not require labeled data and the resulting models are byte-aligned, allowing for efficient inference on CPU. Further research directions include testing our method on a wider variety of architectures. In particular, our method can be readily adapted to simultaneously compress and transfer ResNets trained on ImageNet to other domains. Finally, we plan to take the non-linearity into account to improve our reconstruction error.
1. How does the proposed algorithm reduce the distortion of each layer in compressing network weights?
2. What is the purpose of selecting candidate codeword vectors using k-means clustering and fine-tuning them via knowledge distillation?
3. Are there any concerns or doubts regarding the algorithm's application, particularly in k-means clustering?
4. How effective is the proposed algorithm in minimizing the training loss compared to naïve PQ?
5. Is there a recommended approach for choosing the optimal number of centroids and block size for a given target compression rate?
6. Why were other compression schemes such as network pruning and low-rank approximation not included in the comparison?
Review
Review
This paper addresses the problem of compressing network weights by quantizing their values to some fixed codeword vectors. The authors aim to reduce the distortion of each layer rather than the weight distortion. The proposed algorithm first selects the candidate codeword vectors using k-means clustering and fine-tunes them via knowledge distillation. The authors verify the proposed algorithm by comparing it with existing algorithms for ResNet-18 and ResNet-50. Overall, I think that the proposed algorithm is easy to apply and the draft is relatively well written. Some questions and doubts are listed below.
- In k-means clustering (E-step and M-step), is it correct to multiply \tilde x to (c-v)? I think that the error arising from quantizing v into c is only affected by a subset of rows of \tilde x. For example, if v is the first subvector of w_j, then I think that only the 1st, m+1-th, 2m+1-th, … rows of \tilde x affect the error.
- Does minimizing the reconstruction error minimize the training loss (before any further fine-tuning) compared to naïve PQ? If not,
- Is there any guideline for choosing the optimal number of centroids and the optimal block size given a target compression rate?
- Is there any reason not to compare the proposed algorithm with other compression schemes? (e.g., network pruning and low-rank approximation)
ICLR
Title And the Bit Goes Down: Revisiting the Quantization of Neural Networks Abstract In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss reconstruction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using bytealigned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5 MB (20× compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet object classification and by compressing a Mask R-CNN with a 26× factor.1 1 INTRODUCTION There is a growing need for compressing the best convolutional networks (or ConvNets) to support embedded devices for applications like robotics and virtual/augmented reality. Indeed, the performance of ConvNets on image classification has steadily improved since the introduction of AlexNet (Krizhevsky et al., 2012). This progress has been fueled by deeper and richer architectures such as the ResNets (He et al., 2015) and their variants ResNeXts (Xie et al., 2017) or DenseNets (Huang et al., 2017). Those models particularly benefit from the recent progress made with weak supervision (Mahajan et al., 2018; Yalniz et al., 2019; Berthelot et al., 2019). Compression of ConvNets has been an active research topic in the recent years, leading to networks with a 71% top-1 accuracy on ImageNet object classification that fit in 1MB (Wang et al., 2018b). In this work, we propose a compression method particularly adapted to ResNet-like architectures. Our approach takes advantage of the high correlation in the convolutions by the use of a structured quantization algorithm, Product Quantization (PQ) (Jégou et al., 2011). More precisely, we exploit the spatial redundancy of information inherent to standard convolution filters (Denton et al., 2014). Besides reducing the memory footprint, we also produce compressed networks allowing efficient inference on CPU by using byte-aligned indexes, as opposed to entropy decoders (Han et al., 2016). Our approach departs from traditional scalar quantizers (Han et al., 2016) and vector quantizers (Gong et al., 2014; Carreira-Perpiñán & Idelbayev, 2017) by focusing on the accuracy of the activations rather than the weights. This is achieved by leveraging a weighted k-means technique. To our knowledge this strategy (see Section 3) is novel in this context. The closest work we are aware of is the one by Choi et al. (2016), but the authors use a different objective (their weighted term is derived from second-order information) along with a different quantization technique (scalar quantization). Our method targets a better in-domain reconstruction, as depicted by Figure 1. Finally, we compress the network sequentially to account for the dependency of our method to the activations at each layer. To prevent the accumulation of errors across layers, we guide this compression with the activations of the uncompressed network on unlabelled data: training by distillation (Hinton et al., 2014) allows for both an efficient layer-by-layer compression procedure and a global fine-tuning of the codewords. Thus, we only need a set of unlabelled images to adjust the codewords. 
As opposed to recent works by Mishra & Marr (2017) or Lopes et al. (2017), our distillation scheme is sequential and the underlying compression method is different (PQ vs. scalar). (Code and compressed models: https://github.com/facebookresearch/kill-the-bits.) We show that applying our approach to the semi-supervised ResNet-50 of Yalniz et al. (2019) leads to a 5 MB memory footprint and a 76.1% top-1 accuracy on ImageNet object classification (hence 20× compression vs. the original model). Moreover, our approach generalizes to other tasks such as image detection. As shown in Section 4.3, we compress a Mask R-CNN (He et al., 2017) with a size budget around 6 MB (a 26× compression factor) while maintaining competitive performance. 2 RELATED WORK There is a large body of literature on network compression. We review the works closest to ours and refer the reader to two recent surveys (Guo, 2018; Cheng et al., 2017) for a comprehensive overview. Low-precision training. Since early works like those of Courbariaux et al. (2015), researchers have developed various approaches to train networks with low-precision weights. Those approaches include training with binary or ternary weights (Shayer et al., 2017; Zhu et al., 2016; Li & Liu, 2016; Rastegari et al., 2016; McDonnell, 2018), learning a combination of binary bases (Lin et al., 2017) and quantizing the activations (Zhou et al., 2016; 2017; Mishra et al., 2017). Some of these methods assume the possibility to employ specialized hardware that speeds up inference and improves power efficiency by replacing most arithmetic operations with bit-wise operations. However, the back-propagation has to be adapted to the case where the weights are discrete. Quantization. Vector Quantization (VQ) and Product Quantization (PQ) have been extensively studied in the context of nearest-neighbor search (Jégou et al., 2011; Ge et al., 2014; Norouzi & Fleet, 2013). The idea is to decompose the original high-dimensional space into a Cartesian product of subspaces that are quantized separately with a joint codebook. To our knowledge, Gong et al. (2014) were the first to introduce these stronger quantizers for neural network quantization, followed by Carreira-Perpiñán & Idelbayev (2017). As we will see in the remainder of this paper, employing this discretization off-the-shelf does not optimize the right objective function, and leads to a catastrophic drift of performance for deep networks. Pruning. Network pruning amounts to removing connections according to an importance criterion (typically the magnitude of the weight associated with a connection) until the desired model size/accuracy tradeoff is reached (LeCun et al., 1990). A natural extension of this work is to prune structural components of the network, for instance by enforcing channel-level (Liu et al., 2017) or filter-level (Luo et al., 2017) sparsity. However, these methods alternate between pruning and re-training steps and thus typically require a long training time. Dedicated architectures. Architectures such as SqueezeNet (Iandola et al., 2016), NASNet (Zoph et al., 2017), ShuffleNet (Zhang et al., 2017; Ma et al., 2018), MobileNets (Sandler et al., 2018) and EfficientNets (Tan & Le, 2019) are designed to be memory efficient. As they typically rely on a combination of depth-wise and point-wise convolutional filters, sometimes along with channel shuffling, they are less prone than ResNets to structured quantization techniques such as PQ.
These architectures are either designed by hand or using the framework of architecture search (Howard et al., 2019). For instance, the model size and test top-1 accuracy on ImageNet of a MobileNet are 13.4 MB and 71.9%, to be compared with a vanilla ResNet-50 of size 97.5 MB and a top-1 accuracy of 76.2%. Moreover, larger models such as ResNets can benefit from large-scale weakly- or semi-supervised learning to reach better performance (Mahajan et al., 2018; Yalniz et al., 2019). Combining some of the mentioned approaches yields high compression factors, as demonstrated by Han et al. with Deep Compression (DC) (Han et al., 2016) or more recently by Tung & Mori (2018). Moreover, from a practical point of view, the process of compressing networks depends on the type of hardware on which the networks will run. Recent work directly quantizes to optimize energy efficiency and latency on specific hardware (Wang et al., 2018a). Finally, the memory overhead of storing the full activations is negligible compared to the storage of the weights, for two reasons. First, in realistic real-time inference setups, the batch size is almost always equal to one. Second, a forward pass only requires storing the activations of the current layer (which are often smaller than the input) and not the whole activations of the network. 3 OUR APPROACH In this section, we describe our strategy for network compression and show how to extend our approach to quantize a modern ConvNet architecture. The specificity of our approach is that it aims at a small reconstruction error for the outputs of the layer rather than for the layer weights themselves. We first describe how we quantize a single fully-connected or convolutional layer. Then we describe how we quantize a full pre-trained network and finetune it. 3.1 QUANTIZATION OF A FULLY-CONNECTED LAYER We consider a fully-connected layer with weights $W \in \mathbb{R}^{C_{\text{in}} \times C_{\text{out}}}$ and, without loss of generality, we omit the bias since it does not impact the reconstruction error. Product Quantization (PQ). Applying the PQ algorithm to the columns of $W$ consists in evenly splitting each column into $m$ contiguous subvectors and learning a codebook on the resulting $m C_{\text{out}}$ subvectors. Then, a column of $W$ is quantized by mapping each of its subvectors to its nearest codeword in the codebook. For simplicity, we assume that $C_{\text{in}}$ is a multiple of $m$, i.e., that all the subvectors have the same dimension $d = C_{\text{in}}/m$. More formally, the codebook $\mathcal{C} = \{c_1, \dots, c_k\}$ contains $k$ codewords of dimension $d$. Any column $w_j$ of $W$ is mapped to its quantized version $q(w_j) = (c_{i_1}, \dots, c_{i_m})$, where $i_1$ denotes the index of the codeword assigned to the first subvector of $w_j$, and so forth. The codebook is then learned by minimizing the objective function $\|W - \hat{W}\|_2^2 = \sum_j \|w_j - q(w_j)\|_2^2$ (1), where $\hat{W}$ denotes the quantized weights. This objective can be efficiently minimized with k-means. When $m$ is set to 1, PQ is equivalent to vector quantization (VQ), and when $m$ is equal to $C_{\text{in}}$, it is the scalar k-means algorithm. The main benefit of PQ is its expressivity: each column $w_j$ is mapped to a vector in the product $\mathcal{C} \times \cdots \times \mathcal{C}$, thus PQ generates an implicit codebook of size $k^m$.
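As a point of reference, here is a minimal sketch of this standard PQ baseline of Equation (1); scikit-learn's KMeans stands in for the codebook learning, and all names are ours:

import numpy as np
from sklearn.cluster import KMeans

def pq_quantize_weights(W, m, k):
    # Standard PQ on the columns of W (Eq. 1): split each column into m
    # contiguous subvectors, learn k codewords, map each subvector to its
    # nearest codeword, and rebuild the quantized matrix W_hat.
    C_in, C_out = W.shape
    d = C_in // m                                  # assumes C_in % m == 0
    subvectors = W.T.reshape(C_out * m, d)         # the m * C_out subvectors
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(subvectors)
    codes = km.predict(subvectors)                 # one index per subvector
    W_hat = km.cluster_centers_[codes].reshape(C_out, C_in).T
    return W_hat, km.cluster_centers_, codes

W = np.random.default_rng(0).normal(size=(16, 32))
W_hat, codebook, codes = pq_quantize_weights(W, m=4, k=8)
print(np.linalg.norm(W - W_hat) ** 2)              # the objective of Eq. (1)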
Our algorithm. PQ quantizes the weight matrix of the fully-connected layer. However, in practice, we are interested in preserving the output of the layer, not its weights. This is illustrated in the case of a non-linear classifier in Figure 1: preserving the weights of a layer does not necessarily guarantee preserving its output. In other words, the Frobenius approximation of the weights of a layer is not guaranteed to be the best approximation of the output over some arbitrary domain (in particular for in-domain inputs). We thus propose an alternative to PQ that directly minimizes the reconstruction error on the output activations obtained by applying the layer to in-domain inputs. More precisely, given a batch of $B$ input activations $x \in \mathbb{R}^{B \times C_{\text{in}}}$, we are interested in learning a codebook $\mathcal{C}$ that minimizes the difference between the output activations and their reconstructions: $\|y - \hat{y}\|_2^2 = \sum_j \|x(w_j - q(w_j))\|_2^2$ (2), where $y = xW$ is the output and $\hat{y} = x\hat{W}$ its reconstruction. Our objective is a re-weighting of the objective in Equation (1). We can thus learn our codebook with a weighted k-means algorithm. First, we unroll $x$ of size $B \times C_{\text{in}}$ into $\tilde{x}$ of size $(B \times m) \times d$, i.e., we split each row of $x$ into $m$ subvectors of size $d$ and stack these subvectors. Next, we adapt the EM algorithm as follows. (1) E-step (cluster assignment). Recall that every column $w_j$ is divided into $m$ subvectors of dimension $d$. Each subvector $v$ is assigned to the codeword $c_j$ such that $c_j = \operatorname{argmin}_{c \in \mathcal{C}} \|\tilde{x}(c - v)\|_2^2$ (3). This step is performed by exhaustive exploration. Our implementation relies on broadcasting to be computationally efficient. (2) M-step (codeword update). Let us consider a codeword $c \in \mathcal{C}$. We denote by $(v_p)_{p \in I_c}$ the subvectors that are currently assigned to $c$. Then, we update $c \leftarrow c^\star$, where $c^\star = \operatorname{argmin}_{c \in \mathbb{R}^d} \sum_{p \in I_c} \|\tilde{x}(c - v_p)\|_2^2$ (4). This step explicitly computes the solution of the least-squares problem: denoting by $\tilde{x}^+$ the Moore-Penrose pseudoinverse of $\tilde{x}$, we obtain $c^\star = \frac{1}{|I_c|}\, \tilde{x}^+ \tilde{x} \left( \sum_{p \in I_c} v_p \right)$. Our implementation performs the computation of the pseudo-inverse of $\tilde{x}$ before alternating between the Expectation and Minimization steps, as it does not depend on the learned codebook $\mathcal{C}$. We initialize the codebook $\mathcal{C}$ by uniformly sampling $k$ vectors among those we wish to quantize. After performing the E-step, some clusters may be empty. To resolve this issue, we iteratively perform the following additional steps for each empty cluster of index $i$: (1) find the codeword $c_0$ corresponding to the most populated cluster; (2) define new codewords $c'_0 = c_0 + e$ and $c'_i = c_0 - e$, where $e \sim \mathcal{N}(0, \varepsilon I)$; and (3) perform the E-step again. We proceed to the M-step once all the empty clusters are resolved. We set $\varepsilon = 10^{-8}$ and observe that it generally takes no more than one or two E-M iterations to resolve all the empty clusters. Note that the quality of the resulting compression is sensitive to the choice of $x$.
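The following is a compact numpy sketch of these weighted E and M steps (Equations (3)-(4)), including the empty-cluster splitting; it is our reading of the procedure under illustrative names, not the authors' implementation:

import numpy as np

def weighted_kmeans(x_tilde, subvectors, k, n_iter=100, eps=1e-8, seed=0):
    # x_tilde: (N, d) unrolled activations; subvectors: (n, d) weight blocks.
    rng = np.random.default_rng(seed)
    n, d = subvectors.shape
    C = subvectors[rng.choice(n, size=k, replace=False)].copy()  # init (n >= k)
    P = np.linalg.pinv(x_tilde) @ x_tilde         # x~+ x~, computed only once

    def assign_codewords(C):
        # E-step (Eq. 3): exhaustive search for argmin_c ||x_tilde (c - v)||^2,
        # done for all (subvector, codeword) pairs at once via broadcasting.
        diffs = C[None, :, :] - subvectors[:, None, :]           # (n, k, d)
        return ((diffs @ x_tilde.T) ** 2).sum(axis=-1).argmin(axis=1)

    for _ in range(n_iter):
        assign = assign_codewords(C)
        # Empty-cluster handling: split the most populated cluster.
        for i in range(k):
            if not (assign == i).any():
                j = np.bincount(assign, minlength=k).argmax()
                e = rng.normal(scale=np.sqrt(eps), size=d)
                C[i], C[j] = C[j] - e, C[j] + e
                assign = assign_codewords(C)
        # M-step (Eq. 4): closed form c* = x~+ x~ (mean of assigned blocks).
        for i in range(k):
            members = subvectors[assign == i]
            if len(members):
                C[i] = P @ members.mean(axis=0)
    return C, assign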
3.2 CONVOLUTIONAL LAYERS Despite being presented in the case of a fully-connected layer, our approach works on any set of vectors. As a consequence, our approach can be applied to a convolutional layer if we split the associated 4D weight matrix into a set of vectors. There are many ways to split a 4D matrix into a set of vectors, and we aim for one that maximizes the correlation between the vectors, since vector quantization based methods work best when the vectors are highly correlated. Given a convolutional layer, we have $C_{\text{out}}$ filters of size $K \times K \times C_{\text{in}}$, leading to an overall 4D weight matrix $W \in \mathbb{R}^{C_{\text{out}} \times C_{\text{in}} \times K \times K}$. The dimensions along the output and input coordinates have no particular reason to be correlated. On the other hand, the spatial dimensions related to the filter size are by nature very correlated: nearby patches or pixels likely share information. As depicted in Figure 2, we thus reshape the weight matrix in a way that leads to spatially coherent quantization. More precisely, we quantize $W$ spatially into subvectors of size $d = K \times K$ using the following procedure. We first reshape $W$ into a 2D matrix of size $(C_{\text{in}} \times K \times K) \times C_{\text{out}}$. Column $j$ of the reshaped matrix $W_r$ corresponds to the $j$th filter of $W$ and is divided into $C_{\text{in}}$ subvectors of size $K \times K$. Similarly, we reshape the input activations $x$ into $x_r$ so that reshaping back the matrix $x_r W_r$ yields the same result as $x * W$. In other words, we adopt a dual approach to the one using bi-level Toeplitz matrices to represent the weights. Then, we apply our method described in Section 3.1 to quantize each column of $W_r$ into $m = C_{\text{in}}$ subvectors of size $d = K \times K$ with $k$ codewords, using $x_r$ as input activations in (2). As a natural extension, we also quantize with larger subvectors, for example subvectors of size $d = 2 \times K \times K$; see Section 4 for details. In our implementation, we adapt the reshaping of $W$ and $x$ to various types of convolutions. We account for the padding, the stride, the number of groups (for depthwise convolutions and in particular for pointwise convolutions) and the kernel size. We refer the reader to the code for more details. 3.3 NETWORK QUANTIZATION In this section, we describe our approach for quantizing a full neural network. We quantize the network sequentially, starting from the lowest layer and moving to the highest layer. We guide the compression of the student network by the non-compressed teacher network, as detailed below. Learning the codebook. We recover the current input activations of the layer, i.e., the input activations obtained by forwarding a batch of images through the quantized lower layers, and we quantize the current layer using those activations. Experimentally, we observed a drift in both the reconstruction and classification errors when using the activations of the non-compressed network rather than the current activations. Finetuning the codebook. We finetune the codewords by distillation (Hinton et al., 2014) using the non-compressed network as the teacher network and the compressed network (up to the current layer) as the student network. Denoting by $y_t$ (resp. $y_s$) the output probabilities of the teacher (resp. student) network, the loss we optimize is the Kullback-Leibler divergence $\mathcal{L} = \mathrm{KL}(y_s, y_t)$. Finetuning of the codewords is done by averaging the gradients of the subvectors assigned to a given codeword. More formally, after the quantization step, we fix the assignments once and for all. Then, denoting by $(b_p)_{p \in I_c}$ the subvectors that are assigned to codeword $c$, we perform the SGD update with a learning rate $\eta$: $c \leftarrow c - \eta \frac{1}{|I_c|} \sum_{p \in I_c} \frac{\partial \mathcal{L}}{\partial b_p}$ (5). Experimentally, we find this approach to perform better than finetuning on the image labels, as demonstrated in Table 3. Moreover, this approach does not require any labelled data.
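A sketch of one such distillation update (Equation (5)) in PyTorch; the toy shapes, the way the quantized weight is rebuilt from the codebook, and the random stand-in for the teacher outputs are all our assumptions:

import torch
import torch.nn.functional as F

k, d, n_sub = 256, 4, 1024                 # codewords, block size, subvectors
C_in = C_out = 64                          # toy layer: n_sub * d = C_in * C_out
codebook = torch.randn(k, d, requires_grad=True)
assign = torch.randint(0, k, (n_sub,))     # assignments, fixed after quantization
counts = torch.bincount(assign, minlength=k).clamp(min=1)

x = torch.randn(32, C_in)                  # a batch of input activations
teacher_logits = torch.randn(32, C_out)    # stand-in for the teacher's outputs

# Rebuild the quantized weight from the codebook so gradients reach it.
W_hat = codebook[assign].reshape(C_in, C_out)
student_logits = x @ W_hat

loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                F.softmax(teacher_logits, dim=1), reduction="batchmean")
loss.backward()

# Autograd accumulates (sums) gradients over all subvectors assigned to each
# codeword; dividing by |I_c| turns that sum into the average of Eq. (5).
with torch.no_grad():
    codebook -= 0.01 * codebook.grad / counts[:, None]
    codebook.grad.zero_()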
3.4 GLOBAL FINETUNING In a final step, we globally finetune the codebooks of all the layers to reduce any residual drift, and we update the running statistics of the BatchNorm layers: we empirically find it beneficial to finetune all the centroids after the whole network is quantized. The finetuning procedure is exactly the same as described in Section 3.3, except that we additionally switch the BatchNorms to training mode, meaning that the learnt coefficients stay fixed but the batch statistics (running mean and variance) are still updated with the standard moving-average procedure. We perform the global finetuning using the standard ImageNet training set for 9 epochs with an initial learning rate of 0.01, a weight decay of $10^{-4}$ and a momentum of 0.9. The learning rate is decayed by a factor 10 every 3 epochs. As demonstrated in the ablation study in Table 3, finetuning on the true labels performs worse than finetuning by distillation. A possible explanation is that the supervision signal coming from the teacher network is richer than the one-hot vector used as a traditional learning signal in supervised learning (Hinton et al., 2014). 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP We quantize vanilla ResNet-18 and ResNet-50 architectures pretrained on the ImageNet dataset (Deng et al., 2009). Unless explicitly mentioned otherwise, the pretrained models are taken from the PyTorch model zoo (https://pytorch.org/docs/stable/torchvision/models). We run our method on a 16 GB Volta V100 GPU. Quantizing a ResNet-50 with our method (including all finetuning steps) takes about one day on 1 GPU. We detail our experimental setup below. Our code and the compressed models are open-sourced. Compression regimes. We explore a large block sizes (resp. small block sizes) compression regime by setting the subvector size of regular 3×3 convolutions to d = 18 (resp. d = 9) and the subvector size of pointwise convolutions to d = 8 (resp. d = 4). For ResNet-18, the block size of pointwise convolutions is always equal to 4. The number of codewords or centroids is set to k ∈ {256, 512, 1024, 2048} for each compression regime. Note that we clamp the number of centroids to min(k, C_out × m/4) for stability. For instance, the first layer of the first stage of the ResNet-50 has size 64 × 64 × 1 × 1, thus we always use k = 128 centroids with a block size d = 8. For a given number of centroids k, small blocks lead to a lower compression ratio than large blocks. Sampling the input activations. Before quantizing each layer, we randomly sample a batch of 1024 training images to obtain the input activations of the current layer and reshape them as described in Section 3.2. Then, before each iteration (E+M step) of our method, we randomly sample 10,000 rows from those reshaped input activations. Hyperparameters. We quantize each layer while performing 100 steps of our method (sufficient for convergence in practice). We finetune the centroids of each layer on the standard ImageNet training set for 2,500 iterations with a batch size of 128 (resp. 64) for the ResNet-18 (resp. ResNet-50), with a learning rate of 0.01, a weight decay of $10^{-4}$ and a momentum of 0.9. For accuracy and memory reasons, the classifier is always quantized with a block size d = 4 and k = 2048 (resp. k = 1024) centroids for the ResNet-18 (resp. ResNet-50). Moreover, the first convolutional layer of size 7 × 7 is not quantized, as it represents less than 0.1% (resp. 0.05%) of the weights of a ResNet-18 (resp. ResNet-50). Metrics. We focus on the tradeoff between accuracy and memory. The accuracy is the top-1 accuracy on the standard validation set of ImageNet. The memory footprint is calculated as the indexing cost (number of bits per weight) plus the overhead of storing the centroids in float16. As an example, quantizing a layer of size 128 × 128 × 3 × 3 with k = 256 centroids (1 byte per subvector) and a block size of d = 9 leads to an indexing cost of 16 kB for 16,384 blocks, plus 4.5 kB for storing the centroids.
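A small helper that reproduces this footprint arithmetic (names are ours; the storage model follows the Metrics paragraph: per-block indices plus float16 centroids):

import math

def layer_footprint_bytes(n_weights, d, k):
    n_blocks = n_weights // d                    # one index per subvector
    index_bytes = n_blocks * math.ceil(math.log2(k)) / 8
    centroid_bytes = k * d * 2                   # float16 = 2 bytes per value
    return index_bytes + centroid_bytes

n_weights = 128 * 128 * 3 * 3                    # the layer in the example above
print(layer_footprint_bytes(n_weights, d=9, k=256))
# 16,384 blocks * 1 byte + 256 * 9 * 2 bytes = 16,384 + 4,608 bytes (16 kB + 4.5 kB)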
4.2 IMAGE CLASSIFICATION RESULTS We report below the results of our method applied to various ResNet models. First, we compare our method with the state of the art on the standard ResNet-18 and ResNet-50 architectures. Next, we show the potential of our approach on a competitive ResNet-50. Finally, an ablation study validates the pertinence of our method. Vanilla ResNet-18 and ResNet-50. We evaluate our method on the ImageNet benchmark for the ResNet-18 and ResNet-50 architectures and compare our results to the following methods: Trained Ternary Quantization (TTQ) (Zhu et al., 2016), LR-Net (Shayer et al., 2017), ABC-Net (Lin et al., 2017), Binary Weight Network (XNOR-Net or BWN) (Rastegari et al., 2016), Deep Compression (DC) (Han et al., 2016) and Hardware-Aware Automated Quantization (HAQ) (Wang et al., 2018a). We report the accuracies and compression factors from the original papers and/or from the two surveys (Guo, 2018; Cheng et al., 2017) for a given architecture when the result is available. We do not compare our method to DoReFa-Net (Zhou et al., 2016) and WRPN (Mishra et al., 2017), as those approaches also use low-precision activations and hence get lower accuracies, e.g., 51.2% top-1 accuracy for a XNOR-Net with ResNet-18. The results are presented in Figure 4.2. For better readability, some results for our method are also displayed in Table 1. We report the average accuracy and standard deviation over 3 runs. Our method significantly outperforms the state of the art at various operating points. For instance, for a ResNet-18, our method with large blocks and k = 512 centroids reaches a higher accuracy than ABC-Net (M = 2) with a compression ratio that is 2× larger. Similarly, on the ResNet-50, our compressed model with k = 256 centroids in the large blocks setup yields an accuracy comparable to DC (2 bits) with a compression ratio that is 2× larger. The work by Tung & Mori (2018) is likely the only one that remains competitive with ours, with a 6.8 MB network after compression, using a technique that prunes the network and therefore implicitly changes the architecture. The authors report the delta accuracy, for which we have no directly comparable top-1 accuracy, but their method is arguably complementary to ours. Semi-supervised ResNet-50. Recent works (Mahajan et al., 2018; Yalniz et al., 2019) have demonstrated the possibility of leveraging a large collection of unlabelled images to improve the performance of a given architecture. In particular, Yalniz et al. (2019) use the publicly available YFCC-100M dataset (Thomee et al., 2015) to train a ResNet-50 that reaches 79.1% top-1 accuracy on the standard validation set of ImageNet. In the following, we use this particular model and refer to it as the semi-supervised ResNet-50. In the low compression regime (block sizes of 4 and 9), with k = 256 centroids (practical for implementation), our compressed semi-supervised ResNet-50 reaches 76.12% top-1 accuracy. In other words, the model compressed to 5 MB attains the performance of a vanilla, non-compressed ResNet-50 (vs. 97.5 MB for the non-compressed model).
Comparison for a given size budget. To ensure a fair comparison, we compare our method for a given model size budget against the reference methods in Table 2. It should be noted that our method can further benefit from advances in semi-supervised learning to boost the performance of the non-compressed, and hence of the compressed, network. Ablation study. We perform an ablation study on the vanilla ResNet-18 to study the respective effects of quantizing using the activations and of finetuning by distillation (here, finetuning refers both to the per-layer finetuning and to the global finetuning after quantization described in Section 3). We refer to our method as Act + Distill. First, we still finetune by distillation but change the quantization: instead of quantizing with our method (see Equation (2)), we quantize with the standard PQ algorithm and do not take the activations into account, see Equation (1). We refer to this method as No act + Distill. Second, we quantize using our method but perform a standard finetuning using the image labels (Act + Labels). The results are displayed in Table 3. Our approach consistently yields significantly better results. As a side note, quantizing all the layers of a ResNet-18 with the standard PQ algorithm and without any finetuning leads to top-1 accuracies below 25% for all operating points, which illustrates the drift in accuracy that occurs when compressing deep networks with standard methods (as opposed to our method). 4.3 IMAGE DETECTION RESULTS To demonstrate the generality of our method, we compress the Mask R-CNN architecture used for image detection in many real-life applications (He et al., 2017). We compress the backbone (ResNet-50 FPN) in the small blocks compression regime and refer the reader to the open-sourced compressed model for the block sizes used in the various heads of the network. We use k = 256 centroids for every layer. We perform the finetuning (layer-wise and global) using distributed training on 8 V100 GPUs. Results are displayed in Table 4. We argue that this provides an interesting point of comparison for future work aiming at compressing such architectures for various applications. 5 CONCLUSION We presented a quantization method based on Product Quantization that gives state-of-the-art results on ResNet architectures and that generalizes to other architectures such as Mask R-CNN. Our compression scheme does not require labelled data, and the resulting models are byte-aligned, allowing for efficient inference on CPU. Further research directions include testing our method on a wider variety of architectures. In particular, our method can be readily adapted to simultaneously compress and transfer ResNets trained on ImageNet to other domains. Finally, we plan to take the non-linearity into account to improve our reconstruction error.
1. What is the focus and contribution of the paper on neural network quantization?
2. What are the strengths of the proposed approach, particularly in its modification of the Product Quantization algorithm?
3. Do you have any concerns or questions regarding the method's ability to handle non-linear neuron activations?
4. How does the reviewer assess the efficiency and scalability of the proposed quantization approach compared to other methods?
Review
Review This paper suggests a quantization approach for neural networks, based on the Product Quantization (PQ) algorithm, which has been successful in quantization for similarity search. The basic idea is to quantize the weights of a neuron/single layer with a variant of PQ, which is modified to optimize the quantization error of the inner products of sample inputs with the weights, rather than of the weights themselves. This is cast as a weighted variant of k-means. The inner product is more directly related to the network output (though it still does not account for non-linear neuron activations) and is thus expected to yield better downstream performance, and it only requires introducing unlabeled input samples into the quantization process. This approach is built into a pipeline that gradually quantizes the entire network. Overall, I support the paper and recommend acceptance. PQ is known to be successful for quantization in other contexts, and the specialization suggested here for neural networks is natural and well-motivated. The method can be expected to perform well empirically, which the experiments verify, and to have potential impact. Questions:
1. Can you comment on the quantization time of the suggested method? Repeatedly solving the EM steps can add up to quite an overhead. Does it pose a difficulty? How does it compare to other methods?
2. Can you elaborate on the issue of non-linearity? It is mentioned only briefly in the conclusion. What is the difficulty in incorporating it? Is it in solving Equation (4)? And how do you expect it to affect the results?
ICLR
1. What are the main contributions of the paper on weight compression and quantization?
2. Are there any concerns regarding the novelty of the proposed approach?
3. How does the reviewer assess the clarity and understanding of the paper's content?
4. What are the questions raised by the reviewer regarding the compression ratio and its computation?
5. How does the reviewer suggest improving the paper, particularly in comparing and contrasting the proposed method with prior works?
Review
Review This paper proposes to use codes and codebooks to compress the weights. The authors also try minimizing the layer reconstruction error instead of the weight approximation error for better quantization results. A distillation loss is also used for fine-tuning the quantized weights. Empirical results on ResNets show that the proposed method achieves a good compression ratio while maintaining competitive accuracy. This paper is overall easy to follow. My main concern is the novelty of this paper. The two main contributions of the paper, (1) using codes and codebooks to compress weights and (2) minimizing the layer reconstruction error instead of the weight approximation error, are both not new. For instance, using codes and codebooks to compress the weights has already been used in [1, 2]. A weighted k-means solver is also used in [2], though the "weighted" in [2] comes from second-order information rather than from minimizing the reconstruction error. In addition, minimizing the reconstruction error has already been used in low-rank approximation [3] and network pruning [4]. Clarification of the connections/differences and a comparison with these related methods should be provided to show the efficacy of the proposed method. It is also not clear how the compression ratio in Table 1 is obtained. Say, for block size d = 4, an index is required for each block, and the resulting compression ratio is at most 4 (correct me if I have misunderstood). Can the authors provide an example to explain how to compute the compression ratio?
[1] Model compression as constrained optimization, with application to neural nets. Part II: quantization.
[2] Towards the limit of network quantization.
[3] Efficient and Accurate Approximations of Nonlinear Convolutional Networks.
[4] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression.
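Regarding the compression-ratio question above, a back-of-the-envelope sketch under the storage model the paper gives in its Metrics paragraph (float32 original weights, byte-aligned indices, float16 centroids; the layer size is an arbitrary example):

def compression_ratio(n_weights, d, k):
    original_bytes = n_weights * 4               # float32 weights
    index_bytes = n_weights // d                 # 1 byte per block when k <= 256
    centroid_bytes = k * d * 2                   # float16 codebook
    return original_bytes / (index_bytes + centroid_bytes)

print(compression_ratio(n_weights=512 * 512, d=4, k=256))  # about 15.5x

Under these assumptions each 1-byte index replaces d float32 weights, so even for d = 4 the ratio approaches 16x before accounting for the codebook overhead.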
ICLR
Title And the Bit Goes Down: Revisiting the Quantization of Neural Networks Abstract In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss reconstruction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using bytealigned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5 MB (20× compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet object classification and by compressing a Mask R-CNN with a 26× factor.1 1 INTRODUCTION There is a growing need for compressing the best convolutional networks (or ConvNets) to support embedded devices for applications like robotics and virtual/augmented reality. Indeed, the performance of ConvNets on image classification has steadily improved since the introduction of AlexNet (Krizhevsky et al., 2012). This progress has been fueled by deeper and richer architectures such as the ResNets (He et al., 2015) and their variants ResNeXts (Xie et al., 2017) or DenseNets (Huang et al., 2017). Those models particularly benefit from the recent progress made with weak supervision (Mahajan et al., 2018; Yalniz et al., 2019; Berthelot et al., 2019). Compression of ConvNets has been an active research topic in the recent years, leading to networks with a 71% top-1 accuracy on ImageNet object classification that fit in 1MB (Wang et al., 2018b). In this work, we propose a compression method particularly adapted to ResNet-like architectures. Our approach takes advantage of the high correlation in the convolutions by the use of a structured quantization algorithm, Product Quantization (PQ) (Jégou et al., 2011). More precisely, we exploit the spatial redundancy of information inherent to standard convolution filters (Denton et al., 2014). Besides reducing the memory footprint, we also produce compressed networks allowing efficient inference on CPU by using byte-aligned indexes, as opposed to entropy decoders (Han et al., 2016). Our approach departs from traditional scalar quantizers (Han et al., 2016) and vector quantizers (Gong et al., 2014; Carreira-Perpiñán & Idelbayev, 2017) by focusing on the accuracy of the activations rather than the weights. This is achieved by leveraging a weighted k-means technique. To our knowledge this strategy (see Section 3) is novel in this context. The closest work we are aware of is the one by Choi et al. (2016), but the authors use a different objective (their weighted term is derived from second-order information) along with a different quantization technique (scalar quantization). Our method targets a better in-domain reconstruction, as depicted by Figure 1. Finally, we compress the network sequentially to account for the dependency of our method to the activations at each layer. To prevent the accumulation of errors across layers, we guide this compression with the activations of the uncompressed network on unlabelled data: training by distillation (Hinton et al., 2014) allows for both an efficient layer-by-layer compression procedure and a global fine-tuning of the codewords. Thus, we only need a set of unlabelled images to adjust the codewords. 
As opposed to recent works by Mishra & Marr (2017) or Lopes et al. (2017), our distillation scheme is sequential and the underlying compression method is different (PQ vs. scalar). Code and compressed models are available at https://github.com/facebookresearch/kill-the-bits. We show that applying our approach to the semi-supervised ResNet-50 of Yalniz et al. (Yalniz et al., 2019) leads to a 5 MB memory footprint and a 76.1% top-1 accuracy on ImageNet object classification (hence 20× compression vs. the original model). Moreover, our approach generalizes to other tasks such as image detection. As shown in Section 4.3, we compress a Mask R-CNN (He et al., 2017) with a size budget around 6 MB (26× compression factor) while maintaining a competitive performance. 2 RELATED WORK There is a large body of literature on network compression. We review the works closest to ours and refer the reader to two recent surveys (Guo, 2018; Cheng et al., 2017) for a comprehensive overview. Low-precision training. Since early works like those of Courbariaux et al. (2015), researchers have developed various approaches to train networks with low-precision weights. Those approaches include training with binary or ternary weights (Shayer et al., 2017; Zhu et al., 2016; Li & Liu, 2016; Rastegari et al., 2016; McDonnell, 2018), learning a combination of binary bases (Lin et al., 2017) and quantizing the activations (Zhou et al., 2016; 2017; Mishra et al., 2017). Some of these methods assume the possibility to employ specialized hardware that speeds up inference and improves power efficiency by replacing most arithmetic operations with bit-wise operations. However, the back-propagation has to be adapted to the case where the weights are discrete. Quantization. Vector Quantization (VQ) and Product Quantization (PQ) have been extensively studied in the context of nearest-neighbor search (Jégou et al., 2011; Ge et al., 2014; Norouzi & Fleet, 2013). The idea is to decompose the original high-dimensional space into a Cartesian product of subspaces that are quantized separately with a joint codebook. To our knowledge, Gong et al. (2014) were the first to introduce these stronger quantizers for neural network quantization, followed by Carreira-Perpiñán & Idelbayev (2017). As we will see in the remainder of this paper, employing this discretization off-the-shelf does not optimize the right objective function, and leads to a catastrophic drift of performance for deep networks. Pruning. Network pruning amounts to removing connections according to an importance criterion (typically the magnitude of the weight associated with this connection) until the desired model size/accuracy tradeoff is reached (LeCun et al., 1990). A natural extension of this work is to prune structural components of the network, for instance by enforcing channel-level (Liu et al., 2017) or filter-level (Luo et al., 2017) sparsity. However, these methods alternate between pruning and re-training steps and thus typically require a long training time. Dedicated architectures. Architectures such as SqueezeNet (Iandola et al., 2016), NASNet (Zoph et al., 2017), ShuffleNet (Zhang et al., 2017; Ma et al., 2018), MobileNets (Sandler et al., 2018) and EfficientNets (Tan & Le, 2019) are designed to be memory efficient. As they typically rely on a combination of depth-wise and point-wise convolutional filters, sometimes along with channel shuffling, they are less prone than ResNets to structured quantization techniques such as PQ.
These architectures are either designed by hand or found using the framework of architecture search (Howard et al., 2019). For instance, the respective model size and top-1 ImageNet test accuracy of a MobileNet are 13.4 MB and 71.9%, to be compared with a vanilla ResNet-50 of size 97.5 MB and a top-1 accuracy of 76.2%. Moreover, larger models such as ResNets can benefit from large-scale weakly- or semi-supervised learning to reach better performance (Mahajan et al., 2018; Yalniz et al., 2019). Combining some of the mentioned approaches yields high compression factors, as demonstrated by Han et al. with Deep Compression (DC) (Han et al., 2016) or more recently by Tung & Mori (Tung & Mori, 2018). Moreover, and from a practical point of view, the process of compressing networks depends on the type of hardware on which the networks will run. Recent work directly quantizes to optimize energy-efficiency and latency time on specific hardware (Wang et al., 2018a). Finally, the memory overhead of storing the full activations is negligible compared to the storage of the weights for two reasons. First, in realistic real-time inference setups, the batch size is almost always equal to one. Second, a forward pass only requires storing the activations of the current layer (which are often smaller than the size of the input) and not the whole activations of the network. 3 OUR APPROACH In this section, we describe our strategy for network compression and we show how to extend our approach to quantize a modern ConvNet architecture. The specificity of our approach is that it aims at a small reconstruction error for the outputs of the layer rather than the layer weights themselves. We first describe how we quantize a single fully-connected and convolutional layer. Then we describe how we quantize a full pre-trained network and finetune it. 3.1 QUANTIZATION OF A FULLY-CONNECTED LAYER We consider a fully-connected layer with weights W ∈ ℝ^{Cin×Cout} and, without loss of generality, we omit the bias since it does not impact the reconstruction error. Product Quantization (PQ). Applying the PQ algorithm to the columns of W consists in evenly splitting each column into m contiguous subvectors and learning a codebook on the resulting mCout subvectors. Then, a column of W is quantized by mapping each of its subvectors to its nearest codeword in the codebook. For simplicity, we assume that Cin is a multiple of m, i.e., that all the subvectors have the same dimension d = Cin/m. More formally, the codebook C = {c_1, . . . , c_k} contains k codewords of dimension d. Any column w_j of W is mapped to its quantized version q(w_j) = (c_{i_1}, . . . , c_{i_m}), where i_1 denotes the index of the codeword assigned to the first subvector of w_j, and so forth. The codebook is then learned by minimizing the following objective function: ‖W − Ŵ‖₂² = Σ_j ‖w_j − q(w_j)‖₂², (1) where Ŵ denotes the quantized weights. This objective can be efficiently minimized with k-means. When m is set to 1, PQ is equivalent to vector quantization (VQ), and when m is equal to Cin, it is the scalar k-means algorithm. The main benefit of PQ is its expressivity: each column w_j is mapped to a vector in the product C^m = C × · · · × C, thus PQ generates an implicit codebook of size k^m. Our algorithm. PQ quantizes the weight matrix of the fully-connected layer. However, in practice, we are interested in preserving the output of the layer, not its weights.
This is illustrated in the case of a non-linear classifier in Figure 1: preserving the weights of a layer does not necessarily guarantee preserving its output. In other words, the Frobenius approximation of the weights of a layer is not guaranteed to be the best approximation of the output over some arbitrary domain (in particular for in-domain inputs). We thus propose an alternative to PQ that directly minimizes the reconstruction error on the output activations obtained by applying the layer to in-domain inputs. More precisely, given a batch of B input activations x ∈ ℝ^{B×Cin}, we are interested in learning a codebook C that minimizes the difference between the output activations and their reconstructions: ‖y − ŷ‖₂² = Σ_j ‖x(w_j − q(w_j))‖₂², (2) where y = xW is the output and ŷ = xŴ its reconstruction. Our objective is a re-weighting of the objective in Equation (1). We can thus learn our codebook with a weighted k-means algorithm. First, we unroll x of size B × Cin into x̃ of size (B × m) × d, i.e. we split each row of x into m subvectors of size d and stack these subvectors. Next, we adapt the EM algorithm as follows. (1) E-step (cluster assignment). Recall that every column w_j is divided into m subvectors of dimension d. Each subvector v is assigned to the codeword c_j such that c_j = argmin_{c∈C} ‖x̃(c − v)‖₂². (3) This step is performed by exhaustive exploration. Our implementation relies on broadcasting to be computationally efficient. (2) M-step (codeword update). Let us consider a codeword c ∈ C. We denote (v_p)_{p∈Ic} the subvectors that are currently assigned to c. Then, we update c ← c*, where c* = argmin_{c∈ℝ^d} Σ_{p∈Ic} ‖x̃(c − v_p)‖₂². (4) This step explicitly computes the solution of the least-squares problem. Our implementation performs the computation of the pseudo-inverse of x̃ before alternating between the Expectation and Minimization steps, as it does not depend on the learned codebook C. We initialize the codebook C by uniformly sampling k vectors among those we wish to quantize. After performing the E-step, some clusters may be empty. To resolve this issue, we iteratively perform the following additional steps for each empty cluster of index i: (1) find the codeword c₀ corresponding to the most populated cluster; (2) define new codewords c′₀ = c₀ + e and c′ᵢ = c₀ − e, where e ∼ N(0, εI); and (3) perform the E-step again. We proceed to the M-step after all the empty clusters are resolved. We set ε = 10⁻⁸ and observe that it generally takes no more than 1 or 2 EM iterations to resolve all the empty clusters. Note that the quality of the resulting compression is sensitive to the choice of x. 3.2 CONVOLUTIONAL LAYERS Despite being presented in the case of a fully-connected layer, our approach works on any set of vectors. As a consequence, our approach can be applied to a convolutional layer if we split the associated 4D weight matrix into a set of vectors. There are many ways to split a 4D matrix into a set of vectors, and we are aiming for one that maximizes the correlation between the vectors, since vector quantization based methods work best when the vectors are highly correlated. Given a convolutional layer, we have Cout filters of size K × K × Cin, leading to an overall 4D weight matrix W ∈ ℝ^{Cout×Cin×K×K}. The dimensions along the output and input coordinates have no particular reason to be correlated. On the other hand, the spatial dimensions related to the filter size are by nature very correlated: nearby patches or pixels likely share information.
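Before turning to the convolutional reshaping, here is a minimal NumPy sketch of the weighted k-means EM steps of Equations (3) and (4). The function name, array shapes, and the omission of empty-cluster handling are our illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def weighted_kmeans(x_tilde, V, k, n_iter=100, seed=0):
    """Sketch of the activation-weighted k-means of Section 3.1.
    x_tilde: (n, d) unrolled in-domain activations; V: (P, d) weight subvectors."""
    rng = np.random.default_rng(seed)
    C = V[rng.choice(len(V), size=k, replace=False)]      # init: sample k subvectors
    G = x_tilde.T @ x_tilde          # Gram matrix: ||x_tilde u||^2 = u^T G u
    proj = np.linalg.pinv(x_tilde) @ x_tilde  # pseudo-inverse term of the M-step, fixed
    for _ in range(n_iter):
        # E-step (Eq. 3): assign each subvector to the codeword minimizing
        # the activation-weighted distortion ||x_tilde (c - v)||_2^2.
        diffs = V[:, None, :] - C[None, :, :]             # (P, k, d)
        dists = np.einsum('pkd,de,pke->pk', diffs, G, diffs)
        assign = dists.argmin(axis=1)
        # M-step (Eq. 4): closed-form least-squares update of each codeword.
        for j in range(k):
            members = V[assign == j]
            if len(members):          # empty-cluster resolution omitted in this sketch
                C[j] = proj @ members.sum(axis=0) / len(members)
    return C, assign
```

When x̃ has full column rank, the projection x̃⁺x̃ is the identity and the M-step reduces to the plain k-means centroid update; the weighting then acts only through the E-step distances.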
As depicted in Figure 2, we thus reshape the weight matrix in a way that leads to a spatially coherent quantization. More precisely, we quantize W spatially into subvectors of size d = K × K using the following procedure. We first reshape W into a 2D matrix of size (Cin × K × K) × Cout. Column j of the reshaped matrix W_r corresponds to the jth filter of W and is divided into Cin subvectors of size K × K. Similarly, we reshape the input activations x accordingly to x_r, so that reshaping back the matrix x_rW_r yields the same result as x ∗ W. In other words, we adopt a dual approach to the one using bi-level Toeplitz matrices to represent the weights. Then, we apply our method exposed in Section 3.1 to quantize each column of W_r into m = Cin subvectors of size d = K × K with k codewords, using x_r as input activations in (2). As a natural extension, we also quantize with larger subvectors, for example subvectors of size d = 2 × K × K; see Section 4 for details. (Denoting x̃⁺ the Moore-Penrose pseudoinverse of x̃, the solution of the least-squares problem in (4) is c* = (1/|Ic|) x̃⁺x̃ (Σ_{p∈Ic} v_p).) In our implementation, we adapt the reshaping of W and x to various types of convolutions. We account for the padding, the stride, the number of groups (for depthwise convolutions and in particular for pointwise convolutions) and the kernel size. We refer the reader to the code for more details. 3.3 NETWORK QUANTIZATION In this section, we describe our approach for quantizing a neural network. We quantize the network sequentially, starting from the lowest layer to the highest layer. We guide the compression of the student network by the non-compressed teacher network, as detailed below. Learning the codebook. We recover the current input activations of the layer, i.e. the input activations obtained by forwarding a batch of images through the quantized lower layers, and we quantize the current layer using those activations. Experimentally, we observed a drift in both the reconstruction and classification errors when using the activations of the non-compressed network rather than the current activations. Finetuning the codebook. We finetune the codewords by distillation (Hinton et al., 2014) using the non-compressed network as the teacher network and the compressed network (up to the current layer) as the student network. Denoting y_t (resp. y_s) the output probabilities of the teacher (resp. student) network, the loss we optimize is the Kullback-Leibler divergence L = KL(y_s, y_t). Finetuning on codewords is done by averaging the gradients of each subvector assigned to a given codeword. More formally, after the quantization step, we fix the assignments once and for all. Then, denoting (b_p)_{p∈Ic} the subvectors that are assigned to codeword c, we perform the SGD update with a learning rate η: c ← c − η (1/|Ic|) Σ_{p∈Ic} ∂L/∂b_p. (5) Experimentally, we find this approach to perform better than finetuning on the image labels, as demonstrated in Table 3. Moreover, this approach does not require any labelled data. 3.4 GLOBAL FINETUNING In a final step, we globally finetune the codebooks of all the layers to reduce any residual drifts and we update the running statistics of the BatchNorm layers: we empirically find it beneficial to finetune all the centroids after the whole network is quantized.
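As a minimal PyTorch-style sketch of the codeword update in Equation (5), under the assumption that the quantized subvectors b_p are exposed as a differentiable (P, d) tensor (the tensor layout and names are ours, not the authors' code):

```python
import torch

def finetune_codewords(C, assign, subvector_params, loss, lr=0.01):
    """Sketch of Eq. (5): with assignments fixed, each codeword moves by the
    average gradient of the subvectors b_p currently assigned to it.
    C: (k, d) codebook; assign: (P,) long tensor; subvector_params: (P, d)
    tensor with requires_grad=True from which `loss` was computed."""
    loss.backward()
    with torch.no_grad():
        grads = subvector_params.grad            # (P, d): dL/db_p per subvector
        for j in range(C.shape[0]):
            mask = (assign == j)
            if mask.any():
                C[j] -= lr * grads[mask].mean(dim=0)   # averaged-gradient SGD step
    subvector_params.grad = None
    return C
```

In practice the loss here would be the distillation KL divergence between student and teacher outputs, so no labels enter the update.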
The finetuning procedure is exactly the same as described in Section 3.3, except that we additionally switch the BatchNorms to training mode, meaning that the learnt coefficients remain fixed but that the batch statistics (running mean and variance) are still being updated with the standard moving average procedure. We perform the global finetuning using the standard ImageNet training set for 9 epochs with an initial learning rate of 0.01, a weight decay of 10⁻⁴ and a momentum of 0.9. The learning rate is decayed by a factor 10 every 3 epochs. As demonstrated in the ablation study in Table 3, finetuning on the true labels performs worse than finetuning by distillation. A possible explanation is that the supervision signal coming from the teacher network is richer than the one-hot vector used as a traditional learning signal in supervised learning (Hinton et al., 2014). 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP We quantize vanilla ResNet-18 and ResNet-50 architectures pretrained on the ImageNet dataset (Deng et al., 2009). Unless explicit mention of the contrary, the pretrained models are taken from the PyTorch model zoo (https://pytorch.org/docs/stable/torchvision/models). We run our method on a 16 GB Volta V100 GPU. Quantizing a ResNet-50 with our method (including all finetuning steps) takes about one day on 1 GPU. We detail our experimental setup below. Our code and the compressed models are open-sourced. Compression regimes. We explore a small-block (resp. large-block) compression regime by setting the subvector size of regular 3×3 convolutions to d = 9 (resp. d = 18) and the subvector size of pointwise convolutions to d = 4 (resp. d = 8). For ResNet-18, the block size of pointwise convolutions is always equal to 4. The number of codewords or centroids is set to k ∈ {256, 512, 1024, 2048} for each compression regime. Note that we clamp the number of centroids to min(k, Cout × m/4) for stability. For instance, the first layer of the first stage of the ResNet-50 has size 64 × 64 × 1 × 1, thus we always use k = 128 centroids with a block size d = 8. For a given number of centroids k, small blocks lead to a lower compression ratio than large blocks. Sampling the input activations. Before quantizing each layer, we randomly sample a batch of 1024 training images to obtain the input activations of the current layer and reshape it as described in Section 3.2. Then, before each iteration (E+M step) of our method, we randomly sample 10,000 rows from those reshaped input activations. Hyperparameters. We quantize each layer while performing 100 steps of our method (sufficient for convergence in practice). We finetune the centroids of each layer on the standard ImageNet training set during 2,500 iterations with a batch size of 128 (resp. 64) for the ResNet-18 (resp. ResNet-50) with a learning rate of 0.01, a weight decay of 10⁻⁴ and a momentum of 0.9. For accuracy and memory reasons, the classifier is always quantized with a block size d = 4 and k = 2048 (resp. k = 1024) centroids for the ResNet-18 (resp. ResNet-50). Moreover, the first convolutional layer of size 7 × 7 is not quantized, as it represents less than 0.1% (resp. 0.05%) of the weights of a ResNet-18 (resp. ResNet-50). Metrics. We focus on the tradeoff between accuracy and memory. The accuracy is the top-1 accuracy on the standard validation set of ImageNet. The memory footprint is calculated as the indexing cost (number of bits per weight) plus the overhead of storing the centroids in float16.
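To make this memory accounting concrete, the following sketch (an illustrative helper of our own, assuming k ≤ 256 so that one index fits in a single byte) reproduces the footprint computation; the worked example in the text below confirms the numbers:

```python
def footprint_bytes(C_out, C_in, K, d, k):
    """Indexing cost plus float16 centroid storage, as described above.
    Assumes 1-byte indexes (k <= 256); for larger k, use log2(k) bits per block."""
    m = (C_out * C_in * K * K) // d        # number of blocks (subvectors)
    index_bytes = m * 1                    # one byte per block index
    centroid_bytes = k * d * 2             # k centroids of dimension d in float16
    return index_bytes + centroid_bytes

# 128 x 128 x 3 x 3 layer, k = 256, d = 9:
# m = 16,384 blocks -> 16 kB of indexes, plus 256 * 9 * 2 = 4.5 kB of centroids.
print(footprint_bytes(128, 128, 3, 9, 256))   # 20992 bytes, i.e. 20.5 kB total
```

This also shows where the compression comes from: the per-weight cost drops from 32 (or 16) bits to log2(k)/d bits plus a shared centroid overhead.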
As an example, quantizing a layer of size 128 × 128 × 3 × 3 with k = 256 centroids (1 byte per subvector) and a block size of d = 9 leads to an indexing cost of 16 kB for m = 16,384 blocks plus the cost of storing the centroids of 4.5 kB. 4.2 IMAGE CLASSIFICATION RESULTS We report below the results of our method applied to various ResNet models. First, we compare our method with the state of the art on the standard ResNet-18 and ResNet-50 architectures. Next, we show the potential of our approach on a competitive ResNet-50. Finally, an ablation study validates the pertinence of our method. Vanilla ResNet-18 and ResNet-50. We evaluate our method on the ImageNet benchmark for ResNet-18 and ResNet-50 architectures and compare our results to the following methods: Trained Ternary Quantization (TTQ) (Zhu et al., 2016), LR-Net (Shayer et al., 2017), ABC-Net (Lin et al., 2017), Binary Weight Network (XNOR-Net or BWN) (Rastegari et al., 2016), Deep Compression (DC) (Han et al., 2016) and Hardware-Aware Automated Quantization (HAQ) (Wang et al., 2018a). We report the accuracies and compression factors from the original papers and/or from the two surveys (Guo, 2018; Cheng et al., 2017) for a given architecture when the result is available. We do not compare our method to DoReFa-Net (Zhou et al., 2016) and WRPN (Mishra et al., 2017) as those approaches also use low-precision activations and hence get lower accuracies, e.g., 51.2% top-1 accuracy for a XNOR-Net with ResNet-18. The results are presented in Figure 4.2. For better readability, some results for our method are also displayed in Table 1. We report the average accuracy and standard deviation over 3 runs. Our method significantly outperforms the state of the art for various operating points. For instance, for a ResNet-18, our method with large blocks and k = 512 centroids reaches a higher accuracy than ABC-Net (M = 2) with a compression ratio that is 2× larger. Similarly, on the ResNet-50, our compressed model with k = 256 centroids in the large blocks setup yields a comparable accuracy to DC (2 bits) with a compression ratio that is 2× larger. The work by Tung & Mori (Tung & Mori, 2018) is likely the only one that remains competitive with ours, with a 6.8 MB network after compression, using a technique that prunes the network and therefore implicitly changes the architecture. The authors report the delta accuracy, for which we have no directly comparable top-1 accuracy, but their method is arguably complementary to ours. Semi-supervised ResNet-50. Recent works (Mahajan et al., 2018; Yalniz et al., 2019) have demonstrated the possibility to leverage a large collection of unlabelled images to improve the performance of a given architecture. In particular, Yalniz et al. (Yalniz et al., 2019) use the publicly available YFCC-100M dataset (Thomee et al., 2015) to train a ResNet-50 that reaches 79.1% top-1 accuracy on the standard validation set of ImageNet. In the following, we use this particular model and refer to it as the semi-supervised ResNet-50. In the low compression regime (block sizes of 4 and 9), with k = 256 centroids (practical for implementation), our compressed semi-supervised ResNet-50 reaches 76.12% top-1 accuracy. In other words, the model compressed to 5 MB attains the performance of a vanilla, non-compressed ResNet-50 (vs. 97.5 MB for the non-compressed ResNet-50). Comparison for a given size budget.
To ensure a fair comparison, we compare our method for a given model size budget against the reference methods in Table 2. It should be noted that our method can further benefit from advances in semi-supervised learning to boost the performance of the non-compressed, and hence of the compressed, network. Ablation study. We perform an ablation study on the vanilla ResNet-18 to study the respective effects of quantizing using the activations and finetuning by distillation (here, finetuning refers both to the per-layer finetuning and to the global finetuning after the quantization described in Section 3). We refer to our method as Act + Distill. First, we still finetune by distillation but change the quantization: instead of quantizing using our method (see Equation (2)), we quantize using the standard PQ algorithm and do not take the activations into account, see Equation (1). We refer to this method as No act + Distill. Second, we quantize using our method but perform a standard finetuning using the image labels (Act + Labels). The results are displayed in Table 3. Our approach consistently yields significantly better results. As a side note, quantizing all the layers of a ResNet-18 with the standard PQ algorithm and without any finetuning leads to top-1 accuracies below 25% for all operating points, which illustrates the drift in accuracy occurring when compressing deep networks with standard methods (as opposed to our method). 4.3 IMAGE DETECTION RESULTS To demonstrate the generality of our method, we compress the Mask R-CNN architecture used for image detection in many real-life applications (He et al., 2017). We compress the backbone (ResNet-50 FPN) in the small-block compression regime and refer the reader to the open-sourced compressed model for the block sizes used in the various heads of the network. We use k = 256 centroids for every layer. We perform the fine-tuning (layer-wise and global) using distributed training on 8 V100 GPUs. Results are displayed in Table 4. We argue that this provides an interesting point of comparison for future work aiming at compressing such architectures for various applications. 5 CONCLUSION We presented a quantization method based on Product Quantization that gives state-of-the-art results on ResNet architectures and that generalizes to other architectures such as Mask R-CNN. Our compression scheme does not require labeled data and the resulting models are byte-aligned, allowing for efficient inference on CPU. Further research directions include testing our method on a wider variety of architectures. In particular, our method can be readily adapted to simultaneously compress and transfer ResNets trained on ImageNet to other domains. Finally, we plan to take the non-linearity into account to improve our reconstruction error.
1. What is the main contribution of the paper on neural network compression? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its ability to trade off performance for compression ratios? 3. How does the method work, and what are the implications of its approach to quantizing matrices of linear operations? 4. What are some potential applications where the suggested method might be valuable, and how does the trade-off between performance and compression ratio factor into these applications? 5. What are some limitations or assumptions of the method that could be explored further in future research?
Review
Review The suggested method proposes a technique to compress neural networks based on PQ quantization. The algorithm quantizes matrices of linear operations and, by generalization, also works on convolutional networks. Rather than trying to compress weights (i.e. to minimize the distance between original and quantized weights), the algorithm considers a distribution of unlabeled inputs and looks for a quantization which would affect output activations as little as possible over that distribution of data. The algorithm works by splitting each column of W_ij into m equal subvectors, learning a codebook for those subvectors, and encoding each of those subvectors as one of the words from the codebook. The method provides impressive compression ratios (on the order of 20-30×) but at the cost of a lower performance. Whether this is a valuable trade-off is highly application dependent. Overall I find the paper interesting and enjoyable. However, as I am not an expert in the research area, I can not assess how state of the art the suggested method is. There are a few other questions that I think would be nice to answer. I will try to describe them below: Suppose we have a matrix W_{ij} with dimensions NxM where changing i for a given j defines a column. By definition, the linear operation is y_i = sum_j W_ij x_j. Now say each column of matrix W is quantized into m subvectors. We can express W_ij in the following way: W_ij = V^1_ij + V^2_ij + ... + V^m_ij, where V^k_ij is zero everywhere except for the rows covering a given quantized vector. For example, if W had dimensions of 16x8 and m=4 (so blocks of 4 rows, 0-indexed), then V^2_{3,j}=0 for all j, V^2_{4,j} and V^2_{7,j} are non-zero, V^2_{8,j}=0, and V^2_{i=4:8,j} is one of the quantized vectors. Then y_i = sum_j W_ij x_j = sum_k sum_j (V^k_ij) x_j =: sum_k z^k_i, where the z^k are partial products: z^k_i = 0 for i < k*N/m or i >= (k+1)N/m. Thus, the suggested solution effectively splits the output vector y_i into m sections, defines sparse matrices V^k_{ij}, 1<=k<=m, and performs column-wise vector quantization for these matrices separately. Generally, it is not obvious or given that the current method would be able to compress general matrices well, as it implicitly assumes that weight W_{ij} has a high "correlation" with weights W_{i+kN/m,j} (which I call "vertical" correlation), W_{i,j+some_number} (which I call "horizontal" correlation) and W_{i+kN/m,j+some_number} (which I call "other" correlation). It is not given that those kinds of redundancies would exist in arbitrary weight matrices. Naturally, the method will work well when weight matrices have a lot of structure and quantized vectors can then be reused. Matrices can have either "horizontal" or "vertical" redundancy (or "other" or neither). It would be very interesting to see which kind of redundancy their method managed to capture. In the "horizontal" case, it should work well when inputs have a lot of redundancy (say x_j' and x_j'' are highly correlated, making it possible to reuse code-words horizontally within any given V^k: V^k_ij' = V^k_ij''). However, if this were the case, it would make more sense to simply remove the redundancy by pruning the input vector x_j, removing either x_j' or x_j'' from it. This can be done by removing one of the outputs from the previous layer. This can be a symptom of a redundant input. Another option is exploiting "vertical" redundancy: this happens when output y_i' is correlated with output y_{i'+N/m}. This allows the same code-word to be reused vertically. This can be a symptom of a redundant output.
It could also be the case that compressibility could be further substantially improved by trying different matrix row permutations. Also, if one notices that y_i' is correlated with y_i'', it might make sense to permute the matrix rows in such a way that both rows would end up a multiple of N/m apart. It would be interesting to see how this would affect compressibility. The third case is when code words are reused in arbitrary cases. Generally, I think that answering the following questions would be interesting and could guide further research: 1. It would be very interesting to know what kind of code-word reuse patterns the algorithm was able to capture, as this may guide further research. 2. How invariant is compressibility under random permutations of matrix rows (and thus also of output vectors)?
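A quick numerical check of the row-block decomposition sketched in this review (array sizes are arbitrary; this is an illustrative verification, not part of the paper or review):

```python
import numpy as np

# y = W x equals the sum of the partial products z^k, where V^k keeps only
# the rows of W belonging to output block k and zeroes all other rows.
N, M, m = 16, 8, 4
W = np.random.randn(N, M)
x = np.random.randn(M)
y = W @ x
z = []
for k in range(m):
    Vk = np.zeros_like(W)
    rows = slice(k * N // m, (k + 1) * N // m)
    Vk[rows, :] = W[rows, :]
    z.append(Vk @ x)            # z^k is zero outside block k
assert np.allclose(y, sum(z))
```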
ICLR
Title Monte Carlo Deep Neural Network Arithmetic Abstract Quantization is a crucial technique for achieving low-power, low latency and high throughput hardware implementations of Deep Neural Networks. Quantized floating point representations have received recent interest due to their hardware efficiency benefits and ability to represent a higher dynamic range than fixed point representations, leading to improvements in accuracy. We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCDA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss. The method makes no assumptions regarding the underlying parameter distributions. We evaluate our method on pre-trained image classification models on the CIFAR-10 and ImageNet datasets. For the same network topology and dataset, we demonstrate the ability to gain the equivalent of bits of precision by simply choosing weight parameter sets which demonstrate a lower loss of significance in the Monte Carlo trials. Additionally, we can apply MCDA to compare the sensitivity of different network topologies to quantization effects. 1 INTRODUCTION Deep Neural Networks have achieved state-of-the-art performances in many machine learning tasks such as speech recognition (Collobert et al., 2011), machine translation (Bahdanau et al., 2014), object detection (Ren et al., 2015) and image classification (Krizhevsky et al., 2012). However, excellent performance comes at the cost of significant computational and memory complexity, typically requiring teraops of computation during inference and gigabytes of storage. To overcome these complexities, compression methods have been utilized, aiming to exploit the inherent resilience of DNNs to noise. These engender representations which maintain algorithm performance but significantly improve the latency, throughput and power consumption of hardware implementations. In particular, exploiting reduced numerical precision for data representations through quantization has been emphatically promising, whereby on customizable hardware, efficiency scales quadratically with each bit of precision. Quantization of fixed-point arithmetic (Q-FX) for DNN inference has been extensively studied, and more recently there has been increasing interest in quantized floating point (Q-FP) arithmetic for both DNN inference and training (Wang et al., 2018). Q-FP has the advantage of a higher dynamic range compared to equivalent Q-FX representations and a reduced hardware cost over single-precision floating point (FP). This has influenced application-specific integrated circuits (ASICs) such as Google's tensor processing unit (TPU), which supports 16-bit floating point (16-FP), and soft processors such as Microsoft's Project Brainwave, which utilizes 8-FP. To illustrate these hardware benefits, we synthesized arithmetic logic units (ALUs) in different formats and different precisions on an FPGA and present performance estimates in operations per second (OPs) and area estimates in Look-up Tables (LUTs) per operation (LUTs/Op) in Figure 1. As shown, 8-bit fixed point (8-FX) achieves improved performance and area over 8-FP. However, 7-FP is a significant improvement over 8- or 12-FX and 8- or 9-FP. These examples demonstrate substantial performance and area benefits from reducing FP precision by only 1 to 2 bits.
Thus, if we can design networks which not only achieve high accuracy but are also robust to quantization, higher-performing hardware solutions are possible. (Source code will be available if the paper is accepted.) Since in Q-FP we are trying to represent the infinite set of real numbers using a finite number of bits, quantization and rounding artefacts will be introduced, with inaccuracies being cascaded along the computation graph (IEEE, 1985; Goldberg, 1991; Higham, 2002). This paper proposes Monte Carlo Deep Neural Network Arithmetic (MCDA), a novel way to apply Monte Carlo Arithmetic (MCA) (Parker et al., 2000) for determining the sensitivity of Deep Neural Networks to Q-FP representations. It allows hardware-software designers to quantify the impact of quantization, enabling more efficient systems to be discovered. We do this by exploiting Monte Carlo simulations under which rounding effects are randomized. This, in turn, allows one to infer the sensitivity of executing a computation graph to quantization effects. Our MCDA technique is highly sensitive, allowing very small differences in quantization behaviour to be detected. The technique makes no assumptions regarding data distributions and directly measures the effects of quantization on the problem under study. This allows us to provide insights into the precision requirements of any inference network for any given dataset. Additionally, we can use the technique to select weight parameters which are more robust to floating point rounding. The theoretical and practical contributions of this work can be summarized as follows: • We introduce a novel and rigorous analysis technique, Monte Carlo Deep Neural Network Arithmetic (MCDA), which measures the sensitivity of Deep Neural Network inference computation to floating point rounding error. • When applied to neural network inference, we show MCDA can determine the precision requirements of different networks, rank them, and detect small differences between different neural network topologies and weight sets. • We demonstrate that while networks with the same topology but different weights may have the same loss and validation accuracy, their sensitivity to quantization can be vastly different. Using the CIFAR-10 and ImageNet datasets, we introduce a method to choose weights which are more robust to rounding error, resulting in a greatly improved accuracy-area tradeoff over state-of-the-art methods. It is worth noting that although we consider convolutional neural networks for image classification, this method could be applied to any neural network architecture and application. Moreover, while the experiments in this paper are limited to inference, it may be possible to apply the same idea to analyze training algorithms. 2 RELATED WORK Low-precision representations for deep learning have been extensively studied. Many training methods have been developed to design representations for fixed point inference (Jacob et al., 2018; Faraone et al., 2018; Zhou et al., 2016) and training (Wu et al., 2018b; Yang et al., 2019; Gupta et al., 2015; Sakr & Shanbhag, 2019). Other methods have also utilized Q-FP arithmetic for inference and training whilst maintaining single-precision accuracy. Micikevicius et al. (2018) implemented 16-FP arithmetic training whilst storing a 32-FP master copy for the weight updates. Additionally, Wang et al. (2018) and Mellempudi et al. (2019) trained with 8-FP arithmetic whilst using a 16-FP copy of the weights and 16/32 bits for the accumulator.
Techniques for determining per-layer sensitivity to quantization have also been studied (Choi et al., 2016; Sakr & Shanbhag, 2018). Further, other studies have successfully determined the minimum fixed point precision requirements for a given DNN accuracy threshold (Sakr et al., 2017). The accuracy and stability of various numerical algorithms in finite precision arithmetic has been studied in (Higham, 2002; Wilkinson, 1994). This has led to techniques for tracking information lost from finite precision arithmetic using random perturbation, such as Monte Carlo Arithmetic (Parker et al., 2000; Frechtling & Leong, 2015). Monte Carlo methods have also been used in Bayesian Neural Networks (Buchholz et al., 2018; Blier & Ollivier, 2018). In particular, Achterhold et al. (2018) introduced a quantizing prior to learn weights which are either close to a quantized representation or have high variance. Louizos et al. (2017) used hierarchical priors to prune nodes and posterior uncertainties to determine the optimal fixed point precision. Blundell et al. (2015) use a Monte Carlo approach to learn a probability distribution on the weights of a neural network. To the best of our knowledge, our work is the first to present a technique for directly determining the sensitivity of DNNs to floating point rounding and to explicitly compute precision bounds of a trained network. These ideas can be very usefully applied to extending the limits of low-precision representations in deep learning applications. 3 BACKGROUND In this section, we describe the background theory upon which our technique for determining the sensitivity of DNNs to floating point rounding is based. 3.1 FLOATING POINT ARITHMETIC The IEEE-754 binary floating point format (IEEE, 1985) represents most real numbers x by a subset in normal form as: x̂ = (−1)^{s_x}(1 + m_x)2^{e_x} (1) where s_x ∈ {0, 1} is the sign bit, e_x is an integer representing the exponent of x̂ and m_x is the mantissa of x̂. Such number formats can be described as an (s_x, e_x, m_x) tuple. In binary form the representation is (b^s, b^e_1, b^e_2, ..., b^e_{B_ex}, b^m_1, b^m_2, ..., b^m_{B_mx}) ∈ {0, 1}^B, with B_ex and B_mx being the number of exponent and mantissa bits, respectively. The infinite set of real numbers ℝ is represented in a computer with B = 1 + B_ex + B_mx bits, and we define the finite set of real numbers representable in floating point format as exact values, F ⊂ ℝ. Real numbers which aren't representable are rounded to their nearest exact value. We call this set of numbers inexact values, I, where I ∪ F = ℝ. The approximation x̂ = F(x) = x(1 + δ), given x ∈ I, introduces rounding error into the computation. The value δ = |(x − x̂)/x| represents the relative error, which is a function of the machine hardware precision p as δ ≤ ε, where ε = 2^{−p} (IEEE, 1985; Goldberg, 1991; Higham, 2002). In general, inexactness can be caused by finite representations or by errors propagating from earlier parts of the computation. Often the primary cause of error in floating point arithmetic is catastrophic cancellation, which causes numerical inaccuracy. Catastrophic cancellation occurs when, for example, two near-equal FP numbers sharing k significant digits are subtracted from one another, as shown in (2) (Higham, 2002): 0.f_1 f_2 ... f_k f_{k+1} ... f_t − 0.f_1 f_2 ... f_k g_{k+1} ... g_t = 0.0 0 ... 0 h_{k+1} ... h_t (2) 0.f_1 f_2 ... f_k f_{k+1} ... f_t r_{t+1} ... r_p − 0.f_1 f_2 ... f_k g_{k+1} ... g_t r̂_{t+1} ... r̂_p = 0.0 0 ... 0 h_{k+1} ... h_t i_{t+1} ... i_p (3)
In normalized form, the leading zeros are removed by shifting the result to the left and adjusting the exponent accordingly. The result is 0.h_{k+1}...h_t i_1...i_k, which has only (t − k) accurate digits and digits i which are unknown. Additionally, the remaining accurate digits h are most likely affected by rounding error in previous computations. This can significantly magnify errors, especially in computing large computational graphs such as those of state-of-the-art DNNs. If either operand in (2) is inexact, then the digits h are no more significant than any other sequence of digits. Yet, FP arithmetic has no mechanism of recording this loss of significance. By padding both our operands with random digits r and r̂ in (3), the resulting digits i are randomized. If k digits are lost in the result, then k random digits will be in the normalized result, and when computed over many random trials, the results will disagree on the trailing k digits. In this case, we are able to detect catastrophic cancellation because the randomization over many trials provides a statistical simulation of round-off errors. We can use techniques from numerical analysis such as Monte Carlo methods to appropriately insert precision-dependent randomization in this way. 3.2 MONTE CARLO ARITHMETIC Monte Carlo methods can be used to analyze rounding by representing inexact values as random variables (Parker et al., 2000; Frechtling & Leong, 2015). The real value x, as represented in (1), can be modelled to t digits using: inexact(x̂, t, δ) = x̂ + 2^{e_x − t}δ = (−1)^{s_x}(1 + m_x + 2^{−t}δ)2^{e_x} (4) where δ ∼ U(−1/2, 1/2) is a uniformly distributed random variable and t is a positive integer representing the virtual precision of concern. For the same input x̂ in (4), we can run many Monte Carlo trials which will yield different values on each trial, where 0 < t < p so that the MCA can be run accurately on a computer with machine precision p. The ability to vary t is useful because it allows us to then evaluate the hardware precision requirements of a given system or computational graph for a given DNN. MCA is a method to model the effect of rounding on a computational graph by randomizing all arithmetic operations. The randomization is applied both in generating inexact operands and also in rounding. In each operation using MCA, ideally both catastrophic cancellation and rounding error can be detected. An operation using MCA is defined as: x ◦ y = round(inexact(inexact(x) ◦ inexact(y))) (5) where ◦ ∈ {+, −, ×, ÷}. By applying the inexact function to both operands we make it possible to detect catastrophic cancellation. Furthermore, applying the inexact function to the operation output and then rounding this value implements random rounding and hence is used to detect rounding error (Parker & Langley, 1997). Hence, for the same input into the system, each trial will yield different operands and outputs. After modifying the inexact and rounding operations as described, we use random sampling to simulate Monte Carlo trials. For each trial, we collect data on the resulting output of the system and compute summary statistics to quantify its behaviour (Parker et al., 2000). With a sufficiently large number of Monte Carlo trials and virtual precision t, the expected value of the output from these trials will equal the value from using real arithmetic. As explained in the next section, we can determine the total number of digits lost to rounding error and the minimum precision required to avoid a total loss of significance.
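As a minimal scalar sketch of Equations (4) and (5) (the helper names are ours; the final rounding to machine precision happens implicitly in float64 arithmetic here, rather than as an explicit round step):

```python
import math
import operator
import random

def inexact(x, t, rng=random.Random(0)):
    """Eq. (4): perturb x at virtual precision t by 2^(e_x - t) * delta,
    delta ~ U(-1/2, 1/2). math.frexp gives x = f * 2^E with 0.5 <= |f| < 1,
    so the normal-form exponent is e_x = E - 1."""
    if x == 0.0:
        return x
    _, E = math.frexp(x)
    return x + math.ldexp(rng.uniform(-0.5, 0.5), E - 1 - t)

def mca_op(x, y, op, t):
    """Eq. (5): randomize both operands and the result of the operation."""
    return inexact(op(inexact(x, t), inexact(y, t)), t)

# Simulate the cancellation-prone subtraction of two nearly equal values:
print(mca_op(1.2345678, 1.2345600, operator.sub, t=10))  # varies across trials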
3.3 ANALYSIS The relative error is bounded by δ ≤ 2^{−p} from the design of IEEE FP arithmetic (Wilkinson, 1994; Goldberg, 1991). With this inequality, we can determine the expected number of significant binary digits available from a p-digit FP system as p ≤ −log₂(δ). These definitions can be adapted for MCA by replacing the precision of the FP system, p, with the virtual precision, t, of an MCA operation. Thus, the relative error of an MCA operation, for virtual precision t, is δ ≤ 2^{−t} and the expected number of significant binary digits in a t-digit MCA operation is at most t. Using this definition and the proof provided in (Parker & Langley, 1997), the total number of significant binary digits in a set of M trials is s′ = log₂(µ/σ), where µ is the mean and σ the standard deviation. The output of the system should be some scalar value so that we can perform such analysis. For experimentation, M trials are run for all of t ∈ {1, 2, 3, ..., t_max}. The total number of base-2 significant digits lost in a set of M trials is K_t, given in (6): K_t = t − s′ = t − log₂(µ/σ) = log₂ Θ + t (6) where Θ = σ/µ, for µ ≠ 0, is the relative standard deviation (RSD) of the MCA results. The virtual precision t controls the perturbation strength applied by the inexact function. For a given K_t, as we reduce t, the RSD should increase according to Equation (6). At some point, an unexpected loss of significance (Frechtling & Leong, 2015) is encountered due to the nonlinear effects of quantization. The value at which this occurs is defined as t_min. The number of significant digits lost for the system being analyzed is then computed by averaging all K_t for which t > t_min, as shown in (7): K = (1/(t_max − t_min)) Σ_{t=t_min}^{t_max} K_t when t_max > 1, and K = K_t when t_max = 1. (7) For DNN inference, we propose to use K as a sensitivity measure for the network to FP rounding. The method for implementing this is discussed in the next section. 4 MONTE CARLO DEEP NEURAL NETWORK ARITHMETIC We now describe MCDA, a methodology for applying MCA techniques to DNN computation, allowing us to understand the sensitivity of a given network and its weight representation to FP rounding. 4.1 NETWORK MODEL We consider a generalized non-linear L-layer neural network with an output vector y_L, input data vector x and learnable weight parameter tensor w = ⋃ w_l (l = 1, . . . , L), whereby y_L = f(x; w). To compute y_L, several layers are applied, each consisting of a general matrix multiplication (GEMM) operation (such as a convolutional or fully-connected layer) between the layer input x_l and weight parameters w_l, followed by a non-linear activation function h, producing the layer output y_l, i.e. y_l = h(x_l ⊗ w_l). The output of a given layer becomes the input to the subsequent layer, i.e. x_{l+1} = y_l, with x_1 = x. A loss function is the objective minimized to update w via an optimizer such as stochastic gradient descent. For a given set of input data X, the total network loss during inference is calculated by applying a loss function loss(f(x; w), ŷ(x)), where ŷ(x) is the target ground truth output for x. The total loss for X is then a scalar output, such that: L(X; w) = (1/|X|) Σ_{x∈X} loss(f(x; w), ŷ(x)) (8) Updates of w are usually done in small batches over subsets of X. Naively applying MCA to each operation (fine-grained MCA) as described in Equation (5) poses significant computational difficulties for DNN models.
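Before detailing those difficulties, here is a minimal sketch of the significance-loss statistics of Equations (6) and (7); the container format and helper name are our assumptions, not the paper's code:

```python
import numpy as np

def significance_loss(loss_samples_by_t, t_min):
    """K_t = log2(Theta_t) + t with Theta_t = sigma/mu (Eq. 6), then K averages
    K_t over t >= t_min (Eq. 7). `loss_samples_by_t` maps each virtual
    precision t to the M loss values from the Monte Carlo trials."""
    K_t = {}
    for t, vals in loss_samples_by_t.items():
        mu, sigma = np.mean(vals), np.std(vals)
        K_t[t] = np.log2(sigma / mu) + t       # assumes mu > 0 (loss is positive)
    usable = [t for t in K_t if t >= t_min]    # discard outliers below t_min
    K = float(np.mean([K_t[t] for t in usable]))
    return K, K_t
```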
We observe two primary issues with employing fine-grained MCA on a DNN computational graph: • Firstly, the number of required trials for Monte Carlo experiments to generate robust results can typically be in the hundreds or thousands. As DNN inference of state-of-the-art networks typically consists of billions of operations, the computational requirements of applying MCA after each operation will be very large, making the technique impractical. • Secondly, using the accuracy as the system output for MCA experiments is problematic because it is a discrete value. For high values of t, Monte Carlo results across different t then become indistinguishable and the standard deviation for a given t is potentially zero. 4.2 MONTE CARLO NETWORK INFERENCE To reduce the computational cost of Monte Carlo experiments, we employ MCDA, which is a coarse-grained approach to MCA for GEMM operations. Conveniently, these can be naturally implemented in modern machine learning frameworks such as PyTorch. Furthermore, to ensure the system output is a continuous value, the loss function output is used rather than the accuracy. In this case, small perturbations in layer operands are more likely to produce observable changes in the output. MCDA applies a vector version of (5) to the DNN inference computational problem in (8). Our operands in this case are vectors and ◦ represents a neural network layer operation. For example, the output from performing a GEMM operation can thus be represented by: y_l = round(inexact(inexact(x_l) ⊗ inexact(w_l))) (9) Since the inexact function is applied to the inputs and outputs of a GEMM operation, an optimized implementation can be used. This is in contrast to full MCA, which requires the application of (5) to every individual scalar operation. The vector form in (9) is applied to each edge of the neural network computational graph where multiply (division) and/or add (subtract) operations are performed. Hence, it is not applied to operations such as MaxPool and ReLU. As an example, in Figure 2 we show where the inexact function is applied for a residual block with folded batch normalization, which is a repeating sequence of layers found in ResNet models (He et al., 2015). At the final output of the network, the loss is computed with (8) and the inexact function is applied to the outputs y and also to the loss output scalar value L. From analyzing the behavior of the loss, we infer the sensitivity of the accuracy of the system to FP rounding. By using MCDA for the GEMM operations, we will not be able to detect all instances of catastrophic cancellation. However, we significantly reduce execution time over fine-grained MCA and show in the next section that we can still retrieve valuable information about our system. In fact, for one trial with one batch of 32 images on ImageNet running on an Nvidia Titan Xp GPU, the speed-up of regular inference without MCDA is only 1.05× (for a single Monte Carlo trial). We also note that fine-grained MCA could be applied with a simple modification and would be possible given appropriate customized hardware support for parallel Monte Carlo computations (Yeung et al., 2011). 5 EXPERIMENTAL RESULTS In this section, we present experimental results for applying MCDA to exemplary convolutional neural networks. We use the CIFAR-10 and ImageNet image classification datasets to compare MobileNet-v2 (Sandler et al., 2018), EfficientNet (Tan & Le, 2019), AlexNet, ResNet (He et al., 2015), SqueezeNet (Iandola et al., 2016) and MnasNet (Tan et al., 2018).
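The coarse-grained operation of Equation (9) is the basic building block of the experiments that follow. A minimal PyTorch-style sketch of it is shown below; the use of torch.frexp/torch.ldexp and the decision to leave exact zeros unperturbed are our illustrative choices, not specified by the paper:

```python
import torch

def inexact(v, t):
    """Vectorized Eq. (4): v + 2^(e - t) * delta elementwise.
    torch.frexp gives v = f * 2^E with |f| in [0.5, 1), so e = E - 1."""
    _, E = torch.frexp(v)
    delta = torch.rand_like(v) - 0.5
    perturbed = v + torch.ldexp(delta, E - 1 - t)
    return torch.where(v == 0, v, perturbed)   # keep exact zeros exact

def mcda_linear(x, W, t):
    """Eq. (9) for a linear layer: inexact inputs and weights, GEMM,
    then inexact output (the random-rounding step)."""
    return inexact(inexact(x, t) @ inexact(W, t), t)
```

Running mcda_linear M times at each virtual precision t, feeding the final loss through the same inexact step, yields the trial populations from which Θ_t and K are computed.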
For CIFAR-10, we use a batch size of 128, whilst we use a batch size of 32 for ImageNet experiments. Cross-entropy is used as the loss function for both datasets. For a given network, dataset, weight representation and t, we apply MCDA with M = 1000 trials. The resulting loss from each trial is computed with the same single batch of images, and hence additional data is not required. We compute Θ_t from our results for all t ∈ {1, 2, 3, ..., 16}. Following this, we run linear regression analysis using the MCALIB (Frechtling & Leong, 2015) R library (https://github.com/mfrechtling/mcalib) on our Θ_t values, to determine t_min and K. t_min is defined as the lowest t where the difference between the regression line and the equivalent Θ_t is less than half a binary digit, i.e. log₁₀(2^{0.5}). Further detail on the calculation of K and t_min from MCALIB can be found in Appendix A.1. We report Q-FP validation accuracy using the quantization function from (Wang et al., 2018) with stochastic rounding, as implemented in QPyTorch (https://github.com/Tiiiger/QPyTorch) (see Appendix A.2). 5.1 DISTINGUISHING WEIGHT PARAMETER REPRESENTATIONS As discussed in Section 3.1, the inexactness in FP arithmetic largely depends on the numerical values of the operands. Two instances of the same network and dataset, with the same validation accuracy but vastly different weight representations, will likely produce differing sensitivities to FP rounding. We first train 8 instances of EfficientNet-b0 and MobileNet-v2 on CIFAR-10 from scratch with random initialization from (Glorot & Bengio, 2010), all achieving within 1% validation accuracy of one another. Using MCDA, we calculate the K values for each model (see Appendix A.3). We then test their percentage validation accuracy decrease from using post-training quantization (i.e. no finetuning) with varying Q-FP precisions. In Figure 3, we see that the models with higher K values typically experience a larger drop in Q-FP accuracy, indicating they are more sensitive to floating point rounding error. Notably, the model with the lowest K for 7-bit MobileNet-v2 experiences a lower percentage validation accuracy drop than three of the 8-bit models. In this case, MCDA model selection enables the saving of a bit of precision while achieving a smaller accuracy decrease than some of the trained 8-FP models. 5.2 COMPARISON TO PREVIOUS WORK One practical use case for the insights gained by MCDA is model selection for quantization. Typically, when quantizing a given model trained on a given dataset, the model with the highest validation accuracy is chosen and the sensitivity to quantization is assumed to be the same across models. As discussed, for Q-FP representations this is not necessarily the case. We can use K from MCDA to predict which models will be more robust to quantization. To demonstrate this, in Table 1 we compare post-training quantization results for model selection based on K from MCDA against a baseline model chosen based on the highest single-precision validation accuracy. Evidently, even though the single-precision accuracy is initially as much as 0.9% higher, after quantizing the network to 8-5 bits, the accuracy of the network chosen by smallest K is always significantly higher. 5.3 NETWORK COMPARISON Modern DNNs consist of convolutional blocks with highly varying computational graphs (Wu et al., 2018a; Howard et al., 2017). Using MCDA we can also compute and compare their sensitivities to floating point rounding error to determine which networks will be robust to Q-FP representations.
In Figure 4 we show the Θ_t of pre-trained models, trained on the ImageNet dataset and taken from PyTorch model repositories (https://github.com/pytorch/vision/tree/master/torchvision, https://github.com/rwightman/pytorch-image-models), for differing values of t, and run linear regression analysis over our data points. From here we can then assign a K value to each network and compare their loss of significance. At each t, the distance from the regression lines to the ideal line represents the value of K_t, as described in (6). From the MCDA results, AlexNet is the least sensitive to rounding and ResNet-50 is the most, with various models in between these two. Additionally, we compare EfficientNet at two different model scales, and evidently the larger model has a much larger sensitivity. We then also compare the validation accuracy percentage decrease of all models at 10, 9 and 8-FP post-training in Figure 5. At 8-FP, besides MnasNet, which experiences a large accuracy drop, K is able to predict validation accuracy degradation. Thus, MCDA provides very valuable information about Q-FP model design. 6 CONCLUSION We present a novel, highly sensitive technique to quantify rounding error in DNNs. This is the first method to successfully compare the sensitivity of networks to floating point rounding error. Ultimately, this technique provides a tool for enabling the design of networks which perform better when quantized. We do this by applying concepts from Monte Carlo Arithmetic theory to DNN computation. Furthermore, we show that by calculating the loss of significance metric K from MCDA on the CIFAR-10 and ImageNet datasets, we can compare network sensitivities to floating point rounding error and gain valuable insights to potentially design better neural networks. This is an important contribution due to the increasing interest in low-precision floating point arithmetic for efficient DNN hardware systems. The theoretical and practical contributions of this paper will likely translate well to analyzing floating point rounding in backpropagation in future work. A APPENDIX A.1 EXPERIMENTAL SETUP In all our experiments, for a given network, dataset, weight representation and t, we run 1000 Monte Carlo trials. The network loss output from each trial is computed with the same single batch of images from the training dataset. We then independently compute the Θ_t of the network loss for t ∈ {1, 2, 3, ..., t_max} where t_max = 16. Following this, we then run linear regression and calculate K and t_min using MCALIB. To compute the linear regression, MCALIB uses a log-transformed variable, with log(Θ) as the dependent variable and t as the explanatory variable (Frechtling & Leong, 2015): log₁₀(Θ) = log₁₀(2^{K−t}) (10) = −log₁₀(2)·t + log₁₀(2)·K (11) = mt + c (12) where m = −log₁₀(2) = −0.30103 is the slope and c is the intercept, such that K = log₂(10^c). Algorithm 1 Summary of Linear Regression Analysis for MCDA. Initialize: Pre-train a single-precision DNN model. Set t_max. Set the number of trials M. Inputs: Batch of inputs & targets (X, Ŷ), loss function loss(f(x; w), ŷ(x)), current weights w. Outputs: t_min and K. Monte Carlo Trials: for t = 1 to t_max do: for trial = 1 to M do: L(X; w) = ForwardPath(X, Ŷ, w, t) using (9); end for; compute µ and σ of all trials; compute Θ_t using (6); end for. Calculate K and t_min: for t = 1 to t_max do: compute P_t using (10)-(14); compute P_t − Θ_t; if P_t − Θ_t < log₁₀(2^{0.5}) then set t_min = t and break, else continue; end for. Compute K using (6) and (7).
Given these inputs, the intercept c is calculated by minimizing the following objective function using Brent's method (Brent, 1973) for single-variable optimization: f(c) = Σ_{i=1}^{t_max} γ^{t_max−i} ρ_H(e_i) (13) where e_i = Θ_i − (mt_i + c) is the residual error, c ∈ [(Θ_{t_max} − m·t_max) ± 2m] is the initial search space for the intercept, γ = 0.75 and ρ_H(e) is the Huber loss function (Huber, 1964): ρ_H(e) = (1/2)e² for |e| ≤ k, and ρ_H(e) = k|e| − (1/2)k² for |e| > k, (14) where k = 1.345σ_e and σ_e is the standard deviation of the residual error set e. After determining the linear regression model, P_t = mt + c, we determine whether each Θ_t is an outlier. If a value of Θ_t differs from the equivalent P_t by more than half a binary digit, then it is classed as an outlier. t_min is then defined as the lowest t where P_t − Θ_t < log₁₀(2^{0.5}). To compute K for a given network, we then average the values of K_t for which t > t_min. This removes the outliers from the computation of K. We have summarized how the experiments were simulated in Algorithm 1. A.2 QUANTIZED FLOATING POINT WITH STOCHASTIC ROUNDING To quantize our pre-trained networks to Q-FP representations, we used stochastic rounding as described in (Wang et al., 2018) and implemented in QPyTorch (https://github.com/Tiiiger/QPyTorch). Two common forms of rounding for FP arithmetic are round-to-nearest and stochastic rounding. The former discards information in the least significant bit (LSB) which is rounded off. This information loss can be significant, especially when quantizing to a small number of bits. Stochastic rounding provides a method to capture this information loss from rounding off the LSB. Given x = (−1)^{s_x}(1 + m_x)2^{e_x} as described in (1), assume that m_x is in fixed precision with z′ bits and needs to be rounded to z bits. Then stochastic rounding is as follows: Round(x) = (−1)^{s_x}(1 + ⌊m_x⌋ + ε)2^{e_x} with probability (m_x − ⌊m_x⌋)/ε, and (−1)^{s_x}(1 + ⌊m_x⌋)2^{e_x} with probability 1 − (m_x − ⌊m_x⌋)/ε, (15) where ⌊m_x⌋ is the truncation of the z′ − z LSBs of m_x and ε = 2^{−z}. For each Q-FP accuracy reported in our experiments, we tested all possible combinations of the number of bits allowed for the exponent and mantissa which satisfied the desired precision. We then chose the combination which produced the highest accuracy. A.3 CIFAR-10 REGRESSION ANALYSIS In Figure 6, we display the linear regression analysis using MCALIB for the 8 MobileNet-v2 and EfficientNet-b0 models trained on CIFAR-10. As shown, Monte Carlo trials were run for each of these models for t ∈ {1, 2, 3, ..., 16}. The corresponding t_min and K values were then calculated using the methods discussed in Appendix A.1. Thus, in Figure 3, it is these K values which are plotted against 8- and 7-bit Q-FP validation accuracy. Also, the models with the lowest K values are chosen for MCDA model selection in Table 1.
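Returning to Appendix A.2, a minimal sketch of the stochastic rounding of Equation (15) acting on a mantissa (an illustrative helper of our own, not the QPyTorch implementation):

```python
import numpy as np

def stochastic_round_mantissa(m, z, rng=np.random.default_rng(0)):
    """Eq. (15) on a mantissa m in [0, 1): keep z fractional bits, rounding up
    with probability (m - trunc(m)) / eps, where eps = 2^-z."""
    scaled = m * 2.0 ** z
    low = np.floor(scaled)                 # truncation of the discarded LSBs
    p_up = scaled - low                    # equals (m - trunc(m)) / eps
    return (low + (rng.random() < p_up)) * 2.0 ** -z

# The scheme is unbiased: E[stochastic_round_mantissa(m, z)] = m.
```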
1. What is the focus of the paper regarding neural network sensitivity to floating-point rounding errors?
2. What are the strengths of the proposed approach, particularly in its application in image recognition?
3. What are the weaknesses of the paper, especially regarding its experimental scope and potential applications?
4. How does the reviewer suggest improving the paper's impact, such as optimizing the loss of significance metric K during training?
5. Are there any concerns or questions regarding the methodology, such as addressing the second bullet point in Section 4.1 or making the metric task-agnostic or input-distribution-agnostic?
Review
Review The authors propose a scalable method based on Monte Carlo arithmetic for quantifying the sensitivity of trained neural networks to floating point rounding errors. They demonstrate that the loss of significance metric K estimated from the process can be used for selecting networks that are more robust to quantization, and compare popular architectures (AlexNet, ResNet, etc.) for their varying sensitivities.

Strengths:
- The paper tackles an important problem of analyzing the sensitivity of networks to quantization and offers a well-correlated metric that can be computed without actually running models in quantized mode
- Experiments cover a wide range of architectures in image recognition

Weaknesses:
- The proposed method in Section 4.2 appears to be a straightforward modification of MCA for NNs
- Experiments only demonstrate model selection and evaluating trained networks. Can this metric be used in optimization? For example, can you optimize for lowering K (say, with fixed t) during training, so you can find a well-performing weight set that is also robust to quantization? 1000 random samples interleaved in training may be slow, but perhaps you can use a coarse approximation. This could significantly improve the impact of the paper. Some Bayesian NN literature may be relevant (dropout, SGLD, etc.).

Other Comments:
- How is the second bullet point in Section 4.1 addressed in the proposed method?
- Can you make this metric task-agnostic or input-distribution-agnostic (e.g. just based on variance in predictions over some input datasets)? (e.g. you may pick a different loss function or different test distribution to evaluate afterwards)
- Does different t give different K? If so, what's the K reported? (Are those the points in Figure 3?)
ICLR
Title Monte Carlo Deep Neural Network Arithmetic

Abstract Quantization is a crucial technique for achieving low-power, low-latency and high-throughput hardware implementations of Deep Neural Networks. Quantized floating point representations have received recent interest due to their hardware efficiency benefits and ability to represent a higher dynamic range than fixed point representations, leading to improvements in accuracy. We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCDA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss. The method makes no assumptions regarding the underlying parameter distributions. We evaluate our method on pre-trained image classification models on the CIFAR-10 and ImageNet datasets. For the same network topology and dataset, we demonstrate the ability to gain the equivalent of bits of precision by simply choosing weight parameter sets which demonstrate a lower loss of significance in the Monte Carlo trials. Additionally, we can apply MCDA to compare the sensitivity of different network topologies to quantization effects. (Footnote 1: Source code will be available if the paper is accepted.)

1 INTRODUCTION

Deep Neural Networks have achieved state-of-the-art performance in many machine learning tasks such as speech recognition (Collobert et al., 2011), machine translation (Bahdanau et al., 2014), object detection (Ren et al., 2015) and image classification (Krizhevsky et al., 2012). However, excellent performance comes at the cost of significant computational and memory complexity, typically requiring teraops of computation during inference and gigabytes of storage. To overcome these complexities, compression methods have been utilized, aiming to exploit the inherent resilience of DNNs to noise. These engender representations which maintain algorithm performance but significantly improve the latency, throughput and power consumption of hardware implementations. In particular, exploiting reduced numerical precision for data representations through quantization has been particularly promising, since on customizable hardware, efficiency scales quadratically with each bit of precision. Quantization of fixed-point arithmetic (Q-FX) for DNN inference has been extensively studied, and more recently there has been increasing interest in quantized floating point (Q-FP) arithmetic for both DNN inference and training (Wang et al., 2018). Q-FP has the advantage of a higher dynamic range compared to equivalent Q-FX representations and reduced hardware cost over single-precision floating point (FP). This has influenced application-specific integrated circuits (ASICs) such as Google's tensor processing unit (TPU), which supports 16-bit floating point (16-FP), and soft processors such as Microsoft's Project Brainwave, which utilizes 8-FP. To illustrate these hardware benefits, we synthesized arithmetic logic units (ALUs) in different formats and at different precisions on an FPGA and present performance estimates in operations per second (OPs) and area estimates in look-up tables per operation (LUTs/Op) in Figure 1. As shown, 8-bit fixed point (8-FX) achieves improved performance and area over 8-FP. However, 7-FP is a significant improvement over 8 or 12-FX and 8 or 9-FP. These examples demonstrate substantial performance and area benefits from reducing FP precision by only 1 to 2 bits.
Thus, if we can design networks which not only achieve high accuracy but are also robust to quantization, higher-performing hardware solutions are possible. Since in Q-FP we are trying to represent the infinite set of real numbers using a finite number of bits, quantization and rounding artefacts will be introduced, with inaccuracies being cascaded along the computation graph (IEEE, 1985; Goldberg, 1991; Higham, 2002). This paper proposes Monte Carlo Deep Neural Network Arithmetic (MCDA), a novel way to apply Monte Carlo Arithmetic (MCA) (Parker et al., 2000) for determining the sensitivity of Deep Neural Networks to Q-FP representations. It allows hardware-software designers to quantify the impact of quantization, enabling more efficient systems to be discovered. We do this by exploiting Monte Carlo simulations under which rounding effects are randomized. This, in turn, allows one to infer the sensitivity of executing a computation graph to quantization effects. Our MCDA technique is highly sensitive, allowing very small differences in quantization behaviour to be detected. The technique makes no assumptions regarding data distributions and directly measures the effects of quantization on the problem under study. This allows us to provide insights into the precision requirements of any inference network for any given dataset. Additionally, we can use the technique to select weight parameters which are more robust to floating point rounding. The theoretical and practical contributions of this work can be summarized as follows:

• We introduce a novel and rigorous analysis technique, Monte Carlo Deep Neural Network Arithmetic (MCDA), which measures the sensitivity of Deep Neural Network inference computation to floating point rounding error.
• When applied to neural network inference, we show MCDA can determine the precision requirements of different networks, rank them, and detect small differences between different neural network topologies and weight sets.
• We demonstrate that while networks with the same topology but different weights may have the same loss and validation accuracy, their sensitivity to quantization can be vastly different. Using the CIFAR-10 and ImageNet datasets, we introduce a method to choose weights which are more robust to rounding error, resulting in a greatly improved accuracy-area tradeoff over state-of-the-art methods.

It is worth noting that although we consider convolutional neural networks for image classification, this method could be applied to any neural network architecture and application. Moreover, while the experiments in this paper are limited to inference, it may be possible to apply the same idea to analyze training algorithms.

2 RELATED WORK

Low-precision representations for deep learning have been extensively studied. Many training methods have been developed to design representations for fixed point inference (Jacob et al., 2018; Faraone et al., 2018; Zhou et al., 2016) and training (Wu et al., 2018b; Yang et al., 2019; Gupta et al., 2015; Sakr & Shanbhag, 2019). Other methods have also utilized Q-FP arithmetic for inference and training whilst maintaining single-precision accuracy. (Micikevicius et al., 2018) implemented 16-FP arithmetic training whilst storing a 32-FP master copy for the weight updates. Additionally, (Wang et al., 2018; Mellempudi et al., 2019) trained with 8-FP arithmetic whilst using a 16-FP copy of the weights and 16/32 bits for the accumulator.
Techniques for determining per-layer sensitivity to quantization have also been studied (Choi et al., 2016; Sakr & Shanbhag, 2018). Further, other studies have successfully determined the minimum fixed point precision requirements for a given DNN accuracy threshold (Sakr et al., 2017). The accuracy and stability of various numerical algorithms in finite precision arithmetic has been studied in (Higham, 2002; Wilkinson, 1994). This has led to techniques for tracking information lost from finite precision arithmetic using random perturbation, such as Monte Carlo Arithmetic (Parker et al., 2000; Frechtling & Leong, 2015). Monte Carlo methods have also been used in Bayesian Neural Networks (Buchholz et al., 2018; Blier & Ollivier, 2018). In particular, (Achterhold et al., 2018) introduced a quantizing prior to learn weights which are either close to a quantized representation or have high variance. (Louizos et al., 2017) used hierarchical priors to prune nodes and posterior uncertainties to determine the optimal fixed point precision. (Blundell et al., 2015) use a Monte Carlo approach to learn a probability distribution on the weights of a neural network. To the best of our knowledge, our work is the first to present a technique for directly determining the sensitivity of DNNs to floating point rounding and to explicitly compute precision bounds of a trained network. These ideas can be usefully applied to extend the limits of low-precision representations in deep learning applications.

3 BACKGROUND

In this section, we describe the background theory upon which our technique for determining the sensitivity of DNNs to floating point rounding is based.

3.1 FLOATING POINT ARITHMETIC

The IEEE-754 binary floating point format (IEEE, 1985) represents most real numbers x by a subset in normal form as:

x̂ = (−1)^sx (1 + mx) 2^ex (1)

where sx ∈ {0, 1} is the sign bit, ex is an integer representing the exponent of x̂ and mx is the mantissa of x̂. Such number formats can be described as an (sx, ex, mx) tuple. In binary form the representation is (b^s, b^e_1, b^e_2, ..., b^e_Bex, b^m_1, b^m_2, ..., b^m_Bmx) ∈ {0, 1}^B, with Bex and Bmx being the number of exponent and mantissa bits, respectively. The infinite set of real numbers R is represented in a computer with B = 1 + Bex + Bmx bits, and we define the finite set of real numbers representable in floating point format as exact values, F ⊂ R. Real numbers which aren't representable are rounded to their nearest exact value. We call this set of numbers inexact values, I, where I ∪ F = R. The approximation x̂ = F(x) = x(1 + δ), given x ∈ I, introduces rounding error into the computation. The value δ = |(x − x̂)/x| represents the relative error, which is a function of the machine hardware precision p, as δ ≤ ε, where ε = 2^(−p) (IEEE, 1985; Goldberg, 1991; Higham, 2002). In general, inexactness can be caused by finite representations or errors propagating from earlier parts of the computation. Often the primary cause of error in floating point arithmetic is catastrophic cancellation, which causes numerical inaccuracy. Catastrophic cancellation occurs when, for example, two near-equal FP numbers sharing k significant digits are subtracted from one another, as shown in (2) (Higham, 2002).

0.f_1 f_2 ... f_k f_(k+1) ... f_t − 0.f_1 f_2 ... f_k g_(k+1) ... g_t = 0.0 0 ... 0 h_(k+1) ... h_t (2)

0.f_1 f_2 ... f_k f_(k+1) ... f_t r_(t+1) ... r_p − 0.f_1 f_2 ... f_k g_(k+1) ... g_t r̂_(t+1) ... r̂_p = 0.0 0 ... 0 h_(k+1) ... h_t i_(t+1) ... i_p (3)
In normalized form, the leading zeros are removed by shifting the result to the left and adjusting the exponent accordingly. The result is 0.h_(k+1) ... h_t i_1 ... i_k, which has only (t − k) accurate digits and digits i which are unknown. Additionally, the remaining accurate digits h are most likely affected by rounding error in previous computations. This can significantly magnify errors, especially in computing large computational graphs such as those of state-of-the-art DNNs. If either operand in (2) is inexact, then the digits h are no more significant than any other sequence of digits. Yet, FP arithmetic has no mechanism for recording this loss of significance. By padding both our operands with random digits r and r̂ in (3), the resulting digits i are randomized. If k digits are lost in the result, then k random digits will be in the normalized result, and when computed over many random trials, the results will disagree on the trailing k digits. In this case, we are able to detect catastrophic cancellation because the randomization over many trials provides a statistical simulation of round-off errors. We can use techniques from numerical analysis such as Monte Carlo methods to appropriately insert precision-dependent randomization in this way.

3.2 MONTE CARLO ARITHMETIC

Monte Carlo methods can be used to analyze rounding by representing inexact values as random variables (Parker et al., 2000; Frechtling & Leong, 2015). The real value x, as represented in (1), can be modelled to t digits using:

inexact(x̂, t, δ) = x̂ + 2^(ex − t) δ = (−1)^sx (1 + mx + 2^(−t) δ) 2^ex (4)

where δ ∈ U(−1/2, 1/2) is a uniformly distributed random variable and t is a positive integer representing the virtual precision of concern. For the same input x̂ in (4), we can run many Monte Carlo trials which will yield different values on each trial, where 0 < t < p so that MCA can be run accurately on a computer with machine precision p. The ability to vary t is useful because it allows us to evaluate the hardware precision requirements of a given system or computational graph for a given DNN. MCA is a method to model the effect of rounding on a computational graph by randomizing all arithmetic operations. The randomization is applied both in generating inexact operands and in rounding. In each operation using MCA, ideally both catastrophic cancellation and rounding error can be detected. An operation using MCA is defined as:

x ◦ y = round(inexact(inexact(x) ◦ inexact(y))) (5)

where ◦ ∈ {+, −, ×, ÷}. By applying the inexact function to both operands we make it possible to detect catastrophic cancellation. Furthermore, applying the inexact function to the operation output and then rounding this value implements random rounding and hence is used to detect rounding error (Parker & Langley, 1997). Hence, for the same input into the system, each trial will yield different operands and output. After modifying the inexact and rounding operations as described, we use random sampling to simulate Monte Carlo trials. For each trial, we collect data on the resulting output of the system and compute summary statistics to quantify its behaviour (Parker et al., 2000). With a sufficiently large number of Monte Carlo trials and virtual precision t, the expected value of the output from these trials will equal the value from using real arithmetic. As explained in the next section, we can determine the total number of digits lost to rounding error and the minimum precision required to avoid a total loss of significance.
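As a concrete illustration of (4) and (5), here is a minimal scalar Python sketch of MCA (our own toy code, not the implementation used in the paper; the exponent e_x is recovered with math.frexp, and output rounding to a working format is left implicit, since Python floats are already double precision):

```python
import math
import operator
import random

def inexact(x: float, t: int) -> float:
    # inexact(x, t, delta) = x + 2**(e_x - t) * delta, delta ~ U(-1/2, 1/2), as in (4)
    if x == 0.0:
        return 0.0
    e_x = math.frexp(abs(x))[1] - 1   # exponent such that the significand is in [1, 2)
    return x + (2.0 ** (e_x - t)) * random.uniform(-0.5, 0.5)

def mca_op(x: float, y: float, op, t: int) -> float:
    # x o y = round(inexact(inexact(x) o inexact(y))), as in (5)
    return inexact(op(inexact(x, t), inexact(y, t)), t)

# Catastrophic cancellation: near-equal operands leave few significant digits,
# so repeated trials disagree in the trailing digits of the result.
trials = [mca_op(1.0, 1.0 - 2.0**-12, operator.sub, t=24) for _ in range(1000)]
```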
3.3 ANALYSIS

The relative error is bounded by δ ≤ 2^(−p) from the design of IEEE FP arithmetic (Wilkinson, 1994; Goldberg, 1991). With this inequality, we can determine the expected number of significant binary digits available from a p-digit FP system as p ≤ −log2(δ). These definitions can be adapted for MCA by replacing the precision of the FP system, p, by the virtual precision, t, of an MCA operation. Thus, the relative error of an MCA operation, for virtual precision t, is δ ≤ 2^(−t), and the expected number of significant binary digits in a t-digit MCA operation is at most t. Using this definition and the proof provided in (Parker & Langley, 1997), the total number of significant binary digits in a set of M trials is s′ = log2(µ/σ), where µ is the mean and σ the standard deviation. The output of the system should be some scalar value so that we can perform such analysis. For experimentation, M trials are run for all of t ∈ {1, 2, 3, ..., tmax}. The total number of base-2 significant digits lost in a set of M trials is Kt, in (6):

Kt = t − s′ = t − log2(µ/σ) = log2(Θ) + t (6)

where Θ = σ/µ, for µ ≠ 0, is the relative standard deviation (RSD) of the MCA results. The virtual precision t controls the perturbation strength applied by the inexact function. For a given Kt, as we reduce t, the RSD should increase according to equation 6. At some point, an unexpected loss of significance (Frechtling & Leong, 2015) is encountered due to the nonlinear effects of quantization. The value at which this occurs is defined as tmin. The number of significant digits lost for the system being analyzed is then computed by averaging all Kt whereby t > tmin, as shown in (7):

K = (1/(tmax − tmin)) Σ_{t=tmin}^{tmax} Kt   where tmax > 1
K = Kt                                       where tmax = 1 (7)

For DNN inference, we propose to use K as a sensitivity measure for the network to FP rounding. The method for implementing this is discussed in the next section.
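As a reference point, the bookkeeping in (6) and (7) is small enough to sketch directly; the following Python helpers (the names are ours, not MCALIB's) compute Kt from a set of trial outputs and average over t > tmin:

```python
import math
import statistics

def K_t(trial_outputs, t):
    # Theta = sigma / mu (the RSD) and K_t = log2(Theta) + t, as in (6)
    mu = statistics.mean(trial_outputs)
    sigma = statistics.stdev(trial_outputs)
    return math.log2(sigma / mu) + t

def average_K(K_by_t, t_min, t_max):
    # Average K_t over t > t_min, as in (7); K_by_t maps t -> K_t
    ts = [t for t in K_by_t if t_min < t <= t_max]
    return sum(K_by_t[t] for t in ts) / len(ts)
```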
4 MONTE CARLO DEEP NEURAL NETWORK ARITHMETIC

We now describe MCDA, a methodology for applying MCA techniques to DNN computation, allowing us to understand the sensitivity of a given network and its weight representation to FP rounding.

4.1 NETWORK MODEL

We consider a generalized non-linear L-layer neural network with an output vector yL, input data vector x and learnable weight parameter tensor w = ∪ wl (l = 1, ..., L), whereby yL = f(x;w). To compute yL, several layers are applied, each consisting of a general matrix multiplication (GEMM) operation (such as a convolutional or fully-connected layer) between the layer input xl and the weight parameters wl, followed by a non-linear activation function h, producing the layer output yl, i.e. yl = h(xl ⊗ wl). The output of a given layer becomes the input to the subsequent layer, i.e. xl+1 = yl, with x1 = x. A loss function is the objective minimized to update w via an optimizer such as stochastic gradient descent. For a given set of input data X, the total network loss during inference is calculated by applying a loss function loss(f(x;w), ŷ(x)), where ŷ(x) is the target ground truth output for x. The total loss for X is then a scalar output, such that:

L(X;w) = (1/|X|) Σ_{x∈X} loss(f(x;w), ŷ(x)) (8)

Updates of w are usually done in small batches over subsets of X. Naively applying MCA to each operation (fine-grained MCA) as described in equation (5) poses significant computational difficulties for DNN models. We observe two primary issues with employing fine-grained MCA in a DNN computational graph:

• Firstly, the number of required trials for Monte Carlo experiments to generate robust results can typically be in the hundreds or thousands. As DNN inference of state-of-the-art networks typically consists of billions of operations, the computational requirements of applying MCA after each operation will be very large, making the technique impractical.
• Secondly, using the accuracy as the system output for MCA experiments is problematic because it is a discrete value. For high values of t, Monte Carlo results across different t then become indistinguishable and the standard deviation for a given t is potentially zero.

4.2 MONTE CARLO NETWORK INFERENCE

To reduce the computational cost of Monte Carlo experiments, we employ MCDA, which is a coarse-grained approach to MCA for GEMM operations. Conveniently, these can be naturally implemented in modern machine learning frameworks such as PyTorch. Furthermore, to ensure the system output is a continuous value, the loss function output is used, rather than the accuracy. In this case, small perturbations in layer operands are more likely to produce observable changes in the output. MCDA applies a vector version of (5) to the DNN inference computational problem in (8). Our operands in this case are vectors and ◦ represents a neural network layer operation. For example, the output from performing a GEMM operation can thus be represented by:

yl = round(inexact(inexact(xl) ⊗ inexact(wl))) (9)

Since the inexact function is applied to the inputs and outputs of a GEMM operation, an optimized implementation can be used. This is in contrast to full MCA, which requires the application of (5) to every individual scalar operation. The vector form in (9) is applied to each edge of the neural network computational graph where multiply (division) and/or add (subtract) operations are performed. Hence, it is not applied to operations such as MaxPool and ReLU. As an example, in Figure 2 we show where the inexact function is applied for a residual block with folded batch normalization, which is a repeating sequence of layers found in ResNet models (He et al., 2015). At the final output of the network, the loss is computed with (8) and the inexact function is applied to the outputs y and also to the loss output scalar value L. From analyzing the behavior of the loss, we infer the sensitivity of the accuracy of the system to FP rounding. By using MCDA for the GEMM operations, we will not be able to detect all instances of catastrophic cancellation. However, we significantly reduce execution time over fine-grained MCA and show in the next section that we can still retrieve valuable information about our system. In fact, for one trial with one batch of 32 images on ImageNet running on an Nvidia Titan Xp GPU, the speed-up of regular inference without MCDA is only 1.05× (for a single Monte Carlo trial). We also note that fine-grained MCA could be applied with a simple modification and would be possible given appropriate customized hardware support for parallel Monte Carlo computations (Yeung et al., 2011).
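To illustrate the coarse-grained scheme in (9), the sketch below wraps a single GEMM in PyTorch with a tensor version of the inexact function (our own illustrative code, not the paper's implementation; the per-element exponent is recovered with floor(log2|x|), and zeros are left unperturbed):

```python
import torch

def inexact_tensor(x: torch.Tensor, t: int) -> torch.Tensor:
    # Element-wise tensor version of (4): x + 2**(e_x - t) * delta
    delta = torch.rand_like(x) - 0.5
    e_x = torch.floor(torch.log2(x.abs().clamp_min(torch.finfo(x.dtype).tiny)))
    return torch.where(x == 0, x, x + torch.pow(2.0, e_x - t) * delta)

def mcda_gemm(x: torch.Tensor, w: torch.Tensor, t: int) -> torch.Tensor:
    # y_l = round(inexact(inexact(x_l) (x) inexact(w_l))), as in (9); the final
    # inexact call plays the role of random rounding of the whole layer output.
    y = inexact_tensor(x, t) @ inexact_tensor(w, t)
    return inexact_tensor(y, t)
```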
5 EXPERIMENTAL RESULTS

In this section, we present experimental results for applying MCDA to exemplary convolutional neural networks. We use the CIFAR-10 and ImageNet image classification datasets to compare MobileNet-v2 (Sandler et al., 2018), EfficientNet (Tan & Le, 2019), AlexNet, ResNet (He et al., 2015), SqueezeNet (Iandola et al., 2016) and MnasNet (Tan et al., 2018). For CIFAR-10, we use a batch size of 128, whilst we use a batch size of 32 for the ImageNet experiments. Cross-entropy is used as the loss function for both datasets. For a given network, dataset, weight representation and t, we apply MCDA with M = 1000 trials. The resulting loss from each trial is computed with the same single batch of images and hence additional data is not required. We compute Θt from our results for all t ∈ {1, 2, 3, ..., 16}. Following this, we run linear regression analysis using the MCALIB (footnote 2) (Frechtling & Leong, 2015) R library on our Θt values to determine tmin and K. tmin is defined as the lowest t where the difference between the regression line and the equivalent Θt is less than half a binary digit, i.e. log10(2^0.5). Further detail on the calculation of K and tmin from MCALIB can be found in Appendix A.1. We report Q-FP validation accuracy using the quantization function from (Wang et al., 2018) with stochastic rounding (footnote 3; see Appendix A.2).

2 https://github.com/mfrechtling/mcalib
3 https://github.com/Tiiiger/QPyTorch

5.1 DISTINGUISHING WEIGHT PARAMETER REPRESENTATIONS

As discussed in Section 3.1, the inexactness in FP arithmetic largely depends on the numerical values of operands. Two instances of the same network and dataset, with the same validation accuracy but vastly different weight representations, will likely produce differing sensitivities to FP rounding. We first train 8 instances of EfficientNet-b0 and MobileNet-v2 on CIFAR-10 from scratch with random initialization from (Glorot & Bengio, 2010), all achieving within 1% validation accuracy of one another. Using MCDA, we calculate the K values for each model (see Appendix A.3). We then test their percentage validation accuracy decrease from using post-training quantization (i.e. no finetuning) with varying Q-FP precisions. In Figure 3, we see that the models with higher K values typically experience a larger drop in Q-FP accuracy, indicating they are more sensitive to floating point rounding error. Notably, the model with the lowest K for 7-bit MobileNet-v2 experiences a lower percentage validation accuracy drop than three of the 8-bit models. In this case, MCDA model selection enables the saving of a bit of precision while achieving a smaller accuracy decrease than some of the trained 8-FP models.

5.2 COMPARISON TO PREVIOUS WORK

One practical use case for the insights gained by MCDA is model selection for quantization. Typically, when quantizing a given model trained on a given dataset, the model with the highest validation accuracy is chosen and the sensitivity to quantization is assumed to be the same across models. As discussed, for Q-FP representations this is not necessarily the case. We can use K from MCDA to predict which models will be more robust to quantization. To demonstrate this, in Table 1 we compare post-training quantization results for model selection based on K from MCDA against a baseline model chosen based on the highest single-precision validation accuracy. Evidently, even though the single-precision accuracy is initially as much as 0.9% higher, after quantizing the network to 8-5 bits, the accuracy of the network chosen by smallest K is always significantly higher.

5.3 NETWORK COMPARISON

Modern DNNs consist of convolutional blocks with highly varying computational graphs (Wu et al., 2018a; Howard et al., 2017). Using MCDA we can also compute and compare their sensitivities to floating point rounding error to determine which networks will be robust to Q-FP representations.
In Figure 4 we show the Θt of pre-trained models from PyTorch (footnotes 4 and 5), trained on the ImageNet dataset, for differing values of t, and run linear regression analysis over our data points. From here we can then assign a K value to each network and compare their loss of significance. At each t, the distance from the regression lines to the ideal line represents the values of Kt, as described in (6). From the MCDA results, AlexNet is the least sensitive to rounding and ResNet-50 is the most, with various models in between these two. Additionally, we compare EfficientNet at two different model scales and, evidently, the larger model has much larger sensitivity. We then also compare the validation accuracy percentage decrease of all models at 10, 9 and 8-FP post-training in Figure 5. At 8-FP, besides MnasNet which experiences a large accuracy drop, K is able to predict validation accuracy degradation. Thus, MCDA provides very valuable information about Q-FP model design.

6 CONCLUSION

We present a novel, highly sensitive technique to quantify rounding error in DNNs. This is the first method to successfully compare the sensitivity of networks to floating point rounding error. Ultimately, this technique provides a tool for enabling the design of networks which perform better when quantized. We do this by applying concepts from Monte Carlo Arithmetic theory to DNN computation. Furthermore, we show that by calculating the loss of significance metric K from MCDA, on the CIFAR-10 and ImageNet datasets, we can compare network sensitivities to floating point rounding error and gain valuable insights to potentially design better neural networks. This is an important contribution due to the increasing interest in low-precision floating point arithmetic for efficient DNN hardware systems. The theoretical and practical contributions of this paper will likely translate well to analyzing floating point rounding in backpropagation in future work.

4 https://github.com/pytorch/vision/tree/master/torchvision
5 https://github.com/rwightman/pytorch-image-models

A APPENDIX

A.1 EXPERIMENTAL SETUP

In all our experiments, for a given network, dataset, weight representation and t, we run 1000 Monte Carlo trials. The network loss output from each trial is computed with the same single batch of images from the training dataset. We then independently compute the Θt of the network loss for t ∈ {1, 2, 3, ..., tmax} where tmax = 16. Following this, we run linear regression and calculate K and tmin using MCALIB (footnote 6). To compute the linear regression, MCALIB uses a log-transformed variable, with log(Θ) as the dependent variable and t as the explanatory variable (Frechtling & Leong, 2015).

log10(Θ) = log10(2^(K−t)) (10)
         = −log10(2)·t + log10(2)·K (11)
         = mt + c (12)

where m = −log10(2) = −0.30103 is the slope and c is the intercept such that K = log2(10^c).

6 https://github.com/mfrechtling/mcalib

Algorithm 1 Summary of Linear Regression Analysis for MCDA
Initialize: Pre-train a single-precision DNN model. Set tmax. Set number of trials M.
Inputs: Batch of inputs & targets (X, Ŷ), loss function loss(f(x;w), ŷ(x)), current weights w
Outputs: tmin and K
Monte Carlo Trials:
for t = 1 to tmax do
  for trials = 1 to M do
    L(X;w) = ForwardPath(X, Ŷ, w, t) using (9)
  end for
  Compute µ and σ of all trials
  Compute Θt using (6)
end for
Calculate K and tmin:
for t = 1 to tmax do
  Compute Pt using (10)–(14)
  Compute Pt − Θt
  if Pt − Θt < log10(2^0.5) then
    tmin = t; break
  else
    continue
  end if
end for
Compute K using (6) and (7)
Given these inputs, the intercept c is calculated by minimizing the following objective function using Brent's method (Brent, 1973) for single-variable optimization:

f(c) = Σ_{i=1}^{tmax} γ^(tmax−i) · ρH(ei) (13)

where ei = Θi − (m·ti + c) is the residual error, c ∈ [(Θtmax − m·tmax) ± 2m] is the initial search space for the intercept, γ = 0.75, and ρH(e) is the Huber loss function (Huber, 1964):

ρH(e) = (1/2)e²         for |e| ≤ k
ρH(e) = k|e| − (1/2)k²  for |e| > k (14)

where k = 1.345·σe and σe is the standard deviation of the residual error set, e. After determining the linear regression model, Pt = mt + c, we determine whether each Θt is an outlier. If a value for Θt differs from the equivalent Pt by more than half a binary digit, then it is classed as an outlier. tmin is then defined as the lowest t where Pt − Θt < log10(2^0.5). To compute K for a given network, we then average the values of Kt whereby t > tmin. This removes the outliers from the computation of K. We have summarized how the experiments were simulated in Algorithm 1.

A.2 QUANTIZED FLOATING POINT WITH STOCHASTIC ROUNDING

To quantize our pre-trained networks to Q-FP representations, we used stochastic rounding as described in (Wang et al., 2018) and implemented in QPyTorch (footnote 7). Two common forms of rounding for FP arithmetic are round-to-nearest and stochastic rounding. The former discards information in the least significant bit (LSB) which is rounded off. This information loss can be significant, especially when quantizing to a small number of bits. Stochastic rounding provides a method to capture this information loss from rounding off the LSB. Given x = (−1)^sx (1 + mx) 2^ex as described in (1), assume that mx is in fixed precision with z′ bits which needs to be rounded to z bits. Then, stochastic rounding is as follows:

Round(x) = (−1)^sx (1 + ⌊mx⌋ + ε) 2^ex   with probability (mx − ⌊mx⌋)/ε
Round(x) = (−1)^sx (1 + ⌊mx⌋) 2^ex       with probability 1 − (mx − ⌊mx⌋)/ε (15)

where ⌊mx⌋ is the truncation of the z′ − z LSBs of mx, and ε = 2^(−z). For each Q-FP accuracy reported in our experiments, we tested all possible combinations of the number of bits allowed for the exponent and mantissa which satisfied the desired precision. We then chose the combination which produced the highest accuracy.

7 https://github.com/Tiiiger/QPyTorch

A.3 CIFAR-10 REGRESSION ANALYSIS

In Figure 6, we display the linear regression analysis using MCALIB for the 8 MobileNet-v2 and EfficientNet-b0 models trained on CIFAR-10. As shown, Monte Carlo trials were run for each of these models for t ∈ {1, 2, 3, ..., 16}. The corresponding tmin and K values were then calculated using the methods discussed in Appendix A.1. Thus, in Figure 3, it is these K values which are plotted against 8 and 7-bit Q-FP validation accuracy. Also, the models with the lowest K values are chosen for MCDA model selection in Table 1.
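To illustrate the rounding rule (15) of Appendix A.2 concretely, here is a toy Python sketch operating on the mantissa fraction alone (our own illustrative code, not QPyTorch's implementation):

```python
import random

def stochastic_round_mantissa(m_x: float, z: int) -> float:
    # Round a mantissa fraction m_x in [0, 1) to z bits, following (15).
    eps = 2.0 ** -z
    floor_m = int(m_x / eps) * eps       # truncate the low-order bits
    p_up = (m_x - floor_m) / eps         # probability of rounding up
    return floor_m + eps if random.random() < p_up else floor_m

# The rule is unbiased: E[Round(m_x)] = m_x, so the information carried by
# the discarded LSBs is preserved on average across many roundings.
```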
1. What is the main contribution of the paper regarding the deployment of deep neural networks?
2. What are the strengths of the paper, particularly in its motivation and experimental design?
3. Do you have any concerns or questions about the paper's approach to quantization and sensitivity analysis?
4. How does the reviewer assess the clarity and effectiveness of the paper's writing and figures?
5. Are there any relevant works in the field of Monte Carlo methods for Bayesian neural networks that could complement or enhance the paper's findings?
Review
Review The premise of this paper is that quantization plays an important role in the deployment of deep neural networks, i.e. in the inference stage. However, errors due to quantization affect different neural architectures differently. It would be useful if we could predict ahead of time which models are more amenable to quantization. I think this is a very interesting premise and the paper is very well motivated. The paper is also very clear and well written, making the claims precise and backing these up with experiments.

At the heart of the paper is the replacement of floating point numbers with inexact values, which are treated as random variables and defined precisely in equation 4. This definition enables the authors to apply Monte Carlo methods to obtain network predictions as shown in equation (10) and figure 2, and subsequently carry out sensitivity analysis. The experiments show that a measure of sensitivity (K) is indeed a good augmentation to cross-validation for model selection, for the purpose of trading off accuracy and resource consumption when launching deep neural networks with floating point rounding errors.

One question I have for the authors is the following: there has been a large body of literature on Monte Carlo methods for Bayesian neural networks. Could those works have something to say in addressing some of the challenges posed in Section 4.1?
ICLR
Title Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition

Abstract We propose the deep repulsive clustering (DRC) algorithm of ordered data for effective order learning. First, we develop the order-identity decomposition (ORID) network to divide the information of an object instance into an order-related feature and an identity feature. Then, we group object instances into clusters according to their identity features using a repulsive term. Moreover, we estimate the rank of a test instance by comparing it with references within the same cluster. Experimental results on facial age estimation, aesthetic score regression, and historical color image classification show that the proposed algorithm can cluster ordered data effectively and also yield excellent rank estimation performance.

1 INTRODUCTION

There are various types of 'ordered' data. For instance, in facial age estimation (Ricanek & Tesafaye, 2006), face photos are ranked according to the ages. Also, in a video-sharing platform, videos can be sorted according to the numbers of views or likes. In these ordered data, classes, representing ranks or preferences, form an ordered set (Schröder, 2003). Attempts have been made to estimate the classes of objects, including multi-class classification (Pan et al., 2018), ordinal regression (Frank & Hall, 2001), and metric regression (Fu & Huang, 2008). Recently, a new approach, called order learning (Lim et al., 2020), was proposed to solve this problem. Order learning is based on the idea that it is easier to predict the ordering relationship between objects than to estimate their absolute classes (or ranks); telling the older one between two people is easier than estimating their exact ages. Hence, in order learning, the pairwise ordering relationship is learned from training data. Then, the rank of a test object is estimated by comparing it with reference objects with known ranks. However, some objects cannot be easily compared. It is less easy to tell the older one between people of different genders than between those of the same gender. Lim et al. (2020) tried to deal with this issue by dividing an ordered dataset into disjoint chains. However, the chains were not clearly separated, and no meaningful properties were discovered from the chains. In this paper, we propose a reliable clustering algorithm, called deep repulsive clustering (DRC), of ordered data based on order-identity decomposition (ORID). Figure 1 shows a clustering example of ordered data. Note that some characteristics of objects, such as genders or races in age estimation, are not related to their ranks, and the ranks of objects sharing such characteristics can be compared more reliably.
To discover such characteristics without any supervision, the proposed ORID network decomposes the information of an object instance into an order-related feature and an identity feature unrelated to the rank. Then, the proposed DRC clusters object instances using their identity features; in each cluster, the instances share similar identity features. Furthermore, given a test instance, we decide its cluster based on the nearest neighbor (NN) rule, and compare it with reference instances within the cluster to estimate its rank. To this end, we develop a maximum a posteriori (MAP) estimation rule. Experimental results on ordered data for facial age estimation, aesthetic score regression (Kong et al., 2016), and historical color image classification (Palermo et al., 2012) demonstrate that the proposed algorithm separates ordered data clearly into meaningful clusters and provides excellent rank estimation performances for unseen test instances. The contributions of this paper can be summarized as follows.

• We first propose the notion of identity features of ordered data and develop the ORID network for the order-identity decomposition.
• We develop the DRC algorithm to cluster data on a unit sphere effectively using a repulsive term. We also prove the local optimality of the solution.
• We propose the MAP decision rule for rank estimation. The proposed algorithm provides the state-of-the-art performances for facial age estimation and aesthetic score regression.

2 RELATED WORK

2.1 ORDER LEARNING

The notion of order learning was first proposed by Lim et al. (2020). It aims to determine the order graph of classes and classify an object into one of the classes. In practice, it trains a pairwise comparator, which is a ternary classifier, to categorize the relationship between two objects into one of three cases: one object is bigger than, similar to, or smaller than the other. Then, it estimates the rank of a test object by comparing it with reference objects with known ranks. However, not every pair of objects is easily comparable. Although Lim et al. (2020) attempted to group objects into clusters, in which objects could be more accurately compared, their clustering results were unreliable. Pairwise comparison has been used to estimate object ranks, because relative evaluation is easier than absolute evaluation in general. Saaty (1977) proposed the scaling method to estimate absolute priorities from relative priorities, which has been applied to various decision processes, including aesthetic score regression (Lee & Kim, 2019). Also, some learning to rank (LTR) algorithms are based on pairwise comparison (Liu, 2009; Cohen et al., 1998; Burges et al., 2005; Tsai et al., 2007). Order learning attempts to combine (possibly inconsistent) pairwise ordering results to determine the rank of each object. Thus, it is closely related to Cohen et al.'s LTR algorithm (1998), which learns a pairwise preference function and obtains a total order of a set to maximize agreements among preference judgments of pairs of elements. Also, order learning is related to rank aggregation (Dwork et al., 2001), in which partially ordered sets are combined into a linearly ordered set to achieve the maximum consensus among those partial sets. Rank aggregation has been studied in various fields (Brüggemann et al., 2004). Since optimal aggregation is NP-hard, Dwork et al. (2001) proposed an approximate algorithm, called Markov chain ordering.
There are many other approximate schemes, such as the local Kemenization, Borda count, and scaled footrule aggregation.

2.2 CLUSTERING

Data clustering is the fundamental problem of partitioning data into disjoint groups, such that elements in the same group are similar to one another while elements from different groups are dissimilar. Although various clustering algorithms have been proposed (Hartigan & Wong, 1979; Ester et al., 1996; Kohonen, 1990; Dhillon & Modha, 2001; Reynolds, 2009), conventional algorithms often yield poor performance on high-dimensional data due to the curse of dimensionality and the ineffectiveness of similarity metrics. Dimensionality reduction and feature transform methods have been studied to map raw data into a new feature space, in which they are more easily separated. Linear transforms, such as PCA (Wold et al., 1987), and non-linear transformations, including kernel methods (Hofmann et al., 2008) and spectral clustering (Ng et al., 2002), have been proposed. Recently, deep neural networks have been adopted effectively as feature embedding functions (LeCun et al., 2015), and these deep-learning-based feature embedding functions have been combined with classical clustering algorithms. For instance, Caron et al. (2018) proposed a deep clustering algorithm based on k-means. It clusters features from a neural network and then trains the network using the cluster assignments as pseudo-labels. This is done iteratively. Also, Yang et al. (2016) jointly learned feature representations and clustered images, based on agglomerative clustering. Chang et al. (2017) recast the image clustering task into a binary classification problem to predict whether a pair of images belong to the same cluster or different clusters. Similarly to these algorithms, we use a neural network to determine a feature space in which clustering is done more effectively. However, we consider the clustering of ordered data, and each cluster should consist of elements whose ranks can be compared more accurately. There are conventional approaches that use clustering ideas to aid in classification or rank estimation. For example, Yan et al. (2015) developed a hierarchical classifier, which clusters fine categories into coarse category groups and classifies an object into a fine category within its coarse category group. For extreme multiclass classification, Daumé III et al. (2017) proposed to predict a class label among candidate classes only, which are dynamically selected by the recall tree. It is however noted that the leaves of the recall tree do not partition the set of classes. Also, for age estimation, Li et al. (2019) proposed a tree-like structure, called bridge-tree, to divide data into overlapping age groups and train a local regressor for each group. The set of local regressors can be more accurate than a global regressor in dealing with the entire age range. Whereas these conventional approaches group data in the label dimension to perform their tasks more effectively, the proposed algorithm clusters data in the dimension orthogonal to the label dimension. In other words, we cluster data using identity features, instead of order features.

3 PROPOSED ALGORITHM

3.1 PROBLEM DEFINITION

An order is a binary relation, often denoted by ≤, on a set Θ = {θ1, θ2, ..., θm} (Schröder, 2003). It should satisfy the three properties of reflexivity (θi ≤ θi for all i), antisymmetry (θi ≤ θj and θj ≤ θi imply θi = θj), and transitivity (θi ≤ θj and θj ≤ θk imply θi ≤ θk).
Then, Θ is called a partially ordered set. Furthermore, if every pair of elements is comparable (θi ≤ θj or θj ≤ θi for all i, j), Θ is called a chain or linearly ordered set. An order describes ranks or priorities of classes. For example, in age estimation, θi may represent the age class of i-year-olds. Then, θ14 ≤ θ49 represents that 14-year-olds are younger than 49-year-olds. As mentioned previously, it is less easy to tell the older one between people of different genders. An algorithm, hence, may compare a subject with reference subjects of the same gender only. In such a case, each age class θi represents two subclasses θi^female and θi^male of different types, and the algorithm compares only subjects of the same type. Lim et al. (2020) assumed that subclasses of different types are incomparable and thus the set of subclasses is the union of k disjoint chains, where k is the number of types. However, in many ranking applications, objects of different types can be compared (although less easily than those of the same type are). Thus, instead of assuming incomparability across chains, we assume that there is a total order on Θ = {θ1, θ2, ..., θm}, in which each class θi consists of k types of subclasses, and that object instances of the same type are more easily compared than those of different types. Suppose that n training instances in X = {x1, x2, ..., xn} are given. Also, suppose that there are m ranks and the ground-truth rank of each instance is known. In this sense, X contains ordered data. The problem is twofold. The first goal is to decompose the whole set of instances X into k disjoint clusters {Cj}, j = 1, ..., k, in which instances are more easily compared;

X = ∪_{j=1}^{k} Cj (1)

where Ci ∩ Cj = ∅ for i ≠ j. In other words, we aim to partition the ordered data in X into k clusters, by grouping them according to their characteristics unrelated to their ranks. These characteristics, which tend to remain the same even when an object experiences rank changes, are referred to as 'identity' features in this work. For example, in age estimation, genders or races can be identity features. However, we perform the clustering without any supervision for identity features. Notice that instances within a cluster would be compared more easily than those across clusters, since they have similar identity features. The number k of clusters is assumed to be known a priori. Impacts of k on the clustering performance are discussed in Appendix B.7. The second goal is to assign an unseen test instance to one of the clusters and determine its rank by comparing it with reference instances within the cluster. To achieve these goals, we propose the ORID network and the DRC algorithm.

3.2 ORDER-IDENTITY DECOMPOSITION

In general, object instances can be compared more easily as they have more similar identity features irrelevant to order. Therefore, we decompose the information of each object instance into an order feature and an identity feature. To this end, we propose the ORID network in Figure 2, composed of three parts: autoencoder, discriminator, and comparator.

1) Autoencoder: Similarly to the deep clustering algorithms in (Yang et al., 2017; Dizaji et al., 2017; Chen et al., 2017; Ji et al., 2017), we use an autoencoder G ∘ F(·), based on a neural network, to extract feature vectors. The encoder h^x = F(x) maps an input vector x to a feature vector h^x, while the decoder x̂ = G(h^x) reconstructs x̂ from h^x.
By minimizing the reconstruction loss ‖x − x̂‖_1, F is trained to represent x compactly with as little loss of information as possible. We decompose the overall feature h^x ∈ R^(dor+did) into the order feature h^x_or and the identity feature h^x_id, given by

h^x_or = [h^x_1, h^x_2, ..., h^x_dor]^t (2)
h^x_id = [h^x_(dor+1), h^x_(dor+2), ..., h^x_(dor+did)]^t / ‖[h^x_(dor+1), h^x_(dor+2), ..., h^x_(dor+did)]‖ (3)

where dor and did are the dimensions of h^x_or and h^x_id. However, without additional control, the output h^x of the neural network F would be highly entangled (Higgins et al., 2018). To put order-related information together into h^x_or, we employ the comparator.

2) Comparator: Using the order features h^x_or and h^y_or of a pair of instances x and y, we train the comparator, which classifies their ordering relationship into one of three categories, 'bigger,' 'similar,' and 'smaller':

x ≻ y if θ(x) − θ(y) > τ,
x ≈ y if |θ(x) − θ(y)| ≤ τ,
x ≺ y if θ(x) − θ(y) < −τ, (4)

where θ(·) denotes the class of an instance. As in (Lim et al., 2020), '≻, ≈, ≺' represent the ordering relationship between instances, while '>, =, <' denote the mathematical order between classes. The comparator outputs the softmax probability p^xy = (p^xy_≻, p^xy_≈, p^xy_≺). It is trained to minimize the cross-entropy between p^xy and the ground-truth one-hot vector q^xy = (q^xy_≻, q^xy_≈, q^xy_≺). Because it is trained jointly with the autoencoder, the information deciding the ordering relationship tends to be encoded into the order features h^x_or and h^y_or. On the other hand, the remaining information, necessary for the reconstruction of x̂ and ŷ, is encoded into the identity features h^x_id and h^y_id.

3) Discriminator: We adopt a discriminator D that tells real images from synthesized images generated by the decoder G. Using the GAN loss (Goodfellow et al., 2014), the discriminator helps the decoder to reconstruct more realistic outputs x̂ and ŷ. Appendix A provides detailed network structures of these components in ORID.
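To fix ideas, the feature split in (2)–(3) and the ternary labels in (4) can be sketched in PyTorch as follows (our own illustrative code, not the released implementation):

```python
import torch
import torch.nn.functional as F

def split_features(h: torch.Tensor, d_or: int):
    # (2): the first d_or entries form the order feature;
    # (3): the rest is L2-normalized onto the unit sphere as the identity feature.
    h_or = h[:, :d_or]
    h_id = F.normalize(h[:, d_or:], p=2, dim=1)
    return h_or, h_id

def ordering_label(theta_x: float, theta_y: float, tau: float) -> int:
    # Ternary ground truth of (4): 0 = 'bigger', 1 = 'similar', 2 = 'smaller'.
    d = theta_x - theta_y
    if d > tau:
        return 0
    return 1 if abs(d) <= tau else 2
```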
3.3 DEEP REPULSIVE CLUSTERING

After obtaining the identity features h^x1_id, h^x2_id, ..., h^xn_id of all instances xi ∈ X, we partition them into k clusters. Each cluster contains instances that are more easily comparable to one another. The identity features are normalized in Eq. (3) and lie on the unit sphere in R^did. In other words, we cluster data points on the unit sphere. Thus, the cosine similarity is a natural affinity metric. Let Cj, 1 ≤ j ≤ k, denote the k clusters. Also, let cj, constrained to be on the unit sphere, denote the 'centroid' or the representative vector for the instances in cluster Cj. We define the quality of cluster Cj as

Σ_{x∈Cj} ( (h^x_id)^t cj − (α/(k−1)) Σ_{l≠j} (h^x_id)^t cl ) (5)

where the first term is the similarity of an instance in Cj to the centroid cj, the second term with the negative sign quantifies the average dissimilarity of the instance from the other centroids, and α is a nonnegative weight. For a high quality cluster, instances should be concentrated around the centroid and be far from the other clusters. The second term is referred to as the repulsive term, as its objective is similar to the repulsive rule in (Lee et al., 2015). Although conventional methods also try to increase inter-cluster dissimilarity (Ward Jr, 1963; Lee et al., 2015), to the best of our knowledge, DRC is the first attempt to use an explicit repulsive term in deep clustering, which jointly optimizes clustering and feature embedding. Next, we measure the overall quality of the clustering by

J({Cj}_{j=1}^{k}, {cj}_{j=1}^{k}) = Σ_{j=1}^{k} Σ_{x∈Cj} ( (h^x_id)^t cj − (α/(k−1)) Σ_{l≠j} (h^x_id)^t cl ). (6)

We aim to find the optimum clusters to maximize this objective function J, yet finding the global optimum is NP-complete (Kleinberg et al., 1998; Garey et al., 1982). Hence, we propose an iterative algorithm, called DRC, to find a local optimum, as in the k-means algorithm (Gersho & Gray, 1991).

1. Centroid rule: After fixing the clusters {Cj}_{j=1}^{k}, we update the centroids {cj}_{j=1}^{k} to maximize J in Eq. (6). Because the centroids should lie on the unit sphere, we solve the constrained optimization problem:

maximize J({cj}_{j=1}^{k}) subject to cj^t cj = 1 for all j = 1, ..., k. (7)

Using Lagrangian multipliers (Bertsekas, 1996), the optimal centroids are obtained as

cj = ( Σ_{x∈Cj} h^x_id − (α/(k−1)) Σ_{x∈X\Cj} h^x_id ) / ‖ Σ_{x∈Cj} h^x_id − (α/(k−1)) Σ_{x∈X\Cj} h^x_id ‖. (8)

2. NN rule: On the other hand, after fixing the centroids, we update the membership of each instance to maximize J in Eq. (6). The optimal cluster Cj is given by

Cj = { x | (h^x_id)^t cj ≥ (h^x_id)^t cl for all 1 ≤ l ≤ k }. (9)

In other words, an instance should be assigned to Cj if its nearest centroid is cj. We apply the centroid rule and the NN rule iteratively until convergence. Because both rules monotonically increase the same objective function J and the inequality J ≤ n + (α/(k−1))·n always holds, J is guaranteed to converge to a local maximum. Readers interested in the convergence are referred to (Sabin & Gray, 1986; Pollard, 1982). Without the repulsive term in Eq. (6) (i.e. at α = 0), centroid cj in Eq. (8) is updated by

cj = Σ_{x∈Cj} h^x_id / ‖ Σ_{x∈Cj} h^x_id ‖, (10)

as done in the spherical k-means (Dhillon & Modha, 2001). In contrast, with a positive α, the objective function J is reduced when the centroids are close to one another. Ideally, in equilibrium, the centroid of a cluster should be the opposite of the centroid of all the other clusters:

( Σ_{x∈Cj} h^x_id / ‖ Σ_{x∈Cj} h^x_id ‖ )^t ( Σ_{x∈X\Cj} h^x_id / ‖ Σ_{x∈X\Cj} h^x_id ‖ ) = −1 for all j = 1, 2, ..., k. (11)

Note that the ORID network, and thus the encoded feature space, are trained jointly with the repulsive clustering. As the training goes on, the centroids repel one another, and the clusters are separated more clearly due to the repulsive term. We jointly optimize the clusters and the ORID network parameters, as described in Algorithm 1. First, we train the ORID network for warm-up epochs, by employing every pair of instances x and y as input. Then, using the identity features, we partition the input data into k clusters using k-means. Second, we repeat the fine-tuning of the ORID network and the repulsive clustering alternately. In the fine-tuning, a pair of x and y are constrained to be from the same cluster, and the following loss function is employed:

ℓ = λ_rec ℓ_rec + λ_clu ℓ_clu + λ_com ℓ_com + λ_gan ℓ_gan. (12)

Appendix B describes this loss function in detail, proves the optimality of the centroid and NN rules in Eq. (8) and (9), and analyzes the impacts of the repulsive term in Eq. (6).

Algorithm 1 DRC-ORID
Input: Ordered data X = {x1, x2, ..., xn}, k = the number of clusters
1: Train ORID network for warm-up epochs to minimize loss λ_rec ℓ_rec + λ_com ℓ_com + λ_gan ℓ_gan
2: Partition X into C1, C2, ..., Ck using k-means
3: repeat
4:   Fine-tune ORID network to minimize loss λ_rec ℓ_rec + λ_clu ℓ_clu + λ_com ℓ_com + λ_gan ℓ_gan
5:   repeat
6:     for all j = 1, 2, ..., k do
7:       Update centroid cj via Eq. (8)   ▷ Centroid rule
8:     end for
9:     for all j = 1, 2, ..., k do
10:      Update cluster Cj via Eq. (9)    ▷ NN rule
11:    end for
12:  until convergence or predefined number of iterations
13: until predefined number of epochs
Output: Clusters {Cj}_{j=1}^{k}, centroids {cj}_{j=1}^{k}, ORID network
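To make the alternation concrete, here is a minimal NumPy sketch of the inner loop of Algorithm 1 (lines 5–12), assuming unit-norm identity features stacked row-wise in an n×d matrix H and non-empty clusters throughout; the variable names are ours:

```python
import numpy as np

def drc_inner_loop(H, assign, k, alpha, max_iters=100):
    # Alternate the centroid rule (8) and the NN rule (9).
    for _ in range(max_iters):
        total = H.sum(axis=0)
        S = np.stack([H[assign == j].sum(axis=0) for j in range(k)])
        C = S - (alpha / (k - 1)) * (total - S)          # repulsive term of (8)
        C /= np.linalg.norm(C, axis=1, keepdims=True)    # project onto unit sphere
        new_assign = np.argmax(H @ C.T, axis=1)          # NN rule (9)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign, C
```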
3.4 RANK ESTIMATION

Using the output of the DRC-ORID algorithm, we can estimate the rank of an unseen test instance x. First, we extract its identity feature h^x_id using the ORID encoder. By comparing h^x_id with the centroids {cj}_{j=1}^{k} based on the NN rule, we find the most similar centroid cl. Then, x is declared to belong to cluster Cl. Without loss of generality, let us assume that the classes (or ranks) are the first m natural numbers, Θ = {1, 2, ..., m}. Then, for each i ∈ Θ, we select a reference instance yi with rank i from cluster Cl, so that it is the most similar to x. Specifically,

yi = argmax_{y∈Cl : θ(y)=i} (h^x_id)^t h^y_id. (13)

We estimate the rank θ(x) of the test instance x by comparing it with the chosen references yi, 1 ≤ i ≤ m. For the rank estimation, Lim et al. (2020) developed the maximum consistency rule, which however does not exploit the probability information generated by the comparator. In this paper, we use the maximum a posteriori (MAP) estimation rule, which is described in detail in Appendix B.10.
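A hedged NumPy sketch of this inference procedure follows (cluster assignment by the NN rule and reference selection as in (13); the MAP comparison step itself is deferred to Appendix B.10 and omitted here, and all names are ours):

```python
import numpy as np

def assign_and_select_references(h_x, centroids, H_id, ranks, cluster_of):
    # NN rule over the centroids: pick the cluster of the test instance.
    l = int(np.argmax(centroids @ h_x))
    refs = {}
    for i in np.unique(ranks):
        # (13): most similar in-cluster training instance of rank i.
        idx = np.where((cluster_of == l) & (ranks == i))[0]
        if idx.size > 0:
            refs[int(i)] = int(idx[np.argmax(H_id[idx] @ h_x)])
    return l, refs
```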
4 EXPERIMENTAL RESULTS

This section provides various experimental results. Due to space limitation, implementation details and more results are available in Appendices C, D, and E.

4.1 FACIAL AGE ESTIMATION

Datasets: We use two datasets. First, MORPH II (Ricanek & Tesafaye, 2006) is a collection of about 55,000 facial images in the age range [16, 77]. It provides gender (female, male) and race (African American, Asian, Caucasian, Hispanic) labels as well. We employ the four evaluation settings A, B, C, and D in Appendix C.2. Second, the balanced dataset (Lim et al., 2020) is sampled from the three datasets of MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017) to overcome bias toward specific ethnic groups or genders. It contains about 6,000 images for each combination of gender in {female, male} and ethnic group in {African, Asian, European}.

Clustering: Figure 3 shows clustering results on MORPH II (setting A), when the number of clusters is k = 2. Setting A contains faces of Caucasian descent only. Thus, the proposed DRC-ORID divides those faces into two clusters according to genders in general, although the annotated gender information is not used. Most males are assigned to cluster 1, while a majority of females to cluster 2. On the other hand, setting B consists of Africans and Caucasians. Thus, those images are clustered according to the races, as shown in Appendix C.3. Figure 4 shows the results on the balanced dataset at k = 3, which is composed of MORPH II, AFAD, and UTK images. Due to the different characteristics of these sources, images are clearly divided according to their sources. At k = 2, MORPH II images are separated from the others. This is because, unlike the MORPH II images, the boundaries of most AFAD and UTK images are zeroed for alignment using SeetaFaceEngine (Zhang et al., 2014). Lim et al. (2020) also tried the clustering of the balanced dataset. Figure 5 visualizes the feature space using t-SNE (Maaten & Hinton, 2008). Although their method aligns the features according to ages, their clusters are not separated, overlapping one another. In contrast, the proposed DRC-ORID separates the three clusters clearly, as well as sorts features according to the ages within each cluster. More t-SNE plots for analyzing the impacts of the repulsive term are available in Appendix B.5.

Age transformation: We assess the decomposition performance of ORID. Although ORID is not designed for age transformation (Or-El et al., 2020), it decomposes an image x into the order and identity features, h^x_{or} and h^x_{id}. Thus, the age can be transformed in two steps. First, we replace h^x_{or} of x with h^y_{or} of a reference image y at a target age. Second, we decode the resultant feature (the concatenation of h^y_{or} and h^x_{id}) to obtain the transformed image. Figure 6 shows some results on MORPH II images. Order-related properties, such as skin textures and hair colors, are modified plausibly, but identity information is preserved. This indicates the reliability of ORID.

Age estimation: Table 1 compares the proposed algorithm with conventional age estimators on the four evaluation settings of MORPH II. These conventional algorithms take 224 × 224 or bigger images as input, while ORID takes 64 × 64 images. Moreover, most of them adopt VGG16 (Simonyan & Zisserman, 2015) as their backbones, which is more complicated than the ORID encoder. Thus, for comparison, after fixing clusters using DRC-ORID, we train another pairwise comparator based on VGG16, whose architecture is the same as in Lim et al. (2020). We measure the age estimation performance by the mean absolute error (MAE) and the cumulative score (CS). MAE is the average absolute error between estimated and ground-truth ages, and CS computes the percentage of test samples whose absolute errors are less than or equal to a tolerance level of 5. Mainly due to the smaller input size of 64 × 64, the vanilla version yields poorer performances than the conventional algorithms. The VGG version, however, outperforms them significantly. First, in the proposed-VGG (k = 1), all instances can be compared, as in the OL algorithm. In other words, the clustering is not performed. Thus, the pairwise comparators of OL and the proposed-VGG (k = 1) are trained in the same way, but their rank estimation rules are different. Whereas OL uses the maximum consistency rule, the proposed algorithm performs the MAP estimation. The score gaps between them confirm that the MAP estimation is more accurate. Moreover, by clustering facial images into two groups, the proposed-VGG (k = 2) improves the results meaningfully. The proposed-VGG (k = 2) provides the state-of-the-art results, except for the MAE test in setting D.
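For reference, the MAE and CS metrics used in Table 1 can be computed as follows; a minimal sketch over arrays of predicted and ground-truth ages.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between estimated and ground-truth ages."""
    return float(np.mean(np.abs(pred - gt)))

def cs(pred, gt, tol=5):
    """Cumulative score: percentage of samples with absolute error <= tol."""
    return float(100.0 * np.mean(np.abs(pred - gt) <= tol))
```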
4.2 AESTHETIC SCORE REGRESSION

The aesthetics and attribute database (AADB) is composed of 10,000 photographs of various themes, such as scenery and close-ups (Kong et al., 2016). Each image is annotated with an aesthetic score in [0, 1]. We quantize the continuous score with a step size of 0.01 to make 101 score classes. Compared to facial images, AADB contains more diverse data. It is hence more challenging to cluster AADB images. Figure 8 shows example images in each cluster at k = 8. Images in the same cluster have similar colors, similar contents, or similar composition. This means that ORID extracts identity features effectively, corresponding to contents or styles that are not directly related to aesthetic scores. Using those identity features, DRC discovers meaningful clusters.

Figure 9 visualizes the feature space of AADB. Aesthetic scores are sorted along one direction, while clusters are separated in the other, orthogonal direction. In other words, the scores look like latitudes, while the clusters appear to be separated by meridians (or lines of longitude). As a point on the earth's surface can be located by its latitude and longitude, an image is represented by its aesthetic score (order feature) and cluster (identity feature).

Table 2 compares regression results. Even without the clustering process, the proposed algorithm outperforms the Reg-Net and ASM algorithms. Moreover, by using the eight unsupervised clusters in Figure 8, the proposed algorithm further reduces the MAE to yield the state-of-the-art result.

4.3 HISTORICAL COLOR IMAGE CLASSIFICATION

HCI (Palermo et al., 2012) is a dataset for determining the decade when a photograph was taken. It contains images from five decades, the 1930s to the 1970s. Each decade category has 265 images: 210, 5, and 50 are used for training, validation, and testing, respectively. Figure 7 shows the clustering results at k = 4. We observe similarity of contents in each cluster. Table 3 compares the quinary classification results. Frank & Hall (2001), Cardoso & da Costa (2007), Palermo et al. (2012), and RED-SVM use traditional features, while the others use deep features. The performance gaps between these two approaches are not huge, since 1,050 images are insufficient for training deep networks.

5 IMPACTS OF APPLICATIONS

The proposed algorithm can be applied to various ranking problems. In this paper, we demonstrated three vision applications: facial age estimation, aesthetic score regression, and historical image classification. In particular, the proposed age estimator has various potential uses. For example, it can block or recommend media contents to people according to their ages. However, it has harmful impacts, as well as positive ones. Moreover, although age information lacks the distinctiveness to identify an individual, identity features, extracted by ORID, can be misused in facial recognition systems, causing serious problems such as unwanted invasion of privacy (Raji et al., 2020). Hence, ethical considerations should be made before the use of the proposed algorithm. Recently, ethical concerns about the fairness and safety of automated systems have been raised (Castelvecchi, 2020; Roussi, 2020; Noorden, 2020). In particular, due to the intrinsic imbalance of facial datasets (Ricanek & Tesafaye, 2006; Zhang et al., 2017; Niu et al., 2016), most deep learning methods on facial analysis (Wen et al., 2020; Or-El et al., 2020) have unwanted gender or racial bias. The proposed algorithm is not free from this bias either. Hence, before any practical usage, the bias should be resolved. Also, even though the proposed algorithm groups data in an unsupervised manner, data are clustered according to genders or races on MORPH II. These results should never be misinterpreted in such a way as to encourage any racial or gender discrimination. We recommend using the proposed age estimator for research only.

6 CONCLUSIONS

The DRC algorithm for ordered data based on ORID was proposed in this work. First, the ORID network decomposes the information of an object into the order and identity features. Then, DRC groups objects into clusters using their identity features in a repulsive manner. Also, we can estimate the rank of an unseen test instance by comparing it with references within the corresponding cluster based on the MAP decision. Extensive experimental results on various ordered data demonstrated that the proposed algorithm provides excellent clustering and rank estimation performances.
ACKNOWLEDGMENTS

This work was supported in part by the MSIT, Korea, under the ITRC support program (IITP-2020-2016-0-00464) supervised by the IITP, and in part by the National Research Foundation of Korea (NRF) through the Korea Government (MSIP) under Grant NRF-2018R1A2B3003896.

A NETWORK STRUCTURE OF ORID

As described in Section 3.2, the ORID network consists of the encoder F, the decoder G, the comparator C, and the discriminator D. The network structures of these components are detailed in Tables 4-7, where 'k_h × k_w-s-c Conv' and 'k_h × k_w-s-c Deconv' denote the 2D convolution and 2D deconvolution with kernel size k_h × k_w, stride s, and c output channels, respectively. 'BN' means batch normalization (Ioffe & Szegedy, 2015), and 'c Dense' is a dense layer with c output channels. Note that the encoder takes a 64 × 64 RGB image as input, and the identity feature of the encoder output is l2-normalized in Eq. (3). Also, we set d_or = 128 and d_id = 896.

B ALGORITHMS – DETAILS

B.1 OPTIMALITY OF CENTROID RULE

To solve the constrained optimization problem in Eq. (7), we construct the Lagrangian function

L = \sum_{j=1}^{k} \sum_{x \in C_j} \Big( (h^x_{id})^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h^x_{id})^t c_l \Big) - \sum_{j=1}^{k} \lambda_j (c_j^t c_j - 1) (14)

where \lambda_j, 1 \leq j \leq k, are Lagrangian multipliers (Bertsekas, 1996). By differentiating L with respect to c_j and setting it to zero, we have

\frac{\partial L}{\partial c_j} = \sum_{x \in C_j} h^x_{id} - \frac{\alpha}{k-1} \sum_{l \neq j} \sum_{x \in C_l} h^x_{id} - 2 \lambda_j c_j (15)
= \sum_{x \in C_j} h^x_{id} - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h^x_{id} - 2 \lambda_j c_j (16)
= 0 (17)

for j = 1, \ldots, k. Therefore, the optimal centroid c_j is given by

c_j = \Big( \sum_{x \in C_j} h^x_{id} - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h^x_{id} \Big) \Big/ (2 \lambda_j). (18)

Because of the normalization constraint c_j^t c_j = 1, we have

2 \lambda_j = \Big\| \sum_{x \in C_j} h^x_{id} - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h^x_{id} \Big\|, (19)

which leads to the centroid rule in Eq. (8).

B.2 OPTIMALITY OF NN RULE

Let us consider two cases. First, instance x is declared to belong to cluster C_j. It then contributes to the objective function J in Eq. (6) by

\beta_j = (h^x_{id})^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h^x_{id})^t c_l. (20)

Second, x is declared to belong to another cluster C_{j'}. Then, its contribution is

\beta_{j'} = (h^x_{id})^t c_{j'} - \frac{\alpha}{k-1} \sum_{l \neq j'} (h^x_{id})^t c_l. (21)

By comparing the two contributions, we have

\beta_j - \beta_{j'} = (h^x_{id})^t (c_j - c_{j'}) - \frac{\alpha}{k-1} (h^x_{id})^t (c_{j'} - c_j) (22)
= \Big( 1 + \frac{\alpha}{k-1} \Big) (h^x_{id})^t (c_j - c_{j'}). (23)

This means that \beta_j \geq \beta_{j'} when (h^x_{id})^t c_j \geq (h^x_{id})^t c_{j'}. Therefore, x should be assigned to the optimal cluster C_{j^*} such that the cosine similarity (h^x_{id})^t c_{j^*} is maximized. Equivalently, we have the NN rule in Eq. (9).

B.3 REGULARIZATION CONSTRAINT IN DRC

To prevent empty clusters and balance the partitioning, we enforce a regularization constraint so that every cluster contains at least a predefined number of instances. More specifically, when applying the NN rule, we enforce that at least a 1/(2k) fraction of the instances are assigned to each cluster C_j. The instances are selected in the decreasing order of cosine similarity (h^x_{id})^t c_j.

B.4 LOSS FUNCTIONS

In the DRC-ORID algorithm, we use the loss function

\ell = \lambda_{rec} \ell_{rec} + \lambda_{clu} \ell_{clu} + \lambda_{com} \ell_{com} + \lambda_{gan} \ell_{gan} (24)

where the reconstruction, clustering, comparator, and GAN losses are given by

\ell_{rec} = \frac{1}{2N} \sum_{i=1}^{N} \big( \| x_i - G(F(x_i)) \|_1 + \| y_i - G(F(y_i)) \|_1 \big), (25)
\ell_{clu} = -\frac{1}{2N} \sum_{i=1}^{N} \big( (h^{x_i}_{id})^t c_j + (h^{y_i}_{id})^t c_j \big), (26)
\ell_{com} = -\frac{1}{N} \sum_{i=1}^{N} \big( q^{x_i y_i}_{\succ} \log p^{x_i y_i}_{\succ} + q^{x_i y_i}_{\approx} \log p^{x_i y_i}_{\approx} + q^{x_i y_i}_{\prec} \log p^{x_i y_i}_{\prec} \big), (27)
\ell_{gan} = -\frac{1}{2N} \sum_{i=1}^{N} \big( \log(1 - D(G(F(x_i)))) + \log(1 - D(G(F(y_i)))) \big), (28)

respectively. Here, N is the number of image pairs in a minibatch. The weighting parameters are set to \lambda_{rec} = 5, \lambda_{clu} = 0.1, \lambda_{com} = 1, and \lambda_{gan} = 1.
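To summarize Eqs. (24)-(28) in one place, below is a sketch of the combined loss, assuming PyTorch-style modules F (encoder), G (decoder), comparator, and D (discriminator) with the interfaces noted in the comments. This is our illustration under those assumptions, not the authors' released code.

```python
import torch

def drc_orid_loss(F, G, comparator, D, x, y, c, q,
                  lam=(5.0, 0.1, 1.0, 1.0), d_or=128):
    """Eqs. (24)-(28) for a minibatch of pairs (x, y) drawn from the same
    cluster with centroid c (unit vector of size d_id); q is the one-hot
    ordering target of Eq. (4). Assumed interfaces: F(x) -> (N, d_or + d_id)
    with the identity part already normalized as in Eq. (3); G(h) -> image
    batch; comparator(h_or_x, h_or_y) -> (N, 3) log-probabilities;
    D(img) -> (N, 1) probability of being real."""
    hx, hy = F(x), F(y)
    hx_or, hx_id = hx[:, :d_or], hx[:, d_or:]
    hy_or, hy_id = hy[:, :d_or], hy[:, d_or:]
    x_hat, y_hat = G(hx), G(hy)

    # Eq. (25): per-image L1 reconstruction error, averaged over the pair.
    l_rec = 0.5 * ((x - x_hat).abs().flatten(1).sum(1).mean()
                   + (y - y_hat).abs().flatten(1).sum(1).mean())
    # Eq. (26): negative cosine similarity to the cluster centroid.
    l_clu = -0.5 * ((hx_id @ c).mean() + (hy_id @ c).mean())
    # Eq. (27): cross-entropy of the ternary ordering prediction.
    l_com = -(q * comparator(hx_or, hy_or)).sum(dim=1).mean()
    # Eq. (28): GAN loss on the reconstructions (epsilon guards the log).
    eps = 1e-8
    l_gan = -0.5 * (torch.log(1 - D(x_hat) + eps).mean()
                    + torch.log(1 - D(y_hat) + eps).mean())

    lr, lc, lm, lg = lam
    return lr * l_rec + lc * l_clu + lm * l_com + lg * l_gan
```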
B.5 IMPACTS OF REPULSIVE TERM ON CLUSTERING

To analyze the impacts of the repulsive term in Eq. (6), we first compare clustering qualities with \alpha = 0 and \alpha = 0.1. At \alpha = 0, the repulsive term is excluded from the objective function J and the centroid rule is reduced to Eq. (10) in the spherical k-means (Dhillon & Modha, 2001). However, different from the spherical k-means, even at \alpha = 0, the clustering is jointly performed with the training of the ORID network.

We adopt two metrics to measure the quality of clustering: normalized mutual information (NMI) (Strehl & Ghosh, 2002) and centroid affinity (CA). NMI measures the information shared between two different partitionings, A = \cup_{i=1}^{U} A_i and B = \cup_{j=1}^{V} B_j, of the same data,

NMI(A, B) = \frac{ \sum_{i=1}^{U} \sum_{j=1}^{V} |A_i \cap B_j| \log \frac{N |A_i \cap B_j|}{|A_i| |B_j|} }{ \sqrt{ \big( \sum_{i=1}^{U} |A_i| \log \frac{|A_i|}{N} \big) \big( \sum_{j=1}^{V} |B_j| \log \frac{|B_j|}{N} \big) } } (29)

where U and V are the numbers of clusters in A and B, respectively, N is the total number of samples, and |\cdot| denotes the cardinality. Also, we define the centroid affinity (CA) as

CA(\{c_j\}_{j=1}^{k}) = \frac{2}{k(k-1)} \sum_{j=1}^{k} \sum_{l > j} c_j^t c_l. (30)

For high-quality clustering, the centroids should be far from one another and thus should yield a low CA score.

Figure 10 plots how NMI and CA vary as the iteration goes on. In this test, MORPH II (setting B) is used and the number of clusters k is set to 2. Since setting B consists of Africans and Caucasians, we use the race groups as the ground-truth partitioning for the NMI measurement. At early iterations, the NMI score of DRC-ORID with \alpha = 0.1 is slightly better than that with \alpha = 0. However, as the iterative training and clustering go on, the score gap gets larger. After the convergence, DRC-ORID with \alpha = 0.1 outperforms the option \alpha = 0 by a significant NMI gap of 0.13. Also, CA of the option \alpha = 0.1 gradually decreases, whereas that of \alpha = 0 does not. At \alpha = 0.1, the repulsive term makes the centroids repel each other. As a result, CA, which is the cosine similarity between the two centroids, becomes almost -1, which means the equilibrium state in Eq. (11) is almost achieved. We also visualize the feature spaces of the two options, \alpha = 0 and \alpha = 0.1, using t-SNE in Figure 11. It is observed that the two clusters are more clearly separated by DRC-ORID with \alpha = 0.1. Figure 12 shows the t-SNE results after the convergence with age labels.

Figure 13 compares the NMI curves at different \alpha's. The choice of \alpha affects the quality of clustering, as \alpha controls the intensity of the repulsive force between centroids. When \alpha is too large, the centroids move too quickly, making the training of the ORID network difficult. On the other hand, when \alpha is too small, the repulsive term does not affect the clustering meaningfully. Hence, \alpha should be selected to strike a balance between training reliability and effective repulsion. It was found experimentally that clustering is performed well around \alpha = 0.1.

Finally, it is worth pointing out that, if the identity features were not normalized as in Eq. (3) and the repulsive clustering were performed in an unbounded space, the distances between centroids would get larger and larger as the iteration goes on. Thus, convergence would not be achieved. This is why we perform DRC on the bounded unit sphere.
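Both metrics are easy to compute from scratch; below is a sketch matching Eq. (29) and Eq. (30), assuming partitions given as integer label arrays and centroids stacked as rows of a matrix. Note that library NMI implementations may use a normalization different from the geometric mean in Eq. (29).

```python
import numpy as np

def nmi(a, b):
    """Eq. (29): NMI between two partitions given as label arrays a and b."""
    n = len(a)
    num = 0.0
    for i in np.unique(a):
        for j in np.unique(b):
            nij = np.sum((a == i) & (b == j))
            if nij > 0:
                num += nij * np.log(n * nij / (np.sum(a == i) * np.sum(b == j)))
    den = np.sqrt(
        sum(np.sum(a == i) * np.log(np.sum(a == i) / n) for i in np.unique(a))
        * sum(np.sum(b == j) * np.log(np.sum(b == j) / n) for j in np.unique(b)))
    return num / den

def centroid_affinity(C):
    """Eq. (30): average pairwise cosine similarity of the unit centroids
    (rows of C); lower is better."""
    k = C.shape[0]
    total = sum(float(C[j] @ C[l]) for j in range(k) for l in range(j + 1, k))
    return 2.0 / (k * (k - 1)) * total
```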
B.6 IMPACTS OF REPULSIVE TERM ON RANK ESTIMATION

Table 8 compares the rank estimation results when the clustering is performed with and without the repulsive term. In this experiment, we use MORPH II (setting A) and set k = 2. Without the repulsive term, lower-quality clusters make the training of the comparator more difficult. As a result, the age estimation performance degrades significantly in terms of both MAE and CS. In other words, the quality of clustering affects the rank estimation performance greatly, and the proposed DRC algorithm provides high-quality clusters suitable for the rank estimation.

B.7 IMPACTS OF THE NUMBER k OF CLUSTERS ON RANK ESTIMATION

Tables 9 and 10 compare the rank estimation results according to the number k of clusters on the MORPH II (setting A) and AADB datasets, respectively. On MORPH II, the age estimation performance decreases as k increases. Since the training set in setting A consists of only 4,394 images, each cluster at a large k contains too few instances. Thus, the comparator is trained inefficiently with fewer training pairs, degrading the performance. In contrast, AADB contains a large number of diverse images. Due to the diversity, a relatively large k should be used to group images into meaningful clusters. Also, even at a large k, each cluster contains a sufficient number of data. Thus, as compared to MORPH II, results on AADB are less sensitive to k. In addition, we provide age estimation results on the balanced dataset in Table 14, in which k has marginal impacts on the rank estimation performance. As mentioned previously, the quality of clustering significantly affects the rank estimation performance. Also, similarly to other algorithms based on k-means, the clustering quality of DRC is affected by k. Hence, for the proposed algorithm to be used on a new ordered dataset, k should be determined effectively to obtain good clustering and rank estimation results. Readers interested in the selection of k are referred to Pham et al. (2005).

B.8 CLUSTERING USING OTHER FEATURES

Instead of clustering identity features h^{x_1}_{id}, h^{x_2}_{id}, \ldots, h^{x_n}_{id}, we test clustering order features h^{x_1}_{or}, h^{x_2}_{or}, \ldots, h^{x_n}_{or} or whole features h^{x_1}, h^{x_2}, \ldots, h^{x_n}. In this test, MORPH II (setting A) is used and k = 2. Figure 14 compares the clustering results. When using order features or whole features, instances are divided by their ages. We see that instances younger than 30 mostly belong to cluster 1 and the others to cluster 2. Table 11 compares the performances of the age estimators trained using these clustering results. The best performance is achieved when the clustering is done on identity features.

B.9 RELIABILITY OF FEATURE DECOMPOSITION

Performing the comparison using order features only does not theoretically guarantee that order-related information is fully excluded from identity features. However, we observed empirically that the decomposition is sufficiently reliable if the dimension of an identity feature is selected properly. If the dimension is too small, the encoder may lose a significant portion of order-irrelevant information. On the contrary, if the dimension is too large, the encoder may encode order information redundantly. In our experiments, we use 128- and 896-dimensional vectors for order and identity features (d_or = 128 and d_id = 896), and obtain satisfactory decomposition results. To show that order-related information is excluded from identity features, we compare the accuracies of the comparator (i.e. the ternary classifier) when identity features are used instead of order features. Specifically, we first extract order features and identity features from all instances in MORPH II using the pretrained ORID network.
Then, we train two comparators that predict the ordering relationship between two instances x and y: one takes the order features h^x_{or} and h^y_{or} as input, and the other takes the identity features h^x_{id} and h^y_{id}. Table 12 lists the comparator accuracies. We see that the comparator fails to predict ordering relationships from identity features. Also, Figure 15 shows t-SNE visualizations of the identity feature spaces with age or cluster labels, which confirm that order-related information is excluded effectively from identity features.

B.10 MAP ESTIMATION

Let us describe the MAP estimation rule for rank estimation in Section 3.4. Given a test instance x, we select references y_i by Eq. (13). Then, by comparing x with y_i, the comparator yields the probability vector p^{x y_i} = (p^{x y_i}_{\succ}, p^{x y_i}_{\approx}, p^{x y_i}_{\prec}) for the three cases in Eq. (4). Thus, given y_i, the probability of \theta(x) = r can be written as

P_{\theta(x)}(r \mid y_i) = p^{x y_i}_{\succ} \cdot P_{\theta(x)}(r \mid x \succ y_i) + p^{x y_i}_{\approx} \cdot P_{\theta(x)}(r \mid x \approx y_i) + p^{x y_i}_{\prec} \cdot P_{\theta(x)}(r \mid x \prec y_i). (31)

Suppose that x \succ y_i. Then, \theta(x) - \theta(y_i) = r - i > \tau from Eq. (4). Also, the maximum possible rank is m. We hence assume that \theta(x) has the uniform distribution between i + \tau + 1 and m. In other words, P_{\theta(x)}(r \mid x \succ y_i) \sim U(i + \tau + 1, m), where U denotes a discrete uniform distribution. Similarly, we have P_{\theta(x)}(r \mid x \approx y_i) \sim U(i - \tau, i + \tau) and P_{\theta(x)}(r \mid x \prec y_i) \sim U(1, i - \tau - 1). Then, we approximate the a posteriori probability P_{\theta(x)}(r \mid y_1, \ldots, y_m) by averaging those single-reference inferences in Eq. (31):

P_{\theta(x)}(r \mid y_1, \ldots, y_m) = \frac{1}{m} \sum_{i=1}^{m} P_{\theta(x)}(r \mid y_i). (32)

Finally, we obtain the MAP estimate of the rank of x, which is given by

\hat{\theta}(x) = \arg\max_{r \in \Theta} P_{\theta(x)}(r \mid y_1, \ldots, y_m). (33)
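A sketch of this MAP rule follows, assuming one comparator probability triple per reference and an integer tolerance τ. The clamping of the uniform supports to [1, m] is our addition for boundary cases, and the 1/m factor of Eq. (32) is dropped since it does not affect the argmax in Eq. (33).

```python
import numpy as np

def map_rank(probs, ref_ranks, m, tau):
    """MAP rule of Eqs. (31)-(33). probs[i] = (p_succ, p_approx, p_prec)
    from comparing x with reference y_i of rank ref_ranks[i]."""
    def uniform(lo, hi):
        # Discrete uniform pmf over ranks {lo, ..., hi}, clamped to [1, m].
        pmf = np.zeros(m)
        lo, hi = max(lo, 1), min(hi, m)
        if lo <= hi:
            pmf[lo - 1:hi] = 1.0 / (hi - lo + 1)
        return pmf

    posterior = np.zeros(m)
    for (p_s, p_a, p_p), i in zip(probs, ref_ranks):
        posterior += (p_s * uniform(i + tau + 1, m)        # x > y_i
                      + p_a * uniform(i - tau, i + tau)    # x ~ y_i
                      + p_p * uniform(1, i - tau - 1))     # x < y_i
    return int(np.argmax(posterior)) + 1
```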
C FACIAL AGE ESTIMATION – MORE EXPERIMENTS AND DETAILS

C.1 IMPLEMENTATION DETAILS

We initialize the parameters of the ORID network for facial age estimation using the Glorot normal method (Glorot & Bengio, 2010). We use the Adam optimizer with a learning rate of 10^{-4} and decrease the rate by a factor of 0.5 every 50,000 steps. For data augmentation, we do random horizontal flips only. This is because other augmentation schemes, such as brightness or contrast modification, may deform identity information such as skin colors. Also, d_or and d_id are set to be 128 and 896, respectively. In Eq. (6), we set \alpha to 0.1 and decrease it to 0.05 after 200 epochs.

C.2 EVALUATION SETTINGS

For evaluation on the MORPH II dataset, we adopt four widely used testing protocols.

• Setting A – 5,492 images of the Caucasian race are selected and then randomly divided into two non-overlapping parts: 80% for training and 20% for test.
• Setting B – 21,000 images of Africans and Caucasians are selected to satisfy two constraints: the ratio between Africans and Caucasians should be 1:1, and that between females and males 1:3. They are split into three disjoint subsets S1, S2, and S3. The training and testing are repeated twice: 1) training on S1, testing on S2 + S3, and 2) training on S2, testing on S1 + S3. The average performance of the two experiments is reported.
• Setting C – This setting is the 5-fold cross-validation on the entire dataset. Images are randomly split into five folds, but the same person's images should belong to only one fold. The average performance of the five experiments is reported.
• Setting D – This is called the 80-20 protocol. Without any constraint, the entire dataset is randomly divided into the training and test sets with ratio 8:2. Thus, setting D is similar to one experiment in setting C, but the same person's images may belong to both training and test sets.

C.3 CLUSTERING

We provide more clustering results on MORPH II. Figure 16 shows the clustering results on setting B at k = 2. Since setting B consists of Africans and Caucasians, the images are clustered according to the races. Also, Table 13 summarizes the clustering results for settings A, B, and C at k = 2. The clustering result on setting D is omitted, since it is almost identical with that on setting C. In all settings, the proposed DRC-ORID divides facial images into two clusters with meaningful criteria, which are gender for setting A and race for settings B, C, and D.

C.4 AGE ESTIMATION

We implement a VGG-based pairwise comparator and follow the settings of Lim et al. (2020). Specifically, instead of Eq. (4), we use the ternary categorization based on the geometric ratio and set \tau = 0.1. We initialize its feature extractor using VGG16 pre-trained on the ILSVRC2012 dataset (Deng et al., 2009) and its fully connected layers using the Glorot normal method. We employ the Adam optimizer with a minibatch size of 32. We start with a learning rate of 10^{-4} and shrink it by a factor of 0.5 after every 80,000 steps. Table 14 lists age estimation results on the balanced dataset according to the number k of clusters. OL-supervised trains the comparator using supervised clusters separated according to gender or ethnic group annotations. Specifically, the supervised clusters at k = 2, 3, and 6 are divided according to genders, ethnic groups, and both genders and ethnic groups, respectively. On the other hand, OL-unsupervised and the proposed algorithm determine their clusters in unsupervised manners. We see that the proposed algorithm performs better than the conventional algorithms in all tests. By employing multiple clusters, the proposed algorithm improves MAE by 0.12 and CS by 0.73% on average. In contrast, OL-unsupervised improves MAE by 0.04 and CS by 0.07% only. This indicates that, by employing identity features, the proposed DRC-ORID algorithm groups instances into meaningful clusters, in which instance ranks can be compared more accurately.

C.5 AGE TRANSFORMATION

More age transformation results are shown in Figure 17. Note that, in Figure 6, given an image x, we select the reference y at a target age whose identity feature is the most similar to that of x, as in Eq. (13). Hence, the image x and the reference y have similar appearance. On the other hand, Figure 17 shows transformed images using randomly selected references. The first two cases transform the same image x with different references, but the transformed images are similar. Also, even when the gender and/or race of y are different from those of x, the identity information of x is preserved well in the transformed image. This confirms the reliability of ORID.

C.6 RECONSTRUCTION

Figure 18 shows reconstructed faces using the whole feature (h^x_{or} ⊕ h^x_{id}), the order feature only (h^x_{or} ⊕ 0), and the identity feature only (0 ⊕ h^x_{id}). Without the order feature, each decoded face is degraded but the person can be identified. In contrast, without the identity feature, the reconstruction is not related to the person, except that it seems to be an average face of people at the same age as the person. These results confirm that order and identity features are complementary.
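The reconstruction test of Figure 18 amounts to zeroing one part of the feature before decoding; a sketch assuming PyTorch modules F and G as in Appendix A, with F returning the assembled feature of Eqs. (2)-(3).

```python
import torch

@torch.no_grad()
def partial_reconstructions(F, G, x, d_or=128):
    """Decode with the whole feature, the order part only (identity zeroed),
    and the identity part only (order zeroed), as in Figure 18."""
    h = F(x)
    h_order_only = h.clone()
    h_order_only[:, d_or:] = 0       # h_or ⊕ 0
    h_identity_only = h.clone()
    h_identity_only[:, :d_or] = 0    # 0 ⊕ h_id
    return G(h), G(h_order_only), G(h_identity_only)
```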
C.7 REFERENCE IMAGES

Figure 19 shows examples of reference images, which are used for the rank estimation on MORPH II (setting D) at k = 2. Given a test image x, the reference image y_i of age class i is selected via Eq. (13) from the training set. In the default mode, a single reference image is selected for each age i. However, the top r most similar references can be selected and used for the estimation. We use a single reference because multiple references improve the estimation performance only negligibly. In Figure 19, the top three reference images are shown for each age from 16 to 53. In setting D, the two clusters are divided into Africans and the others in general. However, we see that test and reference images tend to have the same gender, as well as the same race. Furthermore, they have similar appearance, even when they have a big age difference.

D AESTHETIC SCORE REGRESSION – MORE EXPERIMENTS AND DETAILS

D.1 IMPLEMENTATION DETAILS

For aesthetic score regression, we implement a pairwise comparator based on EfficientNet-B4 (Tan & Le, 2019). The pairwise comparator has the same architecture as that for facial age estimation, except for the backbone network. To initialize the backbone, we adopt the parameters pre-trained on the ILSVRC2012 dataset. We initialize the other layers using the Glorot normal method. We update the network parameters using the Adam optimizer with a minibatch size of 16. We start with a learning rate of 10^{-4} and shrink it by a factor of 0.8 every 8,000 steps. Training images are augmented by random horizontal flipping. We set \tau = 0.15 for the ternary categorization in Eq. (4).

D.2 CLUSTERING

Notice that the AADB dataset contains images of diverse contents and styles. Hence, when clustering with a small k, it is hard to observe the characteristics shared by images within each cluster, whereas k = 2 or 3 is sufficient for facial age data. We empirically found that at least eight clusters are required (k = 8) to partition the AADB dataset by meaningful criteria. Figure 20 provides more examples of clustering results at k = 8.

D.3 REFERENCE IMAGES

Figure 21 shows examples of reference images, which are used for the aesthetic score regression. Given a test image x, the reference image y_i of aesthetic class i is selected by Eq. (13). For the aesthetic score regression, we use a single reference image for each aesthetic class, as done in the facial age estimation. Thus, 101 reference images are used in total.

E HCI CLASSIFICATION – MORE EXPERIMENTS AND DETAILS

E.1 IMPLEMENTATION DETAILS

For DRC-ORID on HCI classification, we set all hyper-parameters in the same way as in Appendix C.1. We set \tau = 1 for the ternary categorization of the ordering relationship in Eq. (4). Note that there are five decade classes, from 1 to 5.

E.2 CLUSTERING

Figure 22 shows some sample images in the HCI dataset, which are ordered according to the decade classes from the 1930s to the 1970s. Figure 23 shows more example HCI images grouped into four clusters (k = 4).

E.3 REFERENCE IMAGES

Figure 24 shows the five reference images for each of six test image examples. Note that, given a test image, reference images of similar contents, tones, or composition are selected from the five decade classes.
1. What is the reviewer's opinion on the novelty of the network structure proposed in the paper?
2. How does the reviewer feel about the method used to decompose the feature into two feature types?
3. Does the reviewer think that the authors should provide more visual differences between the two feature types?
4. Is the reviewer satisfied with the clarity of the article's expression?
5. Are there any basic theories that the reviewer thinks do not need to be explained in detail?
6. What is the reviewer's concern regarding the use of h_id and h_or for reconstruction?
7. Does the reviewer think it would be better to prove that using only the identity feature h_id is better than using the overall latent vector h_id + h_or?
Review
Review
The novelty of the network structure is marginal. This way of decomposing a feature is very common in computer vision, and merely using the latent vector of the encoder with only the comparator loss to split the feature into two feature types is limited. The authors should show the visual differences between these two feature types. The expression of the article is very clear, but some basic theories need not be explained in detail (such as in Section 3.4). One more concern: h_id and h_or are both used for reconstruction. It would be best to prove that using only the identity feature h_id is better than using the overall latent vector h_id + h_or.
ICLR
Title
Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition

Abstract
We propose the deep repulsive clustering (DRC) algorithm of ordered data for effective order learning. First, we develop the order-identity decomposition (ORID) network to divide the information of an object instance into an order-related feature and an identity feature. Then, we group object instances into clusters according to their identity features using a repulsive term. Moreover, we estimate the rank of a test instance by comparing it with references within the same cluster. Experimental results on facial age estimation, aesthetic score regression, and historical color image classification show that the proposed algorithm can cluster ordered data effectively and also yield excellent rank estimation performance.

1 INTRODUCTION

There are various types of 'ordered' data. For instance, in facial age estimation (Ricanek & Tesafaye, 2006), face photos are ranked according to the ages. Also, in a video-sharing platform, videos can be sorted according to the numbers of views or likes. In these ordered data, classes, representing ranks or preferences, form an ordered set (Schröder, 2003). Attempts have been made to estimate the classes of objects, including multi-class classification (Pan et al., 2018), ordinal regression (Frank & Hall, 2001), and metric regression (Fu & Huang, 2008). Recently, a new approach, called order learning (Lim et al., 2020), was proposed to solve this problem. Order learning is based on the idea that it is easier to predict the ordering relationship between objects than to estimate their absolute classes (or ranks); telling the older one between two people is easier than estimating their exact ages. Hence, in order learning, the pairwise ordering relationship is learned from training data. Then, the rank of a test object is estimated by comparing it with reference objects with known ranks. However, some objects cannot be easily compared. It is less easy to tell the older one between people of different genders than between those of the same gender. Lim et al. (2020) tried to deal with this issue by dividing an ordered dataset into disjoint chains. However, the chains were not clearly separated, and no meaningful properties were discovered from the chains. In this paper, we propose a reliable clustering algorithm, called deep repulsive clustering (DRC), of ordered data based on order-identity decomposition (ORID). Figure 1 shows a clustering example of ordered data. Note that some characteristics of objects, such as genders or races in age estimation, are not related to their ranks, and the ranks of objects sharing such characteristics can be compared more reliably.
To discover such characteristics without any supervision, the proposed ORID network decomposes the information of an object instance into an order-related feature and an identity feature unrelated to the rank. Then, the proposed DRC clusters object instances using their identity features; in each cluster, the instances share similar identity features. Furthermore, given a test instance, we decide its cluster based on the nearest neighbor (NN) rule, and compare it with reference instances within the cluster to estimate its rank. To this end, we develop a maximum a posteriori (MAP) estimation rule. Experimental results on ordered data for facial age estimation, aesthetic score regression (Kong et al., 2016), and historical color image classification (Palermo et al., 2012) demonstrate that the proposed algorithm separates ordered data clearly into meaningful clusters and provides excellent rank estimation performances for unseen test instances. The contributions of this paper can be summarized as follows.

• We first propose the notion of identity features of ordered data and develop the ORID network for the order-identity decomposition.
• We develop the DRC algorithm to cluster data on a unit sphere effectively using a repulsive term. We also prove the local optimality of the solution.
• We propose the MAP decision rule for rank estimation. The proposed algorithm provides the state-of-the-art performances for facial age estimation and aesthetic score regression.

2 RELATED WORK

2.1 ORDER LEARNING

The notion of order learning was first proposed by Lim et al. (2020). It aims to determine the order graph of classes and classify an object into one of the classes. In practice, it trains a pairwise comparator, which is a ternary classifier, to categorize the relationship between two objects into one of three cases: one object is bigger than, similar to, or smaller than the other. Then, it estimates the rank of a test object by comparing it with reference objects with known ranks. However, not every pair of objects is easily comparable. Although Lim et al. (2020) attempted to group objects into clusters, in which objects could be more accurately compared, their clustering results were unreliable. Pairwise comparison has been used to estimate object ranks, because relative evaluation is easier than absolute evaluation in general. Saaty (1977) proposed the scaling method to estimate absolute priorities from relative priorities, which has been applied to various decision processes, including aesthetic score regression (Lee & Kim, 2019). Also, some learning to rank (LTR) algorithms are based on pairwise comparison (Liu, 2009; Cohen et al., 1998; Burges et al., 2005; Tsai et al., 2007). Order learning attempts to combine (possibly inconsistent) pairwise ordering results to determine the rank of each object. Thus, it is closely related to the LTR algorithm of Cohen et al. (1998), which learns a pairwise preference function and obtains a total order of a set to maximize agreements among preference judgments of pairs of elements. Also, order learning is related to rank aggregation (Dwork et al., 2001), in which partially ordered sets are combined into a linearly ordered set to achieve the maximum consensus among those partial sets. Rank aggregation has been studied in various fields (Brüggemann et al., 2004). Since optimal aggregation is NP-hard, Dwork et al. (2001) proposed an approximate algorithm, called Markov chain ordering.
There are many other approximate schemes, such as the local Kemenization, Borda count, and scaled footrule aggregation.

2.2 CLUSTERING

Data clustering is a fundamental problem to partition data into disjoint groups, such that elements in the same group are similar to one another but elements from different groups are dissimilar. Although various clustering algorithms have been proposed (Hartigan & Wong, 1979; Ester et al., 1996; Kohonen, 1990; Dhillon & Modha, 2001; Reynolds, 2009), conventional algorithms often yield poor performance on high-dimensional data, due to the curse of dimensionality and the ineffectiveness of similarity metrics. Dimensionality reduction and feature transform methods have been studied to map raw data into a new feature space, in which they are more easily separated. Linear transforms, such as PCA (Wold et al., 1987), and non-linear transformations, including kernel methods (Hofmann et al., 2008) and spectral clustering (Ng et al., 2002), have been proposed. Recently, deep neural networks have been adopted effectively as feature embedding functions (LeCun et al., 2015), and these deep-learning-based feature embedding functions have been combined with classical clustering algorithms. For instance, Caron et al. (2018) proposed a deep clustering algorithm based on k-means. It clusters features from a neural network and then trains the network using the cluster assignments as pseudo-labels. This is done iteratively. Also, Yang et al. (2016) jointly learned feature representations and clustered images, based on agglomerative clustering. Chang et al. (2017) recast the image clustering task into a binary classification problem to predict whether a pair of images belong to the same cluster or different clusters. Similarly to these algorithms, we use a neural network to determine a feature space in which clustering is done more effectively. However, we consider the clustering of ordered data, and each cluster should consist of elements whose ranks can be compared more accurately.

There are conventional approaches that use clustering ideas to aid in classification or rank estimation. For example, Yan et al. (2015) developed a hierarchical classifier, which clusters fine categories into coarse category groups and classifies an object into a fine category within its coarse category group. For extreme multiclass classification, Daumé III et al. (2017) proposed to predict a class label among candidate classes only, which are dynamically selected by the recall tree. It is however noted that the leaves of the recall tree do not partition the set of classes. Also, for age estimation, Li et al. (2019) proposed a tree-like structure, called bridge-tree, to divide data into overlapping age groups and train a local regressor for each group. The set of local regressors can be more accurate than a global regressor for dealing with the entire age range. Whereas these conventional approaches group data in the label dimension to perform their tasks more effectively, the proposed algorithm clusters data in the dimension orthogonal to the label dimension. In other words, we cluster data using identity features, instead of using order features.

3 PROPOSED ALGORITHM

3.1 PROBLEM DEFINITION

An order is a binary relation, often denoted by \leq, on a set \Theta = \{\theta_1, \theta_2, \ldots, \theta_m\} (Schröder, 2003). It should satisfy the three properties of reflexivity (\theta_i \leq \theta_i for all i), antisymmetry (\theta_i \leq \theta_j and \theta_j \leq \theta_i imply \theta_i = \theta_j), and transitivity (\theta_i \leq \theta_j and \theta_j \leq \theta_k imply \theta_i \leq \theta_k).
Then, \Theta is called a partially ordered set. Furthermore, if every pair of elements is comparable (\theta_i \leq \theta_j or \theta_j \leq \theta_i for all i, j), \Theta is called a chain or linearly ordered set. An order describes ranks or priorities of classes. For example, in age estimation, \theta_i may represent the age class of i-year-olds. Then, \theta_{14} \leq \theta_{49} represents that 14-year-olds are younger than 49-year-olds.

As mentioned previously, it is less easy to tell the older one between people of different genders. An algorithm, hence, may compare a subject with reference subjects of the same gender only. In such a case, each age class \theta_i represents two subclasses \theta_i^{female} and \theta_i^{male} of different types, and the algorithm compares only subjects of the same type. Lim et al. (2020) assumed that subclasses of different types are incomparable and thus the set of subclasses is the union of k disjoint chains, where k is the number of types. However, in many ranking applications, objects of different types can be compared (although less easily than those of the same type are). Thus, instead of assuming incomparability across chains, we assume that there is a total order on \Theta = \{\theta_1, \theta_2, \ldots, \theta_m\}, in which each class \theta_i consists of k types of subclasses, and that object instances of the same type are more easily compared than those of different types.

Suppose that n training instances in X = \{x_1, x_2, \ldots, x_n\} are given. Also, suppose that there are m ranks and the ground-truth rank of each instance is known. In this sense, X contains ordered data. The problem is twofold. The first goal is to decompose the whole set of instances X into k disjoint clusters \{C_j\}_{j=1}^{k} in which instances are more easily compared;

X = \bigcup_{j=1}^{k} C_j (1)

where C_i \cap C_j = \emptyset for i \neq j. In other words, we aim to partition the ordered data in X into k clusters, by grouping them according to their characteristics unrelated to their ranks. These characteristics, which tend to remain the same even when an object experiences rank changes, are referred to as 'identity' features in this work. For example, in age estimation, genders or races can be identity features. However, we perform the clustering without any supervision for identity features. Notice that instances within a cluster would be compared more easily than those across clusters, since they have similar identity features. The number k of clusters is assumed to be known a priori. Impacts of k on the clustering performance are discussed in Appendix B.7. The second goal is to assign an unseen test instance to one of the clusters and determine its rank by comparing it with reference instances within the cluster. To achieve these goals, we propose the ORID network and the DRC algorithm.

3.2 ORDER-IDENTITY DECOMPOSITION

In general, object instances can be compared more easily when they have more similar identity features irrelevant to order. Therefore, we decompose the information of each object instance into an order feature and an identity feature. To this end, we propose the ORID network in Figure 2, composed of three parts: autoencoder, discriminator, and comparator.

1) Autoencoder: Similarly to the deep clustering algorithms in (Yang et al., 2017; Dizaji et al., 2017; Chen et al., 2017; Ji et al., 2017), we use the autoencoder G \circ F(\cdot), based on a neural network, to extract feature vectors. The encoder h^x = F(x) maps an input vector x to a feature vector h^x, while the decoder \hat{x} = G(h^x) reconstructs \hat{x} from h^x.
By minimizing the reconstruction loss \| x - \hat{x} \|_1, F is trained to represent x compactly with as little loss of information as possible. We decompose the overall feature h^x \in R^{d_{or}+d_{id}} into the order feature h^x_{or} and the identity feature h^x_{id}, given by

h^x_{or} = [h^x_1, h^x_2, \ldots, h^x_{d_{or}}]^t (2)
h^x_{id} = [h^x_{d_{or}+1}, h^x_{d_{or}+2}, \ldots, h^x_{d_{or}+d_{id}}]^t / \| [h^x_{d_{or}+1}, h^x_{d_{or}+2}, \ldots, h^x_{d_{or}+d_{id}}] \| (3)

where d_{or} and d_{id} are the dimensions of h^x_{or} and h^x_{id}. However, without additional control, the output h^x of the neural network F would be highly entangled (Higgins et al., 2018). To put together order-related information into h^x_{or}, we employ the comparator.

2) Comparator: Using the order features h^x_{or} and h^y_{or} of a pair of instances x and y, we train the comparator, which classifies their ordering relationship into one of the three categories 'bigger,' 'similar,' and 'smaller':

x \succ y if \theta(x) - \theta(y) > \tau,
x \approx y if |\theta(x) - \theta(y)| \leq \tau,
x \prec y if \theta(x) - \theta(y) < -\tau, (4)

where \theta(\cdot) denotes the class of an instance. As in (Lim et al., 2020), '\succ, \approx, \prec' represent the ordering relationship between instances, while '>, =, <' do the mathematical order between classes. The comparator outputs the softmax probability p^{xy} = (p^{xy}_{\succ}, p^{xy}_{\approx}, p^{xy}_{\prec}). It is trained to minimize the cross-entropy between p^{xy} and the ground-truth one-hot vector q^{xy} = (q^{xy}_{\succ}, q^{xy}_{\approx}, q^{xy}_{\prec}). Because it is trained jointly with the autoencoder, the information deciding the ordering relationship tends to be encoded into the order features h^x_{or} and h^y_{or}. On the other hand, the remaining information necessary for the reconstruction of \hat{x} and \hat{y} is encoded into the identity features h^x_{id} and h^y_{id}.

3) Discriminator: We adopt the discriminator D that tells real images from synthesized images generated by the decoder G. Using the GAN loss (Goodfellow et al., 2014), the discriminator helps the decoder to reconstruct more realistic outputs \hat{x} and \hat{y}. Appendix A provides detailed network structures of these components in ORID.

3.3 DEEP REPULSIVE CLUSTERING

After obtaining the identity features h^{x_1}_{id}, h^{x_2}_{id}, \ldots, h^{x_n}_{id} of all instances x_i \in X, we partition them into k clusters. Each cluster contains instances that are more easily comparable to one another. The identity features are normalized in Eq. (3) and lie on the unit sphere in R^{d_{id}}. In other words, we cluster data points on the unit sphere. Thus, the cosine similarity is a natural affinity metric. Let C_j, 1 \leq j \leq k, denote the k clusters. Also, let c_j, constrained to be on the unit sphere, denote the 'centroid' or the representative vector for the instances in cluster C_j. We define the quality of cluster C_j as

\sum_{x \in C_j} \Big( (h^x_{id})^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h^x_{id})^t c_l \Big) (5)

where the first term is the similarity of an instance in C_j to the centroid c_j, the second term with the negative sign quantifies the average dissimilarity of the instance from the other centroids, and \alpha is a nonnegative weight. For a high-quality cluster, instances should be concentrated around the centroid and be far from the other clusters. The second term is referred to as the repulsive term, as its objective is similar to the repulsive rule in (Lee et al., 2015). Although conventional methods also try to increase inter-cluster dissimilarity (Ward Jr, 1963; Lee et al., 2015), to the best of our knowledge, DRC is the first attempt to use an explicit repulsive term in deep clustering, which jointly optimizes clustering and feature embedding.
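As a small illustration of Eq. (4) and Eq. (5), the following sketch computes the ground-truth ordering label for a pair and the quality of a single cluster, assuming unit-normalized identity features stacked as rows; all names are ours.

```python
import numpy as np

def ordering_target(theta_x, theta_y, tau):
    """Eq. (4): one-hot q^{xy} over ('bigger', 'similar', 'smaller')."""
    d = theta_x - theta_y
    if d > tau:
        return np.array([1.0, 0.0, 0.0])
    if d < -tau:
        return np.array([0.0, 0.0, 1.0])
    return np.array([0.0, 1.0, 0.0])

def cluster_quality(H_j, centroids, j, alpha):
    """Eq. (5): similarity of cluster j's features (rows of H_j) to its own
    centroid, minus the repulsion-weighted similarity to the others."""
    k = centroids.shape[0]
    own = H_j @ centroids[j]
    others = sum(H_j @ centroids[l] for l in range(k) if l != j)
    return float(np.sum(own - alpha / (k - 1) * others))
```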
Next, we measure the overall quality of the clustering by J({Cj}kj=1, {cj}kj=1) = ∑k j=1 ∑ x∈Cj ( (hxid) tcj − α 1k−1 ∑ l 6=j(h x id) tcl ) . (6) We aim to find the optimum clusters to maximize this objective function J , yet finding the global optimum is NP-complete (Kleinberg et al., 1998; Garey et al., 1982). Hence, we propose an iterative algorithm, called DRC, to find a local optimum, as in the k-means algorithm (Gersho & Gray, 1991). 1. Centroid rule: After fixing the clusters {Cj}kj=1, we update the centroids {cj}kj=1 to maximize J in Eq. (6). Because the centroids should lie on the unit sphere, we solve the constrained optimization problem: maximize J({cj}kj=1) subject to ctjcj = 1 for all j = 1, . . . , k. (7) Using Lagrangian multipliers (Bertsekas, 1996), the optimal centroids are obtained as cj = (∑ x∈Cj h x id − α 1k−1 ∑ x∈X\Cj h x id ) / ∥∥∑ x∈Cj h x id − α 1k−1 ∑ x∈X\Cj h x id ∥∥. (8) 2. NN rule: On the other hand, after fixing the centroids, we update the membership of each instance to maximize J in Eq. (6). The optimal cluster Cj is given by Cj = { x | (hxid)tcj ≥ (hxid)tcl for all 1 ≤ l ≤ k } . (9) In other words, an instance should be assigned to Cj if its nearest centroid is cj . We apply the centroid rule and the NN rule iteratively until convergence. Because both rules monotonically increase the same objective function J and the inequality J ≤ n+ αk−1n always holds, J is guaranteed to converge to a local maximum. Readers interested in the convergence are referred to (Sabin & Gray, 1986; Pollard, 1982). Without the repulsive term in Eq. (6) (i.e. at α = 0), centroid cj in Eq. (8) is updated by cj = ∑ x∈Cj h x id/‖ ∑ x∈Cj h x id‖, (10) as done in the spherical k-means (Dhillon & Modha, 2001). In contrast, with a positive α, the objective function J is reduced when the centroids are far from one another. Ideally, in equilibrium, the centroid of a cluster should be the opposite of the centroid of all the other clusters;( ∑ x∈Cj hxid ‖ ∑ x∈Cj hxid‖ )t( ∑ x∈X\Cj hxid ‖ ∑ x∈X\Cj hxid‖ ) = −1 for all j = 1, 2, . . . , k. (11) Note that the ORID network and thus the encoded feature space are trained jointly with the repulsive clustering. As the training goes on, the centroids repel one another, and the clusters are separated more clearly due to the repulsive term. We jointly optimize the clusters and the ORID network parameters, as described in Algorithm 1. First, we train the ORID network for warm-up epochs, by employing every pair of instances x and y as input. Then, using the identity features, we partition the input data into k clusters using k-means. Second, we repeat the fine-tuning of the ORID network and the repulsive clustering alternately. In the fine-tuning, a pair of x and y are constrained to be from the same cluster, and the following loss function is employed. ` = λrec`rec + λclu`clu + λcom`com + λgan`gan. (12) Appendix B describes this loss function in detail, proves the optimality of the centroid and NN rules in Eq. (8) and (9), and analyzes the impacts of the repulsive term in Eq. (6). Algorithm 1 DRC-ORID Input: Ordered data X = {x1, x2, . . . , xn}, k = the number of clusters 1: Train ORID network for warm-up epochs to minimize loss λrec`rec + λcom`com + λgan`gan 2: Partition X into C1, C2, . . . , Ck using k-means 3: repeat 4: Fine-tune ORID network to minimize loss λrec`rec + λclu`clu + λcom`com + λgan`gan 5: repeat 6: for all j = 1, 2, . . . , k do 7: Update centroid cj via Eq. (8) . Centroid rule 8: end for 9: for all j = 1, 2, . . . 
, k do 10: Update cluster Cj via Eq. (9) . NN rule 11: end for 12: until convergence or predefined number of iterations 13: until predefined number of epochs Output: Clusters {Cj}kj=1, centroids {cj}kj=1, ORID network 3.4 RANK ESTIMATION Using the output of the DRC-ORID algorithm, we can estimate the rank of an unseen test instance x. First, we extract its identity feature hxid using the ORID encoder. By comparing h x id with the centroids {cj}kj=1 based on the NN rule, we find the most similar centroid cl. Then, x is declared to belong to cluster Cl. Without loss of generality, let us assume that the classes (or ranks) are the first m natural numbers, Θ = {1, 2, . . .m}. Then, for each i ∈ Θ, we select a reference instance yi with rank i from cluster Cl, so that it is the most similar to x. Specifically, yi = arg maxy∈Cl : θ(y)=i(h x id) thyid. (13) We estimate the rank θ(x) of the test instance x, by comparing it with the chosen references yi, 1 ≤ i ≤ m. For the rank estimation, Lim et al. (2020) developed the maximum consistency rule, which however does not exploit the probability information, generated by the comparator. In this paper, we use the maximum a posteriori (MAP) estimation rule, which is described in detail in Appendix B.10. 4 EXPERIMENTAL RESULTS This section provides various experimental results. Due to space limitation, implementation details and more results are available in Appendices C, D, and E. 4.1 FACIAL AGE ESTIMATION Datasets: We use two datasets. First, MORPH II (Ricanek & Tesafaye, 2006) is a collection of about 55,000 facial images in the age range [16, 77]. It provides gender (female, male) and race (African American, Asian, Caucasian, Hispanic) labels as well. We employ the four evaluation settings A, B, C, and D in Appendix C.2. Second, the balanced dataset (Lim et al., 2020) is sampled from the three datasets of MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017) to overcome bias to specific ethnic groups or genders. It contains about 6,000 images for each combination of gender in {female, male} and ethnic group in {African, Asian, European}. Clustering: Figure 3 shows clustering results on MORPH II (setting A), when the number of clusters is k = 2. Setting A contains faces of Caucasian descent only. Thus, the proposed DRC-ORID divides those faces into two clusters according to genders in general, although the annotated gender information is not used. Most males are assigned to cluster 1, while a majority of females to cluster 2. On the other hand, setting B consists of Africans and Caucasians. Thus, those images are clustered according to the races, as shown in Appendix C.3. Figure 4 is the results on the balanced dataset at k = 3, which is composed of MORPH II, AFAD, and UTK images. Due to different characteristics of these sources, images are clearly divided according to their sources. At k = 2, MORPH II images are separated from the others. This is because, unlike the MORPH II images, the boundaries of most AFAD and UTK images are zeroed for alignment using SeetaFaceEngine (Zhang et al., 2014). Lim et al. (2020) also tried the clustering of the balanced dataset. Figure 5 visualizes the feature space using t-SNE (Maaten & Hinton, 2008). Although their method aligns the features according to ages, their clusters are not separated, overlapping one another. In contrast, the proposed DRC-ORID separates the three clusters clearly, as well as sorts features according to the ages within each cluster. 
More t-SNE plots for analyzing the impacts of the repulsive term are available in Appendix B.5. Age transformation: We assess the decomposition performance of ORID. Although ORID is not designed for age transformation (Or-El et al., 2020), it decomposes an image x into the order and identity features, hxor and h x id. Thus, the age can be transformed in two steps. First, we replace hxor of x with h y or of a reference image y at a target age. Second, we decode the resultant feature (concatenation of hyor and h x id) to obtain the transformed image. Figure 6 shows some results on MORPH II images. Order-related properties, such as skin textures and hair colors, are modified plausibly, but identity information is preserved. This indicates the reliability of ORID. Age estimation: Table 1 compares the proposed algorithm with conventional age estimators on the four evaluation settings of MORPH II. These conventional algorithms take 224× 224 or bigger images as input, while ORID takes 64× 64 images. Moreover, most of them adopt VGG16 (Simonyan & Zisserman, 2015) as their backbones, which is more complicated than the ORID encoder. Thus, for comparison, after fixing clusters using DRC-ORID, we train another pairwise comparator based on VGG16, whose architecture is the same as Lim et al. (2020). We measure the age estimation performance by the mean absolute error (MAE) and the cumulative score (CS). MAE is the average absolute error between estimated and ground-truth ages, and CS computes the percentage of test samples whose absolute errors are less than or equal to a tolerance level of 5. Mainly due to the smaller input size of 64× 64, the vanilla version yields poorer performances than the conventional algorithms. The VGG version, however, outperforms them significantly. First, in the proposed-VGG (k = 1), all instances can be compared, as in the OL algorithm. In other words, the clustering is not performed. Thus, the pairwise comparators of OL and the proposedVGG (k = 1) are trained in the same way, but their rank estimation rules are different. Whereas OL uses the maximum consistency rule, the proposed algorithm performs the MAP estimation. The score gaps between them confirm that the MAP estimation is more accurate. Moreover, by clustering facial images into two groups, the proposed-VGG (k = 2) improves the results meaningfully. The proposed-VGG (k = 2) provides the state-of-the-art results, except for the MAE test in setting D. 4.2 AESTHETIC SCORE REGRESSION The aesthetics and attribute database (AADB) is composed of 10,000 photographs of various themes such as scenery and close-up (Kong et al., 2016). Each image is annotated with an aesthetic score in [0, 1]. We quantize the continuous score with a step size of 0.01 to make 101 score classes. Compared to facial images, AADB contains more diverse data. It is hence more challenging to cluster AADB images. Figure 8 shows example images in each cluster at k = 8. Images in the same cluster have similar colors, similar contents, or similar composition. This means that ORID extracts identity features effectively, corresponding to contents or styles that are not directly related to aesthetic scores. Using those identity features, DRC discovers meaningful clusters. Figure 9 visualizes the feature space of AADB. Aesthetic scores are sorted along one direction, while clusters are separated in the other orthogonal direction. In other words, the scores look like latitudes, while the clusters appear to be separated by meridians (or lines of longitude). 
Table 2 compares regression results. Even without the clustering process, the proposed algorithm outperforms the Reg-Net and ASM algorithms. Moreover, by using the eight unsupervised clusters in Figure 8, the proposed algorithm further reduces the MAE to yield the state-of-the-art result.

4.3 HISTORICAL COLOR IMAGE CLASSIFICATION
HCI (Palermo et al., 2012) is a dataset for determining the decade when a photograph was taken. It contains images from five decades, from the 1930s to the 1970s. Each decade category has 265 images: 210, 5, and 50 are used for training, validation, and testing, respectively. Figure 7 shows the clustering results at k = 4. We observe similarity of contents in each cluster. Table 3 compares the quinary classification results. Frank & Hall (2001), Cardoso & da Costa (2007), Palermo et al. (2012), and RED-SVM use traditional features, while the others use deep features. The performance gaps between these two approaches are not huge, since 1,050 images are insufficient for training deep networks.

5 IMPACTS OF APPLICATIONS
The proposed algorithm can be applied to various ranking problems. In this paper, we demonstrated three vision applications: facial age estimation, aesthetic score regression, and historical image classification. In particular, the proposed age estimator has various potential uses. For example, it can block or recommend media contents to people according to their ages. However, it can have harmful impacts as well as positive ones. Moreover, although age information lacks the distinctiveness to identify an individual, identity features extracted by ORID can be misused in facial recognition systems, causing serious problems such as unwanted invasion of privacy (Raji et al., 2020). Hence, ethical considerations should be made before the use of the proposed algorithm. Recently, ethical concerns about the fairness and safety of automated systems have been raised (Castelvecchi, 2020; Roussi, 2020; Noorden, 2020). In particular, due to the intrinsic imbalance of facial datasets (Ricanek & Tesafaye, 2006; Zhang et al., 2017; Niu et al., 2016), most deep learning methods for facial analysis (Wen et al., 2020; Or-El et al., 2020) have unwanted gender or racial biases. The proposed algorithm is not free from this bias either. Hence, before any practical usage, the bias should be resolved. Also, even though the proposed algorithm groups data in an unsupervised manner, data are clustered according to genders or races on MORPH II. These results should never be misinterpreted in such a way as to encourage any racial or gender discrimination. We recommend using the proposed age estimator for research only.

6 CONCLUSIONS
The DRC algorithm for ordered data based on ORID was proposed in this work. First, the ORID network decomposes the information of an object into the order and identity features. Then, DRC groups objects into clusters using their identity features in a repulsive manner. Also, we can estimate the rank of an unseen test instance by comparing it with references within the corresponding cluster based on the MAP decision. Extensive experimental results on various ordered data demonstrated that the proposed algorithm provides excellent clustering and rank estimation performances.
ACKNOWLEDGMENTS
This work was supported in part by the MSIT, Korea, under the ITRC support program (IITP-20202016-0-00464) supervised by the IITP, and in part by the National Research Foundation of Korea (NRF) through the Korea Government (MSIP) under Grant NRF-2018R1A2B3003896.

A NETWORK STRUCTURE OF ORID
As described in Section 3.2, the ORID network consists of the encoder F, the decoder G, the comparator C, and the discriminator D. The network structures of these components are detailed in Tables 4–7, where 'k_h×k_w-s-c Conv' and 'k_h×k_w-s-c Deconv' denote the 2D convolution and 2D deconvolution with kernel size k_h×k_w, stride s, and c output channels, respectively. 'BN' means batch normalization (Ioffe & Szegedy, 2015), and 'c Dense' is a dense layer with c output channels. Note that the encoder takes a 64 × 64 RGB image as input, and the identity feature of the encoder output is l2-normalized in Eq. (3). Also, we set d_or = 128 and d_id = 896.

B ALGORITHMS – DETAILS
B.1 OPTIMALITY OF CENTROID RULE
To solve the constrained optimization problem in Eq. (7), we construct the Lagrangian function

    L = \sum_{j=1}^{k} \sum_{x \in C_j} \Big( (h_{id}^x)^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h_{id}^x)^t c_l \Big) - \sum_{j=1}^{k} \lambda_j (c_j^t c_j - 1)    (14)

where λ_j, 1 ≤ j ≤ k, are Lagrangian multipliers (Bertsekas, 1996). By differentiating L with respect to c_j and setting it to zero, we have

    \frac{\partial L}{\partial c_j} = \sum_{x \in C_j} h_{id}^x - \frac{\alpha}{k-1} \sum_{l \neq j} \sum_{x \in C_l} h_{id}^x - 2\lambda_j c_j    (15)
                                    = \sum_{x \in C_j} h_{id}^x - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h_{id}^x - 2\lambda_j c_j    (16)
                                    = 0    (17)

for j = 1, . . . , k. Therefore, the optimal centroid c_j is given by

    c_j = \frac{\sum_{x \in C_j} h_{id}^x - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h_{id}^x}{2\lambda_j}.    (18)

Because of the normalization constraint c_j^t c_j = 1, we have

    2\lambda_j = \Big\| \sum_{x \in C_j} h_{id}^x - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h_{id}^x \Big\|,    (19)

which leads to the centroid rule in Eq. (8).

B.2 OPTIMALITY OF NN RULE
Let us consider two cases. First, instance x is declared to belong to cluster C_j. It then contributes to the objective function J in Eq. (6) by

    \beta_j = (h_{id}^x)^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h_{id}^x)^t c_l.    (20)

Second, x is declared to belong to another cluster C_{j'}. Then, its contribution is

    \beta_{j'} = (h_{id}^x)^t c_{j'} - \frac{\alpha}{k-1} \sum_{l \neq j'} (h_{id}^x)^t c_l.    (21)

By comparing the two contributions, we have

    \beta_j - \beta_{j'} = (h_{id}^x)^t (c_j - c_{j'}) - \frac{\alpha}{k-1} (h_{id}^x)^t (c_{j'} - c_j)    (22)
                         = \Big( 1 + \frac{\alpha}{k-1} \Big) (h_{id}^x)^t (c_j - c_{j'}).    (23)

This means that β_j ≥ β_{j'} when (h_{id}^x)^t c_j ≥ (h_{id}^x)^t c_{j'}. Therefore, x should be assigned to the optimal cluster C_{j*} such that the cosine similarity (h_{id}^x)^t c_{j*} is maximized. Equivalently, we have the NN rule in Eq. (9).

B.3 REGULARIZATION CONSTRAINT IN DRC
To prevent empty clusters and balance the partitioning, we enforce a regularization constraint so that every cluster contains at least a predefined number of instances. More specifically, when applying the NN rule, we enforce that at least a fraction 1/(2k) of the instances are assigned to each cluster C_j. The instances are selected in decreasing order of the cosine similarity (h_{id}^x)^t c_j.

B.4 LOSS FUNCTIONS
In the DRC-ORID algorithm, we use the loss function

    \ell = \lambda_{rec} \ell_{rec} + \lambda_{clu} \ell_{clu} + \lambda_{com} \ell_{com} + \lambda_{gan} \ell_{gan}    (24)

where the reconstruction, clustering, comparator, and GAN losses are given by

    \ell_{rec} = \frac{1}{2N} \sum_{i=1}^{N} \big( \|x_i - G(F(x_i))\|_1 + \|y_i - G(F(y_i))\|_1 \big),    (25)
    \ell_{clu} = -\frac{1}{2N} \sum_{i=1}^{N} \big( (h_{id}^{x_i})^t c_j + (h_{id}^{y_i})^t c_j \big),    (26)
    \ell_{com} = -\frac{1}{N} \sum_{i=1}^{N} \big( q_{\succ}^{x_i y_i} \log p_{\succ}^{x_i y_i} + q_{\approx}^{x_i y_i} \log p_{\approx}^{x_i y_i} + q_{\prec}^{x_i y_i} \log p_{\prec}^{x_i y_i} \big),    (27)
    \ell_{gan} = -\frac{1}{2N} \sum_{i=1}^{N} \big( \log(1 - D(G(F(x_i)))) + \log(1 - D(G(F(y_i)))) \big),    (28)

respectively. Here, N is the number of image pairs in a minibatch. The weighting parameters are set to λ_rec = 5, λ_clu = 0.1, λ_com = 1, and λ_gan = 1.
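For concreteness, the combined loss of Eqs. (24)–(28) can be sketched in PyTorch as below. This is a minimal illustration under our own assumptions about shapes: x, y are image batches drawn from a single cluster with centroid c_j; x_hat, y_hat their reconstructions; h_id_x, h_id_y the l2-normalized identity features; q, p the one-hot targets and comparator probabilities of shape (N, 3); and d_x_hat, d_y_hat the discriminator outputs on the reconstructions. Per-image l1 norms are replaced by per-pixel means, which differs from Eq. (25) only by a constant factor.

```python
import torch

def drc_orid_loss(x, y, x_hat, y_hat, h_id_x, h_id_y, c_j, q, p,
                  d_x_hat, d_y_hat,
                  lam_rec=5.0, lam_clu=0.1, lam_com=1.0, lam_gan=1.0, eps=1e-8):
    # Eq. (25): l1 reconstruction loss (per-pixel mean instead of per-image sum).
    l_rec = 0.5 * ((x - x_hat).abs().mean() + (y - y_hat).abs().mean())
    # Eq. (26): negative cosine similarity to the centroid of the assigned cluster.
    l_clu = -0.5 * ((h_id_x * c_j).sum(dim=1).mean()
                    + (h_id_y * c_j).sum(dim=1).mean())
    # Eq. (27): cross-entropy between comparator output p and one-hot target q.
    l_com = -(q * (p + eps).log()).sum(dim=1).mean()
    # Eq. (28): GAN term on the discriminator outputs for the reconstructions.
    l_gan = -0.5 * ((1 - d_x_hat + eps).log().mean()
                    + (1 - d_y_hat + eps).log().mean())
    # Eq. (24): weighted sum, with the paper's weights as defaults.
    return lam_rec * l_rec + lam_clu * l_clu + lam_com * l_com + lam_gan * l_gan
```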
B.5 IMPACTS OF REPULSIVE TERM ON CLUSTERING
To analyze the impacts of the repulsive term in Eq. (6), we first compare clustering qualities with α = 0 and α = 0.1. At α = 0, the repulsive term is excluded from the objective function J, and the centroid rule is reduced to Eq. (10) in the spherical k-means (Dhillon & Modha, 2001). However, unlike the spherical k-means, even at α = 0, the clustering is jointly performed with the training of the ORID network. We adopt two metrics to measure the quality of clustering: normalized mutual information (NMI) (Strehl & Ghosh, 2002) and centroid affinity (CA). NMI measures the information shared between two different partitionings of the same data, A = ∪_{i=1}^U A_i and B = ∪_{j=1}^V B_j,

    NMI(A, B) = \frac{\sum_{i=1}^{U} \sum_{j=1}^{V} |A_i \cap B_j| \log \frac{N |A_i \cap B_j|}{|A_i| |B_j|}}{\sqrt{\big( \sum_{i=1}^{U} |A_i| \log \frac{|A_i|}{N} \big) \big( \sum_{j=1}^{V} |B_j| \log \frac{|B_j|}{N} \big)}}    (29)

where U and V are the numbers of clusters in A and B, respectively, N is the total number of samples, and |·| denotes the cardinality. Also, we define the centroid affinity (CA) as

    CA(\{c_j\}_{j=1}^{k}) = \frac{2}{k(k-1)} \sum_{j=1}^{k} \sum_{l > j} c_j^t c_l.    (30)

For high-quality clustering, the centroids should be far from one another and thus should yield a low CA score.

Figure 10 plots how NMI and CA vary as the iterations proceed. In this test, MORPH II (setting B) is used, and the number of clusters k is set to 2. Since setting B consists of Africans and Caucasians, we use the race groups as the ground-truth partitioning for the NMI measurement. At early iterations, the NMI score of DRC-ORID with α = 0.1 is slightly better than that with α = 0. However, as the iterative training and clustering go on, the score gap gets larger. After convergence, DRC-ORID with α = 0.1 outperforms the option α = 0 by a significant NMI gap of 0.13. Also, the CA of the option α = 0.1 gradually decreases, whereas that of α = 0 does not. At α = 0.1, the repulsive term makes the centroids repel each other. As a result, CA, which is the cosine similarity between the two centroids, becomes almost −1, which means the equilibrium state in Eq. (11) is almost achieved. We also visualize the feature spaces of the two options, α = 0 and α = 0.1, using t-SNE in Figure 11. It is observed that the two clusters are more clearly separated by DRC-ORID with α = 0.1. Figure 12 shows the t-SNE results after convergence with age labels. Figure 13 compares the NMI curves at different α's. The choice of α affects the quality of clustering, as α controls the intensity of the repulsive force between centroids. When α is too large, the centroids move too quickly, making the training of the ORID network difficult. On the other hand, when α is too small, the repulsive term does not affect the clustering meaningfully. Hence, α should be selected to strike a balance between training reliability and effective repulsion. It was found experimentally that clustering is performed well around α = 0.1. Finally, it is worth pointing out that, if the identity features were not normalized as in Eq. (3) and the repulsive clustering were performed in an unbounded space, the distances between centroids would get larger and larger as the iterations go on. Thus, convergence would not be achieved. This is why we perform DRC on the bounded unit sphere.
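The centroid affinity of Eq. (30) amounts to the mean pairwise cosine similarity of the unit-norm centroids; a minimal NumPy sketch is given below. The NMI of Eq. (29) corresponds to standard NMI with geometric-mean normalization, as provided by common libraries (e.g., scikit-learn's normalized_mutual_info_score).

```python
import numpy as np

def centroid_affinity(centroids):
    """Centroid affinity of Eq. (30): mean pairwise cosine similarity between
    unit-norm centroids (rows of `centroids`); lower is better."""
    c = np.asarray(centroids)
    k = c.shape[0]
    gram = c @ c.T
    # Average over the k(k-1)/2 distinct pairs (strict upper triangle).
    return gram[np.triu_indices(k, k=1)].mean()

# Two antipodal centroids on the unit circle reach the equilibrium of Eq. (11).
print(centroid_affinity(np.array([[1.0, 0.0], [-1.0, 0.0]])))  # -1.0
```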
B.6 IMPACTS OF REPULSIVE TERM ON RANK ESTIMATION
Table 8 compares the rank estimation results when the clustering is performed with and without the repulsive term. In this experiment, we use MORPH II (setting A) and set k = 2. Without the repulsive term, lower-quality clusters make the training of the comparator more difficult. As a result, the age estimation performance degrades significantly in terms of both MAE and CS. In other words, the quality of clustering affects the rank estimation performance greatly, and the proposed DRC algorithm provides high-quality clusters suitable for rank estimation.

B.7 IMPACTS OF THE NUMBER k OF CLUSTERS ON RANK ESTIMATION
Tables 9 and 10 compare the rank estimation results according to the number k of clusters on the MORPH II (setting A) and AADB datasets, respectively. On MORPH II, the age estimation performance decreases as k increases. Since the training set in setting A consists of only 4,394 images, each cluster at a large k contains too few instances. Thus, the comparator is trained inefficiently with fewer training pairs, degrading the performance. In contrast, AADB contains a large number of diverse images. Due to this diversity, a relatively large k should be used to group images into meaningful clusters. Also, even at a large k, each cluster contains a sufficient number of data. Thus, compared to MORPH II, results on AADB are less sensitive to k. In addition, we provide age estimation results on the balanced dataset in Table 14, in which k has marginal impacts on the rank estimation performance. As mentioned previously, the quality of clustering significantly affects the rank estimation performance. Also, similarly to other algorithms based on k-means, the clustering quality of DRC is affected by k. Hence, for the proposed algorithm to be used on a new ordered dataset, k should be determined effectively to obtain good clustering and rank estimation results. Readers interested in the selection of k are referred to Pham et al. (2005).

B.8 CLUSTERING USING OTHER FEATURES
Instead of clustering the identity features h_{id}^{x_1}, h_{id}^{x_2}, . . . , h_{id}^{x_n}, we test clustering the order features h_{or}^{x_1}, h_{or}^{x_2}, . . . , h_{or}^{x_n} or the whole features h^{x_1}, h^{x_2}, . . . , h^{x_n}. In this test, MORPH II (setting A) is used and k = 2. Figure 14 compares the clustering results. When using order features or whole features, instances are divided by their ages. We see that instances younger than 30 mostly belong to cluster 1 and the others to cluster 2. Table 11 compares the performances of the age estimators trained using these clustering results. The best performance is achieved when the clustering is done on identity features.

B.9 RELIABILITY OF FEATURE DECOMPOSITION
Performing the comparison using order features only does not theoretically guarantee that order-related information is fully excluded from identity features. However, we observed empirically that the decomposition is sufficiently reliable if the dimension of an identity feature is selected properly. If the dimension is too small, the encoder may lose a significant portion of order-irrelevant information. On the contrary, if the dimension is too large, the encoder may encode order information redundantly. In our experiments, we use 128- and 896-dimensional vectors for order and identity features (d_or = 128 and d_id = 896), and obtain satisfactory decomposition results. To show that order-related information is excluded from identity features, we compare the accuracies of the comparator (i.e. the ternary classifier) when identity features are used instead of order features. Specifically, we first extract order features and identity features from all instances in MORPH II using the pretrained ORID network.
Then, we train two comparators that predict the ordering relationship between two instances x and y: one takes the order features h_{or}^x and h_{or}^y as input, and the other takes the identity features h_{id}^x and h_{id}^y. Table 12 lists the comparator accuracies. We see that the comparator fails to predict ordering relationships from identity features. Also, Figure 15 shows t-SNE visualizations of the identity feature spaces with age or cluster labels, which confirm that order-related information is excluded effectively from identity features.

B.10 MAP ESTIMATION
Let us describe the MAP estimation rule for the rank estimation in Section 3.4. Given a test instance x, we select references y_i by Eq. (13). Then, by comparing x with y_i, the comparator yields the probability vector p^{xy_i} = (p_{\succ}^{xy_i}, p_{\approx}^{xy_i}, p_{\prec}^{xy_i}) for the three cases in Eq. (4). Thus, given y_i, the probability of θ(x) = r can be written as

    P_{\theta(x)}(r \,|\, y_i) = p_{\succ}^{xy_i} \cdot P_{\theta(x)}(r \,|\, x \succ y_i) + p_{\approx}^{xy_i} \cdot P_{\theta(x)}(r \,|\, x \approx y_i) + p_{\prec}^{xy_i} \cdot P_{\theta(x)}(r \,|\, x \prec y_i).    (31)

Suppose that x ≻ y_i. Then, θ(x) − θ(y_i) = r − i > τ from Eq. (4). Also, the maximum possible rank is m. We hence assume that θ(x) has the uniform distribution between i + τ + 1 and m. In other words, P_{\theta(x)}(r | x ≻ y_i) ∼ U(i + τ + 1, m), where U denotes a discrete uniform distribution. Similarly, we have P_{\theta(x)}(r | x ≈ y_i) ∼ U(i − τ, i + τ) and P_{\theta(x)}(r | x ≺ y_i) ∼ U(1, i − τ − 1). Then, we approximate the a posteriori probability P_{\theta(x)}(r | y_1, . . . , y_m) by averaging the single-reference inferences in Eq. (31);

    P_{\theta(x)}(r \,|\, y_1, \ldots, y_m) = \frac{1}{m} \sum_{i=1}^{m} P_{\theta(x)}(r \,|\, y_i).    (32)

Finally, we obtain the MAP estimate of the rank of x, which is given by

    \hat{\theta}(x) = \arg\max_{r \in \Theta} P_{\theta(x)}(r \,|\, y_1, \ldots, y_m).    (33)
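A minimal sketch of Eqs. (31)–(33) is given below, assuming the comparator probabilities against all references are available. Names are our own, and the uniform supports are clipped to the valid rank range [1, m] at the boundaries.

```python
import numpy as np

def map_rank(probs, ref_ranks, m, tau):
    """MAP rank estimation of Eqs. (31)-(33).

    probs:     (num_refs, 3) comparator outputs (p_greater, p_similar, p_smaller)
               for the comparisons of x against each reference y_i.
    ref_ranks: rank i of each reference y_i, with ranks in {1, ..., m}.
    """
    ranks = np.arange(1, m + 1)
    posterior = np.zeros(m)
    for (p_gt, p_sim, p_lt), i in zip(probs, ref_ranks):
        # Discrete uniform supports for the three outcomes, clipped to [1, m].
        u_gt = (ranks >= i + tau + 1).astype(float)       # x > y_i
        u_sim = (np.abs(ranks - i) <= tau).astype(float)  # x ~ y_i
        u_lt = (ranks <= i - tau - 1).astype(float)       # x < y_i
        for u in (u_gt, u_sim, u_lt):
            if u.sum() > 0:
                u /= u.sum()
        # Eq. (31): mixture of the three uniform distributions.
        posterior += p_gt * u_gt + p_sim * u_sim + p_lt * u_lt
    posterior /= len(ref_ranks)              # Eq. (32)
    return int(np.argmax(posterior)) + 1     # Eq. (33)
```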
C FACIAL AGE ESTIMATION – MORE EXPERIMENTS AND DETAILS
C.1 IMPLEMENTATION DETAILS
We initialize the parameters of the ORID network for facial age estimation using the Glorot normal method (Glorot & Bengio, 2010). We use the Adam optimizer with a learning rate of 10^−4 and decrease the rate by a factor of 0.5 every 50,000 steps. For data augmentation, we do random horizontal flips only. This is because other augmentation schemes, such as brightness or contrast modification, may deform identity information such as skin colors. Also, d_or and d_id are set to 128 and 896, respectively. In Eq. (6), we set α to 0.1 and decrease it to 0.05 after 200 epochs.

C.2 EVALUATION SETTINGS
For evaluation on the MORPH II dataset, we adopt four widely used testing protocols.
• Setting A – 5,492 images of the Caucasian race are selected and then randomly divided into two non-overlapping parts: 80% for training and 20% for testing.
• Setting B – 21,000 images of Africans and Caucasians are selected to satisfy two constraints: the ratio between Africans and Caucasians should be 1 : 1, and that between females and males 1 : 3. They are split into three disjoint subsets S1, S2, and S3. The training and testing are repeated twice: 1) training on S1, testing on S2 + S3, and 2) training on S2, testing on S1 + S3. The average performance of the two experiments is reported.
• Setting C – This setting is the 5-fold cross-validation on the entire dataset. Images are randomly split into five folds, but the same person's images should belong to only one fold. The average performance of the five experiments is reported.
• Setting D – This is called the 80-20 protocol. Without any constraint, the entire dataset is randomly divided into the training and test sets with ratio 8 : 2. Thus, setting D is similar to one experiment in setting C, but the same person's images may belong to both the training and test sets.

C.3 CLUSTERING
We provide more clustering results on MORPH II. Figure 16 shows the clustering results on setting B at k = 2. Since setting B consists of Africans and Caucasians, the images are clustered according to the races. Also, Table 13 summarizes the clustering results for settings A, B, and C at k = 2. The clustering result on setting D is omitted, since it is almost identical to that on setting C. In all settings, the proposed DRC-ORID divides facial images into two clusters with meaningful criteria: gender for setting A and race for settings B, C, and D.

C.4 AGE ESTIMATION
We implement a VGG-based pairwise comparator and follow the settings of Lim et al. (2020). Specifically, instead of Eq. (4), we use the ternary categorization based on the geometric ratio and set τ = 0.1. We initialize its feature extractor using VGG16 pre-trained on the ILSVRC2012 dataset (Deng et al., 2009) and its fully connected layers using the Glorot normal method. We employ the Adam optimizer with a minibatch size of 32. We start with a learning rate of 10^−4 and shrink it by a factor of 0.5 after every 80,000 steps. Table 14 lists age estimation results on the balanced dataset according to the number k of clusters. OL-supervised trains the comparator using supervised clusters separated according to gender or ethnic group annotations. Specifically, the supervised clusters at k = 2, 3, and 6 are divided according to genders, ethnic groups, and both genders and ethnic groups, respectively. On the other hand, OL-unsupervised and the proposed algorithm determine their clusters in unsupervised manners. We see that the proposed algorithm performs better than the conventional algorithms in all tests. By employing multiple clusters, the proposed algorithm improves MAE by 0.12 and CS by 0.73% on average. In contrast, OL-unsupervised improves MAE by 0.04 and CS by 0.07% only. This indicates that, by employing identity features, the proposed DRC-ORID algorithm groups instances into meaningful clusters, in which instance ranks can be compared more accurately.

C.5 AGE TRANSFORMATION
More age transformation results are shown in Figure 17. Note that, in Figure 6, given an image x, we select the reference y at a target age whose identity feature is the most similar to that of x, as in Eq. (13). Hence, the image x and the reference y have similar appearances. On the other hand, Figure 17 shows transformed images using randomly selected references. The first two cases transform the same image x with different references, but the transformed images are similar. Also, even when the gender and/or race of y are different from those of x, the identity information of x is preserved well in the transformed image. This confirms the reliability of ORID.

C.6 RECONSTRUCTION
Figure 18 shows reconstructed faces using the whole feature (h_{or}^x ⊕ h_{id}^x), the order feature only (h_{or}^x ⊕ 0), and the identity feature only (0 ⊕ h_{id}^x). Without the order feature, each decoded face is degraded, but the person can be identified. In contrast, without the identity feature, the reconstruction is not related to the person, except that it seems to be an average face of people at the same age as the person. These results confirm that order and identity features are complementary.
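The three reconstruction variants compared in Figure 18 can be produced by zeroing one part of the feature before decoding. A minimal PyTorch sketch, assuming a trained encoder F and decoder G and the feature layout [order | identity] (names are our own):

```python
import torch

def reconstruction_variants(F, G, x, d_or):
    """Decode with the whole feature, the order part only, and the identity
    part only, as in Figure 18. F and G are the trained encoder and decoder."""
    h = F(x)                                   # (N, d_or + d_id)
    h_or, h_id = h[:, :d_or], h[:, d_or:]
    full = G(torch.cat([h_or, h_id], dim=1))                             # h_or + h_id
    order_only = G(torch.cat([h_or, torch.zeros_like(h_id)], dim=1))     # h_or only
    identity_only = G(torch.cat([torch.zeros_like(h_or), h_id], dim=1))  # h_id only
    return full, order_only, identity_only
```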
C.7 REFERENCE IMAGES
Figure 19 shows examples of reference images, which are used for the rank estimation on MORPH II (setting D) at k = 2. Given a test image x, the reference image y_i of age class i is selected via Eq. (13) from the training set. In the default mode, a single reference image is selected for each age i. However, the top r most similar references can be selected and used for the estimation. We use a single reference because multiple references improve the estimation performance only negligibly. In Figure 19, the top three reference images are shown for each age from 16 to 53. In setting D, the two clusters are divided into Africans and the others in general. However, we see that test and reference images tend to have the same gender, as well as the same race. Furthermore, they have similar appearances, even when they have a big age difference.

D AESTHETIC SCORE REGRESSION – MORE EXPERIMENTS AND DETAILS
D.1 IMPLEMENTATION DETAILS
For aesthetic score regression, we implement a pairwise comparator based on EfficientNet-B4 (Tan & Le, 2019). The pairwise comparator has the same architecture as that for facial age estimation, except for the backbone network. To initialize the backbone, we adopt the parameters pre-trained on the ILSVRC2012 dataset. We initialize the other layers using the Glorot normal method. We update the network parameters using the Adam optimizer with a minibatch size of 16. We start with a learning rate of 10^−4 and shrink it by a factor of 0.8 every 8,000 steps. Training images are augmented by random horizontal flipping. We set τ = 0.15 for the ternary categorization in Eq. (4).

D.2 CLUSTERING
Notice that the AADB dataset contains images of diverse contents and styles. Hence, when clustering with a small k, it is hard to observe the characteristics shared by images within each cluster, whereas k = 2 or 3 is sufficient for facial age data. We empirically found that at least eight clusters are required (k = 8) to partition the AADB dataset by meaningful criteria. Figure 20 provides more examples of clustering results at k = 8.

D.3 REFERENCE IMAGES
Figure 21 shows examples of reference images, which are used for the aesthetic score regression. Given a test image x, the reference image y_i of aesthetic class i is selected by Eq. (13). For the aesthetic score regression, we use a single reference image for each aesthetic class, as done in the facial age estimation. Thus, 101 reference images are used in total.

E HCI CLASSIFICATION – MORE EXPERIMENTS AND DETAILS
E.1 IMPLEMENTATION DETAILS
For DRC-ORID for HCI classification, we set all hyper-parameters in the same way as in Appendix C.1. We set τ = 1 for the ternary categorization of the ordering relationship in Eq. (4). Note that there are five decade classes from 1 to 5.

E.2 CLUSTERING
Figure 22 shows some sample images in the HCI dataset, ordered according to the decade classes. Figure 23 shows more example HCI images grouped into four clusters (k = 4).

E.3 REFERENCE IMAGES
Figure 24 shows the five reference images for each of six test image examples. Note that, given a test image, reference images of similar contents, tones, or composition are selected from the five decade classes.
1. What is the main contribution of the paper, and how does it relate to previous work?
2. How effective is the proposed approach in improving the ranking performance?
3. How does the repulsive clustering term affect the ranking performance?
4. How sensitive is the proposed approach to the number of clusters chosen, and how can one choose the right number of clusters?
5. What are the dimensions of the order-related and identity-related features used in the experiments?
Review
Review
The paper is well presented. The idea of splitting the encoding feature space into task-related features and non-task-related features is probably not new, but its use in estimating ranks might be new, and intuitively it makes sense. The authors also propose an extension to the clustering algorithm using a repulsive term and propose a MAP estimation algorithm to assign a rank based on the output probabilities of the comparator when the maximum possible rank is known. Experiments are conducted on three datasets, and the results show the effectiveness of the approach. The experiments, I feel, are sufficient to show that clustering instances based on non-rank-related features helps improve the effectiveness of comparison-based ranking of new instances. They also show the effectiveness of the proposed MAP estimation rule for assigning a rank. The effect of the repulsive clustering on ranking performance is not clear, however. The authors discuss that using the repulsive term in the clustering objective produces more distinct clusters, but how does this "improved" cluster quality translate to better ranking performance? As this is one of the key contributions of the paper, a comparison of ranking performances with and without the use of the repulsive term in clustering would be useful. How sensitive/robust is the proposed approach to the number of clusters chosen? How can one choose the right number of clusters to use? A discussion on these would be useful. In each experiment, what were the dimensions of the order-related feature and the identity-related feature? In general, I think this paper is above the borderline, but I would also like to see the comments from other reviewers.
ICLR
Title
Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition

Abstract
We propose the deep repulsive clustering (DRC) algorithm of ordered data for effective order learning. First, we develop the order-identity decomposition (ORID) network to divide the information of an object instance into an order-related feature and an identity feature. Then, we group object instances into clusters according to their identity features using a repulsive term. Moreover, we estimate the rank of a test instance by comparing it with references within the same cluster. Experimental results on facial age estimation, aesthetic score regression, and historical color image classification show that the proposed algorithm can cluster ordered data effectively and also yield excellent rank estimation performance.

1 INTRODUCTION
There are various types of 'ordered' data. For instance, in facial age estimation (Ricanek & Tesafaye, 2006), face photos are ranked according to the ages. Also, in a video-sharing platform, videos can be sorted according to the numbers of views or likes. In these ordered data, classes, representing ranks or preferences, form an ordered set (Schröder, 2003). Attempts have been made to estimate the classes of objects, including multi-class classification (Pan et al., 2018), ordinal regression (Frank & Hall, 2001), and metric regression (Fu & Huang, 2008). Recently, a new approach, called order learning (Lim et al., 2020), was proposed to solve this problem. Order learning is based on the idea that it is easier to predict the ordering relationship between objects than to estimate their absolute classes (or ranks); telling the older one between two people is easier than estimating their exact ages. Hence, in order learning, the pairwise ordering relationship is learned from training data. Then, the rank of a test object is estimated by comparing it with reference objects with known ranks. However, some objects cannot be easily compared. It is less easy to tell the older one between people of different genders than between those of the same gender. Lim et al. (2020) tried to deal with this issue by dividing an ordered dataset into disjoint chains. However, the chains were not clearly separated, and no meaningful properties were discovered from the chains. In this paper, we propose a reliable clustering algorithm, called deep repulsive clustering (DRC), of ordered data based on order-identity decomposition (ORID). Figure 1 shows a clustering example of ordered data. Note that some characteristics of objects, such as genders or races in age estimation, are not related to their ranks, and the ranks of objects sharing such characteristics can be compared more reliably.
To discover such characteristics without any supervision, the proposed ORID network decomposes the information of an object instance into an order-related feature and an identity feature unrelated to the rank. Then, the proposed DRC clusters object instances using their identity features; in each cluster, the instances share similar identity features. Furthermore, given a test instance, we decide its cluster based on the nearest neighbor (NN) rule and compare it with reference instances within the cluster to estimate its rank. To this end, we develop a maximum a posteriori (MAP) estimation rule. Experimental results on ordered data for facial age estimation, aesthetic score regression (Kong et al., 2016), and historical color image classification (Palermo et al., 2012) demonstrate that the proposed algorithm separates ordered data clearly into meaningful clusters and provides excellent rank estimation performances for unseen test instances. The contributions of this paper can be summarized as follows.
• We first propose the notion of identity features of ordered data and develop the ORID network for the order-identity decomposition.
• We develop the DRC algorithm to cluster data on a unit sphere effectively using a repulsive term. We also prove the local optimality of the solution.
• We propose the MAP decision rule for rank estimation. The proposed algorithm provides the state-of-the-art performances for facial age estimation and aesthetic score regression.

2 RELATED WORK
2.1 ORDER LEARNING
The notion of order learning was first proposed by Lim et al. (2020). It aims to determine the order graph of classes and classify an object into one of the classes. In practice, it trains a pairwise comparator, which is a ternary classifier, to categorize the relationship between two objects into one of three cases: one object is bigger than, similar to, or smaller than the other. Then, it estimates the rank of a test object by comparing it with reference objects with known ranks. However, not every pair of objects is easily comparable. Although Lim et al. (2020) attempted to group objects into clusters, in which objects could be more accurately compared, their clustering results were unreliable. Pairwise comparison has been used to estimate object ranks because relative evaluation is easier than absolute evaluation in general. Saaty (1977) proposed the scaling method to estimate absolute priorities from relative priorities, which has been applied to various decision processes, including aesthetic score regression (Lee & Kim, 2019). Also, some learning to rank (LTR) algorithms are based on pairwise comparison (Liu, 2009; Cohen et al., 1998; Burges et al., 2005; Tsai et al., 2007). Order learning attempts to combine (possibly inconsistent) pairwise ordering results to determine the rank of each object. Thus, it is closely related to the LTR algorithm of Cohen et al. (1998), which learns a pairwise preference function and obtains a total order of a set to maximize agreements among preference judgments of pairs of elements. Also, order learning is related to rank aggregation (Dwork et al., 2001), in which partially ordered sets are combined into a linearly ordered set to achieve the maximum consensus among those partial sets. Rank aggregation has been studied in various fields (Brüggemann et al., 2004). Since optimal aggregation is NP-hard, Dwork et al. (2001) proposed an approximate algorithm, called Markov chain ordering.
There are many other approximate schemes, such as the local Kemenization, Borda count, and scaled footrule aggregation.

2.2 CLUSTERING
Data clustering is a fundamental problem of partitioning data into disjoint groups, such that elements in the same group are similar to one another while elements from different groups are dissimilar. Although various clustering algorithms have been proposed (Hartigan & Wong, 1979; Ester et al., 1996; Kohonen, 1990; Dhillon & Modha, 2001; Reynolds, 2009), conventional algorithms often yield poor performance on high-dimensional data due to the curse of dimensionality and the ineffectiveness of similarity metrics. Dimensionality reduction and feature transform methods have been studied to map raw data into a new feature space, in which they are more easily separated. Linear transforms, such as PCA (Wold et al., 1987), and non-linear transformations, including kernel methods (Hofmann et al., 2008) and spectral clustering (Ng et al., 2002), have been proposed. Recently, deep neural networks have been adopted effectively as feature embedding functions (LeCun et al., 2015), and these deep-learning-based feature embedding functions have been combined with classical clustering algorithms. For instance, Caron et al. (2018) proposed a deep clustering algorithm based on k-means. It clusters features from a neural network and then trains the network using the cluster assignments as pseudo-labels. This is done iteratively. Also, Yang et al. (2016) jointly learned feature representations and clustered images, based on agglomerative clustering. Chang et al. (2017) recast the image clustering task into a binary classification problem to predict whether a pair of images belong to the same cluster or different clusters. Similarly to these algorithms, we use a neural network to determine a feature space in which clustering is done more effectively. However, we consider the clustering of ordered data, and each cluster should consist of elements whose ranks can be compared more accurately. There are conventional approaches that use clustering ideas to aid in classification or rank estimation. For example, Yan et al. (2015) developed a hierarchical classifier, which clusters fine categories into coarse category groups and classifies an object into a fine category within its coarse category group. For extreme multiclass classification, Daumé III et al. (2017) proposed to predict a class label among candidate classes only, which are dynamically selected by the recall tree. It is, however, noted that the leaves of the recall tree do not partition the set of classes. Also, for age estimation, Li et al. (2019) proposed a tree-like structure, called bridge-tree, to divide data into overlapping age groups and train a local regressor for each group. The set of local regressors can be more accurate than a global regressor for dealing with the entire age range. Whereas these conventional approaches group data in the label dimension to perform their tasks more effectively, the proposed algorithm clusters data in the dimension orthogonal to the label dimension. In other words, we cluster data using identity features, instead of order features.

3 PROPOSED ALGORITHM
3.1 PROBLEM DEFINITION
An order is a binary relation, often denoted by ≤, on a set Θ = {θ_1, θ_2, . . . , θ_m} (Schröder, 2003). It should satisfy the three properties of reflexivity (θ_i ≤ θ_i for all i), antisymmetry (θ_i ≤ θ_j and θ_j ≤ θ_i imply θ_i = θ_j), and transitivity (θ_i ≤ θ_j and θ_j ≤ θ_k imply θ_i ≤ θ_k).
Then, Θ is called a partially ordered set. Furthermore, if every pair of elements is comparable (θ_i ≤ θ_j or θ_j ≤ θ_i for all i, j), Θ is called a chain or linearly ordered set. An order describes ranks or priorities of classes. For example, in age estimation, θ_i may represent the age class of i-year-olds. Then, θ_14 ≤ θ_49 represents that 14-year-olds are younger than 49-year-olds. As mentioned previously, it is less easy to tell the older one between people of different genders. An algorithm, hence, may compare a subject with reference subjects of the same gender only. In such a case, each age class θ_i represents two subclasses θ_i^{female} and θ_i^{male} of different types, and the algorithm compares only subjects of the same type. Lim et al. (2020) assumed that subclasses of different types are incomparable and thus that the set of subclasses is the union of k disjoint chains, where k is the number of types. However, in many ranking applications, objects of different types can be compared (although less easily than those of the same type). Thus, instead of assuming incomparability across chains, we assume that there is a total order on Θ = {θ_1, θ_2, . . . , θ_m}, in which each class θ_i consists of k types of subclasses, and that object instances of the same type are more easily compared than those of different types.

Suppose that n training instances in X = {x_1, x_2, . . . , x_n} are given. Also, suppose that there are m ranks and the ground-truth rank of each instance is known. In this sense, X contains ordered data. The problem is twofold. The first goal is to decompose the whole set of instances X into k disjoint clusters {C_j}_{j=1}^k in which instances are more easily compared;

    X = \bigcup_{j=1}^{k} C_j    (1)

where C_i ∩ C_j = ∅ for i ≠ j. In other words, we aim to partition the ordered data in X into k clusters, by grouping them according to their characteristics unrelated to their ranks. These characteristics, which tend to remain the same even when an object experiences rank changes, are referred to as 'identity' features in this work. For example, in age estimation, genders or races can be identity features. However, we perform the clustering without any supervision for identity features. Notice that instances within a cluster can be compared more easily than those across clusters, since they have similar identity features. The number k of clusters is assumed to be known a priori. Impacts of k on the clustering performance are discussed in Appendix B.7. The second goal is to assign an unseen test instance to one of the clusters and determine its rank by comparing it with reference instances within the cluster. To achieve these goals, we propose the ORID network and the DRC algorithm.
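To make the ternary ordering relationship concrete before it is formalized in Eq. (4) of the next subsection, the ground-truth label generation can be sketched as follows (names are our own illustrative choices):

```python
def ordering_label(theta_x, theta_y, tau):
    """One-hot ground truth (q_bigger, q_similar, q_smaller) for the ternary
    relationship that Eq. (4) formalizes, given ranks and a threshold tau."""
    d = theta_x - theta_y
    if d > tau:
        return (1, 0, 0)   # x is "bigger" than y
    if d < -tau:
        return (0, 0, 1)   # x is "smaller" than y
    return (0, 1, 0)       # x and y are "similar"

print(ordering_label(49, 14, tau=1))  # (1, 0, 0): the 49-year-old is older
```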
3.2 ORDER-IDENTITY DECOMPOSITION
In general, object instances can be compared more easily when they have more similar identity features irrelevant to order. Therefore, we decompose the information of each object instance into an order feature and an identity feature. To this end, we propose the ORID network in Figure 2, composed of three parts: autoencoder, discriminator, and comparator.

1) Autoencoder: Similarly to the deep clustering algorithms in (Yang et al., 2017; Dizaji et al., 2017; Chen et al., 2017; Ji et al., 2017), we use the autoencoder G ◦ F(·), based on a neural network, to extract feature vectors. The encoder h^x = F(x) maps an input vector x to a feature vector h^x, while the decoder x̂ = G(h^x) reconstructs x̂ from h^x. By minimizing the reconstruction loss ‖x − x̂‖_1, F is trained to represent x compactly with as little loss of information as possible. We decompose the overall feature h^x ∈ R^{d_or + d_id} into the order feature h^x_or and the identity feature h^x_id, given by

    h_{or}^x = [h_1^x, h_2^x, \ldots, h_{d_{or}}^x]^t    (2)
    h_{id}^x = [h_{d_{or}+1}^x, h_{d_{or}+2}^x, \ldots, h_{d_{or}+d_{id}}^x]^t / \| [h_{d_{or}+1}^x, h_{d_{or}+2}^x, \ldots, h_{d_{or}+d_{id}}^x] \|    (3)

where d_or and d_id are the dimensions of h^x_or and h^x_id. However, without additional control, the output h^x of the neural network F would be highly entangled (Higgins et al., 2018). To put order-related information into h^x_or, we employ the comparator.

2) Comparator: Using the order features h^x_or and h^y_or of a pair of instances x and y, we train the comparator, which classifies their ordering relationship into one of the three categories 'bigger,' 'similar,' and 'smaller':

    x ≻ y if θ(x) − θ(y) > τ,
    x ≈ y if |θ(x) − θ(y)| ≤ τ,    (4)
    x ≺ y if θ(x) − θ(y) < −τ,

where θ(·) denotes the class of an instance. As in (Lim et al., 2020), '≻, ≈, ≺' represent the ordering relationships between instances, while '>, =, <' denote the mathematical order between classes. The comparator outputs the softmax probability p^{xy} = (p_{\succ}^{xy}, p_{\approx}^{xy}, p_{\prec}^{xy}). It is trained to minimize the cross-entropy between p^{xy} and the ground-truth one-hot vector q^{xy} = (q_{\succ}^{xy}, q_{\approx}^{xy}, q_{\prec}^{xy}). Because it is trained jointly with the autoencoder, the information deciding the ordering relationship tends to be encoded into the order features h^x_or and h^y_or. On the other hand, the remaining information necessary for the reconstruction of x̂ and ŷ is encoded into the identity features h^x_id and h^y_id.

3) Discriminator: We adopt the discriminator D that tells real images from synthesized images generated by the decoder G. Using the GAN loss (Goodfellow et al., 2014), the discriminator helps the decoder to reconstruct more realistic outputs x̂ and ŷ. Appendix A provides the detailed network structures of these components of ORID.

3.3 DEEP REPULSIVE CLUSTERING
After obtaining the identity features h_{id}^{x_1}, h_{id}^{x_2}, . . . , h_{id}^{x_n} of all instances x_i ∈ X, we partition them into k clusters. Each cluster contains instances that are more easily comparable to one another. The identity features are normalized in Eq. (3) and lie on the unit sphere in R^{d_id}. In other words, we cluster data points on the unit sphere. Thus, the cosine similarity is a natural affinity metric. Let C_j, 1 ≤ j ≤ k, denote the k clusters. Also, let c_j, constrained to be on the unit sphere, denote the 'centroid' or representative vector for the instances in cluster C_j. We define the quality of cluster C_j as

    \sum_{x \in C_j} \Big( (h_{id}^x)^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h_{id}^x)^t c_l \Big)    (5)

where the first term is the similarity of an instance in C_j to the centroid c_j, the second term with the negative sign quantifies the average dissimilarity of the instance from the other centroids, and α is a nonnegative weight. For a high-quality cluster, instances should be concentrated around the centroid and be far from the other clusters. The second term is referred to as the repulsive term, as its objective is similar to the repulsive rule in (Lee et al., 2015). Although conventional methods also try to increase inter-cluster dissimilarity (Ward Jr, 1963; Lee et al., 2015), to the best of our knowledge, DRC is the first attempt to use an explicit repulsive term in deep clustering, which jointly optimizes clustering and feature embedding.
Next, we measure the overall quality of the clustering by

    J(\{C_j\}_{j=1}^{k}, \{c_j\}_{j=1}^{k}) = \sum_{j=1}^{k} \sum_{x \in C_j} \Big( (h_{id}^x)^t c_j - \frac{\alpha}{k-1} \sum_{l \neq j} (h_{id}^x)^t c_l \Big).    (6)

We aim to find the optimum clusters to maximize this objective function J, yet finding the global optimum is NP-complete (Kleinberg et al., 1998; Garey et al., 1982). Hence, we propose an iterative algorithm, called DRC, to find a local optimum, as in the k-means algorithm (Gersho & Gray, 1991).

1. Centroid rule: After fixing the clusters {C_j}_{j=1}^k, we update the centroids {c_j}_{j=1}^k to maximize J in Eq. (6). Because the centroids should lie on the unit sphere, we solve the constrained optimization problem:

    maximize J(\{c_j\}_{j=1}^{k}) subject to c_j^t c_j = 1 for all j = 1, . . . , k.    (7)

Using Lagrangian multipliers (Bertsekas, 1996), the optimal centroids are obtained as

    c_j = \Big( \sum_{x \in C_j} h_{id}^x - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h_{id}^x \Big) \Big/ \Big\| \sum_{x \in C_j} h_{id}^x - \frac{\alpha}{k-1} \sum_{x \in X \setminus C_j} h_{id}^x \Big\|.    (8)

2. NN rule: On the other hand, after fixing the centroids, we update the membership of each instance to maximize J in Eq. (6). The optimal cluster C_j is given by

    C_j = \{ x \,|\, (h_{id}^x)^t c_j \geq (h_{id}^x)^t c_l \text{ for all } 1 \leq l \leq k \}.    (9)

In other words, an instance should be assigned to C_j if its nearest centroid is c_j. We apply the centroid rule and the NN rule iteratively until convergence. Because both rules monotonically increase the same objective function J and the inequality J ≤ n + \frac{\alpha}{k-1} n always holds, J is guaranteed to converge to a local maximum. Readers interested in the convergence are referred to (Sabin & Gray, 1986; Pollard, 1982). Without the repulsive term in Eq. (6) (i.e. at α = 0), centroid c_j in Eq. (8) is updated by

    c_j = \sum_{x \in C_j} h_{id}^x \Big/ \Big\| \sum_{x \in C_j} h_{id}^x \Big\|,    (10)

as done in the spherical k-means (Dhillon & Modha, 2001). In contrast, with a positive α, the objective function J is increased when the centroids are far from one another. Ideally, in equilibrium, the centroid of a cluster should be the opposite of the centroid of all the other clusters;

    \Big( \frac{\sum_{x \in C_j} h_{id}^x}{\| \sum_{x \in C_j} h_{id}^x \|} \Big)^t \Big( \frac{\sum_{x \in X \setminus C_j} h_{id}^x}{\| \sum_{x \in X \setminus C_j} h_{id}^x \|} \Big) = -1 \quad \text{for all } j = 1, 2, \ldots, k.    (11)

Note that the ORID network, and thus the encoded feature space, is trained jointly with the repulsive clustering. As the training goes on, the centroids repel one another, and the clusters are separated more clearly due to the repulsive term. We jointly optimize the clusters and the ORID network parameters, as described in Algorithm 1. First, we train the ORID network for warm-up epochs, by employing every pair of instances x and y as input. Then, using the identity features, we partition the input data into k clusters using k-means. Second, we repeat the fine-tuning of the ORID network and the repulsive clustering alternately. In the fine-tuning, a pair of x and y are constrained to be from the same cluster, and the following loss function is employed:

    \ell = \lambda_{rec} \ell_{rec} + \lambda_{clu} \ell_{clu} + \lambda_{com} \ell_{com} + \lambda_{gan} \ell_{gan}.    (12)

Appendix B describes this loss function in detail, proves the optimality of the centroid and NN rules in Eqs. (8) and (9), and analyzes the impacts of the repulsive term in Eq. (6).

Algorithm 1 DRC-ORID
Input: Ordered data X = {x_1, x_2, . . . , x_n}, k = the number of clusters
1: Train ORID network for warm-up epochs to minimize loss λ_rec ℓ_rec + λ_com ℓ_com + λ_gan ℓ_gan
2: Partition X into C_1, C_2, . . . , C_k using k-means
3: repeat
4:   Fine-tune ORID network to minimize loss λ_rec ℓ_rec + λ_clu ℓ_clu + λ_com ℓ_com + λ_gan ℓ_gan
5:   repeat
6:     for all j = 1, 2, . . . , k do
7:       Update centroid c_j via Eq. (8) ▷ Centroid rule
8:     end for
9:     for all j = 1, 2, . . . , k do
10:      Update cluster C_j via Eq. (9) ▷ NN rule
11:    end for
12:  until convergence or a predefined number of iterations
13: until a predefined number of epochs
Output: Clusters {C_j}_{j=1}^k, centroids {c_j}_{j=1}^k, ORID network
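The inner loop of Algorithm 1 (lines 5–12) alternates the centroid rule of Eq. (8) and the NN rule of Eq. (9). A minimal NumPy sketch of one such round is given below, ignoring the minimum-cluster-size constraint of Appendix B.3 (names are our own).

```python
import numpy as np

def drc_round(H, assign, k, alpha):
    """One alternation of the centroid rule, Eq. (8), and the NN rule, Eq. (9).

    H:      (n, d) identity features, rows l2-normalized.
    assign: (n,) current cluster index of each instance.
    """
    total = H.sum(axis=0)
    centroids = np.zeros((k, H.shape[1]))
    for j in range(k):
        in_j = H[assign == j].sum(axis=0)
        # Eq. (8): attract to own members, repel all others, project to sphere.
        c = in_j - alpha / (k - 1) * (total - in_j)
        centroids[j] = c / np.linalg.norm(c)
    # Eq. (9): reassign each instance to its most similar centroid (cosine sim).
    new_assign = (H @ centroids.T).argmax(axis=1)
    return centroids, new_assign
```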
, k do 10: Update cluster Cj via Eq. (9) . NN rule 11: end for 12: until convergence or predefined number of iterations 13: until predefined number of epochs Output: Clusters {Cj}kj=1, centroids {cj}kj=1, ORID network 3.4 RANK ESTIMATION Using the output of the DRC-ORID algorithm, we can estimate the rank of an unseen test instance x. First, we extract its identity feature hxid using the ORID encoder. By comparing h x id with the centroids {cj}kj=1 based on the NN rule, we find the most similar centroid cl. Then, x is declared to belong to cluster Cl. Without loss of generality, let us assume that the classes (or ranks) are the first m natural numbers, Θ = {1, 2, . . .m}. Then, for each i ∈ Θ, we select a reference instance yi with rank i from cluster Cl, so that it is the most similar to x. Specifically, yi = arg maxy∈Cl : θ(y)=i(h x id) thyid. (13) We estimate the rank θ(x) of the test instance x, by comparing it with the chosen references yi, 1 ≤ i ≤ m. For the rank estimation, Lim et al. (2020) developed the maximum consistency rule, which however does not exploit the probability information, generated by the comparator. In this paper, we use the maximum a posteriori (MAP) estimation rule, which is described in detail in Appendix B.10. 4 EXPERIMENTAL RESULTS This section provides various experimental results. Due to space limitation, implementation details and more results are available in Appendices C, D, and E. 4.1 FACIAL AGE ESTIMATION Datasets: We use two datasets. First, MORPH II (Ricanek & Tesafaye, 2006) is a collection of about 55,000 facial images in the age range [16, 77]. It provides gender (female, male) and race (African American, Asian, Caucasian, Hispanic) labels as well. We employ the four evaluation settings A, B, C, and D in Appendix C.2. Second, the balanced dataset (Lim et al., 2020) is sampled from the three datasets of MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017) to overcome bias to specific ethnic groups or genders. It contains about 6,000 images for each combination of gender in {female, male} and ethnic group in {African, Asian, European}. Clustering: Figure 3 shows clustering results on MORPH II (setting A), when the number of clusters is k = 2. Setting A contains faces of Caucasian descent only. Thus, the proposed DRC-ORID divides those faces into two clusters according to genders in general, although the annotated gender information is not used. Most males are assigned to cluster 1, while a majority of females to cluster 2. On the other hand, setting B consists of Africans and Caucasians. Thus, those images are clustered according to the races, as shown in Appendix C.3. Figure 4 is the results on the balanced dataset at k = 3, which is composed of MORPH II, AFAD, and UTK images. Due to different characteristics of these sources, images are clearly divided according to their sources. At k = 2, MORPH II images are separated from the others. This is because, unlike the MORPH II images, the boundaries of most AFAD and UTK images are zeroed for alignment using SeetaFaceEngine (Zhang et al., 2014). Lim et al. (2020) also tried the clustering of the balanced dataset. Figure 5 visualizes the feature space using t-SNE (Maaten & Hinton, 2008). Although their method aligns the features according to ages, their clusters are not separated, overlapping one another. In contrast, the proposed DRC-ORID separates the three clusters clearly, as well as sorts features according to the ages within each cluster. 
More t-SNE plots for analyzing the impacts of the repulsive term are available in Appendix B.5. Age transformation: We assess the decomposition performance of ORID. Although ORID is not designed for age transformation (Or-El et al., 2020), it decomposes an image x into the order and identity features, hxor and h x id. Thus, the age can be transformed in two steps. First, we replace hxor of x with h y or of a reference image y at a target age. Second, we decode the resultant feature (concatenation of hyor and h x id) to obtain the transformed image. Figure 6 shows some results on MORPH II images. Order-related properties, such as skin textures and hair colors, are modified plausibly, but identity information is preserved. This indicates the reliability of ORID. Age estimation: Table 1 compares the proposed algorithm with conventional age estimators on the four evaluation settings of MORPH II. These conventional algorithms take 224× 224 or bigger images as input, while ORID takes 64× 64 images. Moreover, most of them adopt VGG16 (Simonyan & Zisserman, 2015) as their backbones, which is more complicated than the ORID encoder. Thus, for comparison, after fixing clusters using DRC-ORID, we train another pairwise comparator based on VGG16, whose architecture is the same as Lim et al. (2020). We measure the age estimation performance by the mean absolute error (MAE) and the cumulative score (CS). MAE is the average absolute error between estimated and ground-truth ages, and CS computes the percentage of test samples whose absolute errors are less than or equal to a tolerance level of 5. Mainly due to the smaller input size of 64× 64, the vanilla version yields poorer performances than the conventional algorithms. The VGG version, however, outperforms them significantly. First, in the proposed-VGG (k = 1), all instances can be compared, as in the OL algorithm. In other words, the clustering is not performed. Thus, the pairwise comparators of OL and the proposedVGG (k = 1) are trained in the same way, but their rank estimation rules are different. Whereas OL uses the maximum consistency rule, the proposed algorithm performs the MAP estimation. The score gaps between them confirm that the MAP estimation is more accurate. Moreover, by clustering facial images into two groups, the proposed-VGG (k = 2) improves the results meaningfully. The proposed-VGG (k = 2) provides the state-of-the-art results, except for the MAE test in setting D. 4.2 AESTHETIC SCORE REGRESSION The aesthetics and attribute database (AADB) is composed of 10,000 photographs of various themes such as scenery and close-up (Kong et al., 2016). Each image is annotated with an aesthetic score in [0, 1]. We quantize the continuous score with a step size of 0.01 to make 101 score classes. Compared to facial images, AADB contains more diverse data. It is hence more challenging to cluster AADB images. Figure 8 shows example images in each cluster at k = 8. Images in the same cluster have similar colors, similar contents, or similar composition. This means that ORID extracts identity features effectively, corresponding to contents or styles that are not directly related to aesthetic scores. Using those identity features, DRC discovers meaningful clusters. Figure 9 visualizes the feature space of AADB. Aesthetic scores are sorted along one direction, while clusters are separated in the other orthogonal direction. In other words, the scores look like latitudes, while the clusters appear to be separated by meridians (or lines of longitude). 
As a point on the earth surface can be located by its latitude and longitude, an image is represented by its aesthetic score (order feature) and cluster (identity feature). Table 2 compares regression results. Even without clustering process, the proposed algorithm outperforms the Reg-Net and ASM algorithms. Moreover, by using the eight unsupervised clusters in Figure 8, the proposed algorithm further reduces the MAE to yield the state-of-the-art result. 4.3 HISTORICAL COLOR IMAGE CLASSIFICATION HCI (Palermo et al., 2012) is a dataset for determining the decade when a photograph was taken. It contains images from five decades from 1930s to 1970s. Each decade category has 265 images: 210, 5, and 50 are used for training, validation and testing. Figure 7 shows the clustering results at k = 4. We observe similarity of contents in each cluster. Table 3 compares the quinary classification results. Frank & Hall (2001), Cardoso & da Costa (2007), Palermo et al. (2012), and RED-SVM use traditional features, while the others deep features. The performance gaps between these two approaches are not huge, since 1,050 images are insufficient for training deep networks. 5 IMPACTS OF APPLICATIONS The proposed algorithm can be applied to various ranking problems. In this paper, we demonstrated three vision applications: facial age estimation, aesthetic score regression, and historical image classification. In particular, the proposed age estimator has various potential uses. For example, it can block or recommend media contents to people according to their ages. However, it has harmful impacts, as well as positive ones. Moreover, although age information lacks the distinctiveness to identify an individual, identity features, extracted by ORID, can be misused in facial recognition systems, causing serious problems such as unwanted invasion of privacy (Raji et al., 2020). Hence ethical considerations should be made before the use of the proposed algorithm. Recently, ethical concerns about the fairness and safety of automated systems have been raised (Castelvecchi, 2020; Roussi, 2020; Noorden, 2020). Especially, due to the intrinsic imbalance of facial datasets (Ricanek & Tesafaye, 2006; Zhang et al., 2017; Niu et al., 2016), most deep learning methods on facial analysis (Wen et al., 2020; Or-El et al., 2020) have unwanted gender or racial bias. The proposed algorithm is not free from this bias either. Hence, before any practical usage, the bias should be resolved. Also, even though the proposed algorithm groups data in an unsupervised manner, data are clustered according to genders or races on MORPH II. These results should never be misinterpreted in such a way as to encourage any racial or gender discrimination. We recommend using the proposed age estimator for research only. 6 CONCLUSIONS The DRC algorithm of ordered data based on ORID was proposed in this work. First, the ORID network decomposes the information of an object into the order and identity features. Then, DRC groups objects into clusters using their identity features in a repulsive manner. Also, we can estimate the rank of an unseen test by comparing it with references within the corresponding cluster based on the MAP decision. Extensive experimental results on various ordered data demonstrated that the proposed algorithm provides excellent clustering and rank estimation performances. 
ACKNOWLEDGMENTS
This work was supported in part by the MSIT, Korea, under the ITRC support program (IITP-2020-2016-0-00464) supervised by the IITP, and in part by the National Research Foundation of Korea (NRF) through the Korea Government (MSIP) under Grant NRF-2018R1A2B3003896.
A NETWORK STRUCTURE OF ORID
As described in Section 3.2, the ORID network consists of the encoder F, the decoder G, the comparator C, and the discriminator D. The network structures of these components are detailed in Tables 4–7, where 'k_h×k_w-s-c Conv' and 'k_h×k_w-s-c Deconv' denote the 2D convolution and 2D deconvolution with kernel size k_h×k_w, stride s, and c output channels, respectively. 'BN' means batch normalization (Ioffe & Szegedy, 2015), and 'c Dense' is a dense layer with c output channels. Note that the encoder takes a 64×64 RGB image as input, and the identity feature of the encoder output is ℓ2-normalized as in Eq. (3). Also, we set d_or = 128 and d_id = 896.
B ALGORITHMS – DETAILS
B.1 OPTIMALITY OF CENTROID RULE
To solve the constrained optimization problem in Eq. (7), we construct the Lagrangian function
L = ∑_{j=1}^k ∑_{x∈C_j} ( (h^x_id)^t c_j − (α/(k−1)) ∑_{l≠j} (h^x_id)^t c_l ) − ∑_{j=1}^k λ_j (c_j^t c_j − 1)   (14)
where λ_j, 1 ≤ j ≤ k, are Lagrange multipliers (Bertsekas, 1996). By differentiating L with respect to c_j and setting it to zero, we have
∂L/∂c_j = ∑_{x∈C_j} h^x_id − (α/(k−1)) ∑_{l≠j} ∑_{x∈C_l} h^x_id − 2λ_j c_j   (15)
= ∑_{x∈C_j} h^x_id − (α/(k−1)) ∑_{x∈X\C_j} h^x_id − 2λ_j c_j   (16)
= 0   (17)
for j = 1, . . . , k. Therefore, the optimal centroid c_j is given by
c_j = ( ∑_{x∈C_j} h^x_id − (α/(k−1)) ∑_{x∈X\C_j} h^x_id ) / (2λ_j).   (18)
Because of the normalization constraint c_j^t c_j = 1, we have
2λ_j = ‖ ∑_{x∈C_j} h^x_id − (α/(k−1)) ∑_{x∈X\C_j} h^x_id ‖,   (19)
which leads to the centroid rule in Eq. (8).
B.2 OPTIMALITY OF NN RULE
Let us consider two cases. First, instance x is declared to belong to cluster C_j. It then contributes to the objective function J in Eq. (6) by
β_j = (h^x_id)^t c_j − (α/(k−1)) ∑_{l≠j} (h^x_id)^t c_l.   (20)
Second, x is declared to belong to another cluster C_{j′}. Then, its contribution is
β_{j′} = (h^x_id)^t c_{j′} − (α/(k−1)) ∑_{l≠j′} (h^x_id)^t c_l.   (21)
By comparing the two contributions, we have
β_j − β_{j′} = (h^x_id)^t (c_j − c_{j′}) − (α/(k−1)) (h^x_id)^t (c_{j′} − c_j)   (22)
= (1 + α/(k−1)) (h^x_id)^t (c_j − c_{j′}).   (23)
This means that β_j ≥ β_{j′} when (h^x_id)^t c_j ≥ (h^x_id)^t c_{j′}. Therefore, x should be assigned to the optimal cluster C_{j*} such that the cosine similarity (h^x_id)^t c_{j*} is maximized. Equivalently, we have the NN rule in Eq. (9).
B.3 REGULARIZATION CONSTRAINT IN DRC
To prevent empty clusters and balance the partitioning, we enforce a regularization constraint so that every cluster contains at least a predefined number of instances. More specifically, when applying the NN rule, we enforce that at least 1/(2k) of the instances are assigned to each cluster C_j. The instances are selected in the decreasing order of cosine similarity (h^x_id)^t c_j.
B.4 LOSS FUNCTIONS
In the DRC-ORID algorithm, we use the loss function
ℓ = λ_rec ℓ_rec + λ_clu ℓ_clu + λ_com ℓ_com + λ_gan ℓ_gan   (24)
where the reconstruction, clustering, comparator, and GAN losses are given by
ℓ_rec = (1/2N) ∑_{i=1}^N ( ‖x_i − G(F(x_i))‖_1 + ‖y_i − G(F(y_i))‖_1 ),   (25)
ℓ_clu = −(1/2N) ∑_{i=1}^N ( (h^{x_i}_id)^t c_j + (h^{y_i}_id)^t c_j ),   (26)
ℓ_com = −(1/N) ∑_{i=1}^N ( q^{x_i y_i}_≻ log p^{x_i y_i}_≻ + q^{x_i y_i}_≈ log p^{x_i y_i}_≈ + q^{x_i y_i}_≺ log p^{x_i y_i}_≺ ),   (27)
ℓ_gan = −(1/2N) ∑_{i=1}^N ( log(1 − D(G(F(x_i)))) + log(1 − D(G(F(y_i)))) ),   (28)
respectively. Here, N is the number of image pairs in a minibatch. The weighting parameters are set to λ_rec = 5, λ_clu = 0.1, λ_com = 1, and λ_gan = 1.
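To make the centroid and NN rules concrete, the following is a minimal NumPy sketch of one inner DRC iteration over fixed identity features. The function name, the array conventions, and the omission of the balancing constraint of Appendix B.3 are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def drc_iteration(H, assign, k, alpha):
    """One inner DRC iteration: centroid rule, Eq. (8), then NN rule, Eq. (9).

    H      : (n, d) array of unit-norm identity features h^x_id.
    assign : (n,) array of current cluster indices in {0, ..., k-1}.
    alpha  : repulsive weight in Eq. (6).
    """
    n, d = H.shape
    total = H.sum(axis=0)                    # sum of all identity features
    C = np.zeros((k, d))
    for j in range(k):
        own = H[assign == j].sum(axis=0)     # sum over C_j
        others = total - own                 # sum over X \ C_j
        v = own - alpha / (k - 1) * others   # attract members, repel the rest
        C[j] = v / np.linalg.norm(v)         # project back onto the unit sphere
    assign = np.argmax(H @ C.T, axis=1)      # nearest centroid by cosine similarity
    return C, assign
```

Alternating these two steps monotonically increases J in Eq. (6), as proved in B.1 and B.2; the balancing constraint of B.3 would additionally reassign instances, in decreasing order of similarity, so that each cluster keeps at least 1/(2k) of them.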
B.5 IMPACTS OF REPULSIVE TERM ON CLUSTERING
To analyze the impacts of the repulsive term in Eq. (6), we first compare clustering qualities with α = 0 and α = 0.1. At α = 0, the repulsive term is excluded from the objective function J, and the centroid rule is reduced to Eq. (10) in the spherical k-means (Dhillon & Modha, 2001). However, different from the spherical k-means, even at α = 0, the clustering is performed jointly with the training of the ORID network.
We adopt two metrics to measure the quality of clustering: normalized mutual information (NMI) (Strehl & Ghosh, 2002) and centroid affinity (CA). NMI measures the information shared between two different partitionings of the same data, A = ∪_{i=1}^U A_i and B = ∪_{j=1}^V B_j:
NMI(A, B) = ( ∑_{i=1}^U ∑_{j=1}^V |A_i ∩ B_j| log( N·|A_i ∩ B_j| / (|A_i||B_j|) ) ) / √( ( ∑_{i=1}^U |A_i| log(|A_i|/N) ) ( ∑_{j=1}^V |B_j| log(|B_j|/N) ) )   (29)
where U and V are the numbers of clusters in A and B, respectively, N is the total number of samples, and |·| denotes the cardinality. Also, we define the centroid affinity (CA) as
CA({c_j}_{j=1}^k) = (2/(k(k−1))) ∑_{j=1}^k ∑_{l>j} c_j^t c_l.   (30)
For high-quality clustering, the centroids should be far from one another and thus should yield a low CA score.
Figure 10 plots how NMI and CA vary as the iterations go on. In this test, MORPH II (setting B) is used, and the number of clusters k is set to 2. Since setting B consists of Africans and Caucasians, we use the race groups as the ground-truth partitioning for the NMI measurement. At early iterations, the NMI score of DRC-ORID with α = 0.1 is slightly better than that with α = 0. However, as the iterative training and clustering go on, the score gap gets larger. After the convergence, DRC-ORID with α = 0.1 outperforms the option α = 0 by a significant NMI gap of 0.13. Also, the CA of the option α = 0.1 gradually decreases, whereas that of α = 0 does not. At α = 0.1, the repulsive term makes the centroids repel each other. As a result, CA, which is the cosine similarity between the two centroids, becomes almost −1, which means the equilibrium state in Eq. (11) is almost achieved. We also visualize the feature spaces of the two options, α = 0 and α = 0.1, using t-SNE in Figure 11. It is observed that the two clusters are more clearly separated by DRC-ORID with α = 0.1. Figure 12 shows the t-SNE results after the convergence with age labels.
Figure 13 compares the NMI curves at different α's. The choice of α affects the quality of clustering, as α controls the intensity of the repulsive force between centroids. When α is too large, the centroids move too quickly, making the training of the ORID network difficult. On the other hand, when α is too small, the repulsive term does not affect the clustering meaningfully. Hence, α should be selected to strike a balance between training reliability and effective repulsion. It was found experimentally that clustering is performed well around α = 0.1.
Finally, it is worth pointing out that, if the identity features were not normalized as in Eq. (3) and the repulsive clustering were performed in an unbounded space, the distances between centroids would get larger and larger as the iterations go on. Thus, convergence would not be achieved. This is why we perform DRC on the bounded unit sphere.
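For reference, a minimal NumPy sketch of the two clustering-quality metrics in Eqs. (29) and (30); the function names and the label-vector input format are our assumptions.

```python
import numpy as np

def nmi(a, b):
    """Normalized mutual information between label vectors a and b, Eq. (29)."""
    n = len(a)
    num = 0.0
    for i in np.unique(a):
        for j in np.unique(b):
            nij = np.sum((a == i) & (b == j))        # |A_i ∩ B_j|
            if nij > 0:
                num += nij * np.log(n * nij / (np.sum(a == i) * np.sum(b == j)))
    den_a = sum(np.sum(a == i) * np.log(np.sum(a == i) / n) for i in np.unique(a))
    den_b = sum(np.sum(b == j) * np.log(np.sum(b == j) / n) for j in np.unique(b))
    return num / np.sqrt(den_a * den_b)      # den_a, den_b <= 0, so the product is >= 0

def centroid_affinity(C):
    """Centroid affinity, Eq. (30): mean pairwise cosine similarity of unit-norm centroids."""
    k = C.shape[0]
    pairs = [C[j] @ C[l] for j in range(k) for l in range(j + 1, k)]
    return 2.0 / (k * (k - 1)) * np.sum(pairs)
```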
B.6 IMPACTS OF REPULSIVE TERM ON RANK ESTIMATION
Table 8 compares the rank estimation results when the clustering is performed with and without the repulsive term. In this experiment, we use MORPH II (setting A) and set k = 2. Without the repulsive term, lower-quality clusters make the training of the comparator more difficult. As a result, the age estimation performance degrades significantly in terms of both MAE and CS. In other words, the quality of clustering greatly affects the rank estimation performance, and the proposed DRC algorithm provides high-quality clusters suitable for the rank estimation.
B.7 IMPACTS OF THE NUMBER k OF CLUSTERS ON RANK ESTIMATION
Tables 9 and 10 compare the rank estimation results according to the number k of clusters on the MORPH II (setting A) and AADB datasets, respectively. On MORPH II, the age estimation performance decreases as k increases. Since the training set in setting A consists of only 4,394 images, each cluster at a large k contains too few instances. Thus, the comparator is trained inefficiently with fewer training pairs, degrading the performance. In contrast, AADB contains a large number of diverse images. Due to the diversity, a relatively large k should be used to group images into meaningful clusters. Also, even at a large k, each cluster contains a sufficient number of data. Thus, as compared to MORPH II, results on AADB are less sensitive to k. In addition, we provide age estimation results on the balanced dataset in Table 14, in which k has marginal impacts on the rank estimation performance.
As mentioned previously, the quality of clustering significantly affects the rank estimation performance. Also, similarly to other algorithms based on k-means, the clustering quality of DRC is affected by k. Hence, for the proposed algorithm to be used on a new ordered dataset, k should be determined effectively to obtain good clustering and rank estimation results. Readers interested in the selection of k are referred to Pham et al. (2005).
B.8 CLUSTERING USING OTHER FEATURES
Instead of clustering identity features h^{x_1}_id, h^{x_2}_id, . . . , h^{x_n}_id, we test clustering order features h^{x_1}_or, h^{x_2}_or, . . . , h^{x_n}_or or whole features h^{x_1}, h^{x_2}, . . . , h^{x_n}. In this test, MORPH II (setting A) is used and k = 2. Figure 14 compares the clustering results. When using order features or whole features, instances are divided by their ages. We see that instances younger than 30 mostly belong to cluster 1 and the others to cluster 2. Table 11 compares the performances of the age estimators trained using these clustering results. The best performance is achieved when the clustering is done on identity features.
B.9 RELIABILITY OF FEATURE DECOMPOSITION
Performing the comparison using order features only does not theoretically guarantee that order-related information is fully excluded from identity features. However, we observed empirically that the decomposition is sufficiently reliable if the dimension of an identity feature is selected properly. If the dimension is too small, the encoder may lose a significant portion of order-irrelevant information. On the contrary, if the dimension is too large, the encoder may encode order information redundantly. In our experiments, we use 128- and 896-dimensional vectors for order and identity features (d_or = 128 and d_id = 896), and obtain satisfactory decomposition results. To show that order-related information is excluded from identity features, we compare the accuracies of the comparator (i.e. the ternary classifier) when identity features are used instead of order features. Specifically, we first extract order features and identity features from all instances in MORPH II using the pretrained ORID network.
Then, we train two comparators that predict the ordering relationship between two instances x and y: one takes the order features h^x_or and h^y_or as input, and the other takes the identity features h^x_id and h^y_id. Table 12 lists the comparator accuracies. We see that the comparator fails to predict ordering relationships from identity features. Also, Figure 15 shows t-SNE visualizations of the identity feature spaces with age or cluster labels, which confirm that order-related information is excluded effectively from identity features.
B.10 MAP ESTIMATION
Let us describe the MAP estimation rule for rank estimation in Section 3.4. Given a test instance x, we select references y_i by Eq. (13). Then, by comparing x with y_i, the comparator yields the probability vector p^{xy_i} = (p^{xy_i}_≻, p^{xy_i}_≈, p^{xy_i}_≺) for the three cases in Eq. (4). Thus, given y_i, the probability of θ(x) = r can be written as
P_{θ(x)}(r | y_i) = p^{xy_i}_≻ · P_{θ(x)}(r | x ≻ y_i) + p^{xy_i}_≈ · P_{θ(x)}(r | x ≈ y_i) + p^{xy_i}_≺ · P_{θ(x)}(r | x ≺ y_i).   (31)
Suppose that x ≻ y_i. Then, θ(x) − θ(y_i) = r − i > τ from Eq. (4). Also, the maximum possible rank is m. We hence assume that θ(x) has the uniform distribution between i + τ + 1 and m. In other words, P_{θ(x)}(r | x ≻ y_i) ∼ U(i + τ + 1, m), where U denotes a discrete uniform distribution. Similarly, we have P_{θ(x)}(r | x ≈ y_i) ∼ U(i − τ, i + τ) and P_{θ(x)}(r | x ≺ y_i) ∼ U(1, i − τ − 1). Then, we approximate the a posteriori probability P_{θ(x)}(r | y_1, . . . , y_m) by averaging those single-reference inferences in Eq. (31):
P_{θ(x)}(r | y_1, . . . , y_m) = (1/m) ∑_{i=1}^m P_{θ(x)}(r | y_i).   (32)
Finally, we obtain the MAP estimate of the rank of x, which is given by
θ̂(x) = argmax_{r∈Θ} P_{θ(x)}(r | y_1, . . . , y_m).   (33)
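A minimal sketch of this MAP rule, assuming integer ranks 1, …, m, an integer threshold τ, one reference per rank, and an array P of comparator outputs; the function name and input shapes are illustrative assumptions.

```python
import numpy as np

def map_rank(P, tau, m):
    """MAP rank estimation from comparator outputs, Eqs. (31)-(33).

    P   : (m, 3) array; P[i-1] = (p_succ, p_approx, p_prec) for x vs. reference y_i.
    tau : integer similarity threshold of Eq. (4).
    m   : number of rank classes, so ranks are r = 1, ..., m.
    """
    post = np.zeros(m)                               # unnormalized posterior over ranks
    for i in range(1, m + 1):
        p_succ, p_approx, p_prec = P[i - 1]
        for (lo, hi), p in [((i + tau + 1, m), p_succ),      # x ≻ y_i : U(i+τ+1, m)
                            ((i - tau, i + tau), p_approx),  # x ≈ y_i : U(i−τ, i+τ)
                            ((1, i - tau - 1), p_prec)]:     # x ≺ y_i : U(1, i−τ−1)
            lo, hi = max(1, int(lo)), min(m, int(hi))
            if hi >= lo:                             # spread p uniformly over [lo, hi]
                post[lo - 1:hi] += p / (hi - lo + 1)
    return int(np.argmax(post)) + 1                  # Eq. (33); the 1/m factor of
                                                     # Eq. (32) does not change the argmax
```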
C FACIAL AGE ESTIMATION – MORE EXPERIMENTS AND DETAILS
C.1 IMPLEMENTATION DETAILS
We initialize the parameters of the ORID network for facial age estimation using the Glorot normal method (Glorot & Bengio, 2010). We use the Adam optimizer with a learning rate of 10^−4 and decrease the rate by a factor of 0.5 every 50,000 steps. For data augmentation, we do random horizontal flips only. This is because other augmentation schemes, such as brightness or contrast modification, may deform identity information such as skin colors. Also, d_or and d_id are set to 128 and 896, respectively. In Eq. (6), we set α to 0.1 and decrease it to 0.05 after 200 epochs.
C.2 EVALUATION SETTINGS
For evaluation on the MORPH II dataset, we adopt four widely used testing protocols.
• Setting A – 5,492 images of the Caucasian race are selected and then randomly divided into two non-overlapping parts: 80% for training and 20% for testing.
• Setting B – 21,000 images of Africans and Caucasians are selected to satisfy two constraints: the ratio between Africans and Caucasians should be 1:1, and that between females and males 1:3. They are split into three disjoint subsets S1, S2, and S3. The training and testing are repeated twice: 1) training on S1, testing on S2 + S3, and 2) training on S2, testing on S1 + S3. The average performance of the two experiments is reported.
• Setting C – This setting is the 5-fold cross-validation on the entire dataset. Images are randomly split into five folds, but the same person's images should belong to only one fold. The average performance of the five experiments is reported.
• Setting D – This is called the 80-20 protocol. Without any constraint, the entire dataset is randomly divided into the training and test sets with ratio 8:2. Thus, setting D is similar to one experiment in setting C, but the same person's images may belong to both training and test sets.
C.3 CLUSTERING
We provide more clustering results on MORPH II. Figure 16 shows the clustering results on setting B at k = 2. Since setting B consists of Africans and Caucasians, the images are clustered according to the races. Also, Table 13 summarizes the clustering results for settings A, B, and C at k = 2. The clustering result on setting D is omitted, since it is almost identical with that on setting C. In all settings, the proposed DRC-ORID divides facial images into two clusters with meaningful criteria, which are gender for setting A and race for settings B, C, and D.
C.4 AGE ESTIMATION
We implement a VGG-based pairwise comparator and follow the settings of Lim et al. (2020). Specifically, instead of Eq. (4), we use the ternary categorization based on the geometric ratio and set τ = 0.1. We initialize its feature extractor using VGG16 pre-trained on the ILSVRC2012 dataset (Deng et al., 2009) and its fully connected layers using the Glorot normal method. We employ the Adam optimizer with a minibatch size of 32. We start with a learning rate of 10^−4 and shrink it by a factor of 0.5 after every 80,000 steps.
Table 14 lists age estimation results on the balanced dataset according to the number k of clusters. OL-supervised trains the comparator using supervised clusters separated according to gender or ethnic group annotations. Specifically, the supervised clusters at k = 2, 3, and 6 are divided according to genders, ethnic groups, and both genders and ethnic groups, respectively. On the other hand, OL-unsupervised and the proposed algorithm determine their clusters in unsupervised manners. We see that the proposed algorithm performs better than the conventional algorithms in all tests. By employing multiple clusters, the proposed algorithm improves MAE by 0.12 and CS by 0.73% on average. In contrast, OL-unsupervised improves MAE by 0.04 and CS by 0.07% only. This indicates that, by employing identity features, the proposed DRC-ORID algorithm groups instances into meaningful clusters, in which instance ranks can be compared more accurately.
C.5 AGE TRANSFORMATION
More age transformation results are shown in Figure 17. Note that, in Figure 6, given an image x, we select the reference y at a target age, whose identity feature is the most similar to that of x, as in Eq. (13). Hence, the image x and the reference y have similar appearances. On the other hand, Figure 17 shows transformed images using randomly selected references. The first two cases transform the same image x with different references, but the transformed images are similar. Also, even when the gender and/or race of y are different from those of x, the identity information of x is preserved well in the transformed image. This confirms the reliability of ORID.
C.6 RECONSTRUCTION
Figure 18 shows reconstructed faces using the whole feature (h^x_or ⊕ h^x_id), the order feature only (h^x_or ⊕ 0), and the identity feature only (0 ⊕ h^x_id). Without the order feature, each decoded face is degraded, but the person can be identified. In contrast, without the identity feature, the reconstruction is not related to the person, except that it seems to be an average face of people at the same age as the person. These results confirm that order and identity features are complementary.
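This reconstruction probe can be sketched by zeroing one half of the feature before decoding. The encoder/decoder handles F and G and the function name are our assumptions, while d_or = 128 and d_id = 896 follow Appendix A.

```python
import numpy as np

def probe_reconstructions(x, F, G, d_or=128, d_id=896):
    """Decode an image from the whole, order-only, and identity-only features (C.6)."""
    h = F(x)                                              # h = h_or ⊕ h_id
    h_or, h_id = h[:d_or], h[d_or:]
    whole    = G(np.concatenate([h_or, h_id]))            # h_or ⊕ h_id
    order    = G(np.concatenate([h_or, np.zeros(d_id)]))  # h_or ⊕ 0
    identity = G(np.concatenate([np.zeros(d_or), h_id]))  # 0 ⊕ h_id
    return whole, order, identity
```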
C.7 REFERENCE IMAGES
Figure 19 shows examples of the reference images used for the rank estimation on MORPH II (setting D) at k = 2. Given a test image x, the reference image y_i of age class i is selected via Eq. (13) from the training set. In the default mode, a single reference image is selected for each age i. However, the top-r most similar references can be selected and used for the estimation. We use a single reference because multiple references improve the estimation performance only negligibly. In Figure 19, the top three reference images are shown for each age from 16 to 53. In setting D, the two clusters roughly separate Africans from the others. However, we see that test and reference images tend to have the same gender, as well as the same race. Furthermore, they have similar appearances, even when they have a large age difference.
D AESTHETIC SCORE REGRESSION – MORE EXPERIMENTS AND DETAILS
D.1 IMPLEMENTATION DETAILS
For aesthetic score regression, we implement a pairwise comparator based on EfficientNet-B4 (Tan & Le, 2019). The pairwise comparator has the same architecture as that for facial age estimation, except for the backbone network. To initialize the backbone, we adopt the parameters pre-trained on the ILSVRC2012 dataset. We initialize the other layers using the Glorot normal method. We update the network parameters using the Adam optimizer with a minibatch size of 16. We start with a learning rate of 10^−4 and shrink it by a factor of 0.8 every 8,000 steps. Training images are augmented by random horizontal flipping. We set τ = 0.15 for the ternary categorization in Eq. (4).
D.2 CLUSTERING
Notice that the AADB dataset contains images of diverse contents and styles. Hence, when clustering with a small k, it is hard to observe the characteristics shared by images within each cluster, whereas k = 2 or 3 is sufficient for facial age data. We empirically found that at least eight clusters are required (k = 8) to partition the AADB dataset by meaningful criteria. Figure 20 provides more examples of clustering results at k = 8.
D.3 REFERENCE IMAGES
Figure 21 shows examples of the reference images used for the aesthetic score regression. Given a test image x, the reference image y_i of aesthetic class i is selected by Eq. (13). For the aesthetic score regression, we use a single reference image for each aesthetic class, as done in the facial age estimation. Thus, 101 reference images are used in total.
E HCI CLASSIFICATION – MORE EXPERIMENTS AND DETAILS
E.1 IMPLEMENTATION DETAILS
For DRC-ORID for HCI classification, we set all hyper-parameters in the same way as we do in Appendix C.1. We set τ = 1 for the ternary categorization of the ordering relationship in Eq. (4). Note that there are five decade classes from 1 to 5.
E.2 CLUSTERING
Figure 22 shows some sample images in the HCI dataset, which are ordered according to the decade classes. Figure 23 shows more example HCI images grouped into four clusters (k = 4).
E.3 REFERENCE IMAGES
Figure 24 shows the five reference images for each of six test image examples. Note that, given a test image, reference images of similar contents, tones, or composition are selected from the five decade classes.
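As a sketch of the reference selection in Eq. (13), generalized to the top-r variant discussed in C.7, one might write the following; the dictionary-based bookkeeping and the function name are our assumptions.

```python
import numpy as np

def select_references(h_id, cluster_feats, cluster_ranks, r=1):
    """Pick the top-r references per rank class from the assigned cluster, Eq. (13).

    h_id          : (d,) unit-norm identity feature of the test instance x.
    cluster_feats : (n_l, d) identity features of training instances in cluster C_l.
    cluster_ranks : (n_l,) ground-truth ranks of those instances.
    """
    refs = {}
    sims = cluster_feats @ h_id                 # cosine similarities (unit-norm features)
    for rank in np.unique(cluster_ranks):
        idx = np.where(cluster_ranks == rank)[0]
        top = idx[np.argsort(-sims[idx])[:r]]   # r most similar instances of this rank
        refs[int(rank)] = top
    return refs
```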
1. What are the strengths and weaknesses of the proposed method for ordered learning?
2. How does the reviewer assess the novelty and depth of exploration of the approach?
3. What are the potential use cases and broader impacts of the system, particularly in the task of estimating ages from photographs?
4. What are the ethical, safety, and fairness concerns related to this application?
5. Were there any other clustering objectives considered beyond the repulsive-based one?
6. How does the paper's approach connect to previous works like Learning to Order Things and Logarithmic Time One-Against-Some?
Review
Review
Summary of paper: This paper considers the task of ordered learning: predicting a class label for a point among an ordered graph of classes. The paper proposes a clustering objective that encourages the model to separate data into groups such that classification prediction is easier within each cluster. The method is intuitive, clearly explained, and well motivated. The paper indicates state-of-the-art results on a task of estimating ages of individuals from photographs.
Review summary: Missing crucial discussion of the use cases / broader impact of the task of estimating ages from photographs. Otherwise an intuitive and effective method for ordered data; effective empirical results; limited novelty / exploration of the methodological approach.
Strengths: The authors describe an intuitive and effective method for making predictions on ordered data. The approach uses an intuitive clustering-based method that groups data into subsets where items are easier to order. The paper is clearly written and explains the approach clearly. The paper shows several examples of the predicted output of the method and shows results on two tasks (estimating ages, aesthetic score regression). The method achieves state-of-the-art results on the task of estimating ages and is competitive on the other task. The authors show further results on age transformation.
Weaknesses:
Broader impacts of applications: One of the primary applications of the paper is estimating ages of individuals based on their photographs. While this paper is not the first to focus on such a task, it is very remiss of this paper not to discuss the motivations for this task and the broader impacts and ethical considerations of this task. I would very strongly encourage the authors to add a discussion of the potential uses of their system and the benefits (as well as harms) that come from these uses. I think that it is crucially important to discuss this both in the context of this work as well as previous work on the task. In particular, it would be important to mention how the use of clustering (into groups based on gender/race) in this model factors into potential biases when the model is used. I think it would be necessary to include this discussion in the body of the paper itself rather than an appendix. I greatly believe that this discussion is necessary, and the lack of it is one of my top concerns about the paper.
Distinctions between total ordering and partial ordering / related work: The presentation of the approach indicates that observations are not directly comparable across clusters. However, the overall model does in fact provide a total ordering -- each point is mapped to one of the clusters and then compared within that cluster. I think the presentation would be greatly improved if the method were described not in a way that implies a partial ordering (comparisons only within each cluster), but instead as a total ordering function that is multi-modal and cluster-based. Further, I think it would be important to discuss the relationships between this work and work on partially ordered sets, particularly work on combining partially ordered sets. It might also be good to consider more related work on ordering, such as Learning to Order Things (https://papers.nips.cc/paper/1431-learning-to-order-things.pdf).
Also, I think that it is especially important to address other work (such as that in extreme classification) that organizes class labels into groups that are easier to discriminate between (e.g., Logarithmic Time One-Against-Some, https://arxiv.org/abs/1606.04988).
Novelty of approach / depth of exploration: The core novelty of the approach is the use of clustering to separate the data into groups that are easier to rank. This is a nice idea and appears to give strong empirical benefits. I worry that, since the clustering component is the core contribution of the paper, the method of clustering is not very deeply explored empirically. The idea is intuitive, but I feel the limited deviation from classic approaches that combine clustering and classification would benefit from additional analysis, particularly along the dimension of the clustering objective that is selected.
Questions for the authors:
What are the potential use cases for the system and its applications to age prediction? What are the fairness/ethical/safety concerns of such an application?
Were clustering objectives other than the repulsive-based one considered?
How does your work connect to papers such as Logarithmic Time One-Against-Some (https://arxiv.org/abs/1606.04988), which also organize classes into clusters?
ICLR
Title
Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition
Abstract
We propose the deep repulsive clustering (DRC) algorithm of ordered data for effective order learning. First, we develop the order-identity decomposition (ORID) network to divide the information of an object instance into an order-related feature and an identity feature. Then, we group object instances into clusters according to their identity features using a repulsive term. Moreover, we estimate the rank of a test instance by comparing it with references within the same cluster. Experimental results on facial age estimation, aesthetic score regression, and historical color image classification show that the proposed algorithm can cluster ordered data effectively and also yield excellent rank estimation performance.
1 INTRODUCTION
There are various types of 'ordered' data. For instance, in facial age estimation (Ricanek & Tesafaye, 2006), face photos are ranked according to the ages. Also, in a video-sharing platform, videos can be sorted according to the numbers of views or likes. In these ordered data, classes, representing ranks or preferences, form an ordered set (Schröder, 2003). Attempts have been made to estimate the classes of objects, including multi-class classification (Pan et al., 2018), ordinal regression (Frank & Hall, 2001), and metric regression (Fu & Huang, 2008). Recently, a new approach, called order learning (Lim et al., 2020), was proposed to solve this problem. Order learning is based on the idea that it is easier to predict the ordering relationship between objects than to estimate their absolute classes (or ranks); telling the older one between two people is easier than estimating their exact ages. Hence, in order learning, the pairwise ordering relationship is learned from training data. Then, the rank of a test object is estimated by comparing it with reference objects with known ranks. However, some objects cannot be easily compared. It is less easy to tell the older one between people of different genders than between those of the same gender. Lim et al. (2020) tried to deal with this issue by dividing an ordered dataset into disjoint chains. However, the chains were not clearly separated, and no meaningful properties were discovered from the chains.
In this paper, we propose a reliable clustering algorithm, called deep repulsive clustering (DRC), of ordered data based on order-identity decomposition (ORID). Figure 1 shows a clustering example of ordered data. Note that some characteristics of objects, such as genders or races in age estimation, are not related to their ranks, and the ranks of objects sharing such characteristics can be compared more reliably.
To discover such characteristics without any supervision, the proposed ORID network decomposes the information of an object instance into an order-related feature and an identity feature unrelated to the rank. Then, the proposed DRC clusters object instances using their identity features; in each cluster, the instances share similar identity features. Furthermore, given a test instance, we decide its cluster based on the nearest neighbor (NN) rule, and compare it with reference instances within the cluster to estimate its rank. To this end, we develop a maximum a posteriori (MAP) estimation rule. Experimental results on ordered data for facial age estimation, aesthetic score regression (Kong et al., 2016), and historical color image classification (Palermo et al., 2012) demonstrate that the proposed algorithm separates ordered data clearly into meaningful clusters and provides excellent rank estimation performances for unseen test instances.
The contributions of this paper can be summarized as follows.
• We first propose the notion of identity features of ordered data and develop the ORID network for the order-identity decomposition.
• We develop the DRC algorithm to cluster data on a unit sphere effectively using a repulsive term. We also prove the local optimality of the solution.
• We propose the MAP decision rule for rank estimation. The proposed algorithm provides the state-of-the-art performances for facial age estimation and aesthetic score regression.
2 RELATED WORK
2.1 ORDER LEARNING
The notion of order learning was first proposed by Lim et al. (2020). It aims to determine the order graph of classes and classify an object into one of the classes. In practice, it trains a pairwise comparator, which is a ternary classifier, to categorize the relationship between two objects into one of three cases: one object is bigger than, similar to, or smaller than the other. Then, it estimates the rank of a test object by comparing it with reference objects with known ranks. However, not every pair of objects is easily comparable. Although Lim et al. (2020) attempted to group objects into clusters, in which objects could be more accurately compared, their clustering results were unreliable.
Pairwise comparison has been used to estimate object ranks because relative evaluation is easier than absolute evaluation in general. Saaty (1977) proposed the scaling method to estimate absolute priorities from relative priorities, which has been applied to various decision processes, including aesthetic score regression (Lee & Kim, 2019). Also, some learning to rank (LTR) algorithms are based on pairwise comparison (Liu, 2009; Cohen et al., 1998; Burges et al., 2005; Tsai et al., 2007). Order learning attempts to combine (possibly inconsistent) pairwise ordering results to determine the rank of each object. Thus, it is closely related to Cohen et al.'s LTR algorithm (1998), which learns a pairwise preference function and obtains a total order of a set to maximize agreements among preference judgments of pairs of elements. Also, order learning is related to rank aggregation (Dwork et al., 2001), in which partially ordered sets are combined into a linearly ordered set to achieve the maximum consensus among those partial sets. Rank aggregation has been studied in various fields (Brüggemann et al., 2004). Since optimal aggregation is NP-hard, Dwork et al. (2001) proposed an approximate algorithm, called Markov chain ordering.
There are many other approximate schemes, such as the local Kemenization, Borda count, and scaled footrule aggregation.
2.2 CLUSTERING
Data clustering is a fundamental problem of partitioning data into disjoint groups, such that elements in the same group are similar to one another but elements from different groups are dissimilar. Although various clustering algorithms have been proposed (Hartigan & Wong, 1979; Ester et al., 1996; Kohonen, 1990; Dhillon & Modha, 2001; Reynolds, 2009), conventional algorithms often yield poor performance on high-dimensional data due to the curse of dimensionality and the ineffectiveness of similarity metrics. Dimensionality reduction and feature transform methods have been studied to map raw data into a new feature space, in which they are more easily separated. Linear transforms, such as PCA (Wold et al., 1987), and non-linear transformations, including kernel methods (Hofmann et al., 2008) and spectral clustering (Ng et al., 2002), have been proposed. Recently, deep neural networks have been adopted effectively as feature embedding functions (LeCun et al., 2015), and these deep-learning-based feature embedding functions have been combined with classical clustering algorithms. For instance, Caron et al. (2018) proposed a deep clustering algorithm based on k-means. It clusters features from a neural network and then trains the network using the cluster assignments as pseudo-labels. This is done iteratively. Also, Yang et al. (2016) jointly learned feature representations and clustered images based on agglomerative clustering. Chang et al. (2017) recast the image clustering task into a binary classification problem to predict whether a pair of images belong to the same cluster or different clusters. Similarly to these algorithms, we use a neural network to determine a feature space in which clustering is done more effectively. However, we consider the clustering of ordered data, and each cluster should consist of elements whose ranks can be compared more accurately.
There are conventional approaches that use clustering ideas to aid in classification or rank estimation. For example, Yan et al. (2015) developed a hierarchical classifier, which clusters fine categories into coarse category groups and classifies an object into a fine category within its coarse category group. For extreme multiclass classification, Daumé III et al. (2017) proposed to predict a class label among candidate classes only, which are dynamically selected by the recall tree. Note, however, that the leaves of the recall tree do not partition the set of classes. Also, for age estimation, Li et al. (2019) proposed a tree-like structure, called bridge-tree, to divide data into overlapping age groups and train a local regressor for each group. The set of local regressors can be more accurate than a global regressor at dealing with the entire age range. Whereas these conventional approaches group data in the label dimension to perform their tasks more effectively, the proposed algorithm clusters data in the dimension orthogonal to the label dimension. In other words, we cluster data using identity features, instead of using order features.
3 PROPOSED ALGORITHM
3.1 PROBLEM DEFINITION
An order is a binary relation, often denoted by ≤, on a set Θ = {θ_1, θ_2, . . . , θ_m} (Schröder, 2003). It should satisfy the three properties of reflexivity (θ_i ≤ θ_i for all i), antisymmetry (θ_i ≤ θ_j and θ_j ≤ θ_i imply θ_i = θ_j), and transitivity (θ_i ≤ θ_j and θ_j ≤ θ_k imply θ_i ≤ θ_k).
Then, Θ is called a partially ordered set. Furthermore, if every pair of elements are comparable (θ_i ≤ θ_j or θ_j ≤ θ_i for all i, j), Θ is called a chain or linearly ordered set. An order describes ranks or priorities of classes. For example, in age estimation, θ_i may represent the age class of i-year-olds. Then, θ_14 ≤ θ_49 represents that 14-year-olds are younger than 49-year-olds. As mentioned previously, it is less easy to tell the older one between people of different genders. An algorithm, hence, may compare a subject with reference subjects of the same gender only. In such a case, each age class θ_i represents two subclasses θ_i^female and θ_i^male of different types, and the algorithm compares only subjects of the same type. Lim et al. (2020) assumed that subclasses of different types are incomparable and thus the set of subclasses is the union of k disjoint chains, where k is the number of types. However, in many ranking applications, objects of different types can be compared (although less easily than those of the same type are). Thus, instead of assuming incomparability across chains, we assume that there is a total order on Θ = {θ_1, θ_2, . . . , θ_m}, in which each class θ_i consists of k types of subclasses, and that object instances of the same type are more easily compared than those of different types.
Suppose that n training instances in X = {x_1, x_2, . . . , x_n} are given. Also, suppose that there are m ranks and the ground-truth rank of each instance is known. In this sense, X contains ordered data. The problem is twofold. The first goal is to decompose the whole set of instances X into k disjoint clusters {C_j}_{j=1}^k in which instances are more easily compared;
X = ∪_{j=1}^k C_j   (1)
where C_i ∩ C_j = ∅ for i ≠ j. In other words, we aim to partition the ordered data in X into k clusters, by grouping them according to their characteristics unrelated to their ranks. These characteristics, which tend to remain the same even when an object experiences rank changes, are referred to as 'identity' features in this work. For example, in age estimation, genders or races can be identity features. However, we perform the clustering without any supervision for identity features. Notice that instances within a cluster would be compared more easily than those across clusters, since they have similar identity features. The number k of clusters is assumed to be known a priori. Impacts of k on the clustering performance are discussed in Appendix B.7. The second goal is to assign an unseen test instance to one of the clusters and determine its rank by comparing it with reference instances within the cluster. To achieve these goals, we propose the ORID network and the DRC algorithm.
3.2 ORDER-IDENTITY DECOMPOSITION
In general, object instances can be compared more easily when they have more similar identity features irrelevant to order. Therefore, we decompose the information of each object instance into an order feature and an identity feature. To this end, we propose the ORID network in Figure 2, composed of three parts: autoencoder, discriminator, and comparator.
1) Autoencoder: Similarly to deep clustering algorithms in (Yang et al., 2017; Dizaji et al., 2017; Chen et al., 2017; Ji et al., 2017), we use the autoencoder G ◦ F(·), based on a neural network, to extract feature vectors. The encoder h^x = F(x) maps an input vector x to a feature vector h^x, while the decoder x̂ = G(h^x) reconstructs x̂ from h^x.
By minimizing the reconstruction loss ‖x − x̂‖_1, F is trained to represent x compactly with as little loss of information as possible. We decompose the overall feature h^x ∈ R^{d_or+d_id} into the order feature h^x_or and the identity feature h^x_id, given by
h^x_or = [h^x_1, h^x_2, . . . , h^x_{d_or}]^t   (2)
h^x_id = [h^x_{d_or+1}, h^x_{d_or+2}, . . . , h^x_{d_or+d_id}]^t / ‖[h^x_{d_or+1}, h^x_{d_or+2}, . . . , h^x_{d_or+d_id}]‖   (3)
where d_or and d_id are the dimensions of h^x_or and h^x_id. However, without additional control, the output h^x of the neural network F would be highly entangled (Higgins et al., 2018). To put together order-related information into h^x_or, we employ the comparator.
2) Comparator: Using the order features h^x_or and h^y_or of a pair of instances x and y, we train the comparator, which classifies their ordering relationship into one of three categories 'bigger,' 'similar,' and 'smaller':
x ≻ y if θ(x) − θ(y) > τ,   x ≈ y if |θ(x) − θ(y)| ≤ τ,   x ≺ y if θ(x) − θ(y) < −τ,   (4)
where θ(·) denotes the class of an instance. As in (Lim et al., 2020), '≻, ≈, ≺' represent the ordering relationship between instances, while '>, =, <' denote the mathematical order between classes. The comparator outputs the softmax probability p^{xy} = (p^{xy}_≻, p^{xy}_≈, p^{xy}_≺). It is trained to minimize the cross-entropy between p^{xy} and the ground-truth one-hot vector q^{xy} = (q^{xy}_≻, q^{xy}_≈, q^{xy}_≺). Because it is trained jointly with the autoencoder, the information deciding the ordering relationship tends to be encoded into the order features h^x_or and h^y_or. On the other hand, the remaining information, necessary for the reconstruction of x̂ and ŷ, is encoded into the identity features h^x_id and h^y_id.
3) Discriminator: We adopt the discriminator D that tells real images from synthesized images generated by the decoder G. Using the GAN loss (Goodfellow et al., 2014), the discriminator helps the decoder to reconstruct more realistic outputs x̂ and ŷ. Appendix A provides the detailed network structures of these components in ORID.
3.3 DEEP REPULSIVE CLUSTERING
After obtaining the identity features h^{x_1}_id, h^{x_2}_id, . . . , h^{x_n}_id of all instances x_i ∈ X, we partition them into k clusters. Each cluster contains instances that are more easily comparable to one another. The identity features are normalized in Eq. (3) and lie on the unit sphere in R^{d_id}. In other words, we cluster data points on the unit sphere. Thus, the cosine similarity is a natural affinity metric. Let C_j, 1 ≤ j ≤ k, denote the k clusters. Also, let c_j, constrained to be on the unit sphere, denote the 'centroid' or the representative vector for the instances in cluster C_j. We define the quality of cluster C_j as
∑_{x∈C_j} ( (h^x_id)^t c_j − (α/(k−1)) ∑_{l≠j} (h^x_id)^t c_l )   (5)
where the first term is the similarity of an instance in C_j to the centroid c_j, the second term with the negative sign quantifies the average dissimilarity of the instance from the other centroids, and α is a nonnegative weight. For a high-quality cluster, instances should be concentrated around the centroid and be far from the other clusters. The second term is referred to as the repulsive term, as its objective is similar to the repulsive rule in (Lee et al., 2015). Although conventional methods also try to increase inter-cluster dissimilarity (Ward Jr, 1963; Lee et al., 2015), to the best of our knowledge, DRC is the first attempt to use an explicit repulsive term in deep clustering, which jointly optimizes clustering and feature embedding.
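As a concrete reading of Eq. (5), the quality of a single cluster can be sketched as follows; the array conventions and function name are our assumptions.

```python
import numpy as np

def cluster_quality(H_j, C, j, alpha):
    """Quality of cluster C_j, Eq. (5): attraction to c_j minus weighted attraction to others.

    H_j   : (n_j, d) identity features of the instances in C_j.
    C     : (k, d) unit-norm centroids.
    j     : index of the cluster being scored.
    alpha : nonnegative repulsive weight.
    """
    k = C.shape[0]
    own = H_j @ C[j]                             # (h^x_id)^t c_j for each member
    others = H_j @ np.delete(C, j, axis=0).T     # similarities to the other k-1 centroids
    return np.sum(own - alpha / (k - 1) * others.sum(axis=1))
```

Summing this score over all k clusters gives the overall objective J defined next.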
Next, we measure the overall quality of the clustering by
J({C_j}_{j=1}^k, {c_j}_{j=1}^k) = ∑_{j=1}^k ∑_{x∈C_j} ( (h^x_id)^t c_j − (α/(k−1)) ∑_{l≠j} (h^x_id)^t c_l ).   (6)
We aim to find the optimum clusters to maximize this objective function J, yet finding the global optimum is NP-complete (Kleinberg et al., 1998; Garey et al., 1982). Hence, we propose an iterative algorithm, called DRC, to find a local optimum, as in the k-means algorithm (Gersho & Gray, 1991).
1. Centroid rule: After fixing the clusters {C_j}_{j=1}^k, we update the centroids {c_j}_{j=1}^k to maximize J in Eq. (6). Because the centroids should lie on the unit sphere, we solve the constrained optimization problem:
maximize J({c_j}_{j=1}^k) subject to c_j^t c_j = 1 for all j = 1, . . . , k.   (7)
Using Lagrange multipliers (Bertsekas, 1996), the optimal centroids are obtained as
c_j = ( ∑_{x∈C_j} h^x_id − (α/(k−1)) ∑_{x∈X\C_j} h^x_id ) / ‖ ∑_{x∈C_j} h^x_id − (α/(k−1)) ∑_{x∈X\C_j} h^x_id ‖.   (8)
2. NN rule: On the other hand, after fixing the centroids, we update the membership of each instance to maximize J in Eq. (6). The optimal cluster C_j is given by
C_j = { x | (h^x_id)^t c_j ≥ (h^x_id)^t c_l for all 1 ≤ l ≤ k }.   (9)
In other words, an instance should be assigned to C_j if its nearest centroid is c_j. We apply the centroid rule and the NN rule iteratively until convergence. Because both rules monotonically increase the same objective function J and the inequality J ≤ n + (α/(k−1))n always holds, J is guaranteed to converge to a local maximum. Readers interested in the convergence are referred to (Sabin & Gray, 1986; Pollard, 1982).
Without the repulsive term in Eq. (6) (i.e. at α = 0), centroid c_j in Eq. (8) is updated by
c_j = ∑_{x∈C_j} h^x_id / ‖∑_{x∈C_j} h^x_id‖,   (10)
as done in the spherical k-means (Dhillon & Modha, 2001). In contrast, with a positive α, the objective function J increases when the centroids are far from one another. Ideally, in equilibrium, the centroid of a cluster should be the opposite of the centroid of all the other clusters;
( ∑_{x∈C_j} h^x_id / ‖∑_{x∈C_j} h^x_id‖ )^t ( ∑_{x∈X\C_j} h^x_id / ‖∑_{x∈X\C_j} h^x_id‖ ) = −1 for all j = 1, 2, . . . , k.   (11)
Note that the ORID network, and thus the encoded feature space, are trained jointly with the repulsive clustering. As the training goes on, the centroids repel one another, and the clusters are separated more clearly due to the repulsive term.
We jointly optimize the clusters and the ORID network parameters, as described in Algorithm 1. First, we train the ORID network for warm-up epochs, by employing every pair of instances x and y as input. Then, using the identity features, we partition the input data into k clusters using k-means. Second, we repeat the fine-tuning of the ORID network and the repulsive clustering alternately. In the fine-tuning, a pair of x and y are constrained to be from the same cluster, and the following loss function is employed:
ℓ = λ_rec ℓ_rec + λ_clu ℓ_clu + λ_com ℓ_com + λ_gan ℓ_gan.   (12)
Appendix B describes this loss function in detail, proves the optimality of the centroid and NN rules in Eqs. (8) and (9), and analyzes the impacts of the repulsive term in Eq. (6).
Algorithm 1 DRC-ORID
Input: Ordered data X = {x_1, x_2, . . . , x_n}, k = the number of clusters
1: Train ORID network for warm-up epochs to minimize loss λ_rec ℓ_rec + λ_com ℓ_com + λ_gan ℓ_gan
2: Partition X into C_1, C_2, . . . , C_k using k-means
3: repeat
4: Fine-tune ORID network to minimize loss λ_rec ℓ_rec + λ_clu ℓ_clu + λ_com ℓ_com + λ_gan ℓ_gan
5: repeat
6: for all j = 1, 2, . . . , k do
7: Update centroid c_j via Eq. (8) ▷ Centroid rule
8: end for
9: for all j = 1, 2, . . . , k do
10: Update cluster C_j via Eq. (9) ▷ NN rule
11: end for
12: until convergence or predefined number of iterations
13: until predefined number of epochs
Output: Clusters {C_j}_{j=1}^k, centroids {c_j}_{j=1}^k, ORID network
3.4 RANK ESTIMATION
Using the output of the DRC-ORID algorithm, we can estimate the rank of an unseen test instance x. First, we extract its identity feature h^x_id using the ORID encoder. By comparing h^x_id with the centroids {c_j}_{j=1}^k based on the NN rule, we find the most similar centroid c_l. Then, x is declared to belong to cluster C_l. Without loss of generality, let us assume that the classes (or ranks) are the first m natural numbers, Θ = {1, 2, . . . , m}. Then, for each i ∈ Θ, we select a reference instance y_i with rank i from cluster C_l, so that it is the most similar to x. Specifically,
y_i = argmax_{y∈C_l : θ(y)=i} (h^x_id)^t h^y_id.   (13)
We estimate the rank θ(x) of the test instance x by comparing it with the chosen references y_i, 1 ≤ i ≤ m. For the rank estimation, Lim et al. (2020) developed the maximum consistency rule, which, however, does not exploit the probability information generated by the comparator. In this paper, we use the maximum a posteriori (MAP) estimation rule, which is described in detail in Appendix B.10.
4 EXPERIMENTAL RESULTS
This section provides various experimental results. Due to space limitation, implementation details and more results are available in Appendices C, D, and E.
4.1 FACIAL AGE ESTIMATION
Datasets: We use two datasets. First, MORPH II (Ricanek & Tesafaye, 2006) is a collection of about 55,000 facial images in the age range [16, 77]. It provides gender (female, male) and race (African American, Asian, Caucasian, Hispanic) labels as well. We employ the four evaluation settings A, B, C, and D in Appendix C.2. Second, the balanced dataset (Lim et al., 2020) is sampled from the three datasets of MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017) to overcome bias toward specific ethnic groups or genders. It contains about 6,000 images for each combination of gender in {female, male} and ethnic group in {African, Asian, European}.
Clustering: Figure 3 shows clustering results on MORPH II (setting A), when the number of clusters is k = 2. Setting A contains faces of Caucasian descent only. Thus, the proposed DRC-ORID divides those faces into two clusters according to genders in general, although the annotated gender information is not used. Most males are assigned to cluster 1, while a majority of females to cluster 2. On the other hand, setting B consists of Africans and Caucasians. Thus, those images are clustered according to the races, as shown in Appendix C.3. Figure 4 shows the results on the balanced dataset at k = 3, which is composed of MORPH II, AFAD, and UTK images. Due to the different characteristics of these sources, images are clearly divided according to their sources. At k = 2, MORPH II images are separated from the others. This is because, unlike the MORPH II images, the boundaries of most AFAD and UTK images are zeroed for alignment using SeetaFaceEngine (Zhang et al., 2014). Lim et al. (2020) also tried the clustering of the balanced dataset. Figure 5 visualizes the feature space using t-SNE (Maaten & Hinton, 2008). Although their method aligns the features according to ages, their clusters are not separated, overlapping one another. In contrast, the proposed DRC-ORID separates the three clusters clearly, as well as sorts features according to the ages within each cluster.
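To illustrate the first step of the rank estimation procedure in Section 3.4, a test instance can be routed to its cluster by the NN rule before reference selection and MAP comparison; the function name and the plain-NumPy encoder handle are our assumptions.

```python
import numpy as np

def assign_cluster(x, F, C, d_or=128):
    """Assign a test instance to the cluster with the most similar centroid (Section 3.4).

    x : input image, F : encoder, C : (k, d_id) unit-norm centroids.
    """
    h = F(x)                                    # encode: h = h_or ⊕ h_id
    h_id = h[d_or:]
    h_id = h_id / np.linalg.norm(h_id)          # l2-normalize as in Eq. (3)
    return int(np.argmax(C @ h_id))             # nearest centroid by cosine similarity
```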
More t-SNE plots for analyzing the impacts of the repulsive term are available in Appendix B.5. Age transformation: We assess the decomposition performance of ORID. Although ORID is not designed for age transformation (Or-El et al., 2020), it decomposes an image x into the order and identity features, hxor and h x id. Thus, the age can be transformed in two steps. First, we replace hxor of x with h y or of a reference image y at a target age. Second, we decode the resultant feature (concatenation of hyor and h x id) to obtain the transformed image. Figure 6 shows some results on MORPH II images. Order-related properties, such as skin textures and hair colors, are modified plausibly, but identity information is preserved. This indicates the reliability of ORID. Age estimation: Table 1 compares the proposed algorithm with conventional age estimators on the four evaluation settings of MORPH II. These conventional algorithms take 224× 224 or bigger images as input, while ORID takes 64× 64 images. Moreover, most of them adopt VGG16 (Simonyan & Zisserman, 2015) as their backbones, which is more complicated than the ORID encoder. Thus, for comparison, after fixing clusters using DRC-ORID, we train another pairwise comparator based on VGG16, whose architecture is the same as Lim et al. (2020). We measure the age estimation performance by the mean absolute error (MAE) and the cumulative score (CS). MAE is the average absolute error between estimated and ground-truth ages, and CS computes the percentage of test samples whose absolute errors are less than or equal to a tolerance level of 5. Mainly due to the smaller input size of 64× 64, the vanilla version yields poorer performances than the conventional algorithms. The VGG version, however, outperforms them significantly. First, in the proposed-VGG (k = 1), all instances can be compared, as in the OL algorithm. In other words, the clustering is not performed. Thus, the pairwise comparators of OL and the proposedVGG (k = 1) are trained in the same way, but their rank estimation rules are different. Whereas OL uses the maximum consistency rule, the proposed algorithm performs the MAP estimation. The score gaps between them confirm that the MAP estimation is more accurate. Moreover, by clustering facial images into two groups, the proposed-VGG (k = 2) improves the results meaningfully. The proposed-VGG (k = 2) provides the state-of-the-art results, except for the MAE test in setting D. 4.2 AESTHETIC SCORE REGRESSION The aesthetics and attribute database (AADB) is composed of 10,000 photographs of various themes such as scenery and close-up (Kong et al., 2016). Each image is annotated with an aesthetic score in [0, 1]. We quantize the continuous score with a step size of 0.01 to make 101 score classes. Compared to facial images, AADB contains more diverse data. It is hence more challenging to cluster AADB images. Figure 8 shows example images in each cluster at k = 8. Images in the same cluster have similar colors, similar contents, or similar composition. This means that ORID extracts identity features effectively, corresponding to contents or styles that are not directly related to aesthetic scores. Using those identity features, DRC discovers meaningful clusters. Figure 9 visualizes the feature space of AADB. Aesthetic scores are sorted along one direction, while clusters are separated in the other orthogonal direction. In other words, the scores look like latitudes, while the clusters appear to be separated by meridians (or lines of longitude). 
As a point on the earth surface can be located by its latitude and longitude, an image is represented by its aesthetic score (order feature) and cluster (identity feature). Table 2 compares regression results. Even without clustering process, the proposed algorithm outperforms the Reg-Net and ASM algorithms. Moreover, by using the eight unsupervised clusters in Figure 8, the proposed algorithm further reduces the MAE to yield the state-of-the-art result. 4.3 HISTORICAL COLOR IMAGE CLASSIFICATION HCI (Palermo et al., 2012) is a dataset for determining the decade when a photograph was taken. It contains images from five decades from 1930s to 1970s. Each decade category has 265 images: 210, 5, and 50 are used for training, validation and testing. Figure 7 shows the clustering results at k = 4. We observe similarity of contents in each cluster. Table 3 compares the quinary classification results. Frank & Hall (2001), Cardoso & da Costa (2007), Palermo et al. (2012), and RED-SVM use traditional features, while the others deep features. The performance gaps between these two approaches are not huge, since 1,050 images are insufficient for training deep networks. 5 IMPACTS OF APPLICATIONS The proposed algorithm can be applied to various ranking problems. In this paper, we demonstrated three vision applications: facial age estimation, aesthetic score regression, and historical image classification. In particular, the proposed age estimator has various potential uses. For example, it can block or recommend media contents to people according to their ages. However, it has harmful impacts, as well as positive ones. Moreover, although age information lacks the distinctiveness to identify an individual, identity features, extracted by ORID, can be misused in facial recognition systems, causing serious problems such as unwanted invasion of privacy (Raji et al., 2020). Hence ethical considerations should be made before the use of the proposed algorithm. Recently, ethical concerns about the fairness and safety of automated systems have been raised (Castelvecchi, 2020; Roussi, 2020; Noorden, 2020). Especially, due to the intrinsic imbalance of facial datasets (Ricanek & Tesafaye, 2006; Zhang et al., 2017; Niu et al., 2016), most deep learning methods on facial analysis (Wen et al., 2020; Or-El et al., 2020) have unwanted gender or racial bias. The proposed algorithm is not free from this bias either. Hence, before any practical usage, the bias should be resolved. Also, even though the proposed algorithm groups data in an unsupervised manner, data are clustered according to genders or races on MORPH II. These results should never be misinterpreted in such a way as to encourage any racial or gender discrimination. We recommend using the proposed age estimator for research only. 6 CONCLUSIONS The DRC algorithm of ordered data based on ORID was proposed in this work. First, the ORID network decomposes the information of an object into the order and identity features. Then, DRC groups objects into clusters using their identity features in a repulsive manner. Also, we can estimate the rank of an unseen test by comparing it with references within the corresponding cluster based on the MAP decision. Extensive experimental results on various ordered data demonstrated that the proposed algorithm provides excellent clustering and rank estimation performances. 
ACKNOWLEDGMENTS

This work was supported in part by the MSIT, Korea, under the ITRC support program (IITP-2020-2016-0-00464) supervised by the IITP, and in part by the National Research Foundation of Korea (NRF) through the Korea Government (MSIP) under Grant NRF-2018R1A2B3003896.

A NETWORK STRUCTURE OF ORID

As described in Section 3.2, the ORID network consists of the encoder F, the decoder G, the comparator C, and the discriminator D. The network structures of these components are detailed in Tables 4–7, where '$k_h \times k_w$-$s$-$c$ Conv' and '$k_h \times k_w$-$s$-$c$ Deconv' denote the 2D convolution and 2D deconvolution with kernel size $k_h \times k_w$, stride $s$, and $c$ output channels, respectively. 'BN' means batch normalization (Ioffe & Szegedy, 2015), and '$c$ Dense' is a dense layer with $c$ output channels. Note that the encoder takes a 64×64 RGB image as input, and the identity feature of the encoder output is l2-normalized in Eq. (3). Also, we set $d_{or} = 128$ and $d_{id} = 896$.

B ALGORITHMS – DETAILS

B.1 OPTIMALITY OF CENTROID RULE

To solve the constrained optimization problem in Eq. (7), we construct the Lagrangian function

$$\mathcal{L} = \sum_{j=1}^{k}\sum_{x\in C_j}\Big((h_{id}^x)^t c_j - \tfrac{\alpha}{k-1}\sum_{l\neq j}(h_{id}^x)^t c_l\Big) - \sum_{j=1}^{k}\lambda_j\,(c_j^t c_j - 1) \quad (14)$$

where $\lambda_j$, $1 \le j \le k$, are Lagrangian multipliers (Bertsekas, 1996). By differentiating $\mathcal{L}$ with respect to $c_j$ and setting it to zero, we have

$$\frac{\partial\mathcal{L}}{\partial c_j} = \sum_{x\in C_j} h_{id}^x - \tfrac{\alpha}{k-1}\sum_{l\neq j}\sum_{x\in C_l} h_{id}^x - 2\lambda_j c_j \quad (15)$$
$$= \sum_{x\in C_j} h_{id}^x - \tfrac{\alpha}{k-1}\sum_{x\in X\setminus C_j} h_{id}^x - 2\lambda_j c_j \quad (16)$$
$$= 0 \quad (17)$$

for $j = 1, \dots, k$. Therefore, the optimal centroid $c_j$ is given by

$$c_j = \frac{\sum_{x\in C_j} h_{id}^x - \tfrac{\alpha}{k-1}\sum_{x\in X\setminus C_j} h_{id}^x}{2\lambda_j}. \quad (18)$$

Because of the normalization constraint $c_j^t c_j = 1$, we have

$$2\lambda_j = \Big\|\sum_{x\in C_j} h_{id}^x - \tfrac{\alpha}{k-1}\sum_{x\in X\setminus C_j} h_{id}^x\Big\|, \quad (19)$$

which leads to the centroid rule in Eq. (8).

B.2 OPTIMALITY OF NN RULE

Let us consider two cases. First, instance x is declared to belong to cluster $C_j$. It then contributes to the objective function J in Eq. (6) by

$$\beta_j = (h_{id}^x)^t c_j - \tfrac{\alpha}{k-1}\sum_{l\neq j}(h_{id}^x)^t c_l. \quad (20)$$

Second, x is declared to belong to another cluster $C_{j'}$. Then, its contribution is

$$\beta_{j'} = (h_{id}^x)^t c_{j'} - \tfrac{\alpha}{k-1}\sum_{l\neq j'}(h_{id}^x)^t c_l. \quad (21)$$

By comparing the two contributions, we have

$$\beta_j - \beta_{j'} = (h_{id}^x)^t(c_j - c_{j'}) - \tfrac{\alpha}{k-1}(h_{id}^x)^t(c_{j'} - c_j) \quad (22)$$
$$= \big(1 + \tfrac{\alpha}{k-1}\big)(h_{id}^x)^t(c_j - c_{j'}). \quad (23)$$

This means that $\beta_j \ge \beta_{j'}$ when $(h_{id}^x)^t c_j \ge (h_{id}^x)^t c_{j'}$. Therefore, x should be assigned to the optimal cluster $C_{j^*}$ such that the cosine similarity $(h_{id}^x)^t c_{j^*}$ is maximized. Equivalently, we have the NN rule in Eq. (9).

B.3 REGULARIZATION CONSTRAINT IN DRC

To prevent empty clusters and balance the partitioning, we enforce a regularization constraint so that every cluster contains at least a predefined number of instances. More specifically, when applying the NN rule, we enforce that at least $\frac{1}{2k}$ of the instances are assigned to each cluster $C_j$. The instances are selected in decreasing order of cosine similarity $(h_{id}^x)^t c_j$.

B.4 LOSS FUNCTIONS

In the DRC-ORID algorithm, we use the loss function

$$\ell = \lambda_{rec}\ell_{rec} + \lambda_{clu}\ell_{clu} + \lambda_{com}\ell_{com} + \lambda_{gan}\ell_{gan} \quad (24)$$

where the reconstruction, clustering, comparator, and GAN losses are given by

$$\ell_{rec} = \tfrac{1}{2N}\sum_{i=1}^{N}\big(\|x_i - G(F(x_i))\|_1 + \|y_i - G(F(y_i))\|_1\big), \quad (25)$$
$$\ell_{clu} = -\tfrac{1}{2N}\sum_{i=1}^{N}\big((h_{id}^{x_i})^t c_j + (h_{id}^{y_i})^t c_j\big), \quad (26)$$
$$\ell_{com} = -\tfrac{1}{N}\sum_{i=1}^{N}\big(q_{\succ}^{x_iy_i}\log p_{\succ}^{x_iy_i} + q_{\approx}^{x_iy_i}\log p_{\approx}^{x_iy_i} + q_{\prec}^{x_iy_i}\log p_{\prec}^{x_iy_i}\big), \quad (27)$$
$$\ell_{gan} = -\tfrac{1}{2N}\sum_{i=1}^{N}\big(\log(1 - D(G(F(x_i)))) + \log(1 - D(G(F(y_i))))\big), \quad (28)$$

respectively. Here, N is the number of image pairs in a minibatch. The weighting parameters are set to $\lambda_{rec} = 5$, $\lambda_{clu} = 0.1$, $\lambda_{com} = 1$, and $\lambda_{gan} = 1$.
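To make the centroid rule of Eq. (8) and the NN rule of Eq. (9), whose optimality is derived in B.1 and B.2, concrete, the following is a minimal NumPy sketch of one alternation of repulsive clustering on the unit sphere. The function and variable names are illustrative assumptions rather than the official implementation, and the balancing constraint of B.3 is omitted for brevity.

```python
import numpy as np

def repulsive_clustering_step(H, assign, k, alpha):
    """One alternation of the repulsive clustering rules (a sketch).

    H      : (n, d) array of l2-normalized identity features
    assign : (n,) current cluster indices in {0, ..., k-1}
    alpha  : weight of the repulsive term
    """
    total = H.sum(axis=0)
    C = np.zeros((k, H.shape[1]))
    for j in range(k):
        own = H[assign == j].sum(axis=0)     # attraction to own cluster C_j
        others = total - own                 # sum over X \ C_j
        c = own - alpha / (k - 1) * others   # numerator of Eq. (18)
        C[j] = c / np.linalg.norm(c)         # normalization, as in Eq. (19)
    # NN rule: assign each instance to the centroid of max cosine similarity.
    assign = np.argmax(H @ C.T, axis=1)
    return C, assign
```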
B.5 IMPACTS OF REPULSIVE TERM ON CLUSTERING

To analyze the impacts of the repulsive term in Eq. (6), we first compare clustering qualities with α = 0 and α = 0.1. At α = 0, the repulsive term is excluded from the objective function J and the centroid rule reduces to Eq. (10) in the spherical k-means (Dhillon & Modha, 2001). However, different from the spherical k-means, even at α = 0, the clustering is jointly performed with the training of the ORID network. We adopt two metrics to measure the quality of clustering: normalized mutual information (NMI) (Strehl & Ghosh, 2002) and centroid affinity (CA). NMI measures the information shared between two different partitionings of the same data, $A = \cup_{i=1}^{U} A_i$ and $B = \cup_{j=1}^{V} B_j$,

$$\mathrm{NMI}(A,B) = \frac{\sum_{i=1}^{U}\sum_{j=1}^{V} |A_i\cap B_j|\log\frac{N|A_i\cap B_j|}{|A_i||B_j|}}{\sqrt{\big(\sum_{i=1}^{U}|A_i|\log\frac{|A_i|}{N}\big)\big(\sum_{j=1}^{V}|B_j|\log\frac{|B_j|}{N}\big)}} \quad (29)$$

where U and V are the numbers of clusters in A and B, respectively, N is the total number of samples, and |·| denotes the cardinality. Also, we define the centroid affinity (CA) as

$$\mathrm{CA}(\{c_j\}_{j=1}^{k}) = \tfrac{2}{k(k-1)}\sum_{j=1}^{k}\sum_{l>j} c_j^t c_l. \quad (30)$$

For high-quality clustering, the centroids should be far from one another and thus should yield a low CA score.

Figure 10 plots how NMI and CA vary as the iteration goes on. In this test, MORPH II (setting B) is used and the number of clusters k is set to 2. Since setting B consists of Africans and Caucasians, we use the race groups as the ground-truth partitioning for the NMI measurement. At early iterations, the NMI score of DRC-ORID with α = 0.1 is slightly better than that with α = 0. However, as the iterative training and clustering go on, the score gap gets larger. After convergence, DRC-ORID with α = 0.1 outperforms the option α = 0 by a significant NMI gap of 0.13. Also, CA of the option α = 0.1 gradually decreases, whereas that of α = 0 does not. At α = 0.1, the repulsive term makes the centroids repel each other. As a result, CA, which is the cosine similarity between the two centroids, becomes almost −1, which means the equilibrium state in Eq. (11) is almost achieved. We also visualize the feature spaces of the two options, α = 0 and α = 0.1, using t-SNE in Figure 11. It is observed that the two clusters are more clearly separated by DRC-ORID with α = 0.1. Figure 12 shows the t-SNE results after convergence with age labels.

Figure 13 compares the NMI curves at different α's. The choice of α affects the quality of clustering, as α controls the intensity of the repulsive force between centroids. When α is too large, the centroids move too quickly, making the training of the ORID network difficult. On the other hand, when α is too small, the repulsive term does not affect the clustering meaningfully. Hence, α should be selected to strike a balance between training reliability and effective repulsion. It was found experimentally that clustering is performed well around α = 0.1. Finally, it is worth pointing out that, if the identity features were not normalized as in Eq. (3) and the repulsive clustering were performed in an unbounded space, the distances between centroids would get larger and larger as the iteration goes on. Thus, convergence would not be achieved. This is why we perform DRC on the bounded unit sphere.
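The two clustering-quality metrics above follow directly from Eqs. (29)–(30). Below is a minimal NumPy sketch under the assumption that partitionings are given as integer label vectors and centroids as rows of a matrix; the function names are our own.

```python
import numpy as np

def centroid_affinity(C):
    """CA of Eq. (30): mean pairwise cosine similarity of unit-norm centroids."""
    k = C.shape[0]
    pairs = [C[j] @ C[l] for j in range(k) for l in range(j + 1, k)]
    return 2.0 / (k * (k - 1)) * np.sum(pairs)

def nmi(a, b):
    """NMI of Eq. (29) for two hard partitionings given as label vectors."""
    n = len(a)
    num = 0.0
    for i in np.unique(a):
        for j in np.unique(b):
            nij = np.sum((a == i) & (b == j))
            if nij > 0:
                num += nij * np.log(n * nij / (np.sum(a == i) * np.sum(b == j)))
    da = sum(np.sum(a == i) * np.log(np.sum(a == i) / n) for i in np.unique(a))
    db = sum(np.sum(b == j) * np.log(np.sum(b == j) / n) for j in np.unique(b))
    return num / np.sqrt(da * db)
```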
B.6 IMPACTS OF REPULSIVE TERM ON RANK ESTIMATION

Table 8 compares the rank estimation results when the clustering is performed with and without the repulsive term. In this experiment, we use MORPH II (setting A) and set k = 2. Without the repulsive term, lower-quality clusters make the training of the comparator more difficult. As a result, the age estimation performance degrades significantly in terms of both MAE and CS. In other words, the quality of clustering affects the rank estimation performance greatly, and the proposed DRC algorithm provides high-quality clusters suitable for rank estimation.

B.7 IMPACTS OF THE NUMBER k OF CLUSTERS ON RANK ESTIMATION

Tables 9 and 10 compare the rank estimation results according to the number k of clusters on the MORPH II (setting A) and AADB datasets, respectively. On MORPH II, the age estimation performance decreases as k increases. Since the training set in setting A consists of only 4,394 images, each cluster at a large k contains too few instances. Thus, the comparator is trained inefficiently with fewer training pairs, degrading the performance. In contrast, AADB contains a large number of diverse images. Due to the diversity, a relatively large k should be used to group images into meaningful clusters. Also, even at a large k, each cluster contains a sufficient number of data. Thus, compared to MORPH II, results on AADB are less sensitive to k. In addition, we provide age estimation results on the balanced dataset in Table 14, in which k has marginal impact on the rank estimation performance. As mentioned previously, the quality of clustering significantly affects the rank estimation performance. Also, similarly to other algorithms based on k-means, the clustering quality of DRC is affected by k. Hence, for the proposed algorithm to be used on a new ordered dataset, k should be determined effectively to obtain good clustering and rank estimation results. Readers interested in the selection of k are referred to Pham et al. (2005).

B.8 CLUSTERING USING OTHER FEATURES

Instead of clustering identity features $h_{id}^{x_1}, h_{id}^{x_2}, \dots, h_{id}^{x_n}$, we test clustering order features $h_{or}^{x_1}, h_{or}^{x_2}, \dots, h_{or}^{x_n}$ or whole features $h^{x_1}, h^{x_2}, \dots, h^{x_n}$. In this test, MORPH II (setting A) is used and k = 2. Figure 14 compares the clustering results. When using order features or whole features, instances are divided by their ages. We see that instances younger than 30 mostly belong to cluster 1 and the others to cluster 2. Table 11 compares the performances of the age estimators trained using these clustering results. The best performance is achieved when the clustering is done on identity features.

B.9 RELIABILITY OF FEATURE DECOMPOSITION

Performing the comparison using order features only does not theoretically guarantee that order-related information is fully excluded from identity features. However, we observed empirically that the decomposition is sufficiently reliable if the dimension of an identity feature is selected properly. If the dimension is too small, the encoder may lose a significant portion of order-irrelevant information. On the contrary, if the dimension is too large, the encoder may encode order information redundantly. In our experiments, we use 128- and 896-dimensional vectors for order and identity features ($d_{or} = 128$ and $d_{id} = 896$), and obtain satisfactory decomposition results. To show that order-related information is excluded from identity features, we compare the accuracies of the comparator (i.e. ternary classifier) when identity features are used instead of order features. Specifically, we first extract order features and identity features from all instances in MORPH II using the pretrained ORID network. Then, we train two comparators that predict the ordering relationship between two instances x and y: one takes the order features $h_{or}^x$ and $h_{or}^y$ as input, and the other takes the identity features $h_{id}^x$ and $h_{id}^y$. Table 12 lists the comparator accuracies. We see that the comparator fails to predict ordering relationships from identity features. Also, Figure 15 is a t-SNE visualization of the identity feature spaces with age or cluster labels, which confirms that order-related information is excluded effectively from identity features.

B.10 MAP ESTIMATION

Let us describe the MAP estimation rule for rank estimation in Section 3.4. Given a test instance x, we select references $y_i$ by Eq. (13). Then, by comparing x with $y_i$, the comparator yields the probability vector $p^{xy_i} = (p_{\succ}^{xy_i}, p_{\approx}^{xy_i}, p_{\prec}^{xy_i})$ for the three cases in Eq. (4). Thus, given $y_i$, the probability of $\theta(x) = r$ can be written as

$$P_{\theta(x)}(r \mid y_i) = p_{\succ}^{xy_i}\, P_{\theta(x)}(r \mid x \succ y_i) + p_{\approx}^{xy_i}\, P_{\theta(x)}(r \mid x \approx y_i) + p_{\prec}^{xy_i}\, P_{\theta(x)}(r \mid x \prec y_i). \quad (31)$$

Suppose that $x \succ y_i$. Then, $\theta(x) - \theta(y_i) = r - i > \tau$ from Eq. (4). Also, the maximum possible rank is m. We hence assume that $\theta(x)$ has the uniform distribution between $i + \tau + 1$ and m. In other words, $P_{\theta(x)}(r \mid x \succ y_i) \sim U(i+\tau+1,\, m)$, where U denotes a discrete uniform distribution. Similarly, we have $P_{\theta(x)}(r \mid x \approx y_i) \sim U(i-\tau,\, i+\tau)$ and $P_{\theta(x)}(r \mid x \prec y_i) \sim U(1,\, i-\tau-1)$. Then, we approximate the a posteriori probability $P_{\theta(x)}(r \mid y_1, \dots, y_m)$ by averaging those single-reference inferences in Eq. (31):

$$P_{\theta(x)}(r \mid y_1, \dots, y_m) = \tfrac{1}{m}\sum_{i=1}^{m} P_{\theta(x)}(r \mid y_i). \quad (32)$$

Finally, we obtain the MAP estimate of the rank of x, which is given by

$$\hat{\theta}(x) = \arg\max_{r\in\Theta} P_{\theta(x)}(r \mid y_1, \dots, y_m). \quad (33)$$
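The MAP rule of Eqs. (31)–(33) is straightforward to implement once the comparator outputs are available. The following is a simplified NumPy sketch assuming an integer threshold τ and one reference per rank class; the function name and array layout are our own conventions, not the authors' code.

```python
import numpy as np

def map_rank(probs, tau, m):
    """MAP rank estimate from Eqs. (31)-(33), as a sketch.

    probs : (m, 3) array; probs[i-1] = (p_succ, p_approx, p_prec), the
            comparator outputs for test x against reference y_i of rank i
    tau   : integer threshold of the ternary categorization in Eq. (4)
    m     : maximum rank
    """
    posterior = np.zeros(m)
    for i in range(1, m + 1):
        p_gt, p_eq, p_lt = probs[i - 1]
        for r in range(1, m + 1):
            if i + tau + 1 <= r <= m:        # x > y_i: U(i+tau+1, m)
                posterior[r - 1] += p_gt / (m - i - tau)
            if i - tau <= r <= i + tau:      # x ~ y_i: U(i-tau, i+tau)
                posterior[r - 1] += p_eq / (2 * tau + 1)
            if 1 <= r <= i - tau - 1:        # x < y_i: U(1, i-tau-1)
                posterior[r - 1] += p_lt / (i - tau - 1)
    # The 1/m averaging of Eq. (32) does not change the argmax of Eq. (33).
    return int(np.argmax(posterior)) + 1
```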
C FACIAL AGE ESTIMATION – MORE EXPERIMENTS AND DETAILS

C.1 IMPLEMENTATION DETAILS

We initialize the parameters of the ORID network for facial age estimation using the Glorot normal method (Glorot & Bengio, 2010). We use the Adam optimizer with a learning rate of $10^{-4}$ and decrease the rate by a factor of 0.5 every 50,000 steps. For data augmentation, we do random horizontal flips only. This is because other augmentation schemes, such as brightness or contrast modification, may deform identity information such as skin colors. Also, $d_{or}$ and $d_{id}$ are set to 128 and 896, respectively. In Eq. (6), we set α to 0.1 and decrease it to 0.05 after 200 epochs.

C.2 EVALUATION SETTINGS

For evaluation on the MORPH II dataset, we adopt four widely used testing protocols.

• Setting A – 5,492 images of the Caucasian race are selected and then randomly divided into two non-overlapping parts: 80% for training and 20% for testing.

• Setting B – 21,000 images of Africans and Caucasians are selected to satisfy two constraints: the ratio between Africans and Caucasians should be 1:1, and that between females and males 1:3. They are split into three disjoint subsets S1, S2, and S3. The training and testing are repeated twice: 1) training on S1, testing on S2 + S3, and 2) training on S2, testing on S1 + S3. The average performance of the two experiments is reported.

• Setting C – This setting is the 5-fold cross-validation on the entire dataset. Images are randomly split into five folds, but the same person's images should belong to only one fold. The average performance of the five experiments is reported.

• Setting D – This is called the 80-20 protocol. Without any constraint, the entire dataset is randomly divided into the training and test sets with ratio 8:2. Thus, setting D is similar to one experiment in setting C, but the same person's images may belong to both the training and test sets.
C.3 CLUSTERING

We provide more clustering results on MORPH II. Figure 16 shows the clustering results on setting B at k = 2. Since setting B consists of Africans and Caucasians, the images are clustered according to the races. Also, Table 13 summarizes the clustering results for settings A, B, and C at k = 2. The clustering result on setting D is omitted, since it is almost identical to that on setting C. In all settings, the proposed DRC-ORID divides facial images into two clusters with meaningful criteria, which are gender for setting A and race for settings B, C, and D.

C.4 AGE ESTIMATION

We implement a VGG-based pairwise comparator and follow the settings of Lim et al. (2020). Specifically, instead of Eq. (4), we use the ternary categorization based on the geometric ratio and set τ = 0.1. We initialize its feature extractor using VGG16 pre-trained on the ILSVRC2012 dataset (Deng et al., 2009) and its fully connected layers using the Glorot normal method. We employ the Adam optimizer with a minibatch size of 32. We start with a learning rate of $10^{-4}$ and shrink it by a factor of 0.5 after every 80,000 steps. Table 14 lists age estimation results on the balanced dataset according to the number k of clusters. OL-supervised trains the comparator using supervised clusters separated according to gender or ethnic group annotations. Specifically, the supervised clusters at k = 2, 3, and 6 are divided according to genders, ethnic groups, and both genders and ethnic groups, respectively. On the other hand, OL-unsupervised and the proposed algorithm determine their clusters in unsupervised manners. We see that the proposed algorithm performs better than the conventional algorithms in all tests. By employing multiple clusters, the proposed algorithm improves MAE by 0.12 and CS by 0.73% on average. In contrast, OL-unsupervised improves MAE by 0.04 and CS by 0.07% only. This indicates that, by employing identity features, the proposed DRC-ORID algorithm groups instances into meaningful clusters, in which instance ranks can be compared more accurately.

C.5 AGE TRANSFORMATION

More age transformation results are in Figure 17. Note that, in Figure 6, given an image x, we select the reference y at a target age whose identity feature is the most similar to that of x, as in Eq. (13). Hence, the image x and the reference y have similar appearance. On the other hand, Figure 17 shows transformed images using randomly selected references. The first two cases transform the same image x with different references, but the transformed images are similar. Also, even when the gender and/or race of y are different from those of x, the identity information of x is preserved well in the transformed image. This confirms the reliability of ORID.

C.6 RECONSTRUCTION

Figure 18 shows reconstructed faces using the whole feature ($h_{or}^x \oplus h_{id}^x$), the order feature only ($h_{or}^x \oplus 0$), and the identity feature only ($0 \oplus h_{id}^x$). Without the order feature, each decoded face is degraded but the person can be identified. In contrast, without the identity feature, the reconstruction is not related to the person, except that it seems to be an average face of people at the same age as the person. These results confirm that order and identity features are complementary.
C.7 REFERENCE IMAGES

Figure 19 shows examples of reference images, which are used for the rank estimation on MORPH II (setting D) at k = 2. Given a test image x, the reference image $y_i$ of age class i is selected via Eq. (13) from the training set. In the default mode, for each age i, a single reference image is selected. However, the top r most similar references can be selected and used for the estimation. We use a single reference because multiple references improve the estimation performance only negligibly. In Figure 19, the top three reference images are shown for each age from 16 to 53. In setting D, the two clusters are divided into Africans and the others in general. However, we see that test and reference images tend to have the same gender, as well as the same race. Furthermore, they have similar appearance, even when they have a big age difference.

D AESTHETIC SCORE REGRESSION – MORE EXPERIMENTS AND DETAILS

D.1 IMPLEMENTATION DETAILS

For aesthetic score regression, we implement a pairwise comparator based on EfficientNet-B4 (Tan & Le, 2019). The pairwise comparator has the same architecture as that for facial age estimation, except for the backbone network. To initialize the backbone, we adopt the parameters pre-trained on the ILSVRC2012 dataset. We initialize the other layers using the Glorot normal method. We update the network parameters using the Adam optimizer with a minibatch size of 16. We start with a learning rate of $10^{-4}$ and shrink it by a factor of 0.8 every 8,000 steps. Training images are augmented by random horizontal flipping. We set τ = 0.15 for the ternary categorization in Eq. (4).

D.2 CLUSTERING

Notice that the AADB dataset contains images of diverse contents and styles. Hence, when clustering with a small k, it is hard to observe the characteristics shared by images within each cluster, whereas k = 2 or 3 is sufficient for facial age data. We empirically found that at least eight clusters are required (k = 8) to partition the AADB dataset by meaningful criteria. Figure 20 provides more examples of clustering results at k = 8 (one panel per cluster, Cluster 1 through Cluster 8).

D.3 REFERENCE IMAGES

Figure 21 shows examples of reference images, which are used for the aesthetic score regression. Given a test image x, the reference image $y_i$ of aesthetic class i is selected by Eq. (13). For the aesthetic score regression, we use a single reference image for each aesthetic class, as done in the facial age estimation. Thus, 101 reference images are used in total.

E HCI CLASSIFICATION – MORE EXPERIMENTS AND DETAILS

E.1 IMPLEMENTATION DETAILS

For DRC-ORID for HCI classification, we set all hyper-parameters in the same way as we do in Appendix C.1. We set τ = 1 for the ternary categorization of the ordering relationship in Eq. (4). Note that there are five decade classes, from 1 to 5.

E.2 CLUSTERING

Figure 22 shows some sample images in the HCI dataset, ordered according to the decade classes (1930s through 1970s). Figure 23 shows more example HCI images grouped into four clusters (k = 4, Cluster 1 through Cluster 4).

E.3 REFERENCE IMAGES

Figure 24 shows the five reference images for each of six test image examples. Note that, given a test image, reference images of similar contents, tones, or composition are selected from the five decade classes.
1. What is the focus of the paper on order learning?
2. What are the concerns regarding the proposed ORID model structure?
3. How does introducing a discriminator help in the approach?
4. What are the issues with normalizing vectors in the latent space?
5. What are the doubts about the convergence guarantee of the DRC algorithm?
6. Why is selecting a y_i in Eq. (13) necessary when looping over all y in Eq. (15)?
7. Do you have any suggestions for improving the experimental results?
8. Are there any minor errors or typos in the paper that need correction?
Review
Review Summary: This paper considers the problem of order learning, which learns an ordinal classification function. This paper proposes to learn separated order-relevant and order-irrelevant latent representations to improve the performance of existing methods, which is a very interesting and promising idea. However, the approach lacks novelty and convincing theoretical guarantees, and does not show convincing performance through the insufficient empirical evaluation.

Main concerns:

The ORID model structure: The latent representation is separated into h_{or} and h_{id}, and the comparison loss is defined on h_{or}. However, this need not exclude order-relevant information from h_{id}. Also, it needs to be clarified to what extent introducing a discriminator helps, as this turns a minimization problem into an unstable min-max optimization problem. How does it work without the discriminator?

Normalization of h_{id}: Normalizing vectors in a space may result in a totally different cluster structure; different clusters may appear to overlap with each other after normalization. Euclidean distance can be a natural dissimilarity metric without normalization.

The DRC algorithm: The idea of encouraging intra-cluster similarity and inter-cluster dissimilarity in Eq. (9) is not new. Also, the claim right after Algorithm 1 in the paper that "DRC is guaranteed to converge to a local maximum" is quite suspicious. Is it true that alternating between different rules optimizing the same objective is guaranteed to converge? At least some references need to be provided, as this is a crucial point of the main contribution.

The decision rule: Eq. (15) loops over all y, so what is the point of selecting a y_i in Eq. (13)?

Experimental results seem to be fine, and the authors are honest in reporting unfavorable results. However, in my humble opinion, results for a sufficient number of repetitions (5 or 10) are needed to be at least minimally convincing.

Minor comments: In Eq. (4), the rightmost inequality should be \theta(x) - \theta(y) < -r.
ICLR
Title
Adversarial Robustness Against the Union of Multiple Perturbation Models

Abstract
Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers, but the vast majority has defended against single types of attacks. Recent work has looked at defending against multiple attacks, specifically on the MNIST dataset, yet this approach used a relatively complex architecture, claiming that standard adversarial training cannot apply because it "overfits" to a particular norm. In this work, we show that it is indeed possible to adversarially train a robust model against a union of norm-bounded attacks, by using a natural generalization of the standard PGD-based procedure for adversarial training to multiple threat models. With this approach, we are able to train standard architectures which are robust against ℓ∞, ℓ2, and ℓ1 attacks, outperforming past approaches on the MNIST dataset and providing the first CIFAR10 network trained to be simultaneously robust against (ℓ∞, ℓ2, ℓ1) threat models, which achieves adversarial accuracy of 46.1% against the union of (ℓ∞, ℓ2, ℓ1) perturbations with radius ε = (0.03, 0.5, 12).

1 INTRODUCTION

Machine learning algorithms have been shown to be susceptible to adversarial examples (Szegedy et al., 2014) through the existence of data points which can be adversarially perturbed to be misclassified, but are "close enough" to the original example to be imperceptible to the human eye. Methods to generate adversarial examples, or "attacks", typically rely on gradient information, and most commonly use variations of projected gradient descent (PGD) to maximize the loss within a small perturbation region, usually referred to as the adversary's threat model. Since then, a number of heuristic defenses have been proposed to defend against this phenomenon, e.g. distillation (Papernot et al., 2016) or more recently logit pairing (Kannan et al., 2018). However, as time goes by, the original robustness claims of these defenses typically don't hold up against more advanced adversaries or more thorough attacks (Carlini & Wagner, 2017; Engstrom et al., 2018; Mosbach et al., 2018). One heuristic defense that seems to have survived (to this day) is to use adversarial training against a PGD adversary (Madry et al., 2018), which remains quite popular due to its simplicity and apparent empirical robustness. The method continues to perform well in empirical benchmarks even when compared to recent work in provable defenses, although it comes with no formal guarantees. Some recent work, however, pointed out that adversarial training against ℓ∞ perturbations "overfits" to the ℓ∞ threat model, and used this as motivation to propose a more complicated architecture in order to achieve robustness to multiple perturbation types on the MNIST dataset (Schott et al., 2019). In this work, we offer an alternative viewpoint: while adversarial training can overfit to the individual threat models, we show that it is indeed possible to use adversarial training to learn a model which is simultaneously robust against multiple types of ℓp norm-bounded attacks (we consider ℓ∞, ℓ2, and ℓ1 attacks, but the approach can apply to more general attacks).
First, we show that while simple generalizations of adversarial training to multiple threat models can achieve some degree of robustness against the union of these threat models, the performance is inconsistent and converges to suboptimal tradeoffs which may not actually minimize the robust objective. Second, we propose a slightly modified PGD-based algorithm called multi steepest descent (MSD) for adversarial training, which more naturally incorporates the different perturbations within the PGD iterates, further improving the adversarial training approach by directly minimizing the robust optimization objective. Third, we show empirically that our approach improves upon past work by being applicable to standard network architectures, easily scaling beyond the MNIST dataset, and outperforming past results on robustness against multiple perturbation types.

2 RELATED WORK

After their original introduction, one of the first widely considered attacks against deep networks was the Fast Gradient Sign Method (Goodfellow et al., 2015), which showed that a single, small step in the direction of the sign of the gradient could sometimes fool machine learning classifiers. While this worked to some degree, the Basic Iterative Method (Kurakin et al., 2017) (now typically referred to as the PGD attack) was significantly more successful at creating adversarial examples, and now lies at the core of many papers. Since then, a number of improvements and adaptations have been made to the base PGD algorithm to overcome heuristic defenses and create stronger adversaries. Adversarial attacks were thought to be ineffective under realistic transformations (Lu et al., 2017), until the attack was augmented to be robust to them (Athalye et al., 2018b). Adversarial examples generated using PGD on surrogate models can transfer to black-box models (Papernot et al., 2017). Utilizing core optimization techniques such as momentum can greatly improve the attack success rate and transferability, and was the winner of the NIPS 2017 competition on adversarial examples (Dong et al., 2018). Uesato et al. (2018) showed that a number of ImageNet defenses were not as robust as originally thought, and Athalye et al. (2018a) defeated many of the heuristic defenses submitted to ICLR 2018 shortly after the reviewing cycle ended, all with stronger PGD variations. Throughout this cycle of attack and defense, some defenses were uncovered that remain robust to this day. The aforementioned PGD attack and the related defense known as adversarial training with a PGD adversary (which incorporates PGD-attacked examples into the training process) have so far remained empirically robust (Madry et al., 2018). Verification methods to certify robustness properties of networks were developed, utilizing techniques such as SMT solvers (Katz et al., 2017), SDP relaxations (Raghunathan et al., 2018b), and mixed-integer linear programming (Tjeng et al., 2019), the last of which has recently been successfully scaled to reasonably sized networks. Other work has folded verification into the training process to create provably robust networks (Wong & Kolter, 2018; Raghunathan et al., 2018a), some of which have also been scaled to even larger networks (Wong et al., 2018; Mirman et al., 2018; Gowal et al., 2018). Although some of these could potentially be extended to apply to multiple perturbations simultaneously, most of these works have focused primarily on defending against and verifying only a single type of adversarial perturbation at a time.
Last but most relevant to this work are adversarial defenses that attempt to be robust against multiple types of attacks simultaneously. Schott et al. (2019) used multiple variational autoencoders to construct a complex architecture for the MNIST dataset that is not as easily attacked by ℓ∞, ℓ2, and ℓ0 adversaries. Importantly, Schott et al. (2019) compare to adversarial training with an ℓ∞-bounded PGD adversary as described by Madry et al. (2018), claiming that the adversarial training defense overfits to the ℓ∞ metric, and they do not consider other forms of adversarial training. Following this, a number of concurrent papers have since been released. While not studied as a defense, Kang et al. (2019) study the transferability of adversarial robustness between models trained against different threat models. Croce & Hein (2019) propose a provable adversarial defense against all ℓp norms for p ≥ 1 using a regularization term. Finally, Tramèr & Boneh (2019) study the theoretical and empirical trade-offs of adversarial robustness in various settings when defending against multiple adversaries; however, they use a rotation and translation adversary instead of an ℓ2 adversary for CIFAR10.

Contributions In this work we demonstrate the effectiveness of adversarial training for learning models that are robust against a union of multiple perturbation models. First, we show that while simple aggregations of different adversarial attacks can achieve robustness against multiple perturbation models without resorting to complex architectures, the results are inconsistent across datasets and make suboptimal tradeoffs between the threat models. Second, we propose a modified PGD iteration that more naturally considers multiple perturbation models within the inner optimization loop of adversarial training. Third, we evaluate all approaches on the MNIST and CIFAR10 datasets, showing that our proposed generalizations of adversarial training can significantly outperform past approaches for the union of ℓ∞, ℓ2, and ℓ1 attacks. Specifically, on MNIST, our model achieves 58.7% (individually 63.7%, 82.6%, 62.3%) adversarial accuracy against the union of all three attacks (ℓ∞, ℓ2, ℓ1) for ε = (0.3, 1.5, 12) respectively, substantially improving upon the multiple-perturbation-model robustness described in Schott et al. (2019) and also improving upon the simpler aggregations of multiple adversarial attacks. Unlike past work, we also train a CIFAR10 model, which achieves 46.1% (individually 47.6%, 64.3%, 53.4%) adversarial accuracy against the union of all three attacks (ℓ∞, ℓ2, ℓ1) for ε = (0.03, 0.5, 12). Finally, for completeness, we also draw relevant comparisons to concurrent work, and show that the relative advantage of our approach still holds.

3 OVERVIEW OF ADVERSARIAL TRAINING

Adversarial training is an approach to learning a classifier which minimizes the worst-case loss within some perturbation region (the threat model). Specifically, for some network $f_\theta$ parameterized by θ, loss function ℓ, and training data $\{x_i, y_i\}_{i=1\dots n}$, the robust optimization problem of minimizing the worst-case loss within ℓp norm-bounded perturbations with radius ε is

$$\min_\theta \sum_i \max_{\delta\in\Delta_{p,\epsilon}} \ell(f_\theta(x_i+\delta),\, y_i), \quad (1)$$

where $\Delta_{p,\epsilon} = \{\delta : \|\delta\|_p \le \epsilon\}$ is the ℓp ball with radius ε centered around the origin. To simplify the notation, we will abbreviate $\ell(f_\theta(x+\delta), y) = \ell(x+\delta;\theta)$.

3.1 SOLVING THE INNER OPTIMIZATION PROBLEM

We first look at solving the inner maximization problem, namely

$$\max_{\delta\in\Delta_{p,\epsilon}} \ell(x+\delta;\theta). \quad (2)$$
This is the problem addressed by the "attackers" in the space of adversarial examples, hoping that the classifier can be tricked by the optimal perturbed image, $x + \delta^\star$. Typical solutions solve this problem by running a form of projected gradient descent, which iteratively takes steps in the gradient direction to increase the loss, followed by a projection step back onto the feasible region, the ℓp ball. Since the gradients at the example points themselves (i.e., δ = 0) are typically too small to make efficient progress, more commonly used is a variation called projected steepest descent.

Steepest descent For some norm ‖·‖p and step size α, the direction of steepest descent on the loss function ℓ for a perturbation δ is

$$v_p(\delta) = \arg\max_{\|v\|_p \le \alpha} v^T \nabla\ell(x+\delta;\theta). \quad (3)$$

Then, instead of taking gradient steps, steepest descent uses the following iteration:

$$\delta^{(t+1)} = \delta^{(t)} + v_p(\delta^{(t)}). \quad (4)$$

In practice, the norm used in steepest descent is typically taken to be the same ℓp norm used to define the perturbation region $\Delta_{p,\epsilon}$. However, depending on the norm used, the direction of steepest descent can be quite different from the actual gradient (Figure 1). Note that a single steepest descent step with respect to the ℓ∞ norm reduces to $v_\infty(\delta) = \alpha \cdot \mathrm{sign}(\nabla\ell(x+\delta;\theta))$, better known in the adversarial examples literature as the Fast Gradient Sign Method (Goodfellow et al., 2015).

Projections The second component of projected steepest descent for adversarial examples is to project iterates back onto the ℓp ball around x. Specifically, projected steepest descent performs the following iteration:

$$\delta^{(t+1)} = P_{\Delta_{p,\epsilon}}\big(\delta^{(t)} + v_p(\delta^{(t)})\big) \quad (5)$$

where $P_{\Delta_{p,\epsilon}}(\delta)$ is the standard projection operator that finds the perturbation $\delta' \in \Delta_{p,\epsilon}$ that is "closest" in Euclidean space to the input δ, defined as

$$P_{\Delta_{p,\epsilon}}(\delta) = \arg\min_{\delta'\in\Delta_{p,\epsilon}} \|\delta - \delta'\|_2^2. \quad (6)$$

Visually, a depiction of this procedure (steepest descent followed by a projection onto the perturbation region) for an ℓ2 adversary can be found in Figure 1. If we instead project the steepest descent directions with respect to the ℓ∞ norm onto the ℓ∞ ball of allowable perturbations, the projected steepest descent iteration reduces to

$$\delta^{(t+1)} = P_{\Delta_{\infty,\epsilon}}\big(\delta^{(t)} + v_\infty(\delta^{(t)})\big) = \mathrm{clip}_{[-\epsilon,\epsilon]}\big(\delta^{(t)} + \alpha\cdot\mathrm{sign}(\nabla\ell(x+\delta^{(t)};\theta))\big) \quad (7)$$

where $\mathrm{clip}_{[-\epsilon,\epsilon]}$ "clips" the input to lie within the range [−ε, ε]. This is exactly the Basic Iterative Method used in Kurakin et al. (2017), typically referred to in the literature as an ℓ∞ PGD adversary.
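Equation (7) is simple to implement. Below is a minimal PyTorch sketch of the ℓ∞ PGD iteration; the function name, the zero initialization, and the extra clipping to the valid image range [0, 1] (which the paper's ε values assume) are our own choices rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def linf_pgd(model, x, y, eps, alpha, iters):
    """A sketch of the l_inf projected steepest descent iteration of Eq. (7)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                    # steepest ascent step
            delta.clamp_(-eps, eps)                         # projection onto the ball
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)   # stay a valid image
    return delta.detach()
```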
3.2 SOLVING THE OUTER OPTIMIZATION PROBLEM

We next look at how to solve the outer optimization problem, or the problem of learning the weights θ that minimize the loss of our classifier. While many approaches have been proposed in the literature, we will focus on a heuristic called adversarial training, which has generally worked well in practice.

Adversarial training Although solving the min-max optimization problem may seem daunting, a classical result known as Danskin's theorem (Danskin, 1967) says that the gradient of a maximization problem is equal to the gradient of the objective evaluated at the optimum. For learning models that minimize the robust optimization problem from Equation (1), this means that

$$\nabla_\theta\Big(\sum_i \max_{\delta\in\Delta_{p,\epsilon}} \ell(x_i+\delta;\theta)\Big) = \sum_i \nabla_\theta\,\ell(x_i + \delta^*(x_i);\theta) \quad (8)$$

where $\delta^*(x_i) = \arg\max_{\delta\in\Delta_{p,\epsilon}} \ell(x_i+\delta;\theta)$. In other words, this means that in order to backpropagate through the robust optimization problem, we can solve the inner maximization and backpropagate through the solution. Adversarial training does this by empirically maximizing the inner problem with a PGD adversary. Note that since the inner problem is not solved exactly, Danskin's theorem does not strictly apply. However, in practice, adversarial training does seem to provide good empirical robustness, at least when evaluated against the ℓp threat model it was trained against.

4 ADVERSARIAL TRAINING FOR MULTIPLE PERTURBATION MODELS

We can now consider the core of this work, adversarial training procedures against multiple threat models. More formally, let S represent a set of threat models, such that p ∈ S corresponds to the ℓp perturbation model $\Delta_{p,\epsilon}$, and let $\Delta_S = \bigcup_{p\in S} \Delta_{p,\epsilon}$ be the union of all perturbation models in S. Note that the ε chosen for each ball is not typically the same, but we still use the same notation for simplicity, since the context will always make clear which ℓp ball we are talking about. Then, the generalization of the robust optimization problem in Equation (1) to multiple perturbation models is

$$\min_\theta \sum_i \max_{\delta\in\Delta_S} \ell(x_i+\delta;\theta). \quad (9)$$

The key difference is in the inner maximization, where the worst-case adversarial loss is now taken over multiple ℓp perturbation models. In order to perform adversarial training, using the same motivational idea from Danskin's theorem, we can backpropagate through the inner maximization by first finding (empirically) the optimal perturbation,

$$\delta^* = \arg\max_{\delta\in\Delta_S} \ell(x+\delta;\theta). \quad (10)$$

To find the optimal perturbation over the union of threat models, we begin by considering straightforward generalizations of standard adversarial training, which will use PGD to approximately solve the inner maximization over multiple adversaries.

4.1 SIMPLE COMBINATIONS OF MULTIPLE PERTURBATIONS

First, we study two simple approaches to generalizing adversarial training to multiple threat models. These methods can perform reasonably well in practice and are competitive with existing approaches without relying on complicated architectures. While these methods work to some degree, we later find empirically that they do not necessarily minimize the worst-case performance, and can converge to unexpected tradeoffs between multiple threat models.

Worst-case perturbation One way to generalize adversarial training to multiple threat models is to use each threat model independently, and train on the adversarial perturbation that achieved the maximum loss. Specifically, for each adversary p ∈ S, we solve the innermost maximization with an ℓp PGD adversary to get an approximate worst-case perturbation $\delta_p$,

$$\delta_p = \arg\max_{\delta\in\Delta_{p,\epsilon}} \ell(x+\delta;\theta), \quad (11)$$

and then approximate the maximum over all adversaries as

$$\delta^* \approx \arg\max_{\delta_p} \ell(x+\delta_p;\theta). \quad (12)$$

When |S| = 1, this reduces to standard adversarial training. Note that if each PGD adversary solved its subproblem from Equation (11) exactly, then this would be exactly the optimal perturbation $\delta^\star$.

PGD augmentation with all perturbations Another way to generalize adversarial training is to train on the adversarial perturbations for all p ∈ S to form a larger adversarial dataset. Specifically, instead of solving the robust problem for multiple adversaries in Equation (9), we instead solve

$$\min_\theta \sum_i \sum_{p\in S} \max_{\delta\in\Delta_{p,\epsilon}} \ell(x_i+\delta;\theta) \quad (13)$$

by using individual ℓp PGD adversaries to approximate the inner maximization for each threat model. Again, this reduces to standard adversarial training when |S| = 1.
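As a concrete illustration of the worst-case aggregation of Equations (11)–(12), here is a PyTorch sketch of the per-example selection of the strongest threat model; `adversaries` stands for per-threat-model attack callables such as the ℓ∞ PGD sketch above, and all names are illustrative rather than the authors' exact training code. The augmentation variant of Equation (13) instead keeps every $\delta_p$ and trains on all of them.

```python
import torch
import torch.nn.functional as F

def worst_case_delta(model, x, y, adversaries):
    """Per-example worst-case aggregation over threat models (a sketch)."""
    best_delta = torch.zeros_like(x)
    best_loss = torch.full((x.shape[0],), -float("inf"), device=x.device)
    for attack in adversaries:
        delta = attack(model, x, y)                  # delta_p of Eq. (11)
        with torch.no_grad():
            loss = F.cross_entropy(model(x + delta), y, reduction="none")
        better = loss > best_loss                    # keep the strongest per example
        best_loss = torch.where(better, loss, best_loss)
        best_delta[better] = delta[better]
    return best_delta                                # approximation of Eq. (12)
```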
While these methods work reasonably well in practice (which is shown later in Section 5), both approaches solve the inner maximization problem independently for each adversary, so each individual PGD adversary does not take advantage of the fact that the perturbation region is enlarged by the other threat models. To take advantage of the full perturbation region, we propose a modification to standard adversarial training, which combines information from all considered threat models into a single PGD adversary that is potentially stronger than the combination of independent adversaries.

4.2 MULTI STEEPEST DESCENT

To create a PGD adversary with full knowledge of the perturbation region, we propose an algorithm that incorporates the different threat models within each step of projected steepest descent. Rather than generating adversarial examples for each threat model with separate PGD adversaries, the core idea is to create a single adversarial perturbation by simultaneously maximizing the worst-case loss over all threat models at each projected steepest descent step. We call our method multi steepest descent (MSD), which can be summarized as the following iteration:

$$\delta_p^{(t+1)} = P_{\Delta_{p,\epsilon}}\big(\delta^{(t)} + v_p(\delta^{(t)})\big) \ \text{ for } p \in S,$$
$$\delta^{(t+1)} = \arg\max_{\delta_p^{(t+1)}} \ell\big(x+\delta_p^{(t+1)}\big). \quad (14)$$

Algorithm 1 Multi steepest descent for learning classifiers that are simultaneously robust to ℓp attacks for p ∈ S

Input: classifier $f_\theta$, data x, labels y
Parameters: $\epsilon_p$, $\alpha_p$ for p ∈ S; maximum iterations T; loss function ℓ
  δ^(0) = 0
  for t = 0 ... T − 1 do
    for p ∈ S do
      δ_p^(t+1) = P_{Δ_{p,ε_p}}(δ^(t) + v_p(δ^(t)))
    end for
    δ^(t+1) = argmax_{δ_p^(t+1)} ℓ(f_θ(x + δ_p^(t+1)), y)
  end for
  return δ^(T)

The key difference here is that at each iteration of MSD, we choose a projected steepest descent direction that maximizes the loss over all attack models p ∈ S, whereas standard adversarial training and the simpler approaches use comparatively myopic PGD subroutines that only use one threat model at a time. The full algorithm is in Algorithm 1, and can be used as a drop-in replacement for standard PGD adversaries to learn robust classifiers with adversarial training. We direct the reader to Appendix A for a complete description of steepest descent directions and projection operators for the ℓ∞, ℓ2, and ℓ1 norms. (The pure ℓ1 steepest descent step is inefficient since it only updates one coordinate at a time; it can be improved by taking steps on multiple coordinates, similar to that used in Tramèr & Boneh (2019), as also explained in Appendix A.)
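As a sketch of how Algorithm 1 looks in practice, the following PyTorch fragment implements the MSD iteration for a batch, with the per-threat-model steepest descent and projection rules abstracted behind dictionaries (`steps`, `projections`); these names, and taking the argmax per example within the batch, are our own implementation choices under the assumption that the ℓ∞/ℓ2/ℓ1 rules of Appendix A are supplied.

```python
import torch
import torch.nn.functional as F

def msd(model, x, y, steps, projections, T):
    """A sketch of the MSD iteration of Eq. (14) / Algorithm 1.

    steps[p](grad)        -> steepest descent direction v_p for norm p
    projections[p](delta) -> projection onto the corresponding l_p ball
    """
    delta = torch.zeros_like(x)
    for _ in range(T):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            cands, losses = [], []
            for p in steps:  # one candidate update per threat model
                c = projections[p](delta + steps[p](grad))
                cands.append(c)
                losses.append(F.cross_entropy(model(x + c), y, reduction="none"))
            losses = torch.stack(losses)       # shape (|S|, batch)
            best = losses.argmax(dim=0)        # strongest threat model per example
            idx = torch.arange(x.shape[0], device=x.device)
            delta = torch.stack(cands)[best, idx]
    return delta.detach()
```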
5 RESULTS

In this section, we present experimental results on using generalizations of adversarial training to achieve simultaneous robustness to ℓ∞, ℓ2, and ℓ1 perturbations on the MNIST and CIFAR10 datasets. Our primary goal is to show that adversarial training can in fact be adapted to a union of perturbation models using standard architectures to achieve competitive results, without the pitfalls described by Schott et al. (2019). Our results improve upon the state of the art in three key ways. First, we can use simpler, standard architectures for image classifiers, without relying on complex architectures or input binarization. Second, our method is able to learn a single MNIST model which is simultaneously robust to all three threat models, whereas previous work was only robust against two at a time. Finally, our method is easily scalable to datasets beyond MNIST, providing the first CIFAR10 model trained to be simultaneously robust against ℓ∞, ℓ2, and ℓ1 adversaries.

We trained models using both the simple generalizations of adversarial training to multiple adversaries and also using MSD. Since the analysis by synthesis model is not scalable to CIFAR10, we additionally trained CIFAR10 models against individual PGD adversaries to measure the changes and tradeoffs in universal robustness. We evaluated these models with a broad suite of both gradient and non-gradient based attacks using Foolbox (https://github.com/bethgelab/foolbox; Rauber et al., 2017) — the same attacks used by Schott et al. (2019) — and also incorporated all the PGD-based adversaries discussed in this paper. All aggregate statistics that combine multiple attacks compute the worst-case error rate over all attacks for each example. Summaries of these results at specific thresholds can be found in Tables 1 and 2, where B-ABS and ABS refer to binarized and non-binarized versions of the analysis by synthesis models from Schott et al. (2019), Pp refers to a model trained against a PGD adversary with respect to the ℓp norm, Worst-PGD and PGD-Aug refer to models trained using the worst-case and data augmentation generalizations of adversarial training, and MSD refers to models trained using multi steepest descent. Full tables containing the complete breakdown of these numbers over all individual attacks used in the evaluation are in Appendix C. We report the results against individual attacks and threat models for completeness; however, note that the goal of all these algorithms is to minimize the robust optimization objective from Equation (9). While there may be different implicit tradeoffs between individual threat models, in the end, the most meaningful metric for measuring the effective performance is the robust optimization objective, or the performance against the union of all attacks.

5.1 EXPERIMENTAL SETUP

Architectures and hyperparameters For MNIST, we use a four-layer convolutional network with two convolutional layers consisting of 32 and 64 5×5 filters and 2 units of padding, followed by a fully connected layer with 1024 hidden units, where both convolutional layers are followed by 2×2 max pooling layers and ReLU activations (this is the same architecture used by Madry et al. (2018)). This is in contrast to past work on MNIST, which relied on per-class variational autoencoders to achieve robustness against multiple threat models (Schott et al., 2019) and was not easily scalable to larger datasets. Since our methods have the same complexity as standard adversarial training, they also easily apply to standard CIFAR10 architectures, and in this paper we use the well-known pre-activation version of the ResNet18 architecture consisting of nine residual units with two convolutional layers each (He et al., 2016). A complete description of the hyperparameters used is in Appendix B, with hyperparameters for PGD adversaries in Appendix B.1 and hyperparameters for adversarial training in Appendix B.2. All reported ε are for images scaled to be in the range [0, 1]. All experiments can be run on modern GPU hardware (e.g. a single 1080ti).
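Concretely, the worst-case aggregation over attacks described above can be computed as follows; the boolean array layout is an assumption of ours, used only to illustrate the statistic reported in the tables.

```python
import numpy as np

def union_error_rate(correct):
    """Per-example worst case over all attacks (a sketch of the statistic).

    correct : boolean (n_attacks, n_examples) array, where correct[a, i]
              says whether example i is still classified correctly under
              attack a. An example counts as an error if any attack succeeds.
    """
    return 1.0 - np.all(correct, axis=0).mean()
```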
Attacks used for evaluation To evaluate the models, we incorporate the attacks from Schott et al. (2019) as well as our PGD-based adversaries using projected steepest descent; we provide a short description here. Note that we exclude attacks based on gradient estimation, since the gradients for the standard architectures used here are readily available. For ℓ∞ attacks, although we find the ℓ∞ PGD adversary to be quite effective, for completeness we additionally use the Foolbox implementations of the Fast Gradient Sign Method (Goodfellow et al., 2015), the PGD adversary (Madry et al., 2018), and the Momentum Iterative Method (Dong et al., 2018). For ℓ2 attacks, in addition to the ℓ2 PGD adversary, we use the Foolbox implementations of the same PGD adversary, the Gaussian noise attack (Rauber et al., 2017), the boundary attack (Brendel et al., 2017), DeepFool (Moosavi-Dezfooli et al., 2016), the pointwise attack (Schott et al., 2019), the DDN-based attack (Rony et al., 2018), and the C&W attack (Carlini & Wagner, 2017). For ℓ1 attacks, we use both the ℓ1 PGD adversary as well as additional Foolbox implementations of ℓ0 attacks at the same radius, namely the salt & pepper attack (Rauber et al., 2017) and the pointwise attack (Schott et al., 2019). Note that an ℓ1 adversary with radius ε is strictly stronger than an ℓ0 adversary with the same radius, and so we choose to explicitly defend against ℓ1 perturbations instead of the ℓ0 perturbations considered by Schott et al. (2019).

We make 10 random restarts for each of the evaluation results mentioned hereon for both MNIST and CIFAR10. (All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the boundary attack, which by default makes 25 trials per iteration, and the DDN-based attack, which does not benefit from restarts owing to a deterministic initialization of δ.) We encourage future work in this area to incorporate the same, since the success of all attacks, especially decision-based or gradient-free ones, is observed to increase significantly over restarts.

5.2 MNIST

We first present results on the MNIST dataset, which are summarized in Table 1 (a more detailed breakdown over each individual attack is in Appendix C.1). While considered an "easy" dataset, we note that the previous state-of-the-art result for multiple threat models on MNIST (and our primary comparison) is only able to defend against two out of three threat models at a time (Schott et al., 2019), using comparatively complex variational autoencoder architectures. (The B-ABS and ABS results are from Schott et al. (2019), which used an ℓ0 threat model of the same radius and evaluated against ℓ0 attacks, so the reported number is an upper bound on the ℓ1 adversarial accuracy. Further, they evaluate their model without restarts, and the adversarial accuracy against all attacks is an upper bound based on the reported accuracies for the individual threat models. Finally, all ABS results were computed using numerical gradient estimation, since gradients are not readily available.) The model trained with MSD achieves the best performance against all attacks, achieving an adversarial accuracy of 58.7% (individually 63.7%, 82.6%, and 62.3%) against the union of (ℓ∞, ℓ2, ℓ1) perturbations with radius ε = (0.3, 1.5, 12). Complete robustness curves over a range of epsilons for each threat model can be found in Figure 2. A comparison of our results with concurrent work (Tramèr & Boneh, 2019) can be found in Appendix D.

5.3 CIFAR10

Next, we present results on the CIFAR10 dataset, which are summarized in Table 2 (a more detailed breakdown over each individual attack is in Appendix C.2). Our MSD approach reaches the best performance against the union of attacks, and achieves 46.1% (individually 47.6%, 64.3%, 53.4%) adversarial accuracy against the union of (ℓ∞, ℓ2, ℓ1) perturbations of size ε = (0.03, 0.5, 12).
Interestingly, note that the P1 model trained against an ℓ1 PGD adversary is not very robust when evaluated against other attacks, even though it can defend reasonably well against the ℓ1 PGD attack in isolation (Table 4 in Appendix C.2). Complete robustness curves over a range of epsilons for each threat model can be found in Figure 3. A comparison of our results with concurrent work (Tramèr & Boneh, 2019) can be found in Appendix D. While adversarial defenses are generally not intended to defend against attacks outside of the threat model, we show some experiments exploring this aspect in Appendix E.

On tradeoffs and variability of the simpler defenses One major drawback of the simpler methods for generalizing adversarial training to multiple threat models is their variability and unclear tradeoffs across different settings. For example, on MNIST we see that the data augmentation approach fails to reduce the robust optimization objective: the ℓ∞ threat model dominates the training process and we get a suboptimal tradeoff between threat models which is not robust to the union. Similarly, on CIFAR10 we see that the worst-case approach for adversarial training also converges to a model which has suboptimal robust performance against the union of threat models. This highlights the inconsistency of the simpler generalizations of adversarial training: depending on the dataset and the threat models, they may not ultimately minimize the robust optimization objective from Equation (9), and the tradeoffs may vary significantly with the problem setting. On the other hand, in both problem settings, we find MSD is consistent at finding a more optimal tradeoff which minimizes the worst-case loss in the union of the threat models. As a result, rather than using one of the simpler methods and converging to a potentially unclear tradeoff between threat models, we recommend using MSD, which directly minimizes the worst-case performance among the specified threat models.

6 CONCLUSION

In this paper, we showed that adversarial training can be quite effective when training against a union of multiple perturbation models. We compare two simple generalizations of adversarial training and an improved adversarial training procedure, multi steepest descent, which incorporates the different perturbation models directly into the direction of steepest descent. The MSD-based adversarial training procedure is able to outperform past approaches, demonstrating that adversarial training can in fact learn networks that are robust to multiple perturbation models simultaneously (as long as they are included in the threat model) while being scalable beyond MNIST and using standard architectures.

B EXPERIMENTAL DETAILS

B.1 HYPERPARAMETERS FOR PGD ADVERSARIES

In this section, we describe the parameters used for all PGD adversaries in this paper.

MNIST The ℓ∞ adversary used a step size α = 0.01 within a radius of ε = 0.3 for 50 iterations. The ℓ2 adversary used a step size α = 0.1 within a radius of ε = 1.5 for 100 iterations. The ℓ1 adversary used a step size of α = 0.05 within a radius of ε = 12 for 50 iterations. By default, the attack is run with two restarts, once starting with δ = 0 and once by randomly initializing δ in the allowable perturbation ball.
We set k1 = 5 and k2 = 20, as described in A.1. The MSD adversary used step sizes of α = (0.01, 0.2, 0.05) for the (ℓ∞, ℓ2, ℓ1) directions within a radius of ε = (0.3, 1.5, 12) for 100 iterations. At test time, we increase the number of iterations to (100, 200, 100) for (ℓ∞, ℓ2, ℓ1).

CIFAR10 The ℓ∞ adversary used a step size α = 0.003 within a radius of ε = 0.03 for 40 iterations. The ℓ2 adversary used a step size α = 0.05 within a radius of ε = 0.5 for 50 iterations. The ℓ1 adversary used a step size α = 0.1 within a radius of ε = 12 for 50 iterations. Again, k1 = 5 and k2 = 20, as described in A.1. The MSD adversary used step sizes of α = (0.003, 0.05, 0.05) for the (ℓ∞, ℓ2, ℓ1) directions within a radius of ε = (0.03, 0.3, 12) for 50 iterations. Note that the MSD model trained for an ℓ2 radius of 0.3 is in fact robust to the higher radius of 0.5.

B.2 TRAINING HYPERPARAMETERS

In this section, we describe the parameters used for adversarial training. For all models, we used the SGD optimizer with momentum 0.9 and weight decay 5·10^−4.

MNIST We train the models for a maximum of 20 epochs. We used a variation of the learning rate schedule from Smith (2018), which is piecewise linear from 0 to 0.1 over the first 7 epochs, down to 0.001 over the next 8 epochs, and finally back down to 0.0001 in the last 5 epochs.

CIFAR10 We used a variation of the learning rate schedule from Smith (2018) to achieve super-convergence in 50 epochs, which is piecewise linear from 0 to 0.1 over the first 20 epochs, down to 0.005 over the next 20 epochs, and finally back down to 0 in the last 10 epochs.
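As an illustration, the CIFAR10 schedule just described can be written in a couple of lines; this is our own sketch, not the authors' code.

```python
import numpy as np

def cifar10_lr(epoch):
    """Piecewise-linear learning rate described above: 0 -> 0.1 over epochs
    0-20, 0.1 -> 0.005 over epochs 20-40, and 0.005 -> 0 over epochs 40-50."""
    return float(np.interp(epoch, [0, 20, 40, 50], [0.0, 0.1, 0.005, 0.0]))
```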
D COMPARISON WITH CONCURRENT WORK

In this section we compare the results of our trained MSD models with those of Tramèr & Boneh (2019), who study the theoretical and empirical trade-offs of adversarial robustness in various settings when defending against multiple adversaries. The training methods presented in their comparisons, namely Advavg and Advmax, closely resemble the naive approaches discussed in this paper: PGD-Aug and Worst-PGD respectively. We use the results as-is from their work, and additionally report the position of our MSD models at the revised thresholds used by Tramèr & Boneh (2019) without specially retraining them. The results in Tables 5 and 6 show that the relative advantage of MSD over the naive techniques does hold up. While we make this comparison to the most relevant concurrent work for completeness, the following differences can bias the reported robust accuracies for the MSD models lower than expected (and, correspondingly, the reported robust accuracies for the other models higher than expected):

1. Use of random restarts: We observe in our experiments that using up to 10 restarts for all our attacks leads to a decrease in model accuracy of 5 to 10% across all models. Tramèr & Boneh do not mention restarting their attacks for these models, and so the results for models apart from MSD in Tables 5 and 6 could potentially be lowered with random restarts.

2. Different training and testing thresholds: The MSD model for the MNIST dataset was trained at ε = (0.3, 1.5, 12) for the ℓ∞, ℓ2, ℓ1 perturbation balls respectively, while Tramèr & Boneh (2019) tested at ε = (0.3, 2.0, 10). This may lower the robust accuracy at these thresholds for the MSD model, since it was not trained for those particular thresholds. Likewise, the MSD model for CIFAR10 was trained at ε = (0.03, 0.3, 12) for the ℓ∞, ℓ2, ℓ1 perturbation balls respectively, while Tramèr & Boneh (2019) tested at ε = (4/255, 0, 2000/255).

3. Different perturbation models: For the CIFAR10 results in Table 6, the Advavg and Advmax models are trained and tested only for ℓ1 and ℓ∞ adversarial perturbations, whereas the MSD model is robust to the union of ℓ1, ℓ2, and ℓ∞, a much harder task.

4. Larger suite of attacks used: The attacks used by Tramèr & Boneh are PGD, EAD (Chen et al., 2017), and the Pointwise Attack (Schott et al., 2019) for ℓ1; PGD, C&W (Carlini & Wagner, 2017), and the Boundary Attack (Brendel et al., 2017) for ℓ2; and PGD for ℓ∞ adversaries. We use a more expansive suite of attacks, as shown in Appendix C. Some of the attacks, like DDN, which proved to be strong adversaries in most cases, were not considered in their evaluation.

E ATTACKS OUTSIDE THE THREAT MODEL

In this section, we present some additional experiments exploring the performance of our models on attacks which lie beyond the threat model. Note that there is no principled reason to expect robustness here (most adversarial defenses tend not to generalize beyond the threat model defended against), and these results are presented for exploratory reasons.

Common corruptions  We measure the performance of all the models on CIFAR-10-C, a CIFAR10 benchmark with common corruptions applied to it (e.g., noise, blur, and compression). We report the results in Table 7. We find that, apart from the P1 model, the rest achieve some improved robustness against these common corruptions over the standard CIFAR10 model.
Defending against ℓ1 and ℓ∞ and evaluating on ℓ2  We also briefly study what happens when one trains against the ℓ1 and ℓ∞ threat models while evaluating against the ℓ2 adversary. Specifically, we take the MSD approach on MNIST and simply remove the ℓ2 adversary from the threat model. This results in a model whose ℓ1 and ℓ∞ robust performance against a PGD adversary drops by 1%, and whose ℓ2 robust performance against a PGD adversary (which it was not trained for) drops by 2%, in comparison to the original MSD approach on all three threat models. As a result, we empirically observe that including the ℓ2 threat model in this setting actually improved overall robustness against all three threat models. Unsurprisingly, the ℓ2 performance drops to some degree, but the model does not lose all of its robustness.
1. What is the main contribution of the paper regarding adversarial training?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. What are some suggestions provided by the reviewer for future research directions?
Review
Review The paper proposes to do adversarial training on multiple L_p norm perturbation models simultaneously, to make the model robust against various types of attacks.

[Novelty] I feel this is just a natural extension of adversarial training. If we define the perturbation set in PGD to be S, then in general S can be a union of perturbation sets for several L_p norms, and the resulting algorithm will be MSD (every time you do a gradient update and then find the worst-case projection in S). It would be interesting to study the convergence of this kind of algorithm: since S is no longer convex, the projection is trickier to define. Unfortunately this is not discussed in the paper. In terms of experiments, this is an interesting data point showing that we can have a model that is (weakly) robust to L1, L2 and Linf norms simultaneously. However, the results are not surprising, since there is a more-than-10% performance decrease compared to the original adversarial training under each particular attack. So it is still not clear whether we can get a model that simultaneously achieves L1, L2, Linf robust error comparable to original PGD training.

[Performance]
- It seems MSD is not always better than the others (Worst-PGD and PGD-Aug). For MNIST, MSD performs poorly on the Linf norm and it is not clear why.
- There is a significant performance drop in clean accuracy, especially for MSD on the MNIST data.

[Suggestions]
- As mentioned before, studying the convergence properties of the proposed method would be interesting.
- It would be interesting if you could train on a set of perturbation models and make the model also robust to another perturbation not seen in the training phase. For instance, can we apply the proposed method to L_{1,inf} in training and generalize to L2 perturbations?

===== Thanks for the response. I still have concerns about novelty so would like to keep my rating unchanged.
ICLR
Title
Adversarial Robustness Against the Union of Multiple Perturbation Models

Abstract
Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers, but the vast majority has defended against single types of attacks. Recent work has looked at defending against multiple attacks, specifically on the MNIST dataset, yet this approach used a relatively complex architecture, claiming that standard adversarial training cannot apply because it “overfits” to a particular norm. In this work, we show that it is indeed possible to adversarially train a robust model against a union of norm-bounded attacks, by using a natural generalization of the standard PGD-based procedure for adversarial training to multiple threat models. With this approach, we are able to train standard architectures which are robust against ℓ∞, ℓ2, and ℓ1 attacks, outperforming past approaches on the MNIST dataset and providing the first CIFAR10 network trained to be simultaneously robust against the (ℓ∞, ℓ2, ℓ1) threat models, which achieves adversarial accuracy of 46.1% against the union of (ℓ∞, ℓ2, ℓ1) perturbations with radius ε = (0.03, 0.5, 12).

1 INTRODUCTION

Machine learning algorithms have been shown to be susceptible to adversarial examples (Szegedy et al., 2014) through the existence of data points which can be adversarially perturbed to be misclassified, but are “close enough” to the original example to be imperceptible to the human eye. Methods to generate adversarial examples, or “attacks”, typically rely on gradient information, and most commonly use variations of projected gradient descent (PGD) to maximize the loss within a small perturbation region, usually referred to as the adversary’s threat model. Since then, a number of heuristic defenses have been proposed to defend against this phenomenon, e.g. distillation (Papernot et al., 2016) or, more recently, logit pairing (Kannan et al., 2018). However, as time goes by, the original robustness claims of these defenses typically do not hold up to more advanced adversaries or more thorough attacks (Carlini & Wagner, 2017; Engstrom et al., 2018; Mosbach et al., 2018). One heuristic defense that seems to have survived (to this day) is adversarial training against a PGD adversary (Madry et al., 2018), which remains quite popular due to its simplicity and apparent empirical robustness. The method continues to perform well in empirical benchmarks even when compared to recent work in provable defenses, although it comes with no formal guarantees. Some recent work, however, pointed out that adversarial training against ℓ∞ perturbations “overfits” to the ℓ∞ threat model, and used this as motivation to propose a more complicated architecture in order to achieve robustness to multiple perturbation types on the MNIST dataset (Schott et al., 2019). In this work, we offer an alternative viewpoint: while adversarial training can overfit to individual threat models, we show that it is indeed possible to use adversarial training to learn a model which is simultaneously robust against multiple types of ℓp norm-bounded attacks (we consider ℓ∞, ℓ2, and ℓ1 attacks, but the approach can apply to more general attacks).
First, we show that while simple generalizations of adversarial training to multiple threat models can achieve some degree of robustness against the union of these threat models, the performance is inconsistent and converges to suboptimal tradeoffs which may not actually minimize the robust objective. Second, we propose a slightly modified PGD-based algorithm called multi steepest descent (MSD) for adversarial training which more naturally incorporates the different perturbations within the PGD iterates, further improving the adversarial training approach by directly minimizing the robust optimization objective. Third, we show empirically that our approach improves upon past work by being applicable to standard network architectures, easily scaling beyond the MNIST dataset, and outperforming past results on robustness against multiple perturbation types.

2 RELATED WORK

After their original introduction, one of the first widely considered attacks against deep networks was the Fast Gradient Sign Method (Goodfellow et al., 2015), which showed that a single, small step in the direction of the sign of the gradient could sometimes fool machine learning classifiers. While this worked to some degree, the Basic Iterative Method (Kurakin et al., 2017) (now typically referred to as the PGD attack) was significantly more successful at creating adversarial examples, and now lies at the core of many papers. Since then, a number of improvements and adaptations have been made to the base PGD algorithm to overcome heuristic defenses and create stronger adversaries. Adversarial attacks were thought to be rendered harmless by realistic transformations (Lu et al., 2017) until the attack was augmented to be robust to them (Athalye et al., 2018b). Adversarial examples generated using PGD on surrogate models can transfer to black-box models (Papernot et al., 2017). Utilizing core optimization techniques such as momentum can greatly improve the attack success rate and transferability, and was the winner of the NIPS 2017 competition on adversarial examples (Dong et al., 2018). Uesato et al. (2018) showed that a number of ImageNet defenses were not as robust as originally thought, and Athalye et al. (2018a) defeated many of the heuristic defenses submitted to ICLR 2018 shortly after the reviewing cycle ended, all with stronger PGD variations. Throughout this cycle of attack and defense, some defenses were uncovered that remain robust to this day. The aforementioned PGD attack, and the related defense known as adversarial training with a PGD adversary (which incorporates PGD-attacked examples into the training process), has so far remained empirically robust (Madry et al., 2018). Verification methods to certify robustness properties of networks were developed, utilizing techniques such as SMT solvers (Katz et al., 2017), SDP relaxations (Raghunathan et al., 2018b), and mixed-integer linear programming (Tjeng et al., 2019), the last of which has recently been successfully scaled to reasonably sized networks. Other work has folded verification into the training process to create provably robust networks (Wong & Kolter, 2018; Raghunathan et al., 2018a), some of which have also been scaled to even larger networks (Wong et al., 2018; Mirman et al., 2018; Gowal et al., 2018). Although some of these could potentially be extended to apply to multiple perturbations simultaneously, most of these works have focused primarily on defending against and verifying only a single type of adversarial perturbation at a time.
Last but most relevant to this work are adversarial defenses that attempt to be robust against multiple types of attacks simultaneously. Schott et al. (2019) used multiple variational autoencoders to construct a complex architecture for the MNIST dataset that is not as easily attacked by ℓ∞, ℓ2, and ℓ0 adversaries. Importantly, Schott et al. (2019) compare to adversarial training with an ℓ∞-bounded PGD adversary as described by Madry et al. (2018), claiming that the adversarial training defense overfits to the ℓ∞ metric, and they do not consider other forms of adversarial training. Following this, a number of concurrent papers have since been released. While not studied as a defense, Kang et al. (2019) study the transferability of adversarial robustness between models trained against different threat models. Croce & Hein (2019) propose a provable adversarial defense against all ℓp norms for p ≥ 1 using a regularization term. Finally, Tramèr & Boneh (2019) study the theoretical and empirical trade-offs of adversarial robustness in various settings when defending against multiple adversaries; however, they use a rotation and translation adversary instead of an ℓ2 adversary for CIFAR10.

Contributions  In this work we demonstrate the effectiveness of adversarial training for learning models that are robust against a union of multiple perturbation models. First, we show that while simple aggregations of different adversarial attacks can achieve robustness against multiple perturbation models without resorting to complex architectures, the results are inconsistent across datasets and make suboptimal tradeoffs between the threat models. Second, we propose a modified PGD iteration that more naturally considers multiple perturbation models within the inner optimization loop of adversarial training. Third, we evaluate all approaches on the MNIST and CIFAR10 datasets, showing that our proposed generalizations of adversarial training can significantly outperform past approaches for the union of ℓ∞, ℓ2, and ℓ1 attacks. Specifically, on MNIST, our model achieves 58.7% (individually 63.7%, 82.6%, 62.3%) adversarial accuracy against the union of all three attacks (ℓ∞, ℓ2, ℓ1) for ε = (0.3, 1.5, 12) respectively, substantially improving upon the multiple-perturbation-model robustness described in Schott et al. (2019) and also improving upon the simpler aggregations of multiple adversarial attacks. Unlike past work, we also train a CIFAR10 model, which achieves 46.1% (individually 47.6%, 64.3%, 53.4%) adversarial accuracy against the union of all three attacks (ℓ∞, ℓ2, ℓ1) for ε = (0.03, 0.5, 12). Finally, for completeness, we also draw relevant comparisons to concurrent work, and show that the relative advantage of our approach still holds.

3 OVERVIEW OF ADVERSARIAL TRAINING

Adversarial training is an approach to learn a classifier which minimizes the worst-case loss within some perturbation region (the threat model). Specifically, for some network $f_\theta$ parameterized by θ, loss function ℓ, and training data $\{x_i, y_i\}_{i=1\ldots n}$, the robust optimization problem of minimizing the worst-case loss within ℓp norm-bounded perturbations with radius ε is

$$\min_\theta \sum_i \max_{\delta \in \Delta_{p,\epsilon}} \ell(f_\theta(x_i + \delta), y_i), \tag{1}$$

where $\Delta_{p,\epsilon} = \{\delta : \|\delta\|_p \le \epsilon\}$ is the ℓp ball with radius ε centered at the origin. To simplify the notation, we abbreviate $\ell(f_\theta(x+\delta), y) = \ell(x+\delta;\theta)$.

3.1 SOLVING THE INNER OPTIMIZATION PROBLEM

We first look at solving the inner maximization problem, namely

$$\max_{\delta \in \Delta_{p,\epsilon}} \ell(x+\delta;\theta). \tag{2}$$
This is the problem addressed by the “attackers” in the space of adversarial examples, hoping that the classifier can be tricked by the optimally perturbed image, $x + \delta^\star$. Typical solutions solve this problem by running a form of projected gradient descent, which iteratively takes steps in the gradient direction to increase the loss, followed by a projection step back onto the feasible region, the ℓp ball. Since the gradients at the example points themselves (i.e., δ = 0) are typically too small to make efficient progress, a variation called projected steepest descent is more commonly used.

Steepest descent  For some norm ‖·‖p and step size α, the direction of steepest descent on the loss function ℓ for a perturbation δ is

$$v_p(\delta) = \arg\max_{\|v\|_p \le \alpha} v^T \nabla \ell(x+\delta;\theta). \tag{3}$$

Then, instead of taking gradient steps, steepest descent uses the following iteration:

$$\delta^{(t+1)} = \delta^{(t)} + v_p(\delta^{(t)}). \tag{4}$$

In practice, the norm used in steepest descent is typically taken to be the same ℓp norm used to define the perturbation region $\Delta_{p,\epsilon}$. However, depending on the norm used, the direction of steepest descent can be quite different from the actual gradient (Figure 1). Note that a single steepest descent step with respect to the ℓ∞ norm reduces to $v_\infty(\delta) = \alpha \cdot \mathrm{sign}(\nabla \ell(x+\delta;\theta))$, better known in the adversarial examples literature as the Fast Gradient Sign Method (Goodfellow et al., 2015).

Projections  The second component of projected steepest descent for adversarial examples is to project iterates back onto the ℓp ball around x. Specifically, projected steepest descent performs the following iteration:

$$\delta^{(t+1)} = P_{\Delta_{p,\epsilon}}\left(\delta^{(t)} + v_p(\delta^{(t)})\right) \tag{5}$$

where $P_{\Delta_{p,\epsilon}}(\delta)$ is the standard projection operator that finds the perturbation $\delta' \in \Delta_{p,\epsilon}$ that is “closest” in Euclidean space to the input δ, defined as

$$P_{\Delta_{p,\epsilon}}(\delta) = \arg\min_{\delta' \in \Delta_{p,\epsilon}} \|\delta - \delta'\|_2^2. \tag{6}$$

Visually, a depiction of this procedure (steepest descent followed by a projection onto the perturbation region) for an ℓ2 adversary can be found in Figure 1. If we instead project the steepest descent directions with respect to the ℓ∞ norm onto the ℓ∞ ball of allowable perturbations, the projected steepest descent iteration reduces to

$$\delta^{(t+1)} = P_{\Delta_{\infty,\epsilon}}\left(\delta^{(t)} + v_\infty(\delta^{(t)})\right) = \mathrm{clip}_{[-\epsilon,\epsilon]}\left(\delta^{(t)} + \alpha \cdot \mathrm{sign}(\nabla \ell(x+\delta^{(t)};\theta))\right) \tag{7}$$

where $\mathrm{clip}_{[-\epsilon,\epsilon]}$ “clips” the input to lie within the range [−ε, ε]. This is exactly the Basic Iterative Method used in Kurakin et al. (2017), typically referred to in the literature as an ℓ∞ PGD adversary.

3.2 SOLVING THE OUTER OPTIMIZATION PROBLEM

We next look at how to solve the outer optimization problem, i.e., the problem of learning the weights θ that minimize the loss of our classifier. While many approaches have been proposed in the literature, we focus on a heuristic called adversarial training, which has generally worked well in practice.

Adversarial training  Although solving the min-max optimization problem may seem daunting, a classical result known as Danskin’s theorem (Danskin, 1967) says that the gradient of a maximization problem is equal to the gradient of the objective evaluated at the optimum. For learning models that minimize the robust optimization problem from Equation (1), this means that

$$\nabla_\theta \left(\sum_i \max_{\delta \in \Delta_{p,\epsilon}} \ell(x_i+\delta;\theta)\right) = \sum_i \nabla_\theta\, \ell(x_i + \delta^*(x_i); \theta) \tag{8}$$

where $\delta^*(x_i) = \arg\max_{\delta \in \Delta_{p,\epsilon}} \ell(x_i+\delta;\theta)$. In other words, in order to backpropagate through the robust optimization problem, we can solve the inner maximization and backpropagate through the solution.
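As a concrete illustration of Eq. (7), the following is a minimal PyTorch-style sketch of one projected steepest descent step for the ℓ∞ threat model. `pgd_linf_step` is a hypothetical helper, not code from the paper, and the final clamp of x + δ to [0, 1] is an extra assumption for image data not present in Eq. (7).

```python
import torch

def pgd_linf_step(model, loss_fn, x, y, delta, alpha, eps):
    # One projected steepest descent step w.r.t. the l-infinity norm, Eq. (7):
    # the steepest ascent direction is alpha * sign(gradient), and the
    # projection onto the l-infinity ball is an elementwise clamp to [-eps, eps].
    delta = delta.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    with torch.no_grad():
        new_delta = torch.clamp(delta + alpha * delta.grad.sign(), -eps, eps)
        # Extra assumption for image data: keep x + delta inside [0, 1].
        new_delta = torch.clamp(x + new_delta, 0.0, 1.0) - x
    return new_delta.detach()
```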
Adversarial training does this by empirically maximizing the inner problem with a PGD adversary. Note that since the inner problem is not solved exactly, Danskin’s theorem does not strictly apply. However, in practice, adversarial training does seem to provide good empirical robustness, at least when evaluated against the ℓp threat model it was trained against.

4 ADVERSARIAL TRAINING FOR MULTIPLE PERTURBATION MODELS

We can now consider the core of this work, adversarial training procedures against multiple threat models. More formally, let S represent a set of threat models, such that p ∈ S corresponds to the ℓp perturbation model $\Delta_{p,\epsilon}$, and let $\Delta_S = \bigcup_{p \in S} \Delta_{p,\epsilon}$ be the union of all perturbation models in S. Note that the ε chosen for each ball is not typically the same, but we still use the same notation for simplicity, since the context will always make clear which ℓp ball we are talking about. Then, the generalization of the robust optimization problem in Equation (1) to multiple perturbation models is

$$\min_\theta \sum_i \max_{\delta \in \Delta_S} \ell(x_i+\delta;\theta). \tag{9}$$

The key difference is in the inner maximization, where the worst-case adversarial loss is now taken over multiple ℓp perturbation models. In order to perform adversarial training, using the same motivational idea from Danskin’s theorem, we can backpropagate through the inner maximization by first finding (empirically) the optimal perturbation,

$$\delta^* = \arg\max_{\delta \in \Delta_S} \ell(x+\delta;\theta). \tag{10}$$

To find the optimal perturbation over the union of threat models, we begin by considering straightforward generalizations of standard adversarial training, which use PGD to approximately solve the inner maximization over multiple adversaries.

4.1 SIMPLE COMBINATIONS OF MULTIPLE PERTURBATIONS

First, we study two simple approaches to generalizing adversarial training to multiple threat models. These methods can perform reasonably well in practice and are competitive with existing approaches without relying on complicated architectures. While they work to some degree, we later find empirically that they do not necessarily minimize the worst-case performance, and can converge to unexpected tradeoffs between multiple threat models.

Worst-case perturbation  One way to generalize adversarial training to multiple threat models is to use each threat model independently, and train on the adversarial perturbation that achieved the maximum loss. Specifically, for each adversary p ∈ S, we solve the innermost maximization with an ℓp PGD adversary to get an approximate worst-case perturbation δp,

$$\delta_p = \arg\max_{\delta \in \Delta_{p,\epsilon}} \ell(x+\delta;\theta), \tag{11}$$

and then approximate the maximum over all adversaries as

$$\delta^* \approx \arg\max_{\delta_p} \ell(x+\delta_p;\theta). \tag{12}$$

When |S| = 1, this reduces to standard adversarial training. Note that if each PGD adversary solved its subproblem from Equation (11) exactly, then this would be exactly the optimal perturbation $\delta^\star$.

PGD augmentation with all perturbations  Another way to generalize adversarial training is to train on the adversarial perturbations for all p ∈ S to form a larger adversarial dataset. Specifically, instead of solving the robust problem for multiple adversaries in Equation (9), we instead solve

$$\min_\theta \sum_i \sum_{p \in S} \max_{\delta \in \Delta_{p,\epsilon}} \ell(x_i+\delta;\theta) \tag{13}$$

by using individual ℓp PGD adversaries to approximate the inner maximization for each threat model. Again, this reduces to standard adversarial training when |S| = 1.
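A minimal sketch of the worst-case generalization in Eqs. (11)-(12) follows, assuming each entry of `adversaries` is a callable that runs a full ℓp PGD attack and returns its approximate maximizer; the interface is hypothetical, not the paper's implementation. For brevity this picks a single winner for the whole batch; a per-example argmax is the straightforward refinement.

```python
import torch

def worst_case_delta(model, loss_fn, x, y, adversaries):
    # Eqs. (11)-(12): run one PGD adversary per threat model independently,
    # then keep the perturbation with the highest loss.
    best_delta, best_loss = None, float("-inf")
    for attack in adversaries:  # each returns an approximate delta_p
        delta_p = attack(model, loss_fn, x, y)
        with torch.no_grad():
            loss_p = loss_fn(model(x + delta_p), y).item()
        if loss_p > best_loss:
            best_loss, best_delta = loss_p, delta_p
    return best_delta
```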
While these methods work reasonably well in practice (as shown later in Section 5), both approaches solve the inner maximization problem independently for each adversary, so each individual PGD adversary does not take advantage of the fact that the perturbation region is enlarged by the other threat models. To take advantage of the full perturbation region, we propose a modification to standard adversarial training, which combines information from all considered threat models into a single PGD adversary that is potentially stronger than the combination of independent adversaries.

4.2 MULTI STEEPEST DESCENT

To create a PGD adversary with full knowledge of the perturbation region, we propose an algorithm that incorporates the different threat models within each step of projected steepest descent. Rather than generating adversarial examples for each threat model with separate PGD adversaries, the core idea is to create a single adversarial perturbation by simultaneously maximizing the worst-case loss over all threat models at each projected steepest descent step. We call our method multi steepest descent (MSD), which can be summarized as the following iteration:

$$\delta_p^{(t+1)} = P_{\Delta_{p,\epsilon}}\left(\delta^{(t)} + v_p(\delta^{(t)})\right) \;\text{for}\; p \in S, \qquad \delta^{(t+1)} = \arg\max_{\delta_p^{(t+1)}} \ell(x + \delta_p^{(t+1)}). \tag{14}$$

Algorithm 1  Multi steepest descent for learning classifiers that are simultaneously robust to ℓp attacks for p ∈ S
Input: classifier fθ, data x, labels y
Parameters: εp, αp for p ∈ S, maximum iterations T, loss function ℓ
  δ(0) = 0
  for t = 0 … T − 1 do
    for p ∈ S do
      δp(t+1) = P_{Δp,ε}(δ(t) + vp(δ(t)))
    end for
    δ(t+1) = argmax over δp(t+1) of ℓ(fθ(x + δp(t+1)), y)
  end for
  return δ(T)

The key difference here is that at each iteration of MSD, we choose a projected steepest descent direction that maximizes the loss over all attack models p ∈ S, whereas standard adversarial training and the simpler approaches use comparatively myopic PGD subroutines that only consider one threat model at a time. The full algorithm is given in Algorithm 1 (a code sketch follows below), and can be used as a drop-in replacement for standard PGD adversaries to learn robust classifiers with adversarial training. We direct the reader to Appendix A for a complete description of the steepest descent directions and projection operators for the ℓ∞, ℓ2, and ℓ1 norms.¹

5 RESULTS

In this section, we present experimental results on using generalizations of adversarial training to achieve simultaneous robustness to ℓ∞, ℓ2, and ℓ1 perturbations on the MNIST and CIFAR10 datasets. Our primary goal is to show that adversarial training can in fact be adapted to a union of perturbation models using standard architectures to achieve competitive results, without the pitfalls described by Schott et al. (2019). Our results improve upon the state of the art in three key ways. First, we can use simpler, standard architectures for image classifiers, without relying on complex architectures or input binarization. Second, our method is able to learn a single MNIST model which is simultaneously robust to all three threat models, whereas previous work was only robust against two at a time. Finally, our method is easily scalable to datasets beyond MNIST, providing the first CIFAR10 model trained to be simultaneously robust against ℓ∞, ℓ2, and ℓ1 adversaries. We trained models using both the simple generalizations of adversarial training to multiple adversaries and also using MSD.
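The following is a minimal sketch of the MSD inner loop from Algorithm 1. Here `step_fns` is a hypothetical list of per-norm step functions, each computing one projected steepest descent step for its threat model (a function like `pgd_linf_step` from the earlier sketch could serve as the ℓ∞ entry); as before, the argmax is taken at the batch level for brevity, whereas a per-example argmax is the faithful refinement.

```python
import torch

def msd_attack(model, loss_fn, x, y, step_fns, T):
    # Algorithm 1 (sketch): at every iteration, take one projected steepest
    # descent step per threat model p in S and keep the candidate delta with
    # the highest loss.
    delta = torch.zeros_like(x)
    for _ in range(T):
        candidates = [step(model, loss_fn, x, y, delta) for step in step_fns]
        with torch.no_grad():
            losses = [loss_fn(model(x + d), y).item() for d in candidates]
        delta = candidates[losses.index(max(losses))]  # argmax over p in S
    return delta
```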
Since the analysis by synthesis model is not scalable to CIFAR10, we additionally trained CIFAR10 models against individual PGD adversaries to measure the changes and tradeoffs in universal robustness. We evaluated these models with a broad suite of both gradient and non-gradient based attacks using Foolbox² (the same attacks used by Schott et al. (2019)), and also incorporated all the PGD-based adversaries discussed in this paper. All aggregate statistics that combine multiple attacks compute the worst-case error rate over all attacks for each example. Summaries of these results at specific thresholds can be found in Tables 1 and 2, where B-ABS and ABS refer to the binarized and non-binarized versions of the analysis by synthesis models from Schott et al. (2019), Pp refers to a model trained against a PGD adversary with respect to the ℓp norm, Worst-PGD and PGD-Aug refer to models trained using the worst-case and data augmentation generalizations of adversarial training, and MSD refers to models trained using multi steepest descent. Full tables containing the complete breakdown of these numbers over all individual attacks used in the evaluation are in Appendix C. We report the results against individual attacks and threat models for completeness; however, note that the goal of all these algorithms is to minimize the robust optimization objective from Equation (9). While there may be different implicit tradeoffs between individual threat models, in the end, the most meaningful metric for measuring the effective performance is the robust optimization objective, or the performance against the union of all attacks.

¹The pure ℓ1 steepest descent step is inefficient since it only updates one coordinate at a time. It can be improved by taking steps on multiple coordinates, similar to the approach used in Tramèr & Boneh (2019), and is also explained in Appendix A.
²https://github.com/bethgelab/foolbox (Rauber et al., 2017)

5.1 EXPERIMENTAL SETUP

Architectures and hyperparameters  For MNIST, we use a four-layer convolutional network with two convolutional layers consisting of 32 and 64 5×5 filters and 2 units of padding, followed by a fully connected layer with 1024 hidden units, where both convolutional layers are followed by 2×2 max pooling layers and ReLU activations (this is the same architecture used by Madry et al. (2018)). This is in contrast to past work on MNIST, which relied on per-class variational autoencoders to achieve robustness against multiple threat models (Schott et al., 2019) and was not easily scalable to larger datasets. Since our methods have the same complexity as standard adversarial training, they also easily apply to standard CIFAR10 architectures, and in this paper we use the well-known pre-activation version of the ResNet18 architecture consisting of nine residual units with two convolutional layers each (He et al., 2016). A complete description of the hyperparameters used is in Appendix B, with hyperparameters for PGD adversaries in Appendix B.1 and hyperparameters for adversarial training in Appendix B.2. All reported ε are for images scaled to lie in the range [0, 1]. All experiments can be run on modern GPU hardware (e.g. a single 1080ti).

Attacks used for evaluation  To evaluate the models, we incorporate the attacks from Schott et al. (2019) as well as our PGD-based adversaries using projected steepest descent; we provide a short description here.
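The per-example worst case over attacks mentioned above can be computed as follows; this is a minimal sketch under the assumption that each entry of `attacks` is a callable returning adversarial inputs for a batch (a hypothetical interface, not the Foolbox API).

```python
import torch

def union_error_rate(model, attacks, x, y):
    # Per-example worst case over a suite of attacks: an example counts as
    # an error if any attack in the suite flips its prediction.
    fooled = torch.zeros(x.shape[0], dtype=torch.bool)
    for attack in attacks:  # each returns adversarial inputs for the batch
        x_adv = attack(model, x, y)
        with torch.no_grad():
            fooled |= model(x_adv).argmax(dim=1).ne(y)
    return fooled.float().mean().item()
```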
Note that we exclude attacks based on gradient estimation, since the gradients for the standard architectures used here are readily available. For ℓ∞ attacks, although we find the ℓ∞ PGD adversary to be quite effective, for completeness we additionally use the Foolbox implementations of the Fast Gradient Sign Method (Goodfellow et al., 2015), the PGD adversary (Madry et al., 2018), and the Momentum Iterative Method (Dong et al., 2018). For ℓ2 attacks, in addition to the ℓ2 PGD adversary, we use the Foolbox implementations of the same PGD adversary, the Gaussian noise attack (Rauber et al., 2017), the boundary attack (Brendel et al., 2017), DeepFool (Moosavi-Dezfooli et al., 2016), the pointwise attack (Schott et al., 2019), the DDN-based attack (Rony et al., 2018), and the C&W attack (Carlini & Wagner, 2017). For ℓ1 attacks, we use both the ℓ1 PGD adversary as well as additional Foolbox implementations of ℓ0 attacks at the same radius, namely the salt & pepper attack (Rauber et al., 2017) and the pointwise attack (Schott et al., 2019). Note that an ℓ1 adversary with radius ε is strictly stronger than an ℓ0 adversary with the same radius, and so we choose to explicitly defend against ℓ1 perturbations instead of the ℓ0 perturbations considered by Schott et al. (2019). We make 10 random restarts for each of the evaluation results reported hereon for both MNIST and CIFAR10.³ We encourage future work in this area to do the same, since the success of all attacks, especially decision-based or gradient-free ones, is observed to increase significantly over restarts.

³All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the Boundary Attack, which by default makes 25 trials per iteration, and the DDN-based attack, which does not benefit from restarts owing to a deterministic initialization of δ.
⁴Results are from Schott et al. (2019), which used an ℓ0 threat model of the same radius and evaluated against ℓ0 attacks, so the reported number here is an upper bound on the ℓ1 adversarial accuracy. Further, they evaluate their model without restarts, and the adversarial accuracy against all attacks is an upper bound based on the reported accuracies for the individual threat models. Finally, all ABS results were computed using numerical gradient estimation, since gradients are not readily available.

5.2 MNIST

We first present results on the MNIST dataset, which are summarized in Table 1 (a more detailed breakdown over each individual attack is in Appendix C.1). While considered an “easy” dataset, we note that the previous state-of-the-art result for multiple threat models on MNIST (and our primary comparison) is only able to defend against two out of three threat models at a time (Schott et al., 2019), using comparatively complex variational autoencoder architectures. The model trained with MSD achieves the best performance against all attacks, achieving an adversarial accuracy of 58.7% (individually 63.7%, 82.6%, and 62.3%) against the union of (ℓ∞, ℓ2, ℓ1) perturbations with radius ε = (0.3, 1.5, 12). Complete robustness curves over a range of ε values for each threat model can be found in Figure 2. A comparison of our results with concurrent work (Tramèr & Boneh, 2019) can be found in Appendix D.

5.3 CIFAR10

Next, we present results on the CIFAR10 dataset, which are summarized in Table 2 (a more detailed breakdown over each individual attack is in Appendix C.2). Our MSD approach reaches the best performance against the union of attacks, and achieves 46.1% (individually 47.6%, 64.3%, 53.4%) adversarial accuracy against the union of (ℓ∞, ℓ2, ℓ1) perturbations of size ε = (0.03, 0.5, 12).
Interestingly, note that the P1 model trained against an ℓ1 PGD adversary is not very robust when evaluated against other attacks, even though it can defend reasonably well against the ℓ1 PGD attack in isolation (Table 4 in Appendix C.2). Complete robustness curves over a range of ε values for each threat model can be found in Figure 3. A comparison of our results with concurrent work (Tramèr & Boneh, 2019) can be found in Appendix D. While adversarial defenses are generally not intended to defend against attacks outside of the threat model, we show some experiments exploring this aspect in Appendix E.

On tradeoffs and variability of the simpler defenses  One major drawback of the simpler methods for generalizing adversarial training to multiple threat models is their variability and unclear tradeoffs across different settings. For example, on MNIST we see that the data augmentation approach fails to reduce the robust optimization objective: the ℓ∞ threat model dominates the training process and we get a suboptimal tradeoff between threat models which is not robust to the union. Similarly, on CIFAR10 we see that the worst-case approach for adversarial training also converges to a model which has suboptimal robust performance against the union of threat models. This highlights the inconsistency of the simpler generalizations of adversarial training: depending on the dataset and the threat models, they may not ultimately minimize the robust optimization objective from Equation (9), and the tradeoffs may vary significantly with the problem setting. On the other hand, in both problem settings, we find that MSD consistently finds a better tradeoff which minimizes the worst-case loss over the union of the threat models. As a result, rather than using one of the simpler methods and converging to a potentially unclear tradeoff between threat models, we recommend using MSD, which directly minimizes the worst-case performance among the specified threat models.

6 CONCLUSION

In this paper, we showed that adversarial training can be quite effective when training against a union of multiple perturbation models. We compared two simple generalizations of adversarial training and an improved adversarial training procedure, multi steepest descent, which incorporates the different perturbation models directly into the direction of steepest descent. The MSD-based adversarial training procedure is able to outperform past approaches, demonstrating that adversarial training can in fact learn networks that are robust to multiple perturbation models simultaneously (as long as they are included in the threat model) while being scalable beyond MNIST and using standard architectures.

B EXPERIMENTAL DETAILS

B.1 HYPERPARAMETERS FOR PGD ADVERSARIES

In this section, we describe the parameters used for all PGD adversaries in this paper.

MNIST  The ℓ∞ adversary used a step size α = 0.01 within a radius of ε = 0.3 for 50 iterations. The ℓ2 adversary used a step size α = 0.1 within a radius of ε = 1.5 for 100 iterations. The ℓ1 adversary used a step size of α = 0.05 within a radius of ε = 12 for 50 iterations. By default the attack is run with two restarts, once starting with δ = 0 and once by randomly initializing δ in the allowable perturbation ball.
For the ℓ1 adversary, k1 = 5 and k2 = 20 as described in A.1. The MSD adversary used step sizes of α = (0.01, 0.2, 0.05) for the (ℓ∞, ℓ2, ℓ1) directions within a radius of ε = (0.3, 1.5, 12) for 100 iterations. At test time, we increase the number of iterations to (100, 200, 100) for (ℓ∞, ℓ2, ℓ1).

CIFAR10  The ℓ∞ adversary used a step size α = 0.003 within a radius of ε = 0.03 for 40 iterations. The ℓ2 adversary used a step size α = 0.05 within a radius of ε = 0.5 for 50 iterations. The ℓ1 adversary used a step size α = 0.1 within a radius of ε = 12 for 50 iterations, with k1 = 5 and k2 = 20 as described in A.1. The MSD adversary used step sizes of α = (0.003, 0.05, 0.05) for the (ℓ∞, ℓ2, ℓ1) directions within a radius of ε = (0.03, 0.3, 12) for 50 iterations. Note that the MSD model trained for an ℓ2 radius of 0.3 is in fact robust at the higher radius of 0.5.

B.2 TRAINING HYPERPARAMETERS

In this section, we describe the parameters used for adversarial training. For all models, we used the SGD optimizer with momentum 0.9 and weight decay 5 · 10−4.

MNIST  We train the models for a maximum of 20 epochs. We used a variation of the learning rate schedule from Smith (2018), which is piecewise linear from 0 to 0.1 over the first 7 epochs, down to 0.001 over the next 8 epochs, and finally down to 0.0001 in the last 5 epochs.

CIFAR10  We used a variation of the learning rate schedule from Smith (2018) to achieve superconvergence in 50 epochs, which is piecewise linear from 0 to 0.1 over the first 20 epochs, down to 0.005 over the next 20 epochs, and finally down to 0 in the last 10 epochs.

C EXTENDED RESULTS

Here, we show the full tables which break down the overall adversarial error rates over individual attacks for both MNIST and CIFAR10, along with robustness curves for all models in the paper.

C.1 MNIST RESULTS

Expanded table of results  Table 3 contains the full table of results for all attacks on all models on the MNIST dataset. All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the Boundary Attack, which by default makes 25 trials per iteration, and the DDN attack, which does not benefit from restarts owing to a deterministic starting point. Note that the results for the B-ABS and ABS models are from Schott et al. (2019), which uses gradient estimation techniques whenever a gradient is needed, and the robustness against all attacks for B-ABS and ABS is an upper bound based on the reported results. Further, these models are not evaluated with restarts, pushing the reported results even higher than actual.

C.2 CIFAR10 RESULTS

Expanded table of results  Table 4 contains the full table of results for all attacks on all models on the CIFAR10 dataset. All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the Boundary Attack, which by default makes 25 trials per iteration, and the DDN attack, which does not benefit from restarts owing to a deterministic starting point. Further note that the salt & pepper and pointwise attacks in the ℓ1 section are technically ℓ0 attacks, but produce perturbations in the ℓ1 ball. Finally, it is clear here that while training against an ℓ1 PGD adversary defends against that PGD adversary, the robustness does not seem to transfer to other attacks.
D COMPARISON WITH CONCURRENT WORK

In this section we compare the results of our trained MSD models with those of Tramèr & Boneh (2019), who study the theoretical and empirical trade-offs of adversarial robustness in various settings when defending against multiple adversaries. The training methods presented in their comparisons, namely Advavg and Advmax, closely resemble the naive approaches discussed in this paper: PGD-Aug and Worst-PGD respectively. We use the results as-is from their work, and additionally report the position of our MSD models at the revised thresholds used by Tramèr & Boneh (2019) without specially retraining them. The results in Tables 5 and 6 show that the relative advantage of MSD over the naive techniques does hold up. While we make this comparison to the most relevant concurrent work for completeness, the following differences can bias the reported robust accuracies for the MSD models lower than expected (and, correspondingly, the reported robust accuracies for the other models higher than expected):

1. Use of random restarts: We observe in our experiments that using up to 10 restarts for all our attacks leads to a decrease in model accuracy of 5 to 10% across all models. Tramèr & Boneh do not mention restarting their attacks for these models, and so the results for models apart from MSD in Tables 5 and 6 could potentially be lowered with random restarts.

2. Different training and testing thresholds: The MSD model for the MNIST dataset was trained at ε = (0.3, 1.5, 12) for the ℓ∞, ℓ2, ℓ1 perturbation balls respectively, while Tramèr & Boneh (2019) tested at ε = (0.3, 2.0, 10). This may lower the robust accuracy at these thresholds for the MSD model, since it was not trained for those particular thresholds. Likewise, the MSD model for CIFAR10 was trained at ε = (0.03, 0.3, 12) for the ℓ∞, ℓ2, ℓ1 perturbation balls respectively, while Tramèr & Boneh (2019) tested at ε = (4/255, 0, 2000/255).

3. Different perturbation models: For the CIFAR10 results in Table 6, the Advavg and Advmax models are trained and tested only for ℓ1 and ℓ∞ adversarial perturbations, whereas the MSD model is robust to the union of ℓ1, ℓ2, and ℓ∞, a much harder task.

4. Larger suite of attacks used: The attacks used by Tramèr & Boneh are PGD, EAD (Chen et al., 2017), and the Pointwise Attack (Schott et al., 2019) for ℓ1; PGD, C&W (Carlini & Wagner, 2017), and the Boundary Attack (Brendel et al., 2017) for ℓ2; and PGD for ℓ∞ adversaries. We use a more expansive suite of attacks, as shown in Appendix C. Some of the attacks, like DDN, which proved to be strong adversaries in most cases, were not considered in their evaluation.

E ATTACKS OUTSIDE THE THREAT MODEL

In this section, we present some additional experiments exploring the performance of our models on attacks which lie beyond the threat model. Note that there is no principled reason to expect robustness here (most adversarial defenses tend not to generalize beyond the threat model defended against), and these results are presented for exploratory reasons.

Common corruptions  We measure the performance of all the models on CIFAR-10-C, a CIFAR10 benchmark with common corruptions applied to it (e.g., noise, blur, and compression). We report the results in Table 7. We find that, apart from the P1 model, the rest achieve some improved robustness against these common corruptions over the standard CIFAR10 model.
Defending against ℓ1 and ℓ∞ and evaluating on ℓ2  We also briefly study what happens when one trains against the ℓ1 and ℓ∞ threat models while evaluating against the ℓ2 adversary. Specifically, we take the MSD approach on MNIST and simply remove the ℓ2 adversary from the threat model. This results in a model whose ℓ1 and ℓ∞ robust performance against a PGD adversary drops by 1%, and whose ℓ2 robust performance against a PGD adversary (which it was not trained for) drops by 2%, in comparison to the original MSD approach on all three threat models. As a result, we empirically observe that including the ℓ2 threat model in this setting actually improved overall robustness against all three threat models. Unsurprisingly, the ℓ2 performance drops to some degree, but the model does not lose all of its robustness.
1. What is the focus of the paper regarding adversarial training?
2. What are the strengths of the proposed algorithm, particularly its simplicity and ease of implementation?
3. What are the weaknesses of the paper, especially regarding the experimental comparisons and understanding of defense strategies?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review Summary of the paper: The paper describes adversarial training aiming to build models that are robust to multiple adversarial attacks - with L_1, L_2 and L_inf norms. The method is based on adversarial training against a union of adversaries. That union is handled by taking (projected) gradient steps like PGD (Kurakin 2017), but choosing the maximal loss over the GD steps for L1, L2, and L_inf at each step.

Strengths: The topic is trendy and interesting. The proposed algorithm is simple and easy to implement. The experimental results demonstrate improvement over several baselines.

Weaknesses:
-- I am missing more systematic comparisons to baseline defenses in the experiments. Figures 2 and 3 should have shown the accuracy as a function of radius also for PGD-aug, PGD-worst, and Schott et al. Also, what about comparisons to the latest SoTA defenses, e.g. recent baselines from www.robust-ml.org/defenses/?
-- An implicit expectation from this paper is that it addresses the key issue of "defend against one attack but face a different attack". The paper could have done more to advance our understanding of this issue. Specifically: the approach improves over baselines in the "all attacks" mode, but under-performs compared with PGD-aug and PGD-worst when attacked with a single norm (Tab 1). While this is expected and probably cannot be avoided, it leaves the reader with an unclear conclusion about risk tradeoffs. It would have been useful to clarify the regimes of attack mixtures where the various approaches are best. For instance, if one uses a mix of attack samples from the three norms, for what mixtures would it be best to defend using MSD, and for what mixtures would it be best to use PGD-aug or ABS?
ICLR
Title
Adversarial Robustness Against the Union of Multiple Perturbation Models

Abstract
Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers, but the vast majority has defended against single types of attacks. Recent work has looked at defending against multiple attacks, specifically on the MNIST dataset, yet this approach used a relatively complex architecture, claiming that standard adversarial training cannot apply because it “overfits” to a particular norm. In this work, we show that it is indeed possible to adversarially train a robust model against a union of norm-bounded attacks, by using a natural generalization of the standard PGD-based procedure for adversarial training to multiple threat models. With this approach, we are able to train standard architectures which are robust against ℓ∞, ℓ2, and ℓ1 attacks, outperforming past approaches on the MNIST dataset and providing the first CIFAR10 network trained to be simultaneously robust against the (ℓ∞, ℓ2, ℓ1) threat models, which achieves adversarial accuracy of 46.1% against the union of (ℓ∞, ℓ2, ℓ1) perturbations with radius ε = (0.03, 0.5, 12).

1 INTRODUCTION

Machine learning algorithms have been shown to be susceptible to adversarial examples (Szegedy et al., 2014) through the existence of data points which can be adversarially perturbed to be misclassified, but are “close enough” to the original example to be imperceptible to the human eye. Methods to generate adversarial examples, or “attacks”, typically rely on gradient information, and most commonly use variations of projected gradient descent (PGD) to maximize the loss within a small perturbation region, usually referred to as the adversary’s threat model. Since then, a number of heuristic defenses have been proposed to defend against this phenomenon, e.g. distillation (Papernot et al., 2016) or, more recently, logit pairing (Kannan et al., 2018). However, as time goes by, the original robustness claims of these defenses typically do not hold up to more advanced adversaries or more thorough attacks (Carlini & Wagner, 2017; Engstrom et al., 2018; Mosbach et al., 2018). One heuristic defense that seems to have survived (to this day) is adversarial training against a PGD adversary (Madry et al., 2018), which remains quite popular due to its simplicity and apparent empirical robustness. The method continues to perform well in empirical benchmarks even when compared to recent work in provable defenses, although it comes with no formal guarantees. Some recent work, however, pointed out that adversarial training against ℓ∞ perturbations “overfits” to the ℓ∞ threat model, and used this as motivation to propose a more complicated architecture in order to achieve robustness to multiple perturbation types on the MNIST dataset (Schott et al., 2019). In this work, we offer an alternative viewpoint: while adversarial training can overfit to individual threat models, we show that it is indeed possible to use adversarial training to learn a model which is simultaneously robust against multiple types of ℓp norm-bounded attacks (we consider ℓ∞, ℓ2, and ℓ1 attacks, but the approach can apply to more general attacks).
First, we show that while simple generalizations of adversarial training to multiple threat models can achieve some degree of robustness against the union of these threat models, the performance is inconsistent and converges to suboptimal tradeoffs which may not actually minimize the robust objective. Second, we propose a slightly modified PGD-based algorithm called multi steepest descent (MSD) for adversarial training which more naturally incorporates the different perturbations within the PGD iterates, further improving the adversarial training approach by directly minimizing the robust optimization objective. Third, we show empirically that our approach improves upon past work by being applicable to standard network architectures, easily scaling beyond the MNIST dataset, and outperforming past results on robustness against multiple perturbation types.

2 RELATED WORK

After their original introduction, one of the first widely considered attacks against deep networks was the Fast Gradient Sign Method (Goodfellow et al., 2015), which showed that a single, small step in the direction of the sign of the gradient could sometimes fool machine learning classifiers. While this worked to some degree, the Basic Iterative Method (Kurakin et al., 2017) (now typically referred to as the PGD attack) was significantly more successful at creating adversarial examples, and now lies at the core of many papers. Since then, a number of improvements and adaptations have been made to the base PGD algorithm to overcome heuristic defenses and create stronger adversaries. Adversarial attacks were thought to be rendered harmless by realistic transformations (Lu et al., 2017) until the attack was augmented to be robust to them (Athalye et al., 2018b). Adversarial examples generated using PGD on surrogate models can transfer to black-box models (Papernot et al., 2017). Utilizing core optimization techniques such as momentum can greatly improve the attack success rate and transferability, and was the winner of the NIPS 2017 competition on adversarial examples (Dong et al., 2018). Uesato et al. (2018) showed that a number of ImageNet defenses were not as robust as originally thought, and Athalye et al. (2018a) defeated many of the heuristic defenses submitted to ICLR 2018 shortly after the reviewing cycle ended, all with stronger PGD variations. Throughout this cycle of attack and defense, some defenses were uncovered that remain robust to this day. The aforementioned PGD attack, and the related defense known as adversarial training with a PGD adversary (which incorporates PGD-attacked examples into the training process), has so far remained empirically robust (Madry et al., 2018). Verification methods to certify robustness properties of networks were developed, utilizing techniques such as SMT solvers (Katz et al., 2017), SDP relaxations (Raghunathan et al., 2018b), and mixed-integer linear programming (Tjeng et al., 2019), the last of which has recently been successfully scaled to reasonably sized networks. Other work has folded verification into the training process to create provably robust networks (Wong & Kolter, 2018; Raghunathan et al., 2018a), some of which have also been scaled to even larger networks (Wong et al., 2018; Mirman et al., 2018; Gowal et al., 2018). Although some of these could potentially be extended to apply to multiple perturbations simultaneously, most of these works have focused primarily on defending against and verifying only a single type of adversarial perturbation at a time.
Last but most relevant to this work are adversarial defenses that attempt to be robust against multiple types of attacks simultaneously. Schott et al. (2019) used multiple variational autoencoders to construct a complex architecture for the MNIST dataset that is not as easily attacked by ℓ∞, ℓ2, and ℓ0 adversaries. Importantly, Schott et al. (2019) compare to adversarial training with an ℓ∞-bounded PGD adversary as described by Madry et al. (2018), claiming that the adversarial training defense overfits to the ℓ∞ metric, and they do not consider other forms of adversarial training. Following this, a number of concurrent papers have since been released. While not studied as a defense, Kang et al. (2019) study the transferability of adversarial robustness between models trained against different threat models. Croce & Hein (2019) propose a provable adversarial defense against all ℓp norms for p ≥ 1 using a regularization term. Finally, Tramèr & Boneh (2019) study the theoretical and empirical trade-offs of adversarial robustness in various settings when defending against multiple adversaries; however, they use a rotation and translation adversary instead of an ℓ2 adversary for CIFAR10.

Contributions  In this work we demonstrate the effectiveness of adversarial training for learning models that are robust against a union of multiple perturbation models. First, we show that while simple aggregations of different adversarial attacks can achieve robustness against multiple perturbation models without resorting to complex architectures, the results are inconsistent across datasets and make suboptimal tradeoffs between the threat models. Second, we propose a modified PGD iteration that more naturally considers multiple perturbation models within the inner optimization loop of adversarial training. Third, we evaluate all approaches on the MNIST and CIFAR10 datasets, showing that our proposed generalizations of adversarial training can significantly outperform past approaches for the union of ℓ∞, ℓ2, and ℓ1 attacks. Specifically, on MNIST, our model achieves 58.7% (individually 63.7%, 82.6%, 62.3%) adversarial accuracy against the union of all three attacks (ℓ∞, ℓ2, ℓ1) for ε = (0.3, 1.5, 12) respectively, substantially improving upon the multiple-perturbation-model robustness described in Schott et al. (2019) and also improving upon the simpler aggregations of multiple adversarial attacks. Unlike past work, we also train a CIFAR10 model, which achieves 46.1% (individually 47.6%, 64.3%, 53.4%) adversarial accuracy against the union of all three attacks (ℓ∞, ℓ2, ℓ1) for ε = (0.03, 0.5, 12). Finally, for completeness, we also draw relevant comparisons to concurrent work, and show that the relative advantage of our approach still holds.

3 OVERVIEW OF ADVERSARIAL TRAINING

Adversarial training is an approach to learn a classifier which minimizes the worst-case loss within some perturbation region (the threat model). Specifically, for some network $f_\theta$ parameterized by θ, loss function ℓ, and training data $\{x_i, y_i\}_{i=1\ldots n}$, the robust optimization problem of minimizing the worst-case loss within ℓp norm-bounded perturbations with radius ε is

$$\min_\theta \sum_i \max_{\delta \in \Delta_{p,\epsilon}} \ell(f_\theta(x_i + \delta), y_i), \tag{1}$$

where $\Delta_{p,\epsilon} = \{\delta : \|\delta\|_p \le \epsilon\}$ is the ℓp ball with radius ε centered at the origin. To simplify the notation, we abbreviate $\ell(f_\theta(x+\delta), y) = \ell(x+\delta;\theta)$.

3.1 SOLVING THE INNER OPTIMIZATION PROBLEM

We first look at solving the inner maximization problem, namely

$$\max_{\delta \in \Delta_{p,\epsilon}} \ell(x+\delta;\theta). \tag{2}$$
This is the problem addressed by the "attackers" in the space of adversarial examples, hoping that the classifier can be tricked by the optimal perturbed image, x + δ⋆. Typical solutions solve this problem by running a form of projected gradient descent, which iteratively takes steps in the gradient direction to increase the loss, followed by a projection step back onto the feasible region, the ℓp ball. Since the gradients at the example points themselves (i.e., δ = 0) are typically too small to make efficient progress, a variation called projected steepest descent is more commonly used.

Steepest descent For some norm ‖·‖_p and step size α, the direction of steepest descent on the loss function ℓ for a perturbation δ is

v_p(δ) = argmax_{‖v‖_p ≤ α} vᵀ∇ℓ(x + δ; θ).   (3)

Then, instead of taking gradient steps, steepest descent uses the following iteration:

δ^(t+1) = δ^(t) + v_p(δ^(t)).   (4)

In practice, the norm used in steepest descent is typically taken to be the same ℓp norm used to define the perturbation region Δ_{p,ε}. However, depending on the norm used, the direction of steepest descent can be quite different from the actual gradient (Figure 1). Note that a single steepest descent step with respect to the ℓ∞ norm reduces to v_∞(δ) = α · sign(∇ℓ(x + δ; θ)), better known in the adversarial examples literature as the Fast Gradient Sign Method (Goodfellow et al., 2015).

Projections The second component of projected steepest descent for adversarial examples is to project iterates back onto the ℓp ball around x. Specifically, projected steepest descent performs the following iteration:

δ^(t+1) = P_{Δ_{p,ε}}(δ^(t) + v_p(δ^(t)))   (5)

where P_{Δ_{p,ε}}(δ) is the standard projection operator that finds the perturbation δ′ ∈ Δ_{p,ε} that is "closest" in Euclidean space to the input δ, defined as

P_{Δ_{p,ε}}(δ) = argmin_{δ′ ∈ Δ_{p,ε}} ‖δ − δ′‖₂².   (6)

Visually, a depiction of this procedure (steepest descent followed by a projection onto the perturbation region) for an ℓ2 adversary can be found in Figure 1. If we instead project the steepest descent directions with respect to the ℓ∞ norm onto the ℓ∞ ball of allowable perturbations, the projected steepest descent iteration reduces to

δ^(t+1) = P_{Δ_{∞,ε}}(δ^(t) + v_∞(δ^(t))) = clip_{[−ε,ε]}(δ^(t) + α · sign(∇ℓ(x + δ^(t); θ)))   (7)

where clip_{[−ε,ε]} "clips" the input to lie within the range [−ε, ε]. This is exactly the Basic Iterative Method used in Kurakin et al. (2017), typically referred to in the literature as an ℓ∞ PGD adversary.
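For concreteness, below is a minimal PyTorch sketch of the ℓ∞ PGD adversary in Equation (7). It is an illustrative implementation under stated assumptions (a differentiable classifier `model` and cross-entropy loss), not the authors' exact code; clipping x + δ back to the valid image range is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, alpha=0.01, n_iter=50):
    """Projected steepest descent w.r.t. the l-inf norm (Equation 7)."""
    delta = torch.zeros_like(x)
    for _ in range(n_iter):
        d = delta.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x + d), y)
        grad, = torch.autograd.grad(loss, d)
        # Steepest ascent direction under the l-inf norm: alpha * sign(grad);
        # projection onto the l-inf ball of radius eps is elementwise clipping.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
    return delta
```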
3.2 SOLVING THE OUTER OPTIMIZATION PROBLEM

We next look at how to solve the outer optimization problem, or the problem of learning the weights θ that minimize the loss of our classifier. While many approaches have been proposed in the literature, we will focus on a heuristic called adversarial training, which has generally worked well in practice.

Adversarial training Although solving the min-max optimization problem may seem daunting, a classical result known as Danskin's theorem (Danskin, 1967) says that the gradient of a maximization problem is equal to the gradient of the objective evaluated at the optimum. For learning models that minimize the robust optimization problem from Equation (1), this means that

∇_θ (Σ_i max_{δ ∈ Δ_{p,ε}} ℓ(x_i + δ; θ)) = Σ_i ∇_θ ℓ(x_i + δ*(x_i); θ)   (8)

where δ*(x_i) = argmax_{δ ∈ Δ_{p,ε}} ℓ(x_i + δ; θ). In other words, in order to backpropagate through the robust optimization problem, we can solve the inner maximization and backpropagate through the solution. Adversarial training does this by empirically maximizing the inner problem with a PGD adversary. Note that since the inner problem is not solved exactly, Danskin's theorem does not strictly apply. However, in practice, adversarial training does seem to provide good empirical robustness, at least when evaluated against the ℓp threat model it was trained against.

4 ADVERSARIAL TRAINING FOR MULTIPLE PERTURBATION MODELS

We can now consider the core of this work, adversarial training procedures against multiple threat models. More formally, let S represent a set of threat models, such that p ∈ S corresponds to the ℓp perturbation model Δ_{p,ε}, and let Δ_S = ∪_{p∈S} Δ_{p,ε} be the union of all perturbation models in S. Note that the ε chosen for each ball is not typically the same, but we still use the same notation for simplicity, since the context will always make clear which ℓp ball we are talking about. Then, the generalization of the robust optimization problem in Equation (1) to multiple perturbation models is

min_θ Σ_i max_{δ ∈ Δ_S} ℓ(x_i + δ; θ).   (9)

The key difference is in the inner maximization, where the worst-case adversarial loss is now taken over multiple ℓp perturbation models. In order to perform adversarial training, using the same motivational idea from Danskin's theorem, we can backpropagate through the inner maximization by first finding (empirically) the optimal perturbation,

δ* = argmax_{δ ∈ Δ_S} ℓ(x + δ; θ).   (10)

To find the optimal perturbation over the union of threat models, we begin by considering straightforward generalizations of standard adversarial training, which will use PGD to approximately solve the inner maximization over multiple adversaries.

4.1 SIMPLE COMBINATIONS OF MULTIPLE PERTURBATIONS

First, we study two simple approaches to generalizing adversarial training to multiple threat models. These methods can perform reasonably well in practice and are competitive with existing approaches without relying on complicated architectures. While these methods work to some degree, we later find empirically that they do not necessarily minimize the worst-case performance, and can converge to unexpected tradeoffs between multiple threat models.

Worst-case perturbation One way to generalize adversarial training to multiple threat models is to use each threat model independently, and train on the adversarial perturbation that achieved the maximum loss. Specifically, for each adversary p ∈ S, we solve the innermost maximization with an ℓp PGD adversary to get an approximate worst-case perturbation δ_p,

δ_p = argmax_{δ ∈ Δ_{p,ε}} ℓ(x + δ; θ),   (11)

and then approximate the maximum over all adversaries as

δ* ≈ argmax_{δ_p} ℓ(x + δ_p; θ).   (12)

When |S| = 1, this reduces to standard adversarial training. Note that if each PGD adversary solved its subproblem from Equation (11) exactly, then this would be exactly the optimal perturbation δ⋆. A minimal code sketch of this aggregation follows below.

PGD augmentation with all perturbations Another way to generalize adversarial training is to train on the adversarial perturbations for all p ∈ S to form a larger adversarial dataset. Specifically, instead of solving the robust problem for multiple adversaries in Equation (9), we instead solve

min_θ Σ_i Σ_{p∈S} max_{δ ∈ Δ_{p,ε}} ℓ(x_i + δ; θ)   (13)

by using individual ℓp PGD adversaries to approximate the inner maximization for each threat model. Again, this reduces to standard adversarial training when |S| = 1.
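As referenced above, here is a minimal sketch of the worst-case aggregation (Equations 11 and 12), assuming a collection of per-norm PGD attacks with the signature of `pgd_linf` above. For simplicity the argmax is taken at the batch level; a per-example argmax is the more faithful variant.

```python
import torch
import torch.nn.functional as F

def worst_case_delta(model, x, y, attacks):
    """Pick the perturbation with maximum loss among independent PGD adversaries."""
    best_delta, best_loss = None, float("-inf")
    for attack in attacks:                 # one l_p PGD adversary per threat model
        delta = attack(model, x, y)        # approximately solves Equation (11)
        with torch.no_grad():
            loss = F.cross_entropy(model(x + delta), y).item()
        if loss > best_loss:               # Equation (12): keep the strongest one
            best_loss, best_delta = loss, delta
    return best_delta
```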
While these methods work reasonably well in practice (as shown later in Section 5), both approaches solve the inner maximization problem independently for each adversary, so each individual PGD adversary does not take advantage of the fact that the perturbation region is enlarged by the other threat models. To take advantage of the full perturbation region, we propose a modification to standard adversarial training which combines information from all considered threat models into a single PGD adversary that is potentially stronger than the combination of independent adversaries.

4.2 MULTI STEEPEST DESCENT

To create a PGD adversary with full knowledge of the perturbation region, we propose an algorithm that incorporates the different threat models within each step of projected steepest descent. Rather than generating adversarial examples for each threat model with separate PGD adversaries, the core idea is to create a single adversarial perturbation by simultaneously maximizing the worst-case loss over all threat models at each projected steepest descent step. We call our method multi steepest descent (MSD), which can be summarized as the following iteration:

δ_p^(t+1) = P_{Δ_{p,ε}}(δ^(t) + v_p(δ^(t))) for p ∈ S,
δ^(t+1) = argmax_{δ_p^(t+1)} ℓ(x + δ_p^(t+1)).   (14)

Algorithm 1 Multi steepest descent for learning classifiers that are simultaneously robust to ℓp attacks for p ∈ S
Input: classifier f_θ, data x, labels y
Parameters: ε_p, α_p for p ∈ S; maximum iterations T; loss function ℓ
  δ^(0) = 0
  for t = 0 ... T − 1 do
    for p ∈ S do
      δ_p^(t+1) = P_{Δ_{p,ε}}(δ^(t) + v_p(δ^(t)))
    end for
    δ^(t+1) = argmax_{δ_p^(t+1)} ℓ(f_θ(x + δ_p^(t+1)), y)
  end for
  return δ^(T)

The key difference here is that at each iteration of MSD, we choose a projected steepest descent direction that maximizes the loss over all attack models p ∈ S, whereas standard adversarial training and the simpler approaches use comparatively myopic PGD subroutines that only use one threat model at a time. The full algorithm is given in Algorithm 1, and can be used as a drop-in replacement for standard PGD adversaries to learn robust classifiers with adversarial training. We direct the reader to Appendix A for a complete description of steepest descent directions and projection operators for the ℓ∞, ℓ2, and ℓ1 norms.¹

¹ The pure ℓ1 steepest descent step is inefficient since it only updates one coordinate at a time. It can be improved by taking steps on multiple coordinates, similar to that used in Tramèr & Boneh (2019), as explained in Appendix A.
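A minimal sketch of the MSD iteration (Equation 14 / Algorithm 1) follows. The dictionaries `v` and `proj` stand in for the steepest descent directions and projection operators of Appendix A, and the argmax is again taken at the batch level for brevity.

```python
import torch
import torch.nn.functional as F

def msd_delta(model, x, y, v, proj, norms=("linf", "l2", "l1"), n_iter=100):
    """One MSD adversary: each iterate takes the best projected step over all norms."""
    delta = torch.zeros_like(x)
    for _ in range(n_iter):
        candidates = []
        for p in norms:
            d = delta.clone().requires_grad_(True)
            loss = F.cross_entropy(model(x + d), y)
            grad, = torch.autograd.grad(loss, d)
            # One projected steepest descent step for threat model p.
            d_next = proj[p](d.detach() + v[p](grad))
            with torch.no_grad():
                cand_loss = F.cross_entropy(model(x + d_next), y).item()
            candidates.append((cand_loss, d_next))
        # Equation (14): keep the candidate maximizing the loss across threat models.
        delta = max(candidates, key=lambda c: c[0])[1]
    return delta
```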
5 RESULTS

In this section, we present experimental results on using generalizations of adversarial training to achieve simultaneous robustness to ℓ∞, ℓ2, and ℓ1 perturbations on the MNIST and CIFAR10 datasets. Our primary goal is to show that adversarial training can in fact be adapted to a union of perturbation models using standard architectures to achieve competitive results, without the pitfalls described by Schott et al. (2019). Our results improve upon the state of the art in three key ways. First, we can use simpler, standard architectures for image classifiers, without relying on complex architectures or input binarization. Second, our method is able to learn a single MNIST model which is simultaneously robust to all three threat models, whereas previous work was only robust against two at a time. Finally, our method is easily scalable to datasets beyond MNIST, providing the first CIFAR10 model trained to be simultaneously robust against ℓ∞, ℓ2, and ℓ1 adversaries. We trained models using both the simple generalizations of adversarial training to multiple adversaries and also using MSD. Since the analysis by synthesis model is not scalable to CIFAR10, we additionally trained CIFAR10 models against individual PGD adversaries to measure the changes and tradeoffs in universal robustness.

We evaluated these models with a broad suite of both gradient and non-gradient based attacks using Foolbox² (the same attacks used by Schott et al. (2019)), and also incorporated all the PGD-based adversaries discussed in this paper. All aggregate statistics that combine multiple attacks compute the worst-case error rate over all attacks for each example. Summaries of these results at specific thresholds can be found in Tables 1 and 2, where B-ABS and ABS refer to binarized and non-binarized versions of the analysis by synthesis models from Schott et al. (2019), P_p refers to a model trained against a PGD adversary with respect to the ℓp norm, Worst-PGD and PGD-Aug refer to models trained using the worst-case and data-augmentation generalizations of adversarial training, and MSD refers to models trained using multi steepest descent. Full tables containing the complete breakdown of these numbers over all individual attacks used in the evaluation are in Appendix C. We report the results against individual attacks and threat models for completeness; however, note that the goal of all these algorithms is to minimize the robust optimization objective from Equation (9). While there may be different implicit tradeoffs between individual threat models, in the end, the most meaningful metric for measuring the effective performance is the robust optimization objective, or the performance against the union of all attacks.

² https://github.com/bethgelab/foolbox (Rauber et al., 2017)

5.1 EXPERIMENTAL SETUP

Architectures and hyperparameters For MNIST, we use a four-layer convolutional network with two convolutional layers consisting of 32 and 64 5×5 filters and 2 units of padding, followed by a fully connected layer with 1024 hidden units, where both convolutional layers are followed by 2×2 max pooling layers and ReLU activations (this is the same architecture used by Madry et al. (2018); a code sketch follows below). This is in contrast to past work on MNIST, which relied on per-class variational autoencoders to achieve robustness against multiple threat models (Schott et al., 2019) and was not easily scalable to larger datasets. Since our methods have the same complexity as standard adversarial training, they also easily apply to standard CIFAR10 architectures, and in this paper we use the well-known pre-activation version of the ResNet18 architecture consisting of nine residual units with two convolutional layers each (He et al., 2016). A complete description of the hyperparameters used is in Appendix B, with hyperparameters for PGD adversaries in Appendix B.1 and hyperparameters for adversarial training in Appendix B.2. All reported ε are for images scaled to the range [0, 1]. All experiments can be run on modern GPU hardware (e.g. a single 1080ti).
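As referenced above, a minimal PyTorch sketch of the MNIST architecture (filter counts, padding and pooling follow the text; the final 10-way linear layer and other defaults are standard assumptions):

```python
import torch.nn as nn

mnist_net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),                       # 64 x 7 x 7 features for 28 x 28 inputs
    nn.Linear(64 * 7 * 7, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)
```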
Attacks used for evaluation To evaluate the model, we incorporate the attacks from Schott et al. (2019) as well as our PGD-based adversaries using projected steepest descent; we provide a short description here. Note that we exclude attacks based on gradient estimation, since the gradients for the standard architectures used here are readily available. For ℓ∞ attacks, although we find the ℓ∞ PGD adversary to be quite effective, for completeness we additionally use the Foolbox implementations of the Fast Gradient Sign Method (Goodfellow et al., 2015), the PGD adversary (Madry et al., 2018), and the Momentum Iterative Method (Dong et al., 2018). For ℓ2 attacks, in addition to the ℓ2 PGD adversary, we use the Foolbox implementations of the same PGD adversary, the Gaussian noise attack (Rauber et al., 2017), the boundary attack (Brendel et al., 2017), DeepFool (Moosavi-Dezfooli et al., 2016), the pointwise attack (Schott et al., 2019), the DDN-based attack (Rony et al., 2018), and the C&W attack (Carlini & Wagner, 2017). For ℓ1 attacks, we use both the ℓ1 PGD adversary as well as additional Foolbox implementations of ℓ0 attacks at the same radius, namely the salt & pepper attack (Rauber et al., 2017) and the pointwise attack (Schott et al., 2019). Note that an ℓ1 adversary with radius ε is strictly stronger than an ℓ0 adversary with the same radius, and so we choose to explicitly defend against ℓ1 perturbations instead of the ℓ0 perturbations considered by Schott et al. (2019). We make 10 random restarts for each of the evaluation results reported hereon, for both MNIST and CIFAR10.³ We encourage future work in this area to do the same, since the success of all attacks, especially decision-based or gradient-free ones, is observed to increase significantly over restarts.

³ All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the Boundary Attack, which by default makes 25 trials per iteration, and the DDN-based attack, which does not benefit from restarts owing to a deterministic initialization of δ.
⁴ Results are from Schott et al. (2019), which used an ℓ0 threat model of the same radius and evaluated against ℓ0 attacks, so the reported number here is an upper bound on the ℓ1 adversarial accuracy. Further, they evaluate their model without restarts, and the adversarial accuracy against all attacks is an upper bound based on the reported accuracies for the individual threat models. Finally, all ABS results were computed using numerical gradient estimation, since gradients are not readily available.

5.2 MNIST

We first present results on the MNIST dataset, which are summarized in Table 1 (a more detailed breakdown over each individual attack is in Appendix C.1). While considered an "easy" dataset, we note that the previous state-of-the-art result for multiple threat models on MNIST (and our primary comparison) is only able to defend against two out of three threat models at a time (Schott et al., 2019), using comparatively complex variational autoencoder architectures. The model trained with MSD achieves the best performance against all attacks, achieving an adversarial accuracy of 58.7% (individually 63.7%, 82.6%, and 62.3%) against the union of (ℓ∞, ℓ2, ℓ1) perturbations with radius ε = (0.3, 1.5, 12). Complete robustness curves over a range of epsilons for each threat model can be found in Figure 2. A comparison of our results with concurrent work (Tramèr & Boneh, 2019) can be found in Appendix D.

5.3 CIFAR10

Next, we present results on the CIFAR10 dataset, which are summarized in Table 2 (a more detailed breakdown over each individual attack is in Appendix C.2). Our MSD approach reaches the best performance against the union of attacks, and achieves 46.1% (individually 47.6%, 64.3%, 53.4%) adversarial accuracy against the union of (ℓ∞, ℓ2, ℓ1) perturbations of size ε = (0.03, 0.5, 12).
Interestingly, note that the P1 model trained against an ℓ1 PGD adversary is not very robust when evaluated against other attacks, even though it can defend reasonably well against the ℓ1 PGD attack in isolation (Table 4 in Appendix C.2). Complete robustness curves over a range of epsilons for each threat model can be found in Figure 3. A comparison of our results with concurrent work (Tramèr & Boneh, 2019) can be found in Appendix D. While adversarial defenses are generally not intended to defend against attacks outside of the threat model, we show some experiments exploring this aspect in Appendix E.

On tradeoffs and variability of the simpler defenses One major drawback of the simpler methods for generalizing adversarial training to multiple threat models is their variability and unclear tradeoffs over different settings. For example, on MNIST we see that the data augmentation approach fails to reduce the robust optimization objective: the ℓ∞ threat model dominates the training process and we get a suboptimal tradeoff between threat models which is not robust to the union. Similarly, on CIFAR10 we see that the worst-case approach for adversarial training also converges to a model which has suboptimal robust performance against the union of threat models. This highlights the inconsistency of the simpler generalizations of adversarial training: depending on the dataset and the threat models, they may not ultimately minimize the robust optimization objective from Equation (9), and the tradeoffs may vary significantly with the problem setting. On the other hand, in both problem settings, we find that MSD consistently finds a better tradeoff which minimizes the worst-case loss over the union of the threat models. As a result, rather than using one of the simpler methods and converging to a potentially unclear tradeoff between threat models, we recommend using MSD, which directly minimizes the worst-case performance among the specified threat models.

6 CONCLUSION

In this paper, we showed that adversarial training can be quite effective when training against a union of multiple perturbation models. We compare two simple generalizations of adversarial training with an improved adversarial training procedure, multi steepest descent, which incorporates the different perturbation models directly into the direction of steepest descent. The MSD-based adversarial training procedure is able to outperform past approaches, demonstrating that adversarial training can in fact learn networks that are robust to multiple perturbation models simultaneously (as long as they are included in the threat model), while being scalable beyond MNIST and using standard architectures.

B EXPERIMENTAL DETAILS

B.1 HYPERPARAMETERS FOR PGD ADVERSARIES

In this section, we describe the parameters used for all PGD adversaries in this paper.

MNIST The ℓ∞ adversary used a step size α = 0.01 within a radius of ε = 0.3 for 50 iterations. The ℓ2 adversary used a step size α = 0.1 within a radius of ε = 1.5 for 100 iterations. The ℓ1 adversary used a step size of α = 0.05 within a radius of ε = 12 for 50 iterations. By default each attack is run with two restarts, once starting with δ = 0 and once by randomly initializing δ in the allowable perturbation ball.
The ℓ1 adversary used k1 = 5, k2 = 20 as described in Appendix A.1. The MSD adversary used step sizes of α = (0.01, 0.2, 0.05) for the (ℓ∞, ℓ2, ℓ1) directions within a radius of ε = (0.3, 1.5, 12) for 100 iterations. At test time, we increase the number of iterations to (100, 200, 100) for (ℓ∞, ℓ2, ℓ1).

CIFAR10 The ℓ∞ adversary used a step size α = 0.003 within a radius of ε = 0.03 for 40 iterations. The ℓ2 adversary used a step size α = 0.05 within a radius of ε = 0.5 for 50 iterations. The ℓ1 adversary used a step size α = 0.1 within a radius of ε = 12 for 50 iterations, with k1 = 5, k2 = 20 as described in Appendix A.1. The MSD adversary used step sizes of α = (0.003, 0.05, 0.05) for the (ℓ∞, ℓ2, ℓ1) directions within a radius of ε = (0.03, 0.3, 12) for 50 iterations. Note that the MSD model trained for an ℓ2 radius of 0.3 is in fact robust to a higher radius of 0.5.

B.2 TRAINING HYPERPARAMETERS

In this section, we describe the parameters used for adversarial training. For all the models, we used the SGD optimizer with momentum 0.9 and weight decay 5·10⁻⁴.

MNIST We train the models for a maximum of 20 epochs. We used a variation of the learning rate schedule from Smith (2018), which is piecewise linear from 0 to 0.1 over the first 7 epochs, down to 0.001 over the next 8 epochs, and finally back down to 0.0001 in the last 5 epochs.

CIFAR10 We used a variation of the learning rate schedule from Smith (2018) to achieve superconvergence in 50 epochs, which is piecewise linear from 0 to 0.1 over the first 20 epochs, down to 0.005 over the next 20 epochs, and finally back down to 0 in the last 10 epochs.

C EXTENDED RESULTS

Here, we show the full tables which break down the overall adversarial error rates over individual attacks for both MNIST and CIFAR10, along with robustness curves for all models in the paper.

C.1 MNIST RESULTS

Expanded table of results Table 3 contains the full table of results for all attacks on all models on the MNIST dataset. All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the Boundary Attack, which by default makes 25 trials per iteration, and the DDN attack, which does not benefit from restarts owing to a deterministic starting point. Note that the results for the B-ABS and ABS models are from Schott et al. (2019), which uses gradient estimation techniques whenever a gradient is needed, and the robustness against all attacks for B-ABS and ABS is an upper bound based on the reported results. Further, these models are not evaluated with restarts, pushing the reported results even higher than the actual values.

C.2 CIFAR10 RESULTS

Expanded table of results Table 4 contains the full table of results for all attacks on all models on the CIFAR10 dataset. All attacks were run on a subset of the first 1000 test examples with 10 random restarts, with the exception of the Boundary Attack, which by default makes 25 trials per iteration, and the DDN attack, which does not benefit from restarts owing to a deterministic starting point. Further note that the salt & pepper and pointwise attacks in the ℓ1 section are technically ℓ0 attacks, but produce perturbations in the ℓ1 ball. Finally, it is clear here that while training against an ℓ1 PGD adversary defends against that PGD adversary, the robustness does not seem to transfer to other attacks.
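For reference, the PGD attack hyperparameters of Appendix B.1 collected into a plain Python structure (a convenience summary, not the authors' configuration format):

```python
pgd_params = {
    "MNIST": {
        "linf": {"alpha": 0.01, "eps": 0.3, "iters": 50},
        "l2":   {"alpha": 0.1,  "eps": 1.5, "iters": 100},
        "l1":   {"alpha": 0.05, "eps": 12,  "iters": 50, "k1": 5, "k2": 20},
    },
    "CIFAR10": {
        "linf": {"alpha": 0.003, "eps": 0.03, "iters": 40},
        "l2":   {"alpha": 0.05,  "eps": 0.5,  "iters": 50},
        "l1":   {"alpha": 0.1,   "eps": 12,   "iters": 50, "k1": 5, "k2": 20},
    },
}
```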
D COMPARISON WITH CONCURRENT WORK

In this section we compare the results of our trained MSD model with those of Tramèr & Boneh (2019), who study the theoretical and empirical trade-offs of adversarial robustness in various settings when defending against multiple adversaries. The training methods presented in their comparisons, namely Adv_avg and Adv_max, closely resemble the simple approaches discussed in this paper: PGD-Aug and Worst-PGD, respectively. We use the results as-is from their work, and additionally compare the position of our MSD models at the revised thresholds used by Tramèr & Boneh (2019) without specially retraining them. The results in Tables 5 and 6 show that the relative advantage of MSD over the simpler techniques does hold up. While we make this comparison to the most relevant concurrent work for completeness, the following differences can bias the robust accuracies reported for the MSD models lower than expected (and, correspondingly, the robust accuracies reported for the other models higher than expected):

1. Use of random restarts: We observe in our experiments that using up to 10 restarts for all our attacks leads to a decrease in model accuracy of 5 to 10% across all models. Tramèr & Boneh do not mention restarting their attacks for these models, so the results for models apart from MSD in Tables 5 and 6 could potentially be lowered with random restarts.

2. Different training and testing thresholds: The MSD model for the MNIST dataset was trained at ε = (0.3, 1.5, 12) for the ℓ∞, ℓ2, ℓ1 perturbation balls respectively, while Tramèr & Boneh (2019) tested at ε = (0.3, 2.0, 10). This may lower the robust accuracy at these thresholds for the MSD model, since it was not trained for those particular thresholds. Likewise, the MSD model for CIFAR10 was trained at ε = (0.03, 0.3, 12) for the ℓ∞, ℓ2, ℓ1 perturbation balls respectively, while Tramèr & Boneh (2019) tested at ε = (4/255, 0, 2000/255).

3. Different perturbation models: For the CIFAR10 results in Table 6, the Adv_avg and Adv_max models are trained and tested only for ℓ1 and ℓ∞ adversarial perturbations, whereas the MSD model is robust to the union of ℓ1, ℓ2 and ℓ∞, a much harder task.

4. Larger suite of attacks used: The attacks used by Tramèr & Boneh are PGD, EAD (Chen et al., 2017) and the pointwise attack (Schott et al., 2019) for ℓ1; PGD, C&W (Carlini & Wagner, 2017) and the Boundary Attack (Brendel et al., 2017) for ℓ2; and PGD for ℓ∞ adversaries. We use a more expansive suite of attacks, as shown in Appendix C. Some of the attacks, like DDN, which proved to be strong adversaries in most cases, were not considered.

E ATTACKS OUTSIDE THE THREAT MODEL

In this section, we present some additional experiments exploring the performance of our model on attacks which lie beyond the threat model. Note that there is no principled reason to expect robustness here (most adversarial defenses tend not to generalize beyond the threat model defended against), and this is presented for exploratory reasons.

Common corruptions We measure the performance of all the models on CIFAR-10-C, a CIFAR10 benchmark with common corruptions applied to it (e.g. noise, blur, and compression). We report the results in Table 7. We find that, apart from the P1 model, the rest achieve some improved robustness against these common corruptions above the standard CIFAR10 model.
Defending against ℓ1 and ℓ∞ and evaluating on ℓ2 We also briefly study what happens when one trains against the ℓ1 and ℓ∞ threat models while evaluating against an ℓ2 adversary. Specifically, we take the MSD approach on MNIST and simply remove the ℓ2 adversary from the threat model. This results in a model whose ℓ1 and ℓ∞ robust performance against a PGD adversary drops by 1% and whose ℓ2 robust performance against a PGD adversary (which it was not trained for) drops by 2% in comparison to the original MSD approach on all three threat models. As a result, we empirically observe that including the ℓ2 threat model in this setting actually improved overall robustness against all three threat models. Unsurprisingly, the ℓ2 performance drops to some degree, but the model does not lose all of its robustness.
1. What is the main contribution of the paper regarding adversarial training?
2. What are the strengths and weaknesses of the proposed method compared to previous works?
3. Do you have any concerns about the motivation and claims made in the paper?
4. How does the reviewer assess the novelty and creativity of the paper's content?
5. Are there any suggestions or requests for additional experiments or analyses to support the paper's conclusions?
Review
Review This paper adversarially trains models against l_p norms where p takes three different values. The authors then propose a method which does somewhat better than the obvious way of adversarially training against more than one l_p perturbation. The motivation for the paper is limited, in that they suggest previous works have claimed adversarial training itself "overfits" to the given l_p norm. It isn't surprising that this works, since the straightforward baseline works. They make it seem surprising by suggesting that ABS claimed adversarial training is doomed and cannot provide robustness to l_1, l_2, l_\infty norms simultaneously. The other motivation is that this is a step toward studying an expanded threat model, but the authors have not demonstrated that the learned representations are any more robust to common corruptions (could the authors show the generalization performance on CIFAR-10-C or generalization to unforeseen corruptions?). Without further evidence, we are left to believe this only helps for this narrow threat model. Overall the paper is deficient in creativity and generality, so I vote for rejection.

Small comments:

> take more time than a single norm, it is a step closer towards the end goal of truly robust models, with adversarial robustness against all perturbations.

Please show model performance on CIFAR-10-C, since if the model is more robust, it should hopefully be more robust to stochastic adversaries.

> has claimed that adversarial training "overfits" to the particular type of perturbation used to generate the adversarial examples

Wouldn't this be that l_\infty training fits specifically to l_\infty examples, not that robust optimization cannot handle more than one norm at a time? Who is claiming that?

> First, we show that even simple aggregations of different adversarial attacks can achieve competitive universal robustness against multiple perturbations models without resorting to complex architectures.

I am not sure this was in doubt. The phrase "universal robustness" is misleading.

How were the budgets chosen for l_2 and l_1? Those values seem small.
ICLR
Title SketchEmbedNet: Learning Novel Concepts by Imitating Drawings

Abstract Sketch drawings are an intuitive visual domain that appeals to human instinct. Previous work has shown that recurrent neural networks are capable of producing sketch drawings of a single or a few classes at a time. In this work we investigate the representations developed by training a generative model to produce sketches from pixel images across many classes in a sketch domain. We find that the embeddings learned by this sketching model are extremely informative for visual tasks and encode a unique visual understanding. We then use them to exceed state-of-the-art performance in unsupervised few-shot classification on the Omniglot and mini-ImageNet benchmarks. We also leverage the generative capacity of our model to produce high-quality sketches of novel classes based on just a single example.

1 INTRODUCTION

Upon encountering a novel concept, such as a six-legged turtle, humans can quickly generalize this concept by composing a mental picture. The ability to generate drawings greatly facilitates communicating new ideas. This dates back to the advent of writing, as many ancient written languages are based on logograms, such as Chinese hanzi and Egyptian hieroglyphs, where each character is essentially a sketch of the object it represents. We often see complex visual concepts summarized by a few simple strokes. Inspired by the human ability to draw, recent research has explored the potential to generate sketches using a wide variety of machine learning models, ranging from hierarchical Bayesian models (Lake et al., 2015) to more recent deep autoregressive models (Gregor et al., 2015; Ha & Eck, 2018; Chen et al., 2017) and generative adversarial nets (GANs) (Li et al., 2019). It is natural to ask whether we can obtain useful intermediate representations from models that produce sketches in the output space, as has been shown for other generative models (Ranzato et al., 2006; Kingma & Welling, 2014; Goodfellow et al., 2014; Donahue et al., 2017; Doersch et al., 2015). Unfortunately, hierarchical Bayesian models suffer from prolonged inference time, while other current sketch models mostly focus on producing drawings in a closed-set setting with a few classes (Ha & Eck, 2018; Chen et al., 2017), or on improving log likelihood at the pixel level (Rezende et al., 2016). Leveraging the learned representation from these drawing models remains a rather unexplored topic.

In this paper, we pose the following question: can we learn a generalized embedding function that captures salient and compositional features by directly imitating human sketches? The answer is affirmative. In our experiments we develop SketchEmbedNet, an RNN-based sketch model trained to map grayscale and natural image pixels to the sketch domain. It is trained on hundreds of classes without the use of class labels to learn a robust drawing model that can sketch diverse and unseen inputs. We demonstrate the salience of the embeddings by achieving state-of-the-art performance on the Omniglot few-shot classification benchmark and visual recognizability in one-shot generations. We then examine how the embeddings capture image components and their spatial relationships, demonstrating compositionality in image space, and also show a surprising property of conceptual composition. We then push the boundary further by applying our sketch model to natural images; to our knowledge, we are the first to extend stroke-based autoregressive models to produce drawings of open-domain natural images.
We train our model with adapted SVG images from the Sketchy dataset (Sangkloy et al., 2016) and then evaluate the embedding quality directly on unseen classes in the mini-ImageNet task for few-shot classification (Vinyals et al., 2016). Our approach is competitive with existing unsupervised few-shot learning methods (Hsu et al., 2019; Khodadadeh et al., 2019; Antoniou & Storkey, 2019) on this natural image benchmark. In both the sketch and natural image domains, we show that by learning to draw, our methods generalize well even across different datasets and classes.

2 RELATED WORK

In this section we review relevant literature, including generating sketch-like images, unsupervised representation learning, unsupervised few-shot classification, and sketch-based image retrieval (SBIR).

Autoregressive drawing models: Graves (2013) uses an LSTM to directly output pen coordinates to imitate handwriting sequences. SketchRNN (Ha & Eck, 2018) builds on this by applying it to general sketches beyond characters. Song et al. (2018); Cao et al. (2019); Ribeiro et al. (2020) all extend SketchRNN through architectural improvements. Chen et al. (2017) change the inputs to pixel images. This and the previous three works consider multi-class sketching, but none handle more than 20 classes. Autoregressive models also generate images directly in the pixel domain: DRAW (Gregor et al., 2015) uses recurrent attention to plot pixels; Rezende et al. (2016) extend this to one-shot generation; and PixelCNN (van den Oord et al., 2016) generates image pixels sequentially.

Image processing methods & GANs: Other works produce sketch-like images based on style transfer or low-level image processing techniques. Classic methods are based on edge detection and image segmentation (Arbelaez et al., 2011; Xie & Tu, 2017). Zhang et al. (2015) use a CNN to directly produce sketch-like pixels for face images. Photo-sketch and pix2pix (Li et al., 2019; Isola et al., 2017) propose conditional GANs to generate images across different style domains. Image-processing-based methods do not acquire high-level image understanding, as all the operations are low-level filtering, and none of the GAN sketching methods are designed to mimic human drawings of open-domain natural images.

Unsupervised representation learning: In the sketch image domain, our method is similar to the large category of generative models which learn unsupervised representations by the principle of analysis-by-synthesis. Work by Hinton & Nair (2005) operates in a sketch domain and learns to draw by synthesizing an interpretable motor program. Bayesian Program Learning (Lake et al., 2015) draws through exact inference of common strokes, but learning and inference are computationally challenging. As such, a variety of deep generative models aim to perform approximate Bayesian inference by using an encoder structure that directly predicts the embedding, e.g., deep autoencoders (Vincent et al., 2010), the Helmholtz Machine (Dayan et al., 1995), the variational autoencoder (VAE) (Kingma & Welling, 2014), BiGAN (Donahue et al., 2017), etc. Our method is also related to the literature on self-supervised representation learning (Doersch et al., 2015; Noroozi & Favaro, 2016; Gidaris et al., 2018; Zhang et al., 2016), as sketch strokes are part of the input data itself. In few-shot learning (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017), recent work has explored unsupervised meta-training.
CACTUs, AAL and UMTRA (Hsu et al., 2019; Antoniou & Storkey, 2019; Khodadadeh et al., 2019) all operate by generating pseudo-labels for training.

Sketch-based image retrieval (SBIR): In SBIR, a model is provided a sketch drawing and retrieves a photo of the same class. The area is split into a fine-grained setting (FG-SBIR) (Yu et al., 2016; Sangkloy et al., 2016; Bhunia et al., 2020) and a zero-shot setting (ZS-SBIR) (Dutta & Akata, 2019; Pandey et al., 2020; Dey et al., 2019). FG-SBIR considers minute details, while ZS-SBIR learns high-level cross-domain semantics and a joint latent space to perform retrieval.

3 LEARNING TO IMITATE DRAWINGS

Here we describe learning to draw through sketch imitation. Our architecture is a generative encoder-decoder model with a CNN encoder for pixel images and an RNN decoder that outputs vector sketches, as shown in Figure 1. Unlike other drawing models that only train on a single or a few classes (Ha & Eck, 2018; Chen et al., 2017), SketchEmbedNet is not limited by class inputs and can sketch a wide variety of images. We also introduce a differentiable rasterization function for computing an additional pixel-based training loss.

Input & output representation Unlike SketchRNN, which encodes drawing sequences, we learn an image embedding by mapping pixels to sketches, similar to Chen et al. (2017). Training data for this task (adopted from Ha & Eck (2018)) consists of a tuple (x, y), where x ∈ R^{H×W×C} is the input image and y ∈ R^{T×5} is the stroke target. T is the maximum sequence length of the stroke data y, and each stroke y_t consists of 5 elements, (Δx, Δy, s_1, s_2, s_3). The first two elements are horizontal and vertical displacements on the drawing canvas from the endpoint of the previous stroke. The latter three elements are mutually exclusive pen states: s_1 indicates the pen is on paper for the next stroke, s_2 indicates the pen is lifted, and s_3 indicates the sketch sequence has ended. y_0 is initialized with (0, 0, 1, 0, 0) to start the generative process (a toy example of this format is given below). Note that no class information is available during training.

SketchEmbedding as a compositional encoding of images We use a CNN to encode the input image x and obtain the latent representation z, as shown in Figure 1. To model intra-class variance, z is a Gaussian random variable parameterized by the CNN outputs, similar to a VAE (Kingma & Welling, 2014). Throughout this paper, we refer to z as the SketchEmbedding. In typical image representations the embedding is trained to classify object classes, or to reconstruct the input pixels. Here, since the SketchEmbedding is fed into an RNN decoder to produce a sequence of drawing actions, z is additionally encouraged to have a compositional understanding of the object structure, instead of just an unstructured set of pixel features. For example, when drawing the legs of a turtle, the model must explicitly generate each leg instance. Pixel-based models, by contrast, suffer from blurriness and, by generating the whole image at once, do not distinguish between individual components such as the legs, body and head. The loss of this component information in pixel models has been observed in the GAN literature (Goodfellow, 2017); we propose that our sketching task avoids it. To accommodate the increased training data complexity from including hundreds of classes, we also upscale the size of our model in comparison to the work of Chen et al. (2017); Ha & Eck (2018); Song et al. (2018).
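As referenced above, a toy numpy illustration (not from the paper's code) of the five-element stroke format (Δx, Δy, s_1, s_2, s_3), drawing a unit square:

```python
import numpy as np

square = np.array([
    [0.0,  0.0, 1, 0, 0],  # y_0: start token; pen is down for the next stroke
    [1.0,  0.0, 1, 0, 0],  # bottom edge
    [0.0,  1.0, 1, 0, 0],  # right edge
    [-1.0, 0.0, 1, 0, 0],  # top edge
    [0.0, -1.0, 0, 1, 0],  # left edge; s_2 = 1 lifts the pen afterwards
    [0.0,  0.0, 0, 0, 1],  # s_3 = 1 marks the end of the sketch sequence
])
# Absolute pen positions are the cumulative sum of the offsets.
xy = np.cumsum(square[:, :2], axis=0)
```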
The backbone is either a 4-layer CNN (Conv4) (Vinyals et al., 2016), for consistent comparisons in the few-shot setting, or a ResNet12 (Oreshkin et al., 2018), which produces better drawing results. In comparison, Chen et al. (2017) only use 2D convolutions with a maximum of 8 filters.

RNN decoder The RNN decoder used in SketchEmbedNet is the same as in SketchRNN (Ha & Eck, 2018). The decoder outputs a mixture density representing the stroke distribution at each timestep. Specifically, the stroke distribution is a mixture of M bivariate Gaussians (M is a hyperparameter) denoting spatial offsets, together with the probabilities of the three pen states s_{1−3}. The spatial offsets Δ = (Δx, Δy) are sampled from the mixture of Gaussians, described by: (1) the normalized mixture weights π_j; (2) mixture means μ_j = (μ_x, μ_y)_j; and (3) covariance matrices Σ_j. We further reparameterize each Σ_j with its standard deviation σ_j = (σ_x, σ_y)_j and correlation coefficient ρ_{xy,j}. Thus, the stroke offset distribution is

p(Δ) = Σ_{j=1}^{M} π_j N(Δ | μ_j, Σ_j).

The RNN is implemented using a HyperLSTM (Ha et al., 2017); the LSTM weights are generated at each timestep by a smaller recurrent "hypernetwork" to improve training stability. Generation is autoregressive, using z ∈ R^D, concatenated with the stroke from the previous timestep y_{t−1}, to form the input to the LSTM. Stroke y_{t−1} is the ground truth supervision at training time (teacher forcing), or a sample y′_{t−1} from the mixture distribution output by the model at timestep t − 1.

3.1 LEARNING

We train the drawing model in an end-to-end fashion by jointly optimizing three losses: a pen loss L_pen for learning pen states, a stroke loss L_stroke for learning pen offsets, and our proposed pixel loss L_pixel for matching the visual similarity of the predicted and the target sketch:

L = L_pen + (1 − α)L_stroke + αL_pixel,   (1)

where α is a loss-weighting hyperparameter. Both L_pen and L_stroke were present in SketchRNN, while L_pixel is our novel contribution to stroke-based generative models. Unlike SketchRNN, we do not impose a prior using KL divergence, as we are not interested in unconditional sampling and it worsens the quantitative results in later sections.

Pen loss The pen-state predictions {s′_1, s′_2, s′_3} are optimized as a simple 3-way classification with the softmax cross-entropy loss,

L_pen = −(1/T) Σ_{t=1}^{T} Σ_{m=1}^{3} s_{m,t} log(s′_{m,t}).

Stroke loss The stroke loss maximizes the log-likelihood of the spatial offsets of each ground truth stroke Δ_t under the mixture density distribution p_t at each timestep:

L_stroke = −(1/T) Σ_{t=1}^{T} log p_t(Δ_t).
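A minimal numpy sketch of the stroke loss above (an assumed implementation of the bivariate-Gaussian mixture negative log-likelihood, not the authors' code; shapes are: delta (T, 2), pi and rho (T, M), mu and sigma (T, M, 2)):

```python
import numpy as np

def stroke_nll(delta, pi, mu, sigma, rho):
    """Negative log-likelihood of ground-truth offsets under the mixture density."""
    dx = (delta[:, None, 0] - mu[..., 0]) / sigma[..., 0]   # standardized offsets
    dy = (delta[:, None, 1] - mu[..., 1]) / sigma[..., 1]
    z = dx**2 + dy**2 - 2.0 * rho * dx * dy
    norm = 2.0 * np.pi * sigma[..., 0] * sigma[..., 1] * np.sqrt(1.0 - rho**2)
    log_n = -z / (2.0 * (1.0 - rho**2)) - np.log(norm)      # log N(delta | mu_j, Sigma_j)
    log_p = np.log(np.sum(pi * np.exp(log_n), axis=1))      # mixture likelihood per step
    return -np.mean(log_p)                                  # average over T timesteps
```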
Pixel loss While pixel-level reconstruction objectives are common in generative models (Kingma & Welling, 2014; Vincent et al., 2010; Gregor et al., 2015), we introduce a pixel-based objective for vector sketch generation. After decoding, a differentiable rasterization function f_raster is used to map the sketch into a pixel image. f_raster transforms a stroke sequence y into a set of 2D line segments (l_0, l_1), (l_1, l_2), ..., (l_{T−1}, l_T), where l_t = Σ_{τ=0}^{t} Δ_τ. It renders by fixing canvas dimensions, scaling and centering strokes, and then determining pixel intensity based on the L2 distance between each pixel and the lines of the drawing. Further details on f_raster can be found in Appendix A. f_raster is applied to both the prediction y′ and the ground truth y to produce two pixel images. Gaussian blur g_blur(·) is used to reduce strictness before computing the binary cross-entropy loss L_pixel:

I = g_blur(f_raster(y)),  I′ = g_blur(f_raster(y′)),
L_pixel = −(1/HW) Σ_{i=1}^{H} Σ_{j=1}^{W} I_{ij} log(I′_{ij}).   (2)
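A minimal sketch of the pixel loss in Equation (2), assuming a `raster` callable standing in for f_raster that returns an (H, W) image in [0, 1]; the clipping constant `eps` is a numerical-stability addition of ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_loss(y_true, y_pred, raster, sigma=2.0, eps=1e-6):
    """Cross-entropy between blurred rasterizations of target and predicted strokes."""
    I = gaussian_filter(raster(y_true), sigma)       # blurred ground-truth canvas
    I_hat = np.clip(gaussian_filter(raster(y_pred), sigma), eps, 1.0)
    # Equation (2): average of I_ij * log(I'_ij) over the canvas, negated.
    return -np.mean(I * np.log(I_hat))
```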
Curriculum training schedule We find that α (in Equation 1) is an important hyperparameter that impacts both the learned embedding space and the generation quality of SketchEmbedNet. A curriculum training schedule is used, increasing α to prioritize L_pixel relative to L_stroke as training progresses; this makes intuitive sense, as a single drawing can be produced by many different stroke sequences, but learning to draw in a fixed manner is easier. While L_pen promotes reproducing a specific drawing sequence, L_pixel only requires that the generated drawing visually match the image. Like a human, the model should learn to follow one drawing style (à la paint-by-numbers) before learning to draw freely.

4 DRAWING IMITATION EXPERIMENTS

In this section, we introduce our experiments on training SketchEmbedNet using two sketching datasets. The first is based on pure stroke-based drawings, and the second consists of natural image and drawing pairs.

Quickdraw: Stroke-based image sketching The Quickdraw (Jongejan et al., 2016) dataset consists of 345 classes, each with 70,000 examples, produced by human players participating in the game "Quick, Draw!". Examples from the Quickdraw dataset are shown in Figure 2b. The input image x is a direct rasterization of the drawing data y. 300 of the 345 classes are randomly selected for training; x is rasterized to a resolution of 28×28 and stroke labels y are padded up to length T = 64. Any drawing samples exceeding this length were discarded. Note that this is an unsupervised representation learning approach, as no class information is used by the system. Data processing procedures and class splits are in Appendix G.

Sketchy: Open-domain natural image sketching We further extend our stroke-based generation model to open-domain natural images. Here, the input is an RGB photo, and the output is a human drawing which does not align with the photo precisely and does not match the low-level image details. This is a novel setting, as prior efforts by Ha & Eck (2018); Chen et al. (2017); Song et al. (2018) have only applied their sketch RNN models to the Quickdraw dataset or to natural images with only two object classes (shoe/chair) and scrubbed backgrounds (Yu et al., 2016). Learning to sketch open-domain natural images is very challenging, as it requires the model to identify the subject and filter unnecessary details not present in the sketch. At test time, we further challenge our method by evaluating on unseen data distributions, necessitating generalization over natural images. For this task we use the Sketchy dataset (Sangkloy et al., 2016), which consists of ImageNet images paired with vector sketches, for a total of 56k examples after processing. Sketches are stored as SVGs with timestamps preserving their original drawing sequence, which we adapt by sampling paths in this order. Images are also centered, padded and resized to a resolution of 84×84 (see Figure 2a). We fix the maximum sequence length to T = 100 and use all 125 categories, but remove classes that have child synsets overlapping with the test classes of mini-ImageNet (Vinyals et al., 2016). This enables testing on mini-ImageNet without any alterations to the benchmark. Once again, this is an unsupervised learning formulation.

4.1 RESULTS AND VISUALIZATIONS

Figure 3 shows drawings conditioned on sketch image inputs. There is little noticeable drop in quality when we sample sketches from unseen classes compared to classes the model has seen before. Figure 4 shows examples of sketches generated from natural images. While they are not fine-grained renditions, these sketches clearly demonstrate SketchEmbedNet's ability to capture key components of seen and unseen classes. The model effectively isolates the subject in each natural image and captures the circular and square shapes in the cakes and storefronts respectively. Even with the challenging lion images, it identifies the silhouette of the lying lion despite low contrast and appropriately localizes the one on the mountainside. Unlike pixel-based autoencoder models, our sketches do not follow the exact pose of the original strokes, but rather capture a general notion of component groupings. In the basketball example of Figure 3, the lines are not a good pixel-wise match to those in the original image, yet they are placed in sensible relative positions. Weaker examples are presented in the last rows of Figures 3 and 4; even these poorer examples still capture some structural aspects of the original image. Implementation details can be found in Appendix B. In later sections we explore the uses of SketchEmbeddings, keeping the models fixed for all downstream tasks.

5 FEW-SHOT CLASSIFICATION USING SKETCHEMBEDDING

We would like to assess the benefits of learning to draw by performing few-shot classification with our learned embedding space. Performance on discriminative tasks reveals that learning to imitate sketches allows the embeddings to capture salient information about novel object classes. Below we describe our few-shot classification procedure and summarize results on the Omniglot (Lake et al., 2015) and mini-ImageNet (Vinyals et al., 2016) benchmarks.

Comparison to unsupervised few-shot classification In unsupervised few-shot classification, a model is not provided with any class labels during meta-training, until it is given a few labeled examples ("shots") of the novel classes at meta-test time. While our model is provided a "target" (a sequence of strokes) during training, it is not given any class information. Therefore, we propose that the presented sketch imitation training, though it uses sketch sequences, is comparable to other class-label-free representation learning approaches (Berthelot et al., 2019; Donahue et al., 2017; Caron et al., 2018), and the learned SketchEmbeddings can be applied to unsupervised few-shot classification. In our experiments, we compare to previous unsupervised few-shot learning approaches: CACTUs (Hsu et al., 2019), AAL (Antoniou & Storkey, 2019), and UMTRA (Khodadadeh et al., 2019). These methods create pseudo-labels during meta-training using either clustering or data augmentation. As additional baselines, a Conv-VAE (Kingma & Welling, 2014) and a random CNN are also included, both using the same Conv4 backbone.

Few-shot experimental setup The CNN encoder of SketchEmbedNet is used as an embedding function, combined with a linear classification head, to perform few-shot classification. The embedding is made deterministic by taking the mean of the random normal latent variable z and discarding the variance parameter from the encoder. Otherwise, the conventional episodic setup for few-shot classification is used; each episode consists of a labeled "support" set of N×K (N-way, K-shot) embeddings and an unlabeled "query" set.
The linear classification head is trained on the labeled support set and evaluated on the query set.

5.1 FEW-SHOT CLASSIFICATION ON OMNIGLOT

The Omniglot (Lake et al., 2015) dataset contains 50 alphabets and 1623 unique character types, each with 20 examples, presented as both greyscale images and stroke drawings. We use the same train-test split as Vinyals et al. (2016), along with randomly sampled episodes. Experiments using the more challenging Lake split, where episodes are sampled within an alphabet as proposed by Lake et al. (2015), are in Appendix E, and random-seed experiments are in Appendix F. To ensure a fair comparison with other few-shot classification models, we use the same convolutional encoder (Conv4) as Vinyals et al. (2016). Results from training only on Omniglot (Lake et al., 2015) are also presented to demonstrate effectiveness without the larger Quickdraw (Jongejan et al., 2016) dataset. No significant improvements were observed using the deeper ResNet12 (Oreshkin et al., 2018) architecture; additional results are in Appendix I. All of our methods outperform the previous state of the art on the unsupervised Omniglot benchmark (Table 1). The Quickdraw-trained model surpasses supervised MAML (Finn et al., 2017), and is on par with a supervised ProtoNet (Snell et al., 2017) model, especially in the 5-shot settings. Both baselines, a Conv-VAE and a random CNN, perform well compared to other unsupervised methods.

5.2 FEW-SHOT CLASSIFICATION ON MINI-IMAGENET

We extend our investigation and assess SketchEmbeddings for the classification of natural images on the mini-ImageNet benchmark (Vinyals et al., 2016). The same CNN encoder from the natural image sketching task is used, to match the visual domain of the examples we hope to classify. The mini-ImageNet (Vinyals et al., 2016) dataset consists of 100 classes, each with 600 examples. The setup proposed by Ravi & Larochelle (2017) is used, where the classes are split 64-16-20 for training, validation and test. As noted earlier, any examples in the Sketchy dataset that are also present in the mini-ImageNet test set were removed by filtering the synset (and child synset) IDs, ensuring train and test classes are disjoint. Classification results on mini-ImageNet are shown in Table 2. Natural image classification is a far more challenging problem. Learning to reconstruct the pixels of an image actually worsens results; the trained Conv-VAE is outperformed by the VAE with random weights. However, sketch reconstruction is still a valuable task: our models are competitive, and our best model outperforms previous state-of-the-art unsupervised methods in few-shot settings. A full table is in Appendix J; seeding results are in Appendix F.

5.3 SKETCHING TO LEARN CLASS-IDENTIFIABLE INFORMATION

Existing sketch works have focused on generating better drawings or unifying sketches with other image domains. We present a new paradigm: using sketching as an auxiliary task to learn visual content. Only by training a drawing model that can sketch general image inputs are we able to transfer the learned understanding to new data distributions. By learning the stroke distribution of the Quickdraw dataset, we are able to interpret image inputs from the separate Omniglot dataset and tackle the few-shot classification task with surprising accuracy. While the natural image sketching task is challenging and does not always produce high-fidelity results, it still learns useful visual information.
By training on the Sketchy dataset, we learn how to draw other data distributions for which no sequential stroke data exists. Then, by knowing how to sketch this mini-ImageNet data, we are able to produce distinguishable embeddings that enable competitive few-shot classification performance.

Varying the weighting of the pixel loss For both settings we sweep the pixel loss coefficient α_max to ablate its impact on model performance on the Omniglot task (Table 3). There is a substantial improvement in few-shot classification when α_max is non-zero. α_max = 0.50 achieves the best results, and the trend goes downward as α_max approaches 1.0, i.e. as the weighting for L_stroke goes to 0.0. This is reasonable, as the training of SketchEmbedNet is more stable under the guidance of ground truth strokes.

6 PROPERTIES OF SKETCHEMBEDDINGS

We hypothesize that reproducing a sketch drawing, rather than taking a pixel-based approach, requires the preservation of more structural information due to the sequential RNN generation. By learning in this manner, SketchEmbeddings are aware of spatial properties and the composition of elements in image space. We examine this compositionality through several comparisons of SketchEmbeddings with the embeddings of a Conv-VAE.

Component arrangements We construct examples that contain the same set of objects but in different arrangements, to test sensitivity to component arrangement and composition in image space. We then embed these examples with both generative models and project into 2D space using UMAP (McInnes et al., 2018) to visualize their organization. In the first two panels of Figure 5-A, we see that the SketchEmbeddings are easily separated by unsupervised clustering. The rightmost panel of Figure 5-A exhibits non-synthetic classes with duplicated shapes: snowmen with circles and televisions with squares. With these results, we demonstrate the greater component-level awareness of SketchEmbeddings. The four rearranged shapes and the nested circles and squares have similar silhouettes that are difficult to differentiate under a conventional pixel loss. To SketchEmbeddings, the canvas offset and different drawing sequence of each shape make them substantially different in embedding space.

Spatial relationships Drawing also builds awareness of relevant underlying variables, such as the spatial relationships between components of an image. We examine the degree to which the underlying variables of angle, distance or size are captured by the embedding, by constructing images that vary along each dimension respectively. The embeddings are again grouped by a 2D projection in Figure 5-B using the UMAP (McInnes et al., 2018) algorithm. When clustered, the 2D projection of SketchEmbeddings arranges the examples along an axis corresponding to the latent variable, whereas the Conv-VAE embeddings are arranged non-linearly, in clusters. This clear axis alignment suggests a greater level of latent variable disentanglement in SketchEmbeddings.

Conceptual composition Finally, we explore concept space composition using SketchEmbeddings (Figure 5-C) by embedding different Quickdraw examples and then performing arithmetic with the latent vectors. By subtracting a circle embedding from a snowman composed of stacked circles and adding a square embedding, we produce stacked boxes. This property of vector arithmetic is reminiscent of language representations, as evidenced in analogies like King − Man + Woman = Queen (Ethayarajh et al., 2019).
Our results indicate that this property is captured to a greater degree in SketchEmbeddings than in the pixel-based VAE embeddings. Composing SketchEmbeddings produces decoded examples that appeal to our intuitive conceptual understanding, while the VAE degrades to blurry, fragmented images. We provide more examples of the setup in Figure 5-C as well as additional classes in Appendix K. 7 ONE-SHOT GENERATION To evaluate the sketches generated by our model, we make qualitative comparisons to other one-shot generative models and quantitatively assess our model through visual classification via a ResNet101 (He et al., 2016). In this section, all models use the ResNet12 (Oreshkin et al., 2018) backbone. Qualitative comparisons We compare SketchEmbedNet one-shot generations of Omniglot characters with examples from other few-shot (Reed et al., 2017) and one-shot (Rezende et al., 2016) approaches (Figure 6). In the settings shown, none of the models have seen any examples from the character class, or the parent alphabet. Furthermore, the drawer has seen no written characters during training and is trained only on the Quickdraw dataset. Visually, our generated images better resemble the support examples, and the variations produced by stretching and shifting strokes better preserve the semantics of each character. Generations in pixel space may disrupt strokes and alter the character to human perception. This is especially true for written characters, as they are frequently defined by a specific set of strokes instead of blurry clusters of pixels. Quantitative evaluation of generation quality Evaluating generative models is often challenging. Per-pixel metrics like those in Reed et al. (2017) and Rezende et al. (2016) may penalize generative variance that still preserves meaning. We propose an Inception Score (Salimans et al., 2016) inspired metric to quantify the class-recognizability and generalization of generated examples. We train two separate ResNet classifiers (He et al., 2016), each on a different set of 45 Quickdraw classes. One set was part of the training set of SketchEmbedNet (referred to as “seen”) and the other set was held out during training (referred to as “unseen”). We then have SketchEmbedNet generate one-shot sketches from each set and have the corresponding classifier predict a class. The accuracy of the classifier on generated examples is compared with its training accuracy in Table 4. For a ResNet classifier, SketchEmbedNet generations are more recognizable for both seen and unseen classes. 8 CONCLUSION Learning to draw is not only an artistic pursuit; it drives a distillation of real-world visual concepts. We present a generalized drawing model capable of producing accurate sketches and visual summaries of open-domain natural images. While sketch data may be challenging to source, we show that training to draw either sketch or natural images can generalize to downstream tasks, not only within each domain but also well beyond the training data. More generally, research in this direction may lead to more lifelike image understanding inspired by how humans communicate visual concepts. A RASTERIZATION The key enabler of our novel pixel loss for sketch drawings is our differentiable rasterization function fraster. Sequence-based loss functions such as Lstroke are sensitive to the order of points while, in reality, drawings are sequence-invariant. Visually, a square is a square whether it is drawn clockwise or counterclockwise.
The purpose of a sketch representation is to lower the complexity of the data space and decode in a more visually intuitive manner. While it is a necessary departure point, the sequential generation of drawings is not key to our visual representation, and we would like SketchEmbedNet to be agnostic to any specific sequence needed to draw the sketch that is representative of the image input. To facilitate this, we develop our rasterization function fraster, which renders an input sequence of strokes as a pixel image. However, during training, the RNN outputs a mixture of Gaussians at each timestep. To convert this to a stroke sequence, we sample from these Gaussians; this can be repeated to reduce the variance of the pixel loss. We then scale our predicted and ground truth sequences by the properties of the latter before rasterization. Stroke sampling At the end of sequence generation we have $N_s \times (6M + 3)$ parameters: 6 parameters for each of the $M$ Gaussian mixture components plus 3 pen states, at each of the $N_s$ steps, one for each stroke. To obtain the actual drawing we sample from the mixture of Gaussians:

$$\begin{bmatrix} \Delta x_t \\ \Delta y_t \end{bmatrix} = \begin{bmatrix} \mu_{x,t} \\ \mu_{y,t} \end{bmatrix} + \begin{bmatrix} \sigma_{x,t} & 0 \\ \rho_{xy,t}\,\sigma_{y,t} & \sigma_{y,t}\sqrt{1-\rho_{xy,t}^{2}} \end{bmatrix} \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I_2). \tag{3}$$

After sampling, we compute the cumulative sum of the stroke offsets over the timesteps to obtain the absolute displacement from the initial position:

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \sum_{\tau=0}^{t} \begin{bmatrix} \Delta x_\tau \\ \Delta y_\tau \end{bmatrix}, \tag{4}$$

$$y_{t,\mathrm{abs}} = (x_t, y_t, s_1, s_2, s_3). \tag{5}$$

Scaling Each sketch generated by our model begins at (0, 0), and the variance of all strokes in the training set is normalized to 1. On a fixed canvas the image is therefore both very small and localized to the top left corner. We remedy this by computing a scale λ and shifts xshift, yshift using the labels y, and applying them to both the prediction y′ and the ground truth y. These parameters are computed as:

$$\lambda = \min\left\{ \frac{W}{x_{\max} - x_{\min}},\; \frac{H}{y_{\max} - y_{\min}} \right\}, \tag{6}$$

$$x_{\mathrm{shift}} = \frac{x_{\max} + x_{\min}}{2}\,\lambda, \qquad y_{\mathrm{shift}} = \frac{y_{\max} + y_{\min}}{2}\,\lambda. \tag{7}$$

Here $x_{\max}, x_{\min}, y_{\max}, y_{\min}$ are the maximum and minimum values of $x_t, y_t$ from the supervised stroke labels, not the generated strokes. $W$ and $H$ are the width and height in pixels of our output canvas. Calculate pixel intensity Finally, we calculate the intensity $p_{ij}$ of every pixel in our $H \times W$ canvas:

$$p_{ij} = \sigma\!\left[ 2 - 5 \min_{t=1\ldots N_s} \Big( \mathrm{dist}\big( (i,j), (x_{t-1}, y_{t-1}), (x_t, y_t) \big) + \big(1 - \lfloor s_{1,t-1} \rceil\big) \cdot 10^{6} \Big) \right], \tag{8}$$

where the distance function is the distance between point $(i, j)$ and the line segment defined by the absolute points $(x_{t-1}, y_{t-1})$ and $(x_t, y_t)$, and $\lfloor \cdot \rceil$ denotes rounding to the nearest integer. We also inflate any distances where $s_{1,t-1} < 0.5$ so as not to render strokes where the pen is not touching the paper. B IMPLEMENTATION DETAILS We train our model for 300k iterations with a batch size of 256 for the Quickdraw dataset and 64 for Sketchy due to memory constraints. The initial learning rate is 1e-3, which decays by a factor of 0.85 every 15k steps. We use the Adam (Kingma & Ba, 2015) optimizer and clip gradient values at 1.0. σ = 2.0 is used for the Gaussian blur in Lpixel. For the curriculum learning schedule, the value of α is set to 0 initially and increases by 0.05 every 10k training steps, with an empirically obtained cap at αmax = 0.50 for Quickdraw and αmax = 0.75 for Sketchy. The ResNet12 (Oreshkin et al., 2018) encoder uses 4 ResNet blocks with 64, 128, 256, 512 filters respectively and ReLU activations. The Conv4 backbone has 4 blocks of convolution, batch norm (Ioffe & Szegedy, 2015), ReLU and max pool, identical to Vinyals et al. (2016). We select the latent space to be 256 dimensions, the RNN output size to be 1024, and the hypernetwork embedding size to be 64.
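For concreteness, the core of fraster (Equation 8) can be sketched in a few lines of PyTorch. The function below renders already-scaled absolute stroke endpoints in pixel coordinates; mixture sampling (Eq. 3) and the scale/shift computation (Eqs. 6-7) follow the formulas directly and are omitted. This is an illustrative reimplementation under those assumptions, not the released code.

```python
import torch

def raster(xy, pen_down, H=28, W=28):
    """Render absolute stroke endpoints xy [T, 2] (already scaled/shifted
    to pixel coordinates) with pen states pen_down [T] into an H x W image."""
    ii, jj = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    grid = torch.stack([ii, jj], dim=-1).reshape(-1, 1, 2)   # [H*W, 1, 2]
    a, b = xy[:-1].unsqueeze(0), xy[1:].unsqueeze(0)         # [1, T-1, 2]
    ab = b - a
    # Project each pixel onto each segment, clamped to the segment.
    t = ((grid - a) * ab).sum(-1) / (ab * ab).sum(-1).clamp(min=1e-6)
    proj = a + t.clamp(0.0, 1.0).unsqueeze(-1) * ab
    dist = (grid - proj).norm(dim=-1)                        # [H*W, T-1]
    # Eq. 8's 1e6 term: mask out segments where the pen was lifted.
    dist = dist + (1.0 - pen_down[:-1].round()) * 1e6
    intensity = torch.sigmoid(2.0 - 5.0 * dist.min(dim=-1).values)
    return intensity.reshape(H, W)
```

Both the predicted and ground-truth sequences are passed through this function, Gaussian-blurred, and compared with binary cross-entropy to produce Lpixel; the minimum over segments is subdifferentiable, so gradients flow back to the stroke coordinates.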
We use a mixture of M = 30 bivariate Gaussians for the mixture density output of the stroke offset distribution. C LATENT SPACE INTERPOLATION As in many encoder-decoder models, we evaluate the interpolation of our latent space. We select 4 embeddings at random and use bi-linear interpolation to produce new embeddings. Results are in Figures 7a and 7b. (a) Interpolation of classes: power outlet, snowman, jacket, elbow (b) Interpolation of classes: cloud, power outlet, basket, compass Figure 7: Latent space interpolations of randomly selected examples. We observe that compositionality is also present in these interpolations. In the top row of Figure 7a, the model first plots a third small circle when interpolating from the 2-circle power outlet to the 3-circle snowman. This small circle is treated as a single component that grows as it transitions between classes until its final size in the far right snowman drawing. Some other RNN-based sketching models (Ha & Eck, 2018; Chen et al., 2017) experience other classes materializing in interpolations between two unrelated classes. Our model does not exhibit this behaviour, as our embedding space is learned from more classes and thus does not contain local groupings of classes. D EFFECT OF α ON FEW-SHOT CLASSIFICATION We performed additional experiments exploring the impact of our curriculum training schedule for α. The encoding component of our drawing model was evaluated on the few-shot classification task for different values of αmax every 25k iterations during training. A graph is shown in Figure 8 and the full table of all values of αmax is in Table 5. E INTRA-ALPHABET LAKE SPLIT The creators of the Omniglot dataset and one-shot classification benchmark originally proposed an intra-alphabet classification task. This task is more challenging than the common Vinyals split, as characters from the same alphabet may share stylistic sub-components that make visual differentiation more difficult. This benchmark has been less explored by researchers; however, we still present the performance of our SketchEmbedding model against evaluations of other few-shot classification models on the benchmark. Results are shown in Table 6. Unsurprisingly, our model is outperformed by supervised models, and it falls behind by a more substantial margin than in the Vinyals split. However, our SketchEmbedding approach still achieves respectable classification accuracy overall and greatly outperforms a Conv-VAE baseline. F EFFECT OF RANDOM SEEDING ON FEW-SHOT CLASSIFICATION The training objective for SketchEmbedNet is to reproduce sketch drawings of the input. As this task is unrelated to few-shot classification, performance may vary given different initializations. We quantify this variance by training our model with 15 unique random seeds and evaluating the performance of the latent space on the few-shot classification tasks. We disregard the per-episode variance of our model in each test stage and only present the mean accuracy. We then compute a new confidence interval over random seeds. Results are presented in Tables 7, 8, 9. G DATA PROCESSING We apply the same data processing methods as in Ha & Eck (2018), with no additional changes, to produce our stroke labels y. When rasterizing for our input x, we scale and center the strokes, then pad the image with 10% of the resolution in that dimension, rounded to the nearest integer.
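A rough illustration of this preprocessing pipeline (simplify, convert to offsets, normalize) is sketched below. The `rdp` call assumes a Ramer-Douglas-Peucker implementation such as the `rdp` PyPI package; the per-sketch variance normalization shown here is a simplification, since the paper normalizes over the whole training set, and rasterization with 10% padding is done separately as described above.

```python
import numpy as np
from rdp import rdp  # assumed third-party Ramer-Douglas-Peucker package

def strokes_to_offsets(strokes, epsilon=2.0):
    """strokes: list of [T_i, 2] arrays of absolute pen coordinates.
    Returns y: [T, 3] rows of (dx, dy, pen_lift) offsets."""
    rows = []
    for s in strokes:
        s = rdp(s, epsilon=epsilon)        # simplify each continuous stroke
        d = np.diff(s, axis=0)             # absolute points -> offsets
        if len(d) == 0:
            continue
        pen = np.zeros((len(d), 1))
        pen[-1] = 1                        # lift the pen after each stroke
        rows.append(np.hstack([d, pen]))
    y = np.vstack(rows)
    y[:, :2] /= y[:, :2].std()             # normalize offset variance to 1
    return y
```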
G.1 QUICKDRAW The following classes were used for training: The Eiffel Tower, The Mona Lisa, aircraft carrier, alarm clock, ambulance, angel, animal migration, ant, apple, arm, asparagus, banana, barn, baseball, baseball bat, bathtub, beach, bear, bed, bee, belt, bench, bicycle, binoculars, bird, blueberry, book, boomerang, bottlecap, bread, bridge, broccoli, broom, bucket, bulldozer, bus, bush, butterfly, cactus, cake, calculator, calendar, camel, camera, camouflage, campfire, candle, cannon, car, carrot, castle, cat, ceiling fan, cell phone, cello, chair, chandelier, church, circle, clarinet, clock, coffee cup, computer, cookie, couch, cow, crayon, crocodile, crown, cruise ship, diamond, dishwasher, diving board, dog, dolphin, donut, door, dragon, dresser, drill, drums, duck, dumbbell, ear, eye, eyeglasses, face, fan, feather, fence, finger, fire hydrant, fireplace, firetruck, fish, flamingo, flashlight, flip flops, flower, foot, fork, frog, frying pan, garden, garden hose, giraffe, goatee, grapes, grass, guitar, hamburger, hand, harp, hat, headphones, hedgehog, helicopter, helmet, hockey puck, hockey stick, horse, hospital, hot air balloon, hot dog, hourglass, house, house plant, ice cream, key, keyboard, knee, knife, ladder, lantern, leaf, leg, light bulb, lighter, lighthouse, lightning, line, lipstick, lobster, mailbox, map, marker, matches, megaphone, mermaid, microphone, microwave, monkey, mosquito, motorbike, mountain, mouse, moustache, mouth, mushroom, nail, necklace, nose, octopus, onion, oven, owl, paint can, paintbrush, palm tree, parachute, passport, peanut, pear, pencil, penguin, piano, pickup truck, pig, pineapple, pliers, police car, pool, popsicle, postcard, purse, rabbit, raccoon, radio, rain, rainbow, rake, remote control, rhinoceros, river, rollerskates, sailboat, sandwich, saxophone, scissors, see saw, shark, sheep, shoe, shorts, shovel, sink, skull, sleeping bag, smiley face, snail, snake, snowflake, soccer ball, speedboat, square, star, steak, stereo, stitches, stop sign, strawberry, streetlight, string bean, submarine, sun, swing set, syringe, t-shirt, table, teapot, teddy-bear, tennis racquet, tent, tiger, toe, tooth, toothpaste, tractor, traffic light, train, triangle, trombone, truck, trumpet, umbrella, underwear, van, vase, watermelon, wheel, windmill, wine bottle, wine glass, wristwatch, zigzag, blackberry, power outlet, peas, hot tub, toothbrush, skateboard, cloud, elbow, bat, pond, compass, elephant, hurricane, jail, school bus, skyscraper, tornado, picture frame, lollipop, spoon, saw, cup, roller coaster, pants, jacket, rifle, yoga, toilet, waterslide, axe, snowman, bracelet, basket, anvil, octagon, washing machine, tree, television, bowtie, sweater, backpack, zebra, suitcase, stairs, The Great Wall of China G.2 OMNIGLOT We derive our Omniglot tasks from the stroke dataset originally provided by Lake et al. (2015) rather than the image analogues. We translate the Omniglot stroke-by-stroke format into the same one used in Quickdraw. Then we apply the Ramer-Douglas-Peucker (Douglas & Peucker, 1973) algorithm with an epsilon value of 2 and normalize variance to 1 to produce y. We also rasterize our images in the same manner as above for our input x. G.3 SKETCHY Sketchy data is provided as an SVG image composed of line paths that are either straight lines or Bezier curves. To generate stroke data we sample sequences of points from the Bezier curves at a high resolution, which we then simplify with RDP, ε = 5.
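A minimal sketch of this conversion step is shown below for a cubic Bezier segment; the control points are illustrative, and the dense samples per stroke are concatenated and then simplified with RDP at ε = 5 as described above.

```python
import numpy as np

def sample_cubic_bezier(p0, p1, p2, p3, n=100):
    """Densely sample a cubic Bezier segment given its 4 control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# points = sample_cubic_bezier(np.array([0, 0]), np.array([10, 40]),
#                              np.array([60, 40]), np.array([70, 0]))
# simplified = rdp(points, epsilon=5.0)   # as in Appendix G.3
```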
We also eliminate continuous strokes with a short path length or a small displacement, to reduce our stroke length and remove small and noisy strokes. Path length and displacement are considered relative to the scale of the entire sketch. Once again we normalize stroke variance and rasterize for our input image in the same manner as above. The following classes were used for training after removing classes that overlap with mini-ImageNet: hot-air_balloon, violin, tiger, eyeglasses, mouse, jack-o-lantern, lobster, teddy_bear, teapot, helicopter, duck, wading_bird, rabbit, penguin, sheep, windmill, piano, jellyfish, table, fan, beetle, cabin, scorpion, scissors, banana, tank, umbrella, crocodilian, volcano, knife, cup, saxophone, pistol, swan, chicken, sword, seal, alarm_clock, rocket, bicycle, owl, squirrel, hermit_crab, horse, spoon, cow, hotdog, camel, turtle, pizza, spider, songbird, rifle, chair, starfish, tree, airplane, bread, bench, harp, seagull, blimp, apple, geyser, trumpet, frog, lizard, axe, sea_turtle, pretzel, snail, butterfly, bear, ray, wine_bottle, elephant, raccoon, rhinoceros, door, hat, deer, snake, ape, flower, car_(sedan), kangaroo, dolphin, hamburger, castle, pineapple, saw, zebra, candle, cannon, racket, church, fish, mushroom, strawberry, window, sailboat, hourglass, cat, shoe, hedgehog, couch, giraffe, hammer, motorcycle, shark H AUTOREGRESSIVE DRAWING MODEL COMPARISONS We summarize the key components of SketchEmbedNet in comparison to other autoregressive drawing models in Table 10. I FEW-SHOT CLASSIFICATION ON OMNIGLOT – FULL RESULTS The full results table for few-shot classification on the Omniglot (Lake et al., 2015) dataset, including the ResNet12 (Oreshkin et al., 2018) model, is presented in this appendix. J FEW-SHOT CLASSIFICATION ON MINI-IMAGENET – FULL RESULTS The full results table for few-shot classification on the mini-ImageNet dataset, including the ResNet12 (Oreshkin et al., 2018) model and Conv4 models, is presented in this appendix. K ADDITIONAL CONCEPTUAL COMPOSITIONALITY L EMBEDDING PROPERTIES OF OTHER BASELINE MODELS Here we substantiate the uniqueness of the properties observed in SketchEmbeddings by applying the same experiments to a β-VAE (Higgins et al., 2017) as well as a vanilla autoencoder trained on the same dataset. We also include results from a SketchEmbedNet trained with a KL objective. L.1 β-VAE The β-VAE (Higgins et al., 2017) exhibits similar unsupervised clustering to the Conv-VAE and is generally incapable of distinguishing input images that have different shape compositions but the same overall silhouette (first two examples from the left). However, it is better at distinguishing non-synthetic examples that contain multiple squares or circles (3rd figure). It utterly fails the latent variable regression task and does not exhibit any significant conceptual composition in latent space. L.2 AUTOENCODER AND SKETCHEMBEDNET-KL We show that the performance of SketchEmbedding embeddings in our experiments in Section 6, which focus on organization in latent space, is not dependent on the KL term. We present both a vanilla autoencoder without the KL objective and a SketchEmbedNet trained with a KL objective. We observe a drop in overall generation quality in the conceptual composition decoding, as is expected with an additional constraint, but maintained performance on the other tasks. Meanwhile, the autoencoder does not demonstrate any marked improvements over the Conv-VAE in the main paper or any other baseline comparison.
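For reference, the KL objective referred to here is the standard VAE regularizer toward a unit Gaussian prior; a minimal sketch is given below (the weighting of this term in SketchEmbedNet-KL is not specified in the text, so none is shown).

```python
import torch

def kl_to_standard_normal(mu, logvar):
    """D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
```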
M ADDITIONAL COMPOSITIONALITY MODES We provide additional clustering methods, t-SNE (Maaten & Hinton, 2008) and PCA, as well as 2 new experiments that explore the compositionality of our latent SketchEmbedding. Additional clustering methods We include additional t-SNE and PCA results of the experiments in the main paper. These are presented in Figures 13, 14, 15, 16, 17. t-SNE and UMAP are stochastic and do not always produce the same visualization, while PCA is deterministic and prioritizes the most important dimensions. Additional Experiments Here we provide different investigations into the compositionality of our learned embedding space that were not present in our main paper. These results are presented in Figures 18 and 19. In Figure 18 we place a square in the center of the example and place a circle above, below or to the sides of it. Once again we find that our SketchEmbedding embedding clusters better than the VAE approach. New examples are generated where each class has a different number of circles. Both the VAE approach and our SketchEmbedding cluster well, and neither appears to learn the count manifold. N HYPERNETWORK ACTIVATIONS To further explore how our network understands drawings, we examine the relationships between the activations of the hypernetwork of our HyperLSTM (Ha et al., 2017). The hypernetwork generates the weights of the main LSTM at each decoding timestep. These activations are 512-dimensional vectors. We collect the activations from many examples, cluster them in 512-dimensional space and visualize the strokes belonging to each cluster for each example. A full decoding is also rendered where each cluster within an example is assigned a color. Single class: snowman First we explore this clustering using only the snowman class from Quickdraw (Jongejan et al., 2016). We expect substantial reuse of a "circle" both within and across many examples. Clustering of the strokes is done with DBSCAN (Ester et al., 1996) with parameter ε = 3.9. Results are in Figure 20. Each row is a separate input; the far left column is the color-coded, composed image, the second is the noise cluster, and every subsequent column is a unique cluster. While cluster re-use is limited, cluster 0 often contains a large, fully enclosed circle. Many other clusters may contain circles or partial strokes with some reuse. Larger, fully composed and coloured sketches are presented in Figure 21. Many classes: round objects We repeat the above experiment with a mixture of classes that can generally be expected to contain circles. These classes were circles, snowmen, clocks and cups. The former two classes are frequently composed only of circles while the latter are expected to consistently contain other distinct shapes. Results are presented in Figure 22 and select examples in Figure 23. We still observe that the model continues to isolate circles in the first column, and note that it continues to do so for the cup and clock classes, which are not exclusively circular. Many random classes: Finally, we repeat the above clustering with the 45 randomly selected holdout classes from the Quickdraw training process of SketchEmbedding. Results are once again presented in Figure 24 and select examples in Figure 25.
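The activation clustering in this appendix reduces to a standard DBSCAN pass over the collected vectors. A minimal sketch with scikit-learn is given below, where `activations` is an [N, 512] array gathered over examples and decoding timesteps; the paper specifies only ε, so `min_samples` here is scikit-learn's default, an assumption.

```python
from sklearn.cluster import DBSCAN

def cluster_activations(activations, eps=3.9, min_samples=5):
    """Cluster hypernetwork activations; label -1 marks the noise cluster."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(activations)
```

Strokes whose timesteps share a label can then be rendered per cluster, as in Figure 20 (one column per cluster, colour-coded composition).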
1. What is the focus of the paper regarding sketch recognition and embedding?
2. How does the proposed method compare to other approaches in terms of its ability to capture salient and compositional features of sketches?
3. Can the authors provide additional information about the results presented in Figure 5, particularly the successful conceptual composition examples?
4. Are there any limitations to the approach when applied to natural images?
5. What potential future directions could be explored to enhance the performance of the proposed method?
Review
The authors investigate the possibility of learning a generalized embedding function that captures salient and compositional features of sketches by directly imitating human sketches. The manuscript is written clearly and concisely. Methods have been presented with enough detail and seem accurate. In particular, the results from the Quickdraw and Omniglot datasets showing generated sketches are rather impressive, and the ones for the natural images seem promising. Overall, I very much enjoyed reading the paper and suggest it for publication without any major changes. In my view the results presented in Figure 5, and especially 5C, are the most impressive and interesting ones. These results deserve more space in the manuscript. I was curious to know whether there were also many unsuccessful conceptual composition examples. Are the examples shown in Figure 5C the best ones, or are they representative of performance in general? Does this approach also work with natural images to any extent? Could the authors elaborate on why or why not this may be the case?
The linear classification head is trained on the labeled support set and evaluated on the query set. 5.1 FEW-SHOT CLASSIFICATION ON OMNIGLOT The Omniglot (Lake et al., 2015) dataset contains 50 alphabets, 1623 unique character types, each with 20 examples and is presented as both a greyscale image and a stroke drawing. We use the same train-test split as Vinyals et al. (2016) along with randomly sampled episodes. Experiments using the more challenging Lake split where episodes are sampled within alphabet, as proposed by Lake et al. (2015), are in Appendix E and random seed experiments in Appendix F. To ensure a fair comparison with other few-shot classification models, we use the same convolutional encoder (Conv4) as Vinyals et al. (2016). Results from training only on Omniglot (Lake et al., 2015) are also presented to demonstrate effectiveness without the larger Quickdraw(Jongejan et al., 2016) dataset. No significant improvements were observed using the deeper ResNet12(Oreshkin et al., 2018) architecture; additional results are in Appendix I. All of our methods out-perform the previous state-of-the-art on the unsupervised Omniglot benchmark (Table 1). The Quickdraw trained model surpasses supervised MAML (Finn et al., 2017), and is on par with a supervised ProtoNet (Snell et al., 2017) model , especially in the 5-shot settings. Both baselines, a Conv-VAE and a random CNN, perform well compared to other unsupervised methods. 5.2 FEW-SHOT CLASSIFICATION ON MINI-IMAGENET We extend our investigation and assess SketchEmbeddings for the classification of natural images in the mini-ImageNet benchmark (Vinyals et al., 2016). The same CNN encoder model from the natural image sketching task is used to match the visual domain of the examples we hope to classify. The mini-ImageNet (Vinyals et al., 2016) dataset consists of 100 classes each with 600 examples. The setup proposed by Ravi & Larochelle (2017) is used, where the classes are split 64-16-20 for training, validation and test. As noted earlier, any examples in the Sketchy dataset that are also present in the mini-ImageNet test were removed by filtering the synset (and children synset) IDs ensuring train and test classes are disjoint. Classification results on mini-ImageNet are shown in Table 2. Natural image classification is a far more challenging problem. Learning to reconstruct pixels of an image actually worsens our results; the trained Conv-VAE is outperformed by the VAE with random weights. However, sketch reconstruction is still a valuable task; our models are competitive while our best model out-performs previous state-of-the-art unsupervised methods on few-shot settings. A full table is in Appendix J, seeding results are in Appendix F. 5.3 SKETCHING TO LEARN CLASS-IDENTIFIABLE INFORMATION Existing sketch works have focused on generating better drawings or unifying sketches with other image domains. We present a new paradigm: using sketching as an auxiliary task to learn visual content. Only by training a drawing model that can sketch general image inputs are we able to transfer the learned understanding to new data distributions. By considering the stroke distribution of the Quickdraw dataset, we are able to interpret image inputs from the separate Omniglot dataset and tackle the few-shot classification task with surprising accuracy. While the natural image sketching task is challenging and does not always produce high-fidelity results, it still learns useful visual information. 
By training on the Sketchy dataset, we learn how to draw other data distributions for which no sequential stroke data exists. Then, by knowing how to sketch this mini-ImageNet data we are able to produce distinguishable embeddings that enable competitive few-shot classification performance. Varying weighting of pixel-loss For both settings we sweep the pixel loss coefficient αmax to ablate its impact on model performance on the Omniglot task (Table 3). There is a substantial improvement in few-shot classification when αmax is non-zero. αmax= 0.50 achieves the best results, and the trend goes downward when αmax approaches to 1.0, i.e. the weighting for Lstroke goes to 0.0. This is reasonable as the training of SketchEmbedNet is more stable under the guidance of ground truth strokes. 6 PROPERTIES OF SKETCHEMBEDDINGS We hypothesize that reproducing a sketch drawing rather than a pixel-based approach requires the preservation of more structural information due to sequential RNN generation. By learning in this manner, SketchEmbeddings are aware of spatial properties and the composition of elements in image space. We examine this compositionality through several comparisons of SketchEmbeddings with those generated by a Conv-VAE. Component arrangements We construct examples that contain the same set of objects but in different arrangements to test sensitivity to component arrangement and composition in image space. We then embed these examples with both generative models and project into 2D space using UMAP (McInnes et al., 2018) to visualize their organization. In the first 2 panels of Figure 5-A, we see that the SketchEmbeddings are easily separated in unsupervised clustering. The rightmost panel of Figure 5-A exhibits non-synthetic classes with duplicated shapes; snowmen with circles and televisions with squares. With these results, we demonstrate the greater component level awareness of SketchEmbeddings. The 4 rearranged shapes and the nested circle and squares have similar silhouettes that are difficult to differentiate to a conventional pixel loss. To SketchEmbeddings, the canvas offset and different drawing sequence of each shape make them substantially different in embedding space. Spatial relationships Drawing also builds awareness of relevant underlying variables, such as spatial relationships between components of the image. We examine the degree to which the underlying variables of angle, distance or size are captured by the embedding, by constructing images that vary along each dimension respectively. The embeddings are again grouped by a 2D projection in Figure 5-B using the UMAP (McInnes et al., 2018) algorithm. When clustered, the 2D projection of SketchEmbeddings arranges the examples along an axis corresponding to the latent variable compared to the Conv-VAE embeddings which is visibly non-linear and arranges in clusters. This clear axis-alignment suggests a greater level of latent variable disentanglement in the SketchEmbeddings. Conceptual composition Finally, we explore concept space composition using SketchEmbeddings (Figure 5-C) by embedding different Quickdraw examples then performing arithmetic with the latent vectors. By subtracting a circle embedding and adding a square embedding from a snowman composed of stacked circles, we produce stacked boxes. This property of vector arithmetic is reminiscent of language representations, as evidenced in analogies like King - Man + Woman = Queen (Ethayarajh et al., 2019). 
Our results indicate that this property is captured to a greater degree in SketchEmbedding than in the pixel-based VAE embeddings. Composing SketchEmbeddings produces decoded examples that appeal to our intuitive conceptual understanding while the VAE degrades to blurry, fragmented images. We provide more examples of the setup in Figure 5-C as well as additional classes in Appendix K. 7 ONE-SHOT GENERATION To evaluate the sketches generated by our model, we make qualitative comparisons to other one-shot generative models and quantitatively assess our model through visual classification via a ResNet101 (He et al., 2016). In this section, all models use the ResNet12 (Oreshkin et al., 2018) backbone. Qualitative comparisons We compare SketchEmbedNet one-shot generations of Omniglot characters with examples from other few-shot (Reed et al., 2017) and one-shot (Rezende et al., 2016) approaches (Figure 6). In the settings shown, none of the models have seen any examples from the character class, or the parent alphabet. Furthermore, the drawer has seen no written characters during training and is trained only on the Quickdraw dataset. Visually, our generated images better resemble the support examples and the variations by stretching and shifting strokes better preserves the semantics of each character. Generations in pixel space may disrupt strokes and alter the character to human perception. This is especially true for written characters as they are frequently defined by a specific set of strokes instead of blurry clusters of pixels. Quantitative evaluation of generation quality Evaluating generative models is often challenging. Per-pixel metrics like in Reed et al. (2017); Rezende et al. (2016) may penalize generative variance that still preserves meaning. We propose an Inception Score (Salimans et al., 2016) inspired metric to quantify class-recognizability and generalization of generated examples. We train two separate ResNet classifiers (He et al., 2016), each on a different set of 45 Quickdraw classes. One set was part of the training set of SketchEmbedNet (referred to as “seen”) and the other set was held out during training (referred to as “unseen”). We then have SketchEmbedNet generate one-shot sketches from each set and have the corresponding classifier predict a class. The accuracy of the classifier on generated examples is compared with its training accuracy in Table 4. For a ResNet classifier, SketchEmbedNet generations are more recognizable for both classes seen and unseen classes. 8 CONCLUSION Learning to draw is not only an artistic pursuit but drives a distillation of real-world visual concepts. We present a generalized drawing model capable of producing accurate sketches and visual summaries of open-domain natural images. While sketch data may be challenging to source, we show that training to draw either sketch or natural images can generalize for downstream tasks, not only within each domain but also well beyond the training data. More generally research in this direction may lead to more lifelike image understanding inspired by how humans communicate visual concepts. A RASTERIZATION The key enabler of our novel pixel loss for sketch drawings is our differentiable rasterization function fraster. Sequence based loss functions such as Lstroke are sensitive to the order of points while in reality, drawings are sequence invariant. Visually, a square is a square whether it is drawn clockwise or counterclockwise. 
The purpose of a sketch representation is to lower the complexity of the data space and decode in a more visually intuitive manner. While it is a necessary departure point, the sequential generation of drawings is not key to our visual representation and we would like SketchEmbedNet to be agnostic to any specific sequence needed to draw the sketch that is representative of the image input. To facilitate this, we develop our rasterization function fraster which renders an input sequence of strokes as a pixel image. However, during training, the RNN outputs a mixture of Gaussians at each timestep. To convert this to a stroke sequence, we sample from these Gaussians; this can be repeated to reduce the variance of the pixel loss. We then scale our predicted and ground truth sequences by the properties of the latter before rasterization. Stroke sampling At the end of sequence generation we haveNs×(6M+3) parameters, 6 Gaussian mixture parameters, 3 pen states, Ns times, one for each stroke. To obtain the actual drawing we sample from the mixture of Gaussians:[ ∆xt ∆yt ] = [ µx,t µy,t ] + [ σx,t 0 ρxy,tσy,t σy,t √ 1− ρ2xy,t ] , ∼ N (0,12). (3) After sampling we compute the cumulative sum of every stroke over the timestep so that we obtain the absolute displacement from the initial position:[ xt yt ] = T∑ τ=0 [ ∆xτ ∆yτ ] . (4) yt,abs = (xt, yt, s1, s2, s3). (5) Scaling Each sketch generated by our model begins at (0,0) and the variance of all strokes in the training set is normalized to 1. On a fixed canvas the image is both very small and localized to the top left corner. We remedy this by computing a scale λ and shift xshift, yshift using labels y and apply them to both the prediction y′ as well as the ground truth y. These parameters are computed as: λ = min { W xmax − xmin , H ymax − ymin } , (6) xshift = xmax + xmin 2 λ, yshift = ymax + ymin 2 λ. (7) xmax, xmin, ymax, ymin are the minimum and maximum values of xt, yt from the supervised stroke labels and not the generated strokes. W and H are the width and height in pixels of our output canvas. Calculate pixel intensity Finally we are able to calculate the pixel pij intensity of every pixel in our H ×W canvas. pij = σ [ 2− 5× min t=1...Ns ( dist ( (i, j), (xt−1, yt−1), (xt, yt) ) + (1− bs1,t−1e)106 )] , (8) where the distance function is the distance between point (i, j) from the line segment defined by the absolute points (xt−1, yt−1) and (xt, yt). We also blow up any distances where s1,t−1 < 0.5 so as to not render any strokes where the pen is not touching the paper. B IMPLEMENTATION DETAILS We train our model for 300k iterations with a batch size of 256 for the Quickdraw dataset and 64 for Sketchy due to memory constraints. The initial learning rate is 1e-3 which decays by 0.85 every 15k steps. We use the Adam (Kingma & Ba, 2015) optimizer and clip gradient values at 1.0. σ = 2.0 is used for the Gaussian blur in Lpixel. For the curriculum learning schedule, the value of α is set to 0 initially and increases by 0.05 every 10k training steps with an empirically obtained cap at αmax = 0.50 for Quickdraw and αmax = 0.75 for Sketchy. The ResNet12 (Oreshkin et al., 2018) encoder uses 4 ResNet blocks with 64, 128, 256, 512 filters respectively and ReLU activations. The Conv4 backbone has 4 blocks of convolution, batch norm (Ioffe & Szegedy, 2015), ReLU and max pool, identical to Vinyals et al. (2016). We select the latent space to be 256 dimensions, RNN output size to be 1024, and the hypernetwork embedding size to be 64. 
B IMPLEMENTATION DETAILS

We train our model for 300k iterations with a batch size of 256 for the Quickdraw dataset and 64 for Sketchy, due to memory constraints. The initial learning rate is 1e-3 and decays by 0.85 every 15k steps. We use the Adam (Kingma & Ba, 2015) optimizer and clip gradient values at 1.0. σ = 2.0 is used for the Gaussian blur in Lpixel. For the curriculum learning schedule, the value of α is set to 0 initially and increases by 0.05 every 10k training steps, with an empirically obtained cap at αmax = 0.50 for Quickdraw and αmax = 0.75 for Sketchy. The ResNet12 (Oreshkin et al., 2018) encoder uses 4 ResNet blocks with 64, 128, 256 and 512 filters respectively and ReLU activations. The Conv4 backbone has 4 blocks of convolution, batch norm (Ioffe & Szegedy, 2015), ReLU and max pool, identical to Vinyals et al. (2016). We select the latent space to be 256 dimensions, the RNN output size to be 1024, and the hypernetwork embedding size to be 64. We use a mixture of M = 30 bivariate Gaussians for the mixture density output of the stroke offset distribution.

C LATENT SPACE INTERPOLATION

As in many encoder-decoder models, we evaluate the interpolation of our latent space. We select 4 embeddings at random and use bilinear interpolation to produce new embeddings. Results are in Figures 7a and 7b.

Figure 7: Latent space interpolations of randomly selected examples. (a) Interpolation of classes: power outlet, snowman, jacket, elbow. (b) Interpolation of classes: cloud, power outlet, basket, compass.

We observe that compositionality is also present in these interpolations. In the top row of Figure 7a, the model first plots a third small circle when interpolating from the 2-circle power outlet to the 3-circle snowman. This small circle is treated as a single component that grows as it transitions between classes until its final size in the far-right snowman drawing. Some other RNN-based sketching models (Ha & Eck, 2018; Chen et al., 2017) experience other classes materializing in interpolations between two unrelated classes. Our model does not exhibit this behaviour, as our embedding space is learned from more classes and thus does not contain local groupings of classes.

D EFFECT OF α ON FEW-SHOT CLASSIFICATION

We performed additional experiments exploring the impact of our curriculum training schedule for α. The encoding component of our drawing model was evaluated on the few-shot classification task for different values of αmax every 25k iterations during training. A graph is shown in Figure 8 and the full table of all values of αmax is in Table 5.

E INTRA-ALPHABET LAKE SPLIT

The creators of the Omniglot dataset and one-shot classification benchmark originally proposed an intra-alphabet classification task. This task is more challenging than the common Vinyals split, as characters from the same alphabet may exhibit similar stylistic sub-components that make visual differentiation more difficult. This benchmark has been less explored by researchers; however, we still present the performance of our SketchEmbedding model against evaluations of other few-shot classification models on the benchmark. Results are shown in Table 6. Unsurprisingly, our model is outperformed by supervised models and falls behind by a more substantial margin than on the Vinyals split. However, our SketchEmbedding approach still achieves respectable classification accuracy overall and greatly outperforms a Conv-VAE baseline.

F EFFECT OF RANDOM SEEDING ON FEW-SHOT CLASSIFICATION

The training objective for SketchEmbedNet is to reproduce sketch drawings of the input. Because this task is unrelated to few-shot classification, performance may vary with initialization. We quantify this variance by training our model with 15 unique random seeds and evaluating the performance of the latent space on the few-shot classification tasks. We disregard the per-episode variance of our model in each test stage and only report the mean accuracy. We then compute a new confidence interval over random seeds. Results are presented in Tables 7, 8 and 9.

G DATA PROCESSING

We apply the same data processing methods as in Ha & Eck (2018), with no additional changes, to produce our stroke labels y. When rasterizing for our input x, we scale and center the strokes, then pad the image with 10% of the resolution in that dimension, rounded to the nearest integer.
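A small sketch of the padding step just described; the symmetric placement of the padding and the zero fill are our reading of the text, not confirmed details:

```python
import numpy as np

def pad_input_image(img, frac=0.10):
    """Pad a rasterized 2D input x by 10% of the resolution in each dimension,
    rounded to the nearest integer (Appendix G). Symmetric zero padding is an
    assumption of this sketch."""
    h, w = img.shape[:2]
    ph, pw = int(round(frac * h)), int(round(frac * w))
    return np.pad(img, ((ph, ph), (pw, pw)), mode="constant")

# Example: a 28 x 28 Quickdraw raster gains round(2.8) = 3 pixels per side.
x = pad_input_image(np.zeros((28, 28)))
assert x.shape == (34, 34)
```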
G.1 QUICKDRAW

The following list of classes was used for training: The Eiffel Tower, The Mona Lisa, aircraft carrier, alarm clock, ambulance, angel, animal migration, ant, apple, arm, asparagus, banana, barn, baseball, baseball bat, bathtub, beach, bear, bed, bee, belt, bench, bicycle, binoculars, bird, blueberry, book, boomerang, bottlecap, bread, bridge, broccoli, broom, bucket, bulldozer, bus, bush, butterfly, cactus, cake, calculator, calendar, camel, camera, camouflage, campfire, candle, cannon, car, carrot, castle, cat, ceiling fan, cell phone, cello, chair, chandelier, church, circle, clarinet, clock, coffee cup, computer, cookie, couch, cow, crayon, crocodile, crown, cruise ship, diamond, dishwasher, diving board, dog, dolphin, donut, door, dragon, dresser, drill, drums, duck, dumbbell, ear, eye, eyeglasses, face, fan, feather, fence, finger, fire hydrant, fireplace, firetruck, fish, flamingo, flashlight, flip flops, flower, foot, fork, frog, frying pan, garden, garden hose, giraffe, goatee, grapes, grass, guitar, hamburger, hand, harp, hat, headphones, hedgehog, helicopter, helmet, hockey puck, hockey stick, horse, hospital, hot air balloon, hot dog, hourglass, house, house plant, ice cream, key, keyboard, knee, knife, ladder, lantern, leaf, leg, light bulb, lighter, lighthouse, lightning, line, lipstick, lobster, mailbox, map, marker, matches, megaphone, mermaid, microphone, microwave, monkey, mosquito, motorbike, mountain, mouse, moustache, mouth, mushroom, nail, necklace, nose, octopus, onion, oven, owl, paint can, paintbrush, palm tree, parachute, passport, peanut, pear, pencil, penguin, piano, pickup truck, pig, pineapple, pliers, police car, pool, popsicle, postcard, purse, rabbit, raccoon, radio, rain, rainbow, rake, remote control, rhinoceros, river, rollerskates, sailboat, sandwich, saxophone, scissors, see saw, shark, sheep, shoe, shorts, shovel, sink, skull, sleeping bag, smiley face, snail, snake, snowflake, soccer ball, speedboat, square, star, steak, stereo, stitches, stop sign, strawberry, streetlight, string bean, submarine, sun, swing set, syringe, t-shirt, table, teapot, teddy-bear, tennis racquet, tent, tiger, toe, tooth, toothpaste, tractor, traffic light, train, triangle, trombone, truck, trumpet, umbrella, underwear, van, vase, watermelon, wheel, windmill, wine bottle, wine glass, wristwatch, zigzag, blackberry, power outlet, peas, hot tub, toothbrush, skateboard, cloud, elbow, bat, pond, compass, elephant, hurricane, jail, school bus, skyscraper, tornado, picture frame, lollipop, spoon, saw, cup, roller coaster, pants, jacket, rifle, yoga, toilet, waterslide, axe, snowman, bracelet, basket, anvil, octagon, washing machine, tree, television, bowtie, sweater, backpack, zebra, suitcase, stairs, The Great Wall of China

G.2 OMNIGLOT

We derive our Omniglot tasks from the stroke dataset originally provided by Lake et al. (2015) rather than the image analogues. We translate the Omniglot stroke-by-stroke format to the same one used in Quickdraw. Then we apply the Ramer-Douglas-Peucker (Douglas & Peucker, 1973) algorithm with an epsilon value of 2 and normalize variance to 1 to produce y. We also rasterize our images in the same manner as above for our input x.
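Both the Omniglot pipeline above (ε = 2) and the Sketchy pipeline below (ε = 5) rely on Ramer-Douglas-Peucker simplification. A compact reference implementation, assuming each stroke is an (N, 2) float array:

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification (Douglas & Peucker, 1973).

    points: (N, 2) array for a single stroke. Keeps the endpoints and recursively
    keeps any point farther than epsilon from the chord between them.
    """
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = np.hypot(chord[0], chord[1])
    if chord_len == 0.0:  # closed stroke: fall back to radial distance
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:                 # perpendicular distance of each point to the chord
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / chord_len
    i = int(np.argmax(dists))
    if dists[i] > epsilon:
        left = rdp(points[: i + 1], epsilon)
        right = rdp(points[i:], epsilon)
        return np.vstack([left[:-1], right])  # drop the duplicated split point
    return np.vstack([start, end])

# Usage: simplified = rdp(stroke, epsilon=2.0)   # the Omniglot setting
```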
G.3 SKETCHY

Sketchy data is provided as an SVG image composed of line paths that are either straight lines or Bezier curves. To generate stroke data we sample sequences of points from the Bezier curves at a high resolution, which we then simplify with RDP, ε = 5. We also eliminate continuous strokes with a short path length or a small displacement, to reduce our stroke length and remove small and noisy strokes. Path length and displacement are considered relative to the scale of the entire sketch. Once again we normalize stroke variance and rasterize for our input image in the same manner as above. The following classes were used for training after removing classes that overlap with mini-ImageNet: hot-air_balloon, violin, tiger, eyeglasses, mouse, jack-o-lantern, lobster, teddy_bear, teapot, helicopter, duck, wading_bird, rabbit, penguin, sheep, windmill, piano, jellyfish, table, fan, beetle, cabin, scorpion, scissors, banana, tank, umbrella, crocodilian, volcano, knife, cup, saxophone, pistol, swan, chicken, sword, seal, alarm_clock, rocket, bicycle, owl, squirrel, hermit_crab, horse, spoon, cow, hotdog, camel, turtle, pizza, spider, songbird, rifle, chair, starfish, tree, airplane, bread, bench, harp, seagull, blimp, apple, geyser, trumpet, frog, lizard, axe, sea_turtle, pretzel, snail, butterfly, bear, ray, wine_bottle, elephant, raccoon, rhinoceros, door, hat, deer, snake, ape, flower, car_(sedan), kangaroo, dolphin, hamburger, castle, pineapple, saw, zebra, candle, cannon, racket, church, fish, mushroom, strawberry, window, sailboat, hourglass, cat, shoe, hedgehog, couch, giraffe, hammer, motorcycle, shark

H AUTOREGRESSIVE DRAWING MODEL COMPARISONS

We summarize the key components of SketchEmbedNet in comparison to other autoregressive drawing models in Table 10.

I FEW-SHOT CLASSIFICATION ON OMNIGLOT – FULL RESULTS

We provide the full results table for few-shot classification on the Omniglot (Lake et al., 2015) dataset, including the ResNet12 (Oreshkin et al., 2018) model.

J FEW-SHOT CLASSIFICATION ON MINI-IMAGENET – FULL RESULTS

We provide the full results table for few-shot classification on the mini-ImageNet dataset, including the ResNet12 (Oreshkin et al., 2018) and Conv4 models.

K ADDITIONAL CONCEPTUAL COMPOSITIONALITY

L EMBEDDING PROPERTIES OF OTHER BASELINE MODELS

Here we substantiate the uniqueness of the properties observed in SketchEmbeddings by applying the same experiments to a β-VAE (Higgins et al., 2017) as well as a vanilla autoencoder trained on the same dataset. We also include results for a SketchEmbedNet trained with a KL objective.

L.1 β-VAE

The β-VAE (Higgins et al., 2017) exhibits unsupervised clustering similar to the Conv-VAE and is generally incapable of distinguishing input images that have different shape compositions but the same overall silhouette (first two examples from the left). In contrast, it is better at distinguishing non-synthetic examples that contain multiple squares or circles (third figure). However, it utterly fails the latent variable regression task and does not exhibit any significant conceptual composition in latent space.

L.2 AUTOENCODER AND SKETCHEMBEDNET-KL

We show that the performance of SketchEmbeddings in our Section 6 experiments, which focus on organization in latent space, is not correlated with the KL term. We present both a vanilla autoencoder without the KL objective and a SketchEmbedNet trained with a KL objective. We observe a drop in overall generation quality in the conceptual composition decoding, as is expected with an additional constraint, but maintained performance on the other tasks. Meanwhile, the autoencoder does not demonstrate any marked improvement over the Conv-VAE in the main paper or any other baseline comparison.
M ADDITIONAL COMPOSITIONALITY MODES

We provide additional clustering methods, t-SNE (Maaten & Hinton, 2008) and PCA, as well as 2 new experiments that explore the compositionality of our latent SketchEmbeddings.

Additional clustering methods We include additional t-SNE and PCA results for the experiments in the main paper. These are presented in Figures 13, 14, 15, 16 and 17. t-SNE and UMAP are stochastic and do not always produce the same visualization, while PCA is deterministic and prioritizes the most important dimensions.

Additional experiments Here we provide further investigations into the compositionality of our learned embedding space that were not present in the main paper. These results are presented in Figures 18 and 19. In Figure 18 we place a square in the center of the example and place a circle above, below or to the sides of it. Once again we find that our SketchEmbeddings cluster better than the VAE approach. In Figure 19, new examples are generated where each class has a different number of circles. Both the VAE approach and our SketchEmbedding cluster well, and neither appears to learn the count manifold.

N HYPERNETWORK ACTIVATIONS

To further explore how our network understands drawings, we examine the relationships between the activations of the hypernetwork of our HyperLSTM (Ha et al., 2017). The hypernetwork generates the weights of the main LSTM decoder at each decoding timestep. These activations are 512-dimensional vectors. We collect the activations from many examples, cluster them in 512-dimensional space and visualize the strokes belonging to each cluster for each example. A full decoding is also rendered, where each cluster within an example is assigned a color.

Single class: snowman First we explore this clustering using only the snowman class from Quickdraw (Jongejan et al., 2016). We expect substantial reuse of a "circle" both within and across many examples. Clustering of the strokes is done with DBSCAN (Ester et al., 1996) and parameter ε = 3.9. Results are in Figure 20. Each row is a separate input; the far left column is the color-coded, composed image, the second is the noise cluster and every subsequent column is a unique cluster. While cluster re-use is limited, cluster 0 often contains a large, fully enclosed circle. Many other clusters may contain circles or partial strokes with some reuse. Larger, fully composed and coloured sketches are presented in Figure 21.

Many classes: round objects We repeat the above experiment with a mixture of classes that can generally be expected to contain circles: circles, snowmen, clocks and cups. The first two classes are frequently composed only of circles, while the latter two consistently contain other distinct shapes. Results are presented in Figure 22 and select examples in Figure 23. We observe that the model continues to isolate circles in the first column, and note that it does so even for the cup and clock classes, which are not exclusively circular.

Many random classes Finally, we repeat the above clustering with the 45 randomly selected holdout classes from the Quickdraw training process of SketchEmbedding. Results are once again presented in Figure 24 and select examples in Figure 25.
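The clustering step in Appendix N amounts to a few lines of scikit-learn. A minimal sketch under assumptions: the activation matrix below is a hypothetical stand-in for the collected 512-dimensional hypernetwork activations, one row per decoding timestep.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical stand-in for hypernetwork activations gathered over many
# examples: one 512-d vector per decoding timestep.
activations = np.random.default_rng(0).standard_normal((5000, 512))

labels = DBSCAN(eps=3.9).fit_predict(activations)  # eps = 3.9 as in Appendix N

# Label -1 is DBSCAN's noise cluster (the second column in Figure 20); every
# other label indexes a group of strokes that can be visualized per example.
for k, count in zip(*np.unique(labels, return_counts=True)):
    print(f"cluster {k}: {count} strokes")
```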
1. What is the focus of the paper regarding image representation?
2. What are the strengths of the proposed approach, particularly in terms of supervised training?
3. What are the weaknesses of the paper, especially regarding its claims and experiments?
4. How does the reviewer assess the clarity and relevance of the paper's content?
5. Are there any suggestions or recommendations for future research related to the paper's topic?
Review
A brief summary: This paper shows that a model trained to restore sequential data from images in a supervised manner tends to capture more informative latent representations of the data.

Strengths:
- Demonstrates that training a decoder model to reconstruct sequential sketches leads the encoder to better represent the input image.
- Achieves the SOTA result on the Omniglot recognition task, compared against existing unsupervised methods.

Weaknesses:
- Confuses readers with an ambiguous claim. The embedding function is said to produce "salient and compositional features" (p1), but no evaluations of compositionality were included in the main paper.
- Does not include any ablation studies to show the effectiveness of each component of the proposed model.
- Does not include thorough explanations or analysis for each set of experiments.

Initial recommendation: Borderline reject.

Reasons: While this paper provides a lot of experimental results, which I very much appreciate, I still found most of them quite irrelevant to supporting the main claim by the authors, which concerns compositional embeddings. The main contribution of this paper, I believe, is mainly the idea of utilizing the sequential sketch data during supervised training. This needs to be clearly stated, particularly inside the tables, as most baselines only use sketch images, not their sequential data. That being said, this trick is shown to work well, and if the significant improvements on Omniglot are further verified, this trick will be found useful by the community of researchers who work with sequential data, such as sketches or handwriting. However, I found most of the implementation details quite unclear, and the experimental results were often misleading. For instance, in Table 4, if we look closely, the authors' results use a ResNet12 trained with Sketchy data, which differs considerably from the other results in the list.

Feedback:
- The authors briefly mention on p.4 that a KL term negatively affected the performance, but no detailed explanations or experiments were given.
- What happens when you provide the class labels and train the model with a classification loss as well? Will it improve the test results? If so, by how much?
- It may have been much more interesting if this paper had explored unsupervised disentanglement in the latent space, to support the claims about what the authors call structured embeddings. Will other latent disentanglement methods (e.g. beta-, factor-, VQ-VAE, etc.) lead to better representations? A Conv-VAE alone may not be the best baseline.
Our results indicate that this property is captured to a greater degree in SketchEmbedding than in the pixel-based VAE embeddings. Composing SketchEmbeddings produces decoded examples that appeal to our intuitive conceptual understanding while the VAE degrades to blurry, fragmented images. We provide more examples of the setup in Figure 5-C as well as additional classes in Appendix K. 7 ONE-SHOT GENERATION To evaluate the sketches generated by our model, we make qualitative comparisons to other one-shot generative models and quantitatively assess our model through visual classification via a ResNet101 (He et al., 2016). In this section, all models use the ResNet12 (Oreshkin et al., 2018) backbone. Qualitative comparisons We compare SketchEmbedNet one-shot generations of Omniglot characters with examples from other few-shot (Reed et al., 2017) and one-shot (Rezende et al., 2016) approaches (Figure 6). In the settings shown, none of the models have seen any examples from the character class, or the parent alphabet. Furthermore, the drawer has seen no written characters during training and is trained only on the Quickdraw dataset. Visually, our generated images better resemble the support examples and the variations by stretching and shifting strokes better preserves the semantics of each character. Generations in pixel space may disrupt strokes and alter the character to human perception. This is especially true for written characters as they are frequently defined by a specific set of strokes instead of blurry clusters of pixels. Quantitative evaluation of generation quality Evaluating generative models is often challenging. Per-pixel metrics like in Reed et al. (2017); Rezende et al. (2016) may penalize generative variance that still preserves meaning. We propose an Inception Score (Salimans et al., 2016) inspired metric to quantify class-recognizability and generalization of generated examples. We train two separate ResNet classifiers (He et al., 2016), each on a different set of 45 Quickdraw classes. One set was part of the training set of SketchEmbedNet (referred to as “seen”) and the other set was held out during training (referred to as “unseen”). We then have SketchEmbedNet generate one-shot sketches from each set and have the corresponding classifier predict a class. The accuracy of the classifier on generated examples is compared with its training accuracy in Table 4. For a ResNet classifier, SketchEmbedNet generations are more recognizable for both classes seen and unseen classes. 8 CONCLUSION Learning to draw is not only an artistic pursuit but drives a distillation of real-world visual concepts. We present a generalized drawing model capable of producing accurate sketches and visual summaries of open-domain natural images. While sketch data may be challenging to source, we show that training to draw either sketch or natural images can generalize for downstream tasks, not only within each domain but also well beyond the training data. More generally research in this direction may lead to more lifelike image understanding inspired by how humans communicate visual concepts. A RASTERIZATION The key enabler of our novel pixel loss for sketch drawings is our differentiable rasterization function fraster. Sequence based loss functions such as Lstroke are sensitive to the order of points while in reality, drawings are sequence invariant. Visually, a square is a square whether it is drawn clockwise or counterclockwise. 
The purpose of a sketch representation is to lower the complexity of the data space and decode in a more visually intuitive manner. While it is a necessary departure point, the sequential generation of drawings is not key to our visual representation and we would like SketchEmbedNet to be agnostic to any specific sequence needed to draw the sketch that is representative of the image input. To facilitate this, we develop our rasterization function fraster which renders an input sequence of strokes as a pixel image. However, during training, the RNN outputs a mixture of Gaussians at each timestep. To convert this to a stroke sequence, we sample from these Gaussians; this can be repeated to reduce the variance of the pixel loss. We then scale our predicted and ground truth sequences by the properties of the latter before rasterization. Stroke sampling At the end of sequence generation we haveNs×(6M+3) parameters, 6 Gaussian mixture parameters, 3 pen states, Ns times, one for each stroke. To obtain the actual drawing we sample from the mixture of Gaussians:[ ∆xt ∆yt ] = [ µx,t µy,t ] + [ σx,t 0 ρxy,tσy,t σy,t √ 1− ρ2xy,t ] , ∼ N (0,12). (3) After sampling we compute the cumulative sum of every stroke over the timestep so that we obtain the absolute displacement from the initial position:[ xt yt ] = T∑ τ=0 [ ∆xτ ∆yτ ] . (4) yt,abs = (xt, yt, s1, s2, s3). (5) Scaling Each sketch generated by our model begins at (0,0) and the variance of all strokes in the training set is normalized to 1. On a fixed canvas the image is both very small and localized to the top left corner. We remedy this by computing a scale λ and shift xshift, yshift using labels y and apply them to both the prediction y′ as well as the ground truth y. These parameters are computed as: λ = min { W xmax − xmin , H ymax − ymin } , (6) xshift = xmax + xmin 2 λ, yshift = ymax + ymin 2 λ. (7) xmax, xmin, ymax, ymin are the minimum and maximum values of xt, yt from the supervised stroke labels and not the generated strokes. W and H are the width and height in pixels of our output canvas. Calculate pixel intensity Finally we are able to calculate the pixel pij intensity of every pixel in our H ×W canvas. pij = σ [ 2− 5× min t=1...Ns ( dist ( (i, j), (xt−1, yt−1), (xt, yt) ) + (1− bs1,t−1e)106 )] , (8) where the distance function is the distance between point (i, j) from the line segment defined by the absolute points (xt−1, yt−1) and (xt, yt). We also blow up any distances where s1,t−1 < 0.5 so as to not render any strokes where the pen is not touching the paper. B IMPLEMENTATION DETAILS We train our model for 300k iterations with a batch size of 256 for the Quickdraw dataset and 64 for Sketchy due to memory constraints. The initial learning rate is 1e-3 which decays by 0.85 every 15k steps. We use the Adam (Kingma & Ba, 2015) optimizer and clip gradient values at 1.0. σ = 2.0 is used for the Gaussian blur in Lpixel. For the curriculum learning schedule, the value of α is set to 0 initially and increases by 0.05 every 10k training steps with an empirically obtained cap at αmax = 0.50 for Quickdraw and αmax = 0.75 for Sketchy. The ResNet12 (Oreshkin et al., 2018) encoder uses 4 ResNet blocks with 64, 128, 256, 512 filters respectively and ReLU activations. The Conv4 backbone has 4 blocks of convolution, batch norm (Ioffe & Szegedy, 2015), ReLU and max pool, identical to Vinyals et al. (2016). We select the latent space to be 256 dimensions, RNN output size to be 1024, and the hypernetwork embedding size to be 64. 
We use a mixture of M = 30 bivariate Gaussians for the mixture density output of the stroke offset distribution. C LATENT SPACE INTERPOLATION Like in many encoding-decoding models we evaluate the interpolation of our latent space. We select 4 embeddings at random and use bi-linear interpolation to produce new embeddings. Results are in Figures 7a and 7b. (a) Interpolation of classes: power outlet, snowman, jacket, elbow (b) Interpolation of classes: cloud, power outlet, basket, compass Figure 7: Latent space interpolations of randomly selected examples We observe that compositionality is also present in these interpolations. In the top row of Figure 7a, the model first plots a third small circle when interpolating from the 2-circle power outlet and the 3-circle snowman. This small circle is treated as single component that grows as it transitions between classes until it’s final size in the far right snowman drawing. Some other RNN-based sketching models (Ha & Eck, 2018; Chen et al., 2017) experience other classes materializing in interpolations between two unrelated classes. Our model does not exhibit this same behaviour as our embedding space is learned from more classes and thus does not contain local groupings of classes. D EFFECT OF α ON FEW-SHOT CLASSIFICATION We performed additional experiments exploring the impact of our curriculum training schedule for α. The encoding component of our drawing model was evaluated on the few-shot classification task for different values of αmax every 25k iterations during training. A graph is shown in Figure 8 and the full table of all values of αmax is in Table 5. E INTRA-ALPHABET LAKE SPLIT The creators of the Omniglot dataset and one-shot classification benchmark originally proposed an intra-alphabet classification task. This task is more challenging than the common Vinyals split as characters from the same alphabet may exhibit similar stylistics of sub-components that makes visual differentiation more difficult. This benchmark has been less explored by researchers; however, we still present the performance of our SketchEmbedding model against evaluations of other few-shot classification models on the benchmark. Results are shown in Table 6. Unsurprisingly, our model is outperformed by supervised models and does fall behind by a more substantial margin than in the Vinyals split. However, our SketchEmbedding approach still achieves respectable classification accuracy overall and greatly outperforms a Conv-VAE baseline. F EFFECT OF RANDOM SEEDING ON FEW-SHOT CLASSIFICATION The training objective for SketchEmbedNet is to reproduce sketch drawings of the input. This task is unrelated to few-shot classification may perform variably given different initialization. We quantify this variance by training our model with 15 unique random seeds and evaluating the performance of the latent space on the few-shot classification tasks. We disregard the per (evaluation) episode variance of our model in each test stage and only present the mean accuracy. We then compute a new confidence interval over random seeds. Results are presented in Tables 7, 8, 9. G DATA PROCESSING We apply the same data processing methods as in Ha & Eck (2018) with no additional changes to produce our stroke labels y. When rasterizing for our input x, we scale, center the strokes then pad the image with 10% of the resolution in that dimension rounded to the nearest integer. 
The following list of classes were used for training: The Eiffel Tower, The Mona Lisa, aircraft carrier, alarm clock, ambulance, angel, animal migration, ant, apple, arm, asparagus, banana, barn, baseball, baseball bat, bathtub, beach, bear, bed, bee, belt, bench, bicycle, binoculars, bird, blueberry, book, boomerang, bottlecap, bread, bridge, broccoli, broom, bucket, bulldozer, bus, bush, butterfly, cactus, cake, calculator, calendar, camel, camera, camouflage, campfire, candle, cannon, car, carrot, castle, cat, ceiling fan, cell phone, cello, chair, chandelier, church, circle, clarinet, clock, coffee cup, computer, cookie, couch, cow, crayon, crocodile, crown, cruise ship, diamond, dishwasher, diving board, dog, dolphin, donut, door, dragon, dresser, drill, drums, duck, dumbbell, ear, eye, eyeglasses, face, fan, feather, fence, finger, fire hydrant, fireplace, firetruck, fish, flamingo, flashlight, flip flops, flower, foot, fork, frog, frying pan, garden, garden hose, giraffe, goatee, grapes, grass, guitar, hamburger, hand, harp, hat, headphones, hedgehog, helicopter, helmet, hockey puck, hockey stick, horse, hospital, hot air balloon, hot dog, hourglass, house, house plant, ice cream, key, keyboard, knee, knife, ladder, lantern, leaf, leg, light bulb, lighter, lighthouse, lightning, line, lipstick, lobster, mailbox, map, marker, matches, megaphone, mermaid, microphone, microwave, monkey, mosquito, motorbike, mountain, mouse, moustache, mouth, mushroom, nail, necklace, nose, octopus, onion, oven, owl, paint can, paintbrush, palm tree, parachute, passport, peanut, pear, pencil, penguin, piano, pickup truck, pig, pineapple, pliers, police car, pool, popsicle, postcard, purse, rabbit, raccoon, radio, rain, rainbow, rake, remote control, rhinoceros, river, rollerskates, sailboat, sandwich, saxophone, scissors, see saw, shark, sheep, shoe, shorts, shovel, sink, skull, sleeping bag, smiley face, snail, snake, snowflake, soccer ball, speedboat, square, star, steak, stereo, stitches, stop sign, strawberry, streetlight, string bean, submarine, sun, swing set, syringe, t-shirt, table, teapot, teddy-bear, tennis racquet, tent, tiger, toe, tooth, toothpaste, tractor, traffic light, train, triangle, trombone, truck, trumpet, umbrella, underwear, van, vase, watermelon, wheel, windmill, wine bottle, wine glass, wristwatch, zigzag, blackberry, power outlet, peas, hot tub, toothbrush, skateboard, cloud, elbow, bat, pond, compass, elephant, hurricane, jail, school bus, skyscraper, tornado, picture frame, lollipop, spoon, saw, cup, roller coaster, pants, jacket, rifle, yoga, toilet, waterslide, axe, snowman, bracelet, basket, anvil, octagon, washing machine, tree, television, bowtie, sweater, backpack, zebra, suitcase, stairs, The Great Wall of China G.2 OMNIGLOT We derive our Omniglot tasks from the stroke dataset originally provided by Lake et al. (2015) rather than the image analogues. We translate the Omniglot stroke-by-stroke format to the same one used in Quickdraw. Then we apply the Ramer-Douglas-Peucker (Douglas & Peucker, 1973) algorithm with an epsilon value of 2 and normalize variance to 1 to produce y. We also rasterize our images in the same manner as above for our input x. G.3 SKETCHY Sketchy data is provided as an SVG image composed of line paths that are either straight lines or Bezier curves. To generate stroke data we sample sequences of points from Bezier curves at a high resolution that we then simplify with RDP, = 5. 
We also eliminate continuous strokes with a short path length or small displacement to reduce our stroke length and remove small and noisy strokes. Path length and displacement are considered with respect to the scale of the entire sketch. Once again we normalize the stroke variance and rasterize our input image in the same manner as above. The following classes were used for training after removing overlapping classes with mini-ImageNet: hot-air_balloon, violin, tiger, eyeglasses, mouse, jack-o-lantern, lobster, teddy_bear, teapot, helicopter, duck, wading_bird, rabbit, penguin, sheep, windmill, piano, jellyfish, table, fan, beetle, cabin, scorpion, scissors, banana, tank, umbrella, crocodilian, volcano, knife, cup, saxophone, pistol, swan, chicken, sword, seal, alarm_clock, rocket, bicycle, owl, squirrel, hermit_crab, horse, spoon, cow, hotdog, camel, turtle, pizza, spider, songbird, rifle, chair, starfish, tree, airplane, bread, bench, harp, seagull, blimp, apple, geyser, trumpet, frog, lizard, axe, sea_turtle, pretzel, snail, butterfly, bear, ray, wine_bottle, elephant, raccoon, rhinoceros, door, hat, deer, snake, ape, flower, car_(sedan), kangaroo, dolphin, hamburger, castle, pineapple, saw, zebra, candle, cannon, racket, church, fish, mushroom, strawberry, window, sailboat, hourglass, cat, shoe, hedgehog, couch, giraffe, hammer, motorcycle, shark H AUTOREGRESSIVE DRAWING MODEL COMPARISONS We summarize the key components of SketchEmbedNet in comparison to other autoregressive drawing models in Table 10. I FEW-SHOT CLASSIFICATION ON OMNIGLOT – FULL RESULTS The full results table for few-shot classification on the Omniglot (Lake et al., 2015) dataset, including the ResNet12 (Oreshkin et al., 2018) model, is presented here. J FEW-SHOT CLASSIFICATION ON MINI-IMAGENET – FULL RESULTS The full results table for few-shot classification on the mini-ImageNet dataset, including the ResNet12 (Oreshkin et al., 2018) and Conv4 models, is presented here. K ADDITIONAL CONCEPTUAL COMPOSITIONALITY L EMBEDDING PROPERTIES OF OTHER BASELINE MODELS Here we substantiate the uniqueness of the properties observed in SketchEmbeddings by applying the same experiments to a β-VAE (Higgins et al., 2017) as well as a vanilla autoencoder trained on the same dataset. We also include results of a SketchEmbedNet trained with a KL objective. L.1 β-VAE The β-VAE (Higgins et al., 2017) exhibits similar unsupervised clustering in comparison to the Conv-VAE and is generally incapable of distinguishing input images that have different shape compositions but the same overall silhouette (first two examples from the left). In contrast, it is better at distinguishing non-synthetic examples that contain multiple squares or circles (3rd figure). However, it utterly fails the latent variable regression task and does not exhibit any significant conceptual composition in latent space. L.2 AUTOENCODER AND SKETCHEMBEDNET-KL We show that the performance of SketchEmbedding embeddings in our experiments in Section 6, which focuses on organization in latent space, does not depend on the KL term. We present both a vanilla autoencoder without the KL objective and a SketchEmbedNet trained with a KL objective. We observe a drop in overall generation quality in the Conceptual Composition decoding, as is expected with an additional constraint, but maintained performance in the other tasks. Meanwhile, the autoencoder does not demonstrate any marked improvements over the Conv-VAE in the main paper or any other baseline comparison.
M ADDITIONAL COMPOSITIONALITY MODES We provide additional clustering methods, t-SNE (Maaten & Hinton, 2008) and PCA, as well as 2 new experiments that explore the compositionality of our latent SketchEmbedding. Additional clustering methods We include additional t-SNE and PCA results for the experiments in the main paper. These are presented in Figures 13, 14, 15, 16, 17. t-SNE and UMAP are stochastic and do not always produce the same visualization, while PCA is deterministic and prioritizes the most important dimensions. Additional Experiments Here we provide different investigations into the compositionality of our learned embedding space that were not present in our main paper. These results are presented in Figures 18 and 19. In Figure 18 we place a square in the center of the example and place a circle above, below or to the sides of it. Once again we find that our SketchEmbedding embedding clusters better than the VAE approach. In Figure 19, new examples are generated where each class has a different number of circles. Both the VAE approach and our SketchEmbedding cluster well, and neither appears to learn the count manifold. N HYPERNETWORK ACTIVATIONS To further explore how our network understands drawings, we examine the relationships between the activations of the hypernetwork of our HyperLSTM (Ha et al., 2017). The hypernetwork generates the weights of the decoding LSTM at each timestep. These activations are 512-dimensional vectors. We collect the activations from many examples, cluster them in 512-dimensional space and visualize the strokes belonging to each cluster for each example. A full decoding is also rendered where each cluster within an example is assigned a color. Single class: snowman First we explore this clustering using only the snowman class from Quickdraw (Jongejan et al., 2016). We expect substantial reuse of a "circle" both within and over many examples. Clustering of the strokes is done with DBSCAN (Ester et al., 1996) with ε = 3.9. Results are in Figure 20. Each row is a separate input; the far left column is the color-coded, composed image, the second is the noise cluster, and every subsequent column is a unique cluster. While cluster re-use is limited, cluster 0 often contains a large, fully enclosed circle. Many other clusters may contain circles or partial strokes with some reuse. Larger, fully composed and coloured sketches are presented in Figure 21. Many classes: round objects We repeat the above experiment with a mixture of classes that can generally be expected to contain circles. These classes were circles, snowmen, clocks and cups. The first two classes are frequently composed only of circles while the latter two are expected to consistently contain other distinct shapes. Results are presented in Figure 22 and select examples in Figure 23. We still observe that the model continues to isolate circles in the first column, and note it continues to do so for the cup and clock classes, which are not exclusively circular. Many random classes Finally, we repeat the above clustering with the 45 randomly selected holdout classes from the Quickdraw training process of SketchEmbedding. Results are once again presented in Figure 24 and select examples in Figure 25.
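For reference, the activation clustering above could be reproduced along these lines with scikit-learn; the input array is a stand-in for the collected activations, and min_samples is our assumption (the text only quotes ε = 3.9).

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Stand-in for the (num_strokes, 512) hypernetwork activations gathered
# over many decoded examples; replace with the real activations.
activations = np.random.randn(2000, 512).astype(np.float32)

labels = DBSCAN(eps=3.9, min_samples=5).fit_predict(activations)
# Label -1 is DBSCAN's noise cluster (the second column in Figure 20);
# every other label indexes one stroke cluster per column.
for k in sorted(set(labels)):
    print(f"cluster {k:>3}: {int(np.sum(labels == k))} strokes")
```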
1. What is the focus and contribution of the paper regarding learning embeddings for sketches or natural images?
2. What are the strengths of the proposed approach, particularly in its ability to capture long-range structure?
3. What are the weaknesses of the paper, especially regarding its comparison with other methods and experimental design?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any additional questions or concerns that the reviewer has regarding the paper?
Review
Review Summary
This paper proposes learning embeddings for sketch or natural images by training a network that takes in a raster image and outputs a collection of sketch strokes. The architecture consists of a standard CNN encoder followed by an RNN decoder. The authors evaluate their learned embeddings on few-shot classification tasks and explore the quality of the latent space. They demonstrate that they outperform unsupervised few-shot classification approaches and seem to obtain a latent space that is more aware of long-range structure than those from methods that operate purely in raster space.

Explanation of rating
Interpreting and synthesizing sketches in a deep learning context is a promising research direction. While the idea of focusing on sketch-aware embeddings of images is an interesting one, the main technical contribution simply involves taking a standard convolutional encoder with a recurrent decoder, which has already been used for sketch generation (sketch-rnn). In addition to this, some of the claims made in the paper require some clarification or additional experimentation, as I explain below. Thus, I believe that some additions and changes must be made for the paper to be accepted.

Pros
I like the idea of using the vector structure of sketches to gain more insight into image content. It is difficult to design deep networks that are able to operate on non-raster data. In some sense, this approach sidesteps this issue by allowing the relationship between raster and vector to be learned. The experiments in Section 6 confirm the idea that the embeddings are aware of these high-level, long-range relationships that aren't so obvious on the pixel level.

Cons and questions
I am not sure I follow the intuition behind why the proposed model achieves better semantic awareness than a convolutional VAE. The authors use the example of a six-legged turtle and state that the VAE would only retain information about a single "leg-like feature but not how many legs are present." How is having an RNN stroke-based decoder different from a standard convolutional decoder in this sense? In both cases, the final reconstruction must contain the original number of legs, and so the latent vector is encouraged to retain this information. The performance on natural images in Figure 4, especially on unseen classes, is not great. I would be interested to see the nearest neighbor images in the training set for the examples shown. Even in the case of unseen classes, the resulting sketches look like they might match a similar image from the training set. The authors claim that balancing the stroke and pixel losses via a curriculum mirrors how humans learn to draw ("paint-by-numbers"). However, I'm not sure how some of the experiments fit into this methodology. In particular, most of the experiments are done on datasets that have ground truth sketch strokes but not their ordering (e.g., SVG files). In this case, it seems like imposing an order on the strokes (and asking the decoder to replicate it) is a counterintuitive constraint for the model. On the other hand, in the natural image experiments, the pixel loss cannot be used at all. Some discussion of the consequences there would be interesting. I'm not sure that it is fair to compare to fully unsupervised few-shot classification methods. While the proposed method indeed does not use class labels, the ground truth stroke information may provide considerably more information than just the raster image data.
Perhaps it would help to have a baseline without the stroke loss (i.e., α = 1). Even the ablation study in Table 3 does not include this case. How sensitive is the approach to the quality of the stroke decomposition? For instance, what happens if you subdivide each stroke in the ground truth SVGs? The authors have addressed many of my concerns in the rebuttal, and so I am increasing my rating.
ICLR
Title SketchEmbedNet: Learning Novel Concepts by Imitating Drawings Abstract Sketch drawings are an intuitive visual domain that appeals to human instinct. Previous work has shown that recurrent neural networks are capable of producing sketch drawings of a single or few classes at a time. In this work we investigate representations developed by training a generative model to produce sketches from pixel images across many classes in a sketch domain. We find that the embeddings learned by this sketching model are extremely informative for visual tasks and capture a unique visual understanding. We then use them to exceed state-of-the-art performance in unsupervised few-shot classification on the Omniglot and mini-ImageNet benchmarks. We also leverage the generative capacity of our model to produce high quality sketches of novel classes based on just a single example. 1 INTRODUCTION Upon encountering a novel concept, such as a six-legged turtle, humans can quickly generalize this concept by composing a mental picture. The ability to generate drawings greatly facilitates communicating new ideas. This dates back to the advent of writing, as many ancient written languages are based on logograms, such as Chinese hanzi and Egyptian hieroglyphs, where each character is essentially a sketch of the object it represents. We often see complex visual concepts summarized by a few simple strokes. Inspired by the human ability to draw, recent research has explored the potential to generate sketches using a wide variety of machine learning models, ranging from hierarchical Bayesian models (Lake et al., 2015), to more recent deep autoregressive models (Gregor et al., 2015; Ha & Eck, 2018; Chen et al., 2017) and generative adversarial nets (GANs) (Li et al., 2019). It is a natural question to ask whether we can obtain useful intermediate representations from models that produce sketches in the output space, as has been shown by other generative models (Ranzato et al., 2006; Kingma & Welling, 2014; Goodfellow et al., 2014; Donahue et al., 2017; Doersch et al., 2015). Unfortunately, a hierarchical Bayesian model suffers from prolonged inference time, while other current sketch models mostly focus on producing drawings in a closed-set setting with a few classes (Ha & Eck, 2018; Chen et al., 2017), or on improving log likelihood at the pixel level (Rezende et al., 2016). Leveraging the learned representation from these drawing models remains a rather unexplored topic. In this paper, we pose the following question: Can we learn a generalized embedding function that captures salient and compositional features by directly imitating human sketches? The answer is affirmative. In our experiments we develop SketchEmbedNet, an RNN-based sketch model trained to map grayscale and natural image pixels to the sketch domain. It is trained on hundreds of classes without the use of class labels to learn a robust drawing model that can sketch diverse and unseen inputs. We demonstrate salience by achieving state-of-the-art performance on the Omniglot few-shot classification benchmark and visual recognizability in one-shot generations. Then we examine how the embeddings capture image components and their spatial relationships to explore image-space compositionality, and also show a surprising property of conceptual composition. We then push the boundary further by applying our sketch model to natural images; to our knowledge, we are the first to extend stroke-based autoregressive models to produce drawings of open domain natural images.
We train our model with adapted SVG images from the Sketchy dataset (Sangkloy et al., 2016) and then evaluate the embedding quality directly on unseen classes in the mini-ImageNet task for few-shot classification (Vinyals et al., 2016). Our approach is competitive with existing unsupervised few-shot learning methods (Hsu et al., 2019; Khodadadeh et al., 2019; Antoniou & Storkey, 2019) on this natural image benchmark. In both the sketch and natural image domain, we show that by learning to draw, our methods generalize well even across different datasets and classes. 2 RELATED WORK In this section we review relevant literature including generating sketch-like images, unsupervised representation learning, unsupervised few-shot classification and sketch-based image retrieval (SBIR). Autoregressive drawing models: Graves (2013) use an LSTM to directly output the pen coordinates to imitate handwriting sequences. SketchRNN (Ha & Eck, 2018) builds on this by applying it to general sketches beyond characters. Song et al. (2018); Cao et al. (2019); Ribeiro et al. (2020) all extend SketchRNN through architectural improvements. Chen et al. (2017) change inputs to be pixel images. This and the previous 3 works consider multi-class sketching, but none handle more than 20 classes. Autoregressive models also generate images directly in the pixel domain. DRAW (Gregor et al., 2015) uses recurrent attention to plot pixels; Rezende et al. (2016) extends this to one-shot generation and PixelCNN (van den Oord et al., 2016) generates image pixels sequentially. Image processing methods & GANs: Other works produce sketch-like images based on style transfer or low-level image processing techniques. Classic methods are based on edge detection and image segmentation (Arbelaez et al., 2011; Xie & Tu, 2017). Zhang et al. (2015) use a CNN to directly produce sketch-like pixels for face images. Photo-sketch and pix2pix (Li et al., 2019; Isola et al., 2017) propose a conditional GAN to generate images across different style domains. Image processing based methods do not acquire high-level image understanding, as all the operations are in terms of low-level filtering; none of the GAN sketching methods are designed to mimic human drawings on open domain natural images. Unsupervised representation learning: In the sketch image domain, our method is similar to the large category of generative models which learn unsupervised representations by the principle of analysis-by-synthesis. Work by Hinton & Nair (2005) operates in a sketch domain and learns to draw by synthesizing an interpretable motor program. Bayesian Program Learning (Lake et al., 2015) draws through exact inference of common strokes but learning and inference are computationally challenging. As such, a variety of deep generative models aim to perform approximate Bayesian inference by using an encoder structure that directly predicts the embedding, e.g., deep autoencoders (Vincent et al., 2010), Helmholtz Machine (Dayan et al., 1995), variational autoencoder (VAE) (Kingma & Welling, 2014), BiGAN (Donahue et al., 2017), etc. Our method is also related to the literature of self-supervised representation learning (Doersch et al., 2015; Noroozi & Favaro, 2016; Gidaris et al., 2018; Zhang et al., 2016), as sketch strokes are part of the input data itself. In few-shot learning (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017), recent work has explored unsupervised meta-training. 
CACTUs, AAL and UMTRA (Hsu et al., 2019; Antoniou & Storkey, 2019; Khodadadeh et al., 2019) all operate by generating pseudo-labels for training. Sketch-based image retrieval (SBIR): In SBIR, a model is provided a sketch drawing and retrieves a photo of the same class. The area is split into a fine-grained setting (FG-SBIR) (Yu et al., 2016; Sangkloy et al., 2016; Bhunia et al., 2020) and a zero-shot setting (ZS-SBIR) (Dutta & Akata, 2019; Pandey et al., 2020; Dey et al., 2019). FG-SBIR considers minute details while ZS-SBIR learns high-level cross-domain semantics and a joint latent space to perform retrieval. 3 LEARNING TO IMITATE DRAWINGS Here we describe learning to draw through sketch imitation. Our architecture is a generative encoder-decoder model with a CNN encoder for pixel images and an RNN decoder to output vector sketches, as shown in Figure 1. Unlike other drawing models that only train on a single or few classes (Ha & Eck, 2018; Chen et al., 2017), SketchEmbedNet is not limited by class inputs and can sketch a wide variety of images. We also introduce a differentiable rasterization function for computing an additional pixel-based training loss. Input & output representation Unlike SketchRNN, which encodes drawing sequences, we learn an image embedding by mapping pixels to sketches, similar to Chen et al. (2017). Training data for this task (adopted from Ha & Eck (2018)) consists of a tuple (x, y), where x ∈ R^{H×W×C} is the input image and y ∈ R^{T×5} is the stroke target. T is the maximum sequence length of the stroke data y, and each stroke y_t consists of 5 elements, (Δx, Δy, s1, s2, s3). The first 2 elements are horizontal and vertical displacements on the drawing canvas from the endpoint of the previous stroke. The latter 3 elements are mutually exclusive pen states: s1 indicates the pen is on paper for the next stroke, s2 indicates the pen is lifted, and s3 indicates the sketch sequence has ended. y_0 is initialized with (0, 0, 1, 0, 0) to start the generative process. Note that no class information is available while training. SketchEmbedding as a compositional encoding of images We use a CNN to encode the input image x and obtain the latent space representation z, as shown in Figure 1. To model intra-class variance, z is a Gaussian random variable parameterized by CNN outputs, similar to a VAE (Kingma & Welling, 2014). Throughout this paper, we refer to z as the SketchEmbedding. In typical image representations the embedding is trained to classify object classes, or to reconstruct the input pixels. Here, since the SketchEmbedding is fed into an RNN decoder to produce a sequence of drawing actions, z is additionally encouraged to have a compositional understanding of the object structure, instead of just an unstructured set of pixel features. For example, when drawing the legs of a turtle, the model must explicitly generate each leg instance. Pixel-based models, by contrast, suffer from blurriness and, because they generate the image all at once, do not distinguish between individual components such as the legs, body and head. The loss of this component information by pixel models has been observed in the GAN literature (Goodfellow, 2017), and we propose that it is avoided by our sketching task. To accommodate the increased training data complexity from including hundreds of classes, we also upscale the size of our model in comparison to work by Chen et al. (2017); Ha & Eck (2018); Song et al. (2018). The backbone is either a 4-layer CNN (Conv4) (Vinyals et al., 2016), for consistent comparisons in the few-shot setting, or a ResNet12 (Oreshkin et al., 2018), which produces better drawing results. In comparison, Chen et al. (2017) only use 2D convolution with a maximum of 8 filters.
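To illustrate the stroke target format described above, the following is a minimal sketch of packing offsets and pen flags into the (T, 5) representation; the helper name and padding convention are our assumptions, not the paper's code.

```python
import numpy as np

def to_stroke5(offsets, pen_down, T=64):
    """Pack pen offsets and pen-lift flags into the (T, 5) stroke format.

    offsets: (N, 2) pen displacements (dx, dy); pen_down: (N,) booleans,
    True when the pen stays on paper for the next stroke. Sequences are
    padded to length T with the end-of-sketch state (0, 0, 0, 0, 1).
    """
    offsets = np.asarray(offsets, dtype=np.float32)
    pen_down = np.asarray(pen_down, dtype=bool)
    y = np.zeros((T, 5), dtype=np.float32)
    n = min(len(offsets), T - 1)
    y[:n, :2] = offsets[:n]
    y[:n, 2] = pen_down[:n]            # s1: pen on paper
    y[:n, 3] = ~pen_down[:n]           # s2: pen lifted
    y[n:, 4] = 1.0                     # s3: sequence has ended
    return y

y0 = np.array([0, 0, 1, 0, 0], dtype=np.float32)  # initial token y_0
```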
RNN decoder The RNN decoder used in SketchEmbedNet is the same as in SketchRNN (Ha & Eck, 2018). The decoder outputs a mixture density which represents the stroke distribution at each timestep. Specifically, the stroke distribution is a mixture of some hyperparameter M bivariate Gaussians denoting spatial offsets, as well as the probability of the three pen states s1-s3. The spatial offsets Δ = (Δx, Δy) are sampled from the mixture of Gaussians, described by: (1) the normalized mixture weights π_j; (2) mixture means μ_j = (μ_x, μ_y)_j; and (3) covariance matrices Σ_j. We further reparameterize each Σ_j with its standard deviation σ_j = (σ_x, σ_y)_j and correlation coefficient ρ_{xy,j}. Thus, the stroke offset distribution is

p(Δ) = Σ_{j=1}^{M} π_j N(Δ | μ_j, Σ_j).

The RNN is implemented using a HyperLSTM (Ha et al., 2017); LSTM weights are generated at each timestep by a smaller recurrent "hypernetwork" to improve training stability. Generation is autoregressive, using z ∈ R^D, concatenated with the stroke from the previous timestep y_{t−1}, to form the input to the LSTM. Stroke y_{t−1} is the ground truth supervision at train time (teacher forcing), or a sample y′_{t−1} from the mixture distribution output by the model at timestep t − 1. 3.1 LEARNING We train the drawing model in an end-to-end fashion by jointly optimizing three losses: a pen loss L_pen for learning pen states, a stroke loss L_stroke for learning pen offsets, and our proposed pixel loss L_pixel for matching the visual similarity of the predicted and the target sketch:

L = L_pen + (1 − α) L_stroke + α L_pixel, (1)

where α is a loss weighting hyperparameter. Both L_pen and L_stroke were in SketchRNN, while L_pixel is our novel contribution to stroke-based generative models. Unlike SketchRNN, we do not impose a prior using KL divergence, as we are not interested in unconditional sampling and it worsens quantitative results in later sections. Pen loss The pen-state predictions {s′1, s′2, s′3} are optimized as a simple 3-way classification with the softmax cross-entropy loss:

L_pen = −(1/T) Σ_{t=1}^{T} Σ_{m=1}^{3} s_{m,t} log(s′_{m,t}).

Stroke loss The stroke loss maximizes the log-likelihood of the spatial offsets of each ground truth stroke Δ_t given the mixture density distribution p_t at each timestep:

L_stroke = −(1/T) Σ_{t=1}^{T} log p_t(Δ_t).
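A numpy sketch of L_stroke for reference; the array shapes are our assumptions, and a real implementation would use the framework's autodiff-friendly ops rather than plain numpy.

```python
import numpy as np

def stroke_nll(delta, pi, mu, sigma, rho):
    """Negative log-likelihood of ground-truth offsets under the mixture.

    delta: (T, 2) true offsets; pi: (T, M) mixture weights; mu: (T, M, 2)
    means; sigma: (T, M, 2) standard deviations; rho: (T, M) correlations.
    """
    dx = (delta[:, None, 0] - mu[..., 0]) / sigma[..., 0]
    dy = (delta[:, None, 1] - mu[..., 1]) / sigma[..., 1]
    z = dx**2 + dy**2 - 2.0 * rho * dx * dy
    norm = 2.0 * np.pi * sigma[..., 0] * sigma[..., 1] * np.sqrt(1.0 - rho**2)
    density = np.exp(-z / (2.0 * (1.0 - rho**2))) / norm   # (T, M) component pdfs
    return -np.mean(np.log(np.sum(pi * density, axis=1) + 1e-8))
```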
Pixel loss While pixel-level reconstruction objectives are common in generative models (Kingma & Welling, 2014; Vincent et al., 2010; Gregor et al., 2015), we introduce a pixel-based objective for vector sketch generation. After decoding, a differentiable rasterization function f_raster is used to map the sketch into a pixel image. f_raster transforms a stroke sequence y into a set of 2D line segments (l_0, l_1), (l_1, l_2), ..., (l_{T−1}, l_T), where l_t = Σ_{τ=0}^{t} Δ_τ. It renders by fixing canvas dimensions, scaling and centering strokes before determining pixel intensity based on the L2 distance between each pixel and the lines in the drawing. Further details on f_raster can be found in Appendix A. f_raster is applied to both the prediction y′ and the ground truth y, to produce two pixel images. Gaussian blur g_blur(·) is used to reduce strictness before computing the binary cross-entropy loss L_pixel:

I = g_blur(f_raster(y)), I′ = g_blur(f_raster(y′)),
L_pixel = −(1/HW) Σ_{i=1}^{H} Σ_{j=1}^{W} I_{ij} log(I′_{ij}). (2)

Curriculum training schedule We find that α (in Equation 1) is an important hyperparameter that impacts both the learned embedding space and the generation quality of SketchEmbedNet. A curriculum training schedule is used, increasing α to prioritize L_pixel relative to L_stroke as training progresses; this makes intuitive sense, as a single drawing can be produced by many different stroke sequences but learning to draw in a fixed manner is easier. While L_stroke promotes reproducing a specific drawing sequence, L_pixel only requires that the generated drawing visually matches the image. Like a human, the model should learn to follow one drawing style (a la paint-by-numbers) before learning to draw freely. 4 DRAWING IMITATION EXPERIMENTS In this section, we introduce our experiments on training SketchEmbedNet using two sketching datasets. The first is based on pure stroke-based drawings, and the second consists of natural image and drawing pairs. Quickdraw: Stroke-based image sketching The Quickdraw (Jongejan et al., 2016) dataset consists of 345 classes, each with 70,000 examples, produced by human players participating in the game "Quick, Draw!". Examples from the Quickdraw dataset are shown in Figure 2b. The input image x is a direct rasterization of the drawing data y. 300 of 345 classes are randomly selected for training; x is rasterized to a resolution of 28 × 28 and stroke labels y are padded up to length T = 64. Any drawing samples exceeding this length were discarded. Note that this is an unsupervised representation learning approach, as no class information is used by the system. Data processing procedures and class splits are in Appendix G. Sketchy: Open domain natural image sketching We further extend our stroke-based generation model to open domain natural images. Here, the input is an RGB photo, and the output is a human drawing which does not align with the photo precisely and also does not match the low-level image details. This is a novel setting, as prior efforts by Ha & Eck (2018); Chen et al. (2017); Song et al. (2018) have only applied their sketch RNN models to the Quickdraw dataset or natural images with only two object classes (shoe/chair) and scrubbed backgrounds (Yu et al., 2016). Learning to sketch open domain natural images is very challenging, as it requires the model to identify the subject and filter unnecessary details not present in the sketch. At test time, we further challenge our method by evaluating on unseen data distributions, necessitating generalization over natural images. For this task we use the Sketchy dataset (Sangkloy et al., 2016), which consists of ImageNet images paired with vector sketches, for a total of 56k examples after processing. Sketches are stored as SVGs with timestamps preserving their original drawing sequence, which we adapt by sampling paths in this order. Images are also centered, padded and resized to a resolution of 84 × 84 (see Figure 2a). We fix the maximum sequence length to T = 100, and use all 125 categories but remove classes that have overlapping child synsets with the test classes of mini-ImageNet (Vinyals et al., 2016). This enables testing on mini-ImageNet without any alterations to the benchmark. Once again this is an unsupervised learning formulation.
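Before moving on to the results, the curriculum schedule from Section 3.1 can be made concrete with the constants quoted in Appendix B (start at 0, add 0.05 every 10k steps, cap at αmax); a minimal sketch, with the function name being ours:

```python
def alpha_at(step, alpha_max=0.50, increment=0.05, every=10_000):
    """Curriculum weight for L_pixel: starts at 0 and rises by `increment`
    every `every` training steps until it reaches alpha_max (0.50 for
    Quickdraw, 0.75 for Sketchy per Appendix B)."""
    return min(alpha_max, increment * (step // every))

# e.g. alpha_at(0) == 0.0, alpha_at(25_000) == 0.10, alpha_at(300_000) == 0.50
```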
4.1 RESULTS AND VISUALIZATIONS Figure 3 shows drawings conditioned on sketch image inputs. There is little noticeable drop in quality when we sample sketches from unseen classes compared to those the model has seen before. Figure 4 shows examples of sketches generated from natural images. While they are not fine-grained renditions, these sketches clearly demonstrate SketchEmbedNet's ability to capture key components of seen and unseen classes. The model effectively isolates the subject in each natural image and captures the circular and square shapes in the cakes and storefronts respectively. Even with the challenging lion images, it identifies the silhouette of the lying lion despite low contrast and appropriately localizes the one on the mountainside. Unlike pixel-based auto-encoder models, our sketches do not follow the exact pose of the original strokes, but rather capture a general notion of component groupings. In the basketball example of Figure 3, the lines are not a good pixel-wise match to those in the original image, yet they are placed in sensible relative positions. Weaker examples are presented in the last rows of Figures 3 and 4; regardless, even poorer examples still capture some structural aspects of the original image. Implementation details can be found in Appendix B. In later sections we explore the uses of SketchEmbeddings and keep the models fixed for all downstream tasks. 5 FEW-SHOT CLASSIFICATION USING SKETCHEMBEDDING We would like to assess the benefits of learning to draw by performing few-shot classification with our learned embedding space. Examining performance on discriminative tasks reveals that learning to imitate sketches allows the embeddings to capture salient information about novel object classes. Below we describe our few-shot classification procedure and summarize results on the Omniglot (Lake et al., 2015) and mini-ImageNet benchmarks (Vinyals et al., 2016). Comparison to unsupervised few-shot classification In unsupervised few-shot classification, a model is not provided with any class labels during meta-training, until it is given a few labeled examples ("shots") of the novel classes at meta-test time. While our model is provided a "target" (a sequence of strokes) during training, it is not given any class information. Therefore, we propose that the presented sketch imitation training, though it uses sketch sequences, is comparable to other class-label-free representation learning approaches (Berthelot et al., 2019; Donahue et al., 2017; Caron et al., 2018), and the learned SketchEmbeddings can be applied to unsupervised few-shot classification methods. In our experiments, we compare to previous unsupervised few-shot learning approaches: CACTUs (Hsu et al., 2019), AAL (Antoniou & Storkey, 2019), and UMTRA (Khodadadeh et al., 2019). These methods create pseudo-labels during meta-training using either clustering or data augmentation. As additional baselines, a Conv-VAE (Kingma & Welling, 2014) and a random CNN are also included, both using the same Conv4 backbone. Few-shot experimental setup The CNN encoder of SketchEmbedNet is used as an embedding function combined with a linear classification head to perform few-shot classification. The embedding is made deterministic by taking the mean of the random normal latent space z and discarding the variance parameter from the encoder. Otherwise, the conventional episodic setup for few-shot classification is used; each episode consists of a labeled "support" set of N × K (N-way K-shot) embeddings and an unlabeled "query" set. The linear classification head is trained on the labeled support set and evaluated on the query set.
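A minimal sketch of this episodic evaluation protocol, assuming a frozen encoder that returns the mean latent vector; the use of scikit-learn's logistic regression as the linear head is our assumption, not a detail from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def episode_accuracy(encoder, support_x, support_y, query_x, query_y):
    """One N-way K-shot episode: fit a linear head on frozen embeddings.

    encoder: callable mapping a batch of images to (batch, D) mean latent
    vectors (variance discarded, as described above).
    """
    zs = encoder(support_x)                  # (N*K, D) support embeddings
    zq = encoder(query_x)                    # (Q, D) query embeddings
    head = LogisticRegression(max_iter=1000).fit(zs, support_y)
    return float((head.predict(zq) == query_y).mean())
```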
5.1 FEW-SHOT CLASSIFICATION ON OMNIGLOT The Omniglot (Lake et al., 2015) dataset contains 50 alphabets and 1623 unique character types, each with 20 examples, and is presented as both a greyscale image and a stroke drawing. We use the same train-test split as Vinyals et al. (2016) along with randomly sampled episodes. Experiments using the more challenging Lake split, where episodes are sampled within an alphabet as proposed by Lake et al. (2015), are in Appendix E, and random seed experiments are in Appendix F. To ensure a fair comparison with other few-shot classification models, we use the same convolutional encoder (Conv4) as Vinyals et al. (2016). Results from training only on Omniglot (Lake et al., 2015) are also presented to demonstrate effectiveness without the larger Quickdraw (Jongejan et al., 2016) dataset. No significant improvements were observed using the deeper ResNet12 (Oreshkin et al., 2018) architecture; additional results are in Appendix I. All of our methods outperform the previous state-of-the-art on the unsupervised Omniglot benchmark (Table 1). The Quickdraw-trained model surpasses supervised MAML (Finn et al., 2017), and is on par with a supervised ProtoNet (Snell et al., 2017) model, especially in the 5-shot settings. Both baselines, a Conv-VAE and a random CNN, perform well compared to other unsupervised methods. 5.2 FEW-SHOT CLASSIFICATION ON MINI-IMAGENET We extend our investigation and assess SketchEmbeddings for the classification of natural images in the mini-ImageNet benchmark (Vinyals et al., 2016). The same CNN encoder model from the natural image sketching task is used to match the visual domain of the examples we hope to classify. The mini-ImageNet (Vinyals et al., 2016) dataset consists of 100 classes, each with 600 examples. The setup proposed by Ravi & Larochelle (2017) is used, where the classes are split 64-16-20 for training, validation and test. As noted earlier, any examples in the Sketchy dataset that are also present in the mini-ImageNet test set were removed by filtering the synset (and children synset) IDs, ensuring train and test classes are disjoint. Classification results on mini-ImageNet are shown in Table 2. Natural image classification is a far more challenging problem. Learning to reconstruct the pixels of an image actually worsens our results; the trained Conv-VAE is outperformed by the VAE with random weights. However, sketch reconstruction is still a valuable task; our models are competitive, and our best model outperforms previous state-of-the-art unsupervised methods on few-shot settings. A full table is in Appendix J; seeding results are in Appendix F. 5.3 SKETCHING TO LEARN CLASS-IDENTIFIABLE INFORMATION Existing sketch works have focused on generating better drawings or unifying sketches with other image domains. We present a new paradigm: using sketching as an auxiliary task to learn visual content. Only by training a drawing model that can sketch general image inputs are we able to transfer the learned understanding to new data distributions. By considering the stroke distribution of the Quickdraw dataset, we are able to interpret image inputs from the separate Omniglot dataset and tackle the few-shot classification task with surprising accuracy. While the natural image sketching task is challenging and does not always produce high-fidelity results, it still learns useful visual information.
By training on the Sketchy dataset, we learn how to draw other data distributions for which no sequential stroke data exists. Then, by knowing how to sketch this mini-ImageNet data, we are able to produce distinguishable embeddings that enable competitive few-shot classification performance. Varying weighting of pixel-loss For both settings we sweep the pixel loss coefficient αmax to ablate its impact on model performance on the Omniglot task (Table 3). There is a substantial improvement in few-shot classification when αmax is non-zero. αmax = 0.50 achieves the best results, and the trend goes downward as αmax approaches 1.0, i.e. as the weighting for L_stroke goes to 0.0. This is reasonable, as the training of SketchEmbedNet is more stable under the guidance of ground truth strokes. 6 PROPERTIES OF SKETCHEMBEDDINGS We hypothesize that reproducing a sketch drawing, rather than taking a pixel-based approach, requires the preservation of more structural information due to sequential RNN generation. By learning in this manner, SketchEmbeddings are aware of spatial properties and the composition of elements in image space. We examine this compositionality through several comparisons of SketchEmbeddings with those generated by a Conv-VAE. Component arrangements We construct examples that contain the same set of objects but in different arrangements to test sensitivity to component arrangement and composition in image space. We then embed these examples with both generative models and project into 2D space using UMAP (McInnes et al., 2018) to visualize their organization. In the first 2 panels of Figure 5-A, we see that the SketchEmbeddings are easily separated in unsupervised clustering. The rightmost panel of Figure 5-A exhibits non-synthetic classes with duplicated shapes: snowmen with circles and televisions with squares. With these results, we demonstrate the greater component-level awareness of SketchEmbeddings. The 4 rearranged shapes and the nested circles and squares have similar silhouettes that are difficult to differentiate with a conventional pixel loss. To SketchEmbeddings, the canvas offset and different drawing sequence of each shape make them substantially different in embedding space. Spatial relationships Drawing also builds awareness of relevant underlying variables, such as spatial relationships between components of the image. We examine the degree to which the underlying variables of angle, distance or size are captured by the embedding, by constructing images that vary along each dimension respectively. The embeddings are again grouped by a 2D projection in Figure 5-B using the UMAP (McInnes et al., 2018) algorithm. When clustered, the 2D projection of SketchEmbeddings arranges the examples along an axis corresponding to the latent variable, in contrast to the Conv-VAE embeddings, which are visibly non-linear and arrange in clusters. This clear axis-alignment suggests a greater level of latent variable disentanglement in the SketchEmbeddings. Conceptual composition Finally, we explore concept space composition using SketchEmbeddings (Figure 5-C) by embedding different Quickdraw examples and then performing arithmetic with the latent vectors. By subtracting a circle embedding from, and adding a square embedding to, a snowman composed of stacked circles, we produce stacked boxes. This property of vector arithmetic is reminiscent of language representations, as evidenced in analogies like King - Man + Woman = Queen (Ethayarajh et al., 2019). Our results indicate that this property is captured to a greater degree in SketchEmbeddings than in the pixel-based VAE embeddings. Composing SketchEmbeddings produces decoded examples that appeal to our intuitive conceptual understanding, while the VAE degrades to blurry, fragmented images. We provide more examples of the setup in Figure 5-C as well as additional classes in Appendix K.
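A minimal sketch of the latent arithmetic in Figure 5-C; encode and decode are assumed interfaces standing in for the trained encoder and RNN decoder, not the paper's exact API:

```python
def compose(encode, decode, snowman, circle, square):
    """Latent arithmetic from Figure 5-C: snowman - circle + square.

    encode returns the mean latent vector for an input image; decode maps a
    latent vector back to a stroke sequence.
    """
    z = encode(snowman) - encode(circle) + encode(square)
    return decode(z)   # ideally decodes to stacked boxes
```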
7 ONE-SHOT GENERATION To evaluate the sketches generated by our model, we make qualitative comparisons to other one-shot generative models and quantitatively assess our model through visual classification via a ResNet101 (He et al., 2016). In this section, all models use the ResNet12 (Oreshkin et al., 2018) backbone. Qualitative comparisons We compare SketchEmbedNet one-shot generations of Omniglot characters with examples from other few-shot (Reed et al., 2017) and one-shot (Rezende et al., 2016) approaches (Figure 6). In the settings shown, none of the models have seen any examples from the character class, or the parent alphabet. Furthermore, the drawer has seen no written characters during training and is trained only on the Quickdraw dataset. Visually, our generated images better resemble the support examples, and the variations from stretching and shifting strokes better preserve the semantics of each character. Generations in pixel space may disrupt strokes and alter the character to human perception. This is especially true for written characters, as they are frequently defined by a specific set of strokes instead of blurry clusters of pixels. Quantitative evaluation of generation quality Evaluating generative models is often challenging. Per-pixel metrics as in Reed et al. (2017); Rezende et al. (2016) may penalize generative variance that still preserves meaning. We propose a metric inspired by the Inception Score (Salimans et al., 2016) to quantify the class-recognizability and generalization of generated examples. We train two separate ResNet classifiers (He et al., 2016), each on a different set of 45 Quickdraw classes. One set was part of the training set of SketchEmbedNet (referred to as "seen") and the other set was held out during training (referred to as "unseen"). We then have SketchEmbedNet generate one-shot sketches from each set and have the corresponding classifier predict a class. The accuracy of the classifier on generated examples is compared with its training accuracy in Table 4. For a ResNet classifier, SketchEmbedNet generations are recognizable for both seen and unseen classes. 8 CONCLUSION Learning to draw is not only an artistic pursuit but drives a distillation of real-world visual concepts. We present a generalized drawing model capable of producing accurate sketches and visual summaries of open domain natural images. While sketch data may be challenging to source, we show that training to draw either sketch or natural images can generalize for downstream tasks, not only within each domain but also well beyond the training data. More generally, research in this direction may lead to more lifelike image understanding inspired by how humans communicate visual concepts. A RASTERIZATION The key enabler of our novel pixel loss for sketch drawings is our differentiable rasterization function f_raster. Sequence-based loss functions such as L_stroke are sensitive to the order of points while, in reality, drawings are sequence-invariant. Visually, a square is a square whether it is drawn clockwise or counterclockwise.
The purpose of a sketch representation is to lower the complexity of the data space and decode in a more visually intuitive manner. While it is a necessary departure point, the sequential generation of drawings is not key to our visual representation, and we would like SketchEmbedNet to be agnostic to any specific sequence needed to draw the sketch that is representative of the image input. To facilitate this, we develop our rasterization function f_raster, which renders an input sequence of strokes as a pixel image. However, during training, the RNN outputs a mixture of Gaussians at each timestep. To convert this to a stroke sequence, we sample from these Gaussians; this can be repeated to reduce the variance of the pixel loss. We then scale our predicted and ground truth sequences by the properties of the latter before rasterization. Stroke sampling At the end of sequence generation we have N_s × (6M + 3) parameters: 6 parameters for each of the M mixture components (π_j, μ_j, σ_j, ρ_{xy,j}) plus 3 pen states, repeated N_s times, once for each stroke. To obtain the actual drawing we sample from the mixture of Gaussians:

(Δx_t, Δy_t)ᵀ = (μ_{x,t}, μ_{y,t})ᵀ + L_t ε, ε ∼ N(0, I_2), (3)

where L_t = [σ_{x,t}, 0; ρ_{xy,t} σ_{y,t}, σ_{y,t} √(1 − ρ_{xy,t}²)] is the Cholesky factor of the covariance. After sampling we compute the cumulative sum of every stroke over the timesteps so that we obtain the absolute displacement from the initial position:

(x_t, y_t)ᵀ = Σ_{τ=0}^{t} (Δx_τ, Δy_τ)ᵀ, (4)
y_{t,abs} = (x_t, y_t, s_1, s_2, s_3). (5)

Scaling Each sketch generated by our model begins at (0, 0) and the variance of all strokes in the training set is normalized to 1. On a fixed canvas the image is both very small and localized to the top-left corner. We remedy this by computing a scale λ and shifts x_shift, y_shift using the labels y, and applying them to both the prediction y′ and the ground truth y. These parameters are computed as:

λ = min{ W / (x_max − x_min), H / (y_max − y_min) }, (6)
x_shift = ((x_max + x_min) / 2) λ, y_shift = ((y_max + y_min) / 2) λ. (7)

x_max, x_min, y_max, y_min are the minimum and maximum values of x_t, y_t from the supervised stroke labels and not the generated strokes. W and H are the width and height in pixels of our output canvas. Calculate pixel intensity Finally we are able to calculate the intensity p_ij of every pixel in our H × W canvas:

p_ij = σ[ 2 − 5 × min_{t=1...N_s} ( dist((i, j), (x_{t−1}, y_{t−1}), (x_t, y_t)) + (1 − ⌊s_{1,t−1}⌉) × 10⁶ ) ], (8)

where the distance function is the distance between point (i, j) and the line segment defined by the absolute points (x_{t−1}, y_{t−1}) and (x_t, y_t), and ⌊·⌉ denotes rounding to the nearest integer. We also blow up any distances where s_{1,t−1} < 0.5 so as not to render any strokes where the pen is not touching the paper. B IMPLEMENTATION DETAILS We train our model for 300k iterations with a batch size of 256 for the Quickdraw dataset and 64 for Sketchy due to memory constraints. The initial learning rate is 1e-3, which decays by 0.85 every 15k steps. We use the Adam (Kingma & Ba, 2015) optimizer and clip gradient values at 1.0. σ = 2.0 is used for the Gaussian blur in L_pixel. For the curriculum learning schedule, the value of α is set to 0 initially and increases by 0.05 every 10k training steps, with an empirically obtained cap at αmax = 0.50 for Quickdraw and αmax = 0.75 for Sketchy. The ResNet12 (Oreshkin et al., 2018) encoder uses 4 ResNet blocks with 64, 128, 256, 512 filters respectively and ReLU activations. The Conv4 backbone has 4 blocks of convolution, batch norm (Ioffe & Szegedy, 2015), ReLU and max pool, identical to Vinyals et al. (2016). We select the latent space to be 256 dimensions, the RNN output size to be 1024, and the hypernetwork embedding size to be 64.
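For reference, a compact numpy sketch of f_raster following Equation 8 (sampling omitted; it operates on already scaled and centered absolute coordinates). This is our own illustration, not the released implementation:

```python
import numpy as np

def point_segment_dist(px, py, ax, ay, bx, by):
    """Distance from pixel centers (px, py) to the segment (a, b)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / max(abx**2 + aby**2, 1e-8)
    t = np.clip(t, 0.0, 1.0)
    return np.hypot(px - (ax + t * abx), py - (ay + t * aby))

def raster(points, pen_down, H=28, W=28):
    """Rasterize strokes per Equation 8: pixel intensity is a sigmoid of
    the distance to the nearest rendered segment.

    points: (T, 2) absolute coordinates; pen_down: (T,) pen states s_1.
    Segments with the pen lifted get their distance blown up so they are
    never rendered.
    """
    jj, ii = np.meshgrid(np.arange(W), np.arange(H))
    best = np.full((H, W), 1e6)
    for t in range(1, len(points)):
        d = point_segment_dist(jj, ii, *points[t - 1], *points[t])
        if pen_down[t - 1] < 0.5:          # pen not on paper: skip segment
            d = d + 1e6
        best = np.minimum(best, d)
    return 1.0 / (1.0 + np.exp(-(2.0 - 5.0 * best)))   # Eq. 8 intensity
```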
We use a mixture of M = 30 bivariate Gaussians for the mixture density output of the stroke offset distribution. C LATENT SPACE INTERPOLATION As in many encoding-decoding models, we evaluate interpolation in our latent space. We select 4 embeddings at random and use bi-linear interpolation to produce new embeddings. Results are in Figures 7a and 7b. (a) Interpolation of classes: power outlet, snowman, jacket, elbow (b) Interpolation of classes: cloud, power outlet, basket, compass Figure 7: Latent space interpolations of randomly selected examples We observe that compositionality is also present in these interpolations. In the top row of Figure 7a, the model first plots a third small circle when interpolating from the 2-circle power outlet to the 3-circle snowman. This small circle is treated as a single component that grows as it transitions between classes until its final size in the far-right snowman drawing. Some other RNN-based sketching models (Ha & Eck, 2018; Chen et al., 2017) experience other classes materializing in interpolations between two unrelated classes. Our model does not exhibit this behaviour, as our embedding space is learned from more classes and thus does not contain local groupings of classes. D EFFECT OF α ON FEW-SHOT CLASSIFICATION We performed additional experiments exploring the impact of our curriculum training schedule for α. The encoding component of our drawing model was evaluated on the few-shot classification task for different values of αmax every 25k iterations during training. A graph is shown in Figure 8 and the full table of all values of αmax is in Table 5. E INTRA-ALPHABET LAKE SPLIT The creators of the Omniglot dataset and one-shot classification benchmark originally proposed an intra-alphabet classification task. This task is more challenging than the common Vinyals split, as characters from the same alphabet may exhibit similar stylistic sub-components that make visual differentiation more difficult. This benchmark has been less explored by researchers; however, we still present the performance of our SketchEmbedding model against evaluations of other few-shot classification models on the benchmark. Results are shown in Table 6. Unsurprisingly, our model is outperformed by supervised models and falls behind by a more substantial margin than in the Vinyals split. However, our SketchEmbedding approach still achieves respectable classification accuracy overall and greatly outperforms a Conv-VAE baseline. F EFFECT OF RANDOM SEEDING ON FEW-SHOT CLASSIFICATION The training objective for SketchEmbedNet is to reproduce sketch drawings of the input. This task is unrelated to few-shot classification and may perform variably given different initializations. We quantify this variance by training our model with 15 unique random seeds and evaluating the performance of the latent space on the few-shot classification tasks. We disregard the per (evaluation) episode variance of our model in each test stage and only present the mean accuracy. We then compute a new confidence interval over random seeds. Results are presented in Tables 7, 8, 9. G DATA PROCESSING We apply the same data processing methods as in Ha & Eck (2018) with no additional changes to produce our stroke labels y. When rasterizing for our input x, we scale and center the strokes, then pad the image with 10% of the resolution in that dimension, rounded to the nearest integer.
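Referring back to the latent space interpolation in Appendix C, the bilinear scheme over four randomly chosen embeddings can be sketched as follows; the corner ordering and names are our assumptions:

```python
def bilerp(z00, z01, z10, z11, u, v):
    """Bilinear interpolation between four corner embeddings (Appendix C).

    u, v in [0, 1] index the interpolation grid; decoding each interpolated
    z yields a grid of sketches like those in Figure 7.
    """
    top = (1.0 - u) * z00 + u * z01
    bot = (1.0 - u) * z10 + u * z11
    return (1.0 - v) * top + v * bot
```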
G.1 QUICKDRAW The following classes were used for training: The Eiffel Tower, The Mona Lisa, aircraft carrier, alarm clock, ambulance, angel, animal migration, ant, apple, arm, asparagus, banana, barn, baseball, baseball bat, bathtub, beach, bear, bed, bee, belt, bench, bicycle, binoculars, bird, blueberry, book, boomerang, bottlecap, bread, bridge, broccoli, broom, bucket, bulldozer, bus, bush, butterfly, cactus, cake, calculator, calendar, camel, camera, camouflage, campfire, candle, cannon, car, carrot, castle, cat, ceiling fan, cell phone, cello, chair, chandelier, church, circle, clarinet, clock, coffee cup, computer, cookie, couch, cow, crayon, crocodile, crown, cruise ship, diamond, dishwasher, diving board, dog, dolphin, donut, door, dragon, dresser, drill, drums, duck, dumbbell, ear, eye, eyeglasses, face, fan, feather, fence, finger, fire hydrant, fireplace, firetruck, fish, flamingo, flashlight, flip flops, flower, foot, fork, frog, frying pan, garden, garden hose, giraffe, goatee, grapes, grass, guitar, hamburger, hand, harp, hat, headphones, hedgehog, helicopter, helmet, hockey puck, hockey stick, horse, hospital, hot air balloon, hot dog, hourglass, house, house plant, ice cream, key, keyboard, knee, knife, ladder, lantern, leaf, leg, light bulb, lighter, lighthouse, lightning, line, lipstick, lobster, mailbox, map, marker, matches, megaphone, mermaid, microphone, microwave, monkey, mosquito, motorbike, mountain, mouse, moustache, mouth, mushroom, nail, necklace, nose, octopus, onion, oven, owl, paint can, paintbrush, palm tree, parachute, passport, peanut, pear, pencil, penguin, piano, pickup truck, pig, pineapple, pliers, police car, pool, popsicle, postcard, purse, rabbit, raccoon, radio, rain, rainbow, rake, remote control, rhinoceros, river, rollerskates, sailboat, sandwich, saxophone, scissors, see saw, shark, sheep, shoe, shorts, shovel, sink, skull, sleeping bag, smiley face, snail, snake, snowflake, soccer ball, speedboat, square, star, steak, stereo, stitches, stop sign, strawberry, streetlight, string bean, submarine, sun, swing set, syringe, t-shirt, table, teapot, teddy-bear, tennis racquet, tent, tiger, toe, tooth, toothpaste, tractor, traffic light, train, triangle, trombone, truck, trumpet, umbrella, underwear, van, vase, watermelon, wheel, windmill, wine bottle, wine glass, wristwatch, zigzag, blackberry, power outlet, peas, hot tub, toothbrush, skateboard, cloud, elbow, bat, pond, compass, elephant, hurricane, jail, school bus, skyscraper, tornado, picture frame, lollipop, spoon, saw, cup, roller coaster, pants, jacket, rifle, yoga, toilet, waterslide, axe, snowman, bracelet, basket, anvil, octagon, washing machine, tree, television, bowtie, sweater, backpack, zebra, suitcase, stairs, The Great Wall of China G.2 OMNIGLOT We derive our Omniglot tasks from the stroke dataset originally provided by Lake et al. (2015) rather than the image analogues. We translate the Omniglot stroke-by-stroke format to the same one used in Quickdraw. Then we apply the Ramer-Douglas-Peucker (Douglas & Peucker, 1973) algorithm with an epsilon value of ε = 2 and normalize variance to 1 to produce y. We also rasterize our images in the same manner as above for our input x. G.3 SKETCHY Sketchy data is provided as an SVG image composed of line paths that are either straight lines or Bezier curves. To generate stroke data we sample sequences of points from Bezier curves at a high resolution, which we then simplify with RDP, ε = 5.
We also eliminate continuous strokes with a short path length or small displacement to reduce our stroke length and remove small and noisy strokes. Path length and displacement are considered with respect to the scale of the entire sketch. Once again we normalize the stroke variance and rasterize our input image in the same manner as above. The following classes were used for training after removing overlapping classes with mini-ImageNet: hot-air_balloon, violin, tiger, eyeglasses, mouse, jack-o-lantern, lobster, teddy_bear, teapot, helicopter, duck, wading_bird, rabbit, penguin, sheep, windmill, piano, jellyfish, table, fan, beetle, cabin, scorpion, scissors, banana, tank, umbrella, crocodilian, volcano, knife, cup, saxophone, pistol, swan, chicken, sword, seal, alarm_clock, rocket, bicycle, owl, squirrel, hermit_crab, horse, spoon, cow, hotdog, camel, turtle, pizza, spider, songbird, rifle, chair, starfish, tree, airplane, bread, bench, harp, seagull, blimp, apple, geyser, trumpet, frog, lizard, axe, sea_turtle, pretzel, snail, butterfly, bear, ray, wine_bottle, elephant, raccoon, rhinoceros, door, hat, deer, snake, ape, flower, car_(sedan), kangaroo, dolphin, hamburger, castle, pineapple, saw, zebra, candle, cannon, racket, church, fish, mushroom, strawberry, window, sailboat, hourglass, cat, shoe, hedgehog, couch, giraffe, hammer, motorcycle, shark H AUTOREGRESSIVE DRAWING MODEL COMPARISONS We summarize the key components of SketchEmbedNet in comparison to other autoregressive drawing models in Table 10. I FEW-SHOT CLASSIFICATION ON OMNIGLOT – FULL RESULTS The full results table for few-shot classification on the Omniglot (Lake et al., 2015) dataset, including the ResNet12 (Oreshkin et al., 2018) model, is presented here. J FEW-SHOT CLASSIFICATION ON MINI-IMAGENET – FULL RESULTS The full results table for few-shot classification on the mini-ImageNet dataset, including the ResNet12 (Oreshkin et al., 2018) and Conv4 models, is presented here. K ADDITIONAL CONCEPTUAL COMPOSITIONALITY L EMBEDDING PROPERTIES OF OTHER BASELINE MODELS Here we substantiate the uniqueness of the properties observed in SketchEmbeddings by applying the same experiments to a β-VAE (Higgins et al., 2017) as well as a vanilla autoencoder trained on the same dataset. We also include results of a SketchEmbedNet trained with a KL objective. L.1 β-VAE The β-VAE (Higgins et al., 2017) exhibits similar unsupervised clustering in comparison to the Conv-VAE and is generally incapable of distinguishing input images that have different shape compositions but the same overall silhouette (first two examples from the left). In contrast, it is better at distinguishing non-synthetic examples that contain multiple squares or circles (3rd figure). However, it utterly fails the latent variable regression task and does not exhibit any significant conceptual composition in latent space. L.2 AUTOENCODER AND SKETCHEMBEDNET-KL We show that the performance of SketchEmbedding embeddings in our experiments in Section 6, which focuses on organization in latent space, does not depend on the KL term. We present both a vanilla autoencoder without the KL objective and a SketchEmbedNet trained with a KL objective. We observe a drop in overall generation quality in the Conceptual Composition decoding, as is expected with an additional constraint, but maintained performance in the other tasks. Meanwhile, the autoencoder does not demonstrate any marked improvements over the Conv-VAE in the main paper or any other baseline comparison.
M ADDITIONAL COMPOSITIONALITY MODES We provide additional clustering methods, t-SNE (Maaten & Hinton, 2008) and PCA, as well as 2 new experiments that explore the compositionality of our latent SketchEmbedding. Additional clustering methods We include additional t-SNE and PCA results for the experiments in the main paper. These are presented in Figures 13, 14, 15, 16, 17. t-SNE and UMAP are stochastic and do not always produce the same visualization, while PCA is deterministic and prioritizes the most important dimensions. Additional Experiments Here we provide different investigations into the compositionality of our learned embedding space that were not present in our main paper. These results are presented in Figures 18 and 19. In Figure 18 we place a square in the center of the example and place a circle above, below or to the sides of it. Once again we find that our SketchEmbedding embedding clusters better than the VAE approach. In Figure 19, new examples are generated where each class has a different number of circles. Both the VAE approach and our SketchEmbedding cluster well, and neither appears to learn the count manifold. N HYPERNETWORK ACTIVATIONS To further explore how our network understands drawings, we examine the relationships between the activations of the hypernetwork of our HyperLSTM (Ha et al., 2017). The hypernetwork generates the weights of the decoding LSTM at each timestep. These activations are 512-dimensional vectors. We collect the activations from many examples, cluster them in 512-dimensional space and visualize the strokes belonging to each cluster for each example. A full decoding is also rendered where each cluster within an example is assigned a color. Single class: snowman First we explore this clustering using only the snowman class from Quickdraw (Jongejan et al., 2016). We expect substantial reuse of a "circle" both within and over many examples. Clustering of the strokes is done with DBSCAN (Ester et al., 1996) with ε = 3.9. Results are in Figure 20. Each row is a separate input; the far left column is the color-coded, composed image, the second is the noise cluster, and every subsequent column is a unique cluster. While cluster re-use is limited, cluster 0 often contains a large, fully enclosed circle. Many other clusters may contain circles or partial strokes with some reuse. Larger, fully composed and coloured sketches are presented in Figure 21. Many classes: round objects We repeat the above experiment with a mixture of classes that can generally be expected to contain circles. These classes were circles, snowmen, clocks and cups. The first two classes are frequently composed only of circles while the latter two are expected to consistently contain other distinct shapes. Results are presented in Figure 22 and select examples in Figure 23. We still observe that the model continues to isolate circles in the first column, and note it continues to do so for the cup and clock classes, which are not exclusively circular. Many random classes Finally, we repeat the above clustering with the 45 randomly selected holdout classes from the Quickdraw training process of SketchEmbedding. Results are once again presented in Figure 24 and select examples in Figure 25.
1. What are the main contributions of the paper, and how do they relate to the key motivation of capturing and generalizing compositional information from sketch/natural images? 2. How does the proposed model, SketchEmbedNet, preserve the compositional understanding of input sketches/images? 3. Why can't a regular CNN embedding preserve information about the positions and number of turtle legs after an average pooling layer? 4. Is the CNN backbone used in SketchEmbedNet sufficient to extract image features well? 5. How does SketchEmbedNet compare to other sketch generative models, such as those mentioned in the references? 6. Why did the authors choose to compare SketchEmbedNet only with VAE, and what additional comparisons would strengthen their claims? 7. How does the removal of the KL loss term impact the organization of the latent space in SketchEmbedNet?
Review
Review This paper proposes a generalized sketch drawing model named SketchEmbedNet for producing sketches and visual summaries of open-domain natural images. The idea is interesting and the experimental results show SketchEmbedNet is able to do not only few-shot classification but also one-shot generation. Overall, I vote for rejecting. In my opinion, the main contributions of this paper are not very clear. The introduced model, SketchEmbedNet, has limited novelty in either the methodology or the network structure. As stated in the title and introduction, the authors aim to capture and generalize the compositional information from sketch/natural images. Section 4 reports the latent variable organization performance, which I believe is directly related to the key motivation. But the authors only compared SketchEmbedNet with a VAE, which is not enough to demonstrate their advantages. Moreover, it is not clear why the few-shot classification and one-shot generation performance in Sections 5 and 6 support their main idea. Thus, this paper needs further improvements. Detailed comments: (1) In the first paragraph of Section 2, the authors claimed that the CNN embeddings must preserve a compositional understanding of the input sketches/images to improve the performance in their pix2seq task. So how did you preserve the information? Many sketch synthesis methods, such as [8] in the references, can reconstruct the sketch from a sketch image input. Do these methods preserve the "compositional understanding"? (2) Still in the same paragraph, I'm pretty sure a vanilla autoencoder with a CNN encoder containing average pooling layers can reconstruct a six-legged turtle well from its input. Thus, the information about the positions and the number of turtle legs must be transported to the decoder by the latent embeddings. So why do you claim that a regular CNN embedding cannot preserve that information after an average pooling layer? (3) The same CNN encoder is used for both natural images and sketches. As these two types of images have totally different patterns, many recent studies, such as reference [55], used a two-branch encoder for natural images and sketches, respectively. In this paper, the CNN backbone is a 4-layer CNN or a ResNet12, which are very basic structures. Is it able to extract the image features well? (4) Figure 5 shows SketchEmbedNet outperforms the VAE on latent space organization. In my opinion, the well-organized latent space of SketchEmbedNet is mainly due to getting rid of the KL loss term, which drives the latent distribution to be a uniform distribution. I would like to see the comparison with sketch-pix2seq, which is reference [8], as both SketchEmbedNet and sketch-pix2seq do not use the KL term in training. (5) As this paper focuses on sketch drawing, why is there no comparison between SketchEmbedNet and other sketch generative models, such as [8, 11, 15, 26] in the references? After reading the response from the authors, we raise our score by +1.
ICLR
Title Dirichlet Wrapper to Quantify Classification Uncertainty in Black-Box Systems Abstract Nowadays, machine learning models are becoming a utility in many sectors. AI companies deliver pre-trained encapsulated models as application programming interfaces (APIs) that developers can combine with third-party components, their own models, and proprietary data to create complex data products. This complexity and the lack of control and knowledge of the internals of these external components might cause unavoidable effects, such as lack of transparency, difficulty in auditability, and the emergence of uncontrolled potential risks. These issues are especially critical when practitioners use these components as black-boxes on new datasets. In order to provide actionable insights in this type of scenario, in this work we propose the use of a wrapping deep learning model to enrich the output of a classification black-box with a measure of uncertainty. Given a black-box classifier, we propose a probabilistic neural network that works in parallel to the black-box and uses a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters of the classifier and enables the estimation of aleatoric uncertainty for any data sample. Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions, discarding those that are more uncertain, leading to an improvement in the trustworthiness of the resulting system. We showcase the proposed technique and methodology in two practical scenarios, one for NLP and another for computer vision, where a simulated API-based classifier is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation with wrong predictions and misclassifications. 1 INTRODUCTION The popularity of machine learning is giving birth to new business models based on the productization and service of these models. In the market there are many application programming interfaces (APIs) serving predictions in object recognition for images (Vision AI1), language detection or sentiment analysis in natural language processing (Cloud Natural Language API2), to mention just a few. As this Machine-Learning-as-a-Service model starts to grow, it becomes easier to find these APIs as an integral component of more complex products. The use of pre-trained models gives rise to two different problems. First, we do not know how these models are going to operate in our intended application domain. In order to address this issue, there is a vast literature on transfer learning that can be applied. However, when using third-party proprietary software or APIs, we may not have access to the internals or the possibility of fine-tuning the model to our domain. If we are to use the model as it is, one must at least understand when the model is going to work and when it is not, i.e. have a confidence metric that indicates the expected performance of the method when applied to our problem. However, this information is not always provided, especially in deep learning models. This effect can be worsened when these components are just one of the many different parts of a data product. This complexity leads us to the second problem: when different models might interact in complex pipelines, the construction of appropriate confidence measures can be a challenging task.
1https://cloud.google.com/vision/ 2https://cloud.google.com/natural-language/ In order to solve the previous issues, in this article we propose a deep learning wrapper algorithm that equips any black-box model with uncertainty prediction. Here a wrapper is understood as a machine learning model that takes any other model and operates without accessing its internals. Because it does not have access to the internal states, parameters, or architecture of the model it is wrapping, the wrapper is model agnostic and can be used on top of any other algorithm as long as it satisfies certain desiderata. In this article, we only require the black-box model to produce as output a distribution over the classes, a soft requirement, as any model with a soft-max layer satisfies it. More specifically, the proposed wrapper uses a deep learning model and introduces a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters. By sampling from this Dirichlet distribution, the wrapper enables the estimation of aleatoric uncertainty. Uncertainty has been an important topic in machine learning for many years (Koller & Friedman, 2009). With the emergence of deep learning, the reinterpretation of some existing mechanisms such as dropout, and the proposal of stochastic mechanisms such as Monte Carlo approaches, have broadened the use of these techniques for accounting for uncertainty in deep models (Gal, 2016). Uncertainty can be categorized into epistemic and aleatoric uncertainty. While the first accounts for the uncertainty associated with the model parameters, the second corresponds to the uncertainty inherently present in the data3. Uncertainty plays a key role when reporting a decision because it accounts for the reliability of the prediction and can help to show the limitations of the applicability of a machine learning model. In this respect, we advocate for the use of selective prediction (aka rejection techniques) when the uncertainty metric is large, in order to avoid potential harm or risks. Selective prediction is a set of techniques based on abstaining from deciding according to some metric threshold. As previously commented, uncertainty is a good candidate for a rejection metric. In the literature, we find examples of different rejection functions (Geifman & El-Yaniv, 2019; De Stefano et al., 2000), and some of them use uncertainty measures (Geifman & El-Yaniv, 2017) for rejection. In the proposed scenario, where we are trying to characterize the uncertainty of a non-mutable black-box model, many of the state-of-the-art techniques are not applicable. For example, some rejectors have to be trained together with the classifier and need access to the internals of the model. In the same line, current models for uncertainty in deep learning need access to their internal states (Gal, 2016). The contributions of this article can be summarized as follows: • We propose a wrapper algorithm that equips any classification model that outputs a distribution over predicted classes with a Bayesian treatment, without knowledge of or access to its internals. • We use the wrapper to empirically estimate aleatoric uncertainty and show that the computed uncertainty can identify error-prone samples. • Finally, we show that the computed uncertainty can be used by rejection techniques to increase the performance and robustness of the original black-box model in the target domain.
We show improvements in transfer problems in natural language processing and computer vision. In Section 2, we introduce the method proposed for building an uncertainty wrapper around a black-box model. In Section 3, we describe how to obtain an uncertainty score from the wrapper output. Section 4 introduces the concept of rejection and rejection performance metrics. In Section 5, we showcase the proposed method in four different scenarios for sentiment analysis in natural language processing and one for computer vision. The results obtained corroborate the importance of the rejection method and show the success of the proposed methodology. Finally, Section 6 concludes the article. 3Observe that in this application only aleatoric uncertainty matters, since we are dealing with pre-trained, non-mutable models. In this particular scenario, aleatoric uncertainty also serves as a measure of the fitness of the model to the data. 2 BUILDING AN UNCERTAINTY WRAPPER Our goal is to build a wrapper algorithm that takes another black-box model and operates on top of it. As such, there are several constraints to observe. First, we need to operate exclusively on the inputs and outputs of the black-box classifier. We are not allowed to use any intermediate or internal value of the black-box model, as we need to be agnostic to it. Second, the input of the wrapper has to be compatible with the original distribution over the output classes. In the literature, other proposals suggest a deep learning model for estimating uncertainty. The problem with those approaches, like (Kendall & Gal, 2017), where independent Gaussian random variables model the pre-activation value of the logits, is that they do not conform to the constraints in our setting. First, having access to the logits before the softmax breaks the black-box assumption; and, second, independent Gaussian distributions impose unnecessary assumptions and the need for additional normalization steps. A more natural approach is to consider the output distribution as coming from a Dirichlet probability distribution. 2.1 DIRICHLET CONCENTRATION REPARAMETERIZATION As commented, given a data set $D$ composed of pairs $(x_i, y_i)$, $i = 1, \ldots, N$, with $y_i \in \mathbb{R}^C$, where $C$ is the number of different classes, the wrapper output is assumed to come from a Dirichlet probability density function: $$p(y^w \mid X, w^*) \sim \mathrm{Dir}(\alpha), \qquad (1)$$ where $w^*$ are the parameters of the wrapper. We propose to decompose the concentration parameter into two terms to relate the output of the black-box classifier, $y^m$, with the concentration parameter, $\alpha$, of the Dirichlet distribution of the wrapper. To that effect, we recall some basic statistics of the Dirichlet distribution. Given a Dirichlet random variable $x \in \mathbb{R}^C$ with concentration parameter $\alpha \in \mathbb{R}^C$, the expected value of the distribution is defined as $E(x) = \alpha / \sum_{i=1}^{C} \alpha_i$. Observe that the expected value has the same properties as a probability distribution and that the output of the black-box, $y^m \in \mathbb{R}^C$, is already a probability distribution. In this sense, we could directly use the output of the black-box as the concentration parameter. However, each term of the concentration parameter is not necessarily constrained to the interval $[0, 1]$. Let us introduce a new scalar parameter, $\beta \in \mathbb{R}$, that will model this difference, such that $\alpha = \beta y^m$. Observe that the value of $\beta$ does not change the expected value of the output of the wrapper, which coincides with the output of the black-box model, i.e. $E(y^w) = y^m$.
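As a quick numerical illustration of this property (our own aside, not from the paper), the snippet below draws from $\mathrm{Dir}(\beta y^m)$ for several values of $\beta$ and verifies that the sample mean stays at $y^m$ while the spread shrinks as $\beta$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
y_m = np.array([0.7, 0.2, 0.1])   # black-box output: a probability vector over 3 classes

for beta in (1.0, 10.0, 100.0):
    samples = rng.dirichlet(beta * y_m, size=100_000)   # draws from Dir(beta * y^m)
    print(beta, samples.mean(axis=0).round(3), samples.std(axis=0).round(3))
# The mean stays ~[0.7, 0.2, 0.1] for every beta, while the per-component
# standard deviation, sqrt(y_c * (1 - y_c) / (beta + 1)), decreases as beta grows.
```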
This decomposition has a simple interpretation: while the output of the black-box classifier stands for the mean, the parameter $\beta$ accounts for the spread of the distribution. The same or a similar decomposition can be found in other works in a different context (Malinin & Gales, 2018; Chen et al., 2018)4. An example of the effect of varying this parameter in a three-dimensional Dirichlet distribution is shown in Figures 1a to 1c. Observe that the higher the value of $\beta$, the more peaked the distribution is. 4It is worth noting that in the context of those works, there is a degradation in performance when using the Dirichlet. This does not happen in our case since the black-box model is non-mutable. This decoupling allows us to effectively isolate the contribution of the black-box from the contribution that remains to be computed, i.e. the value of the parameter $\beta$. Figure 2 shows the integration of the wrapper (in light orange colour) with the black-box classifier (in light blue colour). Observe that the wrapper consists of two blocks: the Dirichlet reparameterization layer of the wrapper, which decouples the influence of the black-box model from the rest (see the dashed line), and a deep learning architecture which aims to compute the scalar value of $\beta$5. 2.2 INFERENCE IN THE DIRICHLET SETTING Similarly to (Kendall & Gal, 2017), we approximate the expected value of the classification probabilities using Monte Carlo sampling from the learned Dirichlet distribution for each sample, $\hat{y}_{\cdot,i} \sim \mathrm{Dir}(\alpha_i)$, as $E[\hat{y}_i] = \frac{1}{M}\sum_{m=1}^{M} \hat{y}_{m,i}$. This distribution is used to define the loss function for our learning stage. Given a set of $N$ training samples, we will use a regularized version of the cross-entropy loss function as follows: $$L(W) = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{C}\sum_{c=1}^{C} y_{i,c}\,\log E[\hat{y}_i]_c + \lambda\|\beta\|^2 = -\frac{1}{NC}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log\!\left(\frac{1}{M}\sum_{m=1}^{M}\hat{y}_{m,i,c}\right) + \lambda\|\beta\|^2.$$ Observe that we introduce the norm of the $\beta$ value in the minimization objective. This term is required since the unregularized cross-entropy forces the value of $\beta$ to grow without bound. By adding this term, we control its growth and govern the trade-off with a scalarization parameter $\lambda$. 3 OBTAINING AN UNCERTAINTY SCORE FROM THE WRAPPER The described Dirichlet layer effectively allows us to study the variability of the parameters of the black-box output. This variability can be used to approximate a value for the heteroscedastic aleatoric uncertainty. In this work, we use Monte Carlo sampling from the obtained Dirichlet distribution in order to characterize the uncertainty (Gal, 2016). Standard techniques for measuring uncertainty include variation ratios and predictive entropy. Variation ratios measure the variability of the predictions obtained from the sampling (Freeman, 1965) by computing the fraction of samples whose prediction differs from the modal output. Alternatively, predictive entropy considers the average amount of information contained in the predictive distribution. Results with low entropy values correspond to confident predictions, whereas high entropy indicates large uncertainty. Since the output of the black-box model $y^m$ already describes a probability distribution, one could compute its predictive entropy and obtain a measure of its uncertainty with $H = -\sum_c y^m_c \log y^m_c$. 5The architecture used in this figure corresponds to the one used in the experimental section. However, as the wrapper allows us to model the variability of the parameters of the black-box output distribution, we can compute a predictive entropy that takes into account the variability of the predicted value. In this case, the sampled predictive entropy is defined as $H = -\sum_c E[\hat{y}]_c \log E[\hat{y}]_c$.
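The sketch below illustrates, under our own simplifying assumptions, the Monte Carlo inference, the regularized loss, and the sampled predictive entropy just defined. It presumes the wrapper network already outputs one positive scalar β per sample (e.g. via a softplus, a detail the paper does not specify); PyTorch's Dirichlet supports reparameterized sampling, so the loss is differentiable with respect to β.

```python
import torch

def monte_carlo_expectation(y_m, beta, M=50):
    """Draw M reparameterized samples from Dir(beta * y^m) and average them.

    y_m : (B, C) black-box class probabilities.
    beta: (B,) positive spread parameters predicted by the wrapper.
    """
    alpha = beta.unsqueeze(-1) * y_m + 1e-8                      # alpha = beta * y^m
    y_hat = torch.distributions.Dirichlet(alpha).rsample((M,))   # (M, B, C)
    return y_hat.mean(dim=0)                                     # E[y_hat] ≈ (1/M) Σ_m y_hat_m

def wrapper_loss(y_true, y_m, beta, M=50, lam=1e-3):
    """Regularized cross-entropy: -(1/NC) Σ_i Σ_c y_ic log E[y_hat_i]_c + λ‖β‖²."""
    y_bar = monte_carlo_expectation(y_m, beta, M)
    ce = -(y_true * torch.log(y_bar + 1e-12)).sum(dim=1).mean() / y_m.shape[1]
    return ce + lam * (beta ** 2).sum()

def sampled_predictive_entropy(y_m, beta, M=1000):
    """H = -Σ_c E[y_hat]_c log E[y_hat]_c, used as the per-sample uncertainty score."""
    y_bar = monte_carlo_expectation(y_m, beta, M)
    return -(y_bar * torch.log(y_bar + 1e-12)).sum(dim=1)
```

In practice, β would be the output of the wrapper network shown in Figure 2; constraining it to be positive is our assumption, needed because a Dirichlet concentration must be positive.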
As we show in the experimental section, this latter approach captures the uncertainty better than the predictive entropy of the original model. 4 USING UNCERTAINTY FOR REJECTION Rejection is a mechanism that, given a particular metric related to the confidence in the decision, allows discarding a prediction if the metric value is below some threshold. In our proposal, we use the uncertainty computed by the wrapper as this rejection metric. In the context of our use cases, the hypothesis is that texts or images with high uncertainty are prone to be misclassified by the black-box model. In order to use the uncertainty score for evaluating the performance of the black-box on a new dataset, we first obtain the predictions by applying the original model. Then, for each pair of data point and prediction, we obtain the associated uncertainty score using the wrapper. Next, we sort the predictions based on the uncertainty score, from more uncertain to more confident. From that ordering, we set the rejection threshold that marks where to start trusting the classification model. In order to evaluate the rejection metric, we split the dataset using two criteria: whether the method Rejects the data point or Not (R or N), and whether the point is Accurately classified or Misclassified (A or M). Using this terminology, we follow the guidelines in (Condessa et al., 2015) for rejection quality metrics. We have three quality metrics, illustrated in Figure 3: • Non-rejected Accuracy measures the ability of the classifier to classify non-rejected samples accurately: $\mathrm{NRA} = \frac{|A \cap N|}{|N|}$ • Classification Quality measures the ability of the classifier with rejection to classify non-rejected samples accurately and to reject misclassified samples: $\mathrm{CQ} = \frac{|A \cap N| + |M \cap R|}{|N| + |R|}$ • Rejection Quality measures the ability to concentrate all misclassified samples in the set of rejected samples: $\mathrm{RQ} = \frac{|M \cap R|\,|A|}{|A \cap R|\,|M|}$ A good rejection point will show a trade-off between the three metrics, being able to separate the misclassified predictions from the correct ones and preserve only those points that provide useful information. The higher the value displayed, the better that metric performs for rejection.
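A minimal sketch of the three metrics, assuming boolean masks over the dataset (the function and variable names are ours); it works equally with NumPy boolean arrays or PyTorch boolean tensors:

```python
def rejection_metrics(correct, rejected):
    """correct: sample was accurately classified (A); rejected: sample was rejected (R).
    Assumes at least one accurate sample is rejected, otherwise RQ is undefined."""
    A, M = correct, ~correct
    R, N = rejected, ~rejected
    nra = (A & N).sum() / N.sum()                               # non-rejected accuracy
    cq = ((A & N).sum() + (M & R).sum()) / (N.sum() + R.sum())  # classification quality
    rq = ((M & R).sum() * A.sum()) / ((A & R).sum() * M.sum())  # rejection quality
    return nra, cq, rq
```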
5 EXPERIMENTS AND RESULTS This section describes the experiments run to validate the wrapper proposal and the results obtained. The experiments include two different scenarios: a use case for sentiment analysis using natural language processing and another for image classification. 5.1 A NATURAL LANGUAGE PROCESSING SCENARIO In order to validate the proposal, we use an NLP-based sentiment analysis system applied to product reviews. The goal of the system is to classify each review as positive or negative. The goal of the experiment is two-fold. First, we want to show how to apply the wrapper to a given NLP task. Second, we demonstrate how the proposed method additionally captures the uncertainty caused by the change in domains. To this end, we include different combinations of training and prediction domains in the experiment. The details on the datasets used are the following: • Stanford Sentiment Treebank (Socher et al., 2013), SST-2, binary version, where the task is to classify a movie review as positive or negative. The dataset is split into 65,538 training samples, 872 for validation and 1,821 for testing. • Yelp challenge 2013,6 where the goal is to classify reviews about Yelp venues that users rated using 1 to 5 stars. To be able to reuse a classifier trained on the SST-2 problem, we transform the Yelp dataset from a multiclass set into a binary problem, grouping the ratings below three as negative reviews, and the rest as positive. The dataset is split into 186,189 training samples, 20,691 for validation and 22,991 for testing. • The Amazon Multi-Domain Sentiment dataset contains product reviews taken from Amazon.com for many product types (domains) (Blitzer et al., 2007). As with Yelp, the dataset consists of ratings from 1 to 5 stars, which we label as positive for values greater than or equal to 3, and negative otherwise, split into training, validation and test datasets. We use two of the available domains: music (109,733/12,193/52,254 examples) and electronics (14,495/1,611/6,903 examples). 5.2 AN IMAGE CLASSIFICATION SCENARIO In addition to the NLP use case presented above, we include here a use case for image classification. The task, in this case, is to classify images into one of the categories defined in the dataset. As in NLP, an image classifier trained on a source dataset, acting as the original API, is then applied to a new set of images belonging to a different dataset. Both datasets share almost the same output classes except for one. By predicting the uncertainty of the differing class, we will show how the predicted uncertainty can also be used to detect out-of-sample images. The details on the datasets used for the vision use case are the following: • STL-10 (Coates et al., 2011). The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning and self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. It includes 500 training images and 800 test images per class, belonging to 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck. • CIFAR10 (Krizhevsky, 2009). The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6,000 images per class. There are 50,000 training images and 10,000 test images. 5.3 EXPERIMENT SET UP In every experiment, we use two datasets: (i) a source dataset for training a model, which is considered the black-box model from that moment on, and (ii) a target dataset that corresponds to the domain in which we want to apply the black-box model and where we measure the uncertainty using the proposed wrapper. Specifically, the steps followed in each case are: • Train the black-box. First, we train a classifier with the source dataset. In real scenarios, this step would not be necessary, as we would be using a pre-trained model or a third-party API. • Apply the black-box to the target domain. In this step, we use the black-box to obtain the predictions and evaluate the accuracy on the target dataset, and we can compute the predictive entropy based on the prediction outputs. 6https://www.yelp.com/dataset/challenge • Compute the uncertainty for the target domain using the wrapper model. Once we have the predictions for the target domain, we proceed to train the uncertainty wrapper to approximate the Dirichlet pdf for each input. By sampling from the pdf, we compute the sampled predictive entropy of the average of the outputs to get the uncertainty score for each element in the target dataset. • Apply the rejection mechanism. Finally, we use the uncertainty score to sort the predictions from more to less uncertain, and we search for a rejection point that maximizes the three performance measures: non-rejected accuracy, classification quality and rejection quality. A minimal end-to-end sketch of this pipeline is given below.
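Putting the pieces together, here is a hedged end-to-end sketch of the evaluation loop, reusing the sampled_predictive_entropy and rejection_metrics helpers from the earlier snippets; black_box and wrapper are hypothetical callables standing in for the pre-trained API and the trained wrapper network.

```python
import torch

def evaluate_with_rejection(black_box, wrapper, X, y, reject_frac=0.1):
    """black_box(X) -> (n, C) class probabilities; wrapper(X) -> (n,) positive betas.
    Rejects the reject_frac most uncertain samples and reports the three metrics."""
    y_m = black_box(X)
    scores = sampled_predictive_entropy(y_m, wrapper(X))  # one uncertainty score per sample
    order = torch.argsort(scores, descending=True)        # most uncertain first
    rejected = torch.zeros(len(y), dtype=torch.bool)
    rejected[order[: int(reject_frac * len(y))]] = True
    correct = y_m.argmax(dim=1) == y
    return rejection_metrics(correct, rejected)
```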
We run five different scenarios: training the black-box with the Yelp dataset and applying it to SST-2 and vice versa; training the black-box with the Amazon electronics product reviews and applying it to the Amazon music product reviews, and vice versa; and training with STL-10 and applying the black-box to CIFAR10. For each scenario, a selection of optimal training parameters was carried out, including learning rates, batch sizes, numbers of units and numbers of epochs. Details on the architectures used for the black-box are given in Appendix A. 5.4 RESULTS In order to show the effect of applying the uncertainty wrapper to each of the target domains, we compute the uncertainty score using the three different metrics described in Section 3: the predictive entropy of the black-box output (baseline), the predictive entropy obtained after training the aleatoric wrapper (pred. entropy), and the variation ratios (var. ratios). Figures 4 to 8 show the results obtained for each combination on the rejection performance metrics for the three uncertainty scores analysed. From left to right, we find the values for non-rejected accuracy, classification quality and rejection quality. The higher the value in the plot, the better the result. According to the results obtained, the proposed method shows better behaviour in all scenarios and metrics. As we remove more samples according to the uncertainty, the proposed method displays much better accuracy and quality than its counterparts. These results validate the hypothesis that the heteroscedastic aleatoric uncertainty computed by the wrapper effectively captures the confidence in the prediction and the samples prone to error. On the contrary, variation ratios are the worst-performing method. Note that, although our proposal performs much better, its absolute gain depends on the scenario. In those domains where the black-box model performs worse, there is more to gain by using the wrapper. If we observe the classification quality (plot at the center of each figure) and the rejection quality, we can see that the proposed metric is also excellent at rejecting the misclassified points. A detailed table with numerical results for the same experiments is included in Appendix B. The results demonstrate how using the uncertainty to reject uncertain predictions helps with the adaptation of a pre-trained model to new application domains. In some cases, the results obtained on the test set of the target domain after rejecting the 10% least certain points surpass those obtained on the source dataset used for training the original model. As a curiosity, the use case where we trained a black-box model using the reviews of Amazon's electronics products achieves better results when applied to the target test dataset than to the original test dataset. Even in this case, where the applied classifier reaches an accuracy of more than 90%, the proposed method increases it by almost 5 points. In Appendix C, we analyse how, for the case of images, the proposed method can detect out-of-sample images that belong to an unseen category. 6 CONCLUSIONS AND FUTURE WORK In this work, we introduced a deep learning wrapper technique that can endow any black-box model with uncertainty features.
The wrapper uses a reparameterization trick on the Dirichlet distribution, and it can capture the distribution over the multinomial parameters of the output of the black-box classifier. We use the predicted uncertainty to fuel a rejection method and show how this helps in assessing the fitness of a model to a new domain or dataset. By measuring the sampling uncertainty and using it for rejection, we can improve the accuracy results by 4%-8% by rejecting just 10% of the samples. Additionally, the method displays significant rejection quality values. These results tell us that the predicted uncertainty focuses on intricate, ambiguous, or error-prone cases. We show successful and encouraging results in both NLP and computer vision domains. As future work, we plan to keep exploring different architectures and strategies for the wrapper implementation and to focus on other cases commonly found in real-life implementations, such as how to deal with high-dimensional and categorical outputs. A APPENDIX A For the sake of reproducibility, this Appendix details the architectures used for training the black-box systems. Figure 9 describes the model used for training the black-box models in the two use cases. As stated before, the only purpose of this model is to obtain a black-box classifier for a given source domain. The goal, in this case, is not to obtain the best classifier but to obtain a model which is easy to train and offers good performance. The main difference between the models for NLP and image classification comes from the embedding component. In the case of NLP, we opted for representing a sentence as the average value of the embeddings of its words, using pre-trained word2vec embeddings. In the case of images, we trained a MobileNet v2 model (Sandler et al., 2018), initialized with ImageNet weights, using as input the STL-10 images, resized to 32x32x3 to match the CIFAR10 dataset.7 B APPENDIX B Table 1 details the numerical results obtained during the experiments for the four combinations tested. The first column, black-box source acc, describes the accuracy obtained on the source dataset after training the original classifier. Next, the column black-box target acc describes the accuracy obtained when applying the black-box to the target dataset. The rest of the columns show the non-rejected accuracy and the classification and rejection quality after rejecting 10, 20 and 30% of the points, using the proposed predictive entropy as a rejector. 7We tried other embeddings such as ELMo and Seq2Seq for text, and VGG-16 (Simonyan & Zisserman, 2015) and ResNet50 (Szegedy et al., 2015) for images, but we stuck with word2vec and MobileNet due to limitations on the computing resources. C APPENDIX C This Appendix shows detailed results for the image case. Although the quality obtained for the rejection mechanism in the case of images is not as high as for texts, when compared to the predictive entropy of the original classifier, the proposed measure is still excellent for detecting out-of-sample images. The main difference between STL-10 and CIFAR10 is a variation in one of the classes: where class 6 in STL-10 holds monkeys, in CIFAR10 it corresponds to frogs. As one can expect, the black-box model trained with STL-10 will struggle to detect frogs. Figure 10: Distribution of the predicted entropies for two of the CIFAR10 classes.
Figure 11: Entropies for frogs. Figure 12: Entropies for trucks. In Figures 11 and 12, we can see the entropy distributions for the images that belong to the frog class and for those that belong to the truck class. For the frog class, we see that the uncertainty values are concentrated in the higher band of the diagram, whereas for trucks we find many images with lower uncertainty. This shows that the metric assigns significant uncertainty to points of the out-of-sample class.
1. What is the main contribution of the paper regarding uncertainty quantification in deep learning models? 2. How does the proposed approach utilize a Dirichlet prior to wrap a black-box model for decision-making purposes? 3. Can you elaborate on the experimental results demonstrated in the paper, particularly in NLP and CV domains? 4. What are the limitations of the method, especially concerning the potential dropping of significant samples? 5. Are there any previous works that have explored the addition of a Dirichlet prior for uncertainty quantification in deep learning models?
Review
Review In this paper, the authors propose to use a Dirichlet prior over the multinomial distribution output by black-box DL models to quantify uncertainty in predictions. The main contribution is to learn the parameters of the prior and use it as a wrapper over the black box, to adjudicate whether to retain or reject a particular prediction made by the model. The Dirichlet parameters are learnt in conjunction with the model parameters as a fine-tuning step in transfer learning tasks. Experiments on NLP and CV domains and multiple datasets demonstrate the efficacy of the method. The paper is well written and easy to understand. The main motivation of the paper seems to be to learn what samples to drop, but the authors do not address what can be done about the dropped samples (i.e. what happens if we end up having to drop 80% of the samples?). Adding a Dirichlet prior to quantify uncertainty has also been studied in the context of VAEs before (and LDA back in the day), so conceptually there's limited novelty. Nonetheless, the method seems to provide impressive results on multiple datasets, and I think this is interesting enough to warrant an accept. MINOR: 1. Figure 1: the value of β is the same in all subfigures 2. Sec 2.2 line 1: SImilarly --> Similarly
ICLR
Title Dirichlet Wrapper to Quantify Classification Uncertainty in Black-Box Systems Abstract Nowadays, machine learning models are becoming a utility in many sectors. AI companies deliver pre-trained encapsulated models as application programming interfaces (APIs) that developers can combine with third party components, their models, and proprietary data, to create complex data products. This complexity and the lack of control and knowledge of the internals of these external components might cause unavoidable effects, such as lack of transparency, difficulty in auditability, and the emergence of uncontrolled potential risks. These issues are especially critical when practitioners use these components as black-boxes in new datasets. In order to provide actionable insights in this type of scenarios, in this work we propose the use of a wrapping deep learning model to enrich the output of a classification black-box with a measure of uncertainty. Given a black-box classifier, we propose a probabilistic neural network that works in parallel to the black-box and uses a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters of the classifier and enables the estimation of aleatoric uncertainty for any data sample. Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions, discarding those more uncertain, leading to an improvement in the trustability of the resulting system. We showcase the proposed technique and methodology in two practical scenarios, one for NLP and another for computer vision, where a simulated API based is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation to wrong predictions and misclassifications. 1 INTRODUCTION The popularity of machine learning is giving birth to new business models based on the productization and service of these models. In the market there are many application programming interfaces (APIs) serving predictions in object recognition for images (Vision AI1), language detection or sentiment analysis in natural language processing (Cloud Natural Language API2), to mention just a few. As this Machine Learning-as-a-Service model starts to grow, it becomes easier to find these APIs as an integral component of more complex products. The use of pre-trained models gives rise to two different problems. First, we do not know how these models are going to operate in our intended application domain. In order to address this issue, there is a vast literature on transfer learning that can be applied. However, when using third-party proprietary software or APIs, we may not have access to the internals or the possibility of finetuning the model to our domain. If we are to use the model as it is, one must at least understand when the model is going to work and when it is not, to have some confidence metric that tells about the expected performance of the methods when applied to our problem. However, this information is not always provided, especially in deep learning models. This effect can be worsened when these components are just one of the many different parts of a data product. This complexity leads us to the second problem: when different models might interact in complex pipelines, the construction of the appropriate confidence measures can be a challenging task. 
1https://cloud.google.com/vision/ 2https://cloud.google.com/natural-language/ In order to solve the previous issues, in this article, we propose a deep learning wrapper algorithm that equips any black-box model with uncertainty prediction. Here a wrapper is understood as a machine learning model that takes any other model and operates without accessing its internals. Because it does not have access to the internal states, parameters, or architecture of the model it is wrapping, the wrapper is model agnostic and can be used on top of any other algorithm as long as it satisfies some desideratum. In this article, we only require the black-box model to produce as output a distribution over the classes, a soft requirement as any model with a soft-max layer satisfies it. More specifically, the proposed wrapper uses a deep learning model and introduces a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters. By sampling from this Dirichlet distribution, the wrapper enables the estimation of aleatoric uncertainty. Uncertainty has been an important topic in machine learning for many years (Koller & Friedman, 2009). With the emergence of deep learning, the reinterpretation of some existing mechanisms such as dropout, or the proposal of stochastic mechanisms such as Montecarlo approaches, has broadened the use of these techniques for accounting for uncertainty in deep models (Gal, 2016). Uncertainty can be categorized into epistemic and aleatoric uncertainty. While the first accounts for the uncertainty that is associated to model parameters, the second corresponds to the uncertainty inherently present in the data3. Uncertainty plays a key role when reporting a decision because it accounts for the reliability of the prediction and can help to show the limitations of the applicability of a machine learning model. In this respect, we advocate for the use of selective prediction (aka rejection techniques) when the uncertainty metric is large in order to avoid potential harm or avoid risks. Selective prediction is a set of techniques based on abstaining from deciding according to some metric threshold. As previously commented, uncertainty is a good candidate for a rejection metric. In literature, we find examples of different rejection functions (Geifman & El-Yaniv, 2019) (De Stefano et al., 2000) and some of them use uncertainty measures (Geifman & El-Yaniv, 2017) for rejection. In the proposed scenario, where we are trying to characterize the uncertainty of a black-box nonmutable model, many of the state-of-the-art techniques are not applicable. For example, some rejectors have to be trained together with the classifier and need access to the internals of the model. In the same line, current models for uncertainty in deep learning need to have access to its internal states (Gal, 2016). The contributions of this article can be summarized as follows: • We propose a wrapper algorithm that equips any other classification model that outputs a distribution over predicted classes with Bayesian treatment without having knowledge or access to its internals. • We use the wrapper to empirically estimate aleatoric uncertainty and show that the computed uncertainty can identify prone to err samples. • Finally, we show that the computed uncertainty can be used by rejection techniques to increase the performance and robustness of the original black-box model in the target domain. 
We show improvements in transfer problems in natural language processing and computer vision problems. In section 2, we introduce the method proposed for building an uncertainty wrapper around a blackbox model. In section 3, we describe how to obtain an uncertainty score from the wrapper output. Section 4 introduces the concept of rejection and rejection performance metrics. In section 5, we showcase the proposed method in four different scenarios for sentiment analysis in natural language processing and one for computer vision. The results obtained corroborate the importance of the rejection method and show the success of the proposed methodology. Finally, section 6 concludes the article. 3Observe that in this application only aleatoric uncertainty matters since we are dealing with pre-trained, non-mutable models. In this particular scenario, aleatoric uncertainty also serves as a measure of the fitness of the model to the data. 2 BUILDING AN UNCERTAINTY WRAPPER Our goal is to build a wrapper algorithm that takes another black-box model and operates on top of it. As such, there are several constraints to observe. First, we need to exclusively operate on the inputs and outputs of the black-box classifier. We are not allowed to use any intermediate or internal value of the black-box model as we need to be agnostic to it. Second, the input of the wrapper has to be compatible with the original distribution over the output classes. In the literature, other proposals suggest a deep learning model for estimating uncertainty. The problem with those approaches, like in (Kendall & Gal, 2017) where they use independent Gaussian random variables to model the pre-activation value of the logits, is that they do not conform to the constraints in our setting. First, having access to the logits before the softmax breaks the black-box assumption; and, second, independent Gaussian distributions impose unnecessary assumptions and need of additional normalization steps. A more natural approach is to consider the output distribution coming from a Dirichlet probability distribution function. 2.1 DIRICHLET CONCENTRATION REPARAMETERIZATION As commented, given a data set D composed of pairs (xi, yi), i = 1 . . . N , with yi ∈ RC , being C the number of different classes, the wrapper output is assumed to come from a Dirichlet probability density function: p(yw|X,w∗) ∼ Dir(α), (1) wherew∗ are the parameters of the wrapper. We propose to use a decomposition of the concentration parameter in two terms to relate the output of the black-box classifier, ym, with the concentration parameter, α, in the Dirichlet distribution of the wrapper. To that effect, we recall some basic statistics of the Dirichlet distribution. Given a Dirichlet random variable x ∈ RC with concentration parameter α ∈ RC , the expected value of the distribution is defined as E(x) = α/ C∑ i=1 αi. Observe that the expected value has the same properties as a probability distribution and that the output of the black-box ym ∈ RC is already a probability distribution. In this sense, we could directly use the output of the black box as the concentration parameter. However, each term of the concentration parameter is not necessarily constrained to the interval [0, 1]. Let us introduce a new scalar parameter, β ∈ R that will model this difference, such that α = βym. Observe that the value of β does not change the expected value of the output of the wrapper and coincides with the output of the black-box model, i.e. E(yw) = ym. 
This decomposition has a simple interpretation: While the output of the black-box classifier stands for the mean, parameter β accounts for the spread of the distribution. The same or similar decomposition can be found in other works in a different context(Malinin & Gales, 2018)(Chen et al., 2018)4. An example of the effect of varying this parameter in a three dimensional Dirichlet distribution is shown in Figures (1a) to (1c). Observe that the higher the value of β, the more pointy the distribution is. 4It is worth noting that in the context of those works, there is a degradation in performance when using Dirichlet. This does not happen in our case since the black-box model is non-mutable. This decoupling allows to effectively isolate the contribution of the black-box and the contribution that remains to be computed, i.e. the value of parameter β. Figure 2 shows the integration of the wrapper (in light orange colour) with the black-box classifier (in light blue colour). Observe that the wrapper consists of two blocks: the Dirichlet reparameterization layer of the wrapper that decouples the influence of the black-box model from the rest (see the dashed line), and a deep learning architecture which aims to compute the scalar value of β5. 2.2 INFERENCE IN THE DIRICHLET SETTING SImilarly to (Kendall & Gal, 2017), we approximate the expected value of the classification probabilities using Monte Carlo sampling from the learned Dirichlet distribution for each sample, ŷ.,i ∼ Dir(αi) as E[ŷi] = 1M ∑M m=1 ŷm,i. This distribution is used to define the loss function for our learning stage. Given a set of N training samples, we will use a regularized version of the cross-entropy loss function as follows: L(W ) = − 1 N N∑ i=1 1 C C∑ c=1 yi,c logE[ŷi]c + λ‖β‖2 = − 1 N 1 C N∑ i=1 C∑ c=1 yi,c log ( 1 M M∑ m=1 ŷm,i,c ) + λ‖β‖2. Observe that we introduce the norm of the β value in the minimization function. This term is required since the unregularized cross-entropy forces the value of β to grow unbounded. By adding this term, we control its growth and govern the trade-off with a scalarization parameter λ. 3 OBTAINING AN UNCERTAINTY SCORE FROM THE WRAPPER The described Dirichlet layer effectively allows studying the variability of the parameters of the black-box output. This variability can be used to approximate a value for the heteroscedastic aleatoric uncertainty. In this work, we use Monte Carlo simulation sampling from the obtained Dirichlet function in order to characterize the uncertainty (Gal, 2016). Standard techniques for measuring uncertainty includes variation ratios or predictive entropy. Variation ratios measures the variability of the predictions obtained from the sampling (Freeman, 1965) by computing the fraction of samples with the correct output. Alternatively, predictive entropy considers the average amount of information contained in the predictive distribution. Those results with low entropy values correspond to confident predictions, whereas high entropy leads to large uncertainty. Since the output of the black-box model ym already describes a probability distribution, one could compute its predictive entropy and obtain a measure of its uncertainty with H = − ∑ c y m c log y m c 5The architecture used in this figure corresponds to the one used in the experimental section. However, as the wrapper allows us to model the variability of the parameters of the black-box output distribution, we can compute a predictive entropy that takes into account the variability of the predicted value. 
In this case, the sampled predictive entropy is defined as H = − ∑ c E[ŷ]c logE[ŷ]c. As we show in the experimental section, this latter approach captures better the uncertainty compared to the predictive entropy of the original model. 4 USING UNCERTAINTY FOR REJECTION Rejection is a mechanism that, given a particular metric related to the confidence in the decision, allows discarding a prediction if the metric value is below some threshold. In our proposal, we use the wrapper computed uncertainty as this rejection metric. In the context of our use cases, the hypothesis is that texts or images with high uncertainty are prone to be misclassified by the blackbox model. In order to use the uncertainty score for evaluating the performance of the black-box in a new dataset, we first proceed to obtain the predictions applying the original model. Then for each pair of data and prediction, we obtain the associated uncertainty score using the wrapper. Next, we sort the predictions based on the uncertainty score, from more uncertain to more confident. From that ordering, we set the rejection threshold that marks where to start trusting the classification model. In order to evaluate the rejection metric, we split the dataset using two criteria: whether the method Rejects the data point or Not; and whether the point is Accurately classified, or Misclassified named as R, N, A or M respectively. Using this terminology, we follow the guidelines in (Condessa et al., 2015) for rejection quality metrics. We have three quality metrics, illustrated in 3: • Non-rejected Accuracy measures the ability of the classifier to classify non-rejected samples accurately: NRA = |A ⋂ N | |N | • Classification Quality measures the ability of the classifier with rejection to classify nonrejected samples accurately and to reject misclassified samples: CQ = |A ⋂ N |+|M ⋂ R| |N |+|R| • Rejection Quality measures the ability to concentrate all misclassified samples onto the set of rejected samples:RQ = |M ⋂ R||A| |A ⋂ R||M | A good rejection point will show a trade-off between the three metrics, being able to divide the misclassified predictions from the right ones and preserve only those points that provide useful information. The higher the value displayed, the better that metric performs for rejection. 5 EXPERIMENTS AND RESULTS This section describes the experiments run for validating the wrapper proposal and results obtained. The experiments include two different scenarios: a use case for sentiment analysis using natural language processing and, another, for image classification. 5.1 A NATURAL LANGUAGE PROCESSING SCENARIO In order to validate the proposal, we use an NLP-based sentiment analysis system applied to product reviews. The goal of the system is to classify each review on whether it is positive or negative. The goal of the experiment is two-fold. First, we want to show how to apply the wrapper for a given NLP task. Second, we demonstrate how the proposed method additionally captures the uncertainty caused by the change in domains. To this end, we include different combinations of training and prediction domains in the experiment. The details on the datasets used are the following: • Stanford Sentiment Treebank (Socher et al., 2013), SST-2, binary version where the task is to classify a movie review in positive or negative. The dataset is split in 65,538 test samples, 872 for validation and 1,821 for testing. 
• Yelp challenge 20136, the goal is to classify reviews about Yelp venues where their users rated them using 1 to 5 stars. To be able to reuse a classifier trained with the SST-2 problem, we transform the Yelp dataset from a multiclass set to a binary problem, grouping the ratings below three as a negative review, and as positive otherwise. The dataset is split in 186,189 test samples, 20,691 for validation and 22,991 for testing. • Amazon Multi-Domain Sentiment dataset contains product reviews taken from Amazon.com from many product types (domains) (Blitzer et al., 2007). As in Yelp, the dataset consists on ratings from 1 to 5 stars that we label as positive for those with values greater or equal to 3, and negative otherwise, split into training, validation and test datasets. We use two of the domains available: music (109,733/12,193/52,254 examples) and electronics (14,495/1,611/6,903 examples). 5.2 AN IMAGE CLASSIFICATION SCENARIO In addition to the NLP use case presented above, we include here a use case for image classification. The task, in this case, is to classify images in one of the categories defined in the dataset. As in NLP, an image classifier trained using a source dataset, acting as the original API, is then applied to a new set of images belonging to a different dataset. Both datasets share almost the same output classes except for one. By predicting the uncertainty of the different class, we will show how the predicted uncertainty can also be used to detect out of sample images. The details on the datasets used for the vision use case are the following: • STL-10 (Coates et al., 2011), The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. It includes 500 training images, 800 test images per class, belonging to 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck. • CIFAR10 (Krizhevsky, 2009), The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes(airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6000 images per class. There are 50000 training images and 10000 test images. 5.3 EXPERIMENT SET UP On every experiment, we use two datasets: (i) a source dataset for training a model, that will be considered the black-box model from that moment on and (ii) a target dataset that corresponds to the domain we want to apply the black-box model and where to measure the uncertainty using the proposed wrapper . Specifically, the steps followed on each case are: • Train the black-box. First, we train a classifier with the source dataset. In real scenarios, this step would not be necessary as we would be using a pre-trained model or third-party API. • Apply the black-box to the target domain. In this step, we use the black-box to obtain the predictions and evaluate the accuracy of the target dataset, and we can compute the predictive entropy based on the prediction outputs. 6https://www.yelp.com/dataset/challenge • Compute the uncertainty for the target domain using the wrapper model. Once we have the predictions for the target domain, we proceed to train the uncertainty wrapper to approximate the Dirichlet pdf for each input. By sampling the pdf, we compute the sampling predictive entropy of the average of the outputs to get the uncertainty score for each element in the target dataset. • Apply the rejection mechanism. 
Finally, we use the uncertainty score to sort the predictions from more to less uncertain, and we search for a rejection point that maximizes the three performance measures: non-rejected accuracy, and classification and rejection quality. We run five different scenarios, including the training the black-box with the Yelp dataset and applying it to SST-2 and vice-versa, training the black-box with the Amazon electronic products reviews dataset and applying it to Amazon music products reviews, and vice-versa, and training with STL-10 and applying the black-box to CIFAR10. For each scenario, a selection of optimal training parameters was carried out, including learning rates, batch sizes, number of units and number of epochs. Details on the architectures used for the black-box are given in the Appendix A. 5.4 RESULTS In order to show the effect of the application of the uncertainty wrapper on each of the target domains, we compute the uncertainty score using the three different metrics described in section 4: the predictive entropy of the black-box output (baseline), the predictive entropy obtained after training the aleatoric wrapper (pred. entropy), and the variation ratios (var. ratios). Figures 4 to 8 show the results obtained on each combination for the rejection performance metrics for the three uncertainty scores analysed. From left to right, we find the values for non-rejected accuracy, classification quality and rejection quality. The higher the value in the plot, the better the result. According to the results obtained, the proposed method shows better behaviour in all scenarios and metrics. As we remove more samples according to the uncertainty, the proposed method displays much better accuracy and quality than its counterparts. These results validate the hypothesis that the heteroscedastic aleatoric uncertainty computed by the wrapper effectively captures the confidence in the prediction and the samples prone to error. On the contrary, variation ratios are the worst performant method. Note that, although our proposal performs much better, its absolute gain depends on the scenario. In those domains where the black-box model performs worse, there is more to gain by using the wrapper. If we observe the classification quality (plot at the center of each figure) and the rejection quality, we can see that the proposed metric is also excellent at rejecting the misclassified points. A detailed table with numerical results for the same experiments is included in Appendix B. Results demonstrate how the usage of the uncertainty for rejecting uncertain predictions helps with the adaptation of a pre-trained model to new domains of application. In some cases, the results obtained for the test dataset of the target domain by rejecting 10% of the less certain points overtake those obtained by the source dataset used for training the original model. As a curiosity, the use case where we trained a black-box model using the reviews of Amazon’s electronics products achieves better results when applied to the test target dataset than to the original test dataset. Even in this case, where the applied classifier reaches an accuracy of more than 90 %, the proposed method increases it in almost 5 points. In Appendix C, we analyse how, for the case of images, the proposed method can detect out-of-sample images that belong to an unseen category. 6 CONCLUSIONS AND FUTURE WORK In this work, we introduced a deep learning wrapper technique that can endow any black-box model with uncertainty features. 
The wrapper uses a reparameterization trick on the Dirichlet distribution, and it captures the distribution over the multinomial parameters of the output of the black-box classifier. We use the predicted uncertainty to drive a rejection method and show how this helps in assessing the fitness of a model to a new domain or dataset. By measuring the sampled uncertainty and using it for rejection, we can improve the accuracy results by 4%-8% while rejecting just 10% of the samples. Additionally, the method attains high values of rejection quality. These results tell us that the predicted uncertainty concentrates on intricate, ambiguous, or error-prone cases. We report successful and encouraging results in both the NLP and computer vision domains. As future work, we plan to keep exploring different architectures and strategies for the wrapper implementation and to focus on other cases commonly found in real-life deployments, such as dealing with high-dimensional and categorical outputs.
A APPENDIX A
For the sake of reproducibility, this Appendix details the architectures used for training the black-box systems. Figure 9 describes the model used for training the black-box models in the two use cases. As stated before, the only purpose of this model is to obtain a black-box classifier for a given source domain. The goal here is not to obtain the best classifier but a model that is easy to train and offers good performance. The main difference between the models for NLP and image classification comes from the embedding component. In the case of NLP, we opted to represent a sentence as the average of the embeddings of its words, using pre-trained word2vec embeddings. In the case of images, we trained a MobileNet v2 model (Sandler et al., 2018), initialized with ImageNet weights, using as input the STL-10 images resized to 32x32x3 to match the CIFAR10 dataset. (We also tried other embeddings, such as ELMo and Seq2seq for text, or VGG-16 (Simonyan & Zisserman, 2015) and ResNet50 (Szegedy et al., 2015) for images, but we kept word2vec and MobileNet due to limitations on computing resources.)
B APPENDIX B
Table 1 details the numerical results obtained during the experiments for the four combinations tested. The first column, black-box source acc, reports the accuracy obtained on the source dataset after training the original classifier. The next column, black-box target acc, reports the accuracy obtained when applying the black-box to the target dataset. The remaining columns show the non-rejected accuracy and the classification and rejection quality after rejecting 10, 20, and 30% of the points, using the proposed predictive entropy as the rejector.
C APPENDIX C
This Appendix shows detailed results for the image case. Although the quality obtained by the rejection mechanism for images is not as high as for text, when compared with the predictive entropy of the original classifier, the proposed measure is still excellent for detecting out-of-sample images. The main difference between STL-10 and CIFAR10 is a variation in one of the classes: whereas in STL-10 class 6 holds monkeys, in CIFAR10 it corresponds to frogs. As one would expect, the black-box model trained on STL-10 will struggle to detect frogs.
Figure 10: Distribution of the predicted entropies for two of the CIFAR10 classes.
Figure 11: Entropies for frogs
Figure 12: Entropies for trucks
In Figures 11 and 12, we can see the distributions for the images belonging to the frogs class and for those belonging to the trucks class. For the frogs class, the uncertainty values are concentrated in the higher band of the diagram, whereas for trucks we find many images with lower uncertainty. This shows that the metric assigns significant uncertainty to out-of-sample class points.
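As an illustration of the per-class analysis in Appendix C, the following minimal sketch (ours, not the authors' code; the array names, stand-in data, and class indices are assumptions) groups the sampled predictive entropies by class and summarizes the two distributions:

```python
import numpy as np

# Assumed inputs: mean Dirichlet-sampled probabilities per image and class labels.
rng = np.random.default_rng(0)
mean_probs = rng.dirichlet(np.ones(10), size=2000)   # stand-in for E[y_hat]
labels = rng.integers(0, 10, size=2000)              # stand-in for CIFAR10 labels

# Sampled predictive entropy per image
entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

FROG, TRUCK = 6, 9                                   # assumed class indices
for name, cls in [("frogs", FROG), ("trucks", TRUCK)]:
    h = entropy[labels == cls]
    print(f"{name}: median entropy {np.median(h):.3f}, "
          f"90th pct {np.percentile(h, 90):.3f}")
# An out-of-sample class (frogs, for an STL-10 model) should show a
# noticeably higher entropy band than an in-sample class such as trucks.
```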
1. What is the focus of the paper, and what are the reviewer's concerns regarding its novelty? 2. How does the proposed method compare to other established techniques in the field? 3. Are there any unclear aspects of the modeling setup and experimental details? 4. How does the reviewer suggest improving the paper? 5. What specific issues does the reviewer have with the paper's presentation of results and metrics?
Review
Review This paper presents a method for learning a “wrapper” model which endows a multiclass predictor with an estimate of model uncertainty. The base model is treated as a black box which emits a categorical distribution, while the wrapper model estimates the parameters of a Dirichlet distribution. The wrapper is trained against silver labels from the base model, and the sampled predictive entropy is used to threshold predictions so as to withhold a decision on uncertain examples. I believe this paper should not be accepted, as the main contribution is not particularly novel and some experimental details (Section 3) are not well-motivated. In particular, the idea of learning a Dirichlet prior is very similar to that proposed in (Malinin and Gales, NeurIPS 2018). The main contribution in this work seems to be the application to an existing black-box model, but this seems like a straightforward application of knowledge distillation, another well-established technique (e.g. Hinton et al. 2015, though oddly, this paper doesn’t mention this). The details of the modeling set-up (section 2.2 and 3) are also not entirely clear. It seems from the preceding discussion that the main goal of the wrapper model is to estimate \beta, but it is not clear how the sampling procedure (2.2) allows for this. It seems that the value of the “sampled predictive entropy” (no equation number, but see end of Section 3) is just that of the mean predicted distribution, and it’s not clear why this should be different (assuming the wrapper model converges) than the predictive entropy of the base model. This paper could be improved by a more thorough comparison to the literature, and by a clearer motivation for the training procedure used. Additionally, it would help to present a summary table of results, and to frame the metrics in terms of standard classification metrics such as precision and recall where applicable.
ICLR
Title Dirichlet Wrapper to Quantify Classification Uncertainty in Black-Box Systems
Abstract Nowadays, machine learning models are becoming a utility in many sectors. AI companies deliver pre-trained, encapsulated models as application programming interfaces (APIs) that developers can combine with third-party components, their own models, and proprietary data to create complex data products. This complexity, together with the lack of control and knowledge of the internals of these external components, can cause undesirable effects, such as a lack of transparency, difficulty of auditing, and the emergence of uncontrolled potential risks. These issues are especially critical when practitioners use these components as black-boxes on new datasets. To provide actionable insights in this type of scenario, in this work we propose the use of a wrapping deep learning model to enrich the output of a classification black-box with a measure of uncertainty. Given a black-box classifier, we propose a probabilistic neural network that works in parallel with the black-box and uses a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters of the classifier and enables the estimation of aleatoric uncertainty for any data sample. Based on the resulting uncertainty measure, we advocate a rejection system that keeps the more confident predictions and discards the more uncertain ones, improving the trustworthiness of the resulting system. We showcase the proposed technique and methodology in two practical scenarios, one for NLP and another for computer vision, where a simulated API is applied to different domains. The results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation with wrong predictions and misclassifications.
1 INTRODUCTION
The popularity of machine learning is giving birth to new business models based on the productization and serving of these models. On the market, there are many application programming interfaces (APIs) serving predictions for object recognition in images (Vision AI, https://cloud.google.com/vision/) or language detection and sentiment analysis in natural language processing (Cloud Natural Language API, https://cloud.google.com/natural-language/), to mention just a few. As this Machine-Learning-as-a-Service model grows, it becomes easier to find these APIs as integral components of more complex products.
The use of pre-trained models gives rise to two different problems. First, we do not know how these models are going to operate in our intended application domain. There is a vast literature on transfer learning that can be applied to this issue; however, when using third-party proprietary software or APIs, we may not have access to the internals or the possibility of fine-tuning the model to our domain. If we are to use the model as it is, we must at least understand when the model is going to work and when it is not, i.e., have some confidence metric that speaks to the expected performance of the method when applied to our problem. However, this information is not always provided, especially for deep learning models. The effect is worsened when these components are just one of the many parts of a data product. This complexity leads to the second problem: when different models interact in complex pipelines, constructing appropriate confidence measures can be a challenging task.
In order to solve the previous issues, in this article we propose a deep learning wrapper algorithm that equips any black-box model with uncertainty prediction. Here, a wrapper is understood as a machine learning model that takes any other model and operates on top of it without accessing its internals. Because it has no access to the internal states, parameters, or architecture of the model it wraps, the wrapper is model-agnostic and can be used on top of any other algorithm as long as it satisfies some desiderata. In this article, we only require the black-box model to output a distribution over the classes, a soft requirement satisfied by any model with a softmax layer. More specifically, the proposed wrapper uses a deep learning model and introduces a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters. By sampling from this Dirichlet distribution, the wrapper enables the estimation of aleatoric uncertainty.
Uncertainty has been an important topic in machine learning for many years (Koller & Friedman, 2009). With the emergence of deep learning, the reinterpretation of existing mechanisms such as dropout, and the proposal of stochastic mechanisms such as Monte Carlo approaches, have broadened the use of these techniques for accounting for uncertainty in deep models (Gal, 2016). Uncertainty can be categorized into epistemic and aleatoric uncertainty. While the first accounts for the uncertainty associated with the model parameters, the second corresponds to the uncertainty inherently present in the data (observe that in this application only aleatoric uncertainty matters, since we are dealing with pre-trained, non-mutable models; in this particular scenario, aleatoric uncertainty also serves as a measure of the fitness of the model to the data). Uncertainty plays a key role when reporting a decision because it accounts for the reliability of the prediction and can help to show the limits of applicability of a machine learning model. In this respect, we advocate the use of selective prediction (a.k.a. rejection techniques) when the uncertainty metric is large, in order to avoid potential harm or risks. Selective prediction is a set of techniques based on abstaining from deciding according to some metric threshold. As commented above, uncertainty is a good candidate for a rejection metric. In the literature, we find examples of different rejection functions (Geifman & El-Yaniv, 2019; De Stefano et al., 2000), and some of them use uncertainty measures for rejection (Geifman & El-Yaniv, 2017).
In the proposed scenario, where we are trying to characterize the uncertainty of a non-mutable black-box model, many state-of-the-art techniques are not applicable. For example, some rejectors have to be trained together with the classifier and need access to the internals of the model. Along the same lines, current models for uncertainty in deep learning need access to internal states (Gal, 2016). The contributions of this article can be summarized as follows:
• We propose a wrapper algorithm that equips any classification model that outputs a distribution over predicted classes with a Bayesian treatment, without knowledge of or access to its internals.
• We use the wrapper to empirically estimate aleatoric uncertainty and show that the computed uncertainty can identify error-prone samples.
• Finally, we show that the computed uncertainty can be used by rejection techniques to increase the performance and robustness of the original black-box model in the target domain.
We show improvements in transfer problems in both natural language processing and computer vision.
In Section 2, we introduce the method proposed for building an uncertainty wrapper around a black-box model. In Section 3, we describe how to obtain an uncertainty score from the wrapper output. Section 4 introduces the concept of rejection and the rejection performance metrics. In Section 5, we showcase the proposed method in four different scenarios for sentiment analysis in natural language processing and one for computer vision. The results obtained corroborate the importance of the rejection method and show the success of the proposed methodology. Finally, Section 6 concludes the article.
2 BUILDING AN UNCERTAINTY WRAPPER
Our goal is to build a wrapper algorithm that takes another black-box model and operates on top of it. As such, there are several constraints to observe. First, we must operate exclusively on the inputs and outputs of the black-box classifier: we are not allowed to use any intermediate or internal value of the black-box model, as we need to be agnostic to it. Second, the input of the wrapper has to be compatible with the original distribution over the output classes. In the literature, other proposals suggest a deep learning model for estimating uncertainty. The problem with those approaches, like (Kendall & Gal, 2017), where independent Gaussian random variables model the pre-activation values of the logits, is that they do not conform to the constraints of our setting. First, having access to the logits before the softmax breaks the black-box assumption; second, independent Gaussian distributions impose unnecessary assumptions and the need for additional normalization steps. A more natural approach is to consider the output distribution as coming from a Dirichlet probability distribution function.
2.1 DIRICHLET CONCENTRATION REPARAMETERIZATION
As commented, given a dataset $D$ composed of pairs $(x_i, y_i)$, $i = 1 \dots N$, with $y_i \in \mathbb{R}^C$, where $C$ is the number of different classes, the wrapper output is assumed to come from a Dirichlet probability density function: $p(y^w \mid X, w^*) \sim \mathrm{Dir}(\alpha)$ (1), where $w^*$ are the parameters of the wrapper. We propose a decomposition of the concentration parameter into two terms to relate the output of the black-box classifier, $y^m$, to the concentration parameter, $\alpha$, of the Dirichlet distribution of the wrapper. To that effect, we recall some basic statistics of the Dirichlet distribution. Given a Dirichlet random variable $x \in \mathbb{R}^C$ with concentration parameter $\alpha \in \mathbb{R}^C$, the expected value of the distribution is $E(x) = \alpha / \sum_{i=1}^{C} \alpha_i$. Observe that the expected value has the same properties as a probability distribution, and that the output of the black-box $y^m \in \mathbb{R}^C$ is already a probability distribution. In this sense, we could directly use the output of the black box as the concentration parameter. However, each term of the concentration parameter is not necessarily constrained to the interval $[0, 1]$. Let us introduce a new scalar parameter, $\beta \in \mathbb{R}$, that models this difference, such that $\alpha = \beta y^m$. Observe that the value of $\beta$ does not change the expected value of the output of the wrapper, which coincides with the output of the black-box model, i.e. $E(y^w) = y^m$.
This decomposition has a simple interpretation: while the output of the black-box classifier stands for the mean, the parameter $\beta$ accounts for the spread of the distribution. The same or a similar decomposition can be found in other works in different contexts (Malinin & Gales, 2018; Chen et al., 2018) (it is worth noting that, in the context of those works, there is a degradation in performance when using the Dirichlet; this does not happen in our case, since the black-box model is non-mutable). An example of the effect of varying this parameter in a three-dimensional Dirichlet distribution is shown in Figures 1a to 1c. Observe that the higher the value of $\beta$, the more pointy the distribution is. This decoupling allows us to effectively isolate the contribution of the black-box from the contribution that remains to be computed, i.e. the value of the parameter $\beta$. Figure 2 shows the integration of the wrapper (in light orange) with the black-box classifier (in light blue). Observe that the wrapper consists of two blocks: the Dirichlet reparameterization layer, which decouples the influence of the black-box model from the rest (see the dashed line), and a deep learning architecture that computes the scalar value of $\beta$ (the architecture shown in this figure corresponds to the one used in the experimental section).
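To make the reparameterization concrete, the following minimal sketch (ours, not the authors' code; the example probabilities are an assumption) draws Dirichlet samples with $\alpha = \beta y^m$ and shows that the sample mean stays close to $y^m$ for every $\beta$, while the spread shrinks as $\beta$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box output for one sample: a distribution over C=3 classes.
black_box_probs = np.array([0.7, 0.2, 0.1])   # y^m, assumed given by the API

for beta in [2.0, 20.0, 200.0]:
    alpha = beta * black_box_probs            # concentration alpha = beta * y^m
    samples = rng.dirichlet(alpha, size=10000)
    # The mean stays ~y^m for every beta; only the standard deviation changes.
    print(beta, samples.mean(axis=0).round(3), samples.std(axis=0).round(3))
```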
2.2 INFERENCE IN THE DIRICHLET SETTING
Similarly to (Kendall & Gal, 2017), we approximate the expected value of the classification probabilities using Monte Carlo sampling from the learned Dirichlet distribution for each sample, $\hat{y}_{\cdot,i} \sim \mathrm{Dir}(\alpha_i)$, as $E[\hat{y}_i] = \frac{1}{M}\sum_{m=1}^{M}\hat{y}_{m,i}$. This distribution is used to define the loss function for our learning stage. Given a set of $N$ training samples, we use a regularized version of the cross-entropy loss function:
$$L(W) = -\frac{1}{N}\sum_{i=1}^{N}\frac{1}{C}\sum_{c=1}^{C} y_{i,c}\,\log E[\hat{y}_i]_c + \lambda\lVert\beta\rVert^2 = -\frac{1}{NC}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log\Big(\frac{1}{M}\sum_{m=1}^{M}\hat{y}_{m,i,c}\Big) + \lambda\lVert\beta\rVert^2.$$
Observe that we introduce the norm of $\beta$ in the objective. This term is required because the unregularized cross-entropy pushes the value of $\beta$ to grow unboundedly; by adding it, we control the growth of $\beta$ and govern the trade-off with a scalarization parameter $\lambda$.
3 OBTAINING AN UNCERTAINTY SCORE FROM THE WRAPPER
The described Dirichlet layer effectively allows us to study the variability of the parameters of the black-box output. This variability can be used to approximate the heteroscedastic aleatoric uncertainty. In this work, we use Monte Carlo sampling from the obtained Dirichlet distribution to characterize the uncertainty (Gal, 2016). Standard techniques for measuring uncertainty include variation ratios and predictive entropy. Variation ratios measure the variability of the predictions obtained from the sampling (Freeman, 1965) by computing the fraction of sampled predictions that differ from the modal output. Alternatively, predictive entropy considers the average amount of information contained in the predictive distribution: results with low entropy values correspond to confident predictions, whereas high entropy indicates large uncertainty. Since the output of the black-box model $y^m$ already describes a probability distribution, one could compute its predictive entropy and obtain a measure of its uncertainty with $H = -\sum_c y^m_c \log y^m_c$. However, as the wrapper allows us to model the variability of the parameters of the black-box output distribution, we can instead compute a predictive entropy that takes the variability of the predicted value into account. In this case, the sampled predictive entropy is defined as $H = -\sum_c E[\hat{y}]_c \log E[\hat{y}]_c$. As we show in the experimental section, this latter approach captures the uncertainty better than the predictive entropy of the original model (a minimal sketch is given below).
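As an illustration of Sections 2.2 and 3, here is a minimal PyTorch sketch (ours, not the authors' implementation; the tensor shapes, the per-sample $\beta$, and the values of $M$ and $\lambda$ are assumptions) of the regularized Monte Carlo cross-entropy and the sampled predictive entropy; `rsample` is used so that gradients reach $\beta$:

```python
import torch
from torch.distributions import Dirichlet

def wrapper_loss(y_true, y_m, beta, M=50, lam=1e-3):
    """Regularized cross-entropy: Dirichlet with alpha = beta * y_m,
    Monte Carlo average over M samples, plus an L2 penalty on beta."""
    alpha = beta.unsqueeze(-1) * y_m              # (batch, C)
    samples = Dirichlet(alpha).rsample((M,))      # (M, batch, C), reparameterized
    mean_probs = samples.mean(dim=0)              # E[y_hat] per input
    C = y_m.shape[1]
    ce = -(y_true * torch.log(mean_probs + 1e-12)).sum(dim=1).mean() / C
    return ce + lam * (beta ** 2).mean(), mean_probs

def sampled_entropy(mean_probs):
    """Sampled predictive entropy H = -sum_c E[y_hat]_c log E[y_hat]_c."""
    return -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)

# Toy usage: 4 inputs, 3 classes; beta would come from the wrapper network.
y_m = torch.softmax(torch.randn(4, 3), dim=1)     # black-box output probabilities
beta = torch.full((4,), 10.0, requires_grad=True) # spread parameter to learn
y_true = torch.eye(3)[torch.tensor([0, 1, 2, 0])] # one-hot labels
loss, mean_probs = wrapper_loss(y_true, y_m, beta)
loss.backward()                                   # gradients flow to beta via rsample
print(loss.item(), sampled_entropy(mean_probs).detach())
```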
4 USING UNCERTAINTY FOR REJECTION
Rejection is a mechanism that, given a particular metric related to the confidence in the decision, discards a prediction if the metric value falls below some threshold. In our proposal, we use the uncertainty computed by the wrapper as this rejection metric. In the context of our use cases, the hypothesis is that texts or images with high uncertainty are prone to be misclassified by the black-box model. To use the uncertainty score for evaluating the performance of the black-box on a new dataset, we first obtain the predictions by applying the original model. Then, for each pair of data and prediction, we obtain the associated uncertainty score using the wrapper. Next, we sort the predictions by uncertainty score, from most uncertain to most confident. From that ordering, we set the rejection threshold that marks where to start trusting the classification model.
To evaluate the rejection metric, we split the dataset using two criteria: whether the method Rejects the data point or Not, and whether the point is Accurately classified or Misclassified, denoted R, N, A, and M, respectively. Using this terminology, we follow the guidelines in (Condessa et al., 2015) for rejection quality metrics. We have three quality metrics, illustrated in Figure 3:
• Non-rejected Accuracy measures the ability of the classifier to classify non-rejected samples accurately: $\mathrm{NRA} = \frac{|A \cap N|}{|N|}$.
• Classification Quality measures the ability of the classifier with rejection to classify non-rejected samples accurately and to reject misclassified samples: $\mathrm{CQ} = \frac{|A \cap N| + |M \cap R|}{|N| + |R|}$.
• Rejection Quality measures the ability to concentrate all misclassified samples in the set of rejected samples: $\mathrm{RQ} = \frac{|M \cap R|\,|A|}{|A \cap R|\,|M|}$.
A good rejection point shows a trade-off between the three metrics, separating the misclassified predictions from the correct ones and preserving only those points that provide useful information. The higher the value displayed, the better that metric performs for rejection.
5 EXPERIMENTS AND RESULTS
This section describes the experiments run to validate the wrapper proposal and the results obtained. The experiments include two different scenarios: a use case for sentiment analysis using natural language processing and another for image classification.
5.1 A NATURAL LANGUAGE PROCESSING SCENARIO
To validate the proposal, we use an NLP-based sentiment analysis system applied to product reviews. The goal of the system is to classify each review as positive or negative. The goal of the experiment is two-fold. First, we want to show how to apply the wrapper to a given NLP task. Second, we demonstrate how the proposed method additionally captures the uncertainty caused by the change of domains. To this end, we include different combinations of training and prediction domains in the experiment. The details of the datasets used are the following:
• Stanford Sentiment Treebank (Socher et al., 2013), SST-2, binary version, where the task is to classify a movie review as positive or negative. The dataset is split into 65,538 training samples, 872 for validation, and 1,821 for testing.
1. What is the reviewer's overall assessment of the paper's usefulness? 2. What are the issues with the writing and notation in the paper? 3. Does the reviewer understand the objective of the proposed method? 4. How convincing are the results presented in the paper? 5. Are there any concerns regarding the baseline used in the experiments? 6. Are there any other minor comments or suggestions for improvement?
Review
Review This manuscript proposes to train a wrapper to assess the confidence of a black-box classifier's decisions on new samples. The resulting uncertainty prediction is used to reject decisions with low confidence, such that the accuracy of the predictions on retained samples remains high. The idea, although a bit incremental, is potentially useful to practitioners, as argued in the introduction, and some empirical results tend to suggest that the method can be useful. However, I am not convinced the approach and its implementation are appropriate.
Main comments:
Writing: the model seems strongly based on references such as Kendall & Gal (2017); however, the lack of introduction of notation and explanations prevents the paper from being self-contained. For example: (1) here is a list of quantities for which I could not find a definition: y_m, \omega, W; (2) \hat{y} appears with several indexing styles (up to three indices), sometimes bold and sometimes not; the meaning of the indexing (which I could not find) is difficult to infer from the text.
Objective: I was unable to understand the rationale behind the objective to be minimized, introduced on page 4. This is introduced as a cross-entropy loss; however, taking the expression of the Dirichlet distribution, it is not obvious to me how to reach this very simple expression. Is it possible that the authors simply mimic the expression of Kendall and Gal (2017, eq. (5)), which was designed for the Gaussian case, for which the cross-entropy expression is correct? As a consequence, I do not see how this objective is supposed to properly learn the correct beta parameter.
Results: Figures 4-8 show convincing evidence that the procedure improves with respect to a baseline that consists (as far as I understood) in ranking decisions based on the entropy of the output probability vector of the classifier. However, given that I am unsure about what the proposed optimization does, it remains unclear to me whether these results reflect a true achievement. For example, one can argue that the chosen baseline is unlikely to be a good estimate of the entropy of the decision due to the fluctuations of the output probabilities for the unlikely classes. Those low probability values are the classical source of variance and bias in entropy estimates, and a classifier is not designed to get these low probabilities right (as they are low anyway). As a consequence, an already better baseline might be achieved for a given reference class by cropping the probability vector to keep only classes with non-negligible probability over the training set (and there can be many alternative approaches to test). A trivial explanation for the proposed approach working better than the currently chosen baseline is that the noise introduced by sampling from the Dirichlet distribution leads to larger probabilities in the cases where the probability given by the classifier is small, which would reduce the variance of the entropy estimator based on the formula at the top of page 5. Overall, extensively checking many simpler baselines (which do not require training!) is a first step to see whether the achieved result is not easy to get.
Minor comments: (1) Please check for typos. (2) Please avoid multiple parentheses for successive citations (use a single "\citep"). (3) In Fig. 1, the subtitles are inconsistent.
ICLR
Title Dirichlet Wrapper to Quantify Classification Uncertainty in Black-Box Systems Abstract Nowadays, machine learning models are becoming a utility in many sectors. AI companies deliver pre-trained encapsulated models as application programming interfaces (APIs) that developers can combine with third party components, their models, and proprietary data, to create complex data products. This complexity and the lack of control and knowledge of the internals of these external components might cause unavoidable effects, such as lack of transparency, difficulty in auditability, and the emergence of uncontrolled potential risks. These issues are especially critical when practitioners use these components as black-boxes in new datasets. In order to provide actionable insights in this type of scenarios, in this work we propose the use of a wrapping deep learning model to enrich the output of a classification black-box with a measure of uncertainty. Given a black-box classifier, we propose a probabilistic neural network that works in parallel to the black-box and uses a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters of the classifier and enables the estimation of aleatoric uncertainty for any data sample. Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions, discarding those more uncertain, leading to an improvement in the trustability of the resulting system. We showcase the proposed technique and methodology in two practical scenarios, one for NLP and another for computer vision, where a simulated API based is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation to wrong predictions and misclassifications. 1 INTRODUCTION The popularity of machine learning is giving birth to new business models based on the productization and service of these models. In the market there are many application programming interfaces (APIs) serving predictions in object recognition for images (Vision AI1), language detection or sentiment analysis in natural language processing (Cloud Natural Language API2), to mention just a few. As this Machine Learning-as-a-Service model starts to grow, it becomes easier to find these APIs as an integral component of more complex products. The use of pre-trained models gives rise to two different problems. First, we do not know how these models are going to operate in our intended application domain. In order to address this issue, there is a vast literature on transfer learning that can be applied. However, when using third-party proprietary software or APIs, we may not have access to the internals or the possibility of finetuning the model to our domain. If we are to use the model as it is, one must at least understand when the model is going to work and when it is not, to have some confidence metric that tells about the expected performance of the methods when applied to our problem. However, this information is not always provided, especially in deep learning models. This effect can be worsened when these components are just one of the many different parts of a data product. This complexity leads us to the second problem: when different models might interact in complex pipelines, the construction of the appropriate confidence measures can be a challenging task. 
1https://cloud.google.com/vision/ 2https://cloud.google.com/natural-language/ In order to solve the previous issues, in this article, we propose a deep learning wrapper algorithm that equips any black-box model with uncertainty prediction. Here a wrapper is understood as a machine learning model that takes any other model and operates without accessing its internals. Because it does not have access to the internal states, parameters, or architecture of the model it is wrapping, the wrapper is model agnostic and can be used on top of any other algorithm as long as it satisfies some desideratum. In this article, we only require the black-box model to produce as output a distribution over the classes, a soft requirement as any model with a soft-max layer satisfies it. More specifically, the proposed wrapper uses a deep learning model and introduces a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters. By sampling from this Dirichlet distribution, the wrapper enables the estimation of aleatoric uncertainty. Uncertainty has been an important topic in machine learning for many years (Koller & Friedman, 2009). With the emergence of deep learning, the reinterpretation of some existing mechanisms such as dropout, or the proposal of stochastic mechanisms such as Montecarlo approaches, has broadened the use of these techniques for accounting for uncertainty in deep models (Gal, 2016). Uncertainty can be categorized into epistemic and aleatoric uncertainty. While the first accounts for the uncertainty that is associated to model parameters, the second corresponds to the uncertainty inherently present in the data3. Uncertainty plays a key role when reporting a decision because it accounts for the reliability of the prediction and can help to show the limitations of the applicability of a machine learning model. In this respect, we advocate for the use of selective prediction (aka rejection techniques) when the uncertainty metric is large in order to avoid potential harm or avoid risks. Selective prediction is a set of techniques based on abstaining from deciding according to some metric threshold. As previously commented, uncertainty is a good candidate for a rejection metric. In literature, we find examples of different rejection functions (Geifman & El-Yaniv, 2019) (De Stefano et al., 2000) and some of them use uncertainty measures (Geifman & El-Yaniv, 2017) for rejection. In the proposed scenario, where we are trying to characterize the uncertainty of a black-box nonmutable model, many of the state-of-the-art techniques are not applicable. For example, some rejectors have to be trained together with the classifier and need access to the internals of the model. In the same line, current models for uncertainty in deep learning need to have access to its internal states (Gal, 2016). The contributions of this article can be summarized as follows: • We propose a wrapper algorithm that equips any other classification model that outputs a distribution over predicted classes with Bayesian treatment without having knowledge or access to its internals. • We use the wrapper to empirically estimate aleatoric uncertainty and show that the computed uncertainty can identify prone to err samples. • Finally, we show that the computed uncertainty can be used by rejection techniques to increase the performance and robustness of the original black-box model in the target domain. 
We show improvements in transfer problems in natural language processing and computer vision problems. In section 2, we introduce the method proposed for building an uncertainty wrapper around a blackbox model. In section 3, we describe how to obtain an uncertainty score from the wrapper output. Section 4 introduces the concept of rejection and rejection performance metrics. In section 5, we showcase the proposed method in four different scenarios for sentiment analysis in natural language processing and one for computer vision. The results obtained corroborate the importance of the rejection method and show the success of the proposed methodology. Finally, section 6 concludes the article. 3Observe that in this application only aleatoric uncertainty matters since we are dealing with pre-trained, non-mutable models. In this particular scenario, aleatoric uncertainty also serves as a measure of the fitness of the model to the data. 2 BUILDING AN UNCERTAINTY WRAPPER Our goal is to build a wrapper algorithm that takes another black-box model and operates on top of it. As such, there are several constraints to observe. First, we need to exclusively operate on the inputs and outputs of the black-box classifier. We are not allowed to use any intermediate or internal value of the black-box model as we need to be agnostic to it. Second, the input of the wrapper has to be compatible with the original distribution over the output classes. In the literature, other proposals suggest a deep learning model for estimating uncertainty. The problem with those approaches, like in (Kendall & Gal, 2017) where they use independent Gaussian random variables to model the pre-activation value of the logits, is that they do not conform to the constraints in our setting. First, having access to the logits before the softmax breaks the black-box assumption; and, second, independent Gaussian distributions impose unnecessary assumptions and need of additional normalization steps. A more natural approach is to consider the output distribution coming from a Dirichlet probability distribution function. 2.1 DIRICHLET CONCENTRATION REPARAMETERIZATION As commented, given a data set D composed of pairs (xi, yi), i = 1 . . . N , with yi ∈ RC , being C the number of different classes, the wrapper output is assumed to come from a Dirichlet probability density function: p(yw|X,w∗) ∼ Dir(α), (1) wherew∗ are the parameters of the wrapper. We propose to use a decomposition of the concentration parameter in two terms to relate the output of the black-box classifier, ym, with the concentration parameter, α, in the Dirichlet distribution of the wrapper. To that effect, we recall some basic statistics of the Dirichlet distribution. Given a Dirichlet random variable x ∈ RC with concentration parameter α ∈ RC , the expected value of the distribution is defined as E(x) = α/ C∑ i=1 αi. Observe that the expected value has the same properties as a probability distribution and that the output of the black-box ym ∈ RC is already a probability distribution. In this sense, we could directly use the output of the black box as the concentration parameter. However, each term of the concentration parameter is not necessarily constrained to the interval [0, 1]. Let us introduce a new scalar parameter, β ∈ R that will model this difference, such that α = βym. Observe that the value of β does not change the expected value of the output of the wrapper and coincides with the output of the black-box model, i.e. E(yw) = ym. 
This decomposition has a simple interpretation: While the output of the black-box classifier stands for the mean, parameter β accounts for the spread of the distribution. The same or similar decomposition can be found in other works in a different context(Malinin & Gales, 2018)(Chen et al., 2018)4. An example of the effect of varying this parameter in a three dimensional Dirichlet distribution is shown in Figures (1a) to (1c). Observe that the higher the value of β, the more pointy the distribution is. 4It is worth noting that in the context of those works, there is a degradation in performance when using Dirichlet. This does not happen in our case since the black-box model is non-mutable. This decoupling allows to effectively isolate the contribution of the black-box and the contribution that remains to be computed, i.e. the value of parameter β. Figure 2 shows the integration of the wrapper (in light orange colour) with the black-box classifier (in light blue colour). Observe that the wrapper consists of two blocks: the Dirichlet reparameterization layer of the wrapper that decouples the influence of the black-box model from the rest (see the dashed line), and a deep learning architecture which aims to compute the scalar value of β5. 2.2 INFERENCE IN THE DIRICHLET SETTING SImilarly to (Kendall & Gal, 2017), we approximate the expected value of the classification probabilities using Monte Carlo sampling from the learned Dirichlet distribution for each sample, ŷ.,i ∼ Dir(αi) as E[ŷi] = 1M ∑M m=1 ŷm,i. This distribution is used to define the loss function for our learning stage. Given a set of N training samples, we will use a regularized version of the cross-entropy loss function as follows: L(W ) = − 1 N N∑ i=1 1 C C∑ c=1 yi,c logE[ŷi]c + λ‖β‖2 = − 1 N 1 C N∑ i=1 C∑ c=1 yi,c log ( 1 M M∑ m=1 ŷm,i,c ) + λ‖β‖2. Observe that we introduce the norm of the β value in the minimization function. This term is required since the unregularized cross-entropy forces the value of β to grow unbounded. By adding this term, we control its growth and govern the trade-off with a scalarization parameter λ. 3 OBTAINING AN UNCERTAINTY SCORE FROM THE WRAPPER The described Dirichlet layer effectively allows studying the variability of the parameters of the black-box output. This variability can be used to approximate a value for the heteroscedastic aleatoric uncertainty. In this work, we use Monte Carlo simulation sampling from the obtained Dirichlet function in order to characterize the uncertainty (Gal, 2016). Standard techniques for measuring uncertainty includes variation ratios or predictive entropy. Variation ratios measures the variability of the predictions obtained from the sampling (Freeman, 1965) by computing the fraction of samples with the correct output. Alternatively, predictive entropy considers the average amount of information contained in the predictive distribution. Those results with low entropy values correspond to confident predictions, whereas high entropy leads to large uncertainty. Since the output of the black-box model ym already describes a probability distribution, one could compute its predictive entropy and obtain a measure of its uncertainty with H = − ∑ c y m c log y m c 5The architecture used in this figure corresponds to the one used in the experimental section. However, as the wrapper allows us to model the variability of the parameters of the black-box output distribution, we can compute a predictive entropy that takes into account the variability of the predicted value. 
In this case, the sampled predictive entropy is defined as H = − ∑ c E[ŷ]c logE[ŷ]c. As we show in the experimental section, this latter approach captures better the uncertainty compared to the predictive entropy of the original model. 4 USING UNCERTAINTY FOR REJECTION Rejection is a mechanism that, given a particular metric related to the confidence in the decision, allows discarding a prediction if the metric value is below some threshold. In our proposal, we use the wrapper computed uncertainty as this rejection metric. In the context of our use cases, the hypothesis is that texts or images with high uncertainty are prone to be misclassified by the blackbox model. In order to use the uncertainty score for evaluating the performance of the black-box in a new dataset, we first proceed to obtain the predictions applying the original model. Then for each pair of data and prediction, we obtain the associated uncertainty score using the wrapper. Next, we sort the predictions based on the uncertainty score, from more uncertain to more confident. From that ordering, we set the rejection threshold that marks where to start trusting the classification model. In order to evaluate the rejection metric, we split the dataset using two criteria: whether the method Rejects the data point or Not; and whether the point is Accurately classified, or Misclassified named as R, N, A or M respectively. Using this terminology, we follow the guidelines in (Condessa et al., 2015) for rejection quality metrics. We have three quality metrics, illustrated in 3: • Non-rejected Accuracy measures the ability of the classifier to classify non-rejected samples accurately: NRA = |A ⋂ N | |N | • Classification Quality measures the ability of the classifier with rejection to classify nonrejected samples accurately and to reject misclassified samples: CQ = |A ⋂ N |+|M ⋂ R| |N |+|R| • Rejection Quality measures the ability to concentrate all misclassified samples onto the set of rejected samples:RQ = |M ⋂ R||A| |A ⋂ R||M | A good rejection point will show a trade-off between the three metrics, being able to divide the misclassified predictions from the right ones and preserve only those points that provide useful information. The higher the value displayed, the better that metric performs for rejection. 5 EXPERIMENTS AND RESULTS This section describes the experiments run for validating the wrapper proposal and results obtained. The experiments include two different scenarios: a use case for sentiment analysis using natural language processing and, another, for image classification. 5.1 A NATURAL LANGUAGE PROCESSING SCENARIO In order to validate the proposal, we use an NLP-based sentiment analysis system applied to product reviews. The goal of the system is to classify each review on whether it is positive or negative. The goal of the experiment is two-fold. First, we want to show how to apply the wrapper for a given NLP task. Second, we demonstrate how the proposed method additionally captures the uncertainty caused by the change in domains. To this end, we include different combinations of training and prediction domains in the experiment. The details on the datasets used are the following: • Stanford Sentiment Treebank (Socher et al., 2013), SST-2, binary version where the task is to classify a movie review in positive or negative. The dataset is split in 65,538 test samples, 872 for validation and 1,821 for testing. 
• Yelp challenge 20136, the goal is to classify reviews about Yelp venues where their users rated them using 1 to 5 stars. To be able to reuse a classifier trained with the SST-2 problem, we transform the Yelp dataset from a multiclass set to a binary problem, grouping the ratings below three as a negative review, and as positive otherwise. The dataset is split in 186,189 test samples, 20,691 for validation and 22,991 for testing. • Amazon Multi-Domain Sentiment dataset contains product reviews taken from Amazon.com from many product types (domains) (Blitzer et al., 2007). As in Yelp, the dataset consists on ratings from 1 to 5 stars that we label as positive for those with values greater or equal to 3, and negative otherwise, split into training, validation and test datasets. We use two of the domains available: music (109,733/12,193/52,254 examples) and electronics (14,495/1,611/6,903 examples). 5.2 AN IMAGE CLASSIFICATION SCENARIO In addition to the NLP use case presented above, we include here a use case for image classification. The task, in this case, is to classify images in one of the categories defined in the dataset. As in NLP, an image classifier trained using a source dataset, acting as the original API, is then applied to a new set of images belonging to a different dataset. Both datasets share almost the same output classes except for one. By predicting the uncertainty of the different class, we will show how the predicted uncertainty can also be used to detect out of sample images. The details on the datasets used for the vision use case are the following: • STL-10 (Coates et al., 2011), The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. It includes 500 training images, 800 test images per class, belonging to 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck. • CIFAR10 (Krizhevsky, 2009), The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes(airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6000 images per class. There are 50000 training images and 10000 test images. 5.3 EXPERIMENT SET UP On every experiment, we use two datasets: (i) a source dataset for training a model, that will be considered the black-box model from that moment on and (ii) a target dataset that corresponds to the domain we want to apply the black-box model and where to measure the uncertainty using the proposed wrapper . Specifically, the steps followed on each case are: • Train the black-box. First, we train a classifier with the source dataset. In real scenarios, this step would not be necessary as we would be using a pre-trained model or third-party API. • Apply the black-box to the target domain. In this step, we use the black-box to obtain the predictions and evaluate the accuracy of the target dataset, and we can compute the predictive entropy based on the prediction outputs. 6https://www.yelp.com/dataset/challenge • Compute the uncertainty for the target domain using the wrapper model. Once we have the predictions for the target domain, we proceed to train the uncertainty wrapper to approximate the Dirichlet pdf for each input. By sampling the pdf, we compute the sampling predictive entropy of the average of the outputs to get the uncertainty score for each element in the target dataset. • Apply the rejection mechanism. 
We run five different scenarios: training the black-box with the Yelp dataset and applying it to SST-2 and vice versa; training the black-box with the Amazon electronics reviews and applying it to the Amazon music reviews, and vice versa; and training with STL-10 and applying the black-box to CIFAR10. For each scenario, a selection of optimal training parameters was carried out, including learning rates, batch sizes, numbers of units and numbers of epochs. Details on the architectures used for the black-box are given in Appendix A. 5.4 RESULTS In order to show the effect of applying the uncertainty wrapper on each of the target domains, we compute the uncertainty score using the three different metrics described earlier: the predictive entropy of the black-box output (baseline), the predictive entropy obtained after training the aleatoric wrapper (pred. entropy), and the variation ratios (var. ratios). Figures 4 to 8 show the rejection performance metrics obtained on each combination for the three uncertainty scores analysed. From left to right, we find the values for non-rejected accuracy, classification quality and rejection quality. The higher the value in the plot, the better the result. According to the results obtained, the proposed method shows better behaviour in all scenarios and metrics. As we remove more samples according to the uncertainty, the proposed method displays much better accuracy and quality than its counterparts. These results validate the hypothesis that the heteroscedastic aleatoric uncertainty computed by the wrapper effectively captures the confidence in the prediction and the samples prone to error. In contrast, variation ratios are the worst-performing method. Note that, although our proposal performs much better, its absolute gain depends on the scenario. In those domains where the black-box model performs worse, there is more to gain by using the wrapper. If we observe the classification quality (plot at the center of each figure) and the rejection quality, we can see that the proposed metric is also excellent at rejecting the misclassified points. A detailed table with numerical results for the same experiments is included in Appendix B. The results demonstrate how using the uncertainty for rejecting uncertain predictions helps with the adaptation of a pre-trained model to new domains of application. In some cases, the results obtained on the test dataset of the target domain after rejecting the 10% least certain points surpass those obtained on the source dataset used for training the original model. As a curiosity, the use case where we trained a black-box model using the reviews of Amazon's electronics products achieves better results when applied to the target test dataset than to the original test dataset. Even in this case, where the applied classifier reaches an accuracy of more than 90%, the proposed method increases it by almost 5 points. In Appendix C, we analyse how, for the case of images, the proposed method can detect out-of-sample images that belong to an unseen category. 6 CONCLUSIONS AND FUTURE WORK In this work, we introduced a deep learning wrapper technique that can endow any black-box model with uncertainty features.
The wrapper uses a reparameterization trick on the Dirichlet distribution, and it can capture the distribution of the multinomial parameters of the output of the black-box classifier. We use the predicted uncertainty to fuel a rejection method and show how this helps in assessing the fitness of a model to a new domain or dataset. By measuring the sampling uncertainty and using it for rejection, we can improve the accuracy results by 4% to 8% while rejecting just 10% of the samples. Additionally, the method displays a significant value on rejection quality. These results tell us that the predicted uncertainty focuses on intricate, ambiguous, or error-prone cases. We show successful and encouraging results in both the NLP and computer vision domains. As future work, we plan to keep exploring different architectures and strategies for the wrapper implementation and to focus on other common situations found in real-life deployments, such as how to deal with high-dimensional and categorical outputs. A APPENDIX A For the sake of reproducibility, this Appendix details the architectures used for training the black-box systems. Figure 9 describes the model used for training the black-box models in the two use cases. As stated before, the only purpose of this model is to obtain a black-box classifier for a given source domain. The goal, in this case, is not to obtain the best classifier but a model which is easy to train and offers good performance. The main difference between the model for NLP and the one for image classification comes from the embedding component. In the case of NLP, we opted for representing a sentence as the average of the pre-trained word2vec embeddings of its words. In the case of images, we trained a MobileNet v2 model (Sandler et al., 2018), initialized with ImageNet weights, using as input the STL-10 images, resized to 32x32x3 to match the CIFAR10 dataset.7 B APPENDIX B Table 1 shows the numerical results obtained during the experiments for the four combinations tested. The first column, black-box source acc, reports the accuracy obtained on the source dataset after training the original classifier. The next column, black-box target acc, reports the accuracy obtained when applying the black-box to the target dataset. The rest of the columns show the non-rejected accuracy and the classification and rejection quality after rejecting 10, 20 and 30% of the points, using the proposed predictive entropy as the rejector. 7We tried other embeddings such as ELMo and Seq2seq for text, or VGG-16 (Simonyan & Zisserman, 2015) and ResNet50 (Szegedy et al., 2015) for images, but we stuck to word2vec and MobileNet due to limitations on computing resources. C APPENDIX C This Appendix shows detailed results for the image case. Although the resulting quality obtained for the rejection mechanism in the case of images is not as large as for texts, when comparing to the predictive entropy of the original classifier, we observe that the proposed measure is still excellent for detecting out-of-sample images. The main difference between STL-10 and CIFAR10 is a variation in one of the classes. Where in STL-10 class 6 held monkeys, in CIFAR10 it corresponds to frogs. As one can expect, the black-box model trained with STL-10 will struggle at detecting frogs.
Figure 10: Distribution of the predicted entropies for two of the CIFAR10 classes.
Figure 11: Entropies for frogs.
Figure 12: Entropies for trucks.
In Figures 11 and 12, we can see the distributions of uncertainty for the images belonging to the frogs class and for those belonging to the trucks class. For the frogs class, the uncertainty values are concentrated in the higher band of the diagram, whereas in the case of trucks we find many images with lower uncertainty. This shows that the metric assigns significant uncertainty to out-of-sample class points.
1. What is the main contribution of the paper regarding selective prediction in machine learning? 2. What are the strengths and weaknesses of the proposed model for selective prediction? 3. How does the reviewer assess the problem framing and the choice of baselines in the study? 4. What are some simple baselines that the reviewer expects to see in the paper? 5. Does the paper adequately address the practical challenge in machine learning? 6. Are there any concerns regarding the efficiency and complexity of the proposed method? 7. How does the reviewer evaluate the effectiveness of the wrapping scheme compared to training a new model from scratch? 8. What is the significance of the beta regularization term in the proposed model? 9. Why did the authors choose STL-10 as the source domain in the image transfer experiments? 10. Should the paper provide more discussion on the distribution shift between STL-10 and CIFAR-10?
Review
Review Motivated by real-world challenges in applying pre-trained models, the authors propose a model for selective prediction (prediction with an option for abstention) that wraps an existing black-box classification model. The resulting model output is a Dirichlet distribution with mean equal to the categorical distribution produced by the black-box and concentration parameter specified by a separate auxiliary model. This additional model is trained to minimize the negative log-likelihood of observations under categorical distributions sampled from the aforementioned Dirichlet, along with an L1 regularization term on the concentration parameter. To infer the model’s level of uncertainty, the authors propose computing the entropy of the average of sampled categorical distributions. The authors evaluated this model on several pairs of sentiment-analysis NLP tasks and one pair of image datasets, where a base model is trained on the source dataset and the auxiliary model is trained on the second, target dataset. Using metrics proposed in (Condessa et al., 2015), the results are positive at nearly all thresholds compared to a simple entropy baseline. The paper addresses an important practical challenge in machine learning, but a confusing problem framing and a lack of robust baselines make me skeptical that it is suitable for publication at ICLR 2020. A primary concern with this work is its framing of the problem as one of measuring aleatoric (irreducible) uncertainty, when the motivation in transfer learning and interdependence in production ML systems requires models that can characterize epistemic (reducible) uncertainty. A black-box model that yields distributions over classes expresses aleatoric uncertainty via that distribution, and uncertainty due to a shifted data distribution is epistemic, as additional data from the new domain would reduce it. More problematic is the study’s lack of robust baselines. The authors only present the predictive entropy baseline, but numerous methods exist for out-of-distribution detection and selective classification. Though the specific case of selective classification from a black-box base model is perhaps more niche, other methods from related problems can either be adapted accordingly or used as upper / lower bounds on what we can expect for this problem. Some simple baselines I would expect to see include: * Training a new classification model entirely on the new domain. * Training an auxiliary classifier to predict if the base model will be correct (similar to SelectiveNet but without a shared network body). Ideally this model should also have access to the base-model’s prediction as input. * “Confidence score” (i.e. the probability assigned to the base-model’s predicted class) -- this is a common baseline for OOD detection. Additional questions / concerns: * Do I understand correctly that when drawing samples for the entropy calculation, $E[\hat{y}]$ will equal the black-box model’s prediction in the limit of sample size? If so, this looks like an inefficient and complicated proxy for measuring the concentration of categorical entropies produced by the Dirichlet. * Since the paper is focused on scenarios where one is taking advantage of a pre-trained model, one might wonder if the wrapping scheme is indeed less expensive than a new model trained from scratch on the new domain (e.g. with comparable capacity to the proposed wrapping model). The authors should include experiments / baselines to assess this.
* In section 2, the paper asserts that assuming access to logits breaks the black-box assumption, but these are computable from softmax values (up to constant factors). * Is the beta regularization term theoretically required to prevent unbounded growth, or is it simply an empirically practical necessity? * In the image transfer experiments, STL-10 has relatively few labeled examples (hundreds per class), so why was this only used as the source domain? Realistic transfer learning generally entails an expensive model trained on a source domain with plentiful data transferred to a target domain with scarce data. * The authors should mention that STL-10 images represent a distribution shift from CIFAR-10, as the two were not generated identically. The paper currently only highlights differences in dataset size.
ICLR
Title Few-bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction Abstract Memory footprint is one of the main limiting factors for large neural network training. In backpropagation, one needs to store the input to each operation in the computational graph. Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and each such operation induces additional memory costs which, as we show, can be significantly reduced by quantization of the gradients. We propose a systematic approach to compute optimal quantizations of the retained gradients of the pointwise nonlinear functions with only a few bits per element. We show that such an approximation can be achieved by computing an optimal piecewise-constant approximation of the derivative of the activation function, which can be done by dynamic programming. Drop-in replacements are implemented for all popular nonlinearities and can be used in any existing pipeline. We confirm the memory reduction and unchanged convergence on several open benchmarks. 1 INTRODUCTION Modern neural network models are getting larger and larger. One of the main bottlenecks in the training loop is the required device memory storage Ojika et al. (2020); Gao et al. (2020). In this paper, we propose a universal approach that helps to reduce the model memory footprint during backpropagation. Note that this approach is complementary to other memory-reducing techniques such as checkpointing Chen et al. (2016) or offloading Beaumont et al. (2021). Our method can be applied to any neural network without any additional preprocessing. Memory consumed by the model during training (apart from intermediate tensors) can be split into two groups: 1) the model weights (including additional memory for the optimizer state); 2) activations saved for the backward pass, over which no computation is carried out at the moment, but which will be required later to compute the gradients. Every operation in the computational graph generates a memory footprint. It is typically overlooked that applying a pointwise nonlinearity (such as GELU or sigmoid) results in storing the input for the backward pass. We show that instead of keeping the full input tensor, it is possible to store a low-bit representation, which still allows accurate gradient approximation. In this work, we propose to approximate the derivative of the activation function in a piecewise-constant form. Such an approximation problem has to be solved once for each activation function, and we propose a simple technique to do that. The proposed approximation divides all values into several bins and saves only the corresponding bin indices instead of storing all values. This is a lossy compression, but the additional noise introduced by it is negligible, as we show on several benchmarks in Section 4. The main contributions of our paper are: • We propose new approximate backward computation schemes that significantly reduce the memory consumption of neural network training. • We benchmark our approach on several tasks. We show that it provides up to 40% memory reduction on various tasks while maintaining accuracy on par with the model trained via the standard approach. 2 QUANTIZED GRADIENTS OF ACTIVATIONS Gradients of activations using automatic differentiation. Modern deep learning frameworks use reverse-mode automatic differentiation to calculate the gradients of the loss over the model parameters.
Forward computation can be associated with a directed acyclic graph, depicted in Fig. 2. Each operation f computes the output X_{l+1} given the input X_l and has to save some information S_l that will be used on the backward pass in order to calculate the derivative ∂L/∂X_l from ∂L/∂X_{l+1} and S_l. Thus, in a typical training loop, the intermediates S_l of all operations in the graph are stored in memory during the whole forward pass until they are no longer needed after the completion of the corresponding backward operation. This generates additional memory usage, which can be quite significant and even larger than the total number of parameters of the model. Pointwise activations. In this paper, we focus on pointwise activation functions, which are ubiquitous in modern neural network architectures. Given an input tensor X_l, we apply a function f to each of its elements: f(X_l) = [f(X_l^{j_1,...,j_k})]_{j_1,...,j_k}, f : R → R. This operation is very cheap compared to other operations in a deep neural network model and does not attract much attention when analysing computational complexity. However, the standard implementation in a framework such as PyTorch induces a non-negligible memory footprint: the whole input X_l is saved for the backward pass. The backward pass for such a function consists of element-wise multiplication of the propagated gradient tensor by the derivative of the nonlinearity at the points of the input tensor: if X_{l+1} = f(X_l), then the gradient of the loss L with respect to X_l is computed as
∂L/∂X_l = ∂L/∂X_{l+1} ⊙ f'(X_l),   (1)
where f'(X_l) is the tensor whose elements are the derivative of f evaluated at each element of X_l and ⊙ is the element-wise product. From Eq. (1), it follows that for the backward pass we have to store only f'(X_l); X_l itself is not needed. ReLU activation function. To illustrate our idea, consider one of the most popular nonlinearities, f(x) = ReLU(x) = max(0, x). Its derivative f' takes only two values, 0 and 1, so it requires only 1 bit per element to store. If single precision is used, the compression factor is 32, which is quite noticeable. GELU activation function. In modern transformer architectures Vaswani et al. (2017), the GELU Hendrycks & Gimpel (2016) nonlinearity is typically used. Its derivative no longer takes two values. Instead, we propose to approximate f' by a piecewise-constant function. For example, if we allow 8 different values, we need only 3 bits per element (Fig. 1). Quantized gradients of activations. In stochastic optimization, if the gradient for a given batch is computed approximately, the optimization may still converge. The GELU derivative (see Fig. 1) is quite “similar” to a piecewise-constant function: for large values of |x|, it is almost exactly equal to 0 or 1, and for small values of x, a rather interesting transition from 0 to 1 occurs. Instead of calculating the derivative exactly on the backward pass, we approximate it with a piecewise-constant function:
q(x|s,y) = ∑_{i=1}^{k} y_i 1[x ∈ [s_i; s_{i+1}]],   (2)
where s = (s_1, ..., s_{k+1}) is a sorted vector of interval boundaries on which the approximation is constant, y = (y_1, ..., y_k) is the vector of corresponding approximation values, and 1 denotes the indicator function, which equals 1 whenever its argument is true and 0 otherwise. That means q(x|s,y) equals y_i when x ∈ [s_i; s_{i+1}]; see Fig. 3 for an illustration.
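As a concrete illustration of Eq. (2), here is a minimal NumPy sketch that stores only bin indices and reconstructs the approximate derivative from them; the boundary and value arrays below are placeholders of our own, not the optimized parameters from Section 3.

import numpy as np

# Hypothetical 3-bit (k = 8 intervals) approximation of some derivative:
# s holds the k + 1 sorted boundaries, y the k constant values.
s = np.linspace(-4.0, 4.0, 9)
y = np.array([0.0, 0.02, 0.1, 0.35, 0.65, 0.9, 0.98, 1.0])  # placeholder values

x = np.random.randn(5)
idx = np.searchsorted(s, x) - 1        # bin index of each element
idx = np.clip(idx, 0, len(y) - 1)      # clamp out-of-range values to edge bins
approx_grad = y[idx]                   # q(x | s, y), used on the backward pass

# Only idx has to be kept for backward: log2(k) = 3 bits per element
# instead of 32 bits for a float32 input.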
As noted above, if the approximation has k constant intervals, then instead of storing the full input tensor X it is possible to save only log2 k bits of information per element, which reduces the memory consumption of this layer by a factor of 32/log2 k in single precision. Given the quantization scheme of Eq. (2), a drop-in replacement for the activation function f is straightforward. On the forward pass, instead of the full tensor X, we save only the indices of the intervals to which the elements of X belong; on the backward pass, we multiply the gradient w.r.t. the output not by the actual derivative of f, but by the values from y corresponding to the stored indices. Pseudocode is presented in Listing 1.

import torch

# Globally stored piecewise-constant approximation parameters
s = torch.tensor([...])  # sorted interval boundaries, shape (k + 1,)
y = torch.tensor([...])  # constant values per interval, shape (k,)

class FewBitActivation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, X):
        # Bin index of each element; clamp so out-of-range values land in edge bins
        X_pos = torch.searchsorted(s, X).clamp_(1, len(y)) - 1
        ctx.save_for_backward(X_pos)  # few-bit indices instead of the full X
        return f(X)  # f is the pointwise activation, e.g. torch.nn.functional.gelu

    @staticmethod
    def backward(ctx, dLdY):
        (X_pos,) = ctx.saved_tensors
        return dLdY * y[X_pos]

Listing 1: Pseudocode for the quantized backward layer. Arrays s and y are the quantization parameters of Eq. (2); torch.searchsorted performs the binary search. Memory of Few-bit Approximation. As mentioned above, replacing all pointwise nonlinearity layers in the neural network with a Few-bit approximation consisting of k piecewise-constant intervals reduces the memory consumption of such layers during the forward-backward pass by a factor of 32/log2 k in single-precision training mode. However, the total reduction of the neural network memory consumption depends on the particular architecture and the optimizer used. During training, memory is spent on the weights (parameters) of the network, on optimizer statistics, and on all stored activations, some of which come from pointwise nonlinearity layers. For example, when training ResNet18 with the Adam optimizer on 256x256 images, the model weights take 44.6Mb, 3 × 44.6 = 133.8Mb is used by the optimizer to store gradients and moments, and BS × 40Mb is needed to store all activations during the forward pass, of which BS × 11.5Mb comes from pointwise nonlinearity layers and BS × 28.5Mb from all other layers, where BS is the batch size. Therefore, the maximum possible batch size with standard nonlinearities is ⌊(GPU_MEM − 4 × 44.6)/40⌋, while the maximum batch size with Few-bit nonlinearities of size k is ⌊(GPU_MEM − 4 × 44.6)/(28.5 + 11.5 × log2 k/32)⌋, where GPU_MEM is the available GPU memory. In our ResNet18 example, the maximum batch size with standard nonlinearity layers on a video card with 32Gb of memory is 813, while with the 4-bit Few-bit approximation it is 1086 (+33%); a sketch of this arithmetic is given at the end of this section. Memory consumption for different Few-bit modes and different neural network architectures is presented in Appendix B. Speed of Few-bit Approximation. The memory gain of a Few-bit layer does not come at the cost of speed. The standard nonlinearity layer computes the activation function on the forward pass and the activation function gradient on the backward pass. The gradient usually involves expensive functions such as the exponent, erf, and others. The Few-bit version of the layer also computes the activation function on the forward pass, but the gradient computation on the backward pass is replaced by one binary search and one lookup in the value table (see Listing 1). Our efficient implementation of this procedure using CUDA kernels runs several percent faster than the standard nonlinearity layer. However, this result may depend on the specific framework implementation and the GPU used, so in our experiments in Section 4 we do not consider the time gain, assume both layers are roughly equally fast, and focus specifically on memory savings.
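As a sanity check on the memory arithmetic above, here is a small helper; the function name is ours, and the constants are hardcoded from the ResNet18 example.

import math

def max_batch_size(gpu_mem_mb, weights_mb, act_other_mb, act_nonlin_mb, bits=32.0):
    # Four copies of the weights are held on the device (parameters, gradients,
    # two Adam moments); nonlinearity activations shrink by a factor of bits/32.
    fixed = 4 * weights_mb
    per_sample = act_other_mb + act_nonlin_mb * bits / 32.0
    return math.floor((gpu_mem_mb - fixed) / per_sample)

print(max_batch_size(32 * 1024, 44.6, 28.5, 11.5))          # standard, ~813
print(max_batch_size(32 * 1024, 44.6, 28.5, 11.5, bits=4))  # 4-bit, ~1086

The printed values match the figures quoted above up to rounding of the memory constants.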
3 OPTIMAL PIECEWISE-CONSTANT APPROXIMATION Fig. 1 shows examples of optimized 3-bit piecewise-constant approximations for several nonlinearity functions. Finding the optimal approximation parameters (the boundaries of the intervals and the values on them) is a challenging task. We propose to find them by minimizing the (weighted) L2 norm of the error. Consider a function f : R → R and its derivative f'. We measure the quality of a piecewise-constant approximation Eq. (2) with a weighted L2 norm:
min_{y,s} L(s,y),   (3)
L(s,y) = ∫_R (f'(x) − q(x|s,y))² w(x) dx = ∑_{i=1}^{k} ∫_{s_i}^{s_{i+1}} (f'(x) − y_i)² w(x) dx,   (4)
where w is a weight function reflecting our prior knowledge of the distribution of the activation function argument. Practical choices of w are either 1[x ∈ [A;B]] (with some reasonable A and B, which should be large enough), which makes the integral in Eq. (3) tractable, or, e.g., the standard normal density. L(s,y) is differentiable w.r.t. s and y, so optimal piecewise-constant approximations can be found using standard gradient-based optimization techniques. But the minimization problem Eq. (3) has many local minima that are far from optimal. We suggest using dynamic programming to get a good initial approximation that can be further finetuned with gradient-based methods (but can also be used as is, because it is very accurate on its own). Dynamic programming. We assume that the weighting function w is chosen such that w(x) = 0 for x ∉ [A;B]. Consider the following auxiliary value:
DP(t, k) = min_{y_{1:k}, s_{1:k+1} : s_1 = A, s_{k+1} = t} ∫_A^t (f'(x) − q(x|s,y))² w(x) dx,   t ∈ R, k ∈ N.
Essentially, DP(t, k) is the error of the optimal piecewise-constant approximation of size k of the function f' on the interval [A; t]. The recurrence for this value is:
DP(t, k+1) = min_{t'} { DP(t', k) + ∫_{t'}^{t} (f'(x) − y(t', t))² w(x) dx },   (5)
y(t', t) = ∫_{t'}^{t} w(x) f'(x) dx / ∫_{t'}^{t} w(x) dx,   (6)
since a piecewise-constant approximation of size k+1 consists of a corresponding approximation of size k (first term) plus one constant interval (second term). Here t' is the right bound of the approximation of size k, and y(t', t) is the optimal constant value on the interval [t'; t], cf. Eq. (8). The minimal value of L(s,y) with k intervals then equals DP(B, k). To solve the minimization problem Eq. (5), we consider a discretization of t, A = t_0 < t_1 < ... < t_n = B, and reduce the calculation of DP(t, k) to its approximation at the points of the discretization:
DP(i, k) = min_j { DP(j, k−1) + T(j, i) },
T(j, i) = ∫_{t_j}^{t_i} (f'(x) − y(j, i))² w(x) dx,   y(j, i) = ∫_{t_j}^{t_i} w(x) f'(x) dx / ∫_{t_j}^{t_i} w(x) dx.   (7)
Eq. (7) can be computed in O(n²K) time and O(nK) space, which is described in detail in Appendix G; a sketch is given below. Please note that this routine has to be evaluated only once, possibly by the framework developers, and then used indefinitely. This means the number of discretization points n can be taken quite large, tens of thousands easily, which makes the global solution of the discrete problem Eq. (7) very close to the global solution of the original problem Eq. (3). We give precalculated Few-bit approximations for many different pointwise nonlinearity functions in our implementation at https://github.com/anonymous/repository.
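The following is a minimal NumPy sketch of the dynamic programme of Eq. (7), using the O(n)-memory prefix integrals of Appendix G (Eq. (9)); the default uniform weight, the trapezoidal discretization and all names are our assumptions rather than the released implementation.

import numpy as np

def optimal_piecewise_constant(fprime, A, B, K, n=500, w=None):
    # Optimal K-interval piecewise-constant L2 approximation of fprime on [A, B].
    t = np.linspace(A, B, n + 1)
    fp = fprime(t)
    wt = np.ones_like(t) if w is None else w(t)
    dx = t[1] - t[0]

    def cum(v):  # cumulative trapezoidal integral from A up to each t_i
        return np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dx)])

    W, FW, F2 = cum(wt), cum(wt * fp), cum(wt * fp ** 2)  # prefix arrays, Eq. (9)

    def y_opt(j, i):  # optimal constant on [t_j, t_i], Eq. (8)
        d = W[i] - W[j]
        return (FW[i] - FW[j]) / d if d > 0 else 0.0

    def T(j, i):  # weighted squared error of that constant, Eq. (9)
        yv = y_opt(j, i)
        return F2[i] - F2[j] - yv * yv * (W[i] - W[j])

    DP = np.full((n + 1, K + 1), np.inf)
    arg = np.zeros((n + 1, K + 1), dtype=int)
    DP[0, 0] = 0.0
    for k in range(1, K + 1):            # O(n^2 K) in total
        for i in range(1, n + 1):
            for j in range(i):
                c = DP[j, k - 1] + T(j, i)
                if c < DP[i, k]:
                    DP[i, k], arg[i, k] = c, j

    idx = [n]                            # backtrack the interval boundaries
    for k in range(K, 0, -1):
        idx.append(arg[idx[-1], k])
    idx = idx[::-1]
    s = t[idx]
    y = np.array([y_opt(idx[i], idx[i + 1]) for i in range(K)])
    return s, y, DP[n, K]

# Example: a 3-bit (k = 8) approximation of the GELU derivative Phi(x) + x * phi(x)
from scipy.stats import norm
gelu_grad = lambda x: norm.cdf(x) + x * norm.pdf(x)
s, y, err = optimal_piecewise_constant(gelu_grad, -8.0, 8.0, K=8)

In production one would take n in the tens of thousands, as the text suggests; the triple loop is still O(n²K) because each T(j, i) costs O(1) thanks to the prefix arrays.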
4 EXPERIMENTS The goal of our experiments is not only to show that the Few-bit nonlinearity approach provides memory savings during neural network training without loss of the final model quality. In addition, we want to experimentally demonstrate that this approach does not change the learning dynamics themselves, because in that case its application in practice is almost completely safe: there is a memory gain without loss of speed or quality, and without risk of interference with other training factors under study (hence, no additional search or tuning of other hyperparameters is needed). To achieve this goal, in addition to the main metrics of the trained model (which depend on the specific tasks and benchmarks), we also compare the training-loss and validation-loss curves during the whole training process. Below we show that the 1- and 2-bit Few-bit approximations are already close to the original nonlinearity layers, while the 3- and 4-bit Few-bit approximations achieve the original quality of the model. We have tested two of the most important and commonly used neural network architecture families: convolutional neural networks and transformer-based networks. We use standard popular open-source benchmarks with open hyperparameters for training in order to demonstrate the behavior of the Few-bit approach under a drop-in replacement of the standard nonlinearities, without any hyperparameter optimization or specially selected training conditions. In Section 4.1, we test the RoBERTa transformer-based neural network on the GLUE Wang et al. (2019) benchmark, which includes 9 different NLP tasks. In Section 4.2, we test the training of the generative ruDALL-E model on the task of modeling the joint distribution of text and image tokens for the Russian Emoji dataset. We use the GELU nonlinearity for both transformer architectures, as it is the main nonlinearity function used in such models. In Section 4.3, we test the classical ResNet18 architecture on the ImageNet dataset using the open benchmark ffcv Leclerc et al. (2022). In the classical ResNet architecture, we replace all ReLU nonlinearities with one of GELU, SELU, or Swish to demonstrate that the Few-bit approach works with a wide range of popular activation functions. The main analogue of our Few-bit approach is the ActNN method; in Section 4.4, we make a detailed comparison with it. The code to reproduce all experiments is available at https://github.com/anonymous/repository, and all hyperparameters for training are presented in Appendix F. 4.1 GLUE benchmark. In Table 1 we report results for the RoBERTa-base model Liu et al. (2019) on the GLUE benchmark Wang et al. (2019) for standard GELU and 1-, 2-, 3- and 4-bit Few-bit GELU. The 1- and 2-bit versions show minor performance degradation, while the 3- and 4-bit versions have no visible difference and closely match vanilla GELU performance; this can be seen more clearly in Fig. 6, which plots the metric averaged across all GLUE tasks against the number of bits in the Few-bit approximation. The behaviour of the loss during training is depicted in Fig. 5: the 3- and 4-bit versions are hardly distinguishable from standard GELU. 4.2 RuDALL-E. In Fig. 4 we present the training dynamics of the ruDALL-E Malevich model Ramesh et al. (2021) on the Russian Emoji dataset. The dataset Shonenkov et al.
(2021) contains 2749 unique emoji icons and 1611 unique texts collected by web scraping (the difference in quantities is because there are sets within which emojis differ only in color, and some elements are homonyms in Russian). ruDALL-E Malevich is a big multimodal pretrained transformer which learns the conditional distribution of images given a string of text (more precisely, it autoregressively models the text and image tokens as a single stream of data). The ruDALL-E Malevich encoder part is a 24-layer Transformer Vaswani et al. (2017) model with 16 attention heads, 2048 hidden dimensions and the standard GELU nonlinearity, which in total has 1.3B parameters. It works with 128 text tokens, prepared from the text input using the YTTM tokenizer (https://github.com/VKCOM/YouTokenToMe), and 1024 image tokens, obtained by encoding the input image using Sber-VQGAN (https://github.com/sberbank-ai/sber-vq-gan); the ruDALL-E implementation is taken from https://github.com/sberbank-ai/ru-dalle. Few-bit backward for ruDALL-E Malevich shows the same behaviour as for the RoBERTa-base architecture: the 1- and 2-bit versions, although coping with training perfectly fine, demonstrate minor performance degradation, while the 3- and 4-bit versions are indistinguishable from the original GELU. 4.3 ResNet Architecture. We trained the ResNet18 model He et al. (2016) on the ImageNet dataset Russakovsky et al. (2015) using the ffcv benchmark Leclerc et al. (2022), with ReLU replaced by the GELU, SELU and Swish nonlinearity functions. Graphs for the Swish nonlinearity can be seen in Fig. 8 and graphs for the other nonlinearities in Fig. 13 in Appendix F: the 1- and 2-bit versions have a minor performance drop, while the 3- and 4-bit versions are on par with the standard nonlinearity. 4.4 ActNN. As a baseline, we use another quantization scheme, ActNN Chen et al. (2021). It works in a much wider spectrum of situations, as it can quantize not only pointwise nonlinearity layers but also all kinds of linear layers (convolutional and dense layers), normalization layers and pooling layers. Without going deep into details, ActNN divides the saved tensor H into chunks h_i, where each chunk is of equal size G. Then, given a quantization budget of b bits, each chunk h_i is normalized, u_i = 2^b (h_i − min{h_i}) / (max{h_i} − min{h_i}), and its randomly quantized version ū_i is saved: ⌈u_i⌉ with probability u_i − ⌊u_i⌋, and ⌊u_i⌋ otherwise. Random rounding is performed in order to guarantee that the quantization is unbiased. For each group, the two additional values min{h_i} and max{h_i} are saved as well, but for a group size of G = 256 this is only 0.125 additional bits per element, which we ignore in our following tests. A sketch of this scheme is given below.
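For concreteness, here is a minimal NumPy sketch of this per-group stochastic quantization; we use 2^b − 1 levels so that the stored indices fit in b bits, and both this detail and the names are our assumptions, not ActNN's actual code.

import numpy as np

def actnn_quantize(h, b=2, G=256, rng=np.random.default_rng(0)):
    # h.size must be divisible by the group size G in this sketch
    h = h.reshape(-1, G)
    lo = h.min(axis=1, keepdims=True)
    hi = h.max(axis=1, keepdims=True)
    u = (2 ** b - 1) * (h - lo) / np.maximum(hi - lo, 1e-12)
    floor = np.floor(u)
    # Unbiased stochastic rounding: round up with probability u - floor(u)
    q = floor + (rng.random(u.shape) < (u - floor))
    return q.astype(np.uint8), lo, hi  # per-group min/max kept for dequantization

def actnn_dequantize(q, lo, hi, b=2):
    return lo + q.astype(np.float64) * (hi - lo) / (2 ** b - 1)

In expectation, actnn_dequantize(actnn_quantize(h)) recovers h, which is the unbiasedness property mentioned above.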
ActNN by construction does not take into account the global behaviour of the nonlinearity derivative. We argue that for nonlinearity layers this is crucial, and thus our pre-optimized quantization scheme is preferable. To confirm this, we consider ActNN behaviour on the QQP task from the GLUE benchmark with respect to different quantization budgets and compare it with our method (Fig. 9 and Table 2). In general, our method with 1 bit less budget works the same or better than ActNN, which is very important in the low-bit setting. In Fig. 10 we compare ActNN and Few-bit for the ResNet18 architecture on the ImageNet dataset for the SELU nonlinearity, while results for the GELU and Swish nonlinearities can be found in Fig. 14 in Appendix F. Aggregated top-1 accuracy for all activation functions is presented in Fig. 7. Our method steadily outperforms ActNN, which is especially noticeable in the 1-bit regime: ActNN experiences a strong drop in accuracy, while Few-bit Backward performs much closer to the standard nonlinearities. This means that the 1-bit Few-bit backward can be used in cases where it is very important to reduce the memory consumed by a neural network. 5 RELATED WORK The reduction of the memory footprint is an important topic. To save memory during training, in addition to working with stored activations, the memory used to store model parameters can be compressed. Quantization Bondarenko et al. (2021); Bengio et al. (2013); Banner et al. (2019); Jacob et al. (2018); Nagel et al. (2021); Krishnamoorthi (2018) limits the admissible values of weights to some small finite set, so less memory is needed for storage. Low-rank representation of weights Hrinchuk et al. (2020); Phan et al. (2020); Gusak et al. (2019; 2021); Cui et al. (2020); Novikov et al. (2018); Lebedev et al. (2015) assumes some internal structure of the model weights and saves memory by explicitly exploiting this structure with low-rank methods from linear algebra. Low-precision training and low-precision optimizers focus on using lower-precision floats to store weights, optimization parameters, and model gradients. All of these approaches are complementary to the proposed one and can be used together. Checkpointing Beaumont et al. (2019; 2021); Chen et al. (2016) methods save memory at the cost of more computation: they store fewer activations and recompute the rest from the saved checkpoints. Offloading methods Beaumont et al. (2020) send the saved activations to the computer's RAM and load them back to the video memory on the backward pass, which also saves GPU memory at the cost of host-device communication time. ActNN Chen et al. (2021) is a framework for quantizing stored activations adaptively on the fly. In contrast to our work, it allows quantizing not only layers of element-wise activations but also many others, including convolutional, normalization, and linear layers. However, this method depends on the distribution of the elements of the quantized tensors and, because of that, its performance may degrade. Our approach, on the other hand, selects a data-agnostic optimal quantization, which in practice turns out to be sufficient and easier to use. 6 CONCLUSION We have proposed a method to reduce memory consumption during the training of deep neural network models by storing less information for the backward pass in the element-wise activation functions. For effective training, there is no need to compute the derivative of the activation functions precisely; a piecewise-constant approximation is sufficient. This makes it possible to store, at each application of the activation function, not the entire input tensor but only the interval index in the piecewise-constant approximation. Experiments show that for a wide class of models and problems, storing only 3 bits of information per tensor element does not degrade the learning quality and saves about 20 percent of memory. We have proposed an efficient algorithm for constructing an optimal piecewise-constant approximation. The proposed drop-in replacements for popular activation functions (ReLU, GELU, Swish, Sigmoid and others) do not depend on the neural network model, the problem to be solved, or the peculiarities of the data distribution.
The replacement of the original activation functions by the proposed method can be performed at any training stage (both for models trained from scratch and for pre-trained models to be fine-tuned) and does not require any changes in the training pipelines. An efficient CUDA implementation of the proposed method, together with pre-computed piecewise-constant approximations for many popular activation functions, is available for PyTorch at the GitHub repository (https://github.com/anonymous/repository). B DETAILED MEMORY MEASUREMENTS FOR DIFFERENT MODELS We provide memory measurements for different model architectures in the table below. "Model size" is the total memory used for storing model parameters (without model gradients and optimizer statistics). "All activations size" is the total memory used by tensors saved for the backward pass. "Nonlinearity activations size" is the part of all activations used only by nonlinearity layers. "Percentage saving" is the memory saved on all activations using our method compared to full-precision nonlinearities, and the percentage value in the maximum batch size columns is the increase in batch size achievable by using our method compared to full-precision nonlinearities, taken under ideal circumstances. The maximum batch size is calculated under the assumption that four model copies are stored on the device (model parameters, model gradients and optimizer statistics, such as the two moments stored by the Adam optimizer) for a GPU with 32Gb of memory.

Model | Size (Mb) | All Act. (Mb) | Nonlin. Act. (Mb) | Max BS (std) | Max BS (1-bit) | Max BS (2-bit) | Max BS (3-bit) | Max BS (4-bit)
ResNet-18 | 44.6 | 40.0 | 11.5 | 813 | 1127 (+38.6%) | 1113 (+36.9%) | 1100 (+35.3%) | 1086 (+33.6%)
ResNet-50 | 99.2 | 156.8 | 47.9 | 206 | 293 (+42.2%) | 289 (+40.3%) | 285 (+38.3%) | 281 (+36.4%)
ResNet-101 | 171.4 | 234.5 | 73.4 | 136 | 196 (+44.1%) | 193 (+41.9%) | 190 (+39.7%) | 188 (+38.2%)
ResNet-152 | 232.3 | 328.2 | 104.9 | 97 | 140 (+44.3%) | 138 (+42.3%) | 136 (+40.2%) | 134 (+38.1%)
DenseNet-121 | 30.9 | 243.8 | 79.1 | 133 | 195 (+46.6%) | 192 (+44.4%) | 189 (+42.1%) | 186 (+39.8%)
DenseNet-161 | 112.4 | 458.8 | 147.0 | 70 | 102 (+45.7%) | 100 (+42.9%) | 99 (+41.4%) | 97 (+38.6%)
DenseNet-169 | 54.7 | 296.3 | 95.3 | 109 | 159 (+45.9%) | 157 (+44.0%) | 155 (+42.2%) | 152 (+39.4%)
DenseNet-201 | 77.4 | 382.2 | 123.9 | 84 | 123 (+46.4%) | 122 (+45.2%) | 120 (+42.9%) | 118 (+40.5%)
EfficientNet B0 | 20.4 | 112.4 | 32.4 | 290 | 403 (+39.0%) | 398 (+37.2%) | 393 (+35.5%) | 388 (+33.8%)
EfficientNet B3 | 47.5 | 218.6 | 59.5 | 149 | 202 (+35.6%) | 200 (+34.2%) | 197 (+32.2%) | 195 (+30.9%)
EfficientNet B7 | 256.3 | 674.8 | 179.3 | 47 | 63 (+34.0%) | 62 (+31.9%) | 61 (+29.8%) | 61 (+29.8%)
VGG 11 | 507.2 | 100.9 | 37.0 | 304 | 472 (+55.3%) | 464 (+52.6%) | 456 (+50.0%) | 448 (+47.4%)
VGG 16 | 528.2 | 163.8 | 68.5 | 187 | 314 (+67.9%) | 307 (+64.2%) | 301 (+61.0%) | 295 (+57.8%)
VGG 19 | 548.4 | 178.8 | 75.0 | 171 | 288 (+68.4%) | 281 (+64.3%) | 275 (+60.8%) | 270 (+57.9%)
RoBERTa-base | 480.7 | 185.6 | 36.0 | 166 | 204 (+22.9%) | 203 (+22.3%) | 201 (+21.1%) | 200 (+20.5%)
RoBERTa-large | 1355.6 | 482.1 | 96.0 | 56 | 70 (+25.0%) | 69 (+23.2%) | 69 (+23.2%) | 68 (+21.4%)
GPT2 | 491.0 | 297.1 | 146.2 | 103 | 198 (+92.2%) | 192 (+86.4%) | 187 (+81.6%) | 182 (+76.7%)

C NUMERICAL RESULTS FOR DYNAMIC PROGRAMMING D EXPERIMENT SETUPS D.1 GLUE The benchmark implementation is based on the open-source Huggingface (huggingface.co) run_glue example (https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) and is available at https://github.com/anonymous/repository.
The following parameters were used:

Task | Batch size | Learning rate | Epochs | Warmup length
CoLA | 32 | 0.00002 | 10 | 320
MNLI | 32 | 0.00001 | 10 | 7432
MNLI-MM | 32 | 0.00001 | 10 | 7432
MRPC | 16 | 0.00001 | 10 | 137
QNLI | 32 | 0.00001 | 10 | 1986
QQP | 32 | 0.00001 | 10 | 28318
RTE | 16 | 0.00002 | 10 | 122
SST2 | 32 | 0.00002 | 10 | 1256
STSB | 16 | 0.00002 | 10 | 214

Common parameters are:

Parameter | Value
Optimizer | Adam
Adam β1 | 0.9
Adam β2 | 0.98
Adam ϵ | 1e-6
Weight decay | 0.1
Float precision | fp16

D.2 RESNET We use the open-source FFCV Leclerc et al. (2022) ImageNet benchmark (https://github.com/libffcv/ffcv-imagenet) with the ResNet18 parameters for one A100 Nvidia GPU: https://github.com/libffcv/ffcv-imagenet/blob/main/rn18_configs/rn18_88_epochs.yaml. D.3 RUDALL-E We used the open-source implementation that can be found at https://github.com/sberbank-ai/ru-dalle. All experiments have the following setup: training size 2474, validation size 275, image loss weight 1000, frozen MLP and attention layers, batch size 40, start lr 4e-7, max lr 1e-5, final lr 2e-8, warmup 0.1, 8-bit Adam Dettmers et al. (2021), weight decay 0.2, betas (0.9, 0.98), eps 1e-6, gradient checkpointing 24, trained for 6h using 1xA100. E COMBINATION OF ACTNN AND FEWBIT The ActNN method is more general and can be applied to a broader class of layers, while our method focuses only on one class of layers, pointwise nonlinearities. When this is not enough and more memory saving is required, it is possible to combine the two methods: use Fewbit for pointwise nonlinearities and ActNN for everything else. Such a combination should work better than pure ActNN, since Fewbit works better than ActNN for pointwise nonlinearity layers. To check this hypothesis, we train ResNet18 on the CIFAR10 dataset. We replace the standard ReLU pointwise nonlinearity with GELU, compress all layers except GELU with 4-bit ActNN (since 2-bit ActNN is too strong a compression and the model diverges), and compress the GELU layers with either 2-bit ActNN or 2-bit Fewbit. Fig. 12 shows the training loss and accuracy: ActNN + Fewbit for pointwise nonlinearities works slightly better than pure ActNN, as expected. F MORE PLOTS FOR EXPERIMENTS G DYNAMIC PROGRAMMING It is easy to see that the optimal value of y for L(s,y) in Eq. (3) with given s is:
y_i(s) = ∫_{s_i}^{s_{i+1}} w(x) f'(x) dx / ∫_{s_i}^{s_{i+1}} w(x) dx.   (8)
Consider Eq. (7): both y(j, i) and T(j, i) can be calculated in advance using analytical formulas (when possible) or numerically via the corresponding one-dimensional integrals. After that, the full array DP(i, k) can be computed in O(n²K) time and O(n²) space, where K is the required number of constant intervals in the approximation Eq. (2). Please note that this optimization has to be performed only once, so n can be chosen quite large and the result will be very close to the global minimum. Note that the space complexity can be reduced to O(n) by introducing three auxiliary arrays F2, W and FW and rewriting Eq. (7):
F2(i) = ∫_A^{t_i} f'(x)² w(x) dx,   W(i) = ∫_A^{t_i} w(x) dx,   FW(i) = ∫_A^{t_i} f'(x) w(x) dx,
y(j, i) = (FW(i) − FW(j)) / (W(i) − W(j)),
T(j, i) = F2(i) − F2(j) − y(j, i)² (W(i) − W(j)).   (9)
We can see that ultimately only O(n) one-dimensional integrals have to be stored, and everything else can be evaluated in O(1) time on the spot.
The one-dimensional integrals can be calculated numerically in O(n) time and space as well:
F2(i+1) = F2(i) + ∫_{t_i}^{t_{i+1}} f'(x)² w(x) dx,
W(i+1) = W(i) + ∫_{t_i}^{t_{i+1}} w(x) dx,
FW(i+1) = FW(i) + ∫_{t_i}^{t_{i+1}} f'(x) w(x) dx.   (10)
Numerical results. In Fig. 1, we provide some 3-bit examples for popular activation functions obtained with the described method; more Few-bit approximations can be seen in Fig. 11. In Table 3 we provide the numerical values of the error Eq. (3).
1. What is the focus of the paper regarding activation compressed training? 2. What are the strengths of the proposed approach, particularly in dealing with nonlinearities? 3. What are the weaknesses of the paper, especially regarding its comparisons and practicality? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method to perform activation compressed training (ACT) for pointwise nonlinear activation functions. Instead of storing a quantized version of the input x, the proposed approach stores a quantized version of the activation function's gradient f'(x), which is multiplied with the gradient in the backward phase. The paper proposes a nonlinear quantization method for f'(x) with a dynamic programming method to determine the optimal quantized approximation. Quantization can then be reduced to table lookup. The proposed method is evaluated on language model pretraining, text-to-image generation, and image classification tasks, where the proposed method is slightly better than existing ACT methods under the same bitwidth. Strengths And Weaknesses Strength: Reducing the training memory footprint is an important problem. The proposed method is technically sound, and should be the correct way to deal with nonlinearities. Weaknesses: The paper still needs polishing. For example, the introduction is too brief. The improvement is incremental. It is somewhat unclear whether the comparison with ActNN is fair. ActNN compresses both the linear and nonlinear layers. Does the proposed method also do so? (i.e., combine the proposed method for the nonlinear layers and ActNN for the linear layers. I think the combined result is what is of practical interest.) Table lookup can be very expensive. The time consumption is not reported. I'm not sure if the proposed nonlinear quantization is practical. Clarity, Quality, Novelty And Reproducibility The paper is mostly clearly written and reproducible. Novelty might be somewhat thin.
ICLR
Title Few-bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction Abstract Memory footprint is one of the main limiting factors for large neural network training. In backpropagation, one needs to store the input to each operation in the computational graph. Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and such operation induces additional memory costs which — as we show – can be significantly reduced by quantization of the gradients. We propose a systematic approach to compute optimal quantization of the retained gradients of the pointwise nonlinear functions with only a few bits per each element. We show that such approximation can be achieved by computing an optimal piecewise-constant approximation of the derivative of the activation function, which can be done by dynamic programming. The drop-in replacements are implemented for all popular nonlinearities and can be used in any existing pipeline. We confirm the memory reduction and the same convergence on several open benchmarks. 1 INTRODUCTION Modern neural network models are getting larger and larger. One of the main bottlenecks in the training loop is the required device memory storage Ojika et al. (2020); Gao et al. (2020). In this paper, we propose a universal approach that helps to reduce the model memory footprint during backpropagation. Note that this approach is complementary to other memory reducing techniques such as checkpointing Chen et al. (2016) or offloading Beaumont et al. (2021). Our method can be applied to any neural network without any additional preprocessing. Memory consumed by the model during training (except intermediate tensors) can be split into two groups: 1) the model weights (including additional memory for the optimizer state), 2) activations saved for the backward pass, over which the computation is not carried out directly at the moment, but which will be required in the future to compute the gradients. Every operation in the computational graph generates a memory footprint. It is typically overlooked, that the application of the pointwise non-linearity (such as GELU or sigmoid) results in storing the input for the backward pass. We show that instead of keeping the full input tensor, it is possible to store a low-bit representation, which allows accurate gradients approximation. In this work, we propose to approximate the derivative of the activation function in a piecewiseconstant form. Such an approximation problem has to be solved once for each activation function, and we propose a simple technique to do that. The proposed approximation divides all values into several bins and saves only their corresponding bin indices instead of storing all values. This is a lossy compression, but the additional noise introduced by it is negligible as we will show on several benchmarks in Section 4. The main contributions of our paper are: • We propose new approximate backward computation schemes that significantly reduce the memory consumption of neural network training. • We benchmark our approach on several tasks. We show that it provides up to 40% memory reduction on various tasks while maintaining accuracy on par with the model trained via the standard approach. 2 QUANTIZED GRADIENTS OF ACTIVATIONS Gradients of activations using automatic differentiation. Modern deep learning frameworks use the reverse mode automatic differentiation to calculate the gradients of the loss over the model parameters. 
Forward computation can be associated with a directed acyclic graph, depicted in Fig. 2. Each operation f computes the output Xl+1 given the input Xl and has to save some information Sl that would be used on the backward pass in order to calculate the derivative ∂L/∂Xl from ∂L/∂Xl+1 and Sl. Thus, in a typical training loop, the intermediates Sl of all operations in the graph are stored in the memory during the whole forward pass until they are no longer needed after the completion of the corresponding backward operation during backward pass. This generates an additional memory, which can be quite significant and be larger than the total amount of parameters of the model. Pointwise activations. In this paper, we focus on a pointwise activation function, which is ubiquitous in modern neural network architectures. Given an input tensor Xl we apply a function f to each of the elements of this tensor: f(Xl) = [f(X j1,...,jk l )]j1,...,jk , f : R → R. This operation is very cheap compared to other operations in the deep neural network model and does not attract much attention when analysing computational complexity. However, standard implementation in such a framework as PyTorch induces not a very small memory footprint and the whole input Xl is saved for the backward pass. The backward pass for such a function consists of element-wise multiplication of the propagated gradient tensor by the derivative of the nonlinearity function at the points of the input tensor: if Xl+1 = f(Xl), then the gradient of the loss L with respect to Xl is computed as ∂L ∂Xl = ∂L ∂Xl+1 f ′(Xl), (1) where f ′(Xl) is the tensor with elements, consisting of the derivative of f evaluated in each element of Xl. From Eq. (1), it follows that for the backward pass we have to store only f ′(Xl), and Xl is not needed. ReLU activation function. To illustrate our idea, consider one of the most popular nonlinearities, f(x) = ReLU(x) = max(0, x). Its derivative f ′ takes only two values, 0 and 1 and it only requires 1 bit to store. If single precision is used, then the compression is 32, which is quite noticeable. GELU activation function. In modern transformer architectures Vaswani et al. (2017) the GELU Hendrycks & Gimpel (2016) nonlinearity is typically used. The derivative no longer takes two values. Instead, we propose to approximate f ′ by a piecewise-constant function. For example, if we allow 8 different values, we will need only 3 bits per each element (Fig. 1). Quantized gradients of activations. In stochastic optimization, if the gradient for a given batch is computed approximately, the optimization may still converge. The GELU derivative (see Fig. 1) is quite “similar” to a piecewise-constant function: for large values of |x|, it is almost exactly equal to 0 or 1, and for small values of x, a rather interesting transition from 0 to 1 occurs. Instead of calculating the derivative exactly on the backward pass, we approximate it using a certain piecewise-constant approximation: q(x|s,y) = k∑ i=1 yi1[x ∈ [si; si+1]], (2) where s = (s1, · · · , sk+1) is a sorted vector of intervals, on which approximation is constant, y = (y1, · · · , yk) is a vector of the correspond- ing values of approximation and 1 denotes an indicator function, which equals 1 whenever its argument is true and 0 otherwise. That means, that q(x|s,y) equals yi when x ∈ [si; si+1], see Fig. 3 for illustration. 
As noted above, if the approximation has k constant intervals, instead of storing the full input tensor X , it will be possible to save only log k bits of information (per element of the input tensor), which, accordingly, will reduce the memory consumption by 32/ log k times for single precision. If quantizatoin scheme Eq. (2) is given, drop-in replacement for activation function f is very straightforward. On the forward pass, instead of the full tensor X, we have to save only indices of intervals to which the elements of X belong, and on the backward pass, we need to multiply gradient w.r.t. output not with the actual derivative of f , but with values from y corresponding to stored indices. Pseudocode is presented in Alg. 1. 1 # Globally stored piecewise-constant approximation parameters 2 s, y = [...], [...] 3 4 def forward(X): 5 X_pos = sortedsearch(s, X) 6 save_for_backward(X_pos) 7 return f(X) 8 9 def backward(dLdY): 10 X_pos = get_saved_for_backward() 11 return dLdY * y[X_pos] Listing 1: Pseudo code for quantized backward layer. Arrays s and y are parameters of quantization Eq. (2), sortedsearch is a binary search method. Memory of Few-bit Appproximation. As it was mentioned above, by replacing all pointwise nonlinearity layers in the neural network with Few-bit approximation consisting of k piecewiseconstant intervals, the memory consumption of such layers during forward-backward pass will be reduced by 32/k times for single-precision learning mode. However, how many times in total the neural network memory consumption is reduced depends on the particular architecture of the neural network and the optimizer used in the process. During training, the memory is spent on weights (parameters) of the network, on optimizer statistics, and on all stored activations, some of which are activations of pointwise nonlinearity layers. For example, when training ResNet18 with the Adam optimizer on 256x256 images, the model weights take 44.6Mb, 3 ∗ 44.6 = 133.8Mb is used by the optimizer to store gradients and moments, BS∗40Mb is needed to store all activations during forwardpass, BS ∗11.5Mb of which are pointwise nonlinearity layers and BS ∗28.5Mb is for all other layers, where BS is the batch size. Therefore, the maximum possible batch size with standard nonlinearities is ⌊(GPU_MEM − 4 ∗ 44.6)/40⌋, while the maximum batch size with Few-bit nonlinearities of size k is ⌊(GPU_MEM − 4 ∗ 44.6)/(28.5 + 11.5 ∗ log k/32)⌋, where GPU_MEM is the available GPU memory. In our example with ResNet18 for standard nonlinearity layers, the maximum batch size for a video card with 32Gb memory is 813, while using 4-bit Few-bit approximation is 1086 (+33%). Memory consumption for different Few-bit mods and different neural network architectures is presented in Appendix B. Speed of Few-bit Approximation The memory gain of a Few-bit layer does not slow down the speed. The standard nonlinearity layer calculates the activation function in the forward pass and the activation function gradient in the reverse pass. The activation function gradient usually includes complex functions such as exponent, erf, and others. The Few-bit version of the layer also calculates the activation function on forward pass, but the gradient calculation during backward pass is replaced by one binary search and one lookup in the value table (see Alg. 1). Our efficient implementation of this procedure using CUDA kernels runs several percent faster than the standard nonlinearity layer. 
However, this result may depend on specific framework implementation and the used GPU, so in our experiments in Section 4 we do not consider the time gain, assuming that both layers are roughly equally fast, but focus specifically on memory savings. 3 OPTIMAL PIECEWISE-CONSTANT APPROXIMATION Fig. 1 shows examples of an optimized 3-bit piecewise-constant approximation for several nonlinearity function. Finding the optimal approximation parameters (boundaries of intervals and values on them) is a challenging task. We propose to find them by minimizing the (weighted) L2 norm of the error. Consider function f : R → R and its derivative f ′. We will measure the quality of a piecewise constant approximation Eq. (2) with a weighted L2 norm: min y,s L(s,y), (3) L(s,y) = ∫ R (f ′(x)− q(x|s,y))2w(x)dx = k∑ i=1 ∫ si+1 si (f ′(x)− yi)2w(x)dx, (4) where w is some weight function reflecting our prior knowledge of the activation function argument distribution. Practical choices of w may be either 1[x ∈ [A;B]] (with some reasonable A and B, which should be large enough) which makes integral Eq. (3) tractable, or maybe, e.g., standard normal distribution. L(s,y) is differentiable w.r.t. s and y, so optimal piecewise-constant approximations can be found using standard gradient-based optimization techniques. But the minimization problem Eq. (3) has many local minima that are far from optimal. We suggest using dynamic programming to get some good initial approximation that can be further finetuned using gradient-based methods (but also can be used as is because it is very accurate on its own). Dynamic programming. We will assume that the weighting function w is chosen such that w(x) = 0 for x ̸∈ [A;B]. Consider the following auxiliary value: DP(t, k) = min y1:k, s1:k+1, s.t.s1=A,sk+1=t ∫ t A (f ′(x)− q(x|y, s))2w(x)dx, t ∈ R, k ∈ N. Essentially, DP(t, k) is the optimal piecewise constant approximation of size k for the given function f ′ on the interval [A; t]. The recurrent formula for this value is: DP(t, k + 1) = min t′ { DP(t′, k) + ∫ t t′ (f ′(x)− y(t′, t))2w(x)dx } , (5) y(t′, t) = ∫ t t′ w(x)f ′(x)dx∫ t t′ w(x)dx , (6) since a piecewise-constant approximation of size k + 1 consists of corresponding approximation of size k (first term) plus one constant interval (second term). Here t′ chooses the right bound of approximation of size k, and y(t′, t) stands for the optimal value for the interval [t′; t] Eq. (8). Then the minimal value of L(s,y) of size k is equal to DP(B, k). To solve the minimization problem Eq. (5), we suggest considering the discretization of t: A = t0 < t1 < · · · < tn = B and reducing the calculation of DP(t, k) to its approximation only in the points of discretization: DP(i, k) = min j {DP(j, k − 1) + T (j, i)} , T (j, i) = ∫ ti tj (f ′(x)− y(j, i))2w(x)dx, y(j, i) = ∫ ti tj w(x)f ′(x)dx∫ ti tj w(x)dx . (7) Eq. (7) can be calculated in O(n2K) time and O(nK) space, which is described in Appendix G in detail. Please note, that this routine should be evaluated only once, possibly by the framework developers, and the used indefinitely. Which means that number of discritization points n can be taken quite large, tens of thousends easily. That would make global solutoin of discrete problem, given in Eq. (7) very close to the global solution of the original problem Eq. (3). We give precalculated Few-bit approximations for many different pointwise nonlinearity functions in our implementation at https://github.com/anonymous/repository. 
4 EXPERIMENTS

The goal of our experiments is not only to show that the Few-bit nonlinearity approach provides memory savings during neural network training without loss of final model quality. In addition, we want to show experimentally that the approach does not change the learning dynamics themselves, because in that case its application in practice is almost completely safe: there is a memory gain without loss of speed or quality, and without risk of interference with other training factors under study (hence, no additional search or tuning of other hyperparameters is needed). To this end, in addition to the main metrics of the trained models (which depend on the specific tasks and benchmarks), we also compare the training loss and validation loss curves over the whole training process. Below we show that the 1-bit and 2-bit Few-bit approximations already behave almost the same as the original nonlinearity layers, and the 3- and 4-bit Few-bit approximations reach the original model quality.

We have tested two of the most important and commonly used neural network architectures: convolutional neural networks and transformer-based networks. We use standard, popular open-source benchmarks with published hyperparameters in order to demonstrate the behavior of the Few-bit approach under drop-in replacement of the standard nonlinearities, without any hyperparameter optimization or specially selected training conditions. In Section 4.1, we test the RoBERTa transformer-based neural network on the GLUE Wang et al. (2019) benchmark, which includes 9 different NLP tasks. In Section 4.2, we test the training of the generative ruDALL-E model on the task of modeling the joint distribution of text and image tokens on the Russian Emoji dataset. We use the GELU nonlinearity for both transformer architectures, as it is the main nonlinearity used in such models. In Section 4.3, we test the classical ResNet18 architecture on the ImageNet dataset using the open benchmark ffcv Leclerc et al. (2022). In the classical ResNet architecture, we replace all ReLU nonlinearities with one of GELU, SELU, or Swish to demonstrate that the Few-bit approach works with a wide range of popular activation functions. The main analog of our Few-bit approach is the ActNN method; in Section 4.4 we make a detailed comparison with it. The code to reproduce all experiments is available at https://github.com/anonymous/repository, and all hyperparameters for training are given in Appendix D.

4.1 GLUE benchmark. In Table 1 we report results for the RoBERTa-base model Liu et al. (2019) on the GLUE benchmark Wang et al. (2019) for standard GELU and the 1-, 2-, 3- and 4-bit Few-bit GELU. The 1- and 2-bit versions show minor performance degradation, while the 3- and 4-bit GELU show no visible difference and closely match the vanilla GELU performance. This can be seen more clearly in Fig. 6, which plots the metric averaged across all GLUE tasks against the number of bits in the Few-bit approximation. The behaviour of the loss during training is depicted in Fig. 5: the 3- and 4-bit versions are hardly distinguishable from standard GELU.

4.2 RuDALL-E. In Fig. 4 we present the training dynamics of the ruDALL-E Malevich Ramesh et al. (2021) model (implementation taken from https://github.com/sberbank-ai/ru-dalle) on the Russian Emoji dataset. The dataset Shonenkov et al.
(2021) contains 2749 unique emoji icons and 1611 unique texts that were collected by web scraping (the difference in quantities is due to the fact that there are sets within which emojis differ only in color; moreover, some elements are homonyms in Russian). ruDALL-E Malevich is a big multimodal pretrained transformer that learns the conditional distribution of images given a string of text (more precisely, it autoregressively models the text and image tokens as a single stream of data). The ruDALL-E Malevich encoder part is a 24-layer Transformer Vaswani et al. (2017) model with 16 attention heads, 2048 hidden dimensions and the standard GELU nonlinearity, which in total has 1.3B parameters. It works with 128 text tokens, which are produced from the text input with the YTTM tokenizer (https://github.com/VKCOM/YouTokenToMe), and 1024 image tokens, which are obtained by encoding the input image with Sber-VQGAN (https://github.com/sberbank-ai/sber-vq-gan). Few-bit backward for ruDALL-E Malevich shows the same behaviour as for the RoBERTa-base architecture: the 1- and 2-bit versions, although they cope with training perfectly well, show minor performance degradation, while the 3- and 4-bit versions are indistinguishable from the original GELU.

4.3 ResNet Architecture. We trained the ResNet18 model He et al. (2016) on the ImageNet Russakovsky et al. (2015) dataset using the ffcv benchmark Leclerc et al. (2022), with ReLU replaced by the GELU, SELU and Swish nonlinearity functions. Graphs for the Swish nonlinearity are shown in Fig. 8 and graphs for the other nonlinearities in Fig. 13 in Appendix F: the 1- and 2-bit versions have a minor performance drop, while the 3- and 4-bit versions are on par with the standard nonlinearity.

4.4 ActNN. As a baseline, we use another quantization scheme, ActNN Chen et al. (2021). It works in a much wider spectrum of situations, as it can quantize not only pointwise nonlinearity layers but also all kinds of linear layers (convolutional and dense), normalization layers and pooling layers. Without going deep into details, ActNN divides the saved tensor H into chunks h_i of equal size G. Then, given a quantization budget of b bits, each chunk h_i is normalized, u_i = 2^b (h_i − min{h_i}) / (max{h_i} − min{h_i}), and its randomly quantized version is saved: ū_i = ⌈u_i⌉ with probability u_i − ⌊u_i⌋, and ⌊u_i⌋ otherwise. The random rounding guarantees that the quantization is unbiased. For each group, the two additional values min{h_i} and max{h_i} are saved as well, but for a group size of G = 256 this amounts to only 0.125 additional bits per element, which we ignore in the following tests.

By construction, ActNN does not take into account the global behaviour of the nonlinearity derivative. We argue that for nonlinearity layers this is crucial, and thus our pre-optimized quantization scheme is preferable. To confirm this, we examine ActNN behaviour on the QQP task from the GLUE benchmark for different quantization budgets and compare it with our method (Fig. 9 and Table 2). In general, our method with a budget smaller by 1 bit works as well as or better than ActNN, which is very important in the low-bit setting. In Fig. 10 we compare ActNN and Few-bit for the ResNet18 architecture on ImageNet with the SELU nonlinearity, while results for the GELU and Swish nonlinearities can be found in Fig. 14 in Appendix F. Aggregated top-1 accuracy for all activation functions is presented in Fig. 7.
Our method steadily outperforms ActNN, which is especially noticeable in the 1-bit regime: ActNN experiences a strong drop in accuracy, while Few-bit Backward stays much closer to the standard nonlinearities. This means that 1-bit Few-bit Backward can be used in cases where reducing the memory consumption of a neural network is critical.

5 RELATED WORK

Reducing the memory footprint is an important topic. To save memory during training, in addition to working with stored activations, the memory used to store model parameters can be compressed. Quantization Bondarenko et al. (2021); Bengio et al. (2013); Banner et al. (2019); Jacob et al. (2018); Nagel et al. (2021); Krishnamoorthi (2018) limits the admissible values of weights to some small finite set, so less memory is needed for storage. Low-rank representation of weights Hrinchuk et al. (2020); Phan et al. (2020); Gusak et al. (2019; 2021); Cui et al. (2020); Novikov et al. (2018); Lebedev et al. (2015) assumes some internal structure in the model weights and saves memory by explicitly exploiting this structure with low-rank methods from linear algebra. Low-precision training and low-precision optimizers focus on using lower-precision floats to store weights, optimizer parameters, and model gradients. All of these approaches are complementary to the proposed one and can be used together. Checkpointing Beaumont et al. (2019; 2021); Chen et al. (2016) saves memory at the cost of extra computation: it stores fewer activations and recomputes the rest from the saved checkpoints. Offloading methods Beaumont et al. (2020) send the saved activations to the host RAM and load them back into GPU memory on the backward pass, which also saves GPU memory at the cost of host-device communication time. ActNN Chen et al. (2021) is a framework for quantizing stored activations adaptively on the fly. In contrast to our work, it allows quantizing not only element-wise activation layers but many others as well, including convolutional, normalization, and linear layers. However, the method depends on the distribution of the elements of the quantized tensors and, because of that, its performance may degrade. Our approach, on the other hand, selects a data-agnostic optimal quantization, which in practice turns out to be sufficient and easier to use.

6 CONCLUSION

We have proposed a method to reduce memory consumption during the training of deep neural network models by storing less information for the backward pass in the element-wise activation functions. For effective training, there is no need to compute the derivative of the activation function precisely; a piecewise-constant approximation is sufficient. This makes it possible to save, at each application of the activation function, not the entire input tensor but only the interval index in the piecewise-constant approximation. Experiments show that for a wide class of models and problems, storing only 3 bits of information per tensor element does not degrade learning quality and saves about 20 percent of memory. We have proposed an efficient algorithm for constructing an optimal piecewise-constant approximation. The proposed drop-in replacements for popular activation functions (ReLU, GELU, Swish, Sigmoid and others) do not depend on the neural network model, the problem to be solved, or the peculiarities of the data distribution.
The replacement of the original activation functions by the proposed method can be performed at any training stage (both for models trained from scratch and for pre-trained models that are subsequently fine-tuned) and does not require any changes in the training pipelines. An efficient CUDA implementation of the proposed method, together with pre-computed piecewise-constant approximations for many popular activation functions, is available for PyTorch at https://github.com/anonymous/repository.

B DETAILED MEMORY MEASUREMENTS FOR DIFFERENT MODELS

We provide memory measurements for different model architectures in the table below. "Model size" is the total memory used to store the model parameters (without model gradients and optimizer statistics). "All activations size" is the total memory used by tensors saved for the backward pass. "Nonlinearity activations size" is the part of all activations used only by nonlinearity layers. "Percentage saving" is the memory saved on all activations by our method compared to full-precision nonlinearities, and the percentage values in the maximum batch size columns give the increase in batch size achievable with our method compared to full-precision nonlinearities, under ideal circumstances. The maximum batch size is calculated under the assumption that four model copies are stored on the device (model parameters, model gradients, and optimizer statistics such as the two moments stored by the Adam optimizer) for a GPU with 32Gb of memory.

Model | Size (Mb) | All act. (Mb) | Nonlin. act. (Mb) | Max BS std. | Max BS 1-bit | Max BS 2-bit | Max BS 3-bit | Max BS 4-bit
ResNet-18 | 44.6 | 40.0 | 11.5 | 813 | 1127 (+38.6%) | 1113 (+36.9%) | 1100 (+35.3%) | 1086 (+33.6%)
ResNet-50 | 99.2 | 156.8 | 47.9 | 206 | 293 (+42.2%) | 289 (+40.3%) | 285 (+38.3%) | 281 (+36.4%)
ResNet-101 | 171.4 | 234.5 | 73.4 | 136 | 196 (+44.1%) | 193 (+41.9%) | 190 (+39.7%) | 188 (+38.2%)
ResNet-152 | 232.3 | 328.2 | 104.9 | 97 | 140 (+44.3%) | 138 (+42.3%) | 136 (+40.2%) | 134 (+38.1%)
DenseNet-121 | 30.9 | 243.8 | 79.1 | 133 | 195 (+46.6%) | 192 (+44.4%) | 189 (+42.1%) | 186 (+39.8%)
DenseNet-161 | 112.4 | 458.8 | 147.0 | 70 | 102 (+45.7%) | 100 (+42.9%) | 99 (+41.4%) | 97 (+38.6%)
DenseNet-169 | 54.7 | 296.3 | 95.3 | 109 | 159 (+45.9%) | 157 (+44.0%) | 155 (+42.2%) | 152 (+39.4%)
DenseNet-201 | 77.4 | 382.2 | 123.9 | 84 | 123 (+46.4%) | 122 (+45.2%) | 120 (+42.9%) | 118 (+40.5%)
EfficientNet B0 | 20.4 | 112.4 | 32.4 | 290 | 403 (+39.0%) | 398 (+37.2%) | 393 (+35.5%) | 388 (+33.8%)
EfficientNet B3 | 47.5 | 218.6 | 59.5 | 149 | 202 (+35.6%) | 200 (+34.2%) | 197 (+32.2%) | 195 (+30.9%)
EfficientNet B7 | 256.3 | 674.8 | 179.3 | 47 | 63 (+34.0%) | 62 (+31.9%) | 61 (+29.8%) | 61 (+29.8%)
VGG 11 | 507.2 | 100.9 | 37.0 | 304 | 472 (+55.3%) | 464 (+52.6%) | 456 (+50.0%) | 448 (+47.4%)
VGG 16 | 528.2 | 163.8 | 68.5 | 187 | 314 (+67.9%) | 307 (+64.2%) | 301 (+61.0%) | 295 (+57.8%)
VGG 19 | 548.4 | 178.8 | 75.0 | 171 | 288 (+68.4%) | 281 (+64.3%) | 275 (+60.8%) | 270 (+57.9%)
RoBERTa-base | 480.7 | 185.6 | 36.0 | 166 | 204 (+22.9%) | 203 (+22.3%) | 201 (+21.1%) | 200 (+20.5%)
RoBERTa-large | 1355.6 | 482.1 | 96.0 | 56 | 70 (+25.0%) | 69 (+23.2%) | 69 (+23.2%) | 68 (+21.4%)
GPT2 | 491.0 | 297.1 | 146.2 | 103 | 198 (+92.2%) | 192 (+86.4%) | 187 (+81.6%) | 182 (+76.7%)

C NUMERICAL RESULTS FOR DYNAMIC PROGRAMMING

D EXPERIMENT SETUPS

D.1 GLUE

The benchmark implementation is based on the open-source Huggingface implementation (huggingface.co, https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) and is available at https://github.com/anonymous/repository.
The following parameters were used:

Task | Batch size | Learning rate | Epochs | Warmup length
Cola | 32 | 0.00002 | 10 | 320
MNLI | 32 | 0.00001 | 10 | 7432
MNLI-MM | 32 | 0.00001 | 10 | 7432
MRPC | 16 | 0.00001 | 10 | 137
QNLI | 32 | 0.00001 | 10 | 1986
QQP | 32 | 0.00001 | 10 | 28318
RTE | 16 | 0.00002 | 10 | 122
SST2 | 32 | 0.00002 | 10 | 1256
STSB | 16 | 0.00002 | 10 | 214

Common parameters are:

Parameter | Value
Optimizer | Adam
Adam β1 | 0.9
Adam β2 | 0.98
Adam ϵ | 1e-6
Weight decay | 0.1
Float precision | fp16

D.2 RESNET

We use the open-source FFCV Leclerc et al. (2022) ImageNet benchmark (https://github.com/libffcv/ffcv-imagenet) with the ResNet18 parameters for a single Nvidia A100 GPU: https://github.com/libffcv/ffcv-imagenet/blob/main/rn18_configs/rn18_88_epochs.yaml.

D.3 RUDALL-E

We used the open-source implementation available at https://github.com/sberbank-ai/ru-dalle. All experiments have the following setup: training size 2474, validation size 275, image loss weight 1000, frozen MLP and attention layers, batch size 40, start lr 4e-7, max lr 1e-5, final lr 2e-8, warmup 0.1, 8-bit Adam Dettmers et al. (2021), weight decay 0.2, betas (0.9, 0.98), eps 1e-6, gradient checkpointing 24, trained for 6h on 1xA100.

E COMBINATION OF ACTNN AND FEWBIT

The ActNN method is more general and can be applied to a broader class of layers, while our method focuses on one class of layers, pointwise nonlinearities. In cases where this is not enough and more memory saving is required, the two methods can be combined: Few-bit for pointwise nonlinearities and ActNN for everything else. Such a combination should work better than pure ActNN, since Few-bit works better than ActNN on pointwise nonlinearity layers. To check this hypothesis, we train ResNet18 on the CIFAR10 dataset. We replace the standard ReLU pointwise nonlinearity with GELU, compress all layers except GELU with 4-bit ActNN (2-bit ActNN compresses too aggressively and the model diverges), and compress the GELU layers with either 2-bit ActNN or 2-bit Few-bit. Fig. 12 shows the training loss and accuracy: ActNN + Few-bit for pointwise nonlinearities works slightly better than pure ActNN, as expected.

F MORE PLOTS FOR EXPERIMENTS

G DYNAMIC PROGRAMMING

For a given s, the optimal value of y for L(s, y) in Eq. (3) is easy to obtain by setting the derivative of L with respect to each y_i to zero:

y_i(s) = \frac{\int_{s_i}^{s_{i+1}} w(x) f'(x)\,dx}{\int_{s_i}^{s_{i+1}} w(x)\,dx}. \quad (8)

Consider Eq. (7): both y(j, i) and T(j, i) can be computed in advance using analytical formulas (when possible) or numerically, as the corresponding one-dimensional integrals. After that, the full array DP(i, k) can be calculated in O(n^2 K) time and O(n^2) space, where K is the required number of constant intervals in the approximation Eq. (2). Note that this optimization has to be performed only once, so n can be chosen quite large, and the result will be very close to the global minimum. The space complexity can be reduced to O(n) by introducing three auxiliary arrays F^2, W and FW and rewriting Eq. (7):

F^2(i) = \int_A^{t_i} f'(x)^2 w(x)\,dx, \quad W(i) = \int_A^{t_i} w(x)\,dx, \quad FW(i) = \int_A^{t_i} f'(x) w(x)\,dx,

y(j, i) = \frac{FW(j) - FW(i)}{W(j) - W(i)}, \quad T(j, i) = F^2(i) - F^2(j) - y(j, i)^2 \big(W(i) - W(j)\big). \quad (9)

Ultimately, only O(n) one-dimensional integrals have to be stored, and everything else can be evaluated in O(1) time on the spot.
The one-dimensional integrals themselves can be computed numerically in O(n) time and space as well, via the recurrences

F^2(i+1) = F^2(i) + \int_{t_i}^{t_{i+1}} f'(x)^2 w(x)\,dx, \quad W(i+1) = W(i) + \int_{t_i}^{t_{i+1}} w(x)\,dx, \quad FW(i+1) = FW(i) + \int_{t_i}^{t_{i+1}} f'(x) w(x)\,dx. \quad (10)

Numerical results. In Fig. 1 we provide 3-bit examples for popular activation functions obtained with the described method; more Few-bit approximations are shown in Fig. 11. In Table 3 we provide numerical values of the error Eq. (3).
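As an illustration, errors of this kind can be checked numerically with a short snippet like the one below. This is a sketch under our own assumptions, not the authors' code: it takes the GELU derivative in closed form, uses the uniform weight w(x) = 1 on [A; B], and treats the boundary array s and value array y as given placeholders.

import numpy as np
from scipy.special import erf

def gelu_derivative(x):
    # d/dx [x * Phi(x)] = Phi(x) + x * phi(x), where Phi is the standard
    # normal CDF and phi its density.
    Phi = 0.5 * (1.0 + erf(x / np.sqrt(2.0)))
    phi = np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)
    return Phi + x * phi

def quantization_error(s, y, A=-10.0, B=10.0, n=100_000):
    # Weighted L2 error of Eq. (3) with w(x) = 1 on [A; B], trapezoid rule.
    x = np.linspace(A, B, n)
    idx = np.clip(np.searchsorted(s, x, side="right") - 1, 0, len(y) - 1)
    return np.trapz((gelu_derivative(x) - y[idx]) ** 2, x)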
1. What is the main contribution of the paper, and how does it reduce memory cost?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its analysis?
3. How does the method compare to other low-precision integer quantization methods, such as those optimized in [2], in terms of performance and computational efficiency?
4. Are there any limitations or tradeoffs in using point-wise activations for quantization?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes quantizing the gradients of point-wise activations to reduce the memory cost of back-propagation. An analysis is proposed to determine quantization levels minimizing a quantization MSE metric. Experimental results using GeLU and similar activations across several bitwidths are competitive.

Strengths And Weaknesses
The idea is interesting; however, I have a couple of concerns with the analysis.
The presented analysis in eq. (3)-(7) is not novel. It is simply re-deriving the famous Lloyd-Max algorithm [1]. [1] Lloyd, S. P. "Least square quantization in PCM." Bell Telephone Laboratories Paper. Published in journal much later: Lloyd, SP: Least squares quantization in PCM. IEEE Trans. Inform. Theor. (1957/1982) 18 (1957): 5.
How does the method compare to low-precision integer quantization, which can also be optimized (see [2])? Unlike the proposed codebook quantization scheme, low-precision integer representation can use ultra-fast tensor cores on GPUs. Can the authors provide some comparisons taking that into account? [2] Sakr, Charbel, et al. "Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training." International Conference on Machine Learning. PMLR, 2022.

Clarity, Quality, Novelty And Reproducibility
The method is clear. I appreciate the authors spending some time explaining the implementation details. I have some concerns about novelty that I listed above.
ICLR
Title
Few-bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction

Abstract
Memory footprint is one of the main limiting factors for large neural network training. In backpropagation, one needs to store the input to each operation in the computational graph. Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and each such operation induces additional memory costs which, as we show, can be significantly reduced by quantization of the gradients. We propose a systematic approach to compute optimal quantization of the retained gradients of pointwise nonlinear functions with only a few bits per element. We show that such an approximation can be achieved by computing an optimal piecewise-constant approximation of the derivative of the activation function, which can be done by dynamic programming. The drop-in replacements are implemented for all popular nonlinearities and can be used in any existing pipeline. We confirm the memory reduction and unchanged convergence on several open benchmarks.

1 INTRODUCTION
Modern neural network models are getting larger and larger. One of the main bottlenecks in the training loop is the required device memory Ojika et al. (2020); Gao et al. (2020). In this paper, we propose a universal approach that helps to reduce the model memory footprint during backpropagation. Note that this approach is complementary to other memory-reducing techniques such as checkpointing Chen et al. (2016) or offloading Beaumont et al. (2021). Our method can be applied to any neural network without any additional preprocessing. The memory consumed by the model during training (apart from intermediate tensors) can be split into two groups: 1) the model weights (including additional memory for the optimizer state), and 2) activations saved for the backward pass, i.e., tensors that are not being operated on at the moment but will be required later to compute the gradients. Every operation in the computational graph generates a memory footprint. It is typically overlooked that the application of a pointwise nonlinearity (such as GELU or sigmoid) results in storing the input for the backward pass. We show that instead of keeping the full input tensor, it is possible to store a low-bit representation that allows accurate gradient approximation. In this work, we propose to approximate the derivative of the activation function by a piecewise-constant function. This approximation problem has to be solved once for each activation function, and we propose a simple technique to do so. The proposed approximation divides all values into several bins and saves only the corresponding bin indices instead of the values themselves. This is a lossy compression, but the additional noise introduced by it is negligible, as we show on several benchmarks in Section 4. The main contributions of our paper are:
• We propose new approximate backward computation schemes that significantly reduce the memory consumption of neural network training.
• We benchmark our approach on several tasks. We show that it provides up to 40% memory reduction on various tasks while maintaining accuracy on par with the model trained via the standard approach.

2 QUANTIZED GRADIENTS OF ACTIVATIONS
Gradients of activations using automatic differentiation. Modern deep learning frameworks use reverse-mode automatic differentiation to calculate the gradients of the loss with respect to the model parameters.
Forward computation can be associated with a directed acyclic graph, depicted in Fig. 2. Each operation f computes the output X_{l+1} given the input X_l and has to save some information S_l that will be used on the backward pass to compute the derivative ∂L/∂X_l from ∂L/∂X_{l+1} and S_l. Thus, in a typical training loop, the intermediates S_l of all operations in the graph are stored in memory during the whole forward pass, until they are no longer needed after the completion of the corresponding backward operation. This extra memory can be quite significant, larger even than the total number of parameters of the model.

Pointwise activations. In this paper, we focus on pointwise activation functions, which are ubiquitous in modern neural network architectures. Given an input tensor X_l, we apply a function f to each of its elements:

f(X_l) = \big[f(X_l^{j_1, \ldots, j_k})\big]_{j_1, \ldots, j_k}, \quad f : \mathbb{R} \to \mathbb{R}.

This operation is very cheap compared to other operations in a deep neural network model and does not attract much attention when analysing computational complexity. However, the standard implementation in frameworks such as PyTorch induces a non-negligible memory footprint: the whole input X_l is saved for the backward pass. The backward pass for such a function consists of an element-wise multiplication of the propagated gradient tensor by the derivative of the nonlinearity evaluated at the input tensor: if X_{l+1} = f(X_l), then the gradient of the loss L with respect to X_l is computed as

∂L/∂X_l = ∂L/∂X_{l+1} ⊙ f'(X_l), \quad (1)

where f'(X_l) is the tensor whose elements are the derivative of f evaluated at each element of X_l. From Eq. (1) it follows that for the backward pass we only have to store f'(X_l); X_l itself is not needed.

ReLU activation function. To illustrate our idea, consider one of the most popular nonlinearities, f(x) = ReLU(x) = max(0, x). Its derivative f' takes only two values, 0 and 1, so it requires only 1 bit to store. With single precision, the compression factor is 32, which is quite noticeable.

GELU activation function. In modern transformer architectures Vaswani et al. (2017), the GELU Hendrycks & Gimpel (2016) nonlinearity is typically used. Its derivative no longer takes only two values. Instead, we propose to approximate f' by a piecewise-constant function. For example, if we allow 8 different values, we need only 3 bits per element (Fig. 1).

Quantized gradients of activations. In stochastic optimization, if the gradient for a given batch is computed approximately, the optimization may still converge. The GELU derivative (see Fig. 1) is quite "similar" to a piecewise-constant function: for large values of |x| it is almost exactly equal to 0 or 1, while for small values of x a rather interesting transition from 0 to 1 occurs. Instead of calculating the derivative exactly on the backward pass, we approximate it with a piecewise-constant function:

q(x \mid s, y) = \sum_{i=1}^{k} y_i \, \mathbb{1}[x \in [s_i; s_{i+1}]], \quad (2)

where s = (s_1, \ldots, s_{k+1}) is the sorted vector of interval boundaries on which the approximation is constant, y = (y_1, \ldots, y_k) is the vector of the corresponding approximation values, and \mathbb{1} denotes the indicator function, which equals 1 whenever its argument is true and 0 otherwise. That is, q(x \mid s, y) equals y_i when x \in [s_i; s_{i+1}]; see Fig. 3 for an illustration.
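As a concrete illustration of Eq. (2), below is a minimal sketch of evaluating q(x | s, y) and the few-bit storage it implies. The names are ours and this is not the authors' implementation; s is the array of k+1 sorted boundaries and y the array of k values.

import numpy as np

def q(x, s, y):
    # Index i such that s[i] <= x < s[i+1]; elements outside [s[0], s[-1]]
    # are clamped to the first/last interval.
    idx = np.clip(np.searchsorted(s, x, side="right") - 1, 0, len(y) - 1)
    return y[idx], idx

On the forward pass only idx would be kept (log₂ k bits per element); the backward pass recovers the quantized derivative simply as y[idx].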
1. What is the focus of the paper regarding reducing memory needs in neural networks?
2. What are the strengths of the proposed method, particularly its algorithm for finding the quantized representation?
3. Do you have any concerns about the choice of optimizing metric or the need for dynamic programming?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for additional comparisons or analyses that could enhance the paper's impact?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a method for reducing the memory needs of the backward pass of a neural network. It does so by quantizing the gradients of the activation functions, meaning that they now need only a few bits to store, as opposed to the standard 32-bit floating point representation. The few-bit representations are computed to minimize the l2 distance from the exact gradients. The paper proposes an algorithm based on dynamic programming to compute them. This algorithm only needs to be run once for each bit count/activation function pairing. The experiments confirm that these few-bit representations do not significantly affect the training dynamics, so the memory reduction has no downside.

Strengths And Weaknesses
The algorithm for finding the quantized representation is quite clever. It makes good use of the dynamic programming principle. I wonder, however: is the l2 loss the right metric to optimize? The goal is to minimize the bias that we are adding to the gradient updates, and perhaps a different metric, maybe one that is optimized to minimize this bias, would perform better. Is dynamic programming needed here? Could we simply optimize the values and the beginnings/endings of the segments using a simple optimizer? What is the resolution of t used in the experiments (and what interval [A, B])?
The paper makes a convincing case that the proposed method indeed reduces the memory footprint of the gradients and that this reduction does not significantly impact the training dynamics of the network. It would be nice to show concrete benefits of this memory reduction. As stated in the paper, the memory reduction does not result in faster backward passes. Instead, it enables training with larger batch sizes. Does training with larger batch sizes then result in a speedup? In the Abstract, it is stated that "Memory footprint is one of the main limiting factors for large neural network training." Does the method enable the training of larger networks that have better accuracy? In the conclusion, it is stated that the method results in a 20% reduction in memory usage. How is this 20% computed? In Appendix B, we see the potential increase in batch size. Is this table generated by measuring the size of the model running on a GPU, or is it calculated according to the formula in Section 2? There may be significant differences between the theoretical memory footprints and the real memory footprints of these models. The paper should also compare against low-precision neural networks, for example 16-bit floating point networks. It is claimed that the savings are "complementary and can be used together", but this is not fully true, because the savings don't combine multiplicatively. The savings in a low-precision network are a smaller proportion of the overall memory cost than in a high-precision network.

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well-written. The method and the contributions are clear. Perhaps the description of the quantization algorithm could be improved; equations 5-7 especially could be clearer.
Quality: The work is good quality and the reasoning is sound. The experiments could be improved as mentioned above.
Novelty: The idea of quantizing the activations in the backward pass is novel to my knowledge.
Reproducibility: Code is supplied along with the submission. I did not try to run the code, but I think it shouldn't be difficult to reproduce the results.
ICLR
Title
Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning

Abstract
One of the main challenges in real-world reinforcement learning is to learn successfully from limited training samples. We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning, by exploiting an invariance property in the tasks. We provide a theoretical performance bound for the gain in sample efficiency under this setting. This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy. We demonstrate empirically the effectiveness of the proposed approach on two real-world sequential resource allocation tasks where this invariance property occurs: financial portfolio optimization and meta federated learning.

1 INTRODUCTION
Sample efficiency in reinforcement learning (RL) is an elusive goal. Recent attempts at increasing the sample efficiency of RL implementations have focused to a large extent on incorporating models into the training process: Xu et al. (2019); Clavera et al. (2018); Zhang et al. (2018); Berkenkamp et al. (2017); Ke et al. (2019); Yarats et al. (2019); Huang et al. (2019); Chua et al. (2018); Serban et al. (2018). The models encapsulate knowledge explicitly, complementing the experiences that are gained by sampling from the RL environment. Another means of increasing the availability of samples for a reinforcement learner is to tilt the training towards one that transfers better to related tasks: if the training process is sufficiently well adapted to more than one task, then the training of a particular task should be able to benefit from samples from the other related tasks. This idea was explored a decade ago in Lazaric & Ghavamzadeh (2010) and has been gaining traction ever since, as researchers try to extend the reach of deep reinforcement learning from its comfortable footing in solving games outrageously well to solving other important problems. Yu (2018) discusses a number of methods for increasing sample efficiency in RL and includes experience transfer as one important avenue, covering the transfer of samples (as we do here), the transfer of representations or skills, and the jumpstarting of models, which are then ready to be quickly, i.e. with few samples, updated to different tasks. D'Eramo et al. (2020) address the same idea, noting that multi-task learning can improve the learning of each individual task, motivated by robotics-type tasks with underlying commonality, such as balancing a single vs. a double pendulum, or hopping vs. walking. We are interested in exploiting the ability of multi-task learning to solve the sample efficiency problem of RL. Our setting does not apply to all problem classes, nor does it seek to exploit the kind of physical similarities found in robotics tasks that form the motivation of Lazaric & Ghavamzadeh (2010); D'Eramo et al. (2020). Rather, we show that there are a number of reinforcement learning tasks with a particular fundamental property that makes them ideal candidates for multi-task learning with the goal of increasing the availability of samples for their training. We refer to this property as permutation invariance.
It is present in very diverse tasks: we illustrate it on a financial portfolio optimization problem, whereby trades are executed sequentially over a given time horizon, and on the problem of meta-learning in a federated supervised learning setting. Permutation invariance in the financial portfolio problem manifests itself as follows: consider the task of allocating a portion of wealth to each of a number of financial instruments using a trading policy. If the trading policy is permutation invariant, one can change the order of the instruments without changing the policy. This allows one to generate multiple portfolio optimization tasks from a given set of financial instruments.

A commonality between applications that have this property is that they concern sequential resource allocation: at each time step, the resource allocator scores the quality of each available candidate entity (for example, a financial instrument in the above example), then, based on those scores, apportions out the resource (the total wealth to invest, in the above example) among the entities at that time step, so that over the horizon of interest, the reward is maximized. Sequential resource allocation problems include applications such as sequential allocation of budget; sequential allocation of space, e.g. in IT systems, hotels, or delivery vehicles; and sequential allocation of people to work slots or appointments. Many such applications possess permutation invariance in that the ordering of the entities, i.e. where the resources are allocated, can change without changing the resulting optimal allocation.

We show that under this form of permutation invariance, it is possible to derive a bound on the performance of the policy. The bound is an extension of that of Lazaric & Ghavamzadeh (2010) and, while similar to the bound of D'Eramo et al. (2020), provides additional information beyond it. We use the bound to motivate an algorithm that allows for substantially improved results as compared with solving each task on its own. The bound and the algorithm are first analyzed on a synthetic problem that validates the bound in our theorem and confirms the multi-task gain that the theory predicts.

Hessel et al. (2018); Bram et al. (2019) have cautioned against degradation of the performance on individual tasks when some tasks bias the updates to the detriment of others in multi-task learning. They claim that some tasks have a greater density or magnitude of in-task rewards and hence a disproportionate impact on the learning process. In our setting, deleterious effects of some tasks on others could also arise. The algorithm we propose handles this through a form of prioritized sampling, where priorities are put on the tasks themselves, and acts like a prioritized experience replay buffer applied to a multi-task learning problem. We show empirically that the priorities thus defined protect the overall learning problem from the deleterious effects that unrelated or unhelpful tasks could otherwise have on the policy.
The contributions of this work are as follows: (1) we identify the permutation invariance property of the class of reinforcement learning problems involving sequential resource allocation; (2) we define a method to increase sample efficiency in these reinforcement learning problems by leveraging this property of permutation invariance; (3) we provide a theoretical performance bound for the class of problems; (4) we validate experimentally the utility of permutation invariance for sample efficiency, as well as the validity of the bound, on a synthetic problem; and (5) we illustrate two real-world RL resource allocation tasks for which this property holds and demonstrate the benefits of the proposed method on sample efficiency, and thus also on the overall performance of the models.

2 RELATED WORK

A notable first stream of work on leveraging multi-task learning for enhancing RL performance on single tasks can be found in Wilson et al. (2007); Lazaric & Ghavamzadeh (2010), which consider, as we do, that there is an underlying MDP from which the multiple tasks can be thought to derive. They, however, use a Bayesian approach and propose a different algorithmic method than ours. Our results extend performance bounds by Lazaric et al. (2012) on single-task RL. As noted by Yu (2018), jumpstarting, or distilling experiences and representations of relevant policies, is another means of increasing sample efficiency in solving a new but related problem. Rusu et al. (2016) use this idea in so-called progressive neural networks, and Parisotto et al. (2015) leverage multiple experts to guide the derivation of a general policy. With a similar objective, Teh et al. (2017) define a policy centroid, that is, a shared distilled policy, that captures the commonalities across the behaviors in the tasks. In all of these distillation-type methods, the tasks considered are simple or complex games. Teh et al. (2017) note that their policy centroid method, distral, is likely to be affected by task interference, in that differences across tasks may degrade the performance of the resulting policy on any of the constituent tasks. This topic was studied by Hessel et al. (2018); Bram et al. (2019). Hessel et al. (2018) proposed a solution by extending the so-called PopArt normalization (van Hasselt et al., 2016) to re-scale the updates of each task so that the different characteristics of the task-specific rewards do not skew the learning process. Bram et al. (2019) use a different approach that learns attention weights for the sub-networks of each task and discards those that are not relevant or helpful. Vuong et al. (2019); D'Eramo et al. (2020) are, like our work, concerned with the sharing of experiences to facilitate a more sample-efficient learning process. Vuong et al. (2019) suggest identifying the shared portions of tasks to allow sharing of samples in those portions. The work of D'Eramo et al. (2020) is in some ways quite similar to ours: the authors' goal is the same, and they derive a bound, as we do, on the performance in this setting. However, their setting is different in that their tasks have both shared and task-specific components, and their bound becomes tighter only as the number of tasks increases. In our setting, we do not require a task-specific component, and we are able to show how the distance between the MDPs of each task, in addition to the number of tasks, affects the strength of the bound.
Recently, permutation invariance has been exploited in deep multi-agent reinforcement learning (Liu et al., 2019), where the invariance properties arise naturally in a homogeneous multi-agent setting. Their work employs permutation invariance in learning the critic, whereas in our case the entire learned policy is permutation invariant.

3 PRELIMINARIES

We begin by defining notation. For a measurable space with domain $\mathcal{X}$, let $S(\mathcal{X})$ denote the set of probability measures over $\mathcal{X}$, and $B(\mathcal{X}; L)$ the space of measurable functions with domain $\mathcal{X}$ bounded by $0 < L < \infty$. For a measure $\rho \in S(\mathcal{X})$, a measurable function $f : \mathcal{X} \to \mathbb{R}$, and a set of $n$ points $X_1, \dots, X_n \in \mathcal{X}$, the $\ell_2(\rho)$-norm $\|f\|_\rho$ and the empirical norm $\|f\|_n$ are defined by
$$\|f\|_\rho^2 = \int f(x)^2\, \rho(dx) \quad \text{and} \quad \|f\|_n^2 = \frac{1}{n}\sum_{t=1}^{n} f(X_t)^2.$$
Let $\|f\|_\infty = \sup_{x \in \mathcal{X}} |f(x)|$ be the supremum norm of $f$.

Consider a set of MDPs indexed by $t$. Each MDP is denoted by a tuple $\mathcal{M}_t = \langle \mathcal{X}, \mathcal{A}, R_t, P_t, \gamma \rangle$, where $\mathcal{X}$, a bounded closed subset of the $s$-dimensional Euclidean space, is a common state space; $\mathcal{A}$ is a common action space; $R_t : \mathcal{X} \times \mathcal{A} \to \mathbb{R}$ is a task-specific reward function uniformly bounded by $R_{\max}$; $P_t$ is a task-specific transition kernel such that $P_t(\cdot \mid x, a)$ is a distribution over $\mathcal{X}$ for all $x \in \mathcal{X}$ and $a \in \mathcal{A}$; and $\gamma \in (0, 1)$ is a common discount factor. Deterministic policies are denoted by $\pi : \mathcal{X} \to \mathcal{A}$. For a given policy $\pi$, the MDP $\mathcal{M}_t$ is reduced to a Markov chain $\mathcal{M}_t^\pi = \langle \mathcal{X}, R_t^\pi, P_t^\pi, \gamma \rangle$ with reward function $R_t^\pi(x) = R_t(x, \pi(x))$, transition kernel $P_t^\pi(\cdot \mid x) = P_t(\cdot \mid x, \pi(x))$, and stationary distribution $\rho_t^\pi$. The value function $V_t^\pi$ for MDP $t$ is defined as the unique fixed point of the Bellman operator $\mathcal{T}_t^\pi : B(\mathcal{X}; V_{\max} = R_{\max}/(1-\gamma)) \to B(\mathcal{X}; V_{\max})$, given by
$$(\mathcal{T}_t^\pi V)(x) = R_t^\pi(x) + \gamma \int_{\mathcal{X}} P_t^\pi(dy \mid x)\, V(y).$$
Let $\pi_t^*$ denote the optimal policy for $\mathcal{M}_t$. The optimal value function $V_t^{\pi_t^*}$ for $\mathcal{M}_t$ is defined as the unique fixed point of its optimal Bellman operator $\mathcal{T}_t^{\pi_t^*}$, defined by
$$(\mathcal{T}_t^{\pi_t^*} V)(x) = \max_{a \in \mathcal{A}} \left[ R_t(x, a) + \gamma \int_{\mathcal{X}} P_t(dy \mid x, a)\, V(y) \right].$$
To approximate the value function $V$, we use a linear approximation architecture with parameters $\alpha \in \mathbb{R}^d$ and basis functions $\varphi_i \in B(\mathcal{X}; L)$ for $i = 1, \dots, d$. Let $\varphi(\cdot) = (\varphi_1(\cdot), \dots, \varphi_d(\cdot))^T \in \mathbb{R}^d$ be the feature vector and $\mathcal{F}$ the linear function space spanned by the basis functions $\varphi_i$. Thus, $\mathcal{F} = \{ f_\alpha \mid \alpha \in \mathbb{R}^d \text{ and } f_\alpha(\cdot) = \varphi(\cdot)^T \alpha \}$.

Consider a learning task to dynamically allocate a common resource across entities $U_t \subseteq U$. Each $t$ corresponds to a task, but for now take $t$ to be an arbitrary fixed index. At each time step $n$, the decision maker observes states $x_n = (x_{i,n})_{i \in U_t}$ of the entities, where $x_{i,n}$ is the state of entity $i$, and takes action $a_n = (a_{i,n})_{i \in U_t}$, where $a_{i,n}$ is the share of the resource allocated to entity $i$. The total resource capacity is normalized to 1 for convenience. Therefore, allocations satisfy $0 \le a_{i,n} \le 1$ and $\sum_{i \in U_t} a_{i,n} = 1$. We consider a policy $\pi_\theta(x_n)$ parameterized by $\theta$. Assume that we have access to the reward function $R_t$ as well as a simulator that generates a trajectory of length $N$ given any arbitrary policy $\pi_\theta$. The objective of the learning task is to maximize
$$J_t(\theta) = \mathbb{E}\left[ \sum_{n=1}^{N} \gamma^{n-1} R_t(x_n, a_n) \,\middle|\, a_n = \pi_\theta(x_n),\; x_{n+1} \sim P_t(\cdot \mid x_n, a_n),\; x_1 \sim P_t(\cdot) \right].$$
In many settings, $N$ is small and simulators are inaccurate; therefore, trajectories generated by the simulator are poor representations of the actual transition dynamics. This occurs in batch RL, where trajectories are rollouts from a dataset. In these cases, policies overfit and generalize poorly.
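To make the preliminaries concrete, the following is a minimal numerical sketch of the linear value architecture $f_\alpha(x) = \varphi(x)^T \alpha$ and a sampled Bellman backup; the feature map, reward, and toy transition kernel are illustrative assumptions, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, d = 0.9, 4

def phi(x):
    # Hypothetical feature map phi(x) in R^d (here: simple basis functions of a scalar state).
    return np.array([1.0, x, x**2, np.sin(x)])

def f_alpha(x, alpha):
    # Linear value function f_alpha(x) = phi(x)^T alpha.
    return phi(x) @ alpha

def sampled_bellman_backup(x, alpha, reward_fn, transition_sampler, m=100):
    # Monte Carlo estimate of (T^pi V)(x) = R^pi(x) + gamma * E[V(y) | x],
    # using m draws y ~ P^pi(. | x) from a simulator.
    ys = [transition_sampler(x) for _ in range(m)]
    return reward_fn(x) + gamma * np.mean([f_alpha(y, alpha) for y in ys])

# Toy Markov chain on the real line: y = 0.5 x + noise, reward -x^2.
alpha = rng.normal(size=d)
backup = sampled_bellman_backup(
    x=1.0,
    alpha=alpha,
    reward_fn=lambda x: -x**2,
    transition_sampler=lambda x: 0.5 * x + 0.1 * rng.normal(),
)
print(f"V(1.0) = {f_alpha(1.0, alpha):.3f}, (T V)(1.0) ~ {backup:.3f}")
```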
4 THEORETICAL RESULTS

We first introduce a property, which we term permutation invariance, for the policy network that can be shown to significantly reduce overfitting.

Definition 1 (Permutation Invariant Policy Network) A policy network $\pi_\theta$ is permutation invariant if it satisfies $\pi_\theta(\sigma(x)) = \sigma(\pi_\theta(x))$ for any permutation $\sigma$.

Permutation invariant policy networks have significant advantages over completely integrated policy networks. While the latter are likely to fit correlations between different entities, this is not possible with permutation invariant policy networks, as they are agnostic to the identities of the entities. Therefore, permutation invariant policy networks are better able to leverage experience across time and entities, leading to greater efficiency in data usage. Moreover, observe that if the transition kernels can be factored into independent and identical transition kernels across entities, then the optimal policy is indeed permutation invariant.

Our main theoretical contributions start with an extension of results from Lazaric et al. (2012), where a finite-sample error bound was derived for the least squares policy iteration (LSPI) algorithm on a single task. Lazaric et al. (2012) provided a high-probability bound on the performance difference between the final learned policy and the optimal policy, of the form $c_1 + c_2/\sqrt{N}$, where $c_1$ and $c_2$ are constants that depend on the task and the chosen feature space, and $N$ is the number of training examples. We extend their result by showing that, as long as tasks are $\epsilon$-close to each other (with respect to a similarity measure we define later), the error bound of solving each task using our multi-task approach has the form $c_1 + c_2/\sqrt{NT} + c_3 \epsilon$, where $T$ is the number of tasks and $c_3$ is a task-dependent constant. Specifically, our theorem provides a general result and performance guarantee with respect to using data from a different but similar MDP. Definition 1 provides a basis for generating many such MDPs. Finally, the benefit of doing so is quantified in Corollary 2. Thus, provided $\epsilon$ is small, a given task can benefit from a much larger set of $NT$ training examples.

In addition to the assumptions of Lazaric et al. (2012), we extend the definition of second-order discounted-average concentrability, proposed in Antos et al. (2008), and define the notion of first-order discounted-average concentrability. The latter will be used in our main result, Theorem 1.

Assumption 1 There exists a distribution $\mu \in S(\mathcal{X})$ such that for any policy $\pi$ that is greedy with respect to a function in the truncated space $\tilde{\mathcal{F}}$, $\mu \le C \rho_t^\pi$ for all $t$, where $C < \infty$ is a constant.

Given the target distribution $\sigma \in S(\mathcal{X})$ and an arbitrary sequence of policies $\{\pi_m\}_{m \ge 1}$, let
$$c_{\sigma,\mu}(m) = \sup_{\pi_1, \dots, \pi_m} \left\| \frac{d(\mu P^{\pi_1} \cdots P^{\pi_m})}{d\sigma} \right\|.$$
We assume that $C'_{\sigma,\mu}, C''_{\sigma,\mu} < \infty$, and define the first- and second-order discounted-average concentrability of future-state distributions as follows:
$$C'_{\sigma,\mu} = (1-\gamma) \sum_{m \ge 0} \gamma^m c_{\sigma,\mu}(m), \qquad C''_{\sigma,\mu} = (1-\gamma)^2 \sum_{m \ge 1} m \gamma^{m-1} c_{\sigma,\mu}(m).$$

Theorem 1 (Multi-Task Finite-Sample Error Bound) Let $\mathcal{M} = \langle \mathcal{X}, \mathcal{A}, R, P, \gamma \rangle$ be an MDP with reward function $R$ and transition kernel $P$. Assume $\mathcal{A}$ is finite. Denote its Bellman operator by $(\mathcal{T}^\pi V)(x) = R^\pi(x) + \gamma \int_{\mathcal{X}} P^\pi(dy \mid x)\, V(y)$. Given a policy $\pi$, define the Bellman difference operator between $\mathcal{M}_t$ and $\mathcal{M}$ to be $D_t^\pi V = \mathcal{T}_t^\pi V - \mathcal{T}^\pi V$. Apply the LSPI algorithm to $\mathcal{M}$ by generating, at each iteration $k$, a path from $\mathcal{M}$ of size $N$, where $N$ satisfies Lemma 4 in Lazaric et al. (2012).
Let $V_{-1} \in \tilde{\mathcal{F}}$ be an arbitrary initial value function, $V_0, \dots, V_{K-1}$ ($\tilde{V}_0, \dots, \tilde{V}_{K-1}$) be the sequence of value functions (truncated value functions) generated by LSPI after $K$ iterations, and $\pi_k$ the greedy policy w.r.t. the truncated value function $\tilde{V}_{k-1}$. Suppose also that $\|D_t^\pi V^\pi\|_\mu \le \epsilon$ for all $\pi$, and $\|D_t^{\pi_k} \tilde{V}_{k-1}\|_\mu \le \epsilon$ for all $k$. Then, for constants $c_1, c_2, c_3, c_4$ that depend on $\mathcal{M}$, with probability $1 - \delta$ (with respect to the random samples):
$$\|V_t^{\pi_t^*} - V_t^{\pi_K}\|_\sigma \le c_1 \frac{1}{\sqrt{N}} + c_2 \sqrt{C'_{\sigma,\mu}}\, \epsilon + c_3 \sqrt{C''_{\sigma,\mu}}\, \epsilon + c_4.$$
The proof is deferred to the Appendix. Theorem 1 formalizes the trade-off between drawing fewer samples from the exact MDP $\mathcal{M}_t$ versus drawing more samples from a different MDP $\mathcal{M}$. Importantly, it shows how to benefit from solving a different MDP $\mathcal{M}$ when: (a) additional samples can be obtained from $\mathcal{M}$, and (b) $\mathcal{M}$ is not too different from $\mathcal{M}_t$. In particular, the distance measure is simply the distance between the Bellman operators of the MDPs, which can be bounded if the differences in both the transition and reward functions are bounded.

In recent work, a performance bound for multi-task learning was given in Theorems 2 and 3 of D'Eramo et al. (2020). However, the authors used a different setup containing both shared and task-specific representations, and their focus was on showing that the cost of learning the shared representation decreases with more tasks. They did not show how the similarity or difference across tasks affects performance. In contrast, our setup does not contain task-specific representations, and our focus is on how differences across MDPs impact the benefit of having more tasks (and consequently more samples). We show this in Corollary 1 and Corollary 2.

Remark 1 While our theoretical results are based on LSTD and LSPI and assume a finite action space, our approach is applicable to a wide range of reinforcement learning algorithms, including policy gradient methods, and to MDPs with continuous action spaces. Deriving similar results for a larger family of models and algorithms remains interesting, albeit challenging, future work.

Permutation invariant policy networks allow using data from the global set of entities $U$. Since the policy network is agnostic to the identities of the entities, one can learn a single policy for all tasks, where each task $t \in [T]$ is a resource allocation problem over a subset of entities $U_t$. For notational simplicity, assume that all tasks have the same number of entities and all trajectories are of equal length $N$. Our approach can, however, be readily extended to tasks with different numbers of entities and different trajectory lengths. Permutation invariance allows a large set of MDPs to leverage the result of Theorem 1. In the next section we provide an algorithm, motivated by the following corollaries, and a prioritized sampling strategy for this setting that drives significantly greater sample efficiency for the original task. The sampling strategy also helps to stabilize the learning process, reducing the risk of deleterious effects of the multi-task setting, as discussed by Teh et al. (2017) and addressed in works such as Hessel et al. (2018); Bram et al. (2019).

Corollary 1 Let $[T]$ be a set of similar tasks such that their distance from the average MDP, given by
$$(\mathcal{T}^\pi V)(x) = \frac{1}{T} \sum_{t=1}^{T} R_t^\pi(x) + \gamma \int_{\mathcal{X}} \frac{1}{T} \sum_{t=1}^{T} P_t^\pi(dy \mid x)\, V(y),$$
is bounded by $\epsilon$ as defined in Theorem 1. Let $N$ be the number of samples available in each task, and let $\pi_K$ be the policy obtained at the $K$-th iteration when applying LSPI to the average MDP.
Then, the suboptimality of the policy on each task is $O(1/\sqrt{NT}) + O(\epsilon) + c$ for some constant $c$ (where suboptimality is defined according to Theorem 1).

Recall that each task is formed by selecting a subset $U_t$ of entities from the global set $U$. We thus have the following gain in samples that can be attributed to the permutation invariance of the policy network.

Corollary 2 (Gain in Sample Efficiency from Permutation Invariance) Let $M = |U|$ and $m = |U_t|$. Given fixed $M$ and $m$, there are $T = \binom{M}{m} \ge \left(\frac{M}{m}\right)^m$ different tasks. Then, by Corollary 1, assuming all pairs of tasks are weakly correlated, the potential gain in sample efficiency is exponential in $m$.

Disregarding correlation between samples from tasks with overlapping entities, Corollary 1 and Corollary 2 together suggest that the (up to) exponential increase in the number of available tasks can significantly improve sample efficiency compared to learning each task separately.

5 EXPLOITING PERMUTATION INVARIANCE THROUGH MULTI-TASK REINFORCEMENT LEARNING

Our approach to exploiting permutation invariance is via multi-task reinforcement learning, where each "task" corresponds to a particular choice of subset $U_t \subset U$. Furthermore, for each task, we enforce permutation invariance among the entities $i$ by forcing the neural network to apply the same sequence of operations to the state input $x_i$ of each entity, through parameter sharing. The proposed method, shown in Algorithm 1 below, learns a single policy by sampling subsequences of trajectories from the different MDPs. At each step, we sample a task $t$ according to a distribution defined by a task selection policy $p$. Then, a minibatch sample $B_t$ is drawn from the replay buffer for task $t$, and gradient descent is performed using the sampled transitions $B_t$ (alternatively, samples can be generated using policy rollouts for the specific task). Separate replay buffers are maintained for each task and are updated only when the corresponding task is being used.

In contrast with other active sampling approaches in multi-task learning, our approach maintains an estimate of the difficulty of each task $t$ as a score $s_t$. After each training step, we update the score for only the sampled task, based on the minibatch $B_t$, avoiding evaluation over all the tasks. The scoring functions depend on the sampled minibatch; to reduce fluctuations in the scores of each task, exponential smoothing is applied: $s_t \leftarrow \gamma s_t + (1-\gamma) \cdot \mathrm{scorer}(B_t)$. We propose a stochastic prioritization method that interpolates between pure greedy prioritization and uniform random sampling. Our approach is similar to prioritized experience replay (PER) by Schaul et al. (2016), but while classical PER prioritizes samples, we prioritize tasks. The probability of sampling task $t$ is $p_t = s_t^\alpha / \sum_{t'} s_{t'}^\alpha$, where the exponent $\alpha$ determines the degree of prioritization, with $\alpha = 0$ corresponding to the uniform case. We correct for bias with importance-sampling (IS) weights $w_t = (1/(T p_t))^\beta$, which fully compensate for the non-uniform probabilities when $\beta = 1$; weights are normalized by $1/\max_t w_t$. Tasks on which the reward variance is high can be interpreted as having more challenging samples; hence reward variance can be used as a scoring function.
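As an illustration, here is a minimal sketch of the task-prioritization mechanics just described (score smoothing, sampling probabilities $p_t = s_t^\alpha / \sum_{t'} s_{t'}^\alpha$, and normalized IS weights); the class and variable names, and the toy scorer in the usage example, are our own, not the paper's implementation. Algorithm 1 below then gives the full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class TaskPrioritizer:
    """Prioritized task sampling: p_t = s_t^alpha / sum s^alpha, IS weight w_t = (1/(T p_t))^beta."""

    def __init__(self, num_tasks, alpha=0.5, beta=1.0, smoothing=0.2):
        self.alpha, self.beta, self.smoothing = alpha, beta, smoothing
        self.scores = np.ones(num_tasks)  # start from uniform priorities

    def probabilities(self):
        s = self.scores ** self.alpha
        return s / s.sum()

    def sample_task(self):
        return rng.choice(len(self.scores), p=self.probabilities())

    def is_weights(self):
        # Importance-sampling correction, normalized by the maximum weight.
        p = self.probabilities()
        w = (1.0 / (len(self.scores) * p)) ** self.beta
        return w / w.max()

    def update(self, task, minibatch_score):
        # Exponential smoothing of the score for the sampled task only.
        g = self.smoothing
        self.scores[task] = g * self.scores[task] + (1 - g) * minibatch_score

# Toy usage: task 2 repeatedly yields high-variance minibatches and gets prioritized.
prio = TaskPrioritizer(num_tasks=5)
for _ in range(200):
    t = prio.sample_task()
    minibatch_rewards = rng.normal(scale=3.0 if t == 2 else 1.0, size=32)
    prio.update(t, minibatch_rewards.std())  # reward variance as the scoring function
print(np.round(prio.probabilities(), 3))
```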
Algorithm 1 Prioritized Multi-Task Reinforcement Learning for Increasing Sample Efficiency

  Initialize policy network $\pi_\theta$
  Initialize replay buffers $R_1, \dots, R_T$
  Initialize time steps $n_1 \leftarrow 1, \dots, n_T \leftarrow 1$
  loop
    Select a task $t \sim p$ to train on
    Sample a random minibatch $B_t$ of transitions $(x_n, a_n, r_n, x_{n+1})$ from $R_t$
    Update policy $\theta$ using $B_t$ and the chosen RL approach (correcting for bias using IS weights $w$)
    Update score $s_t \leftarrow \gamma s_t + (1-\gamma) \cdot \mathrm{scorer}(B_t)$
    Update all selection probabilities $p$ and IS weights $w$
    for $n = n_t, \dots, \min\{n_t + n_e, N\}$ do
      For task $t$, select action $a_n$ according to the current policy and exploration noise
      Execute action $a_n$, and observe reward $r_n$ and new state $x_{n+1}$
      Store transition $(x_n, a_n, r_n, x_{n+1})$ in $R_t$
    end for
    If $n < N$, update $n_t \leftarrow n + 1$; otherwise, update $n_t \leftarrow 1$
  end loop

6 EXPERIMENTS

6.1 SYNTHETIC DATA

With the aim of validating the theory presented in Section 4, we define a synthetic example to explore the efficiency gain afforded by permutation invariance. To do so, we control the deviation between any two tasks, thereby empirically validating the main theoretical results. Consider a resource allocation problem where the observed state $x_i$ for each entity $i \in \{1, \dots, m\}$ is a single scalar $x_i \in [0, 1]$. The action space is the probability simplex, where each action $a = (a_1, \dots, a_m)$ indicates the fraction of the resource allocated to each entity. The reward function is
$$R(x, a) := \sum_i x_i a_i - \beta_i a_i \log a_i,$$
where $\beta_i$ is a weight parameter for each entity. Note that when $\beta_i = \beta$ for all $i$, the reward function becomes $R(x, a) = \left(\sum_i x_i a_i\right) + \beta H(a)$, where $H$ is the Shannon entropy. This implies that maximizing the reward involves a trade-off between focusing resources on entities with high $x_i$ and distributing them uniformly across all $i$. Note that the reward function is permutation invariant, but that when we allow $\beta_i$ to vary over the entities, the function deviates from being perfectly permutation invariant. We use the range $\max_i \beta_i - \min_i \beta_i$ as a stand-in for $\epsilon$. Let $m = 10$. For each $\epsilon$, we run two experiments. The first examines the performance of policies trained by LSPI using $N$ real examples drawn i.i.d. from the state-action space, for $N = 20, \dots, 2000$. A small Gaussian noise is added to each reward to make learning harder. The second experiment uses only 20 real examples, but augments the training set (up to $N$) through random permutation of the real examples. The first two panels in Fig. 1 show the results for $\epsilon = 0.8$ and $\epsilon = 0$, respectively. Performance improves with $N$, as predicted by the $1/\sqrt{N}$ term in our error bound. Note that in the experiment using only 20 real examples, a performance gain is achieved by using permuted examples; this corresponds precisely to the multi-task gain predicted by the $1/\sqrt{NT}$ term. When $\epsilon$ is large, there is a significant gap between the results of the two experiments, as predicted by the $\epsilon$-term in the error bound. The last plot in Fig. 1 shows this gap at $N = 2000$ as $\epsilon$ varies from 0 to 0.8.
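The synthetic reward above is easy to reproduce. The following sketch evaluates $R(x, a)$ and numerically checks that it is permutation invariant exactly when all $\beta_i$ are equal; the entity count and parameter values are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(x, a, beta):
    # R(x, a) = sum_i x_i a_i - beta_i a_i log a_i  (synthetic allocation reward)
    return float(np.sum(x * a - beta * a * np.log(a)))

m = 10
x = rng.uniform(size=m)
a = rng.dirichlet(np.ones(m))  # a point on the probability simplex
perm = rng.permutation(m)

# Equal beta: permuting states and allocations together leaves R unchanged.
beta_eq = np.full(m, 0.3)
print(np.isclose(reward(x, a, beta_eq), reward(x[perm], a[perm], beta_eq)))  # True

# Varying beta (epsilon = max beta - min beta > 0): invariance is broken.
beta_var = np.linspace(0.1, 0.9, m)
print(np.isclose(reward(x, a, beta_var), reward(x[perm], a[perm], beta_var)))  # generically False
```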
6.2 REAL-WORLD DATA

We consider two real-world resource allocation settings: financial portfolio optimization and meta federated learning. Financial portfolio optimization is discussed below, while meta federated learning is in the Appendix. Given historical prices for a universe of financial assets $U$, the goal of task $t$ is to allocate investments across a subset of assets $U_t \subseteq U$. The multiple tasks $t$ thus correspond to multiple portfolios of instruments. Permutation invariance is of use in this setting since, from a given universe of instruments (e.g. the 500 instruments in the S&P 500), an exponential number of tasks can be generated, each with its own portfolio. Consider now one such task.

At the beginning of time period $n$, the action $a_{i,n}$ represents the fraction of wealth the decision maker allocates to asset $i$. The allocations evolve over the time period due to changes in asset prices. Let $w_{i,n}$ denote the allocation of asset $i$ at the end of time period $n$. We model the state of an asset using its current allocation and a window of its $H$ most recent prices. In particular, let $v_{i,n}$ denote the close price of asset $i$ over time period $n$, and let $y_{i,n} = v_{i,n}/v_{i,n-1}$ denote the ratio of close prices between adjacent time periods.¹ Then, the allocation in asset $i$ at the end of time period $n$ is given by
$$w_{i,n} = \frac{a_{i,n}\, y_{i,n}}{\sum_{i \in U_t} a_{i,n}\, y_{i,n}},$$
and the state of asset $i$ at the beginning of time period $n$ is given by $x_{i,n} = (w_{i,n-1},\; v_{i,n-H}/v_{i,n-1}, \dots, v_{i,n-2}/v_{i,n-1})$.

¹Daily high and low prices are also used in the state but are omitted here for brevity.

The change in portfolio value over period $n$ depends on the asset prices and the transaction costs incurred in rebalancing the portfolio from $(w_{i,n-1})_{i \in U_t}$ to $(a_{i,n})_{i \in U_t}$. The reward over period $n$ is defined as the log rate of return:
$$R_t(x_n, a_n) = \ln\left[ \beta\big((w_{i,n-1})_{i \in U_t}, (a_{i,n})_{i \in U_t}\big) \sum_{i \in U_t} a_{i,n}\, y_{i,n} \right],$$
where $\beta$ can be evaluated using an iterative procedure (see Jiang et al. (2017)). Defining the reward this way is appealing because maximizing the average total reward over consecutive periods is equivalent to maximizing the total rate of return over the periods. To leverage this, we approximate $\beta\big((w_{i,n-1})_{i \in U_t}, (a_{i,n})_{i \in U_t}\big) \approx 1 - c \sum_{i \in U_t} |w_{i,n-1} - a_{i,n}|$, where $c$ is a commission rate, to obtain a closed-form expression for $R_t(x_n, a_n)$ (see Jiang et al. (2017)). We optimize using direct policy gradient on minibatches of consecutive samples:
$$\theta \leftarrow \theta + \eta \nabla_\theta \left[ \frac{1}{B} \sum_{n = n_b}^{n_b + B - 1} w_t\, R_t(x_n, \pi_\theta(x_n)) \right],$$
where $n_b$ is the first time index in the minibatch, $B$ the size of a minibatch, and $w_t$ the IS weight for task $t$. As in Jiang et al. (2017), we sample $n_b$ from a geometric distribution that prioritizes recent samples, and we implement replay buffers for each task.

A benchmark trading strategy is the equal constantly-rebalanced portfolio (CRP), which rebalances to maintain equal weights. As noted earlier, ideally the scoring function should depend only on the minibatch $B_t$. A deviation from Equal CRP can be viewed as learning to exploit price movements, so we use this deviation as the scoring signal; Prioritised MTL thus prioritises tasks on which the policy deviates from Equal CRP. Note that the policy deviates from CRP only when it is profitable to do so. Concretely, we score tasks in Prioritised MTL by the maximum absolute deviation of the minibatch allocations from Equal CRP:
$$\mathrm{scorer}(B_t) = \max_{n \in \{n_b, \dots, n_b + B - 1\}} \left\| \pi_\theta(x_n) - \frac{1}{|U_t|} \right\|_\infty.$$
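A small numerical sketch of these portfolio dynamics follows, assuming the closed-form transaction-cost approximation above; the prices, commission rate, and placeholder policy are toy values, not the experimental configuration.

```python
import numpy as np

def step_portfolio(a, w_prev, y, c=0.0025):
    # a: target allocation at the start of the period (sums to 1)
    # w_prev: allocation at the end of the previous period
    # y: price relatives v_{i,n} / v_{i,n-1}
    beta = 1.0 - c * np.sum(np.abs(w_prev - a))  # transaction-cost factor (approximation)
    growth = float(np.dot(a, y))                 # gross return of the rebalanced portfolio
    w_new = a * y / growth                       # end-of-period allocation after price drift
    reward = np.log(beta * growth)               # log rate of return
    return w_new, reward

# Toy episode over 3 periods with 4 assets.
rng = np.random.default_rng(0)
w = np.full(4, 0.25)
total_log_return = 0.0
for _ in range(3):
    a = rng.dirichlet(np.ones(4))                # policy output (placeholder)
    y = np.exp(rng.normal(0.0, 0.01, size=4))    # simulated price relatives
    w, r = step_portfolio(a, w, y)
    total_log_return += r
print(f"total log return: {total_log_return:.4f}")
```

Because the per-period rewards are log returns, the episode return is simply their sum, which is what makes the direct policy-gradient objective above natural.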
Figure 2 (left) shows a scatter plot of the maximum score seen every 50 steps against the change in episode rewards in a single-task learning experiment, and Figure 2 (right) a scatter plot of the minibatch score against the maximum gradient norm for the minibatch. Higher scores imply higher variance in the episode rewards and hence more challenging and useful samples. The correlation between scores and gradient norms shows that our approach performs gradient-based prioritisation (see Katharopoulos & Fleuret (2018); Loshchilov & Hutter (2015); Alain et al. (2015)), but in a computationally efficient manner. The details of the dataset and parameter settings can be found in the Appendix. Figure 3 shows the performance of the learned policies tested on 10 tasks drawn from out-of-sample instruments. The policy network with weights initialized close to zero behaves like an Equal CRP policy.

As noted, any profitable deviation from Equal CRP implies learning useful trading strategies. The plots show that the MTL policies perform well on instruments never seen during training, offering a remarkable benefit for using RL in the design of trading policies. Fig. 4 shows the performance of prioritised multi-task learning (MTL) versus single-task learning (STL), i.e. learning a policy for each task independently on the instruments in the task. We also show results for MTL without prioritised sampling, i.e. with $\alpha = 0$. We consider 5 tasks and 30 tasks. The plots show that prioritised MTL performs significantly better than STL in both convergence time and final achieved performance. The performance with 30 tasks is significantly better than the performance with 5 tasks, showing that our approach leverages the samples of the additional tasks. Fig. 5 illustrates the typical behavior of a multi-task learning (MTL) and a single-task learning (STL) policy on the test period, for tasks where the multi-task policy performed significantly better. The single-task policy kept constant equal allocations, while the multi-task policy was able to learn more complex allocations. In financial data, strongly trending prices do not occur often and are inherently noisy. Multi-task learning with permutation invariance helps with both challenges, allowing the algorithm to learn more complex patterns in a given training period.

7 CONCLUSIONS

We introduce an approach for increasing the sample efficiency of reinforcement learning in a setting with widespread applicability within the class of sequential resource allocation problems. The key property is permutation invariance: resources are allocated to entities according to a score, and the order of the entities can change without modifying the optimal allocation. Under this property, we show that a bound exists on the policy performance. This bound motivates a highly effective algorithm for improving the policy through a multi-task approach. Using prioritized task-sampling, the method not only improves the reward of the final policy but also renders it more robust. We illustrate the property and the method on two important problems: sequential financial portfolio optimization and meta federated learning, where the latter is provided in the Appendix.

A APPENDIX

Theorem 1 Let $\mathcal{M} = \langle \mathcal{X}, \mathcal{A}, R, P, \gamma \rangle$ be an MDP with reward function $R$ and transition kernel $P$. Denote its Bellman operator by $(\mathcal{T}^\pi V)(x) = R^\pi(x) + \gamma \int_{\mathcal{X}} P^\pi(dy \mid x)\, V(y)$. Given a policy $\pi$, define the Bellman difference operator between $\mathcal{M}_t$ and $\mathcal{M}$ to be $D_t^\pi V = \mathcal{T}_t^\pi V - \mathcal{T}^\pi V$. Apply the LSPI algorithm to $\mathcal{M}$ by generating, at each iteration $k$, a path from $\mathcal{M}$ of size $N$, where $N$ satisfies Lemma 4 in Antos et al. (2008). Let $V_{-1} \in \tilde{\mathcal{F}}$ be an arbitrary initial value function, $V_0, \dots, V_{K-1}$ ($\tilde{V}_0, \dots, \tilde{V}_{K-1}$) be the sequence of value functions (truncated value functions) generated by LSPI after $K$ iterations, and $\pi_k$ the greedy policy w.r.t. the truncated value function $\tilde{V}_{k-1}$. Suppose also that $\|D_t^\pi V^\pi\|_\mu \le \epsilon$ for all $\pi$, and $\|D_t^{\pi_k} \tilde{V}_{k-1}\|_\mu \le \epsilon$ for all $k$. Then, with probability $1 - \delta$ (with respect to the random samples), we have
$$\|V_t^{\pi_t^*} - V_t^{\pi_K}\|_\sigma \le \frac{2}{(1-\gamma)^2} \Bigg\{ (1+\gamma) \sqrt{C C''_{\sigma,\mu}} \Bigg[ \frac{2}{\sqrt{1-\gamma^2}} \Big( 2\sqrt{2}\, E_0(\mathcal{F}) + E_2 \Big) + \frac{2}{1-\gamma} \gamma V_{\max} L \sqrt{\frac{d}{\nu_\mu}} \left( \sqrt{\frac{8 \log(8dK/\delta)}{N}} + \frac{1}{N} \right) + E_1 \Bigg] + \gamma^{\frac{K-1}{2}} R_{\max} + 3 \sqrt{2 C'_{\sigma,\mu}}\, \epsilon \Bigg\}.$$

Proof: For convenience, we simply drop the task subscript whenever we refer to variables associated with $\mathcal{M}$.
Define $d_t^\pi = D_t^\pi V^\pi$, $\tilde{d}_{t,k} = D_t^{\pi_k} \tilde{V}_{k-1}$, $e_k = \tilde{V}_k - \mathcal{T}^{\pi_k} \tilde{V}_k$, and
$$E_k = P^{\pi_{k+1}} (I - \gamma P^{\pi_{k+1}})^{-1} - P^{\pi^*} (I - \gamma P^{\pi_k})^{-1}, \qquad F_k = P^{\pi_{k+1}} (I - \gamma P^{\pi_{k+1}})^{-1} + P^{\pi^*} (I - \gamma P^{\pi_k})^{-1}.$$
From the proof of Lemma 12 in Antos et al. (2008), we get
$$V^{\pi^*} - V^{\pi_K} \le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} E_k e_k + (\gamma P^{\pi^*})^K (V^{\pi^*} - V^{\pi_0}).$$
By applying the above inequality together with the triangle inequality, and taking absolute values pointwise, we get
$$|V_t^{\pi_t^*} - V_t^{\pi_K}| \le |V_t^{\pi_t^*} - V^{\pi^*}| + |V^{\pi^*} - V^{\pi_K}| + |V^{\pi_K} - V_t^{\pi_K}| \le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} F_k |e_k| + \frac{2 R_{\max}}{1-\gamma}\, \gamma^K \mathbf{1} + |V_t^{\pi_t^*} - V^{\pi^*}| + |V^{\pi_K} - V_t^{\pi_K}|,$$
where we used the fact that $|V^{\pi^*} - V^{\pi_0}| \le (2 R_{\max}/(1-\gamma)) \mathbf{1}$. Next, we derive upper bounds for $|V_t^{\pi_t^*} - V^{\pi^*}|$ and $|V^{\pi_K} - V_t^{\pi_K}|$.

(a) Observe that
$$V_t^{\pi_t^*} - V^{\pi^*} = \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi^*} V^{\pi^*} \le \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi_t^*} V^{\pi^*} = \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi_t^*} V_t^{\pi_t^*} + \mathcal{T}^{\pi_t^*} \big( V_t^{\pi_t^*} - V^{\pi^*} \big) \le (I - \gamma P_t^{\pi_t^*})^{-1} d_t^{\pi_t^*}.$$
The first inequality follows from the fact that $\pi^*$ is optimal with respect to $V^{\pi^*}$. The second inequality follows from expanding the inverse term as a Neumann series of positive operators. By closely following the same steps, we also get
$$V_t^{\pi_t^*} - V^{\pi^*} = \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi^*} V^{\pi^*} \ge \mathcal{T}_t^{\pi^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi^*} V^{\pi^*} = \mathcal{T}_t^{\pi^*} V^{\pi^*} - \mathcal{T}^{\pi^*} V^{\pi^*} + \mathcal{T}_t^{\pi^*} \big( V_t^{\pi_t^*} - V^{\pi^*} \big) \ge (I - \gamma P_t^{\pi^*})^{-1} d_t^{\pi^*}.$$
By splitting into positive and negative components and applying the above bounds, we get
$$|V_t^{\pi_t^*} - V^{\pi^*}| \le |(V_t^{\pi_t^*} - V^{\pi^*})_+| + |(V_t^{\pi_t^*} - V^{\pi^*})_-| \le (I - \gamma P_t^{\pi_t^*})^{-1} |d_t^{\pi_t^*}| + (I - \gamma P_t^{\pi^*})^{-1} |d_t^{\pi^*}|.$$

(b) Observe that
$$V^{\pi_K} - V_t^{\pi_K} \le \mathcal{T}^{\pi_K} V^{\pi_K} + \mathcal{T}^{\pi_K} \tilde{V}_{K-1} - \mathcal{T}_t^{\pi_K} \tilde{V}_{K-1} - \mathcal{T}_t^{\pi_K} V_t^{\pi_K} = \mathcal{T}^{\pi_K} V^{\pi_K} + \mathcal{T}^{\pi_K} \tilde{V}_{K-1} - \mathcal{T}_t^{\pi_K} \tilde{V}_{K-1} - \mathcal{T}_t^{\pi_K} V^{\pi_K} + \mathcal{T}_t^{\pi_K} \big( V^{\pi_K} - V_t^{\pi_K} \big) \le (I - \gamma P_t^{\pi_K})^{-1} \big( {-d_t^{\pi_K}} - \tilde{d}_{t,K} \big).$$
The first inequality follows from the fact that $\pi_K$ is optimal with respect to $\tilde{V}_{K-1}$. The second inequality follows from expanding the inverse term as a Neumann series. By closely following the same steps, we also get
$$V^{\pi_K} - V_t^{\pi_K} \ge \mathcal{T}^{\pi_K} V^{\pi_K} - \mathcal{T}^{\pi_K} \tilde{V}_{K-1} + \mathcal{T}_t^{\pi_K} \tilde{V}_{K-1} - \mathcal{T}_t^{\pi_K} V_t^{\pi_K} = \mathcal{T}^{\pi_K} V^{\pi_K} - \mathcal{T}^{\pi_K} \tilde{V}_{K-1} + \mathcal{T}_t^{\pi_K} \tilde{V}_{K-1} - \mathcal{T}_t^{\pi_K} V^{\pi_K} + \mathcal{T}_t^{\pi_K} \big( V^{\pi_K} - V_t^{\pi_K} \big) \ge (I - \gamma P_t^{\pi_K})^{-1} \big( {-d_t^{\pi_K}} + \tilde{d}_{t,K} \big).$$
By splitting into positive and negative components and applying the above bounds, we get
$$|V^{\pi_K} - V_t^{\pi_K}| \le (I - \gamma P_t^{\pi_K})^{-1} |{-d_t^{\pi_K}} - \tilde{d}_{t,K}| + (I - \gamma P_t^{\pi_K})^{-1} |{-d_t^{\pi_K}} + \tilde{d}_{t,K}| \le 2 (I - \gamma P_t^{\pi_K})^{-1} \big( |d_t^{\pi_K}| + |\tilde{d}_{t,K}| \big).$$

By applying the upper bounds from (a) and (b), we get
$$|V_t^{\pi_t^*} - V_t^{\pi_K}| \le \frac{2(1-\gamma^{K+2})}{(1-\gamma)^2} \Bigg[ \sum_{k=0}^{K-1} \alpha_k A_k |e_k| + \alpha \frac{R_{\max}}{\gamma} + \frac{\beta}{6} B^{\pi_t^*} \cdot 6 |d_t^{\pi_t^*}| + \frac{\beta}{6} B^{\pi^*} \cdot 6 |d_t^{\pi^*}| + \frac{\beta}{3} B^{\pi_K} \cdot 6 |d_t^{\pi_K}| + \frac{\beta}{3} B^{\pi_K} \cdot 6 |\tilde{d}_{t,K}| \Bigg],$$
where we introduced the positive coefficients
$$\alpha_k = \frac{1-\gamma}{1-\gamma^{K+2}}\, \gamma^{K-k} \ \ (0 \le k < K), \qquad \alpha = \frac{1-\gamma}{1-\gamma^{K+2}}\, \gamma^{K+1}, \qquad \beta = \frac{1-\gamma}{2(1-\gamma^{K+2})},$$
and the operators
$$A_k = \frac{1-\gamma}{2} (P^{\pi^*})^{K-k-1} F_k \ \ (0 \le k < K), \qquad B^\pi = (1-\gamma)(I - \gamma P_t^\pi)^{-1}.$$
Let $\lambda_K = \left[ \frac{2(1-\gamma^{K+2})}{(1-\gamma)^2} \right]^p$. Note that the coefficients $\alpha_k$, $\alpha$, and $\beta$ sum to 1, and the operators are positive linear operators that satisfy $A_k \mathbf{1} = \mathbf{1}$ and $B^\pi \mathbf{1} = \mathbf{1}$.
Therefore, by taking the $p$-th power on both sides, applying Jensen's inequality twice, and then integrating both sides with respect to $\sigma(x)$, we get
$$\|V_t^{\pi_t^*} - V_t^{\pi_K}\|_{p,\sigma}^p = \int \sigma(dx)\, |V_t^{\pi_t^*} - V_t^{\pi_K}|^p \le \lambda_K\, \sigma\Bigg[ \sum_{k=0}^{K-1} \alpha_k A_k |e_k|^p + \alpha (R_{\max}/\gamma)^p + \frac{\beta}{6} B^{\pi_t^*} (6|d_t^{\pi_t^*}|)^p + \frac{\beta}{6} B^{\pi^*} (6|d_t^{\pi^*}|)^p + \frac{\beta}{3} B^{\pi_K} (6|d_t^{\pi_K}|)^p + \frac{\beta}{3} B^{\pi_K} (6|\tilde{d}_{t,K}|)^p \Bigg].$$
From the definition of the coefficients $c_{\sigma,\mu}(m)$, we get
$$\sigma A_k \le (1-\gamma) \sum_{m \ge 0} \gamma^m c_{\sigma,\mu}(m + K - k)\, \mu, \qquad \sigma B^\pi \le (1-\gamma) \sum_{m \ge 0} \gamma^m c_{\sigma,\mu}(m)\, \mu.$$
Therefore, it follows that
$$\sigma\Bigg[ \sum_{k=0}^{K-1} \alpha_k A_k |e_k|^p \Bigg] \le (1-\gamma) \sum_{k=0}^{K-1} \alpha_k \sum_{m \ge 0} \gamma^m c_{\sigma,\mu}(m + K - k)\, \|e_k\|_{p,\mu}^p = \frac{\gamma (1-\gamma)^2}{1-\gamma^{K+2}} \sum_{k=0}^{K-1} \sum_{m \ge 0} \gamma^{m+K-k-1} c_{\sigma,\mu}(m + K - k)\, \|e_k\|_{p,\mu}^p \le \frac{\gamma}{1-\gamma^{K+2}}\, C''_{\sigma,\mu}\, e^p,$$
where $e = \max_{0 \le k < K} \|e_k\|_{p,\mu}$. The terms involving $B^\pi$ satisfy
$$\sigma\big[ B^\pi (6|d_t^\pi|)^p \big] \le 6^p (1-\gamma) \sum_{m \ge 0} \gamma^m c_{\sigma,\mu}(m)\, \mu |d_t^\pi|^p \le 6^p\, C'_{\sigma,\mu}\, \|d_t^\pi\|_{p,\mu}^p.$$
Putting all these together, and choosing $p = 2$, we get
$$\|V_t^{\pi_t^*} - V_t^{\pi_K}\|_\sigma \le \lambda_K^{1/2} \left[ \frac{\gamma}{1-\gamma^{K+2}} C''_{\sigma,\mu} e^2 + \frac{(1-\gamma)\gamma^{K+1}}{1-\gamma^{K+2}} (R_{\max}/\gamma)^2 + \frac{36(1-\gamma)}{2(1-\gamma^{K+2})} C'_{\sigma,\mu} \epsilon^2 \right]^{1/2} \le \frac{2}{(1-\gamma)^2} \left[ \gamma C''_{\sigma,\mu} e^2 + (1-\gamma)\gamma^{K+1} (R_{\max}/\gamma)^2 + \frac{36(1-\gamma)}{2} C'_{\sigma,\mu} \epsilon^2 \right]^{1/2} \le \frac{2}{(1-\gamma)^2} \left[ C''_{\sigma,\mu} e^2 + \gamma^{K+1} (R_{\max}/\gamma)^2 + 18\, C'_{\sigma,\mu} \epsilon^2 \right]^{1/2} \le \frac{2}{(1-\gamma)^2} \left[ \sqrt{C''_{\sigma,\mu}}\, e + \gamma^{\frac{K-1}{2}} R_{\max} + 3\sqrt{2 C'_{\sigma,\mu}}\, \epsilon \right].$$
The desired result can then be obtained by applying the same steps as in the proof of Theorem 8 in Lazaric et al. (2012).

A.1 FINANCIAL PORTFOLIO OPTIMIZATION: ADDITIONAL DETAILS

The dataset consists of daily prices for 68 instruments in the technology and communication sectors from 2009 to 2019. We use 2009-2018 for training and 2019 for testing. To validate that our approach learns common features across instruments, and thus can transfer, we reserve 18 instruments not seen during training for further testing. The global asset universe $U$ used for training contains 50 instruments. We construct tasks by randomly choosing a portfolio of $|U_t| = 10$ instruments for each task. We create a permutation invariant policy network by applying the same sequence of operations to every instrument state. That is, for each instrument, the flattened input prices are passed through a common RNN with 25 hidden units and tanh activation; this output is concatenated with the latest allocation fraction of the instrument and passed through a common dense layer to produce a score. Instrument scores are passed to a softmax function to produce allocations that sum to one. The smoothing parameter for the scores is $\gamma = 0.2$; we set $\alpha = 0.5$ for the task prioritisation exponent and $\beta = 1.0$ to fully compensate for the prioritized sampling bias.

A.2 META FEDERATED LEARNING

Suppose we have a universe of federated learning clients $U$. The goal of task $t$ is to aggregate models in a federated learning experiment over a subset of clients $U_t \subseteq U$. At each step $n$, the action $a_{i,n}$ represents the weight assigned to the supervised learning model of client $i$ in the averaging procedure. Let $v_{i,n}$ denote the model of the client (i.e. the tensor of model parameters). We model the state of the client as some function of its $H$ most recent models, $x_{i,n} = f(v_{i,n-H+1}, \dots, v_{i,n})$. Assume that the aggregator has access to a small evaluation dataset that it can use to approximately assess the quality of models. We define the reward at each step to be the accuracy of the aggregate model,
$$R_t(x_n, a_n) = \mathcal{L}\left( \sum_{i \in U_t} a_{i,n}\, v_{i,n} \right),$$
where $\mathcal{L}(v)$ is a function that provides the accuracy of a model $v$ on the evaluation dataset. Therefore, by maximizing the total return over all time periods, we seek to improve both the accuracy at the final time step and the speed of convergence.
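The aggregation reward defined above can be sketched as follows; the linear toy models and the `evaluate_accuracy` helper are illustrative stand-ins for the MNIST classifier used in the experiments, not the actual setup.

```python
import numpy as np

def aggregate(client_models, a):
    # Weighted average of client parameter vectors: sum_i a_i * v_i.
    return sum(w * v for w, v in zip(a, client_models))

def evaluate_accuracy(params, X, y):
    # Hypothetical evaluator: here, a linear classifier scored by sign(X @ params).
    preds = np.sign(X @ params)
    return float(np.mean(preds == y))

rng = np.random.default_rng(0)
n_clients, dim = 10, 5
X_eval = rng.normal(size=(200, dim))
y_eval = np.sign(X_eval @ np.ones(dim))  # toy ground-truth rule
clients = [np.ones(dim) + 0.5 * rng.normal(size=dim) for _ in range(n_clients)]

a = rng.dirichlet(np.ones(n_clients))    # aggregation weights (the action)
reward = evaluate_accuracy(aggregate(clients, a), X_eval, y_eval)
print(f"R_t = accuracy of the weighted-average model: {reward:.3f}")
```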
We optimize the policy using Proximal Policy Optimization (PPO). We use the MNIST digit recognition problem. Each client observes 600 samples from the train dataset and trains a classifier composed of one 5x5 convolutional layer (with 32 channels and ReLU activation) and a softmax output layer. We use the same permutation invariant policy network architecture as before, with 10 hidden units in the RNN. We randomly select $|U_t| = 10$ clients for each task. We learn using an evaluation dataset comprised of 1000 random samples from the test dataset and test using all 10000 samples in the test dataset. We fix the number of federated learning iterations to 50. We explore the benefit of MTL in identifying useful clients in scenarios with skewed data distributions. We partition the dataset such that 8 of the clients in each task observe random digits between 0 and 5, and the remaining 2 clients observe random digits between 6 and 9. Therefore, for each task, 20% of the clients possess 40% of the unique labels. The state of each client is given by the accuracies of its $H$ most recent models on the evaluation dataset. Figure 6 shows the potential benefits of multi-task learning when simulators are inaccurate. In particular, we obtain two aggregation policies, one trained using single-task learning (STL) and another trained using multi-task learning (MTL), both trained using the same number of steps, and we observe their behavior during testing. The plots show that multi-task learning is able to learn non-uniform averaging policies that improve the convergence and performance of federated learning runs. More importantly, it can perform better than single-task learning even with the same number of samples. This may be attributed to the wider variety of client configurations (and consequently experiences) in the multi-task approach.
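Finally, the permutation invariant policy architecture used in both applications (a shared per-entity scorer followed by a softmax over entities, as described in Section A.1) can be sketched as below; the shared scorer is simplified to a small MLP rather than the RNN used in the experiments, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedScorerPolicy:
    """Permutation invariant policy: the same network scores every entity; softmax -> allocations."""

    def __init__(self, state_dim, hidden=25):
        self.W1 = 0.1 * rng.normal(size=(state_dim, hidden))
        self.w2 = 0.1 * rng.normal(size=hidden)

    def __call__(self, x):
        # x: (num_entities, state_dim); identical operations applied to each row.
        scores = np.tanh(x @ self.W1) @ self.w2
        z = np.exp(scores - scores.max())
        return z / z.sum()  # allocations on the probability simplex

policy = SharedScorerPolicy(state_dim=4)
x = rng.normal(size=(10, 4))
perm = rng.permutation(10)

# Check Definition 1: pi(sigma(x)) == sigma(pi(x)) for a random permutation sigma.
print(np.allclose(policy(x[perm]), policy(x)[perm]))  # True
```

Because the scorer is shared and the softmax is order-agnostic, permuting the input rows permutes the output allocations in exactly the same way, which is the property Definition 1 requires.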
1. What is the main contribution of the paper regarding multi-task reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its presentation and application? 3. Do you have any questions or concerns regarding the technical aspects of the paper, such as the theorem and its statement? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions or recommendations for improving the paper, such as providing more convincing evidence for the proposed bound or exploring different applications?
Review
Review This paper proposes an approach to reducing the sample complexity in multi-task reinforcement learning using permutation invariant policies. The main premise of the paper is that certain families of tasks exhibit approximate forms of symmetry, i.e., applying a permutation to the state/action variables would make all tasks similar in some metric sense. Then, the proposal is to learn a single permutation-invariant policy which will perform well on all of them simultaneously. A reinforcement learning algorithm to learn a permutation invariant policy is derived.

To get the technicalities out of the way, I think the paper is fairly well written and the related work is sufficiently well summarized (to my understanding). The authors are very open about the similarities to previous work and sources of inspiration. I checked the theory superficially, and the results generally have the form I would expect (in terms of the involved quantities and their proportions); however, if there were more subtle issues, I certainly missed them. A few minor nits: in the statement of Theorem 1, the authors present important quantities in the bound as if they were universal constants (c1, c2, c3). These quantities are not problem-independent; they involve features of the problem, so the statement of Theorem 1 creates the wrong impression. Moreover, the appendix refers to Theorem 1 as "Theorem 2", which was initially confusing to me. Also, it seems that the symbol pi switches semantics a few times - first, it denotes a deterministic policy, then a stochastic policy, and finally a policy network. It may be good to disambiguate them somehow for the sake of readability.

I generally agree with the story; however, there are two major issues. First, the presentation is structured in a way that does not sell this work very strongly. A lot of effort is spent on setting up Theorem 1 in a completely detached way, and then its actual relevance to the problem at hand is summarized in a hand-wavy corollary. From a paper claiming to solve resource allocation problems efficiently, I would expect at least the following: clearly articulate the problem. Argue convincingly that this particular problem class is hard (the arguments that N may be small and the simulator may be inaccurate are true for many problem classes). Explain the solution and how it exploits the structure of the problem. Finally, present theory verifying the soundness of the solution. The second issue concerns the application. While it's nice that the portfolio problem is somewhat grounded in reality, it's far from obvious that the theory explains what's going on in this problem. This problem is technically a POMDP, which is also suggested by the choice of an RNN as the policy network. The policy is then trained by policy gradient, which might be expected to behave differently on these problems. This gives us very little information about how good the proposed bound is. I believe this paper would benefit from a synthetic application where the difference in Bellman operators can be controlled precisely, to see that the problem behaves as predicted by the theory. Overall, I think that the general direction is good, and significant progress has been made; however, the current state of the paper does not present a convincing story.
ICLR
Title Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning Abstract One of the main challenges in real-world reinforcement learning is to learn successfully from limited training samples. We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning, by exploiting an invariance property in the tasks. We provide a theoretical performance bound for the gain in sample efficiency under this setting. This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy. We demonstrate empirically the effectiveness of the proposed approach on two real-world sequential resource allocation tasks where this invariance property occurs: financial portfolio optimization and meta federated learning. 1 INTRODUCTION Sample efficiency in reinforcement learning (RL) is an elusive goal. Recent attempts at increasing the sample efficiency of RL implementations have focused to a large extent on incorporating models into the training process: Xu et al. (2019); Clavera et al. (2018); Zhang et al. (2018); Berkenkamp et al. (2017); Ke et al. (2019); Yarats et al. (2019); Huang et al. (2019); Chua et al. (2018); Serban et al. (2018). The models encapsulate knowledge explicitly, complementing the experiences that are gained by sampling from the RL environment. Another means towards increasing the availability of samples for a reinforcement learner is by tilting the training towards one that will better transfer to related tasks: if the training process is sufficiently well adapted to more than one task, then the training of a particular task should be able to benefit from samples from the other related tasks. This idea was explored a decade ago in Lazaric & Ghavamzadeh (2010) and has been gaining traction ever since, as researchers try to increase the reach of deep reinforcement learning from its comfortable footing in solving games outrageously well to solving other important problems. Yu (2018) discusses a number of methods for increasing sample efficiency in RL and includes experience transfer as one important avenue, covering the transfer of samples, as we do here, transfer of representation or skills, and jumpstarting models which are then ready to be quickly, i.e. with few samples, updated to different tasks. D’Eramo et al. (2020) address the same idea, noting that multi-task learning can improve the learning of each individual task, motivated by robotics-type tasks with underlying commonality, such as balancing a single vs. a double pendulum, or hopping vs. walking. We are interested in exploiting the ability of multi-task learning to solve the sample efficiency problem of RL. Our setting does not apply to all problem classes nor does it seek to exploit the kind of physical similarities found in robotics tasks that form the motivation of Lazaric & Ghavamzadeh (2010); D’Eramo et al. (2020). Rather, we show that there are a number of reinforcement learning tasks with a particular fundamental property that makes them ideal candidates for multi-task learning with the goal of increasing the availability of samples for their training. We refer to this property as permutation invariance. 
It is present in very diverse tasks: we illustrate it on a financial portfolio optimization problem, whereby trades are executed sequentially over a given time horizon, and on the problem of meta-learning in a federated supervised learning setting. Permutation invariance in the financial portfolio problem exhibits itself as follows: consider the task of allocating a portion of wealth to each of a number of financial instruments using a trading policy. If the trading policy is permutation invariant, one can change the order of the instruments without changing the policy. This allows one to generate multiple portfolio optimization tasks from a given set of financial instruments. A commonality between applications that have this property is that they concern sequential resource allocation: at each time step, the resource allocation scores the quality of each available candidate entity (for example a financial instrument in the above example), then based on those scores, apportions out the resource (the total wealth to invest, in the above example) among the entities at that time step, so that over the horizon of interest, the reward is maximized. Sequential resource allocation problems include applications such as sequential allocation of budget, sequential allocation of space, e.g. in IT systems, hotels, delivery vehicles, sequential allocation of people to work slots or appointments, etc. Many such applications possess permutation invariance in that the ordering of the entities, i.e. where the resources are allocated, can change without changing the resulting optimal allocation. We show that under this form of permutation invariance, it is possible to derive a bound on the performance of the policy. The bound is an extension of that of Lazaric & Ghavamzadeh (2010), and while similar to, provides additional information beyond the bound of D’Eramo et al. (2020). We use the bound to motivate an algorithm that allows for substantially improved results as compared with solving each task on its own. The bound and the algorithm are first analyzed on a synthetic problem that validates the bound in our theorem and confirms the multi-task gain that the theory predicts. Hessel et al. (2018); Bram et al. (2019) have cautioned against degrading of the performance on each task when some tasks bias the updates to the detriment of others in multi-task learning. They claim that some tasks have a greater density or magnitude of in-task rewards and hence a disproportionate impact on the learning process. In our setting, deleterious effects of some tasks on others could also arise. The algorithm we propose handles this through a form of prioritized sampling, where priorities are put on the tasks themselves, and acts like a prioritized experience replay buffer, applied to a multi-task learning problem. We show empirically that the priorities thus defined protect the overall learning problem from the deleterious effects that unrelated or unhelpful tasks could otherwise have on the policy. 
The contributions of this work are as follows: (1) we identify the permutation invariance property of the class of reinforcement learning problems involving sequential resource allocation, (2) we define a method to increase sample efficiency in these reinforcement learning problems by leveraging this property of permutation invariance; (3) we provide a theoretical performance bound for the class of problems; (4) we validate experimentally the utility of permutation variance on sample efficiency as well as the validity of the bound on a synthetic problem; and (5) we illustrate two real-world RL resource allocation tasks for which this property holds and demonstrate the benefits of the proposed method on sample efficiency and thus also on the overall performance of the models. 2 RELATED WORK A notable first stream of work on leveraging multi-task learning for enhancing RL performance on single tasks can be found in Wilson et al. (2007); Lazaric & Ghavamzadeh (2010) which consider, as we do, that there is an underlying MDP from which the multiple tasks can be thought to derive. They use however a Bayesian approach and propose a different algorithmic method than ours. Our results extend performance bounds by Lazaric et al. (2012) on single-task RL. As noted by Yu (2018), jumpstarting, or distilling experiences and representations of relevant policies is another means to increasing sample efficiency in solving a new but related problem. Rusu et al. (2016) uses this idea in so-called progressive neural networks and Parisotto et al. (2015) leverage multiple experts to guide the derivation of a general policy. With a similar objective, Teh et al. (2017) define a policy centroid, that is, a shared distilled policy, that captures the commonalities across the behaviors in the tasks. In all of these distillation-type methods, the tasks considered are simple or complex games. Teh et al. (2017) note that their policy centroid method, distral, is likely to be affected by task interference, in that differences across tasks may degrade the performance of the resulting policy of any of the constituent tasks. This topic was studied by Hessel et al. (2018); Bram et al. (2019). Hessel et al. (2018) proposed a solution to this by extending the so-called PopArt normalization van Hasselt et al. (2016) to re-scale the updates of each task so that the different characteristics of the task-specific reward do not skew the learning process. Bram et al. (2019) use a different approach that learns attention weights of the sub-networks of each task and discards those that are not relevant or helpful. Vuong et al. (2019); D’Eramo et al. (2020) are, like our work, concerned with sharing of experiences to facilitate a more sample-efficient learning process. Vuong et al. (2019) suggest identifying the shared portions of tasks to allow sharing of samples in those portions. The work of D’Eramo et al. (2020) is in some ways quite similar to ours: the authors’ goal is the same and they derive a bound as we do on the performance in this setting. However, their setting is different in that their tasks have both shared and task-specific components, and their bound becomes tighter only as the number of tasks increases. In our setting, we do not require a task-specific component, and we are able to show how the distance between the MDPs of each task, in addition to the number of tasks, affects the strength of the bound. 
Recently, permutation invariance has been exploited in deep multi-agent reinforcement learning (Liu et al., 2019) where the invariance properties arise naturally in a homogeneous multi-agent setting. Their work employs permutation invariance in learning the critic whereas in our case the entire learned policy employs permutation invariance. 3 PRELIMINARIES We begin by defining notation. For a measurable space with domain X , let S(X ) denote the set of probability measures over X , and B(X ;L) the space of bounded measurable functions with domain X and bound 0 < L < ∞. For a measure ρ ∈ S(X ) and a measurable function f : X → R, the l2(ρ)-norm of f is ‖f‖ρ, and for a set of n points X1, · · · , Xn ∈ X , the empirical norm, ‖f‖n is ‖f‖2ρ = ∫ f(x)2ρ(dx) and ‖f‖2n = 1 n n∑ t=1 f(Xt) 2. Let ‖f‖∞ = supx∈X |f(x)| be the supremum norm of f . Consider a set of MDPs indexed by t. Each MDP is denoted by a tupleMt = 〈X ,A, Rt, Pt, γ〉, where X , a bounded closed subset of the s-dimensional Euclidean space, is a common state space; A is a common action space, Rt : X ×A → R is a task specific reward function uniformly bounded by Rmax, Pt is a task specific transition kernel such that Pt(·|x, a) is a distribution over X for all x ∈ X and a ∈ A, and γ ∈ (0, 1) is a common discount factor. Deterministic policies are denoted by π : X → A. For a given policy π, the MDPMt is reduced to a Markov chainMπt = 〈X , Rπt , Pπt , γ〉 with reward function Rπt (x) = Rt(x, π(x)), transition kernel P π t (·|x) = Pt(·|x, π(x)), and stationary distribution ρπt . The value function V πt for MDP t is defined as the unique fixed-point of the Bellman operator T πt : B(X ;Vmax = Rmax/(1− γ))→ B(X ;Vmax), given by (T πt V )(x) = Rπt (x) + γ ∫ X Pπt (dy|x)V (y). Let π∗t denote the optimal policy forMt. The optimal value function V π∗t t forMt is defined as the unique fixed-point of its optimal Bellman operator T π ∗ t t which is defined by (T π ∗ t t V )(x) = max a∈A [ Rt(x, a) + γ ∫ X Pt(dy|x, a)V (y) ] . To approximate the value function V , we use a linear approximation architecture with parameters α ∈ Rd and basis functions ϕi ∈ B(X ;L) for i = 1, · · · , d. Let ϕ(·) = (ϕ1(·), · · · , ϕd(·))T ∈ Rd be the feature vector and F the linear function space spanned by basis functions ϕi. Thus, F = {fα | α ∈ Rd and fα(·) = ϕ(·)Tα}. Consider a learning task to dynamically allocate a common resource across entities Ut ⊆ U . Each t corresponds to a task, but for now take t to be an arbitrary fixed index. At each time step n, the decision maker observes states xn = (xi,n)i∈Ut of the entities, where xi,n is the state of entity i, and takes action an = (ai,n)i∈Ut , where ai,n is the share of the resource allocated to entity i. The total resource capacity is normalized to 1 for convenience. Therefore, allocations satisfy 0 ≤ ai,n ≤ 1 and∑ i∈Ut ai,n = 1. We consider policy πθ(xn) parameterized by θ. Assume that we have access to the reward function Rt as well as a simulator that generates a trajectory of length N given any arbitrary policy πθ. The objective of the learning task is to maximize Jt(θ) = E [ N∑ n=1 γn−1Rt(xn, an) ∣∣∣∣∣ an+1 = πθ(xn), xn+1 ∼ Pt(·|xn, an), x1 ∼ Pt(·) ] . In many settings, N is small and simulators are inaccurate; therefore, trajectories generated by the simulator are poor representations of the actual transition dynamics. This occurs in batch RL where trajectories are rollouts from a dataset. In these cases, policies overfit and generalize poorly. 
4 THEORETICAL RESULTS We introduce first a property that we term permutation-invariance for the policy network that can be shown to help significantly reduce overfitting. Definition 1 (Permutation Invariant Policy Network) A policy network πθ is permutation invariant if it satisfies πθ(σ(x)) = σ(πθ(x)) for any permutation σ. Permutation invariant policy networks have significant advantages over completely integrated policy networks. While the latter are likely to fit correlations between different entities, this is not possible with permutation invariant policy networks as they are agnostic to identities of entities. Therefore, permutation invariant policy networks are better able to leverage experience across time and entities, leading to greater efficiency in data usage. Moreover, observe that if the transition kernels can be factored into independent and identical transition kernels across entities, then the optimal policy is indeed permutation invariant. Our main theoretical contributions start with an extension of results from Lazaric et al. (2012), where a finite-sample error bound was derived for the least squares policy iteration (LSPI) algorithm on a single task. Lazaric et al. (2012) provided a high-probability bound on the performance difference between the final learned policy and the optimal policy, of the form c1 + c2/ √ N , where c1 and c2 are constants that depend on the task and the chosen feature space, and N is the number of training examples. We extend their result by showing that, as long as tasks are -close to each other (with respect to a similarity measure we define later), the error bound of solving each task using our multi-task approach has the form c1 + c2/ √ NT + c3 , where T is the number of tasks and c3 is a task-dependent constant. Specifically, our theorem provides a general result and performance guarantee with respect to using data from a different but similar MDP. Definition 1 provides a basis for generating many such MDPs. Finally, the benefit of doing so shall be provided by Corollary 2. Thus, provided is small, a given task can benefit from a much larger set of NT training examples. In addition to the assumptions of Lazaric et al. (2012), we extend the definition of second-order discounted-average concentrability, proposed in Antos et al. (2008), and define the notion of first-order discounted-average concentrability. The latter will be used in our main result, Theorem 1. Assumption 1 There exists a distribution µ ∈ S(X ) such that for any policy π that is greedy with respect to a function in the truncated space F̃ , µ ≤ Cρπt for all t, where C <∞ is a constant. Given the target distribution σ ∈ S(X ) and an arbitrary sequence of policies {πm}m≥1, let cσ,µ = sup π1,...,πm ∥∥∥∥d(µPπ1 . . . Pπm)dσ ∥∥∥∥ . We assume that C ′σ,µ, C ′′ σ,µ <∞, and define first and second order discounted-average concentrability of future-state distributions as follows: C ′σ,µ = (1− γ) ∑ m≥0 γmcσ,µ(m), C ′′σ,µ = (1− γ)2 ∑ m≥1 mγm−1cσ,µ(m). Theorem 1 (Multi-Task Finite-Sample Error Bound) LetM = 〈X ,A, R, P, γ〉 be an MDP with reward function R and transition kernel P . Assume A finite. Denote its Bellman operator by (T πV )(x) = Rπ(x) + γ ∫ X Pπ(dy|x)V (y). Given a policy π, define the Bellman difference operator between Mt and M to be Dπt V = T πt V − T πV . Apply the LSPI algorithm toM, by generating, at each iteration k, a path fromM of size N , where N satisfies Lemma 4 in Lazaric et al. (2012). 
Let $V_{-1} \in \tilde{\mathcal{F}}$ be an arbitrary initial value function, $V_0, \dots, V_{K-1}$ ($\tilde{V}_0, \dots, \tilde{V}_{K-1}$) be the sequence of value functions (truncated value functions) generated by LSPI after $K$ iterations, and $\pi_k$ be the greedy policy w.r.t. the truncated value function $\tilde{V}_{k-1}$. Suppose also that $\|D^\pi_t V^\pi\|_\mu \le \epsilon$ for all $\pi$, and $\|D^{\pi_k}_t \tilde{V}_{k-1}\|_\mu \le \epsilon$ for all $k$. Then, for constants $c_1, c_2, c_3, c_4$ that depend on $\mathcal{M}$, with probability $1-\delta$ (with respect to the random samples):
$$\|V^{\pi^*_t}_t - V^{\pi_K}_t\|_\sigma \le c_1 \frac{1}{\sqrt{N}} + c_2\,\epsilon\sqrt{C'_{\sigma,\mu}} + c_3\sqrt{C''_{\sigma,\mu}} + c_4.$$

The proof is deferred to the Appendix. Theorem 1 formalizes the trade-off between drawing fewer samples from the exact MDP $\mathcal{M}_t$ versus drawing more samples from a different MDP $\mathcal{M}$. Importantly, it shows how to benefit from solving a different MDP, $\mathcal{M}$, when: (a) additional samples can be obtained from $\mathcal{M}$, and (b) $\mathcal{M}$ is not too different from $\mathcal{M}_t$. In particular, the distance measure is simply the distance between the Bellman operators of the MDPs, which can be bounded if the differences in both the transition and reward functions are bounded.

In recent work, a performance bound for multi-task learning was given in Theorems 2 and 3 of D'Eramo et al. (2020). However, the authors used a different setup containing both shared and task-specific representations, and their focus was on showing that the cost of learning the shared representation decreases with more tasks. They did not show how the similarity or difference across tasks affects performance. In contrast, our setup does not contain task-specific representations, and our focus is on how differences across MDPs impact the benefit of having more tasks (and consequently more samples). We show this in Corollary 1 and Corollary 2.

Remark 1 While our theoretical results are based on LSTD and LSPI and assume a finite action space, our approach is applicable to a wide range of reinforcement learning algorithms, including policy gradient methods, and to MDPs with continuous action spaces. Deriving similar results for a larger family of models and algorithms remains interesting, albeit challenging, future work.

Permutation invariant policy networks allow using data from the global set of entities $\mathcal{U}$. Since the policy network is agnostic to the identities of the entities, one can learn a single policy for all tasks, where each task $t \in [T]$ is a resource allocation problem over a subset of entities $\mathcal{U}_t$. For notational simplicity, assume that all tasks have the same number of entities and all trajectories are of equal length $N$. Our approach can, however, be readily extended to tasks with different numbers of entities and different trajectory lengths. Permutation invariance allows a large set of MDPs to leverage the result of Theorem 1. In the next section we provide an algorithm, motivated by the following corollaries, together with a prioritized sampling strategy for this setting that drives significantly greater sample efficiency for the original task. The sampling strategy also helps to stabilize the learning process, reducing the risk of deleterious effects of the multi-task setting, as discussed by Teh et al. (2017) and addressed in works such as Hessel et al. (2018); Bram et al. (2019).

Corollary 1 Let $[T]$ be a set of similar tasks such that their distance from the average MDP, given by
$$(\mathcal{T}^\pi V)(x) = \frac{1}{T}\sum_{t=1}^T R^\pi_t(x) + \gamma \int_{\mathcal{X}} \frac{1}{T}\sum_{t=1}^T P^\pi_t(dy|x)\, V(y),$$
is bounded by $\epsilon$ as defined in Theorem 1. Let $N$ be the number of samples available in each task. Let $\pi_K$ be the policy obtained at the $K$-th iteration when applying LSPI to the average MDP.
Then the suboptimality of the policy on each task is $O(1/\sqrt{NT}) + O(\epsilon) + c$ for some constant $c$ (where suboptimality is defined according to Theorem 1).

Recall that each task is formed by selecting a subset $\mathcal{U}_t$ of entities from the global set $\mathcal{U}$. We thus have the following gain in samples, attributable to the permutation invariance of the policy network.

Corollary 2 (Gain in Sample Efficiency from Permutation Invariance) Let $M = |\mathcal{U}|$ and $m = |\mathcal{U}_t|$. Given fixed $M$ and $m$, there are $T = \binom{M}{m} \ge \left(\frac{M}{m}\right)^m$ different tasks. Then, by Corollary 1, assuming all pairs of tasks are weakly correlated, the potential gain in sample efficiency is exponential in $m$.

Disregarding correlation between samples from tasks with overlapping entities, Corollary 1 and Corollary 2 together suggest that the (up to) exponential increase in the number of available tasks can significantly improve sample efficiency as compared to learning each task separately.

5 EXPLOITING PERMUTATION INVARIANCE THROUGH MULTI-TASK REINFORCEMENT LEARNING

Our approach to exploiting permutation invariance is via multi-task reinforcement learning, where each “task” corresponds to a particular choice of subset $\mathcal{U}_t \subset \mathcal{U}$. Furthermore, for each task, we enforce permutation invariance among the entities $i$ by forcing the neural network to apply the same sequence of operations to the state input $x_i$ of each entity through parameter sharing.

The proposed method, shown in Algorithm 1, learns a single policy by sampling subsequences of trajectories from the different MDPs. At each step, we sample a task $t$ according to a distribution defined by a task selection policy $p$. Then, a minibatch sample $B_t$ is drawn from the replay buffer for task $t$, and gradient descent is performed using the sampled transitions $B_t$ (alternatively, samples can be generated using policy rollouts for the specific task). Separate replay buffers are maintained for each task and are updated only when the corresponding task is being used.

In contrast with other active sampling approaches in multi-task learning, our approach maintains an estimate of the difficulty of each task $t$ as a score $s_t$. After each training step, we update the score for only the sampled task based on minibatch $B_t$, avoiding evaluation over all the tasks. The scoring functions depend on the sampled minibatch; to reduce fluctuations in the scores for each task, exponential smoothing is applied: $s_t \leftarrow \gamma s_t + (1-\gamma)\cdot \mathrm{scorer}(B_t)$, where, with slight abuse of notation, $\gamma$ here denotes a smoothing constant.

We propose a stochastic prioritization method that interpolates between pure greedy prioritization and uniform random sampling. Our approach is similar to prioritized experience replay (PER) by Schaul et al. (2016), but while classical PER prioritizes samples, we prioritize tasks. The probability of sampling task $t$ is $p_t = s_t^\alpha / \sum_{t'} s_{t'}^\alpha$, where the exponent $\alpha$ determines the degree of prioritization, with $\alpha = 0$ corresponding to the uniform case. We correct for bias with importance-sampling (IS) weights $w_t = 1/(T p_t)^\beta$, which fully compensate for the non-uniform probabilities when $\beta = 1$. We normalize the weights by $1/\max_t w_t$. Tasks on which the reward variance is high can be interpreted as having more challenging samples; hence, reward variance can be used as a scoring function.

Algorithm 1 Prioritized Multi-Task Reinforcement Learning for Increasing Sample Efficiency
Initialize policy network $\pi_\theta$
Initialize replay buffers $R_1, \dots, R_T$
Initialize time steps $n_1 \leftarrow 1, \dots, n_T \leftarrow 1$
loop
  Select a task $t \sim p$ to train on
  Sample a random minibatch $B_t$ of transitions $(x_n, a_n, r_n, x_{n+1})$ from $R_t$
  Update policy $\theta$ using $B_t$ and the chosen RL approach (correcting for bias using IS weights $w$)
  Update score $s_t \leftarrow \gamma s_t + (1-\gamma)\cdot \mathrm{scorer}(B_t)$
  Update all selection probabilities $p$ and IS weights $w$
  for $n = n_t, \dots, \min\{n_t + n_e, N\}$ do
    For task $t$, select action $a_n$ according to the current policy and exploration noise
    Execute action $a_n$, and observe reward $r_n$ and new state $x_{n+1}$
    Store transition $(x_n, a_n, r_n, x_{n+1})$ in $R_t$
  end for
  If $n < N$, update $n_t \leftarrow n + 1$; otherwise, update $n_t \leftarrow 1$
end loop

6 EXPERIMENTS

6.1 SYNTHETIC DATA

With the aim of validating the theory presented in Section 4, we define a synthetic example to explore the efficiency gain afforded by permutation invariance. To do so, we control the deviation $\epsilon$ between any two tasks, thereby empirically validating the main theoretical results. Consider a resource allocation problem where the observed state $x_i$ for each entity $i \in \{1, \dots, m\}$ is a single scalar $x_i \in [0,1]$. The action space is the probability simplex, where each action $a = (a_1, \dots, a_m)$ indicates the fraction of resource allocated to each entity. The reward function is
$$R(x, a) := \sum_i \left( x_i a_i - \beta_i a_i \log a_i \right),$$
where $\beta_i$ is a weight parameter for each entity. Note that when $\beta_i = \beta$ for all $i$, the reward function becomes $R(x,a) = \left(\sum_i x_i a_i\right) + \beta H(a)$, where $H$ is the Shannon entropy. This implies that maximizing the reward involves a trade-off between focusing resources on entities with high $x_i$ and distributing them uniformly across all $i$. Note that the reward function is permutation invariant, but that when we allow $\beta_i$ to vary over the entities, the function deviates from being perfectly permutation invariant. We use the range $\max_i \beta_i - \min_i \beta_i$ as a stand-in for $\epsilon$. Let $m = 10$.

For each $\epsilon$, we run two experiments. The first examines the performance of policies trained by LSPI using $N$ real examples drawn i.i.d. from the state-action space, for $N = 20, \dots, 2000$. A small Gaussian noise is added to each reward to make learning harder. The second experiment uses only 20 real examples, but augments the training set (up to $N$) through random permutation of the real examples. The first two plots in Fig. 1 show the results for $\epsilon = 0.8$ and $\epsilon = 0$, respectively. Performance improves with $N$, as predicted by the $1/\sqrt{N}$ term in our error bound. Note that in the experiment using only 20 real examples, a performance gain is achieved by using permuted examples; this corresponds precisely to the multi-task gain predicted by the $1/\sqrt{NT}$ term. When $\epsilon$ is large, there is a significant gap between the results of the two experiments, as predicted by the $\epsilon$-term in the error bound. The last plot in Fig. 1 shows this gap at $N = 2000$ as $\epsilon$ varies from 0 to 0.8.
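To make the synthetic setup concrete, here is a small sketch (numpy; the LSPI solver itself is abstracted away) of the reward and of the permutation-based augmentation used in the second experiment. The function names and data layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def reward(x, a, beta):
    """R(x, a) = sum_i (x_i a_i - beta_i a_i log a_i); reduces to an
    entropy-regularized objective when beta_i is constant across entities."""
    a = np.clip(a, 1e-12, 1.0)                     # avoid log(0)
    return float(np.sum(x * a - beta * a * np.log(a)))

def permute_dataset(states, actions, rewards, n_copies, rng=np.random):
    """Augment (x, a, r) samples by jointly permuting entity indices.
    Exact when beta_i is constant (the task is permutation invariant);
    for varying beta_i the augmented samples are only epsilon-consistent."""
    m = states.shape[1]                            # number of entities
    xs, acts, rs = [states], [actions], [rewards]
    for _ in range(n_copies - 1):
        perm = rng.permutation(m)
        xs.append(states[:, perm])
        acts.append(actions[:, perm])
        rs.append(rewards)                         # reward unchanged by permutation
    return np.concatenate(xs), np.concatenate(acts), np.concatenate(rs)
```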
6.2 REAL-WORLD DATA

We consider two real-world resource allocation settings: financial portfolio optimization and meta federated learning. Financial portfolio optimization is discussed below, while meta federated learning is in the Appendix. Given historical prices for a universe of financial assets $\mathcal{U}$, the goal of task $t$ is to allocate investments across a subset of assets $\mathcal{U}_t \subseteq \mathcal{U}$. The multiple tasks $t$ thus correspond to multiple portfolios of instruments. Permutation invariance is useful in this setting since, from a given universe of instruments (e.g., the 500 instruments in the S&P 500), an exponential number of tasks can be generated, each with its own portfolio. Consider now one such task.

At the beginning of time period $n$, the action $a_{i,n}$ represents the fraction of wealth the decision maker allocates to asset $i$. The allocations evolve over the time period due to changes in asset prices. Let $w_{i,n}$ denote the allocation of asset $i$ at the end of time period $n$. We model the state of an asset using its current allocation and a window of its $H$ most recent prices. In particular, let $v_{i,n}$ denote the close price of asset $i$ over time period $n$, and let $y_{i,n} = v_{i,n}/v_{i,n-1}$ denote the ratio of close prices between adjacent time periods. (Daily high and low prices are also used in the state but omitted here for brevity.) Then, the allocation in asset $i$ at the end of time period $n$ is given by
$$w_{i,n} = \frac{a_{i,n}\, y_{i,n}}{\sum_{i \in \mathcal{U}_t} a_{i,n}\, y_{i,n}},$$
and the state of asset $i$ at the beginning of time period $n$ is given by $x_{i,n} = (w_{i,n-1},\; v_{i,n-H}/v_{i,n-1},\; \dots,\; v_{i,n-2}/v_{i,n-1})$.

The change in portfolio value over period $n$ depends on the asset prices and the transaction costs incurred in rebalancing the portfolio from $(w_{i,n-1})_{i \in \mathcal{U}_t}$ to $(a_{i,n})_{i \in \mathcal{U}_t}$. The reward over period $n$ is defined as the log rate of return:
$$R_t(x_n, a_n) = \ln\left[ \beta\big((w_{i,n-1})_{i \in \mathcal{U}_t}, (a_{i,n})_{i \in \mathcal{U}_t}\big) \sum_{i \in \mathcal{U}_t} a_{i,n}\, y_{i,n} \right],$$
where the transaction-cost factor $\beta$ can be evaluated using an iterative procedure (see Jiang et al. (2017)). Defining the reward this way is appealing because maximizing the average total reward over consecutive periods is equivalent to maximizing the total rate of return over the periods. To leverage this, we approximate $\beta\big((w_{i,n-1})_{i \in \mathcal{U}_t}, (a_{i,n})_{i \in \mathcal{U}_t}\big) \approx 1 - c\sum_{i \in \mathcal{U}_t} |w_{i,n-1} - a_{i,n}|$, where $c$ is a commission rate, to obtain a closed-form expression for $R_t(x_n, a_n)$ (see Jiang et al. (2017)). We optimize using direct policy gradient on minibatches of consecutive samples:
$$\theta \leftarrow \theta + \eta \nabla_\theta \left[ \frac{1}{B} \sum_{n=n_b}^{n_b+B-1} w_t\, R_t(x_n, \pi_\theta(x_n)) \right],$$
where $n_b$ is the first time index in the minibatch, $B$ is the size of a minibatch, and $w_t$ is the IS weight for task $t$. As in Jiang et al. (2017), we sample $n_b$ from a geometric distribution that prioritises recent samples, and we implement replay buffers for each task.

A benchmark trading strategy is the equal constantly-rebalanced portfolio (Equal CRP), which rebalances to maintain equal weights. As noted earlier, ideally the scoring function should depend only on the minibatch $B_t$. A deviation from Equal CRP can be viewed as learning to exploit price movements, so we use this deviation as the prioritization signal. Prioritised MTL thus prioritises tasks on which the policy deviates from Equal CRP. Note that the policy deviates from CRP only when it is profitable to do so. Let
$$\mathrm{scorer}(B_t) = \max_{n \in \{n_b, \dots, n_b+B-1\}} \left\| \pi_\theta(x_n) - \frac{\mathbf{1}}{|\mathcal{U}_t|} \right\|_\infty$$
be the scoring of tasks in Prioritised MTL, using the maximum absolute deviation of the minibatch allocations from Equal CRP.

Figure 2 shows (left) a scatter plot of the maximum score seen every 50 steps against the change in episode rewards in a single-task learning experiment, and (right) the minibatch score against the maximum gradient norm for the minibatch. Higher scores imply higher variance in the episode rewards and hence more challenging and useful samples. The correlation between scores and gradient norms shows that our approach performs gradient-based prioritisation (see Katharopoulos & Fleuret (2018); Loshchilov & Hutter (2015); Alain et al. (2015)), but in a computationally efficient manner. The details of the dataset and parameter settings can be found in the Appendix.
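The following sketch combines the scorer above with the task-prioritization rules of Algorithm 1 (score smoothing, sampling probabilities $p_t$, and IS weights $w_t$). It is illustrative only; `policy` is a placeholder for the trained allocation network, and the smoothing constant follows the value reported in Appendix A.1.

```python
import numpy as np

def scorer(policy, minibatch_states, n_entities):
    """Max absolute deviation of minibatch allocations from Equal CRP (1/|U_t|)."""
    allocs = np.stack([policy(x) for x in minibatch_states])
    return float(np.max(np.abs(allocs - 1.0 / n_entities)))

def task_probabilities(scores, alpha=0.5):
    """p_t proportional to s_t^alpha; alpha = 0 recovers uniform task sampling."""
    s = np.power(np.maximum(scores, 1e-8), alpha)
    return s / s.sum()

def importance_weights(p, beta=1.0):
    """IS weights w_t = 1/(T p_t)^beta, normalized by 1/max_t w_t."""
    w = 1.0 / np.power(len(p) * p, beta)
    return w / w.max()

def update_score(scores, t, batch_score, gamma_s=0.2):
    """Exponential smoothing of the sampled task's score; only task t changes."""
    scores[t] = gamma_s * scores[t] + (1.0 - gamma_s) * batch_score
    return scores
```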
Figure 3 shows the performance of the learned policies tested on 10 tasks drawn from out-of-sample instruments. A policy network with weights initialized close to zero behaves like an Equal CRP policy. As noted, any profitable deviation from Equal CRP implies learning useful trading strategies. The plots show that the MTL policies perform well on instruments never seen during training, offering a remarkable benefit for using RL in the design of trading policies.

Fig. 4 shows the performance of prioritised multi-task learning (MTL) versus single-task learning (STL), i.e., learning a policy for each task independently on the instruments in that task. We also show results for MTL without prioritised sampling, i.e., with $\alpha = 0$. We consider 5 tasks and 30 tasks. The plots show that prioritised MTL performs significantly better than STL in both convergence time and final achieved performance. The performance with 30 tasks is significantly better than the performance with 5 tasks, showing that our approach leverages the samples of the additional tasks.

Fig. 5 illustrates the typical behavior of a multi-task learning (MTL) and a single-task learning (STL) policy on the test period, for tasks where the multi-task policy performed significantly better. The single-task policy kept constant equal allocations, while the multi-task policy was able to learn more complex allocations. In financial data, strongly trending prices do not occur often and are inherently noisy. Multi-task learning with permutation invariance helps with both challenges, allowing the algorithm to learn more complex patterns in a given training period.

7 CONCLUSIONS

We introduce an approach for increasing the sample efficiency of reinforcement learning in a setting with widespread applicability within the class of sequential resource allocation problems. The key property is permutation invariance: resources are allocated to entities according to a score, and the order of the entities can change without modifying the optimal allocation. Under this property, we show that a bound exists on the policy performance. This bound motivates a highly effective algorithm for improving the policy through a multi-task approach. Using prioritized task-sampling, the method not only improves the reward of the final policy but also renders it more robust. We illustrate the property and the method on two important problems: sequential financial portfolio optimization and meta federated learning, where the latter is provided in the Appendix.

A APPENDIX

Theorem 1 Let $\mathcal{M} = \langle \mathcal{X}, \mathcal{A}, R, P, \gamma \rangle$ be an MDP with reward function $R$ and transition kernel $P$. Denote its Bellman operator by $(\mathcal{T}^\pi V)(x) = R^\pi(x) + \gamma\int_{\mathcal{X}} P^\pi(dy|x)V(y)$. Given a policy $\pi$, define the Bellman difference operator between $\mathcal{M}_t$ and $\mathcal{M}$ to be $D^\pi_t V = \mathcal{T}^\pi_t V - \mathcal{T}^\pi V$. Apply the LSPI algorithm to $\mathcal{M}$ by generating, at each iteration $k$, a path from $\mathcal{M}$ of size $N$, where $N$ satisfies Lemma 4 in Antos et al. (2008). Let $V_{-1} \in \tilde{\mathcal{F}}$ be an arbitrary initial value function, $V_0, \dots, V_{K-1}$ ($\tilde{V}_0, \dots, \tilde{V}_{K-1}$) be the sequence of value functions (truncated value functions) generated by LSPI after $K$ iterations, and $\pi_k$ be the greedy policy w.r.t. the truncated value function $\tilde{V}_{k-1}$. Suppose also that $\|D^\pi_t V^\pi\|_\mu \le \epsilon$ for all $\pi$, and $\|D^{\pi_k}_t \tilde{V}_{k-1}\|_\mu \le \epsilon$ for all $k$. Then, with probability $1-\delta$ (with respect to the random samples), we have
$$\|V^{\pi^*_t}_t - V^{\pi_K}_t\|_\sigma \le \frac{2}{(1-\gamma)^2}\left\{ (1+\gamma)\sqrt{C C''_{\sigma,\mu}}\left[ \frac{2}{\sqrt{1-\gamma^2}}\left(2\sqrt{2}\,E_0(\mathcal{F}) + E_2\right) + \frac{2}{1-\gamma}\left( \gamma V_{\max} L \sqrt{\frac{d}{\nu_\mu}}\left( \sqrt{\frac{8\log(8dK/\delta)}{N}} + \frac{1}{N} \right)\right) + E_1 \right] + \gamma^{\frac{K-1}{2}} R_{\max} + 3\epsilon\sqrt{2 C'_{\sigma,\mu}} \right\}.$$

Proof: For convenience, we drop the task subscript whenever we refer to variables associated with $\mathcal{M}$.
Define
$$d^\pi_t = D^\pi_t V^\pi, \qquad \tilde{d}_{t,k} = D^{\pi_k}_t \tilde{V}_{k-1}, \qquad e_k = \tilde{V}_k - \mathcal{T}^{\pi_k}\tilde{V}_k,$$
$$E_k = P^{\pi_{k+1}}(I - \gamma P^{\pi_{k+1}})^{-1} - P^{\pi^*}(I - \gamma P^{\pi_k})^{-1}, \qquad F_k = P^{\pi_{k+1}}(I - \gamma P^{\pi_{k+1}})^{-1} + P^{\pi^*}(I - \gamma P^{\pi_k})^{-1}.$$
From the proof of Lemma 12 in Antos et al. (2008), we get
$$V^{\pi^*} - V^{\pi_K} \le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} E_k e_k + (\gamma P^{\pi^*})^K (V^{\pi^*} - V^{\pi_0}).$$
By applying the above inequality together with the triangle inequality, and taking absolute values pointwise, we get
$$|V^{\pi^*_t}_t - V^{\pi_K}_t| \le |V^{\pi^*_t}_t - V^{\pi^*}| + |V^{\pi^*} - V^{\pi_K}| + |V^{\pi_K} - V^{\pi_K}_t|$$
$$\le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} F_k |e_k| + (\gamma P^{\pi^*})^K |V^{\pi^*} - V^{\pi_0}| + |V^{\pi^*_t}_t - V^{\pi^*}| + |V^{\pi_K} - V^{\pi_K}_t|$$
$$\le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} F_k |e_k| + \frac{2R_{\max}}{1-\gamma}\gamma^K \mathbf{1} + |V^{\pi^*_t}_t - V^{\pi^*}| + |V^{\pi_K} - V^{\pi_K}_t|,$$
where we used the fact that $|V^{\pi^*} - V^{\pi_0}| \le (2R_{\max}/(1-\gamma))\mathbf{1}$. Next, we derive upper bounds for $|V^{\pi^*_t}_t - V^{\pi^*}|$ and $|V^{\pi_K} - V^{\pi_K}_t|$.

(a) Observe that
$$V^{\pi^*_t}_t - V^{\pi^*} = \mathcal{T}^{\pi^*_t}_t V^{\pi^*_t}_t - \mathcal{T}^{\pi^*} V^{\pi^*} \le \mathcal{T}^{\pi^*_t}_t V^{\pi^*_t}_t - \mathcal{T}^{\pi^*_t} V^{\pi^*} = \mathcal{T}^{\pi^*_t}_t V^{\pi^*_t}_t - \mathcal{T}^{\pi^*_t} V^{\pi^*_t}_t + \gamma P^{\pi^*_t}(V^{\pi^*_t}_t - V^{\pi^*}) \le (I - \gamma P^{\pi^*_t}_t)^{-1} d^{\pi^*_t}_t.$$
The first inequality follows from the fact that $\pi^*$ is optimal with respect to $V^{\pi^*}$. The second inequality follows from the Taylor expansion of the inverse term. By closely following the same steps, we also get
$$V^{\pi^*_t}_t - V^{\pi^*} = \mathcal{T}^{\pi^*_t}_t V^{\pi^*_t}_t - \mathcal{T}^{\pi^*} V^{\pi^*} \ge \mathcal{T}^{\pi^*}_t V^{\pi^*_t}_t - \mathcal{T}^{\pi^*} V^{\pi^*} = \mathcal{T}^{\pi^*}_t V^{\pi^*} - \mathcal{T}^{\pi^*} V^{\pi^*} + \gamma P^{\pi^*}_t(V^{\pi^*_t}_t - V^{\pi^*}) \ge (I - \gamma P^{\pi^*}_t)^{-1} d^{\pi^*}_t.$$
By splitting into positive and negative components and applying the above bounds, we get
$$|V^{\pi^*_t}_t - V^{\pi^*}| = |(V^{\pi^*_t}_t - V^{\pi^*})^+ - (V^{\pi^*_t}_t - V^{\pi^*})^-| \le |(V^{\pi^*_t}_t - V^{\pi^*})^+| + |(V^{\pi^*_t}_t - V^{\pi^*})^-| \le (I - \gamma P^{\pi^*_t}_t)^{-1}|d^{\pi^*_t}_t| + (I - \gamma P^{\pi^*}_t)^{-1}|d^{\pi^*}_t|.$$

(b) Observe that
$$V^{\pi_K} - V^{\pi_K}_t \le \mathcal{T}^{\pi_K}V^{\pi_K} + \mathcal{T}^{\pi_K}\tilde{V}_{K-1} - \mathcal{T}^{\pi_K}_t\tilde{V}_{K-1} - \mathcal{T}^{\pi_K}_t V^{\pi_K}_t = \mathcal{T}^{\pi_K}V^{\pi_K} + \mathcal{T}^{\pi_K}\tilde{V}_{K-1} - \mathcal{T}^{\pi_K}_t\tilde{V}_{K-1} - \mathcal{T}^{\pi_K}_t V^{\pi_K} + \gamma P^{\pi_K}_t(V^{\pi_K} - V^{\pi_K}_t) \le (I - \gamma P^{\pi_K}_t)^{-1}\big({-d^{\pi_K}_t} - \tilde{d}_{t,K}\big).$$
The first inequality follows from the fact that $\pi_K$ is greedy with respect to $\tilde{V}_{K-1}$. The second inequality follows from the Taylor expansion of the inverse term. By closely following the same steps, we also get
$$V^{\pi_K} - V^{\pi_K}_t \ge \mathcal{T}^{\pi_K}V^{\pi_K} - \mathcal{T}^{\pi_K}\tilde{V}_{K-1} + \mathcal{T}^{\pi_K}_t\tilde{V}_{K-1} - \mathcal{T}^{\pi_K}_t V^{\pi_K}_t = \mathcal{T}^{\pi_K}V^{\pi_K} - \mathcal{T}^{\pi_K}\tilde{V}_{K-1} + \mathcal{T}^{\pi_K}_t\tilde{V}_{K-1} - \mathcal{T}^{\pi_K}_t V^{\pi_K} + \gamma P^{\pi_K}_t(V^{\pi_K} - V^{\pi_K}_t) \ge (I - \gamma P^{\pi_K}_t)^{-1}\big({-d^{\pi_K}_t} + \tilde{d}_{t,K}\big).$$
By splitting into positive and negative components and applying the above bounds, we get
$$|V^{\pi_K} - V^{\pi_K}_t| \le |(I - \gamma P^{\pi_K}_t)^{-1}({-d^{\pi_K}_t} - \tilde{d}_{t,K})| + |(I - \gamma P^{\pi_K}_t)^{-1}({-d^{\pi_K}_t} + \tilde{d}_{t,K})| \le 2(I - \gamma P^{\pi_K}_t)^{-1}\big(|d^{\pi_K}_t| + |\tilde{d}_{t,K}|\big).$$

By applying the upper bounds from (a) and (b), we get
$$|V^{\pi^*_t}_t - V^{\pi_K}_t| \le \frac{2(1-\gamma^{K+2})}{(1-\gamma)^2}\Bigg[ \sum_{k=0}^{K-1}\alpha_k A_k|e_k| + \alpha\,(R_{\max}/\gamma) + \frac{\beta}{6}B^{\pi^*_t}\cdot 6|d^{\pi^*_t}_t| + \frac{\beta}{6}B^{\pi^*}\cdot 6|d^{\pi^*}_t| + \frac{\beta}{3}B^{\pi_K}\cdot 6|d^{\pi_K}_t| + \frac{\beta}{3}B^{\pi_K}\cdot 6|\tilde{d}_{t,K}| \Bigg],$$
where we introduced the positive coefficients
$$\alpha_k = \frac{(1-\gamma)\gamma^{K-k}}{1-\gamma^{K+2}} \ \text{for } 0 \le k < K, \qquad \alpha = \frac{(1-\gamma)\gamma^{K+1}}{1-\gamma^{K+2}}, \qquad \beta = \frac{1-\gamma}{2(1-\gamma^{K+2})},$$
and the operators
$$A_k = \frac{1-\gamma}{2}(P^{\pi^*})^{K-k-1}F_k \ \text{for } 0 \le k < K, \qquad B^\pi = (1-\gamma)(I - \gamma P^\pi_t)^{-1}.$$
Let $\lambda_K = \left[\frac{2(1-\gamma^{K+2})}{(1-\gamma)^2}\right]^p$. Note that the coefficients $\alpha_k$, $\alpha$, and $\beta$ sum to one, and the operators are positive linear operators that satisfy $A_k\mathbf{1} = \mathbf{1}$ and $B^\pi\mathbf{1} = \mathbf{1}$.
Therefore, by taking the $p$-th power on both sides, applying Jensen's inequality twice, and then integrating both sides with respect to $\sigma(x)$, we get
$$\|V^{\pi^*_t}_t - V^{\pi_K}_t\|^p_{p,\sigma} = \int \sigma(dx)\,|V^{\pi^*_t}_t - V^{\pi_K}_t|^p \le \lambda_K \sigma\Bigg[ \sum_{k=0}^{K-1}\alpha_k A_k|e_k|^p + \alpha(R_{\max}/\gamma)^p + \frac{\beta}{6}B^{\pi^*_t}(6|d^{\pi^*_t}_t|)^p + \frac{\beta}{6}B^{\pi^*}(6|d^{\pi^*}_t|)^p + \frac{\beta}{3}B^{\pi_K}(6|d^{\pi_K}_t|)^p + \frac{\beta}{3}B^{\pi_K}(6|\tilde{d}_{t,K}|)^p \Bigg].$$
From the definition of the coefficients $c_{\sigma,\mu}(m)$, we get
$$\sigma A_k \le (1-\gamma)\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m+K-k)\,\mu, \qquad \sigma B^\pi \le (1-\gamma)\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m)\,\mu.$$
Therefore, it follows that
$$\sigma\Bigg[\sum_{k=0}^{K-1}\alpha_k A_k|e_k|^p\Bigg] \le (1-\gamma)\sum_{k=0}^{K-1}\alpha_k\sum_{m\ge0}\gamma^m c_{\sigma,\mu}(m+K-k)\,\mu|e_k|^p = \frac{\gamma(1-\gamma)^2}{1-\gamma^{K+2}}\sum_{k=0}^{K-1}\sum_{m\ge0}\gamma^{m+K-k-1}c_{\sigma,\mu}(m+K-k)\,\|e_k\|^p_{p,\mu} \le \frac{\gamma}{1-\gamma^{K+2}}\,C''_{\sigma,\mu}\,e^p,$$
where $e = \max_{0\le k<K}\|e_k\|_{p,\mu}$. The terms involving $B^\pi$ satisfy
$$\sigma\big[B^\pi(6|d^\pi_t|)^p\big] \le 6^p(1-\gamma)\sum_{m\ge0}\gamma^m c_{\sigma,\mu}(m)\,\mu|d^\pi_t|^p \le 6^p\,C'_{\sigma,\mu}\,\|d^\pi_t\|^p_{p,\mu}.$$
Putting all these together, and choosing $p = 2$, we get
$$\|V^{\pi^*_t}_t - V^{\pi_K}_t\|_\sigma \le \lambda_K^{1/2}\left[ \frac{\gamma}{1-\gamma^{K+2}}C''_{\sigma,\mu}e^2 + \frac{(1-\gamma)\gamma^{K+1}}{1-\gamma^{K+2}}(R_{\max}/\gamma)^2 + \frac{36(1-\gamma)}{2(1-\gamma^{K+2})}C'_{\sigma,\mu}\epsilon^2 \right]^{1/2}$$
$$\le \frac{2}{(1-\gamma)^2}\left[ \gamma C''_{\sigma,\mu}e^2 + (1-\gamma)\gamma^{K+1}(R_{\max}/\gamma)^2 + \frac{36(1-\gamma)}{2}C'_{\sigma,\mu}\epsilon^2 \right]^{1/2}$$
$$\le \frac{2}{(1-\gamma)^2}\left[ C''_{\sigma,\mu}e^2 + \gamma^{K+1}(R_{\max}/\gamma)^2 + 18\,C'_{\sigma,\mu}\epsilon^2 \right]^{1/2}$$
$$\le \frac{2}{(1-\gamma)^2}\left[ \sqrt{C''_{\sigma,\mu}}\,e + \gamma^{\frac{K-1}{2}}R_{\max} + 3\epsilon\sqrt{2\,C'_{\sigma,\mu}} \right].$$
The desired result can then be obtained by applying the same steps as in the proof of Theorem 8 in Lazaric et al. (2012).

A.1 FINANCIAL PORTFOLIO OPTIMIZATION: ADDITIONAL DETAILS

The dataset consists of daily prices for 68 instruments in the technology and communication sectors from 2009 to 2019. We use 2009–2018 for training and 2019 for testing. To validate that our approach learns common features across instruments, and thus can transfer, we reserve 18 instruments not seen during training for further testing. The global asset universe $\mathcal{U}$ used for training contains 50 instruments. We construct tasks by randomly choosing a portfolio of $|\mathcal{U}_t| = 10$ instruments for each task. We create a permutation invariant policy network by applying the same sequence of operations to every instrument state: for each instrument, the flattened input prices are passed through a common RNN with 25 hidden units and tanh activation; this output is concatenated with the latest allocation fraction of the instrument and passed through a common dense layer to produce a score. The instrument scores are passed through a softmax function to produce allocations that sum to one. The smoothing parameter for the scores is $\gamma = 0.2$, the task prioritisation parameter is $\alpha = 0.5$, and $\beta = 1.0$ to fully compensate for the prioritized-sampling bias.

A.2 META FEDERATED LEARNING

Suppose we have a universe of federated learning clients $\mathcal{U}$. The goal of task $t$ is to aggregate models in a federated learning experiment over a subset of clients $\mathcal{U}_t \subseteq \mathcal{U}$. At each step $n$, the action $a_{i,n}$ represents the weight assigned to the supervised learning model of client $i$ in the averaging procedure. Let $v_{i,n}$ denote the model of client $i$ (i.e., the tensor of model parameters). We model the state of the client as some function of its $H$ most recent models: $x_{i,n} = f(v_{i,n-H+1}, \dots, v_{i,n})$. Assume that the aggregator has access to a small evaluation dataset that it can use to approximately assess the quality of models. We define the reward at each step to be the accuracy of the aggregate model,
$$R_t(x_n, a_n) = L\left(\sum_{i \in \mathcal{U}_t} a_{i,n}\, v_{i,n}\right),$$
where $L(v)$ is a function that returns the accuracy of a model $v$ on the evaluation dataset. Therefore, by maximizing the total return over all time periods, we seek to maximize both the accuracy at the final time step and the speed of convergence.
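To make the aggregation reward concrete, here is a minimal PyTorch sketch of the weighted parameter average $\sum_i a_{i,n} v_{i,n}$ and its accuracy reward $L(\cdot)$. This is an illustration under stated assumptions, not the paper's code; `eval_loader` and the client state dicts are placeholders.

```python
import torch

def aggregate(client_state_dicts, weights):
    """Weighted average of client model parameters: v = sum_i a_i * v_i.
    Assumes all parameters are floating-point tensors of matching shapes."""
    return {name: sum(w * sd[name] for w, sd in zip(weights, client_state_dicts))
            for name in client_state_dicts[0]}

def aggregation_reward(model, client_state_dicts, weights, eval_loader):
    """Reward R_t = accuracy L(v) of the aggregate model on the evaluation set."""
    model.load_state_dict(aggregate(client_state_dicts, weights))
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for xb, yb in eval_loader:
            pred = model(xb).argmax(dim=1)
            correct += (pred == yb).sum().item()
            total += yb.numel()
    return correct / total
```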
We optimize the policy using Proximal Policy Optimization (PPO). We use the MNIST digit recognition problem. Each client observes 600 samples from the training dataset and trains a classifier composed of one 5x5 convolutional layer (with 32 channels and ReLU activation) and a softmax output layer. We use the same permutation invariant policy network architecture as before, with 10 hidden units in the RNN. We randomly select $|\mathcal{U}_t| = 10$ clients for each task. We learn using an evaluation dataset comprising 1000 random samples from the test dataset, and we test using all 10000 samples in the test dataset. We fix the number of federated learning iterations to 50.

We explore the benefit of MTL in identifying useful clients in scenarios with a skewed data distribution. We partition the dataset such that 8 of the clients in each task observe random digits between 0 and 5, and the remaining 2 clients observe random digits between 6 and 9. Therefore, for each task, 20% of the clients possess 40% of the unique labels. The state of each client is the vector of accuracies of its $H$ most recent models on the evaluation dataset.

Figure 6 shows the potential benefits of multi-task learning when simulators are inaccurate. In particular, we obtain two aggregation policies, one trained using single-task learning (STL) and another trained using multi-task learning (MTL), both trained using the same number of steps, and we observe their behavior during testing. The plots show that multi-task learning is able to learn non-uniform averaging policies that improve the convergence and performance of federated learning runs. More importantly, it can perform better than single-task learning even with the same number of samples. This may be attributed to the wider variety of client configurations (and consequently experiences) in the multi-task approach.
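Both applications use the shared-parameter, permutation-invariant policy architecture described in A.1: a common per-entity RNN and dense scorer, followed by a softmax over entities. Below is a minimal PyTorch sketch under that description; the tensor layout and sizes are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PermutationInvariantPolicy(nn.Module):
    """Applies the same RNN and scorer to every entity, then softmaxes the
    scores into an allocation. Because all parameters are shared across
    entities, permuting the entities permutes the output allocation:
    pi(sigma(x)) = sigma(pi(x))."""

    def __init__(self, feature_dim, hidden=25):
        super().__init__()
        self.rnn = nn.RNN(feature_dim, hidden, batch_first=True, nonlinearity="tanh")
        self.score = nn.Linear(hidden + 1, 1)   # +1 for the latest allocation w_i

    def forward(self, prices, alloc):
        # prices: (batch, entities, window, feature_dim); alloc: (batch, entities)
        b, m, H, f = prices.shape
        _, h = self.rnn(prices.reshape(b * m, H, f))      # shared RNN per entity
        h = h.squeeze(0).reshape(b, m, -1)
        s = self.score(torch.cat([h, alloc.unsqueeze(-1)], dim=-1)).squeeze(-1)
        return torch.softmax(s, dim=-1)                   # allocations sum to 1
```

As a quick sanity check of Definition 1: for `perm = torch.randperm(m)`, the output `net(prices[:, perm], alloc[:, perm])` equals `net(prices, alloc)[:, perm]` up to numerical precision, since every per-entity operation is shared and the softmax is taken over the entity dimension.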
1. What is the main contribution of the paper in reinforcement learning?
2. What are the strengths of the proposed algorithm, particularly in exploiting permutation invariance?
3. What are the weaknesses of the paper regarding its presentation and clarity?
4. How does the reviewer assess the novelty and applicability of the proposed approach?
5. Are there any concerns or questions regarding the algorithm's ability to ensure permutation invariance?
Review
This paper addresses the problem of reinforcement learning with limited training samples. The authors propose a solution that exploits an invariance property of the tasks. In particular, they present an algorithm that exploits permutation invariance, study its theoretical properties, and propose examples where this property holds and their algorithm can be leveraged. I feel the paper could have been better presented by starting with a motivating example where the permutation invariance property holds - for example, the portfolio optimization example studied in the experiments. This would make it easier to follow the multiple terms (tasks, entities, resources) introduced in Sec. 3. The setting considered in the paper is one where the state is a concatenation of various entities, while the actions are the fractions of resources allocated to each entity. The permutation invariance property is defined in Def. 1. I did not understand how a network trained using gradient descent alone would satisfy permutation invariance. There is no part of the pseudocode in Alg. 1 that explicitly ensures the algorithm is permutation invariant.
Recently, permutation invariance has been exploited in deep multi-agent reinforcement learning (Liu et al., 2019) where the invariance properties arise naturally in a homogeneous multi-agent setting. Their work employs permutation invariance in learning the critic whereas in our case the entire learned policy employs permutation invariance. 3 PRELIMINARIES We begin by defining notation. For a measurable space with domain X , let S(X ) denote the set of probability measures over X , and B(X ;L) the space of bounded measurable functions with domain X and bound 0 < L < ∞. For a measure ρ ∈ S(X ) and a measurable function f : X → R, the l2(ρ)-norm of f is ‖f‖ρ, and for a set of n points X1, · · · , Xn ∈ X , the empirical norm, ‖f‖n is ‖f‖2ρ = ∫ f(x)2ρ(dx) and ‖f‖2n = 1 n n∑ t=1 f(Xt) 2. Let ‖f‖∞ = supx∈X |f(x)| be the supremum norm of f . Consider a set of MDPs indexed by t. Each MDP is denoted by a tupleMt = 〈X ,A, Rt, Pt, γ〉, where X , a bounded closed subset of the s-dimensional Euclidean space, is a common state space; A is a common action space, Rt : X ×A → R is a task specific reward function uniformly bounded by Rmax, Pt is a task specific transition kernel such that Pt(·|x, a) is a distribution over X for all x ∈ X and a ∈ A, and γ ∈ (0, 1) is a common discount factor. Deterministic policies are denoted by π : X → A. For a given policy π, the MDPMt is reduced to a Markov chainMπt = 〈X , Rπt , Pπt , γ〉 with reward function Rπt (x) = Rt(x, π(x)), transition kernel P π t (·|x) = Pt(·|x, π(x)), and stationary distribution ρπt . The value function V πt for MDP t is defined as the unique fixed-point of the Bellman operator T πt : B(X ;Vmax = Rmax/(1− γ))→ B(X ;Vmax), given by (T πt V )(x) = Rπt (x) + γ ∫ X Pπt (dy|x)V (y). Let π∗t denote the optimal policy forMt. The optimal value function V π∗t t forMt is defined as the unique fixed-point of its optimal Bellman operator T π ∗ t t which is defined by (T π ∗ t t V )(x) = max a∈A [ Rt(x, a) + γ ∫ X Pt(dy|x, a)V (y) ] . To approximate the value function V , we use a linear approximation architecture with parameters α ∈ Rd and basis functions ϕi ∈ B(X ;L) for i = 1, · · · , d. Let ϕ(·) = (ϕ1(·), · · · , ϕd(·))T ∈ Rd be the feature vector and F the linear function space spanned by basis functions ϕi. Thus, F = {fα | α ∈ Rd and fα(·) = ϕ(·)Tα}. Consider a learning task to dynamically allocate a common resource across entities Ut ⊆ U . Each t corresponds to a task, but for now take t to be an arbitrary fixed index. At each time step n, the decision maker observes states xn = (xi,n)i∈Ut of the entities, where xi,n is the state of entity i, and takes action an = (ai,n)i∈Ut , where ai,n is the share of the resource allocated to entity i. The total resource capacity is normalized to 1 for convenience. Therefore, allocations satisfy 0 ≤ ai,n ≤ 1 and∑ i∈Ut ai,n = 1. We consider policy πθ(xn) parameterized by θ. Assume that we have access to the reward function Rt as well as a simulator that generates a trajectory of length N given any arbitrary policy πθ. The objective of the learning task is to maximize Jt(θ) = E [ N∑ n=1 γn−1Rt(xn, an) ∣∣∣∣∣ an+1 = πθ(xn), xn+1 ∼ Pt(·|xn, an), x1 ∼ Pt(·) ] . In many settings, N is small and simulators are inaccurate; therefore, trajectories generated by the simulator are poor representations of the actual transition dynamics. This occurs in batch RL where trajectories are rollouts from a dataset. In these cases, policies overfit and generalize poorly. 
4 THEORETICAL RESULTS We introduce first a property that we term permutation-invariance for the policy network that can be shown to help significantly reduce overfitting. Definition 1 (Permutation Invariant Policy Network) A policy network πθ is permutation invariant if it satisfies πθ(σ(x)) = σ(πθ(x)) for any permutation σ. Permutation invariant policy networks have significant advantages over completely integrated policy networks. While the latter are likely to fit correlations between different entities, this is not possible with permutation invariant policy networks as they are agnostic to identities of entities. Therefore, permutation invariant policy networks are better able to leverage experience across time and entities, leading to greater efficiency in data usage. Moreover, observe that if the transition kernels can be factored into independent and identical transition kernels across entities, then the optimal policy is indeed permutation invariant. Our main theoretical contributions start with an extension of results from Lazaric et al. (2012), where a finite-sample error bound was derived for the least squares policy iteration (LSPI) algorithm on a single task. Lazaric et al. (2012) provided a high-probability bound on the performance difference between the final learned policy and the optimal policy, of the form c1 + c2/ √ N , where c1 and c2 are constants that depend on the task and the chosen feature space, and N is the number of training examples. We extend their result by showing that, as long as tasks are -close to each other (with respect to a similarity measure we define later), the error bound of solving each task using our multi-task approach has the form c1 + c2/ √ NT + c3 , where T is the number of tasks and c3 is a task-dependent constant. Specifically, our theorem provides a general result and performance guarantee with respect to using data from a different but similar MDP. Definition 1 provides a basis for generating many such MDPs. Finally, the benefit of doing so shall be provided by Corollary 2. Thus, provided is small, a given task can benefit from a much larger set of NT training examples. In addition to the assumptions of Lazaric et al. (2012), we extend the definition of second-order discounted-average concentrability, proposed in Antos et al. (2008), and define the notion of first-order discounted-average concentrability. The latter will be used in our main result, Theorem 1. Assumption 1 There exists a distribution µ ∈ S(X ) such that for any policy π that is greedy with respect to a function in the truncated space F̃ , µ ≤ Cρπt for all t, where C <∞ is a constant. Given the target distribution σ ∈ S(X ) and an arbitrary sequence of policies {πm}m≥1, let cσ,µ = sup π1,...,πm ∥∥∥∥d(µPπ1 . . . Pπm)dσ ∥∥∥∥ . We assume that C ′σ,µ, C ′′ σ,µ <∞, and define first and second order discounted-average concentrability of future-state distributions as follows: C ′σ,µ = (1− γ) ∑ m≥0 γmcσ,µ(m), C ′′σ,µ = (1− γ)2 ∑ m≥1 mγm−1cσ,µ(m). Theorem 1 (Multi-Task Finite-Sample Error Bound) LetM = 〈X ,A, R, P, γ〉 be an MDP with reward function R and transition kernel P . Assume A finite. Denote its Bellman operator by (T πV )(x) = Rπ(x) + γ ∫ X Pπ(dy|x)V (y). Given a policy π, define the Bellman difference operator between Mt and M to be Dπt V = T πt V − T πV . Apply the LSPI algorithm toM, by generating, at each iteration k, a path fromM of size N , where N satisfies Lemma 4 in Lazaric et al. (2012). 
Let V−1 ∈ F̃ be an arbitrary initial value function, V0, · · · , VK−1 (Ṽ0, · · · , ṼK−1) be the sequence of value functions (truncated value functions) generated by the LSPI after K iterations, and πk be the greedy policy w.r.t. the truncated value function Ṽk−1. Suppose also that ‖Dπt V π‖µ ≤ ∀ π, and ‖D πk t Ṽk−1‖µ ≤ ∀ k. Then, for constants c1, c2, c3, c4 that are dependent onM, with probability 1− δ (with respect to the random samples): ‖V π ∗ t t − V πK t ‖σ ≤ c1 1√ N + c2 √ C ′σ,µ + c3 √ C ′′σ,µ + c4. The proof is deferred to the Appendix. Theorem 1 formalizes the trade off between drawing fewer samples from the exact MDPMt, versus drawing more samples from a different MDPM. Importantly, it shows how to benefit from solving a different MDP,M, when: (a) additional samples can be obtained fromM, and (b)M is not too different fromMt. In particular, the distance measure is simply the distance between the Bellman operators of the MDPs, which can be bounded if the difference in both the transition and reward functions are bounded. In recent work, a performance bound for multi-task learning was given in Theorem 2 and 3 of D’Eramo et al. (2020). However, the authors used a different setup containing both shared and task-specific representations, and their focus was on showing that the cost of learning the shared representation decreases with more tasks. They did not show how the similarity or difference across tasks affects performance. In contrast, our setup does not contain task-specific representations, and our focus is on how differences across MDPs impact the benefit of having more tasks (and consequently more samples). We show this in Corollary 1 and Corollary 2. Remark 1 While our theoretical results are based on LSTD and LSPI and assume finite action space, our approach is applicable to a wide range of reinforcement learning algorithms, including policy gradient methods and to MDPs with continuous action spaces. Deriving similar results for a larger family of models and algorithms remains an interesting, albeit challenging, future work. Permutation invariant policy networks allow using data from the global set of entities U . Since the policy network is agnostic to the identities of the entities, one can learn a single policy for all tasks, where each task t ∈ [T ] is a resource allocation problem over a subset of entities Ut. For notational simplicity, assume that all tasks have the same number of entities, and all trajectories are of equal length N . Our approach can, however, be readily extended to tasks with different numbers of entities and different trajectory lengths. Permutation invariance allows a large set of MDPs to leverage the result of Theorem 1. In the next section we shall provide an algorithm, motivated by the following corollaries, and a prioritized sampling strategy for this setting that drives significantly greater sample efficiency for the original task. The sampling strategy also helps to stabilize the learning process, reducing the risk of deleterious effects of the multi-task setting, as discussed by Teh et al. (2017) and addressed in works such as Hessel et al. (2018); Bram et al. (2019). Corollary 1 Let [T ] be a set of similar tasks such that their distance from the average MDP, given by (T πV )(x) = 1 T T∑ t=1 Rπt (x) + γ ∫ X 1 T T∑ t=1 Pπt (dy|x)V (y), is bounded by as defined in Theorem 1. Let N be the number of samples available in each task. Let πK be the policy obtained at the K th iteration when applying LSPI to the average MDP. 
Then, the suboptimality of the policy on each task is O(1/ √ NT ) +O( ) + c for some constant c (where suboptimality is defined according to Theorem 1). Recall that each task is formed by selecting a subset Ut of entities from the global set U . We thus have the following sample gain that can be attributed to the permutation invariance of the policy network. Corollary 2 (Gain in Sample Efficiency from Permutation Invariance) Let M = |U| and m = |Ut|. Given fixed M and m, there are T = ( M m ) ≥ ( M m )m different tasks. Then, by Cor. 1, assuming all pairs of tasks are weakly correlated, the potential gain in sample efficiency is exponential in m. Disregarding correlation between samples from tasks with overlapping entities Corollary 1 and Corollary 2 together suggest that the (up to) exponential increase in the number of available tasks can significantly improve sample efficiency as compared to learning each task separately. 5 EXPLOITING PERMUTATION INVARIANCE THROUGH MULTI-TASK REINFORCEMENT LEARNING Our approach to exploiting permutation invariance is via multi-task reinforcement learning, where each “task” corresponds to a particular choice of subset Ut ⊂ U . Furthermore, for each task, we enforce permutation invariance among the entities i by forcing the neural network to apply the same sequence of operations to the state input xi of each instrument through parameter sharing. The proposed method, shown in Algorithm 1, learns a single policy by sampling subsequences of trajectories from the different MDPs. At each step, we sample a task t according to a distribution defined by task selection policy p. Then, a minibatch sample Bt is drawn from the replay buffer for task t, and gradient descent is performed using the sampled transitions Bt (alternatively, samples can be generated using policy rollouts for the specific task). Separate replay buffers maintained for each task are updated only when the corresponding task is being used. In contrast with other active sampling approaches in multi-task learning, our approach maintains an estimate of the difficulty of each task t as a score, st. After each training step, we update the score for only the sampled task based on minibatch Bt, avoiding evaluation over all the tasks. The scoring functions depend on the sampled minibatch; to reduce fluctuations in scores for each task, exponential smoothing is applied st ← γst + (1− γ) · scorer(Bt). We propose a stochastic prioritization method that interpolates between pure greedy prioritization and uniform random sampling. Our approach is similar to prioritized experience replay (PER) by Schaul et al. (2016), but while classical PER prioritizes samples, we prioritize tasks. The probability of sampling task t is pt = sαt / ∑ t′ s α t′ , where the exponent α determines the degree of prioritization, with α = 0 corresponding to the uniform case. We correct for bias with importance-sampling (IS) weights wt = 1/(Tpt)β , that compensate for non-uniform probabilities if β = 1. We normalize weights by 1/maxt wt. Tasks on which the reward variance is high can be interpreted as having more challenging samples, hence reward variance can be used as a scoring function. Algorithm 1 Prioritized Multi-Task Reinforcement Learning for Increasing Sample Efficiency Initialize policy network πθ Initialize replay buffers R1, . . . , RT Initialize time steps n1 ← 1, . . . 
, nT ← 1 loop Select a task t ∼ p to train on Sample a random minibatch Bt of transitions (xn, an, rn, xn+1) from Rt Update policy θ using Bt and chosen RL approach (correcting for bias using IS weights w) Update score st ← γst + (1− γ) · scorer(Bt) Update ALL selection probabilities p and IS weights w for n = nt, . . . ,min{nt + ne, N} do For task t, select action an according to current policy and exploration noise Execute action an, and observe reward rn and new state xn+1 Store transition (xn, an, rn, xn+1) in Rt end for If n < N , update nt ← n+ 1, otherwise, update nt ← 1 end loop 6 EXPERIMENTS 6.1 SYNTHETIC DATA With the aim of validating the theory presented in Section 4, we define a synthetic example to explore the efficiency gain afforded by permutation invariance. To do so, we control of the deviation between any two tasks, thereby empirically validating the main theoretical results. Consider a resource allocation problem where the observed state xi for each entity i ∈ {1 . . .m} is a single scalar xi ∈ [0, 1]. The action space is the probability simplex, where each action a = (a1 . . . am) indicates the fraction of resource allocated to each entity. The reward function is R(x, a) := ∑ i xiai − βiai log ai where βi is a weight parameter for each entity. Note that when βi = β for all i, the reward function becomes R(x, a) = ( ∑ i xiai) + βH(a) where H is the Shannon entropy. This implies that maximizing the reward involves a tradeoff between focusing resources on high xi or distributing them uniformly across all i. Note that the reward function is permutation invariant, but that when we allow a varying βi over the entities, the function deviates from being perfectly permutation invariant. We use the range maxi βi − mini βi as a stand-in for . Let m = 10. For each , we run two experiments. The first examines the performance of policies trained by LSPI using N real examples drawn i.i.d from the state-action space, for N = 20 . . . 2000. A small Gaussian noise is added to each reward to make learning harder. The second experiment uses only 20 real examples, but augments the training set (up to N ) through random permutation of the real examples. The first two figures in Fig. 1 show the results for = 0.8 and = 0, respectively. Performance improves with N , as predicted by the 1/ √ N term in our error bound. Note that in the experiment using only 20 real examples, a performance gain is achieved by using permuted examples; this corresponds precisely to the multi-task gain predicted by the 1/ √ NT term. When is large, there is a significant gap between the results of the two experiments, as predicted by the -term in the error bound. The last plot in Fig. 1 shows this gap at N = 2000 when varies from 0 to 0.8. 6.2 REAL-WORD DATA We consider two real-world resource allocation settings: financial portfolio optimization and meta federated learning. Financial portfolio optimization is discussed below while meta federated learning is in the Appendix. Given historical prices for a universe of financial assets, U , the goal of task t is to allocate investments across a subset of assets Ut ⊆ U . The multiple tasks t thus correspond to multiple portfolios of instruments. Permutation invariance will be of use in this setting since, from a given universe of instruments (e.g. the 500 instruments in the S&P 500), an exponential number of tasks can be generated, each with its own portfolio. Consider now one such task. 
At the beginning of time period n, the action ai,n represents the fraction of wealth the decision maker allocates to asset i. The allocations evolve over the time period due to changes in asset prices. Let wi,n denote the allocation of asset i at the end of time period n. We model the state of an asset using its current allocation and a window of its H most recent prices. In particular, let vi,n denote the close price of asset i over time period n, and let yi,n = vi,n/vi,n−1 denote the ratio of close prices between adjacent time periods 1. Then, the allocation in asset i at the end of time period n is given by wi,n = ai,nyi,n∑ i∈Ut ai,nyi,n , and the state of asset i at the beginning of time period n is given by xi,n = (wi,n−1, vi,n−H/vi,n−1, . . . , vi,n−2/vi,n−1). 1Daily high and low prices are also used in the state but omitted here for brevity. The change in portfolio value over period n depends on the asset prices and transaction costs incurred in rebalancing the portfolio from (wi,n−1)i∈Ut to (ai,n)i∈Ut . The reward over period n is defined as the log rate of return: Rt(xn, an) = ln [ β ((wi,n−1)i∈Ut , (ai,n)i∈Ut) ∑ i∈Ut ai,nyi,n ] , where β can be evaluated using an iterative procedure (see Jiang et al. (2017)). Defining the reward this way is appealing because maximizing average total reward over consecutive periods is equivalent to maximizing the total rate of return over the periods. To leverage this, we approximate β((wi,n−1)i∈Ut , (ai,n)i∈Ut) ≈ c ∑ i∈Ut |wi,n−1 − ai,n|, where c is a commission rate to obtain a closed-form expression for Rt(xn, an) (see Jiang et al. (2017)). We optimize using direct policy gradient on minibatches of consecutive samples θ ← θ + η∇θ [ 1 B nb+B−1∑ n=nb wtRt(xn, πθ(xn)) ] , where nb is the first time index in the minibatch, B the size of a minibatch, and wt the IS weight for task t. As in Jiang et al. (2017), we sample nb from a geometric distribution that prioritises recent samples and implement replay buffers for each task. A benchmark trading strategy is equal constantly-rebalanced portfolio (CRP) that rebalances to maintain equal weights. As we noted earlier, ideally one would prefer for the scoring function to depend only on the minibatch Bt. A deviation from Equal CRP can be viewed as learning to exploit price movements, and is thus here we use this as the goal of the policy. Prioritised MTL thus prioritises tasks which deviate from Equal CRP. Note that the policy deviates from CRP only when profitable. Let scorer(Bt) = max n∈{nb,...,nb+B−1} ∥∥∥∥πθ(xn)− 1|Ut| ∥∥∥∥ ∞ , be the scoring of tasks in Prioritised MTL using mean absolute deviation of the minibatch allocation from Equal CRP. Figure 2 (left) shows a scatter plot of the maximum score seen every 50 steps and the change in episode rewards in a single-task learning experiment, and (right) of the minibatch score and the maximum gradient norm for the minibatch. Higher scores imply higher variance in the episode rewards and hence more challenging and useful samples. The correlation between scores and gradient norms shows that our approach is performing gradient-based prioritisation, (see Katharopoulos & Fleuret (2018); Loshchilov & Hutter (2015); Alain et al. (2015)) but in a computationally efficient manner. The details of the dataset and parameter settings can be found in the Appendix. Figure 3 shows the performance of the learned policies tested on 10 tasks drawn from out-of-sample instruments. The policy network with weights initialized close to zero behaves like an Equal CRP policy. 
As noted, any profitable deviation from Equal CRP implies learning useful trading strategies. The plots show that the MTL policies perform well on instruments never seen during training, a notable benefit of using RL in the design of trading policies. Fig. 4 shows the performance of prioritised multi-task learning (MTL) versus single-task learning (STL), i.e. learning a policy for each task independently on the instruments in the task. We also show results for MTL without prioritised sampling, i.e., with α = 0. We consider 5 tasks and 30 tasks. The plots show that prioritised MTL performs significantly better than STL in both convergence time and final achieved performance. The performance with 30 tasks is significantly better than the performance with 5 tasks, showing that our approach leverages the samples of the additional tasks. Fig. 5 illustrates the typical behavior of a multi-task learning (MTL) and a single-task learning (STL) policy on the test period, for tasks where the multi-task policy performed significantly better. The single-task policy kept constant equal allocations, while the multi-task policy was able to learn more complex allocations. In financial data, strongly trending prices do not occur often and are inherently noisy. Multi-task learning with permutation invariance helps with both challenges, allowing the algorithm to learn more complex patterns in a given training period.

7 CONCLUSIONS

We introduce an approach for increasing the sample efficiency of reinforcement learning in a setting with widespread applicability within the class of sequential resource allocation problems. The key property is permutation invariance: resources are allocated to entities according to a score, and the order of the entities can change without modifying the optimal allocation. Under this property, we show that a bound exists on the policy performance. This bound motivates a highly effective algorithm for improving the policy through a multi-task approach. Using prioritized task-sampling, the method not only improves the reward of the final policy but also renders it more robust. We illustrate the property and the method on two important problems: sequential financial portfolio optimization and meta federated learning, where the latter is provided in the Appendix.

A APPENDIX

Theorem 1 Let M = 〈X, A, R, P, γ〉 be an MDP with reward function R and transition kernel P. Denote its Bellman operator by
\[ (\mathcal{T}^\pi V)(x) = R^\pi(x) + \gamma \int_{\mathcal{X}} P^\pi(dy|x)\, V(y). \]
Given a policy π, define the Bellman difference operator between Mt and M to be Dπt V = T πt V − T πV. Apply the LSPI algorithm to M by generating, at each iteration k, a path from M of size N, where N satisfies Lemma 4 in Antos et al. (2008). Let V−1 ∈ F̃ be an arbitrary initial value function, V0, · · · , VK−1 (Ṽ0, · · · , ṼK−1) be the sequence of value functions (truncated value functions) generated by LSPI after K iterations, and πk be the greedy policy w.r.t. the truncated value function Ṽk−1. Suppose also that ‖Dπt V π‖µ ≤ ε for all π, and ‖Dπkt Ṽk−1‖µ ≤ ε for all k. Then, with probability 1 − δ (with respect to the random samples), we have
\[ \|V_t^{\pi_t^*} - V_t^{\pi_K}\|_\sigma \le \frac{2}{(1-\gamma)^2}\bigg\{ (1+\gamma)\sqrt{C\,C''_{\sigma,\mu}}\bigg[ \frac{2}{\sqrt{1-\gamma^2}}\Big(2\sqrt{2}\,E_0(\mathcal{F}) + E_2\Big) + \frac{2}{1-\gamma}\,\gamma V_{\max} L \sqrt{\frac{d}{\nu_\mu}}\bigg(\sqrt{\frac{8\log(8dK/\delta)}{N}} + \frac{1}{N}\bigg) + E_1 \bigg] + \gamma^{\frac{K-1}{2}} R_{\max} + 3\sqrt{2\,C'_{\sigma,\mu}}\,\epsilon \bigg\}. \]
Proof: For convenience, we will simply remove the task subscript whenever we refer to variables associated with M.
Define
\[ d_t^{\pi} = D_t^{\pi} V^{\pi}, \qquad \tilde d_{t,k} = D_t^{\pi_k} \tilde V_{k-1}, \qquad e_k = \tilde V_k - \mathcal{T}^{\pi_k} \tilde V_k, \]
\[ E_k = P^{\pi_{k+1}}(I-\gamma P^{\pi_{k+1}})^{-1} - P^{\pi^*}(I-\gamma P^{\pi_k})^{-1}, \qquad F_k = P^{\pi_{k+1}}(I-\gamma P^{\pi_{k+1}})^{-1} + P^{\pi^*}(I-\gamma P^{\pi_k})^{-1}. \]
From the proof of Lemma 12 in Antos et al. (2008), we get
\[ V^{\pi^*} - V^{\pi_K} \le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} E_k e_k + (\gamma P^{\pi^*})^K (V^{\pi^*} - V^{\pi_0}). \]
By applying the above inequality, the triangle inequality, and taking the absolute value on both sides point-wise, we get
\[ |V_t^{\pi_t^*} - V_t^{\pi_K}| \le |V_t^{\pi_t^*} - V^{\pi^*}| + |V^{\pi^*} - V^{\pi_K}| + |V^{\pi_K} - V_t^{\pi_K}| \]
\[ \le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} F_k |e_k| + (\gamma P^{\pi^*})^K |V^{\pi^*} - V^{\pi_0}| + |V_t^{\pi_t^*} - V^{\pi^*}| + |V^{\pi_K} - V_t^{\pi_K}| \]
\[ \le \gamma \sum_{k=0}^{K-1} (\gamma P^{\pi^*})^{K-k-1} F_k |e_k| + \frac{2 R_{\max}}{1-\gamma}\,\gamma^K \mathbf{1} + |V_t^{\pi_t^*} - V^{\pi^*}| + |V^{\pi_K} - V_t^{\pi_K}|, \]
where we used the fact that |V^{π*} − V^{π0}| ≤ (2Rmax/(1 − γ)) 1. Next, we derive upper bounds for |V_t^{π*_t} − V^{π*}| and |V^{πK} − V_t^{πK}|.

(a) Observe that
\[ V_t^{\pi_t^*} - V^{\pi^*} = \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi^*} V^{\pi^*} \le \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi_t^*} V^{\pi^*} = \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi_t^*} V_t^{\pi_t^*} + \mathcal{T}^{\pi_t^*}\big(V_t^{\pi_t^*} - V^{\pi^*}\big) \le (I - \gamma P_t^{\pi_t^*})^{-1} d_t^{\pi_t^*}. \]
The first inequality follows from the fact that π* is optimal with respect to V^{π*}. The second inequality follows from the Taylor expansion of the inverse term. By closely following the same steps, we also get
\[ V_t^{\pi_t^*} - V^{\pi^*} = \mathcal{T}_t^{\pi_t^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi^*} V^{\pi^*} \ge \mathcal{T}_t^{\pi^*} V_t^{\pi_t^*} - \mathcal{T}^{\pi^*} V^{\pi^*} = \mathcal{T}_t^{\pi^*} V^{\pi^*} - \mathcal{T}^{\pi^*} V^{\pi^*} + \mathcal{T}_t^{\pi^*}\big(V_t^{\pi_t^*} - V^{\pi^*}\big) \ge (I - \gamma P_t^{\pi^*})^{-1} d_t^{\pi^*}. \]
By splitting into positive and negative components and applying the above bounds, we get
\[ |V_t^{\pi_t^*} - V^{\pi^*}| = |(V_t^{\pi_t^*} - V^{\pi^*})_+ - (V_t^{\pi_t^*} - V^{\pi^*})_-| \le |(V_t^{\pi_t^*} - V^{\pi^*})_+| + |(V_t^{\pi_t^*} - V^{\pi^*})_-| \]
\[ \le |(I - \gamma P_t^{\pi_t^*})^{-1} d_t^{\pi_t^*}| + |(I - \gamma P_t^{\pi^*})^{-1} d_t^{\pi^*}| \le (I - \gamma P_t^{\pi_t^*})^{-1}|d_t^{\pi_t^*}| + (I - \gamma P_t^{\pi^*})^{-1}|d_t^{\pi^*}|. \]

(b) Observe that
\[ V^{\pi_K} - V_t^{\pi_K} \le \mathcal{T}^{\pi_K} V^{\pi_K} + \mathcal{T}^{\pi_K}\tilde V_{K-1} - \mathcal{T}_t^{\pi_K}\tilde V_{K-1} - \mathcal{T}_t^{\pi_K} V_t^{\pi_K} = \mathcal{T}^{\pi_K} V^{\pi_K} + \mathcal{T}^{\pi_K}\tilde V_{K-1} - \mathcal{T}_t^{\pi_K}\tilde V_{K-1} - \mathcal{T}_t^{\pi_K} V^{\pi_K} + \mathcal{T}_t^{\pi_K}\big(V^{\pi_K} - V_t^{\pi_K}\big) \le (I - \gamma P_t^{\pi_K})^{-1}\big({-d_t^{\pi_K}} - \tilde d_{t,K}\big). \]
The first inequality follows from the fact that πK is optimal with respect to ṼK−1. The second inequality follows from the Taylor expansion of the inverse term. By closely following the same steps, we also get
\[ V^{\pi_K} - V_t^{\pi_K} \ge \mathcal{T}^{\pi_K} V^{\pi_K} - \mathcal{T}^{\pi_K}\tilde V_{K-1} + \mathcal{T}_t^{\pi_K}\tilde V_{K-1} - \mathcal{T}_t^{\pi_K} V_t^{\pi_K} = \mathcal{T}^{\pi_K} V^{\pi_K} - \mathcal{T}^{\pi_K}\tilde V_{K-1} + \mathcal{T}_t^{\pi_K}\tilde V_{K-1} - \mathcal{T}_t^{\pi_K} V^{\pi_K} + \mathcal{T}_t^{\pi_K}\big(V^{\pi_K} - V_t^{\pi_K}\big) \ge (I - \gamma P_t^{\pi_K})^{-1}\big({-d_t^{\pi_K}} + \tilde d_{t,K}\big). \]
By splitting into positive and negative components and applying the above bounds, we get
\[ |V^{\pi_K} - V_t^{\pi_K}| \le |(I - \gamma P_t^{\pi_K})^{-1}({-d_t^{\pi_K}} - \tilde d_{t,K})| + |(I - \gamma P_t^{\pi_K})^{-1}({-d_t^{\pi_K}} + \tilde d_{t,K})| \le 2(I - \gamma P_t^{\pi_K})^{-1}\big(|d_t^{\pi_K}| + |\tilde d_{t,K}|\big). \]
By applying the upper bounds from (a) and (b), we get
\[ |V_t^{\pi_t^*} - V_t^{\pi_K}| \le \frac{2(1-\gamma^{K+2})}{(1-\gamma)^2}\Big[ \sum_{k=0}^{K-1}\alpha_k A_k|e_k| + \alpha\,(R_{\max}/\gamma) + (\beta/6)B^{\pi_t^*}\cdot 6|d_t^{\pi_t^*}| + (\beta/6)B^{\pi^*}\cdot 6|d_t^{\pi^*}| + (\beta/3)B^{\pi_K}\cdot 6|d_t^{\pi_K}| + (\beta/3)B^{\pi_K}\cdot 6|\tilde d_{t,K}| \Big], \]
where we introduced the positive coefficients
\[ \alpha_k = \frac{1-\gamma}{1-\gamma^{K+2}}\,\gamma^{K-k} \;\text{ for } 0 \le k < K, \qquad \alpha = \frac{1-\gamma}{1-\gamma^{K+2}}\,\gamma^{K+1}, \qquad \beta = \frac{1-\gamma}{2(1-\gamma^{K+2})}, \]
and the operators
\[ A_k = \frac{1-\gamma}{2}\,(P^{\pi^*})^{K-k-1} F_k \;\text{ for } 0 \le k < K, \qquad B^{\pi} = (1-\gamma)(I - \gamma P_t^{\pi})^{-1}. \]
Let λK = [2(1 − γ^{K+2})/(1 − γ)²]^p. Note that the coefficients αk, α, and β sum to 1, and the operators are positive linear operators that satisfy Ak1 = 1 and B^π1 = 1. Therefore, by taking the pth power on both sides, applying Jensen's inequality twice, and then integrating both sides with respect to σ(x), we get
\[ \|V_t^{\pi_t^*} - V_t^{\pi_K}\|_{p,\sigma}^p = \int \sigma(dx)\,|V_t^{\pi_t^*} - V_t^{\pi_K}|^p \le \lambda_K\,\sigma\Big[ \sum_{k=0}^{K-1}\alpha_k A_k|e_k|^p + \alpha(R_{\max}/\gamma)^p + (\beta/6)B^{\pi_t^*}(6|d_t^{\pi_t^*}|)^p + (\beta/6)B^{\pi^*}(6|d_t^{\pi^*}|)^p + (\beta/3)B^{\pi_K}(6|d_t^{\pi_K}|)^p + (\beta/3)B^{\pi_K}(6|\tilde d_{t,K}|)^p \Big]. \]
From the definition of the coefficients c_{σ,µ}(m), we get
\[ \sigma A_k \le (1-\gamma)\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m+K-k)\,\mu, \qquad \sigma B^{\pi} \le (1-\gamma)\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m)\,\mu. \]
Therefore, it follows that
\[ \sigma\Big[\sum_{k=0}^{K-1}\alpha_k A_k|e_k|^p\Big] \le (1-\gamma)\sum_{k=0}^{K-1}\alpha_k\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m+K-k)\,\mu|e_k|^p = \frac{\gamma(1-\gamma)^2}{1-\gamma^{K+2}}\sum_{k=0}^{K-1}\sum_{m\ge 0}\gamma^{m+K-k-1} c_{\sigma,\mu}(m+K-k)\,\|e_k\|_{p,\mu}^p \le \frac{\gamma}{1-\gamma^{K+2}}\,C''_{\sigma,\mu}\,e^p, \]
where e = max_{0≤k<K} ‖ek‖_{p,µ}. The terms involving B^π satisfy
\[ \sigma\big[B^{\pi}(6|d_t^{\pi}|)^p\big] \le 6^p(1-\gamma)\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m)\,\mu|d_t^{\pi}|^p \le 6^p\,C'_{\sigma,\mu}\,\|d_t^{\pi}\|_{p,\mu}^p. \]
Putting all these together, using ‖d_t^π‖_µ ≤ ε and ‖d̃_{t,K}‖_µ ≤ ε, and choosing p = 2, we get
\[ \|V_t^{\pi_t^*} - V_t^{\pi_K}\|_\sigma \le \lambda_K^{1/2}\Big[ \frac{\gamma}{1-\gamma^{K+2}}\,C''_{\sigma,\mu}\,e^2 + \frac{(1-\gamma)\gamma^{K+1}}{1-\gamma^{K+2}}(R_{\max}/\gamma)^2 + \frac{36(1-\gamma)}{2(1-\gamma^{K+2})}\,C'_{\sigma,\mu}\,\epsilon^2 \Big]^{1/2} \]
\[ \le \frac{2}{(1-\gamma)^2}\Big[ \gamma\,C''_{\sigma,\mu}\,e^2 + (1-\gamma)\gamma^{K+1}(R_{\max}/\gamma)^2 + \frac{36(1-\gamma)}{2}\,C'_{\sigma,\mu}\,\epsilon^2 \Big]^{1/2} \le \frac{2}{(1-\gamma)^2}\Big[ C''_{\sigma,\mu}\,e^2 + \gamma^{K+1}(R_{\max}/\gamma)^2 + 18\,C'_{\sigma,\mu}\,\epsilon^2 \Big]^{1/2} \]
\[ \le \frac{2}{(1-\gamma)^2}\Big[ \sqrt{C''_{\sigma,\mu}}\,e + \gamma^{\frac{K-1}{2}} R_{\max} + 3\sqrt{2\,C'_{\sigma,\mu}}\,\epsilon \Big]. \]
The desired result can then be obtained by applying the same steps as in the proof of Theorem 8 in Lazaric et al. (2012).

A.1 FINANCIAL PORTFOLIO OPTIMIZATION: ADDITIONAL DETAILS

The dataset consists of daily prices for 68 instruments in the technology and communication sectors from 2009 to 2019. We use 2009–2018 for training and 2019 for testing. To validate that our approach learns common features across instruments, and thus can transfer, we reserve 18 instruments not seen during training for further testing. The global asset universe U used for training contains 50 instruments. We construct tasks by randomly choosing a portfolio of |Ut| = 10 instruments for each task. We create a permutation invariant policy network by applying the same sequence of operations to every instrument state: for each instrument, the flattened input prices are passed through a common RNN with 25 hidden units and tanh activation, the RNN output is concatenated with the latest allocation fraction of the instrument, and the result is passed through a common dense layer to produce a score. Instrument scores are passed to a softmax function to produce allocations that sum to one. The smoothing parameter for the scores is γ = 0.2, the task prioritisation parameter is α = 0.5, and β = 1.0 fully compensates for the prioritized sampling bias.

A.2 META FEDERATED LEARNING

Suppose we have a universe of federated learning clients U. The goal of task t is to aggregate models in a federated learning experiment over a subset of clients Ut ⊆ U. At each step n, the action ai,n represents the weight assigned to the supervised learning model of client i in the averaging procedure. Let vi,n denote the model of client i (i.e. the tensor of model parameters). We model the state of the client as some function of its H most recent models, xi,n = f(vi,n−H+1, . . . , vi,n). Assume that the aggregator has access to a small evaluation dataset that it can use to approximately assess the quality of models. We define the reward at each step to be the accuracy of the aggregate model,
\[ R_t(x_n, a_n) = \mathcal{L}\Big(\sum_{i\in U_t} a_{i,n}\, v_{i,n}\Big), \]
where L(v) is a function that provides the accuracy of a model v on the evaluation dataset. Therefore, by maximizing the total return over all time periods, we seek to maximize both the accuracy at the final time step and the speed of convergence.
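As a small illustration of the aggregation reward just defined, assuming client models are parameter arrays and `evaluate` is a user-supplied function returning accuracy on the held-out evaluation set (both names are ours, not the authors'):

def aggregate(models, weights):
    # Weighted average of client parameter tensors: sum_i a_i * v_i
    return sum(w * m for w, m in zip(weights, models))

def federated_reward(models, weights, evaluate):
    # R_t(x_n, a_n) = L(sum_i a_{i,n} * v_{i,n})
    return evaluate(aggregate(models, weights))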
We optimize the policy using Proximal Policy Optimization (PPO). We use the MNIST digit recognition problem. Each client observes 600 samples from the train dataset and trains a classifier composed of one 5x5 convolutional layer (with 32 channels and ReLU activation) and a softmax output layer. We use the same permutation invariant policy network architecture as before, with 10 hidden units in the RNN. We randomly select |Ut| = 10 clients for each task. We learn using an evaluation dataset comprising 1000 random samples from the test dataset and test using all 10000 samples in the test dataset. We fix the number of federated learning iterations to 50. We explore the benefit of MTL in identifying useful clients in scenarios with a skewed data distribution. We partition the dataset such that 8 of the clients in each task observe random digits between 0 and 5 and the remaining 2 clients observe random digits between 6 and 9. Therefore, for each task, 20% of the clients possess 40% of the unique labels. The state of each client consists of the accuracies of its H most recent models on the evaluation dataset. Figure 6 shows the potential benefits of multi-task learning when simulators are inaccurate. In particular, we obtain two aggregation policies, one trained using single-task learning (STL) and another trained using multi-task learning (MTL), both trained using the same number of steps, and we observe their behavior during testing. The plots show that multi-task learning is able to learn non-uniform averaging policies that improve the convergence and performance of federated learning runs. More importantly, it can perform better than single-task learning even with the same number of samples. This may be attributed to the wider variety of client configurations (and consequently experiences) in the multi-task approach.
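For concreteness, here is a minimal PyTorch sketch of the permutation-invariant policy network described in Appendix A.1: a shared RNN and scorer are applied to each entity independently, and a softmax across entities produces allocations, so permuting the entities permutes the outputs identically (Definition 1). The hidden size follows A.1; the module and argument names are our own.

import torch
import torch.nn as nn

class PermutationInvariantPolicy(nn.Module):
    def __init__(self, hidden=25):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True, nonlinearity='tanh')
        self.score = nn.Linear(hidden + 1, 1)  # shared dense scorer

    def forward(self, prices, prev_alloc):
        # prices: (entities, window); prev_alloc: (entities,)
        _, h = self.rnn(prices.unsqueeze(-1))          # h: (1, entities, hidden)
        feats = torch.cat([h.squeeze(0), prev_alloc.unsqueeze(-1)], dim=-1)
        scores = self.score(feats).squeeze(-1)         # one score per entity
        return torch.softmax(scores, dim=-1)           # allocations sum to 1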
1. What is the purpose and contribution of the paper regarding multi-task learning and RL?
2. What is the significance of permutation invariance (PI) in the context of the paper, and how is it defined?
3. How does the paper's approach differ from related works, particularly Lazaric et al.'s performance bound?
4. What is the meaning of "can be" in Corollary 2 regarding the gain in sample efficiency being exponential in m? Under what conditions is it exponential?
5. How does the paper's approach apply to portfolio optimization as a multi-task RL problem, and what are the other tasks the agent must perform besides maximizing long-term returns?
6. Are there any specific areas of confusion regarding the paper's writing or mathematical presentation?
Review
Review
The paper proposes an RL algorithm for multi-task learning. Under certain assumptions, the paper proves a sample complexity result for this setting. The paper presents a new algorithm based on this approach. The empirical results on the task of sequential portfolio optimization show that this approach performs better than the policy of a constantly rebalanced portfolio.

I don't understand the aim of this paper and unfortunately, the paper did not help me either. It seems that the paper focuses on multi-task learning using RL. The main assumption the paper makes is permutation invariance (PI). However, the way the paper defines it is not clear to me. Def 1: "A policy network is PI if it satisfies pi(sigma(a), sigma(x)) = sigma(pi(a,x))" for any permutation sigma. So sigma is a permutation of items in a set? In this case it seems the set is the output of the policy network. What is the output of the policy network: an action? But the action is not defined as a set of items. It seems that one must guess that the actions are the division of 1 resource among t entities. Let's say t = 2 and the resulting action is [0.2, 0.8]. If this is true, then the two permutations of a are [0.2, 0.8] and [0.8, 0.2]. But then the definition says the left-hand side should be equal to both these permutations. So [0.2, 0.8] == [0.8, 0.2]? I don't understand this. By the definition, do you mean pi(sigma(a), sigma(x)) = pi(a,x)? This would mean that the rearrangement of states or actions does not change the output of the policy network. I can understand this, but not the definition in the paper.

Finally, what does this PI mean in real life? Since the paper talks about resource allocation, does PI mean that it does not matter which entity the resource is being allocated to as long as the share of the resource does not change? For what kind of problems does such an assumption/property hold? What if one entity is better at utilizing resources than others?

After going over the definition/assumption for quite some time, I find that the main theorem of the paper does not even use the definition/assumption. So why was it introduced?

From the related work: "Lazaric et al. provide a performance bound that bears similarity to ours, which one can consider an extension for our particular PI setting". So which approach is more general, the one in Lazaric et al. or yours? It seems yours, since Lazaric et al. is an extension of your PI setting. But then the same sentence says yours is a particular setting?

Corollary 2: "the gain in sample efficiency can be exponential in m". What is the meaning of 'can be'? Is it exponential or not? If yes, then under what condition?

How is portfolio optimization a multi-task RL problem? It can be formulated as a resource allocation problem, but apart from maximizing long-term returns, what are the other tasks the agent must perform for this problem?

Sorry to say that I tried quite hard but I could not make sense of either the text or the mathematics of the paper. It seems the paper presents an interesting attempt at resource allocation using RL, but I found the writing highly confusing.
ICLR
Title Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning Abstract One of the main challenges in real-world reinforcement learning is to learn successfully from limited training samples. We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning, by exploiting an invariance property in the tasks. We provide a theoretical performance bound for the gain in sample efficiency under this setting. This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy. We demonstrate empirically the effectiveness of the proposed approach on two real-world sequential resource allocation tasks where this invariance property occurs: financial portfolio optimization and meta federated learning. 1 INTRODUCTION Sample efficiency in reinforcement learning (RL) is an elusive goal. Recent attempts at increasing the sample efficiency of RL implementations have focused to a large extent on incorporating models into the training process: Xu et al. (2019); Clavera et al. (2018); Zhang et al. (2018); Berkenkamp et al. (2017); Ke et al. (2019); Yarats et al. (2019); Huang et al. (2019); Chua et al. (2018); Serban et al. (2018). The models encapsulate knowledge explicitly, complementing the experiences that are gained by sampling from the RL environment. Another means towards increasing the availability of samples for a reinforcement learner is by tilting the training towards one that will better transfer to related tasks: if the training process is sufficiently well adapted to more than one task, then the training of a particular task should be able to benefit from samples from the other related tasks. This idea was explored a decade ago in Lazaric & Ghavamzadeh (2010) and has been gaining traction ever since, as researchers try to increase the reach of deep reinforcement learning from its comfortable footing in solving games outrageously well to solving other important problems. Yu (2018) discusses a number of methods for increasing sample efficiency in RL and includes experience transfer as one important avenue, covering the transfer of samples, as we do here, transfer of representation or skills, and jumpstarting models which are then ready to be quickly, i.e. with few samples, updated to different tasks. D’Eramo et al. (2020) address the same idea, noting that multi-task learning can improve the learning of each individual task, motivated by robotics-type tasks with underlying commonality, such as balancing a single vs. a double pendulum, or hopping vs. walking. We are interested in exploiting the ability of multi-task learning to solve the sample efficiency problem of RL. Our setting does not apply to all problem classes nor does it seek to exploit the kind of physical similarities found in robotics tasks that form the motivation of Lazaric & Ghavamzadeh (2010); D’Eramo et al. (2020). Rather, we show that there are a number of reinforcement learning tasks with a particular fundamental property that makes them ideal candidates for multi-task learning with the goal of increasing the availability of samples for their training. We refer to this property as permutation invariance. 
It is present in very diverse tasks: we illustrate it on a financial portfolio optimization problem, whereby trades are executed sequentially over a given time horizon, and on the problem of meta-learning in a federated supervised learning setting. Permutation invariance in the financial portfolio problem manifests itself as follows: consider the task of allocating a portion of wealth to each of a number of financial instruments using a trading policy. If the trading policy is permutation invariant, one can change the order of the instruments without changing the policy. This allows one to generate multiple portfolio optimization tasks from a given set of financial instruments. A commonality between applications that have this property is that they concern sequential resource allocation: at each time step, the resource allocation policy scores the quality of each available candidate entity (for example, a financial instrument in the above example), and then, based on those scores, apportions out the resource (the total wealth to invest, in the above example) among the entities at that time step, so that the reward is maximized over the horizon of interest. Sequential resource allocation problems include applications such as sequential allocation of budget; sequential allocation of space, e.g. in IT systems, hotels, or delivery vehicles; sequential allocation of people to work slots or appointments; and so on. Many such applications possess permutation invariance in that the ordering of the entities, i.e. where the resources are allocated, can change without changing the resulting optimal allocation. We show that under this form of permutation invariance, it is possible to derive a bound on the performance of the policy. The bound is an extension of that of Lazaric & Ghavamzadeh (2010), and while similar to the bound of D'Eramo et al. (2020), it provides additional information beyond it. We use the bound to motivate an algorithm that allows for substantially improved results as compared with solving each task on its own. The bound and the algorithm are first analyzed on a synthetic problem that validates the bound in our theorem and confirms the multi-task gain that the theory predicts. Hessel et al. (2018); Bram et al. (2019) have cautioned against degradation of per-task performance when some tasks bias the updates to the detriment of others in multi-task learning. They claim that some tasks have a greater density or magnitude of in-task rewards and hence a disproportionate impact on the learning process. In our setting, deleterious effects of some tasks on others could also arise. The algorithm we propose handles this through a form of prioritized sampling, where priorities are put on the tasks themselves, and acts like a prioritized experience replay buffer applied to a multi-task learning problem. We show empirically that the priorities thus defined protect the overall learning problem from the deleterious effects that unrelated or unhelpful tasks could otherwise have on the policy.
The contributions of this work are as follows: (1) we identify the permutation invariance property of the class of reinforcement learning problems involving sequential resource allocation; (2) we define a method to increase sample efficiency in these reinforcement learning problems by leveraging this property of permutation invariance; (3) we provide a theoretical performance bound for the class of problems; (4) we validate experimentally the utility of permutation invariance for sample efficiency as well as the validity of the bound on a synthetic problem; and (5) we illustrate two real-world RL resource allocation tasks for which this property holds and demonstrate the benefits of the proposed method on sample efficiency and thus also on the overall performance of the models.

2 RELATED WORK

A notable first stream of work on leveraging multi-task learning for enhancing RL performance on single tasks can be found in Wilson et al. (2007); Lazaric & Ghavamzadeh (2010), which consider, as we do, that there is an underlying MDP from which the multiple tasks can be thought to derive. They, however, use a Bayesian approach and propose a different algorithmic method from ours. Our results extend performance bounds by Lazaric et al. (2012) on single-task RL. As noted by Yu (2018), jumpstarting, or distilling experiences and representations of relevant policies, is another means of increasing sample efficiency in solving a new but related problem. Rusu et al. (2016) use this idea in so-called progressive neural networks, and Parisotto et al. (2015) leverage multiple experts to guide the derivation of a general policy. With a similar objective, Teh et al. (2017) define a policy centroid, that is, a shared distilled policy, that captures the commonalities across the behaviors in the tasks. In all of these distillation-type methods, the tasks considered are simple or complex games. Teh et al. (2017) note that their policy centroid method, distral, is likely to be affected by task interference, in that differences across tasks may degrade the performance of the resulting policy on any of the constituent tasks. This topic was studied by Hessel et al. (2018); Bram et al. (2019). Hessel et al. (2018) proposed a solution to this by extending the so-called PopArt normalization (van Hasselt et al., 2016) to re-scale the updates of each task so that the different characteristics of the task-specific reward do not skew the learning process. Bram et al. (2019) use a different approach that learns attention weights for the sub-networks of each task and discards those that are not relevant or helpful. Vuong et al. (2019); D'Eramo et al. (2020) are, like our work, concerned with sharing of experiences to facilitate a more sample-efficient learning process. Vuong et al. (2019) suggest identifying the shared portions of tasks to allow sharing of samples in those portions. The work of D'Eramo et al. (2020) is in some ways quite similar to ours: the authors' goal is the same and they derive a bound, as we do, on the performance in this setting. However, their setting is different in that their tasks have both shared and task-specific components, and their bound becomes tighter only as the number of tasks increases. In our setting, we do not require a task-specific component, and we are able to show how the distance between the MDPs of each task, in addition to the number of tasks, affects the strength of the bound.
Recently, permutation invariance has been exploited in deep multi-agent reinforcement learning (Liu et al., 2019), where the invariance properties arise naturally in a homogeneous multi-agent setting. Their work employs permutation invariance in learning the critic, whereas in our case the entire learned policy employs permutation invariance.

3 PRELIMINARIES

We begin by defining notation. For a measurable space with domain X, let S(X) denote the set of probability measures over X, and B(X; L) the space of bounded measurable functions with domain X and bound 0 < L < ∞. For a measure ρ ∈ S(X), a measurable function f : X → R, and a set of n points X1, · · · , Xn ∈ X, the l2(ρ)-norm ‖f‖ρ of f and the empirical norm ‖f‖n are defined by
\[ \|f\|_\rho^2 = \int f(x)^2 \rho(dx) \qquad \text{and} \qquad \|f\|_n^2 = \frac{1}{n}\sum_{t=1}^{n} f(X_t)^2. \]
Let ‖f‖∞ = sup_{x∈X} |f(x)| be the supremum norm of f. Consider a set of MDPs indexed by t. Each MDP is denoted by a tuple Mt = 〈X, A, Rt, Pt, γ〉, where X, a bounded closed subset of the s-dimensional Euclidean space, is a common state space; A is a common action space; Rt : X × A → R is a task-specific reward function uniformly bounded by Rmax; Pt is a task-specific transition kernel such that Pt(·|x, a) is a distribution over X for all x ∈ X and a ∈ A; and γ ∈ (0, 1) is a common discount factor. Deterministic policies are denoted by π : X → A. For a given policy π, the MDP Mt is reduced to a Markov chain M^π_t = 〈X, R^π_t, P^π_t, γ〉 with reward function R^π_t(x) = Rt(x, π(x)), transition kernel P^π_t(·|x) = Pt(·|x, π(x)), and stationary distribution ρ^π_t. The value function V^π_t for MDP t is defined as the unique fixed point of the Bellman operator T^π_t : B(X; Vmax = Rmax/(1 − γ)) → B(X; Vmax), given by
\[ (\mathcal{T}_t^{\pi} V)(x) = R_t^{\pi}(x) + \gamma \int_{\mathcal{X}} P_t^{\pi}(dy|x)\, V(y). \]
Let π*_t denote the optimal policy for Mt. The optimal value function V^{π*_t}_t for Mt is defined as the unique fixed point of its optimal Bellman operator T^{π*_t}_t, which is defined by
\[ (\mathcal{T}_t^{\pi_t^*} V)(x) = \max_{a\in\mathcal{A}} \Big[ R_t(x, a) + \gamma \int_{\mathcal{X}} P_t(dy|x, a)\, V(y) \Big]. \]
To approximate the value function V, we use a linear approximation architecture with parameters α ∈ R^d and basis functions ϕi ∈ B(X; L) for i = 1, · · · , d. Let ϕ(·) = (ϕ1(·), · · · , ϕd(·))^T ∈ R^d be the feature vector and F the linear function space spanned by the basis functions ϕi. Thus, F = {fα | α ∈ R^d and fα(·) = ϕ(·)^T α}. Consider a learning task to dynamically allocate a common resource across entities Ut ⊆ U. Each t corresponds to a task, but for now take t to be an arbitrary fixed index. At each time step n, the decision maker observes states xn = (xi,n)i∈Ut of the entities, where xi,n is the state of entity i, and takes action an = (ai,n)i∈Ut, where ai,n is the share of the resource allocated to entity i. The total resource capacity is normalized to 1 for convenience. Therefore, allocations satisfy 0 ≤ ai,n ≤ 1 and Σ_{i∈Ut} ai,n = 1. We consider a policy πθ(xn) parameterized by θ. Assume that we have access to the reward function Rt as well as a simulator that generates a trajectory of length N given any arbitrary policy πθ. The objective of the learning task is to maximize
\[ J_t(\theta) = \mathbb{E}\Big[ \sum_{n=1}^{N} \gamma^{n-1} R_t(x_n, a_n) \;\Big|\; a_n = \pi_\theta(x_n),\; x_{n+1} \sim P_t(\cdot|x_n, a_n),\; x_1 \sim P_t(\cdot) \Big]. \]
In many settings, N is small and simulators are inaccurate; therefore, trajectories generated by the simulator are poor representations of the actual transition dynamics. This occurs in batch RL, where trajectories are rollouts from a dataset. In these cases, policies overfit and generalize poorly.
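As a small illustration of the quantities just defined, here is a Monte-Carlo estimate of Jt(θ) from one simulated rollout, together with the linear value-function architecture fα(x) = ϕ(x)^T α (a sketch; the helper names are ours):

import numpy as np

def f_alpha(phi, alpha):
    # Linear approximation f_alpha(x) = phi(x)^T alpha
    return lambda x: float(np.dot(phi(x), alpha))

def discounted_return(rewards, gamma):
    # One-rollout estimate of J_t(theta): sum_n gamma^(n-1) * R_t(x_n, a_n)
    return float(sum(gamma ** n * r for n, r in enumerate(rewards)))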
4 THEORETICAL RESULTS

We first introduce a property, which we term permutation invariance of the policy network, that can be shown to help significantly reduce overfitting.

Definition 1 (Permutation Invariant Policy Network) A policy network πθ is permutation invariant if it satisfies πθ(σ(x)) = σ(πθ(x)) for any permutation σ.

Permutation invariant policy networks have significant advantages over completely integrated policy networks. While the latter are likely to fit correlations between different entities, this is not possible with permutation invariant policy networks, as they are agnostic to the identities of entities. Therefore, permutation invariant policy networks are better able to leverage experience across time and entities, leading to greater efficiency in data usage. Moreover, observe that if the transition kernels can be factored into independent and identical transition kernels across entities, then the optimal policy is indeed permutation invariant. Our main theoretical contributions start with an extension of results from Lazaric et al. (2012), where a finite-sample error bound was derived for the least squares policy iteration (LSPI) algorithm on a single task. Lazaric et al. (2012) provided a high-probability bound on the performance difference between the final learned policy and the optimal policy, of the form c1 + c2/√N, where c1 and c2 are constants that depend on the task and the chosen feature space, and N is the number of training examples. We extend their result by showing that, as long as tasks are ε-close to each other (with respect to a similarity measure we define later), the error bound of solving each task using our multi-task approach has the form c1 + c2/√(NT) + c3 ε, where T is the number of tasks and c3 is a task-dependent constant. Specifically, our theorem provides a general result and performance guarantee with respect to using data from a different but similar MDP. Definition 1 provides a basis for generating many such MDPs. Finally, the benefit of doing so is quantified in Corollary 2. Thus, provided ε is small, a given task can benefit from a much larger set of NT training examples. In addition to the assumptions of Lazaric et al. (2012), we extend the definition of second-order discounted-average concentrability, proposed in Antos et al. (2008), and define the notion of first-order discounted-average concentrability. The latter will be used in our main result, Theorem 1.

Assumption 1 There exists a distribution µ ∈ S(X) such that for any policy π that is greedy with respect to a function in the truncated space F̃, µ ≤ C ρ^π_t for all t, where C < ∞ is a constant.

Given the target distribution σ ∈ S(X) and an arbitrary sequence of policies {πm}m≥1, let
\[ c_{\sigma,\mu}(m) = \sup_{\pi_1,\dots,\pi_m} \Big\| \frac{d\big(\mu P^{\pi_1} \cdots P^{\pi_m}\big)}{d\sigma} \Big\|. \]
We assume that C′σ,µ, C″σ,µ < ∞, and define the first- and second-order discounted-average concentrability of future-state distributions as follows:
\[ C'_{\sigma,\mu} = (1-\gamma)\sum_{m\ge 0}\gamma^m c_{\sigma,\mu}(m), \qquad C''_{\sigma,\mu} = (1-\gamma)^2 \sum_{m\ge 1} m\,\gamma^{m-1} c_{\sigma,\mu}(m). \]

Theorem 1 (Multi-Task Finite-Sample Error Bound) Let M = 〈X, A, R, P, γ〉 be an MDP with reward function R and transition kernel P. Assume A finite. Denote its Bellman operator by
\[ (\mathcal{T}^{\pi} V)(x) = R^{\pi}(x) + \gamma \int_{\mathcal{X}} P^{\pi}(dy|x)\, V(y). \]
Given a policy π, define the Bellman difference operator between Mt and M to be Dπt V = T πt V − T πV. Apply the LSPI algorithm to M by generating, at each iteration k, a path from M of size N, where N satisfies Lemma 4 in Lazaric et al. (2012).
Let V−1 ∈ F̃ be an arbitrary initial value function, V0, · · · , VK−1 (Ṽ0, · · · , ṼK−1) be the sequence of value functions (truncated value functions) generated by LSPI after K iterations, and πk be the greedy policy w.r.t. the truncated value function Ṽk−1. Suppose also that ‖Dπt V π‖µ ≤ ε for all π, and ‖Dπkt Ṽk−1‖µ ≤ ε for all k. Then, for constants c1, c2, c3, c4 that depend on M, with probability 1 − δ (with respect to the random samples):
\[ \|V_t^{\pi_t^*} - V_t^{\pi_K}\|_\sigma \le c_1 \frac{1}{\sqrt{N}} + c_2\,\epsilon\,\sqrt{C'_{\sigma,\mu}} + c_3 \sqrt{C''_{\sigma,\mu}} + c_4. \]
The proof is deferred to the Appendix. Theorem 1 formalizes the trade-off between drawing fewer samples from the exact MDP Mt versus drawing more samples from a different MDP M. Importantly, it shows how to benefit from solving a different MDP, M, when: (a) additional samples can be obtained from M, and (b) M is not too different from Mt. In particular, the distance measure is simply the distance between the Bellman operators of the MDPs, which can be bounded if the differences in both the transition and reward functions are bounded. In recent work, a performance bound for multi-task learning was given in Theorems 2 and 3 of D'Eramo et al. (2020). However, the authors used a different setup containing both shared and task-specific representations, and their focus was on showing that the cost of learning the shared representation decreases with more tasks. They did not show how the similarity or difference across tasks affects performance. In contrast, our setup does not contain task-specific representations, and our focus is on how differences across MDPs impact the benefit of having more tasks (and consequently more samples). We show this in Corollary 1 and Corollary 2.

Remark 1 While our theoretical results are based on LSTD and LSPI and assume a finite action space, our approach is applicable to a wide range of reinforcement learning algorithms, including policy gradient methods, and to MDPs with continuous action spaces. Deriving similar results for a larger family of models and algorithms remains interesting, albeit challenging, future work.

Permutation invariant policy networks allow using data from the global set of entities U. Since the policy network is agnostic to the identities of the entities, one can learn a single policy for all tasks, where each task t ∈ [T] is a resource allocation problem over a subset of entities Ut. For notational simplicity, assume that all tasks have the same number of entities, and all trajectories are of equal length N. Our approach can, however, be readily extended to tasks with different numbers of entities and different trajectory lengths. Permutation invariance allows a large set of MDPs to leverage the result of Theorem 1. In the next section we provide an algorithm, motivated by the following corollaries, and a prioritized sampling strategy for this setting that drives significantly greater sample efficiency for the original task. The sampling strategy also helps to stabilize the learning process, reducing the risk of deleterious effects of the multi-task setting, as discussed by Teh et al. (2017) and addressed in works such as Hessel et al. (2018); Bram et al. (2019).

Corollary 1 Let [T] be a set of similar tasks such that their distance from the average MDP, given by
\[ (\mathcal{T}^{\pi} V)(x) = \frac{1}{T}\sum_{t=1}^{T} R_t^{\pi}(x) + \gamma \int_{\mathcal{X}} \frac{1}{T}\sum_{t=1}^{T} P_t^{\pi}(dy|x)\, V(y), \]
is bounded by ε as defined in Theorem 1. Let N be the number of samples available in each task. Let πK be the policy obtained at the Kth iteration when applying LSPI to the average MDP.
Then, the suboptimality of the policy on each task is O(1/√(NT)) + O(ε) + c for some constant c (where suboptimality is defined according to Theorem 1). Recall that each task is formed by selecting a subset Ut of entities from the global set U. We thus have the following sample gain that can be attributed to the permutation invariance of the policy network.

Corollary 2 (Gain in Sample Efficiency from Permutation Invariance) Let M = |U| and m = |Ut|. Given fixed M and m, there are
\[ T = \binom{M}{m} \ge \Big(\frac{M}{m}\Big)^m \]
different tasks. Then, by Corollary 1, assuming all pairs of tasks are weakly correlated, the potential gain in sample efficiency is exponential in m.

Disregarding correlation between samples from tasks with overlapping entities, Corollaries 1 and 2 together suggest that the (up to) exponential increase in the number of available tasks can significantly improve sample efficiency as compared to learning each task separately.

5 EXPLOITING PERMUTATION INVARIANCE THROUGH MULTI-TASK REINFORCEMENT LEARNING

Our approach to exploiting permutation invariance is via multi-task reinforcement learning, where each "task" corresponds to a particular choice of subset Ut ⊂ U. Furthermore, for each task, we enforce permutation invariance among the entities i by forcing the neural network to apply the same sequence of operations to the state input xi of each entity through parameter sharing. The proposed method, shown in Algorithm 1, learns a single policy by sampling subsequences of trajectories from the different MDPs. At each step, we sample a task t according to a distribution defined by a task selection policy p. Then, a minibatch sample Bt is drawn from the replay buffer for task t, and gradient descent is performed using the sampled transitions Bt (alternatively, samples can be generated using policy rollouts for the specific task). Separate replay buffers are maintained for each task and updated only when the corresponding task is being used. In contrast with other active sampling approaches in multi-task learning, our approach maintains an estimate of the difficulty of each task t as a score, st. After each training step, we update the score for only the sampled task based on the minibatch Bt, avoiding evaluation over all the tasks. The scoring functions depend on the sampled minibatch; to reduce fluctuations in the scores for each task, exponential smoothing is applied: st ← γst + (1 − γ) · scorer(Bt). We propose a stochastic prioritization method that interpolates between pure greedy prioritization and uniform random sampling. Our approach is similar to prioritized experience replay (PER) by Schaul et al. (2016), but while classical PER prioritizes samples, we prioritize tasks. The probability of sampling task t is pt = s^α_t / Σ_{t′} s^α_{t′}, where the exponent α determines the degree of prioritization, with α = 0 corresponding to the uniform case. We correct for bias with importance-sampling (IS) weights wt = (1/(T pt))^β, which fully compensate for the non-uniform probabilities when β = 1. We normalize weights by 1/max_t wt. Tasks on which the reward variance is high can be interpreted as having more challenging samples, hence reward variance can be used as a scoring function.

Algorithm 1 Prioritized Multi-Task Reinforcement Learning for Increasing Sample Efficiency
Initialize policy network πθ
Initialize replay buffers R1, . . . , RT
Initialize time steps n1 ← 1, . . . , nT ← 1
loop
    Select a task t ∼ p to train on
    Sample a random minibatch Bt of transitions (xn, an, rn, xn+1) from Rt
    Update policy θ using Bt and the chosen RL approach (correcting for bias using IS weights w)
    Update score st ← γst + (1 − γ) · scorer(Bt)
    Update ALL selection probabilities p and IS weights w
    for n = nt, . . . , min{nt + ne, N} do
        For task t, select action an according to the current policy and exploration noise
        Execute action an, and observe reward rn and new state xn+1
        Store transition (xn, an, rn, xn+1) in Rt
    end for
    If n < N, update nt ← n + 1; otherwise, update nt ← 1
end loop
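The permutation invariance required by Definition 1 can also be checked numerically for any candidate policy network; a minimal sketch (function names ours):

import numpy as np

def check_permutation_invariance(policy, x, rng, tol=1e-6):
    # Definition 1: pi(sigma(x)) == sigma(pi(x)) for any permutation sigma
    perm = rng.permutation(len(x))
    lhs = policy(x[perm])    # act on the permuted state
    rhs = policy(x)[perm]    # permute the original allocation
    return float(np.max(np.abs(lhs - rhs))) < tol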
1. What is the focus and contribution of the paper regarding sequential resource allocation problems?
2. What are the strengths of the proposed approach, particularly in its technical depth and theoretical guarantee?
3. What are the weaknesses of the paper, especially regarding the finite action space assumption?
4. Do you have any questions regarding the paper's discussion of existing works and its placement in the research line?
5. Can you provide more details about the empirical evaluations and their case studies?
6. What are your thoughts on the proof of the theorem and its use of permutation invariance?
7. Are there any concerns regarding the figures and their placement in the paper?
Review
Review

This paper addresses the sequential resource allocation problem using reinforcement learning, with sample efficiency as the focus. The authors identify a key property of the targeted resource allocation problems -- permutation invariance -- which intrinsically implies the independence of samples at different time steps. From this property the paper extends the work of D'Eramo et al. (2020) and derives a potentially tighter bound on the gap between the optimal policy and the policy learned from multiple tasks. The paper also designs a new algorithm that prioritizes sampling in multi-task learning, addressing the bias between training and target tasks. Empirical evaluations on financial portfolio allocation and meta federated learning demonstrate the effectiveness of the proposed approach.

Strengths:
- Using RL to solve sequential resource allocation problems is interesting and well-motivated; it can promote the impact of RL approaches when deployed.
- The paper has technical depth and provides a theoretical guarantee for its approach.
- The paper gives a good discussion of existing works and where it lies in this line of research.
- The empirical evaluations show two interesting case studies of sequential resource allocation problems. The paper contributes non-trivial effort in designing experiment strategies for the two cases. The results demonstrate the effectiveness of the proposed approach.

Weaknesses:
- A major weakness in my opinion is that the theorem is derived assuming the action space is finite, while in the experiments and mathematical formulations the actions are continuous (thus infinite). This is, however, already clearly pointed out in the paper and left as future work, so it is somewhat fine to me.

Some comments:
1) The paper gives a good discussion of existing works and where it lies in this line of research; it would be better if the authors could briefly discuss meta-RL, which is close to the problem studied in this paper.
2) The significance of the theorem is well-discussed. However, it is hard to understand what Assumption 1 implies -- is it a strong or weak assumption? Under what circumstances does it hold?
3) In the proof of the theorem, it is not clear what the high-level intuition is, or how the permutation invariance property is used.
4) Figure 1 is placed before Figures 2-4 but referenced only at the end of the experiment section.
5) In Figure 2, the annotations "left" and "right" in the brackets are reversed.

Questions: please refer to points 2) and 3) under "Some comments".
ICLR
Title
Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates

Abstract
Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from low signal and even instability in Q-learning because target values are derived from current Q-estimates, which are often noisy. To mitigate the issue, we propose ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble. We empirically observe that the proposed method stabilizes and improves learning on both continuous and discrete control benchmarks. We also specifically investigate the signal-to-noise aspect by studying environments with noisy rewards, and find that weighted Bellman backups significantly outperform standard Bellman backups. Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with UCB Exploration. By enforcing the diversity between agents using Bootstrap, we show that these different ideas are largely orthogonal and can be fruitfully integrated, together further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments.

1 INTRODUCTION

Model-free reinforcement learning (RL), with high-capacity function approximators, such as deep neural networks (DNNs), has been used to solve a variety of sequential decision-making problems, including board games (Silver et al., 2017; 2018), video games (Mnih et al., 2015; Vinyals et al., 2019), and robotic manipulation (Kalashnikov et al., 2018). It has been well established, however, that the above successes come at the cost of severe sample inefficiency (Kaiser et al., 2020). Recently, a lot of progress has been made in more sample-efficient model-free RL algorithms through improvements in off-policy learning, both in discrete and continuous domains (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018; Amos et al., 2020). However, standard off-policy RL algorithms can suffer from instability in Q-learning due to error propagation in the Bellman backup, i.e., the errors induced in the target value can lead to an increase in overall error in the Q-function (Kumar et al., 2019; 2020). One way to address the error propagation issue is to use ensemble methods, which combine multiple models of the value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018). For discrete control tasks, double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016) addressed value overestimation by maintaining two independent estimators of the action values, and it was later extended to continuous control tasks in TD3 (Fujimoto et al., 2018). While most prior work has improved stability by taking the minimum over Q-functions, this also needlessly loses signal, and we propose an alternative way that utilizes ensembles to estimate uncertainty and provide more stable backups. In this paper, we propose ensemble-based weighted Bellman backups that can be applied to most modern off-policy RL algorithms, such as Q-learning and actor-critic algorithms. Our main idea is to reweight sample transitions based on uncertainty estimates from a Q-ensemble.
Because prediction errors can be characterized by uncertainty estimates from ensembles (i.e., variance of predictions) as shown in Figure 1(b), we find that the proposed method significantly improves the signal-to-noise in the Q-updates and stabilizes the learning process. Finally, we present a unified framework, coined SUNRISE, that combines our weighted Bellman backups with an inference method that selects actions using highest upper-confidence bounds (UCB) for efficient exploration (Chen et al., 2017). We find that these different ideas can be fruitfully integrated, and they are largely complementary (see Figure 1(a)). We demonstrate the effectiveness of the proposed method using Soft Actor-Critic (SAC; Haarnoja et al. 2018) for continuous control benchmarks (specifically, OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018)) and Rainbow DQN (Hessel et al., 2018) for discrete control benchmarks (specifically, Atari games (Bellemare et al., 2013)). In our experiments, SUNRISE consistently improves the performance of existing off-policy RL methods. Furthermore, we find that the proposed weighted Bellman backups yield improvements in environments with noisy reward, which have a low signal-to-noise ratio. 2 RELATED WORK Off-policy RL algorithms. Recently, various off-policy RL algorithms have provided large gains in sample-efficiency by reusing past experiences (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018). Rainbow DQN (Hessel et al., 2018) achieved state-of-the-art performance on the Atari games (Bellemare et al., 2013) by combining several techniques, such as double Qlearning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For continuous control tasks, SAC (Haarnoja et al., 2018) achieved state-of-the-art sample-efficiency results by incorporating the maximum entropy framework. Our ensemble method brings orthogonal benefits and is complementary and compatible with these existing state-of-the-art algorithms. Stabilizing Q-learning. It has been empirically observed that instability in Q-learning can be caused by applying the Bellman backup on the learned value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018; Song et al., 2019; Kim et al., 2019; Kumar et al., 2019; 2020). By following the principle of double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016), twin-Q trick (Fujimoto et al., 2018) was proposed to handle the overestimation of value functions for continuous control tasks. Song et al. (2019) and Kim et al. (2019) proposed to replace the max operator with Softmax and Mellowmax, respectively, to reduce the overestimation error. Recently, Kumar et al. (2020) handled the error propagation issue by reweighting the Bellman backup based on cumulative Bellman errors. However, our method is different in that we propose an alternative way that also utilizes ensembles to estimate uncertainty and provide more stable, higher-signal-to-noise backups. Ensemble methods in RL. Ensemble methods have been studied for different purposes in RL (Wiering & Van Hasselt, 2008; Osband et al., 2016a; Anschel et al., 2017; Agarwal et al., 2020; Lan et al., 2020). Chua et al. (2018) showed that modeling errors in model-based RL can be reduced using an ensemble of dynamics models, and Kurutach et al. (2018) accelerated policy learning by generating imagined experiences from the ensemble of dynamics models. For efficient exploration, Osband et al. (2016a) and Chen et al. 
(2017) also leveraged the ensemble of Q-functions. However, most prior works have studied the various axes of improvement from ensemble methods in isolation, while we propose a unified framework that handles various issues in off-policy RL algorithms.

Exploration in RL. To balance exploration and exploitation, several methods, such as the maximum entropy frameworks (Ziebart, 2010; Haarnoja et al., 2018), exploration bonus rewards (Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017; Choi et al., 2019) and randomization (Osband et al., 2016a;b), have been proposed. Despite the success of these exploration methods, a potential drawback is that agents can focus on irrelevant aspects of the environment because these methods do not depend on the rewards. To handle this issue, Chen et al. (2017) proposed an exploration strategy that considers both best estimates (i.e., mean) and uncertainty (i.e., variance) of Q-functions for discrete control tasks. We further extend this strategy to continuous control tasks and show that it can be combined with other techniques.

3 BACKGROUND

Reinforcement learning. We consider a standard RL framework where an agent interacts with an environment in discrete time. Formally, at each timestep t, the agent receives a state st from the environment and chooses an action at based on its policy π. The environment returns a reward rt and the agent transitions to the next state st+1. The return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ is the total accumulated reward from timestep t with a discount factor γ ∈ [0, 1). RL then maximizes the expected return.

Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy (see Appendix A for further details). To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters θ, is updated by minimizing the following soft Bellman residual:

$$\mathcal{L}^{\text{SAC}}_{\text{critic}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\big[\mathcal{L}_Q(\tau_t, \theta)\big], \quad (1)$$

$$\mathcal{L}_Q(\tau_t, \theta) = \Big( Q_\theta(s_t, a_t) - r_t - \gamma\, \mathbb{E}_{a_{t+1} \sim \pi_\phi}\big[ Q_{\bar\theta}(s_{t+1}, a_{t+1}) - \alpha \log \pi_\phi(a_{t+1}|s_{t+1}) \big] \Big)^2, \quad (2)$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, $\bar\theta$ are the delayed parameters, and α is a temperature parameter. At the soft policy improvement step, the policy π with its parameter φ is updated by minimizing the following objective:

$$\mathcal{L}^{\text{SAC}}_{\text{actor}}(\phi) = \mathbb{E}_{s_t \sim \mathcal{B}}\big[\mathcal{L}_\pi(s_t, \phi)\big], \ \text{where} \ \mathcal{L}_\pi(s_t, \phi) = \mathbb{E}_{a_t \sim \pi_\phi}\big[ \alpha \log \pi_\phi(a_t|s_t) - Q_\theta(s_t, a_t) \big]. \quad (3)$$

Here, the policy is modeled as a Gaussian with mean and covariance given by neural networks to handle continuous action spaces.

4 SUNRISE

In this section, we propose the ensemble-based weighted Bellman backups, and then introduce SUNRISE: Simple UNified framework for ReInforcement learning using enSEmbles, which combines various ensemble methods. In principle, our method can be used in conjunction with most modern off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018). For the exposition, we describe only the SAC version in the main body. The Rainbow DQN version follows the same principles and is fully described in Appendix B.
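Before introducing the weighted variant, a minimal PyTorch sketch of the soft Bellman residual in Equations (1)-(2) may help; the function signature and the single-sample estimate of the expectation over a_{t+1} are our illustrative assumptions, not the authors' released code.

```python
import torch

def soft_bellman_residual(q, q_target, policy, batch, gamma=0.99, alpha=0.2):
    """L_Q from Eq. (2): squared TD error against the entropy-regularized
    soft target. `q`/`q_target` map (state, action) -> value; `policy`
    returns a sampled action and its log-probability. The expectation over
    a_{t+1} is approximated with a single sample, as is standard in SAC."""
    s, a, r, s_next = batch  # tensors with a leading batch dimension
    with torch.no_grad():
        a_next, logp_next = policy(s_next)
        target = r + gamma * (q_target(s_next, a_next) - alpha * logp_next)
    return (q(s, a) - target).pow(2)  # per-transition loss, reduced later
```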
4.1 WEIGHTED BELLMAN BACKUPS TO IMPROVE SIGNAL-TO-NOISE IN Q-UPDATES

Formally, we consider an ensemble of N SAC agents, i.e., $\{Q_{\theta_i}, \pi_{\phi_i}\}_{i=1}^N$, where $\theta_i$ and $\phi_i$ denote the parameters of the i-th soft Q-function and policy (we remark that each Q-function $Q_{\theta_i}(s, a)$ has a unique target Q-function $Q_{\bar\theta_i}(s, a)$). Since conventional Q-learning is based on the Bellman backup in (2), it can be affected by error propagation, i.e., error in the target Q-function $Q_{\bar\theta}(s_{t+1}, a_{t+1})$ gets propagated into the Q-function $Q_\theta(s_t, a_t)$ at the current state. In other words, errors in the previous Q-function inject "noise" into the learning "signal" (i.e., the true Q-value) of the current Q-function. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence. To mitigate this issue, for each agent i, we consider a weighted Bellman backup as follows:

$$\mathcal{L}_{WQ}(\tau_t, \theta_i) = w(s_{t+1}, a_{t+1}) \Big( Q_{\theta_i}(s_t, a_t) - r_t - \gamma \big( Q_{\bar\theta_i}(s_{t+1}, a_{t+1}) - \alpha \log \pi_\phi(a_{t+1}|s_{t+1}) \big) \Big)^2, \quad (4)$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $a_{t+1} \sim \pi_\phi(a|s_{t+1})$, and $w(s, a)$ is a confidence weight based on the ensemble of target Q-functions:

$$w(s, a) = \sigma\big( -\bar{Q}_{\text{std}}(s, a) * T \big) + 0.5, \quad (5)$$

where T > 0 is a temperature, σ is the sigmoid function, and $\bar{Q}_{\text{std}}(s, a)$ is the empirical standard deviation of all target Q-functions $\{Q_{\bar\theta_i}\}_{i=1}^N$. Note that the confidence weight is bounded in [0.5, 1.0] because the standard deviation is always positive (we find that it is empirically stable to set the minimum value of the weight w(s, a) to 0.5). The proposed objective $\mathcal{L}_{WQ}$ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio.

4.2 COMBINATION WITH ADDITIONAL TECHNIQUES THAT LEVERAGE ENSEMBLES

We integrate the proposed weighted Bellman backup with UCB exploration into a single framework by utilizing the bootstrap with random initialization.

Bootstrap with random initialization. To train the ensemble of agents, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between agents through two simple ideas. First, we initialize the model parameters of all agents with random parameter values to induce an initial diversity in the models. Second, we apply different samples to train each agent. Specifically, for each SAC agent i in each timestep t, we draw binary masks $m_{t,i}$ from the Bernoulli distribution with parameter β ∈ (0, 1], and store them in the replay buffer. Then, when updating the model parameters of the agents, we multiply the bootstrap mask with each objective function, such as $m_{t,i}\mathcal{L}_\pi(s_t, \phi_i)$ and $m_{t,i}\mathcal{L}_{WQ}(\tau_t, \theta_i)$ in (3) and (4), respectively. We remark that Osband et al. (2016a) applied this simple technique to train an ensemble of DQN (Mnih et al., 2015) only for discrete control tasks, while we apply it to SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018) for both continuous and discrete tasks with additional techniques.

UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action

$$a_t = \arg\max_a \big\{ Q_{\text{mean}}(s_t, a) + \lambda Q_{\text{std}}(s_t, a) \big\}, \quad (6)$$

where $Q_{\text{mean}}(s, a)$ and $Q_{\text{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^N$, and λ > 0 is a hyperparameter.
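A minimal sketch (ours, not the authors' released code) of the two ensemble statistics just defined: the confidence weight of Equation (5) and the UCB score of Equation (6). Tensor shapes and helper names are illustrative assumptions.

```python
import torch

def confidence_weight(target_qs, temperature):
    """Eq. (5): w = sigmoid(-std * T) + 0.5, bounded in [0.5, 1.0].
    target_qs: (N, B) tensor of the N target Q-functions' values on a batch;
    high ensemble disagreement pushes the weight toward 0.5."""
    return torch.sigmoid(-target_qs.std(dim=0) * temperature) + 0.5

def ucb_select(candidate_qs, lam):
    """Eq. (6) evaluated over a finite candidate set, as in Algorithm 1.
    candidate_qs: (N, n_candidates) Q-values of each ensemble member on the
    candidate actions; returns the index of the most optimistic candidate."""
    scores = candidate_qs.mean(dim=0) + lam * candidate_qs.std(dim=0)
    return scores.argmax()
```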
The UCB rule in (6) can encourage exploration by adding an exploration bonus (i.e., the standard deviation Qstd) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). We remark that this inference method was originally proposed in Chen et al. (2017) for efficient exploration in discrete action spaces. However, in continuous action spaces, finding the action that maximizes the UCB is not straightforward. To handle this issue, we propose a simple approximation scheme, which first generates a candidate set of N actions from the ensemble policies $\{\pi_{\phi_i}\}_{i=1}^N$, and then chooses the action that maximizes the UCB (Lines 4-5 in Algorithm 1). For evaluation, we approximate the maximum a posteriori action by averaging the means of the Gaussian distributions modeled by each ensemble policy. The full procedure of our unified framework, coined SUNRISE, is summarized in Algorithm 1.

5 EXPERIMENTAL RESULTS

We designed our experiments to answer the following questions:
• Can SUNRISE improve off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018), for both continuous (see Table 1 and Table 2) and discrete (see Table 3) control tasks?
• How crucial are the proposed weighted Bellman backups in (4) for improving the signal-to-noise in Q-updates (see Figure 2)?
• Can UCB exploration be useful for solving tasks with sparse rewards (see Figure 3(b))?
• Is SUNRISE better than a single agent with more updates and parameters (see Figure 3(c))?
• How does ensemble size affect the performance (see Figure 3(d))?

Algorithm 1 SUNRISE: SAC version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Collect N action samples: $A_t = \{a_{t,i} \sim \pi_{\phi_i}(a|s_t) \mid i \in \{1, \ldots, N\}\}$
5:     Choose the action that maximizes UCB: $a_t = \arg\max_{a_{t,i} \in A_t} Q_{\text{mean}}(s_t, a_{t,i}) + \lambda Q_{\text{std}}(s_t, a_{t,i})$
6:     Collect state $s_{t+1}$ and reward $r_t$ from the environment by taking action $a_t$
7:     Sample bootstrap masks $M_t = \{m_{t,i} \sim \text{Bernoulli}(\beta) \mid i \in \{1, \ldots, N\}\}$
8:     Store transitions $\tau_t = (s_t, a_t, s_{t+1}, r_t)$ and masks in replay buffer $\mathcal{B} \leftarrow \mathcal{B} \cup \{(\tau_t, M_t)\}$
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample random minibatch $\{(\tau_j, M_j)\}_{j=1}^{B} \sim \mathcal{B}$
13:    for each agent i do
14:      Update the Q-function by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, \mathcal{L}_{WQ}(\tau_j, \theta_i)$ in (4)
15:      Update the policy by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, \mathcal{L}_\pi(s_j, \phi_i)$ in (3)
16:    end for
17:  end for
18: end for

5.1 SETUPS

Continuous control tasks. We evaluate SUNRISE on several continuous control tasks using simulated robots from OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018). For OpenAI Gym experiments with proprioceptive inputs (e.g., positions and velocities), we compare to PETS (Chua et al., 2018), a state-of-the-art model-based RL method based on ensembles of dynamics models; POPLIN-P (Wang & Ba, 2020), a state-of-the-art model-based RL method which uses a policy network to generate actions for planning; POPLIN-A (Wang & Ba, 2020), a variant of POPLIN-P which adds noise in the action space; METRPO (Kurutach et al., 2018), a hybrid RL method which augments TRPO (Schulman et al., 2015) using ensembles of dynamics models; and two state-of-the-art model-free RL methods, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018). For our method, we consider a combination of SAC and SUNRISE, as described in Algorithm 1. Following the setup in Wang & Ba (2020) and Wang et al. (2019),
we report the mean and standard deviation across ten runs after 200K timesteps on five complex environments: Cheetah, Walker, Hopper, Ant, and SlimHumanoid with early termination (ET). More experimental details and learning curves with 1M timesteps are in Appendix D.

For DeepMind Control Suite with image inputs, we compare to PlaNet (Hafner et al., 2019), a model-based RL method which learns a latent dynamics model and uses it for planning; Dreamer (Hafner et al., 2020), a hybrid RL method which utilizes the latent dynamics model to generate synthetic roll-outs; SLAC (Lee et al., 2020), a hybrid RL method which combines the latent dynamics model with SAC; and three state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) or data augmentation (RAD (Laskin et al., 2020) and DrQ (Kostrikov et al., 2020)) to SAC. For our method, we consider a combination of RAD (i.e., SAC with random crop) and SUNRISE. Following the setup in RAD, we report the mean and standard deviation across five runs after 100k (i.e., low sample regime) and 500k (i.e., asymptotically optimal regime) environment steps on six environments: Finger-spin, Cartpole-swing, Reacher-easy, Cheetah-run, Walker-walk, and Cup-catch. More experimental details and learning curves are in Appendix F.

Discrete control benchmarks. For discrete control tasks, we demonstrate the effectiveness of SUNRISE on several Atari games (Bellemare et al., 2013). We compare to SimPLe (Kaiser et al., 2020), a hybrid RL method which updates the policy only using samples generated by the learned dynamics model; Rainbow DQN (Hessel et al., 2018) with modified hyperparameters for sample-efficiency (van Hasselt et al., 2019); a Random agent (Kaiser et al., 2020); two state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) and data augmentation (DrQ; Kostrikov et al. 2020) to Rainbow DQN; and Human performances reported in Kaiser et al. (2020) and van Hasselt et al. (2019). Following the setups in SimPLe, we report the mean across three runs after 100K interactions (i.e., 400K frames with action repeat of 4). For our method, we consider a combination of sample-efficient versions of Rainbow DQN and SUNRISE (see Algorithm 3 in Appendix B). More experimental details and learning curves are in Appendix G.

For our method, we do not alter any hyperparameters of the original RL algorithms and train five ensemble agents. There are only three additional hyperparameters, β, T, and λ, for bootstrap, weighted Bellman backup, and UCB exploration, whose details we provide in Appendix D, F, and G.

5.2 COMPARATIVE EVALUATION

OpenAI Gym. Table 1 shows the average returns of evaluation roll-outs for all methods. SUNRISE consistently improves the performance of SAC across all environments and outperforms the model-based RL methods, such as POPLIN-P and PETS, on all environments except Ant and SlimHumanoid-ET. Even though we focus on performance after small numbers of samples because of the recent emphasis on making RL more sample efficient, we find that the gain from SUNRISE becomes even more significant when training longer (see Figure 3(c) and Appendix D). We remark that SUNRISE is more compute-efficient than modern model-based RL methods, such as POPLIN and PETS, because they also utilize ensembles (of dynamics models) and perform planning to select actions. Namely, SUNRISE is simple to implement, computationally efficient, and readily parallelizable.

DeepMind Control Suite.
As shown in Table 2, SUNRISE also consistently improves the performance of RAD (i.e., SAC with random crop) on all environments from DeepMind Control Suite. This implies that the proposed method can be useful for high-dimensional and complex input observations. Moreover, our method outperforms existing pixel-based RL methods in almost all environments. We remark that SUNRISE can also be combined with DrQ, and we expect that it can achieve better performances on Cartpole-swing and Cup-catch at 100K environment steps.

Atari games. We also evaluate SUNRISE on discrete control tasks from the Atari benchmark using Rainbow DQN. Table 3 shows that SUNRISE improves the performance of Rainbow in almost all environments, and outperforms the state-of-the-art CURL and SimPLe on 11 out of 26 Atari games. Here, we remark that SUNRISE is also compatible with CURL, which could enable even better performance. These results demonstrate that SUNRISE is a general approach.

5.3 ABLATION STUDY

Effects of weighted Bellman backups. To verify the effectiveness of the proposed weighted Bellman backup (4) in improving the signal-to-noise in Q-updates, we evaluate on modified OpenAI Gym environments with noisy rewards. Following Kumar et al. (2019), we add Gaussian noise to the reward function, r′(s, a) = r(s, a) + z, where z ∼ N(0, 1), only during training, and report the deterministic ground-truth reward during evaluation. For our method, we also consider a variant of SUNRISE which updates Q-functions without the proposed weighted Bellman backup, to isolate its effect. We compare to DisCor (Kumar et al., 2020), which improves SAC by reweighting the Bellman backup based on estimated cumulative Bellman errors (see Appendix E for more details). Figure 2 shows the learning curves of all methods on OpenAI Gym with noisy rewards. The proposed weighted Bellman backup significantly improves both the sample-efficiency and asymptotic performance of SUNRISE, and outperforms baselines such as SAC and DisCor. One can note that the performance gain due to our weighted Bellman backup becomes more significant in complex environments, such as SlimHumanoid-ET. We remark that DisCor still suffers from error propagation issues in complex environments like SlimHumanoid-ET and Ant because there are some approximation errors in estimating cumulative Bellman errors (see Section 6.1 for a more detailed discussion). These results imply that errors in the target Q-function can be characterized effectively by the proposed confidence weight in Equation (5). We also consider another variant of SUNRISE, which updates Q-functions with random weights sampled uniformly from [0.5, 1.0]. In order to evaluate the performance of SUNRISE, we increase the noise rate by adding Gaussian noise with a large standard deviation to the reward function: r′(s, a) = r(s, a) + z, where z ∼ N(0, 5). Figure 3(a) shows the learning curves of all methods on the SlimHumanoid-ET environment over 10 random seeds. First, one can note that SUNRISE with random weights (red curve) is worse than SUNRISE with the proposed weighted Bellman backups (blue curve). Additionally, even without UCB exploration, SUNRISE with the proposed weighted Bellman backups (purple curve) outperforms all baselines. This implies that the proposed weighted Bellman backups can handle the error propagation effectively even when there is large noise in the reward function.

Effects of UCB exploration.
To verify the advantage of UCB exploration in (6), we evaluate on the sparse-reward Cartpole-swing task from DeepMind Control Suite. For our method, we consider a variant of SUNRISE which selects actions without UCB exploration. As shown in Figure 3(b), SUNRISE with UCB exploration (blue curve) significantly improves the sample-efficiency on the environment with sparse rewards.

Comparison with a single agent with more updates/parameters. One concern in utilizing the ensemble method is that its gains may come from more gradient updates and parameters. To address this concern, we compare SUNRISE (5 ensembles using 2-layer MLPs with 256 hidden units each) to a single agent, which consists of 2-layer MLPs with 1024 (and 256) hidden units with 5 updates using different random minibatches. Figure 3(c) shows the learning curves on SlimHumanoid-ET, where SUNRISE outperforms all baselines. This implies that the gains from SUNRISE cannot be achieved by simply increasing the number of updates/parameters. More experimental results on other environments are also available in Appendix D.

Effects of ensemble size. We analyze the effects of ensemble size N on the Ant environment from OpenAI Gym. Figure 3(d) shows that the performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5. Thus, we use five ensemble agents for all experiments. More experimental results on other environments are also available in Appendix D, where the overall trend is similar.

6 DISCUSSION

6.1 CONNECTION WITH DISCOR

Kumar et al. (2020) show that naive Bellman backups can suffer from slow learning in certain environments, requiring exponentially many updates. To handle this problem, they propose weighted Bellman backups, which make steady learning progress by inducing some optimal data distribution (see Kumar et al. (2020) for more details). Specifically, in addition to a standard Q-learning, DisCor trains an error model $\Delta_\psi(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as follows: $w(s, a) \propto \exp\big(-\gamma \Delta_\psi(s, a)/T\big)$, where γ is a discount factor and T is a temperature. However, we remark that DisCor can still suffer from error propagation issues because there is also an approximation error in estimating cumulative Bellman errors. Therefore, we consider an alternative approach that utilizes the uncertainty from ensembles. Because it has been observed that an ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples (Lakshminarayanan et al., 2017), we expect that weighted Bellman backups based on ensembles can handle error propagation more effectively. Indeed, in our experiments, we find that ensemble-based weighted Bellman backups give rise to more stable training and improve the data-efficiency of various off-policy RL algorithms.

6.2 COMPUTATION OVERHEAD

One can expect that there is additional computation overhead from introducing ensembles. When we have N ensemble agents, our method requires N× inferences for the weighted Bellman backups and 2N× inferences (N for actors and N for critics). However, we remark that our method can be more computationally efficient because it is parallelizable. Also, as shown in Figure 3(c), the gains from SUNRISE cannot be achieved by simply increasing the number of updates/parameters.
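To make the contrast drawn in Section 6.1 concrete, here is a minimal sketch of the two weighting schemes side by side; `delta_model` stands in for DisCor's learned error network Δψ and is our illustrative assumption.

```python
import torch

def discor_weight(delta_model, s, a, gamma=0.99, temperature=10.0):
    """DisCor: w(s, a) proportional to exp(-gamma * delta(s, a) / T).
    Requires training an auxiliary network to track cumulative discounted
    Bellman errors, which itself carries approximation error."""
    return torch.exp(-gamma * delta_model(s, a) / temperature)

def ensemble_weight(target_qs, temperature=10.0):
    """Ensemble-based weight of Eq. (5): obtained for free from the
    target-Q ensemble's disagreement, with no auxiliary error model."""
    return torch.sigmoid(-target_qs.std(dim=0) * temperature) + 0.5
```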
7 CONCLUSION

In this paper, we present ensemble-based weighted Bellman backups, which are compatible with various off-policy RL algorithms. By re-weighting target Q-values based on uncertainty estimates, we stabilize and improve the learning process on both continuous and discrete control benchmarks. Additionally, we introduce SUNRISE, a simple unified ensemble method, which integrates the proposed weighted Bellman backups with bootstrapping with random initialization and UCB exploration to handle various issues in off-policy RL algorithms. Our experiments show that SUNRISE consistently improves the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, and outperforms state-of-the-art RL algorithms for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments. We hope that SUNRISE could be useful to other relevant topics such as sim-to-real transfer (Tobin et al., 2017), imitation learning (Torabi et al., 2018), understanding the connection between on-policy and off-policy RL (Schulman et al., 2017), offline RL (Agarwal et al., 2020), and planning (Srinivas et al., 2018; Tamar et al., 2016).

A SUNRISE: SOFT ACTOR-CRITIC

Background. SAC (Haarnoja et al., 2018) is a state-of-the-art off-policy algorithm for continuous control problems. SAC learns a policy, $\pi_\phi(a|s)$, and a critic, $Q_\theta(s, a)$, and aims to maximize a weighted objective of the reward and the policy entropy, $\mathbb{E}_{s_t, a_t \sim \pi}\big[\sum_t \gamma^{t-1} r_t + \alpha \mathcal{H}(\pi_\phi(\cdot|s_t))\big]$. To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters θ, is updated by minimizing the following soft Bellman residual:

$$\mathcal{L}^{\text{SAC}}_{\text{critic}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\big[\mathcal{L}_Q(\tau_t, \theta)\big],$$

$$\mathcal{L}_Q(\tau_t, \theta) = \Big( Q_\theta(s_t, a_t) - r_t - \gamma\, \mathbb{E}_{a_{t+1} \sim \pi_\phi}\big[ Q_{\bar\theta}(s_{t+1}, a_{t+1}) - \alpha \log \pi_\phi(a_{t+1}|s_{t+1}) \big] \Big)^2,$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, $\bar\theta$ are the delayed parameters, and α is a temperature parameter. At the soft policy improvement step, the policy π with its parameter φ is updated by minimizing the following objective:

$$\mathcal{L}^{\text{SAC}}_{\text{actor}}(\phi) = \mathbb{E}_{s_t \sim \mathcal{B}}\big[\mathcal{L}_\pi(s_t, \phi)\big], \ \text{where} \ \mathcal{L}_\pi(s_t, \phi) = \mathbb{E}_{a_t \sim \pi_\phi}\big[ \alpha \log \pi_\phi(a_t|s_t) - Q_\theta(s_t, a_t) \big].$$

We remark that this corresponds to minimizing the Kullback-Leibler divergence between the policy and a Boltzmann distribution induced by the current soft Q-function.

SUNRISE without UCB exploration. For SUNRISE without UCB exploration, we use the random inference proposed in Bootstrapped DQN (Osband et al., 2016a), which selects the index of a policy uniformly at random and generates actions from the selected actor for the duration of that episode (see Line 3 in Algorithm 2).

Algorithm 2 SUNRISE: SAC version (random inference)
1: for each iteration do
2:   // RANDOM INFERENCE
3:   Select an index of policy using $\hat{i} \sim \text{Uniform}\{1, \cdots, N\}$
4:   for each timestep t do
5:     Get the action from the selected policy: $a_t \sim \pi_{\phi_{\hat{i}}}(a|s_t)$
6:     Collect state $s_{t+1}$ and reward $r_t$ from the environment by taking action $a_t$
7:     Sample bootstrap masks $M_t = \{m_{t,i} \sim \text{Bernoulli}(\beta) \mid i \in \{1, \ldots, N\}\}$
8:     Store transitions $\tau_t = (s_t, a_t, s_{t+1}, r_t)$ and masks in replay buffer $\mathcal{B} \leftarrow \mathcal{B} \cup \{(\tau_t, M_t)\}$
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample random minibatch $\{(\tau_j, M_j)\}_{j=1}^{B} \sim \mathcal{B}$
13:    for each agent i do
14:      Update the Q-function by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, \mathcal{L}_{WQ}(\tau_j, \theta_i)$
15:      Update the policy by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, \mathcal{L}_\pi(s_j, \phi_i)$
16:    end for
17:  end for
18: end for

B EXTENSION TO RAINBOW DQN

B.1 PRELIMINARIES: RAINBOW DQN

Background. The DQN algorithm (Mnih et al., 2015) learns a Q-function, which is modeled as a neural network with parameters θ, by minimizing the following Bellman residual:

$$\mathcal{L}^{\text{DQN}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\Big[\Big( Q_\theta(s_t, a_t) - r_t - \gamma \max_a Q_{\bar\theta}(s_{t+1}, a) \Big)^2\Big], \quad (7)$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, and $\bar\theta$ are the delayed parameters. Even though Rainbow DQN integrates several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017), applying SUNRISE to Rainbow DQN can be described based on the standard DQN algorithm. For the exposition, we refer the reader to Hessel et al. (2018) for more detailed explanations of Rainbow DQN.

Algorithm 3 SUNRISE: Rainbow version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Choose the action that maximizes UCB: $a_t = \arg\max_{a_{t,i} \in A} Q_{\text{mean}}(s_t, a_{t,i}) + \lambda Q_{\text{std}}(s_t, a_{t,i})$
5:     Collect state $s_{t+1}$ and reward $r_t$ from the environment by taking action $a_t$
6:     Sample bootstrap masks $M_t = \{m_{t,i} \sim \text{Bernoulli}(\beta) \mid i \in \{1, \ldots, N\}\}$
7:     Store transitions $\tau_t = (s_t, a_t, s_{t+1}, r_t)$ and masks in replay buffer $\mathcal{B} \leftarrow \mathcal{B} \cup \{(\tau_t, M_t)\}$
8:   end for
9:   // UPDATE Q-FUNCTIONS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
10:  for each gradient step do
11:    Sample random minibatch $\{(\tau_j, M_j)\}_{j=1}^{B} \sim \mathcal{B}$
12:    for each agent i do
13:      Update the Q-function by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, \mathcal{L}^{\text{DQN}}_{WQ}(\tau_j, \theta_i)$
14:    end for
15:  end for
16: end for

B.2 SUNRISE: RAINBOW DQN

Bootstrap with random initialization. Formally, we consider an ensemble of N Q-functions, i.e., $\{Q_{\theta_i}\}_{i=1}^N$, where $\theta_i$ denotes the parameters of the i-th Q-function (each Q-function has a unique target Q-function). To train the ensemble of Q-functions, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between Q-functions through two simple ideas. First, we initialize the model parameters of all Q-functions with random parameter values to induce an initial diversity in the models. Second, we apply different samples to train each Q-function. Specifically, for each Q-function i in each timestep t, we draw binary masks $m_{t,i}$ from the Bernoulli distribution with parameter β ∈ (0, 1], and store them in the replay buffer. Then, when updating the model parameters of the Q-functions, we multiply the bootstrap mask with each objective function.

Weighted Bellman backup. Since conventional Q-learning is based on the Bellman backup in Equation (7), it can be affected by error propagation, i.e., error in the target Q-function $Q_{\bar\theta}(s_{t+1}, a_{t+1})$ gets propagated into the Q-function $Q_\theta(s_t, a_t)$ at the current state. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence.
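For reference, a minimal sketch of the standard (unweighted) target in Equation (7), which the weighted backup below modifies; the batch layout and callables are our illustrative assumptions.

```python
import torch

def dqn_loss(q, q_target, batch, gamma=0.99):
    """Eq. (7): squared TD error against the max-over-actions target.
    q(s) and q_target(s) return per-action values of shape (B, n_actions);
    a is a (B,) tensor of integer action indices."""
    s, a, r, s_next = batch
    with torch.no_grad():
        target = r + gamma * q_target(s_next).max(dim=-1).values
    chosen = q(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    return (chosen - target).pow(2)  # per-transition loss, reduced later
```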
To mitigate this issue, for each Q-function i, we consider a weighted Bellman backup as follows:

$$\mathcal{L}^{\text{DQN}}_{WQ}(\tau_t, \theta_i) = w(s_{t+1}) \Big( Q_{\theta_i}(s_t, a_t) - r_t - \gamma \max_a Q_{\bar\theta_i}(s_{t+1}, a) \Big)^2,$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, and $w(s)$ is a confidence weight based on the ensemble of target Q-functions:

$$w(s) = \sigma\big( -\bar{Q}_{\text{std}}(s) * T \big) + 0.5, \quad (8)$$

where T > 0 is a temperature, σ is the sigmoid function, and $\bar{Q}_{\text{std}}(s)$ is the empirical standard deviation of all target Q-functions $\{\max_a Q_{\bar\theta_i}(s, a)\}_{i=1}^N$. Note that the confidence weight is bounded in [0.5, 1.0] because the standard deviation is always positive (we find that it is empirically stable to set the minimum value of the weight to 0.5). The proposed objective $\mathcal{L}^{\text{DQN}}_{WQ}$ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. Note that we combine the proposed weighted Bellman backup with prioritized replay (Schaul et al., 2016) by multiplying both weights into the Bellman backups.

UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action

$$a_t = \arg\max_a \big\{ Q_{\text{mean}}(s_t, a) + \lambda Q_{\text{std}}(s_t, a) \big\}, \quad (9)$$

where $Q_{\text{mean}}(s, a)$ and $Q_{\text{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^N$, and λ > 0 is a hyperparameter. This inference method can encourage exploration by adding an exploration bonus (i.e., the standard deviation Qstd) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). This inference method was originally proposed in Chen et al. (2017) for efficient exploration in DQN, but we further extend it to Rainbow DQN. For evaluation, we approximate the maximum a posteriori action by choosing the action that maximizes the mean of the Q-functions, i.e., $a_t = \arg\max_a \{Q_{\text{mean}}(s_t, a)\}$. The full procedure is summarized in Algorithm 3.

C IMPLEMENTATION DETAILS FOR TOY REGRESSION TASKS

We evaluate the quality of uncertainty estimates from an ensemble of neural networks on a toy regression task. To this end, we generate twenty training samples drawn as $y = x^3 + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 3^2)$, and train an ensemble of ten regression networks using bootstrap with random initialization. The regression network is a fully-connected neural network with 2 hidden layers and 50 rectified linear units in each layer. For the bootstrap, we draw the binary masks from the Bernoulli distribution with mean β = 0.3. As uncertainty estimates, we measure the empirical variance of the networks' predictions. As shown in Figure 1(b), the ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples (a code sketch of this setup appears at the end of Appendix G).

D EXPERIMENTAL SETUPS AND RESULTS: OPENAI GYM

Environments. We evaluate the performance of SUNRISE on four complex environments based on the standard benchmarking environments from OpenAI Gym (Brockman et al., 2016); we used the reference implementation at https://github.com/WilsonWangTHU/mbbl (Wang et al., 2019). Note that we do not use the modified Cheetah environment from PETS (Chua et al., 2018) (denoted as Cheetah in POPLIN (Wang & Ba, 2020)) because it includes additional information in observations.

Training details.
We consider a combination of SAC and SUNRISE using the publicly released implementation repository (https://github.com/vitchyr/rlkit) without any modifications of hyperparameters or architectures. For our method, the temperature for weighted Bellman backups is chosen from T ∈ {10, 20, 50}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 5, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., β = 1) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization.

Learning curves. Figure 4 shows the learning curves on all environments. One can note that SUNRISE consistently improves the performance of SAC by a large margin.

Effects of ensembles. Figure 5 shows the learning curves of SUNRISE with varying values of ensemble size on all environments. The performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5.

E EXPERIMENTAL SETUPS AND RESULTS: NOISY REWARD

DisCor. DisCor (Kumar et al., 2020) was proposed to prevent the error propagation issue in Q-learning. In addition to a standard Q-learning, DisCor trains an error model $\Delta_\psi(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as follows: $w(s, a) \propto \exp\big(-\gamma \Delta_\psi(s, a)/T\big)$, where γ is a discount factor and T is a temperature. Following the setup in Kumar et al. (2020), we use a network with one more hidden layer than the corresponding Q-network as the error model, and chose T = 10 for all experiments. We update the temperature via a moving average and use a learning rate of 0.0003. We use the SAC algorithm as the RL objective coupled with DisCor and build on top of the publicly released implementation repository (https://github.com/vitchyr/rlkit).

F EXPERIMENTAL SETUPS AND RESULTS: DEEPMIND CONTROL SUITE

Training details. We consider a combination of RAD and SUNRISE using the publicly released implementation repository (https://github.com/MishaLaskin/rad) with a full list of hyperparameters in Table 4. Similar to Laskin et al. (2020), we use the same encoder architecture as in Yarats et al. (2019), and the actor and critic share the same encoder to embed image observations (however, unlike Bootstrapped DQN (Osband et al., 2016a), the agents do not share encoders with each other). For our method, the temperature for weighted Bellman backups is chosen from T ∈ {10, 100}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 5, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., β = 1) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017).
We expect that this is because splitting samples can reduce the sample-efficiency. Also, initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization.

Learning curves. Figures 6(g), 6(h), 6(i), 6(j), 6(k), and 6(l) show the learning curves on all environments. Since RAD already achieves near-optimal performance and the room for improvement is small, we see small but consistent gains from SUNRISE. To verify the effectiveness of SUNRISE more clearly, we consider a combination of SAC and SUNRISE in Figures 6(a), 6(b), 6(c), 6(d), 6(e), and 6(f), where the gain from SUNRISE is more significant.

G EXPERIMENTAL SETUPS AND RESULTS: ATARI GAMES

Training details. We consider a combination of sample-efficient versions of Rainbow DQN and SUNRISE using the publicly released implementation repository (https://github.com/Kaixhin/Rainbow) without any modifications of hyperparameters or architectures. For our method, the temperature for weighted Bellman backups is chosen from T ∈ {10, 40}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., β = 1) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization.

Learning curves. Figures 7, 8, and 9 show the learning curves on all environments.
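As a concrete companion to Appendix C, a minimal sketch of the toy regression setup (bootstrapped ensemble of ten networks, y = x³ + ε with ε ∼ N(0, 3²), Bernoulli masks with β = 0.3); the input range, optimizer settings, and epoch count are our assumptions, not the paper's exact values.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.empty(20, 1).uniform_(-4, 4)        # twenty training inputs
y = x.pow(3) + 3.0 * torch.randn_like(x)      # y = x^3 + eps, eps ~ N(0, 3^2)

def make_net():
    # 2 hidden layers of 50 rectified linear units, as in Appendix C.
    return nn.Sequential(nn.Linear(1, 50), nn.ReLU(),
                         nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, 1))

ensemble = []
masks = torch.bernoulli(torch.full((10, 20, 1), 0.3))  # bootstrap, beta=0.3
for i in range(10):                                     # ten regressors
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = (masks[i] * (net(x) - y).pow(2)).mean()  # masked bootstrap loss
        loss.backward()
        opt.step()
    ensemble.append(net)

x_test = torch.linspace(-6, 6, 200).unsqueeze(-1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])
uncertainty = preds.var(dim=0)   # high away from the training data
```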
1. What is the main contribution of the paper, and how does it address the problem of uncertainty estimates in Q-learning?
2. How does the proposed approach use ensemble uncertainty estimates to provide a weighting on updates in Q-learning?
3. What are the strengths and weaknesses of the experimental evidence provided in the paper?
4. How does the paper's approach compare to other methods that use ensembles in RL, such as bootstrap DQN?
5. Are there any concerns about the stability and convergence of the algorithm, especially when using weighted Bellman updates?
6. How does the paper address the issue of exploration in RL, and how does the proposed approach interact with other benefits previously derived from ensembles?
7. What are some minor comments or suggestions for improving the paper, such as clarifying the terminology used or providing more detailed explanations of certain concepts?
Review
Review

This paper proposes to use uncertainty estimates from an ensemble of action-values to provide a weighting on the updates in Q-learning. The main idea is to use the sigmoid of the negative of this uncertainty in the next state to produce a weight between 0.5 and 1 that down-weights updates with high-uncertainty targets. This uncertainty estimate from the ensemble is also used to improve exploration, in a combined algorithm called Sunrise that leverages learning an ensemble in these two ways.

The idea of using weighted Bellman updates is, as far as I know, novel. The evidence for the idea, however, needs more work. First, the weighted update in Eq. (4) is not motivated from first principles. Second, the empirical evidence is weak because the experiments highlighting the role of the weighting do not demonstrate significant differences.

The first issue is the justification for the approach. The ensemble of Q-learning agents is trained using the weighting derived from that ensemble. There are natural questions as to the interaction between the ensemble uncertainty estimates and the ensemble estimates. Does it result in any instability? What is the final point of convergence? Does it change the solution? But one could argue that this is not much of a problem, since the weighting w(s,a) is always between 0.5 and 1, so it is not that skewed. Then the question arises how much it is helping, and why this small reduction in weight helps. This is particularly important to ask considering the algorithm requires an ensemble to be learned, with subsets of data used for each action-value. There is a lot of effort expended for that weighting.

The experiments do include ablations to examine the effect of these weightings. Unfortunately, the results are inconclusive. The experimental time spent must have been large to get all the results in this paper, across so many environments and algorithms. But the ablations themselves are not sufficiently in-depth to provide insight into the idea and algorithm. The results in Figure 2 are key, since that figure examines Sunrise with and without the weighting. Due to the variance across runs, with only 4 runs, there are large standard errors (and so even larger 95% confidence intervals); it is hard to conclude that the weighting is helping. The additional results in Figure 5 in the appendix have a similar issue. The results in Figure 3, which motivate the exploration utility, are clearer in Cartpole. This provides some motivation for learning ensembles, so they can be used for exploration. But this exploration approach with ensembles is an existing method. The main novelty in this work is the weighting. I highly recommend taking a few domains and carefully studying the impact of the weight. More runs would help for significance, as would a parameter sensitivity analysis to gain insight into the generality of the improvement. Sometimes performance gains come from hyperparameter tuning rather than from the utility of an idea; here, you really want to know if and why this weighting improves performance.

As a more minor comment, Sunrise is pitched as combining three ideas for using ensembles: your weighting, bootstrapping, and UCB exploration. However, I see Sunrise as combining two ideas: weighting and UCB exploration. The Bootstrap DQN approach gives you a way to learn your ensemble of bootstrap models so that it provides a useful uncertainty estimate. Given that ensemble, you can then use it to compute a weighting and an optimistic action. It would be clearer to separate it out that way, rather than saying "Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with other benefits previously derived from ensembles: (a) Bootstrap; (b) UCB Exploration." The bootstrap is arguably not a benefit, but an approach to obtain confidence (uncertainty) estimates.

Minor comments:
- Bootstrap DQN is listed under "Ensemble methods in RL" rather than under "Exploration in RL", but it is an exploration approach.
- "Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence." The terms inconsistency and unstable convergence should be explained, since they seem like technical terms.
- Bellman backup seems to be used to describe the squared error to the expectation over the next action in Equation (2), and then to a stochastic sample of the action in (4). Which is it?
- What is meant by the signal-to-noise in Q-updates?
- A natural baseline to include is a tuned agent that uses random weights between 0.5 and 1 in the update, keeping other parts of Sunrise the same. The ablation removes the weighting altogether, which is also important to include, but it is worthwhile observing whether random weights perform similarly, especially if that agent is tuned.

------------ Update

Thank you for the clear reply. Unfortunately, I remain concerned about the significance of the experiments. I mentioned above that 4 or 5 runs is typically not enough, and because the standard errors are overlapping, the differences could be due to chance. The addition of a result with 10 runs is a good step. But, as part of the reply, the authors state: "Figure 3(a) shows the learning curves of all methods on the SlimHumanoid-ET environment over 10 random seeds. First, one can note that SUNRISE with random weights (red curve) is worse than SUNRISE with the proposed weighted Bellman backups (blue curve). Additionally, even without UCB exploration, SUNRISE with the proposed weighted Bellman backups (purple curve) outperforms all baselines. This implies that the proposed weighted Bellman backups can handle the error propagation effectively even when there is large noise in the reward function." However, if you look at this figure, the error bars all still overlap. 10 random seeds is still not enough. I am also not confident that the issue will be remedied, as the authors additionally state in the rebuttal: "we believe that SUNRISE is evaluated in a broad collection of domains in the RL literature and the performance gap is also noticeable." An insignificant gap across many domains does not tell us anything. Actually, if you took the runs and tried to do significance tests by pooling all the runs across environments, then the result might actually be significant. But, of course, there will be higher variance due to differences in the environments, so it is not obvious this would hold. Nonetheless, this could be a natural next step.
ICLR
Title Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates Abstract Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from low signal and even instability in Q-learning because target values are derived from current Q-estimates, which are often noisy. To mitigate the issue, we propose ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble. We empirically observe that the proposed method stabilizes and improves learning on both continuous and discrete control benchmarks. We also specifically investigate the signal-to-noise aspect by studying environments with noisy rewards, and find that weighted Bellman backups significantly outperform standard Bellman backups. Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with UCB Exploration. By enforcing the diversity between agents using Bootstrap, we show that these different ideas are largely orthogonal and can be fruitfully integrated, together further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both lowdimensional and high-dimensional environments. 1 INTRODUCTION Model-free reinforcement learning (RL), with high-capacity function approximators, such as deep neural networks (DNNs), has been used to solve a variety of sequential decision-making problems, including board games (Silver et al., 2017; 2018), video games (Mnih et al., 2015; Vinyals et al., 2019), and robotic manipulation (Kalashnikov et al., 2018). It has been well established that the above successes are highly sample inefficient (Kaiser et al., 2020). Recently, a lot of progress has been made in more sample-efficient model-free RL algorithms through improvements in off-policy learning both in discrete and continuous domains (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018; Amos et al., 2020). However, standard off-policy RL algorithms can suffer from instability in Q-learning due to error propagation in the Bellman backup, i.e., the errors induced in the target value can lead to an increase in overall error in the Q-function (Kumar et al., 2019; 2020). One way to address the error propagation issue is to use ensemble methods, which combine multiple models of the value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018). For discrete control tasks, double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016) addressed the value overestimation by maintaining two independent estimators of the action values and later extended to continuous control tasks in TD3 (Fujimoto et al., 2018). While most prior work has improved the stability by taking the minimum over Q-functions, this also needlessly loses signal, and we propose an alternative way that utilizes ensembles to estimate uncertainty and provide more stable backups. In this paper, we propose ensemble-based weighted Bellman backups that can be applied to most modern off-policy RL algorithms, such as Q-learning and actor-critic algorithms. Our main idea is to reweight sample transitions based on uncertainty estimates from a Q-ensemble. 
Because prediction errors can be characterized by uncertainty estimates from ensembles (i.e., variance of predictions), as shown in Figure 1(b), we find that the proposed method significantly improves the signal-to-noise in the Q-updates and stabilizes the learning process. Finally, we present a unified framework, coined SUNRISE, that combines our weighted Bellman backups with an inference method that selects actions using the highest upper-confidence bound (UCB) for efficient exploration (Chen et al., 2017). We find that these different ideas can be fruitfully integrated and that they are largely complementary (see Figure 1(a)). We demonstrate the effectiveness of the proposed method using Soft Actor-Critic (SAC; Haarnoja et al. 2018) for continuous control benchmarks (specifically, OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018)) and Rainbow DQN (Hessel et al., 2018) for discrete control benchmarks (specifically, Atari games (Bellemare et al., 2013)). In our experiments, SUNRISE consistently improves the performance of existing off-policy RL methods. Furthermore, we find that the proposed weighted Bellman backups yield improvements in environments with noisy rewards, which have a low signal-to-noise ratio. 2 RELATED WORK Off-policy RL algorithms. Recently, various off-policy RL algorithms have provided large gains in sample-efficiency by reusing past experiences (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018). Rainbow DQN (Hessel et al., 2018) achieved state-of-the-art performance on the Atari games (Bellemare et al., 2013) by combining several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For continuous control tasks, SAC (Haarnoja et al., 2018) achieved state-of-the-art sample-efficiency results by incorporating the maximum entropy framework. Our ensemble method brings orthogonal benefits and is complementary to and compatible with these existing state-of-the-art algorithms. Stabilizing Q-learning. It has been empirically observed that instability in Q-learning can be caused by applying the Bellman backup on the learned value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018; Song et al., 2019; Kim et al., 2019; Kumar et al., 2019; 2020). Following the principle of double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016), the twin-Q trick (Fujimoto et al., 2018) was proposed to handle the overestimation of value functions for continuous control tasks. Song et al. (2019) and Kim et al. (2019) proposed to replace the max operator with Softmax and Mellowmax, respectively, to reduce the overestimation error. Recently, Kumar et al. (2020) handled the error propagation issue by reweighting the Bellman backup based on cumulative Bellman errors. Our method differs in that it utilizes ensembles to estimate uncertainty and provide more stable, higher-signal-to-noise backups. Ensemble methods in RL. Ensemble methods have been studied for different purposes in RL (Wiering & Van Hasselt, 2008; Osband et al., 2016a; Anschel et al., 2017; Agarwal et al., 2020; Lan et al., 2020). Chua et al. (2018) showed that modeling errors in model-based RL can be reduced using an ensemble of dynamics models, and Kurutach et al. (2018) accelerated policy learning by generating imagined experiences from the ensemble of dynamics models. For efficient exploration, Osband et al. (2016a) and Chen et al. 
(2017) also leveraged the ensemble of Q-functions. However, most prior works have studied the various axes of improvement from ensemble methods in isolation, while we propose a unified framework that handles various issues in off-policy RL algorithms. Exploration in RL. To balance exploration and exploitation, several methods, such as the maximum entropy frameworks (Ziebart, 2010; Haarnoja et al., 2018), exploration bonus rewards (Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017; Choi et al., 2019), and randomization (Osband et al., 2016a;b), have been proposed. Despite the success of these exploration methods, a potential drawback is that agents can focus on irrelevant aspects of the environment because these methods do not depend on the rewards. To handle this issue, Chen et al. (2017) proposed an exploration strategy that considers both best estimates (i.e., mean) and uncertainty (i.e., variance) of Q-functions for discrete control tasks. We further extend this strategy to continuous control tasks and show that it can be combined with other techniques. 3 BACKGROUND Reinforcement learning. We consider a standard RL framework where an agent interacts with an environment in discrete time. Formally, at each timestep t, the agent receives a state s_t from the environment and chooses an action a_t based on its policy π. The environment returns a reward r_t and the agent transitions to the next state s_{t+1}. The return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ is the total accumulated reward from timestep t with a discount factor $\gamma \in [0, 1)$. RL then maximizes the expected return. Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy (see Appendix A for further details). To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement step. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters $\theta$, is updated by minimizing the following soft Bellman residual:

$$L^{\mathrm{SAC}}_{\mathrm{critic}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\big[L_Q(\tau_t, \theta)\big], \quad (1)$$
$$L_Q(\tau_t, \theta) = \Big(Q_\theta(s_t, a_t) - r_t - \gamma\,\mathbb{E}_{a_{t+1} \sim \pi_\phi}\big[Q_{\bar\theta}(s_{t+1}, a_{t+1}) - \alpha\log\pi_\phi(a_{t+1}|s_{t+1})\big]\Big)^2, \quad (2)$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, $\bar\theta$ are the delayed parameters, and $\alpha$ is a temperature parameter. At the soft policy improvement step, the policy $\pi$ with parameters $\phi$ is updated by minimizing the following objective:

$$L^{\mathrm{SAC}}_{\mathrm{actor}}(\phi) = \mathbb{E}_{s_t \sim \mathcal{B}}\big[L_\pi(s_t, \phi)\big], \quad \text{where} \quad L_\pi(s_t, \phi) = \mathbb{E}_{a_t \sim \pi_\phi}\big[\alpha\log\pi_\phi(a_t|s_t) - Q_\theta(s_t, a_t)\big]. \quad (3)$$

Here, the policy is modeled as a Gaussian with mean and covariance given by neural networks to handle continuous action spaces.
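To make the critic update concrete before the ensemble machinery is introduced, here is a minimal PyTorch sketch of the soft policy evaluation step in Equations (1)-(2). The names `q_net`, `target_q_net`, and `policy.sample` are assumed interfaces (a callable Q-network, its delayed copy, and a policy returning an action with its log-probability), not the released code, and the inner expectation over a_{t+1} is approximated with a single sample.

```python
import torch

def soft_bellman_residual(q_net, target_q_net, policy, batch, gamma=0.99, alpha=0.2):
    """Per-transition soft Bellman residual L_Q of Eq. (2), with the inner
    expectation over a_{t+1} ~ pi_phi approximated by a single sample."""
    s, a, r, s_next = batch                              # tensors from the replay buffer
    with torch.no_grad():                                # targets use the delayed parameters
        a_next, log_pi = policy.sample(s_next)           # a' ~ pi_phi(.|s'), log pi_phi(a'|s')
        target = r + gamma * (target_q_net(s_next, a_next) - alpha * log_pi)
    return (q_net(s, a) - target).pow(2)                 # squared error per transition
```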
4 SUNRISE In this section, we propose the ensemble-based weighted Bellman backups and then introduce SUNRISE: Simple UNified framework for ReInforcement learning using enSEmbles, which combines various ensemble methods. In principle, our method can be used in conjunction with most modern off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018). For exposition, we describe only the SAC version in the main body. The Rainbow DQN version follows the same principles and is fully described in Appendix B. 4.1 WEIGHTED BELLMAN BACKUPS TO IMPROVE SIGNAL-TO-NOISE IN Q-UPDATES Formally, we consider an ensemble of N SAC agents, i.e., $\{Q_{\theta_i}, \pi_{\phi_i}\}_{i=1}^N$, where $\theta_i$ and $\phi_i$ denote the parameters of the i-th soft Q-function and policy.¹ Since conventional Q-learning is based on the Bellman backup in (2), it can be affected by error propagation, i.e., error in the target Q-function $Q_{\bar\theta}(s_{t+1}, a_{t+1})$ gets propagated into the Q-function $Q_\theta(s_t, a_t)$ at the current state. In other words, errors in the previous Q-function introduce "noise" into the learning "signal" (i.e., the true Q-value) of the current Q-function. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence, i.e., errors in the targets may never be corrected, so learning can be slow or fail to converge. To mitigate this issue, for each agent i, we consider a weighted Bellman backup as follows:

$$L_{WQ}(\tau_t, \theta_i) = w(s_{t+1}, a_{t+1})\Big(Q_{\theta_i}(s_t, a_t) - r_t - \gamma\big(Q_{\bar\theta_i}(s_{t+1}, a_{t+1}) - \alpha\log\pi_\phi(a_{t+1}|s_{t+1})\big)\Big)^2, \quad (4)$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $a_{t+1} \sim \pi_\phi(a|s_{t+1})$ is a single sampled next action (replacing the expectation in (2)), and $w(s, a)$ is a confidence weight based on the ensemble of target Q-functions:

$$w(s, a) = \sigma\big(-\bar{Q}_{\mathrm{std}}(s, a)\cdot T\big) + 0.5, \quad (5)$$

where $T > 0$ is a temperature, $\sigma$ is the sigmoid function, and $\bar{Q}_{\mathrm{std}}(s, a)$ is the empirical standard deviation of all target Q-functions $\{Q_{\bar\theta_i}\}_{i=1}^N$. Note that the confidence weight is bounded in [0.5, 1.0] because the standard deviation is always positive.² The proposed objective $L_{WQ}$ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. ¹We remark that each Q-function $Q_{\theta_i}(s, a)$ has a unique target Q-function $Q_{\bar\theta_i}(s, a)$. ²We find that it is empirically stable to set the minimum value of the weight w(s, a) to 0.5.
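As a sketch of Equations (4)-(5), the following computes the confidence weight from a stack of per-member target Q-values and applies it to the squared Bellman error. The tensor shapes and the way the per-agent targets are passed in are assumptions for illustration, not the released implementation.

```python
import torch

def confidence_weight(target_qs, T=20.0):
    """Eq. (5): w = sigmoid(-std * T) + 0.5, bounded in [0.5, 1.0].
    target_qs: tensor of shape (N, B) holding Q_target_i(s', a') for each of
    the N ensemble members and each of the B transitions in the minibatch."""
    q_std = target_qs.std(dim=0)                 # empirical std across the ensemble, shape (B,)
    return torch.sigmoid(-q_std * T) + 0.5

def weighted_bellman_loss(q_i, target_q_i, r, log_pi, target_qs, gamma=0.99, alpha=0.2, T=20.0):
    """Eq. (4) for agent i: the weight uses the whole target ensemble, while the
    backup itself uses agent i's own target network."""
    with torch.no_grad():
        w = confidence_weight(target_qs, T)      # no gradient flows through the weight
        target = r + gamma * (target_q_i - alpha * log_pi)
    return (w * (q_i - target).pow(2)).mean()
```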
4.2 COMBINATION WITH ADDITIONAL TECHNIQUES THAT LEVERAGE ENSEMBLES We integrate the proposed weighted Bellman backup with UCB exploration into a single framework by utilizing the bootstrap with random initialization. Bootstrap with random initialization. To train the ensemble of agents, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between agents through two simple ideas: First, we initialize the model parameters of all agents with random parameter values to induce initial diversity in the models. Second, we apply different samples to train each agent. Specifically, for each SAC agent i at each timestep t, we draw a binary mask $m_{t,i}$ from the Bernoulli distribution with parameter $\beta \in (0, 1]$ and store it in the replay buffer. Then, when updating the model parameters of the agents, we multiply the bootstrap mask with each objective function, i.e., $m_{t,i}L_\pi(s_t, \phi_i)$ and $m_{t,i}L_{WQ}(\tau_t, \theta_i)$ in (3) and (4), respectively. We remark that Osband et al. (2016a) applied this simple technique to train an ensemble of DQN (Mnih et al., 2015) only for discrete control tasks, while we apply it to SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018) for both continuous and discrete tasks with additional techniques. UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration strategy that chooses the action maximizing the upper-confidence bound:

$$a_t = \arg\max_a \big\{Q_{\mathrm{mean}}(s_t, a) + \lambda Q_{\mathrm{std}}(s_t, a)\big\}, \quad (6)$$

where $Q_{\mathrm{mean}}(s, a)$ and $Q_{\mathrm{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^N$, and $\lambda > 0$ is a hyperparameter. This inference method can encourage exploration by adding an exploration bonus (i.e., the standard deviation $Q_{\mathrm{std}}$) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). We remark that this inference method was originally proposed in Chen et al. (2017) for efficient exploration in discrete action spaces. However, in continuous action spaces, finding the action that maximizes the UCB is not straightforward. To handle this issue, we propose a simple approximation scheme, which first generates a candidate set of N actions from the ensemble policies $\{\pi_{\phi_i}\}_{i=1}^N$ and then chooses the candidate that maximizes the UCB (Lines 4-5 in Algorithm 1). For evaluation, we approximate the maximum a posteriori action by averaging the means of the Gaussian distributions modeled by each ensemble policy. The full procedure of our unified framework, coined SUNRISE, is summarized in Algorithm 1.
Algorithm 1 SUNRISE: SAC version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Collect N action samples: A_t = {a_{t,i} ∼ π_{φ_i}(a|s_t) | i ∈ {1, . . . , N}}
5:     Choose the action that maximizes the UCB: a_t = arg max_{a_{t,i} ∈ A_t} Q_mean(s_t, a_{t,i}) + λQ_std(s_t, a_{t,i})
6:     Collect state s_{t+1} and reward r_t from the environment by taking action a_t
7:     Sample bootstrap masks M_t = {m_{t,i} ∼ Bernoulli(β) | i ∈ {1, . . . , N}}
8:     Store transitions τ_t = (s_t, a_t, s_{t+1}, r_t) and masks in the replay buffer: B ← B ∪ {(τ_t, M_t)}
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample a random minibatch {(τ_j, M_j)}_{j=1}^B ∼ B
13:    for each agent i do
14:      Update the Q-function by minimizing (1/B) Σ_{j=1}^B m_{j,i} L_{WQ}(τ_j, θ_i) in (4)
15:      Update the policy by minimizing (1/B) Σ_{j=1}^B m_{j,i} L_π(s_j, φ_i) in (3)
16:    end for
17:  end for
18: end for
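A minimal sketch of the candidate-based UCB rule (Lines 4-5 of Algorithm 1), assuming each policy exposes a `sample` method returning an action (plus its log-probability) and each Q-network is a callable; batching and device handling are omitted.

```python
import torch

def ucb_action(state, policies, q_nets, lam=1.0):
    """Sample one candidate action per ensemble policy, score each candidate by
    mean + lambda * std over the Q-ensemble (Eq. (6)), and return the best one."""
    candidates = [pi.sample(state)[0] for pi in policies]         # N candidate actions
    scores = []
    for a in candidates:
        qs = torch.stack([q(state, a) for q in q_nets])           # N Q-values for this candidate
        scores.append(qs.mean() + lam * qs.std())
    return candidates[int(torch.stack(scores).argmax())]
```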
5 EXPERIMENTAL RESULTS We designed our experiments to answer the following questions:
• Can SUNRISE improve off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018), for both continuous (see Table 1 and Table 2) and discrete (see Table 3) control tasks?
• How crucial are the proposed weighted Bellman backups in (4) for improving the signal-to-noise in Q-updates (see Figure 2)?
• Can UCB exploration be useful for solving tasks with sparse rewards (see Figure 3(b))?
• Is SUNRISE better than a single agent with more updates and parameters (see Figure 3(c))?
• How does ensemble size affect the performance (see Figure 3(d))?
5.1 SETUPS Continuous control tasks. We evaluate SUNRISE on several continuous control tasks using simulated robots from OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018). For OpenAI Gym experiments with proprioceptive inputs (e.g., positions and velocities), we compare to PETS (Chua et al., 2018), a state-of-the-art model-based RL method based on ensembles of dynamics models; POPLIN-P (Wang & Ba, 2020), a state-of-the-art model-based RL method which uses a policy network to generate actions for planning; POPLIN-A (Wang & Ba, 2020), a variant of POPLIN-P which adds noise in the action space; METRPO (Kurutach et al., 2018), a hybrid RL method which augments TRPO (Schulman et al., 2015) using ensembles of dynamics models; and two state-of-the-art model-free RL methods, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018). For our method, we consider a combination of SAC and SUNRISE, as described in Algorithm 1. Following the setup in Wang & Ba (2020) and Wang et al. (2019), we report the mean and standard deviation across ten runs after 200K timesteps on five complex environments: Cheetah, Walker, Hopper, Ant, and SlimHumanoid with early termination (ET). More experimental details and learning curves with 1M timesteps are in Appendix D. For DeepMind Control Suite with image inputs, we compare to PlaNet (Hafner et al., 2019), a model-based RL method which learns a latent dynamics model and uses it for planning; Dreamer (Hafner et al., 2020), a hybrid RL method which utilizes the latent dynamics model to generate synthetic roll-outs; SLAC (Lee et al., 2020), a hybrid RL method which combines the latent dynamics model with SAC; and three state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) or data augmentation (RAD (Laskin et al., 2020) and DrQ (Kostrikov et al., 2020)) to SAC. For our method, we consider a combination of RAD (i.e., SAC with random crop) and SUNRISE. Following the setup in RAD, we report the mean and standard deviation across five runs after 100K (i.e., low-sample regime) and 500K (i.e., asymptotically optimal regime) environment steps on six environments: Finger-spin, Cartpole-swing, Reacher-easy, Cheetah-run, Walker-walk, and Cup-catch. More experimental details and learning curves are in Appendix F. Discrete control benchmarks. For discrete control tasks, we demonstrate the effectiveness of SUNRISE on several Atari games (Bellemare et al., 2013). We compare to SimPLe (Kaiser et al., 2020), a hybrid RL method which updates the policy using only samples generated by a learned dynamics model; Rainbow DQN (Hessel et al., 2018) with modified hyperparameters for sample-efficiency (van Hasselt et al., 2019); a random agent (Kaiser et al., 2020); two state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) and data augmentation (DrQ; Kostrikov et al. 2020) to Rainbow DQN; and human performance reported in Kaiser et al. (2020) and van Hasselt et al. (2019). Following the setup in SimPLe, we report the mean across three runs after 100K interactions (i.e., 400K frames with an action repeat of 4). For our method, we consider a combination of the sample-efficient version of Rainbow DQN and SUNRISE (see Algorithm 3 in Appendix B). More experimental details and learning curves are in Appendix G. For our method, we do not alter any hyperparameters of the original RL algorithms and train five ensemble agents. There are only three additional hyperparameters β, T, and λ for the bootstrap, weighted Bellman backup, and UCB exploration, for which we provide details in Appendices D, F, and G. 5.2 COMPARATIVE EVALUATION OpenAI Gym. Table 1 shows the average returns of evaluation roll-outs for all methods. SUNRISE consistently improves the performance of SAC across all environments and outperforms the model-based RL methods, such as POPLIN-P and PETS, on all environments except Ant and SlimHumanoid-ET. Even though we focus on performance in the small-sample regime because of the recent emphasis on making RL more sample-efficient, we find that the gain from SUNRISE becomes even more significant when training longer (see Figure 3(c) and Appendix D). We remark that SUNRISE is more compute-efficient than modern model-based RL methods, such as POPLIN and PETS, because they also utilize ensembles (of dynamics models) and additionally perform planning to select actions. Namely, SUNRISE is simple to implement, computationally efficient, and readily parallelizable. DeepMind Control Suite. 
As shown in Table 2, SUNRISE also consistently improves the performance of RAD (i.e., SAC with random crop) on all environments from DeepMind Control Suite. This implies that the proposed method can be useful for high-dimensional and complex input observations. Moreover, our method outperforms existing pixel-based RL methods in almost all environments. We remark that SUNRISE can also be combined with DrQ, and we expect that it could achieve better performance on Cartpole-swing and Cup-catch at 100K environment steps. Atari games. We also evaluate SUNRISE on discrete control tasks from the Atari benchmark using Rainbow DQN. Table 3 shows that SUNRISE improves the performance of Rainbow in almost all environments and outperforms the state-of-the-art CURL and SimPLe on 11 out of 26 Atari games. Here, we remark that SUNRISE is also compatible with CURL, which could enable even better performance. These results demonstrate that SUNRISE is a general approach. 5.3 ABLATION STUDY Effects of weighted Bellman backups. To verify the effectiveness of the proposed weighted Bellman backup (4) in improving the signal-to-noise in Q-updates, we evaluate on modified OpenAI Gym environments with noisy rewards. Following Kumar et al. (2019), we add Gaussian noise to the reward function, $r'(s, a) = r(s, a) + z$ with $z \sim \mathcal{N}(0, 1)$, only during training, and report the deterministic ground-truth reward during evaluation. For our method, we also consider a variant of SUNRISE which updates Q-functions without the proposed weighted Bellman backup, to isolate its effect. We compare to DisCor (Kumar et al., 2020), which improves SAC by reweighting the Bellman backup based on estimated cumulative Bellman errors (see Appendix E for more details). Figure 2 shows the learning curves of all methods on OpenAI Gym with noisy rewards. The proposed weighted Bellman backup significantly improves both the sample-efficiency and asymptotic performance of SUNRISE, and outperforms baselines such as SAC and DisCor. One can note that the performance gain due to our weighted Bellman backup becomes more significant in complex environments, such as SlimHumanoid-ET. We remark that DisCor still suffers from error propagation issues in complex environments like SlimHumanoid-ET and Ant because there are approximation errors in estimating the cumulative Bellman errors (see Section 6.1 for a more detailed discussion). These results imply that errors in the target Q-function can be characterized effectively by the proposed confidence weight in Equation (5). We also consider another variant of SUNRISE, which updates Q-functions with weights sampled uniformly at random from [0.5, 1.0]. To stress-test the methods, we increase the noise level by adding Gaussian noise with a larger standard deviation to the reward function: $r'(s, a) = r(s, a) + z$ with $z \sim \mathcal{N}(0, 5)$. Figure 3(a) shows the learning curves of all methods on the SlimHumanoid-ET environment over 10 random seeds. First, one can note that SUNRISE with random weights (red curve) is worse than SUNRISE with the proposed weighted Bellman backups (blue curve). Additionally, even without UCB exploration, SUNRISE with the proposed weighted Bellman backups (purple curve) outperforms all baselines. This implies that the proposed weighted Bellman backups can handle the error propagation effectively even when there is large noise in the reward function.
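The noisy-reward setup above is easy to reproduce with a small environment wrapper. This sketch assumes the classic gym API in which step returns a 4-tuple, and it should only wrap the training environment (evaluation uses the ground-truth reward).

```python
import gym
import numpy as np

class NoisyRewardWrapper(gym.Wrapper):
    """Corrupt the training reward as r'(s, a) = r(s, a) + z, z ~ N(0, sigma^2)."""
    def __init__(self, env, sigma=1.0):
        super().__init__(env)
        self.sigma = sigma

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        noisy_reward = reward + np.random.normal(0.0, self.sigma)
        return obs, noisy_reward, done, info
```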
Effects of UCB exploration. To verify the advantage of UCB exploration in (6), we evaluate on Cartpole-swing with sparse rewards from DeepMind Control Suite. For our method, we consider a variant of SUNRISE which selects actions without UCB exploration. As shown in Figure 3(b), SUNRISE with UCB exploration (blue curve) significantly improves the sample-efficiency on the environment with sparse rewards. Comparison with a single agent with more updates/parameters. One concern with ensemble methods is that their gains may come simply from more gradient updates and parameters. To address this concern, we compare SUNRISE (5 ensemble members using 2-layer MLPs with 256 hidden units each) to a single agent, which consists of 2-layer MLPs with 1024 (and 256) hidden units with 5 updates using different random minibatches. Figure 3(c) shows the learning curves on SlimHumanoid-ET, where SUNRISE outperforms all baselines. This implies that the gains from SUNRISE cannot be achieved by simply increasing the number of updates/parameters. More experimental results on other environments are also available in Appendix D. Effects of ensemble size. We analyze the effect of the ensemble size N on the Ant environment from OpenAI Gym. Figure 3(d) shows that the performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5. Thus, we use five ensemble agents for all experiments. More experimental results on other environments are also available in Appendix D, where the overall trend is similar. 6 DISCUSSION 6.1 CONNECTION WITH DISCOR Kumar et al. (2020) show that naive Bellman backups can suffer from slow learning in certain environments, requiring exponentially many updates. To handle this problem, they propose weighted Bellman backups that make steady learning progress by inducing some optimal data distribution (see Kumar et al. (2020) for more details). Specifically, in addition to standard Q-learning, DisCor trains an error model $\Delta_\psi(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as follows: $w(s, a) \propto \exp\big(-\gamma\Delta_\psi(s, a)/T\big)$, where $\gamma$ is a discount factor and $T$ is a temperature. However, we remark that DisCor can still suffer from error propagation issues because there is also an approximation error in estimating the cumulative Bellman errors. Therefore, we consider an alternative approach that utilizes the uncertainty from ensembles. Because it has been observed that ensembles can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples (Lakshminarayanan et al., 2017), we expect that weighted Bellman backups based on ensembles can handle error propagation more effectively. Indeed, in our experiments, we find that ensemble-based weighted Bellman backups give rise to more stable training and improve the data-efficiency of various off-policy RL algorithms.
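For contrast with Equation (5), here is a sketch of the DisCor-style weight. How the proportionality in w(s, a) ∝ exp(-γΔψ(s, a)/T) is resolved is an implementation choice; a minibatch normalization is used below, and the released DisCor code may normalize differently.

```python
import torch

def discor_weight(delta, gamma=0.99, T=10.0):
    """DisCor confidence weight: proportional to exp(-gamma * Delta_psi(s, a) / T),
    where delta holds the error model's estimates for a minibatch of transitions.
    The proportionality is resolved here by normalizing the weights so that their
    mean over the minibatch is 1."""
    logits = -gamma * delta / T
    return torch.softmax(logits, dim=0) * delta.numel()
```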
6.2 COMPUTATION OVERHEAD One can expect additional computation overhead from introducing ensembles. With N ensemble agents, our method requires N× inferences for the weighted Bellman backups and 2N× inferences for action selection (N for actors and N for critics). However, we remark that our method can be made computationally efficient because it is parallelizable. Also, as shown in Figure 3(c), the gains from SUNRISE cannot be achieved by simply increasing the number of updates/parameters. 7 CONCLUSION In this paper, we presented ensemble-based weighted Bellman backups, which are compatible with various off-policy RL algorithms. By re-weighting target Q-values based on uncertainty estimates, we stabilize and improve the learning process on both continuous and discrete control benchmarks. Additionally, we introduced SUNRISE, a simple unified ensemble method, which integrates the proposed weighted Bellman backups with the bootstrap with random initialization and UCB exploration to handle various issues in off-policy RL algorithms. Our experiments show that SUNRISE consistently improves the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, and outperforms state-of-the-art RL algorithms for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments. We hope that SUNRISE will be useful for other relevant topics such as sim-to-real transfer (Tobin et al., 2017), imitation learning (Torabi et al., 2018), understanding the connection between on-policy and off-policy RL (Schulman et al., 2017), offline RL (Agarwal et al., 2020), and planning (Srinivas et al., 2018; Tamar et al., 2016). A SUNRISE: SOFT ACTOR-CRITIC Background. SAC (Haarnoja et al., 2018) is a state-of-the-art off-policy algorithm for continuous control problems. SAC learns a policy, $\pi_\phi(a|s)$, and a critic, $Q_\theta(s, a)$, and aims to maximize a weighted objective of the reward and the policy entropy, $\mathbb{E}_{s_t, a_t \sim \pi}\big[\sum_t \gamma^{t-1} r_t + \alpha\mathcal{H}(\pi_\phi(\cdot|s_t))\big]$. To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement step. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters $\theta$, is updated by minimizing the following soft Bellman residual:

$$L^{\mathrm{SAC}}_{\mathrm{critic}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\big[L_Q(\tau_t, \theta)\big], \qquad L_Q(\tau_t, \theta) = \Big(Q_\theta(s_t, a_t) - r_t - \gamma\,\mathbb{E}_{a_{t+1} \sim \pi_\phi}\big[Q_{\bar\theta}(s_{t+1}, a_{t+1}) - \alpha\log\pi_\phi(a_{t+1}|s_{t+1})\big]\Big)^2,$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, $\bar\theta$ are the delayed parameters, and $\alpha$ is a temperature parameter. At the soft policy improvement step, the policy $\pi$ with parameters $\phi$ is updated by minimizing the following objective:

$$L^{\mathrm{SAC}}_{\mathrm{actor}}(\phi) = \mathbb{E}_{s_t \sim \mathcal{B}}\big[L_\pi(s_t, \phi)\big], \quad \text{where} \quad L_\pi(s_t, \phi) = \mathbb{E}_{a_t \sim \pi_\phi}\big[\alpha\log\pi_\phi(a_t|s_t) - Q_\theta(s_t, a_t)\big].$$

We remark that this corresponds to minimizing the Kullback-Leibler divergence between the policy and a Boltzmann distribution induced by the current soft Q-function. SUNRISE without UCB exploration. For SUNRISE without UCB exploration, we use the random inference proposed in Bootstrapped DQN (Osband et al., 2016a), which selects a policy index uniformly at random and generates actions from the selected actor for the duration of that episode (see Line 3 in Algorithm 2).
Algorithm 2 SUNRISE: SAC version (random inference)
1: for each iteration do
2:   // RANDOM INFERENCE
3:   Select a policy index: î ∼ Uniform{1, . . . , N}
4:   for each timestep t do
5:     Get the action from the selected policy: a_t ∼ π_{φ_î}(a|s_t)
6:     Collect state s_{t+1} and reward r_t from the environment by taking action a_t
7:     Sample bootstrap masks M_t = {m_{t,i} ∼ Bernoulli(β) | i ∈ {1, . . . , N}}
8:     Store transitions τ_t = (s_t, a_t, s_{t+1}, r_t) and masks in the replay buffer: B ← B ∪ {(τ_t, M_t)}
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample a random minibatch {(τ_j, M_j)}_{j=1}^B ∼ B
13:    for each agent i do
14:      Update the Q-function by minimizing (1/B) Σ_{j=1}^B m_{j,i} L_{WQ}(τ_j, θ_i)
15:      Update the policy by minimizing (1/B) Σ_{j=1}^B m_{j,i} L_π(s_j, φ_i)
16:    end for
17:  end for
18: end for
B EXTENSION TO RAINBOW DQN B.1 PRELIMINARIES: RAINBOW DQN Background. The DQN algorithm (Mnih et al., 2015) learns a Q-function, which is modeled as a neural network with parameters $\theta$, by minimizing the following Bellman residual:

$$L^{\mathrm{DQN}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\Big[\big(Q_\theta(s_t, a_t) - r_t - \gamma\max_a Q_{\bar\theta}(s_{t+1}, a)\big)^2\Big], \quad (7)$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, and $\bar\theta$ are the delayed parameters. Even though Rainbow DQN integrates several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017), applying SUNRISE to Rainbow DQN can be described based on the standard DQN algorithm. For exposition, we refer the reader to Hessel et al. (2018) for more detailed explanations of Rainbow DQN.
Algorithm 3 SUNRISE: Rainbow version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Choose the action that maximizes the UCB: a_t = arg max_a Q_mean(s_t, a) + λQ_std(s_t, a)
5:     Collect state s_{t+1} and reward r_t from the environment by taking action a_t
6:     Sample bootstrap masks M_t = {m_{t,i} ∼ Bernoulli(β) | i ∈ {1, . . . , N}}
7:     Store transitions τ_t = (s_t, a_t, s_{t+1}, r_t) and masks in the replay buffer: B ← B ∪ {(τ_t, M_t)}
8:   end for
9:   // UPDATE Q-FUNCTIONS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
10:  for each gradient step do
11:    Sample a random minibatch {(τ_j, M_j)}_{j=1}^B ∼ B
12:    for each agent i do
13:      Update the Q-function by minimizing (1/B) Σ_{j=1}^B m_{j,i} L^{DQN}_{WQ}(τ_j, θ_i)
14:    end for
15:  end for
16: end for
B.2 SUNRISE: RAINBOW DQN Bootstrap with random initialization. Formally, we consider an ensemble of N Q-functions, i.e., $\{Q_{\theta_i}\}_{i=1}^N$, where $\theta_i$ denotes the parameters of the i-th Q-function.³ To train the ensemble of Q-functions, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between Q-functions through two simple ideas: First, we initialize the model parameters of all Q-functions with random parameter values to induce initial diversity in the models. Second, we apply different samples to train each Q-function. Specifically, for each Q-function i at each timestep t, we draw a binary mask $m_{t,i}$ from the Bernoulli distribution with parameter $\beta \in (0, 1]$ and store it in the replay buffer. Then, when updating the model parameters of the Q-functions, we multiply the bootstrap mask with each objective function. ³Here, we remark that each Q-function has a unique target Q-function.
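A sketch of the bootstrap machinery described above, assuming per-agent, per-transition losses are available as a tensor; the names and shapes are illustrative only.

```python
import torch

def sample_bootstrap_masks(num_agents, beta=0.5):
    """Draw one Bernoulli(beta) mask per agent for the current transition;
    the masks are stored in the replay buffer alongside the transition."""
    return torch.bernoulli(torch.full((num_agents,), beta))

def masked_objective(losses, masses_unused=None, masks=None):
    """Lines 13-15 of Algorithms 2 and 3: multiply each agent's per-transition
    loss by its stored mask and average over the minibatch of size B.
    losses, masks: tensors of shape (N, B)."""
    return (masks * losses).mean(dim=1)   # (1/B) * sum_j m_{j,i} * L(tau_j), one value per agent
```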
Weighted Bellman backup. Since conventional Q-learning is based on the Bellman backup in Equation (7), it can be affected by error propagation, i.e., error in the target Q-function $Q_{\bar\theta}(s_{t+1}, a_{t+1})$ gets propagated into the Q-function $Q_\theta(s_t, a_t)$ at the current state. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence. To mitigate this issue, for each Q-function i, we consider a weighted Bellman backup as follows:

$$L^{\mathrm{DQN}}_{WQ}(\tau_t, \theta_i) = w(s_{t+1})\Big(Q_{\theta_i}(s_t, a_t) - r_t - \gamma\max_a Q_{\bar\theta_i}(s_{t+1}, a)\Big)^2,$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, and $w(s)$ is a confidence weight based on the ensemble of target Q-functions:

$$w(s) = \sigma\big(-\bar{Q}_{\mathrm{std}}(s)\cdot T\big) + 0.5, \quad (8)$$

where $T > 0$ is a temperature, $\sigma$ is the sigmoid function, and $\bar{Q}_{\mathrm{std}}(s)$ is the empirical standard deviation of all target Q-functions $\{\max_a Q_{\bar\theta_i}(s, a)\}_{i=1}^N$. Note that the confidence weight is bounded in [0.5, 1.0] because the standard deviation is always positive.⁴ The proposed objective $L^{\mathrm{DQN}}_{WQ}$ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. Note that we combine the proposed weighted Bellman backup with prioritized replay (Schaul et al., 2016) by multiplying both weights into the Bellman backups. ⁴We find that it is empirically stable to set the minimum value of the weight w(s) to 0.5. UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration strategy that chooses the action maximizing the upper-confidence bound:

$$a_t = \arg\max_a \big\{Q_{\mathrm{mean}}(s_t, a) + \lambda Q_{\mathrm{std}}(s_t, a)\big\}, \quad (9)$$

where $Q_{\mathrm{mean}}(s, a)$ and $Q_{\mathrm{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^N$, and $\lambda > 0$ is a hyperparameter. This inference method can encourage exploration by adding an exploration bonus (i.e., the standard deviation $Q_{\mathrm{std}}$) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). This inference method was originally proposed in Chen et al. (2017) for efficient exploration in DQN, and we further extend it to Rainbow DQN. For evaluation, we approximate the maximum a posteriori action by choosing the action that maximizes the mean of the Q-functions, i.e., $a_t = \arg\max_a Q_{\mathrm{mean}}(s_t, a)$. The full procedure is summarized in Algorithm 3. C IMPLEMENTATION DETAILS FOR TOY REGRESSION TASKS We evaluate the quality of uncertainty estimates from an ensemble of neural networks on a toy regression task. To this end, we generate twenty training samples drawn as $y = x^3 + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 3^2)$, and train an ensemble of ten regression networks using the bootstrap with random initialization. Each regression network is a fully-connected neural network with 2 hidden layers and 50 rectified linear units in each layer. For the bootstrap, we draw the binary masks from the Bernoulli distribution with mean β = 0.3. As uncertainty estimates, we measure the empirical variance of the networks' predictions. As shown in Figure 1(b), the ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples.
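The toy regression experiment of Appendix C can be sketched in a few lines; the input range of the training samples and the optimizer settings below are assumptions, since the text does not specify them.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0); np.random.seed(0)
x = np.random.uniform(-4.0, 4.0, size=(20, 1)).astype(np.float32)   # assumed input range
y = (x ** 3 + np.random.normal(0.0, 3.0, size=x.shape)).astype(np.float32)

def make_net():  # 2 hidden layers with 50 ReLU units, as in Appendix C
    return nn.Sequential(nn.Linear(1, 50), nn.ReLU(),
                         nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, 1))

ensemble, beta = [make_net() for _ in range(10)], 0.3
for net in ensemble:
    mask = np.random.binomial(1, beta, size=20).astype(bool)        # Bernoulli(0.3) bootstrap mask
    if not mask.any():
        mask[0] = True                                              # guard against an empty subset
    xs, ys = torch.from_numpy(x[mask]), torch.from_numpy(y[mask])
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        ((net(xs) - ys) ** 2).mean().backward()
        opt.step()

x_test = torch.linspace(-6.0, 6.0, 100).unsqueeze(1)
preds = torch.stack([net(x_test).detach() for net in ensemble])
print(preds.var(dim=0).squeeze())   # predictive variance grows away from the training data
```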
D EXPERIMENTAL SETUPS AND RESULTS: OPENAI GYM Environments. We evaluate the performance of SUNRISE on four complex environments based on the standard benchmarking environments⁵ from OpenAI Gym (Brockman et al., 2016). Note that we do not use the modified Cheetah environment from PETS (Chua et al., 2018) (denoted as Cheetah in POPLIN (Wang & Ba, 2020)) because it includes additional information in the observations. Training details. We consider a combination of SAC and SUNRISE using the publicly released implementation repository (https://github.com/vitchyr/rlkit) without any modifications to hyperparameters and architectures. For our method, the temperature for the weighted Bellman backups is chosen from T ∈ {10, 20, 50}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 5, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., β = 1) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization. Learning curves. Figure 4 shows the learning curves on all environments. One can note that SUNRISE consistently improves the performance of SAC by a large margin. Effects of ensembles. Figure 5 shows the learning curves of SUNRISE with varying ensemble sizes on all environments. The performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5. ⁵We used the reference implementation at https://github.com/WilsonWangTHU/mbbl (Wang et al., 2019). E EXPERIMENTAL SETUPS AND RESULTS: NOISY REWARD DisCor. DisCor (Kumar et al., 2020) was proposed to prevent the error propagation issue in Q-learning. In addition to standard Q-learning, DisCor trains an error model $\Delta_\psi(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as follows: $w(s, a) \propto \exp\big(-\gamma\Delta_\psi(s, a)/T\big)$, where $\gamma$ is a discount factor and $T$ is a temperature. Following the setup in Kumar et al. (2020), we take a network with 1 extra hidden layer compared to the corresponding Q-network as the error model, and chose T = 10 for all experiments. We update the temperature via a moving average and use a learning rate of 0.0003. We use the SAC algorithm as the RL objective coupled with DisCor and build on top of the publicly released implementation repository (https://github.com/vitchyr/rlkit). F EXPERIMENTAL SETUPS AND RESULTS: DEEPMIND CONTROL SUITE Training details. We consider a combination of RAD and SUNRISE using the publicly released implementation repository (https://github.com/MishaLaskin/rad), with a full list of hyperparameters in Table 4. Similar to Laskin et al. (2020), we use the same encoder architecture as in Yarats et al. (2019), and the actor and critic share the same encoder to embed image observations.⁶ For our method, the temperature for the weighted Bellman backups is chosen from T ∈ {10, 100}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 5, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., β = 1) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). 
We expect that this is because splitting training samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization. Learning curves. Figures 6(g), 6(h), 6(i), 6(j), 6(k), and 6(l) show the learning curves on all environments. Since RAD already achieves near-optimal performance and the room for improvement is small, we see small but consistent gains from SUNRISE. To verify the effectiveness of SUNRISE more clearly, we consider a combination of SAC and SUNRISE in Figures 6(a), 6(b), 6(c), 6(d), 6(e), and 6(f), where the gain from SUNRISE is more significant. G EXPERIMENTAL SETUPS AND RESULTS: ATARI GAMES Training details. We consider a combination of the sample-efficient version of Rainbow DQN and SUNRISE using the publicly released implementation repository (https://github.com/Kaixhin/Rainbow) without any modifications to hyperparameters and architectures. For our method, the temperature for the weighted Bellman backups is chosen from T ∈ {10, 40}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., β = 1) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization. ⁶However, we remark that each agent does not share the encoders, unlike Bootstrapped DQN (Osband et al., 2016a). Learning curves. Figures 7, 8, and 9 show the learning curves on all environments.
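For reference, the per-benchmark search spaces described in Appendices D, F, and G can be collected in a small config; the dictionary below is illustrative only, and the names are hypothetical rather than taken from the released code.

```python
# Illustrative SUNRISE hyperparameter grids; names are hypothetical.
SUNRISE_GRIDS = {
    "openai_gym": {"T": [10, 20, 50], "beta": [0.5, 1.0], "lambda": [1, 5, 10]},
    "dm_control": {"T": [10, 100],    "beta": [0.5, 1.0], "lambda": [1, 5, 10]},
    "atari":      {"T": [10, 40],     "beta": [0.5, 1.0], "lambda": [1, 10]},
}
NUM_ENSEMBLE_AGENTS = 5   # fixed across all experiments
```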
1. What is the main contribution of the paper in the field of reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle noise and explore the environment?
3. What are the weaknesses of the paper, especially regarding the use of bootstrapping masks and random initialization?
4. How does the reviewer assess the clarity and comprehensiveness of the empirical evaluation provided in the paper?
5. What are the limitations of the proposed approach in terms of computational complexity, and how does it compare to other methods in this regard?
Review
Review
= Overview =
The paper proposes SUNRISE, an approach to reinforcement learning that leverages ensembles of agents to build more robust RL updates. SUNRISE comprises a number of similar agents (in the paper, SAC agents) that perform parallel updates. Sample transitions for which there is larger variability (across the ensemble) in the estimates of the next-step Q-values are down-weighted in the computation of the loss, thus potentially rendering the learned Q-function more robust to noise. The proposed approach is then combined with bootstrapping masks and UCB exploration, and is shown to outperform a number of state-of-the-art approaches in several benchmark domains from the RL literature.
= Positive points =
The paper is clearly written. Additionally, the proposed approach is sensible and the empirical evaluation is, in my perspective, quite comprehensive: SUNRISE is evaluated in a broad collection of domains in the RL literature.
= Negative points =
The paper would benefit, in my opinion, from additional discussion regarding: (a) the impact of the use of bootstrap with random initialization; and (b) the computational complexity of SUNRISE (even if the paper does briefly discuss the latter in Section 5.2).
= Comments =
I quite enjoyed reading the paper. The problem addressed is a relevant problem in RL, and the approach proposed in the paper is, in my opinion, simultaneously simple and sensible. The paper provides a solid empirical evaluation, covering a broad range of domains and comparing with multiple state-of-the-art approaches from the literature. The results show that SUNRISE compares favorably -- in terms of performance -- with several of these other methods in multiple domains. There are, however, two aspects that I would like to see discussed at greater length. On one hand, the paper proposes the use of bootstrapping masks and random initialization to induce variety in the ensemble. While the paper introduces both bootstrapping and UCB exploration as a "useful complement", it seems to me that this is quite central to the performance of the algorithm. Is this correct? In fact, without this device, the agents in the ensemble would essentially train from the same replay buffer, so variability would only come from the initialization. It is a pity that this particular element isn't included in the ablation study, for I would like to gain a clearer understanding of how critical this device is for the performance of the algorithm. One other aspect that I would like to see discussed is the computational complexity of the proposed approach. The paper remarks that SUNRISE is more computationally efficient than competing methods such as POPLIN and PETS, and being an ensemble method, I expect it to be naturally heavier than non-ensemble approaches such as standard SAC. However, I would like to understand how much more computation such a method involves. In particular, the computation of the Bellman weights requires multiple passes through the critic network, as does the UCB exploration policy, and I was wondering how much more computation this entails. In spite of the above aspects, I again remark that I quite enjoyed the paper.
ICLR
Title Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates Abstract Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from low signal and even instability in Q-learning because target values are derived from current Q-estimates, which are often noisy. To mitigate the issue, we propose ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble. We empirically observe that the proposed method stabilizes and improves learning on both continuous and discrete control benchmarks. We also specifically investigate the signal-to-noise aspect by studying environments with noisy rewards, and find that weighted Bellman backups significantly outperform standard Bellman backups. Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with UCB Exploration. By enforcing the diversity between agents using Bootstrap, we show that these different ideas are largely orthogonal and can be fruitfully integrated, together further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both lowdimensional and high-dimensional environments. 1 INTRODUCTION Model-free reinforcement learning (RL), with high-capacity function approximators, such as deep neural networks (DNNs), has been used to solve a variety of sequential decision-making problems, including board games (Silver et al., 2017; 2018), video games (Mnih et al., 2015; Vinyals et al., 2019), and robotic manipulation (Kalashnikov et al., 2018). It has been well established that the above successes are highly sample inefficient (Kaiser et al., 2020). Recently, a lot of progress has been made in more sample-efficient model-free RL algorithms through improvements in off-policy learning both in discrete and continuous domains (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018; Amos et al., 2020). However, standard off-policy RL algorithms can suffer from instability in Q-learning due to error propagation in the Bellman backup, i.e., the errors induced in the target value can lead to an increase in overall error in the Q-function (Kumar et al., 2019; 2020). One way to address the error propagation issue is to use ensemble methods, which combine multiple models of the value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018). For discrete control tasks, double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016) addressed the value overestimation by maintaining two independent estimators of the action values and later extended to continuous control tasks in TD3 (Fujimoto et al., 2018). While most prior work has improved the stability by taking the minimum over Q-functions, this also needlessly loses signal, and we propose an alternative way that utilizes ensembles to estimate uncertainty and provide more stable backups. In this paper, we propose ensemble-based weighted Bellman backups that can be applied to most modern off-policy RL algorithms, such as Q-learning and actor-critic algorithms. Our main idea is to reweight sample transitions based on uncertainty estimates from a Q-ensemble. 
Because prediction errors can be characterized by uncertainty estimates from ensembles (i.e., variance of predictions) as shown in Figure 1(b), we find that the proposed method significantly improves the signal-to-noise in the Q-updates and stabilizes the learning process. Finally, we present a unified framework, coined SUNRISE, that combines our weighted Bellman backups with an inference method that selects actions using highest upper-confidence bounds (UCB) for efficient exploration (Chen et al., 2017). We find that these different ideas can be fruitfully integrated, and they are largely complementary (see Figure 1(a)). We demonstrate the effectiveness of the proposed method using Soft Actor-Critic (SAC; Haarnoja et al. 2018) for continuous control benchmarks (specifically, OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018)) and Rainbow DQN (Hessel et al., 2018) for discrete control benchmarks (specifically, Atari games (Bellemare et al., 2013)). In our experiments, SUNRISE consistently improves the performance of existing off-policy RL methods. Furthermore, we find that the proposed weighted Bellman backups yield improvements in environments with noisy reward, which have a low signal-to-noise ratio. 2 RELATED WORK Off-policy RL algorithms. Recently, various off-policy RL algorithms have provided large gains in sample-efficiency by reusing past experiences (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018). Rainbow DQN (Hessel et al., 2018) achieved state-of-the-art performance on the Atari games (Bellemare et al., 2013) by combining several techniques, such as double Qlearning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For continuous control tasks, SAC (Haarnoja et al., 2018) achieved state-of-the-art sample-efficiency results by incorporating the maximum entropy framework. Our ensemble method brings orthogonal benefits and is complementary and compatible with these existing state-of-the-art algorithms. Stabilizing Q-learning. It has been empirically observed that instability in Q-learning can be caused by applying the Bellman backup on the learned value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018; Song et al., 2019; Kim et al., 2019; Kumar et al., 2019; 2020). By following the principle of double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016), twin-Q trick (Fujimoto et al., 2018) was proposed to handle the overestimation of value functions for continuous control tasks. Song et al. (2019) and Kim et al. (2019) proposed to replace the max operator with Softmax and Mellowmax, respectively, to reduce the overestimation error. Recently, Kumar et al. (2020) handled the error propagation issue by reweighting the Bellman backup based on cumulative Bellman errors. However, our method is different in that we propose an alternative way that also utilizes ensembles to estimate uncertainty and provide more stable, higher-signal-to-noise backups. Ensemble methods in RL. Ensemble methods have been studied for different purposes in RL (Wiering & Van Hasselt, 2008; Osband et al., 2016a; Anschel et al., 2017; Agarwal et al., 2020; Lan et al., 2020). Chua et al. (2018) showed that modeling errors in model-based RL can be reduced using an ensemble of dynamics models, and Kurutach et al. (2018) accelerated policy learning by generating imagined experiences from the ensemble of dynamics models. For efficient exploration, Osband et al. (2016a) and Chen et al. 
(2017) also leveraged the ensemble of Q-functions. However, most prior works have studied the various axes of improvements from ensemble methods in isolation, while we propose a unified framework that handles various issues in off-policy RL algorithms. Exploration in RL. To balance exploration and exploitation, several methods, such as the maximum entropy frameworks (Ziebart, 2010; Haarnoja et al., 2018), exploration bonus rewards (Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017; Choi et al., 2019) and randomization (Osband et al., 2016a;b), have been proposed. Despite the success of these exploration methods, a potential drawback is that agents can focus on irrelevant aspects of the environment because these methods do not depend on the rewards. To handle this issue, Chen et al. (2017) proposed an exploration strategy that considers both best estimates (i.e., mean) and uncertainty (i.e., variance) of Q-functions for discrete control tasks. We further extend this strategy to continuous control tasks and show that it can be combined with other techniques. 3 BACKGROUND Reinforcement learning. We consider a standard RL framework where an agent interacts with an environment in discrete time. Formally, at each timestep t, the agent receives a state st from the environment and chooses an action at based on its policy π. The environment returns a reward rt and the agent transitions to the next state st+1. The return Rt = ∑∞ k=0 γ krt+k is the total accumulated rewards from timestep t with a discount factor γ ∈ [0, 1). RL then maximizes the expected return. Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages the robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy (see Appendix A for further details). To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters θ, is updated by minimizing the following soft Bellman residual: LSACcritic(θ) = Eτt∼B[LQ(τt, θ)], (1) LQ(τt, θ) = ( Qθ(st, at)− rt − γEat+1∼πφ [ Qθ̄(st+1, at+1)− α log πφ(at+1|st+1) ])2 , (2) where τt = (st, at, rt, st+1) is a transition, B is a replay buffer, θ̄ are the delayed parameters, and α is a temperature parameter. At the soft policy improvement step, the policy π with its parameter φ is updated by minimizing the following objective: LSACactor(φ) = Est∼B [ Lπ(st, φ) ] , where Lπ(st, φ) = Eat∼πφ [ α log πφ(at|st)−Qθ(st, at) ] . (3) Here, the policy is modeled as a Gaussian with mean and covariance given by neural networks to handle continuous action spaces. 4 SUNRISE In this section, we propose the ensemble-based weighted Bellman backups, and then introduce SUNRISE: Simple UNified framework for ReInforcement learning using enSEmbles, which combines various ensemble methods. In principle, our method can be used in conjunction with most modern off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018). For the exposition, we describe only the SAC version in the main body. The Rainbow DQN version follows the same principles and is fully described in Appendix B. 
4.1 WEIGHTED BELLMAN BACKUPS TO IMPROVE SIGNAL-TO-NOISE IN Q-UPDATES Formally, we consider an ensemble of N SAC agents, i.e., {Qθi , πφi}Ni=1, where θi and φi denote the parameters of the i-th soft Q-function and policy.1 Since conventional Q-learning is based on the Bellman backup in (2), it can be affected by error propagation. I.e., error in the target Q-function Qθ̄(st+1, at+1) gets propagated into the Q-function Qθ(st, at) at the current state. In other words, errors in the previous Q-function induce the “noise” to the learning “signal” (i.e., true Q-value) of the current Q-function. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence. To mitigate this issue, for each agent i, we consider a weighted Bellman backup as follows: LWQ (τt, θi) = w (st+1, at+1) ( Qθi(st, at)− rt − γ ( Qθ̄i(st+1, at+1)− α log πφ(at+1|st+1) ))2 , (4) 1We remark that each Q-function Qθi(s, a) has a unique target Q-function Qθ̄i(s, a). where τt = (st, at, rt, st+1) is a transition, at+1 ∼ πφ(a|st), and w(s, a) is a confidence weight based on ensemble of target Q-functions: w(s, a) = σ ( −Q̄std(s, a) ∗ T ) + 0.5, (5) where T > 0 is a temperature, σ is the sigmoid function, and Q̄std(s, a) is the empirical standard deviation of all target Q-functions {Qθ̄i} N i=1. Note that the confidence weight is bounded in [0.5, 1.0] because standard deviation is always positive.2 The proposed objective LWQ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. 4.2 COMBINATION WITH ADDITIONAL TECHNIQUES THAT LEVERAGE ENSEMBLES We integrate the proposed weighted Bellman backup with UCB exploration into a single framework by utilizing the bootstrap with random initialization. Bootstrap with random initialization. To train the ensemble of agents, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces the diversity between agents through two simple ideas: First, we initialize the model parameters of all agents with random parameter values for inducing an initial diversity in the models. Second, we apply different samples to train each agent. Specifically, for each SAC agent i in each timestep t, we draw the binary masks mt,i from the Bernoulli distribution with parameter β ∈ (0, 1], and store them in the replay buffer. Then, when updating the model parameters of agents, we multiply the bootstrap mask to each objective function, such as: mt,iLπ (st, φi) and mt,iLWQ(τt, θi) in (3) and (4), respectively. We remark that Osband et al. (2016a) applied this simple technique to train an ensemble of DQN (Mnih et al., 2015) only for discrete control tasks, while we apply to SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018) for both continuous and discrete tasks with additional techniques. UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, by following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action that maximizes at = max a {Qmean(st, a) + λQstd(st, a)}, (6) where Qmean(s, a) and Qstd(s, a) are the empirical mean and standard deviation of all Q-functions {Qθi}Ni=1, and the λ > 0 is a hyperparameter. 
4.2 COMBINATION WITH ADDITIONAL TECHNIQUES THAT LEVERAGE ENSEMBLES

We integrate the proposed weighted Bellman backup with UCB exploration into a single framework by utilizing the bootstrap with random initialization.

Bootstrap with random initialization. To train the ensemble of agents, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between agents through two simple ideas: First, we initialize the model parameters of all agents with random parameter values to induce an initial diversity in the models. Second, we apply different samples to train each agent. Specifically, for each SAC agent i at each timestep t, we draw a binary mask $m_{t,i}$ from the Bernoulli distribution with parameter $\beta \in (0, 1]$, and store it in the replay buffer. Then, when updating the model parameters of the agents, we multiply the bootstrap mask with each objective function, i.e., $m_{t,i} L_\pi(s_t, \phi_i)$ and $m_{t,i} L_{WQ}(\tau_t, \theta_i)$ in (3) and (4), respectively. We remark that Osband et al. (2016a) applied this simple technique to train an ensemble of DQN (Mnih et al., 2015) only for discrete control tasks, while we apply it to SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018) for both continuous and discrete tasks with additional techniques.

UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action

$$a_t = \arg\max_a \big\{Q_{\text{mean}}(s_t, a) + \lambda Q_{\text{std}}(s_t, a)\big\}, \tag{6}$$

where $Q_{\text{mean}}(s, a)$ and $Q_{\text{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^{N}$, and $\lambda > 0$ is a hyperparameter. This inference method can encourage exploration by adding an exploration bonus (i.e., the standard deviation $Q_{\text{std}}$) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). We remark that this inference method was originally proposed in Chen et al. (2017) for efficient exploration in discrete action spaces. However, in continuous action spaces, finding the action that maximizes the UCB is not straightforward. To handle this issue, we propose a simple approximation scheme, which first generates a candidate set of N actions from the ensemble policies $\{\pi_{\phi_i}\}_{i=1}^{N}$, and then chooses the candidate that maximizes the UCB (Lines 4-5 in Algorithm 1). For evaluation, we approximate the maximum a posteriori action by averaging the means of the Gaussian distributions modeled by each ensemble policy. The full procedure of our unified framework, coined SUNRISE, is summarized in Algorithm 1, with a code sketch of the exploration step after it.

5 EXPERIMENTAL RESULTS

We designed our experiments to answer the following questions:
• Can SUNRISE improve off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018), for both continuous (see Table 1 and Table 2) and discrete (see Table 3) control tasks?
• How crucial are the proposed weighted Bellman backups in (4) for improving the signal-to-noise in Q-updates (see Figure 2)?
• Can UCB exploration be useful for solving tasks with sparse rewards (see Figure 3(b))?
• Is SUNRISE better than a single agent with more updates and parameters (see Figure 3(c))?
• How does ensemble size affect the performance (see Figure 3(d))?

Algorithm 1 SUNRISE: SAC version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Collect N action samples: $\mathcal{A}_t = \{a_{t,i} \sim \pi_{\phi_i}(a|s_t) \mid i \in \{1, \ldots, N\}\}$
5:     Choose the action that maximizes UCB: $a_t = \arg\max_{a_{t,i} \in \mathcal{A}_t} Q_{\text{mean}}(s_t, a_{t,i}) + \lambda Q_{\text{std}}(s_t, a_{t,i})$
6:     Collect state $s_{t+1}$ and reward $r_t$ from the environment by taking action $a_t$
7:     Sample bootstrap masks $M_t = \{m_{t,i} \sim \text{Bernoulli}(\beta) \mid i \in \{1, \ldots, N\}\}$
8:     Store transition $\tau_t = (s_t, a_t, s_{t+1}, r_t)$ and masks in replay buffer: $\mathcal{B} \leftarrow \mathcal{B} \cup \{(\tau_t, M_t)\}$
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample random minibatch $\{(\tau_j, M_j)\}_{j=1}^{B} \sim \mathcal{B}$
13:    for each agent i do
14:      Update the Q-function by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, L_{WQ}(\tau_j, \theta_i)$ in (4)
15:      Update the policy by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, L_\pi(s_j, \phi_i)$ in (3)
16:    end for
17:  end for
18: end for
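The exploration step (Lines 4-5 of Algorithm 1) admits the following minimal sketch for a single state; the list-of-modules ensemble representation is again an illustrative assumption.

```python
import torch

def ucb_action(actors, critics, s, lam):
    """Lines 4-5 of Algorithm 1 for a single state s of shape (1, state_dim):
    draw one candidate from each ensemble policy, then pick the candidate
    with the highest UCB value from Eq. (6)."""
    candidates = [actor.sample(s)[0] for actor in actors]  # N candidate actions
    scores = []
    for a in candidates:
        qs = torch.stack([q(s, a) for q in critics])  # (N, 1, 1) Q-values
        scores.append((qs.mean(dim=0) + lam * qs.std(dim=0)).item())
    return candidates[max(range(len(scores)), key=scores.__getitem__)]
```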
5.1 SETUPS

Continuous control tasks. We evaluate SUNRISE on several continuous control tasks using simulated robots from OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018). For OpenAI Gym experiments with proprioceptive inputs (e.g., positions and velocities), we compare to PETS (Chua et al., 2018), a state-of-the-art model-based RL method based on ensembles of dynamics models; POPLIN-P (Wang & Ba, 2020), a state-of-the-art model-based RL method which uses a policy network to generate actions for planning; POPLIN-A (Wang & Ba, 2020), a variant of POPLIN-P which adds noise in the action space; METRPO (Kurutach et al., 2018), a hybrid RL method which augments TRPO (Schulman et al., 2015) using ensembles of dynamics models; and two state-of-the-art model-free RL methods, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018). For our method, we consider a combination of SAC and SUNRISE, as described in Algorithm 1. Following the setup in Wang & Ba (2020) and Wang et al. (2019), we report the mean and standard deviation across ten runs after 200K timesteps on five complex environments: Cheetah, Walker, Hopper, Ant, and SlimHumanoid with early termination (ET). More experimental details and learning curves with 1M timesteps are in Appendix D.

For DeepMind Control Suite with image inputs, we compare to PlaNet (Hafner et al., 2019), a model-based RL method which learns a latent dynamics model and uses it for planning; Dreamer (Hafner et al., 2020), a hybrid RL method which utilizes the latent dynamics model to generate synthetic roll-outs; SLAC (Lee et al., 2020), a hybrid RL method which combines the latent dynamics model with SAC; and three state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) or data augmentation (RAD (Laskin et al., 2020) and DrQ (Kostrikov et al., 2020)) to SAC. For our method, we consider a combination of RAD (i.e., SAC with random crop) and SUNRISE. Following the setup in RAD, we report the mean and standard deviation across five runs after 100K (i.e., low sample regime) and 500K (i.e., asymptotically optimal regime) environment steps on six environments: Finger-spin, Cartpole-swing, Reacher-easy, Cheetah-run, Walker-walk, and Cup-catch. More experimental details and learning curves are in Appendix F.

Discrete control benchmarks. For discrete control tasks, we demonstrate the effectiveness of SUNRISE on several Atari games (Bellemare et al., 2013). We compare to SimPLe (Kaiser et al., 2020), a hybrid RL method which updates the policy only using samples generated by a learned dynamics model; Rainbow DQN (Hessel et al., 2018) with modified hyperparameters for sample-efficiency (van Hasselt et al., 2019); a Random agent (Kaiser et al., 2020); two state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) and data augmentation (DrQ; Kostrikov et al. 2020) to Rainbow DQN; and Human performances reported in Kaiser et al. (2020) and van Hasselt et al. (2019). Following the setups in SimPLe, we report the mean across three runs after 100K interactions (i.e., 400K frames with an action repeat of 4). For our method, we consider a combination of the sample-efficient version of Rainbow DQN and SUNRISE (see Algorithm 3 in Appendix B). More experimental details and learning curves are in Appendix G.

For our method, we do not alter any hyperparameters of the original RL algorithms and train five ensemble agents. There are only three additional hyperparameters $\beta$, $T$, and $\lambda$ for bootstrap, weighted Bellman backup, and UCB exploration, for which we provide details in Appendix D, F, and G.

5.2 COMPARATIVE EVALUATION

OpenAI Gym. Table 1 shows the average returns of evaluation roll-outs for all methods. SUNRISE consistently improves the performance of SAC across all environments and outperforms the model-based RL methods, such as POPLIN-P and PETS, on all environments except Ant and SlimHumanoid-ET. Even though we focus on performance after a small number of samples because of the recent emphasis on making RL more sample-efficient, we find that the gain from SUNRISE becomes even more significant when training longer (see Figure 3(c) and Appendix D). We remark that SUNRISE is more compute-efficient than modern model-based RL methods, such as POPLIN and PETS, because those methods also utilize ensembles (of dynamics models) and additionally perform planning to select actions. Namely, SUNRISE is simple to implement, computationally efficient, and readily parallelizable.

DeepMind Control Suite.
As shown in Table 2, SUNRISE also consistently improves the performance of RAD (i.e., SAC with random crop) on all environments from DeepMind Control Suite. This implies that the proposed method can be useful for high-dimensional and complex input observations. Moreover, our method outperforms existing pixel-based RL methods in almost all environments. We remark that SUNRISE can also be combined with DrQ, and we expect that it can achieve better performances on Cartpole-swing and Cup-catch at 100K environment steps.

Atari games. We also evaluate SUNRISE on discrete control tasks from the Atari benchmark using Rainbow DQN. Table 3 shows that SUNRISE improves the performance of Rainbow in almost all environments, and outperforms the state-of-the-art CURL and SimPLe on 11 out of 26 Atari games. Here, we remark that SUNRISE is also compatible with CURL, which could enable even better performance. These results demonstrate that SUNRISE is a general approach.

5.3 ABLATION STUDY

Effects of weighted Bellman backups. To verify the effectiveness of the proposed weighted Bellman backup (4) in improving the signal-to-noise in Q-updates, we evaluate on modified OpenAI Gym environments with noisy rewards. Following Kumar et al. (2019), we add Gaussian noise to the reward function, $r'(s, a) = r(s, a) + z$ with $z \sim \mathcal{N}(0, 1)$, only during training, and report the deterministic ground-truth reward during evaluation. For our method, we also consider a variant of SUNRISE which updates Q-functions without the proposed weighted Bellman backup, to isolate its effect. We compare to DisCor (Kumar et al., 2020), which improves SAC by reweighting the Bellman backup based on estimated cumulative Bellman errors (see Appendix E for more details).

Figure 2 shows the learning curves of all methods on OpenAI Gym with noisy rewards. The proposed weighted Bellman backup significantly improves both the sample-efficiency and the asymptotic performance of SUNRISE, and outperforms baselines such as SAC and DisCor. One can note that the performance gain due to our weighted Bellman backup becomes more significant in complex environments, such as SlimHumanoid-ET. We remark that DisCor still suffers from error propagation issues in complex environments like SlimHumanoid-ET and Ant because there are approximation errors in estimating the cumulative Bellman errors (see Section 6.1 for a more detailed discussion). These results imply that errors in the target Q-function can be effectively characterized by the proposed confidence weight in equation 5.

We also consider another variant of SUNRISE, which updates Q-functions with weights sampled from [0.5, 1.0] uniformly at random. In order to evaluate the performance of SUNRISE under a higher noise rate, we add Gaussian noise with a larger standard deviation to the reward function: $r'(s, a) = r(s, a) + z$, where $z \sim \mathcal{N}(0, 5)$. Figure 3(a) shows the learning curves of all methods on the SlimHumanoid-ET environment over 10 random seeds. First, one can note that SUNRISE with random weights (red curve) is worse than SUNRISE with the proposed weighted Bellman backups (blue curve). Additionally, even without UCB exploration, SUNRISE with the proposed weighted Bellman backups (purple curve) outperforms all baselines. This implies that the proposed weighted Bellman backups can handle error propagation effectively even when there is large noise in the reward function.
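For reference, a minimal sketch of the training-time reward corruption used in this ablation; the classic Gym step interface is an assumption of the sketch, not a statement about the evaluation harness.

```python
import gym
import numpy as np

class NoisyRewardWrapper(gym.Wrapper):
    """Training-time reward corruption: r'(s, a) = r(s, a) + z with
    z ~ N(0, sigma^2); evaluation still reports the ground-truth reward."""
    def __init__(self, env, sigma=1.0):
        super().__init__(env)
        self.sigma = sigma

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward + np.random.normal(0.0, self.sigma), done, info
```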
Effects of UCB exploration. To verify the advantage of UCB exploration in (6), we evaluate on Cartpole-swing with sparse rewards from DeepMind Control Suite. For our method, we consider a variant of SUNRISE which selects actions without UCB exploration. As shown in Figure 3(b), SUNRISE with UCB exploration (blue curve) significantly improves the sample-efficiency on the environment with sparse rewards.

Comparison with a single agent with more updates/parameters. One concern in utilizing the ensemble method is that its gains may come from more gradient updates and parameters. To address this concern, we compare SUNRISE (5 ensembles using 2-layer MLPs with 256 hidden units each) to a single agent, which consists of 2-layer MLPs with 1024 (and 256) hidden units, with 5 updates using different random minibatches. Figure 3(c) shows the learning curves on SlimHumanoid-ET, where SUNRISE outperforms all baselines. This implies that the gains from SUNRISE cannot be achieved by simply increasing the number of updates/parameters. More experimental results on other environments are available in Appendix D.

Effects of ensemble size. We analyze the effects of the ensemble size N on the Ant environment from OpenAI Gym. Figure 3(d) shows that the performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5. Thus, we use five ensemble agents for all experiments. More experimental results on other environments are available in Appendix D, where the overall trend is similar.

6 DISCUSSION

6.1 CONNECTION WITH DISCOR

Kumar et al. (2020) show that naive Bellman backups can suffer from slow learning in certain environments, requiring exponentially many updates. To handle this problem, they propose weighted Bellman backups, which make steady learning progress by inducing some optimal data distribution (see Kumar et al. (2020) for more details). Specifically, in addition to standard Q-learning, DisCor trains an error model $\Delta_\psi(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as follows:

$$w(s, a) \propto \exp\left(-\frac{\gamma \Delta_\psi(s, a)}{T}\right),$$

where $\gamma$ is a discount factor and $T$ is a temperature. However, we remark that DisCor can still suffer from error propagation issues because there is also an approximation error in estimating the cumulative Bellman errors. Therefore, we consider an alternative approach that utilizes the uncertainty from ensembles. Because it has been observed that an ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples (Lakshminarayanan et al., 2017), we expect that weighted Bellman backups based on ensembles can handle error propagation more effectively. Indeed, in our experiments, we find that ensemble-based weighted Bellman backups give rise to more stable training and improve the data-efficiency of various off-policy RL algorithms.
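The contrast between the two weighting rules can be stated in a few lines; this is a sketch of the formulas only, with `delta` standing in for the output of DisCor's separately learned error model.

```python
import torch

def discor_weight(delta, gamma, temperature):
    # DisCor: w(s, a) proportional to exp(-gamma * Delta_psi(s, a) / T),
    # where `delta` comes from an extra learned error model.
    return torch.exp(-gamma * delta / temperature)

def ensemble_weight(q_std, temperature):
    # Ours, Eq. (5): only the ensemble standard deviation is needed, no
    # extra learned model, and the weight is bounded in [0.5, 1.0].
    return torch.sigmoid(-q_std * temperature) + 0.5
```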
6.2 COMPUTATION OVERHEAD

One can expect an additional computation overhead from introducing ensembles. With N ensemble agents, our method requires N× inferences for the weighted Bellman backups and 2N× inferences for action selection (N for the actors and N for the critics). However, we remark that our method can be made computationally efficient because it is parallelizable. Also, as shown in Figure 3(c), the gains from SUNRISE cannot be achieved by simply increasing the number of updates/parameters.

7 CONCLUSION

In this paper, we present the ensemble-based weighted Bellman backups, which are compatible with various off-policy RL algorithms. By re-weighting target Q-values based on uncertainty estimates, we stabilize and improve the learning process on both continuous and discrete control benchmarks. Additionally, we introduce SUNRISE, a simple unified ensemble method, which integrates the proposed weighted Bellman backups with bootstrap with random initialization and UCB exploration to handle various issues in off-policy RL algorithms. Our experiments show that SUNRISE consistently improves the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, and outperforms state-of-the-art RL algorithms for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments. We hope that SUNRISE could be useful for other relevant topics such as sim-to-real transfer (Tobin et al., 2017), imitation learning (Torabi et al., 2018), understanding the connection between on-policy and off-policy RL (Schulman et al., 2017), offline RL (Agarwal et al., 2020), and planning (Srinivas et al., 2018; Tamar et al., 2016).

A SUNRISE: SOFT ACTOR-CRITIC

Background. SAC (Haarnoja et al., 2018) is a state-of-the-art off-policy algorithm for continuous control problems. SAC learns a policy, $\pi_\phi(a|s)$, and a critic, $Q_\theta(s, a)$, and aims to maximize a weighted objective of the reward and the policy entropy,

$$\mathbb{E}_{s_t, a_t \sim \pi}\Big[\textstyle\sum_t \gamma^{t-1} r_t + \alpha \mathcal{H}(\pi_\phi(\cdot|s_t))\Big].$$

To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters $\theta$, is updated by minimizing the following soft Bellman residual:

$$L^{\text{SAC}}_{\text{critic}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\big[L_Q(\tau_t, \theta)\big],$$
$$L_Q(\tau_t, \theta) = \Big(Q_\theta(s_t, a_t) - r_t - \gamma\, \mathbb{E}_{a_{t+1} \sim \pi_\phi}\big[Q_{\bar\theta}(s_{t+1}, a_{t+1}) - \alpha \log \pi_\phi(a_{t+1}|s_{t+1})\big]\Big)^2,$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, $\bar\theta$ are the delayed parameters, and $\alpha$ is a temperature parameter. At the soft policy improvement step, the policy $\pi$ with its parameter $\phi$ is updated by minimizing the following objective:

$$L^{\text{SAC}}_{\text{actor}}(\phi) = \mathbb{E}_{s_t \sim \mathcal{B}}\big[L_\pi(s_t, \phi)\big], \quad \text{where} \quad L_\pi(s_t, \phi) = \mathbb{E}_{a_t \sim \pi_\phi}\big[\alpha \log \pi_\phi(a_t|s_t) - Q_\theta(s_t, a_t)\big].$$

We remark that this corresponds to minimizing the Kullback-Leibler divergence between the policy and a Boltzmann distribution induced by the current soft Q-function.

SUNRISE without UCB exploration. For SUNRISE without UCB exploration, we use the random inference proposed in Bootstrapped DQN (Osband et al., 2016a), which selects an index of the policy uniformly at random and generates actions from the selected actor for the duration of that episode (see Line 3 in Algorithm 2).

Algorithm 2 SUNRISE: SAC version (random inference)
1: for each iteration do
2:   // RANDOM INFERENCE
3:   Select an index of policy: $\hat{i} \sim \text{Uniform}\{1, \ldots, N\}$
4:   for each timestep t do
5:     Get the action from the selected policy: $a_t \sim \pi_{\phi_{\hat{i}}}(a|s_t)$
6:     Collect state $s_{t+1}$ and reward $r_t$ from the environment by taking action $a_t$
7:     Sample bootstrap masks $M_t = \{m_{t,i} \sim \text{Bernoulli}(\beta) \mid i \in \{1, \ldots, N\}\}$
8:     Store transition $\tau_t = (s_t, a_t, s_{t+1}, r_t)$ and masks in replay buffer: $\mathcal{B} \leftarrow \mathcal{B} \cup \{(\tau_t, M_t)\}$
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample random minibatch $\{(\tau_j, M_j)\}_{j=1}^{B} \sim \mathcal{B}$
13:    for each agent i do
14:      Update the Q-function by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, L_{WQ}(\tau_j, \theta_i)$
15:      Update the policy by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, L_\pi(s_j, \phi_i)$
16:    end for
17:  end for
18: end for
B EXTENSION TO RAINBOW DQN

B.1 PRELIMINARIES: RAINBOW DQN

Background. The DQN algorithm (Mnih et al., 2015) learns a Q-function, which is modeled as a neural network with parameters $\theta$, by minimizing the following Bellman residual:

$$L^{\text{DQN}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\Big[\big(Q_\theta(s_t, a_t) - r_t - \gamma \max_a Q_{\bar\theta}(s_{t+1}, a)\big)^2\Big], \tag{7}$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, and $\bar\theta$ are the delayed parameters. Even though Rainbow DQN integrates several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017), applying SUNRISE to Rainbow DQN can be described based on the standard DQN algorithm. For exposition, we refer the reader to Hessel et al. (2018) for more detailed explanations of Rainbow DQN.

Algorithm 3 SUNRISE: Rainbow version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Choose the action that maximizes UCB: $a_t = \arg\max_{a \in \mathcal{A}} Q_{\text{mean}}(s_t, a) + \lambda Q_{\text{std}}(s_t, a)$
5:     Collect state $s_{t+1}$ and reward $r_t$ from the environment by taking action $a_t$
6:     Sample bootstrap masks $M_t = \{m_{t,i} \sim \text{Bernoulli}(\beta) \mid i \in \{1, \ldots, N\}\}$
7:     Store transition $\tau_t = (s_t, a_t, s_{t+1}, r_t)$ and masks in replay buffer: $\mathcal{B} \leftarrow \mathcal{B} \cup \{(\tau_t, M_t)\}$
8:   end for
9:   // UPDATE Q-FUNCTIONS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
10:  for each gradient step do
11:    Sample random minibatch $\{(\tau_j, M_j)\}_{j=1}^{B} \sim \mathcal{B}$
12:    for each agent i do
13:      Update the Q-function by minimizing $\frac{1}{B}\sum_{j=1}^{B} m_{j,i}\, L^{\text{DQN}}_{WQ}(\tau_j, \theta_i)$
14:    end for
15:  end for
16: end for

B.2 SUNRISE: RAINBOW DQN

Bootstrap with random initialization. Formally, we consider an ensemble of N Q-functions, i.e., $\{Q_{\theta_i}\}_{i=1}^{N}$, where $\theta_i$ denotes the parameters of the i-th Q-function; here, each Q-function has a unique target Q-function. To train the ensemble of Q-functions, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between Q-functions through two simple ideas: First, we initialize the model parameters of all Q-functions with random parameter values to induce an initial diversity in the models. Second, we apply different samples to train each Q-function. Specifically, for each Q-function i at each timestep t, we draw a binary mask $m_{t,i}$ from the Bernoulli distribution with parameter $\beta \in (0, 1]$, and store it in the replay buffer. Then, when updating the model parameters of the Q-functions, we multiply the bootstrap mask with each objective function.
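A minimal sketch of the bootstrap masking; the shapes, (B,) per-sample losses and (B, N) masks, are illustrative assumptions about the buffer layout.

```python
import torch

def sample_masks(num_agents, beta):
    """One Bernoulli(beta) mask per agent for the current transition; the
    masks are stored in the replay buffer together with the transition."""
    return torch.bernoulli(torch.full((num_agents,), beta))

def masked_objective(per_sample_losses, masks, agent_idx):
    """Multiply agent i's per-sample objective by its bootstrap masks,
    i.e., the (1/B) * sum_j m_{j,i} L(tau_j, theta_i) terms."""
    return (masks[:, agent_idx] * per_sample_losses).mean()
```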
Weighted Bellman backup. Since conventional Q-learning is based on the Bellman backup in equation 7, it can be affected by error propagation, i.e., error in the target Q-function $Q_{\bar\theta}(s_{t+1}, a_{t+1})$ gets propagated into the Q-function $Q_\theta(s_t, a_t)$ at the current state. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence. To mitigate this issue, for each Q-function i, we consider a weighted Bellman backup as follows:

$$L^{\text{DQN}}_{WQ}(\tau_t, \theta_i) = w(s_{t+1})\big(Q_{\theta_i}(s_t, a_t) - r_t - \gamma \max_a Q_{\bar\theta_i}(s_{t+1}, a)\big)^2,$$

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, and $w(s)$ is a confidence weight based on the ensemble of target Q-functions:

$$w(s) = \sigma\big(-\bar{Q}_{\text{std}}(s) \cdot T\big) + 0.5, \tag{8}$$

where $T > 0$ is a temperature, $\sigma$ is the sigmoid function, and $\bar{Q}_{\text{std}}(s)$ is the empirical standard deviation of all target Q-functions $\{\max_a Q_{\bar\theta_i}(s, a)\}_{i=1}^{N}$. Note that the confidence weight is bounded in [0.5, 1.0] because the standard deviation is always positive; we find that it is empirically stable to set the minimum value of the weight to 0.5. The proposed objective $L^{\text{DQN}}_{WQ}$ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. Note that we combine the proposed weighted Bellman backup with prioritized replay (Schaul et al., 2016) by multiplying both weights into the Bellman backups.

UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action

$$a_t = \arg\max_a \big\{Q_{\text{mean}}(s_t, a) + \lambda Q_{\text{std}}(s_t, a)\big\}, \tag{9}$$

where $Q_{\text{mean}}(s, a)$ and $Q_{\text{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^{N}$, and $\lambda > 0$ is a hyperparameter. This inference method can encourage exploration by adding an exploration bonus (i.e., the standard deviation $Q_{\text{std}}$) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). This inference method was originally proposed in Chen et al. (2017) for efficient exploration in DQN, and we further extend it to Rainbow DQN. For evaluation, we approximate the maximum a posteriori action by choosing the action that maximizes the mean of the Q-functions, i.e., $a_t = \arg\max_a Q_{\text{mean}}(s_t, a)$. The full procedure is summarized in Algorithm 3.

C IMPLEMENTATION DETAILS FOR TOY REGRESSION TASKS

We evaluate the quality of uncertainty estimates from an ensemble of neural networks on a toy regression task. To this end, we generate twenty training samples drawn as $y = x^3 + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 3^2)$, and train ten ensembles of regression networks using bootstrap with random initialization. The regression network is a fully-connected neural network with 2 hidden layers and 50 rectified linear units in each layer. For bootstrap, we draw the binary masks from the Bernoulli distribution with mean $\beta = 0.3$. As uncertainty estimates, we measure the empirical variance of the networks' predictions. As shown in Figure 1(b), the ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples.
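The whole toy task fits in a short script. The following is a minimal sketch of the setup above; the input range, learning rate, and number of gradient steps are illustrative assumptions not specified in the text.

```python
import numpy as np
import torch
import torch.nn as nn

# Twenty samples of y = x^3 + eps, eps ~ N(0, 3^2).
rng = np.random.RandomState(0)
x = rng.uniform(-4, 4, size=(20, 1)).astype(np.float32)
y = x ** 3 + rng.normal(0, 3, size=x.shape).astype(np.float32)
x_t, y_t = torch.from_numpy(x), torch.from_numpy(y)

# Ten 2x50 ReLU networks with random initialization and Bernoulli(0.3) masks.
nets = [nn.Sequential(nn.Linear(1, 50), nn.ReLU(),
                      nn.Linear(50, 50), nn.ReLU(),
                      nn.Linear(50, 1)) for _ in range(10)]
masks = [torch.bernoulli(torch.full((20, 1), 0.3)) for _ in nets]

for net, m in zip(nets, masks):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = (m * (net(x_t) - y_t) ** 2).mean()  # bootstrap-masked MSE
        loss.backward()
        opt.step()

# Ensemble variance as the uncertainty estimate; it grows off the support.
with torch.no_grad():
    x_test = torch.linspace(-6, 6, 200).unsqueeze(1)
    uncertainty = torch.stack([net(x_test) for net in nets]).var(dim=0)
```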
D EXPERIMENTAL SETUPS AND RESULTS: OPENAI GYM

Environments. We evaluate the performance of SUNRISE on four complex environments based on the standard benchmarking environments from OpenAI Gym (Brockman et al., 2016); we used the reference implementation at https://github.com/WilsonWangTHU/mbbl (Wang et al., 2019). Note that we do not use the modified Cheetah environment from PETS (Chua et al., 2018) (denoted as Cheetah in POPLIN (Wang & Ba, 2020)) because it includes additional information in the observations.

Training details. We consider a combination of SAC and SUNRISE using the publicly released implementation repository (https://github.com/vitchyr/rlkit) without any modifications to hyperparameters and architectures. For our method, the temperature for the weighted Bellman backups is chosen from $T \in \{10, 20, 50\}$, the mean of the Bernoulli distribution is chosen from $\beta \in \{0.5, 1.0\}$, the penalty parameter is chosen from $\lambda \in \{1, 5, 10\}$, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., $\beta = 1$) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization.

Learning curves. Figure 4 shows the learning curves on all environments. One can note that SUNRISE consistently improves the performance of SAC by a large margin.

Effects of ensembles. Figure 5 shows the learning curves of SUNRISE with varying values of the ensemble size on all environments. The performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5.

E EXPERIMENTAL SETUPS AND RESULTS: NOISY REWARD

DisCor. DisCor (Kumar et al., 2020) was proposed to prevent the error propagation issue in Q-learning. In addition to standard Q-learning, DisCor trains an error model $\Delta_\psi(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as

$$w(s, a) \propto \exp\left(-\frac{\gamma \Delta_\psi(s, a)}{T}\right),$$

where $\gamma$ is a discount factor and $T$ is a temperature. Following the setups in Kumar et al. (2020), we take a network with one more hidden layer than the corresponding Q-network as the error model, and chose T = 10 for all experiments. We update the temperature via a moving average and use a learning rate of 0.0003. We use the SAC algorithm as the RL objective coupled with DisCor and build on top of the publicly released implementation repository (https://github.com/vitchyr/rlkit).

F EXPERIMENTAL SETUPS AND RESULTS: DEEPMIND CONTROL SUITE

Training details. We consider a combination of RAD and SUNRISE using the publicly released implementation repository (https://github.com/MishaLaskin/rad) with a full list of hyperparameters in Table 4. Similar to Laskin et al. (2020), we use the same encoder architecture as in Yarats et al. (2019), and the actor and critic share the same encoder to embed image observations (we remark, however, that each agent does not share the encoders, unlike Bootstrapped DQN (Osband et al., 2016a)). For our method, the temperature for the weighted Bellman backups is chosen from $T \in \{10, 100\}$, the mean of the Bernoulli distribution is chosen from $\beta \in \{0.5, 1.0\}$, the penalty parameter is chosen from $\lambda \in \{1, 5, 10\}$, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., $\beta = 1$) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017).
We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization.

Learning curves. Figures 6(g), 6(h), 6(i), 6(j), 6(k), and 6(l) show the learning curves on all environments. Since RAD already achieves near-optimal performance and the room for improvement is small, we see small but consistent gains from SUNRISE. To verify the effectiveness of SUNRISE more clearly, we consider a combination of SAC and SUNRISE in Figures 6(a), 6(b), 6(c), 6(d), 6(e), and 6(f), where the gain from SUNRISE is more significant.

G EXPERIMENTAL SETUPS AND RESULTS: ATARI GAMES

Training details. We consider a combination of the sample-efficient version of Rainbow DQN and SUNRISE using the publicly released implementation repository (https://github.com/Kaixhin/Rainbow) without any modifications to hyperparameters and architectures. For our method, the temperature for the weighted Bellman backups is chosen from $T \in \{10, 40\}$, the mean of the Bernoulli distribution is chosen from $\beta \in \{0.5, 1.0\}$, the penalty parameter is chosen from $\lambda \in \{1, 10\}$, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initialization (i.e., $\beta = 1$) usually achieves the best performance in most cases, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target value also differs according to the initialization.

Learning curves. Figure 7, Figure 8, and Figure 9 show the learning curves on all environments.
1. What is the focus of the paper regarding reinforcement learning?
2. What are the strengths of the proposed ensemble-based approach?
3. Do you have any concerns about the performance improvement in the experiments?
4. How do you define "signal-to-noise ratio" in this context?
5. What are your thoughts on the fairness of the comparisons in Table 3?
6. Could you elaborate on the increased complexity when using multiple agents?
7. How does the proposed method compare to other recent works on weighted Q updates?
Review
This submission developed an ensemble-based approach to weight the Bellman backups from different agents. As claimed by the authors, the proposed method can improve the signal-to-noise ratio. To further boost the performance, the authors combined the weighted backups with a few other techniques, including UCB exploration and bootstrap. The authors finally tested the proposed method on both continuous and discrete reinforcement learning tasks, and showed improved or competitive performance compared with baselines.

The proposed ensemble-based approach is interesting and the authors conducted extensive experiments to verify its empirical performance, which I really appreciate. Compared with other baselines, the performance gap is also noticeable. On the other hand, I am a bit concerned about whether the improvement is indeed because of the weighted backups. For example, Figure 3(a) showed that if UCB is removed, the performance of SUNRISE drops a lot. I tried to see if there are any other ablation studies w.r.t. UCB on the results in the tables (removing UCB and keeping the other steps the same in Algorithm 1), but did not find them. Use of UCB seems orthogonal to the weighted backup, as one is focused on exploration and the other on Q updates. Therefore, it is a bit questionable whether UCB or the proposed weighted backups is the main factor for the performance improvement.

The authors claimed "signal-to-noise ratio" a few times. I hope there could be more rigor here. What exactly is the definition of this term? What are the signal and the noise here?

Furthermore, I also have doubts about the fairness of Table 3: the results there are only for 100K interactions; however, when comparing with Figure 8, Rainbow has not become stable at 100K, and the scores for some games are just too low (e.g., Breakout) compared with the results in the Rainbow paper.

Could you comment on the increased complexity when employing multiple agents?

There are a few recent papers on weighted Q updates as well, e.g.,

Song, Z., Parr, R. and Carin, L., 2019. Revisiting the softmax Bellman operator: New benefits and new perspective. In International Conference on Machine Learning (pp. 5916-5925).

Kim, S., Asadi, K., Littman, M. and Konidaris, G., 2019. DeepMellow: Removing the need for a target network in deep Q-learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence.

These papers avoid the need for multiple agents and show the benefits of weighted updates, which the authors need to discuss.
ICLR
Title
Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates
Abstract
Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from low signal and even instability in Q-learning because target values are derived from current Q-estimates, which are often noisy. To mitigate the issue, we propose ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble. We empirically observe that the proposed method stabilizes and improves learning on both continuous and discrete control benchmarks. We also specifically investigate the signal-to-noise aspect by studying environments with noisy rewards, and find that weighted Bellman backups significantly outperform standard Bellman backups. Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with UCB exploration. By enforcing diversity between agents using bootstrap, we show that these different ideas are largely orthogonal and can be fruitfully integrated, together further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments.
1 INTRODUCTION
Model-free reinforcement learning (RL), with high-capacity function approximators, such as deep neural networks (DNNs), has been used to solve a variety of sequential decision-making problems, including board games (Silver et al., 2017; 2018), video games (Mnih et al., 2015; Vinyals et al., 2019), and robotic manipulation (Kalashnikov et al., 2018). It has been well established that the above successes are highly sample-inefficient (Kaiser et al., 2020). Recently, a lot of progress has been made in more sample-efficient model-free RL algorithms through improvements in off-policy learning, both in discrete and continuous domains (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018; Amos et al., 2020). However, standard off-policy RL algorithms can suffer from instability in Q-learning due to error propagation in the Bellman backup, i.e., the errors induced in the target value can lead to an increase in the overall error in the Q-function (Kumar et al., 2019; 2020). One way to address the error propagation issue is to use ensemble methods, which combine multiple models of the value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018). For discrete control tasks, double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016) addressed the value overestimation by maintaining two independent estimators of the action values, and it was later extended to continuous control tasks in TD3 (Fujimoto et al., 2018). While most prior work has improved the stability by taking the minimum over Q-functions, this also needlessly loses signal, and we propose an alternative way that utilizes ensembles to estimate uncertainty and provide more stable backups. In this paper, we propose ensemble-based weighted Bellman backups that can be applied to most modern off-policy RL algorithms, such as Q-learning and actor-critic algorithms. Our main idea is to reweight sample transitions based on uncertainty estimates from a Q-ensemble.
Because prediction errors can be characterized by uncertainty estimates from ensembles (i.e., variance of predictions), as shown in Figure 1(b), we find that the proposed method significantly improves the signal-to-noise in the Q-updates and stabilizes the learning process. Finally, we present a unified framework, coined SUNRISE, that combines our weighted Bellman backups with an inference method that selects actions using the highest upper-confidence bounds (UCB) for efficient exploration (Chen et al., 2017). We find that these different ideas can be fruitfully integrated, and they are largely complementary (see Figure 1(a)). We demonstrate the effectiveness of the proposed method using Soft Actor-Critic (SAC; Haarnoja et al. 2018) for continuous control benchmarks (specifically, OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018)) and Rainbow DQN (Hessel et al., 2018) for discrete control benchmarks (specifically, Atari games (Bellemare et al., 2013)). In our experiments, SUNRISE consistently improves the performance of existing off-policy RL methods. Furthermore, we find that the proposed weighted Bellman backups yield improvements in environments with noisy rewards, which have a low signal-to-noise ratio.
2 RELATED WORK
Off-policy RL algorithms. Recently, various off-policy RL algorithms have provided large gains in sample-efficiency by reusing past experiences (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018). Rainbow DQN (Hessel et al., 2018) achieved state-of-the-art performance on the Atari games (Bellemare et al., 2013) by combining several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For continuous control tasks, SAC (Haarnoja et al., 2018) achieved state-of-the-art sample-efficiency results by incorporating the maximum entropy framework. Our ensemble method brings orthogonal benefits and is complementary and compatible with these existing state-of-the-art algorithms.
Stabilizing Q-learning. It has been empirically observed that instability in Q-learning can be caused by applying the Bellman backup on the learned value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018; Song et al., 2019; Kim et al., 2019; Kumar et al., 2019; 2020). Following the principle of double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016), the twin-Q trick (Fujimoto et al., 2018) was proposed to handle the overestimation of value functions for continuous control tasks. Song et al. (2019) and Kim et al. (2019) proposed to replace the max operator with Softmax and Mellowmax, respectively, to reduce the overestimation error. Recently, Kumar et al. (2020) handled the error propagation issue by reweighting the Bellman backup based on cumulative Bellman errors. However, our method is different in that we propose an alternative way that also utilizes ensembles to estimate uncertainty and provide more stable, higher-signal-to-noise backups.
Ensemble methods in RL. Ensemble methods have been studied for different purposes in RL (Wiering & Van Hasselt, 2008; Osband et al., 2016a; Anschel et al., 2017; Agarwal et al., 2020; Lan et al., 2020). Chua et al. (2018) showed that modeling errors in model-based RL can be reduced using an ensemble of dynamics models, and Kurutach et al. (2018) accelerated policy learning by generating imagined experiences from the ensemble of dynamics models. For efficient exploration, Osband et al. (2016a) and Chen et al. (2017) also leveraged the ensemble of Q-functions.
(2017) also leveraged the ensemble of Q-functions. However, most prior works have studied the various axes of improvements from ensemble methods in isolation, while we propose a unified framework that handles various issues in off-policy RL algorithms. Exploration in RL. To balance exploration and exploitation, several methods, such as the maximum entropy frameworks (Ziebart, 2010; Haarnoja et al., 2018), exploration bonus rewards (Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017; Choi et al., 2019) and randomization (Osband et al., 2016a;b), have been proposed. Despite the success of these exploration methods, a potential drawback is that agents can focus on irrelevant aspects of the environment because these methods do not depend on the rewards. To handle this issue, Chen et al. (2017) proposed an exploration strategy that considers both best estimates (i.e., mean) and uncertainty (i.e., variance) of Q-functions for discrete control tasks. We further extend this strategy to continuous control tasks and show that it can be combined with other techniques. 3 BACKGROUND Reinforcement learning. We consider a standard RL framework where an agent interacts with an environment in discrete time. Formally, at each timestep t, the agent receives a state st from the environment and chooses an action at based on its policy π. The environment returns a reward rt and the agent transitions to the next state st+1. The return Rt = ∑∞ k=0 γ krt+k is the total accumulated rewards from timestep t with a discount factor γ ∈ [0, 1). RL then maximizes the expected return. Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages the robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy (see Appendix A for further details). To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters θ, is updated by minimizing the following soft Bellman residual: LSACcritic(θ) = Eτt∼B[LQ(τt, θ)], (1) LQ(τt, θ) = ( Qθ(st, at)− rt − γEat+1∼πφ [ Qθ̄(st+1, at+1)− α log πφ(at+1|st+1) ])2 , (2) where τt = (st, at, rt, st+1) is a transition, B is a replay buffer, θ̄ are the delayed parameters, and α is a temperature parameter. At the soft policy improvement step, the policy π with its parameter φ is updated by minimizing the following objective: LSACactor(φ) = Est∼B [ Lπ(st, φ) ] , where Lπ(st, φ) = Eat∼πφ [ α log πφ(at|st)−Qθ(st, at) ] . (3) Here, the policy is modeled as a Gaussian with mean and covariance given by neural networks to handle continuous action spaces. 4 SUNRISE In this section, we propose the ensemble-based weighted Bellman backups, and then introduce SUNRISE: Simple UNified framework for ReInforcement learning using enSEmbles, which combines various ensemble methods. In principle, our method can be used in conjunction with most modern off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018). For the exposition, we describe only the SAC version in the main body. The Rainbow DQN version follows the same principles and is fully described in Appendix B. 
4.1 WEIGHTED BELLMAN BACKUPS TO IMPROVE SIGNAL-TO-NOISE IN Q-UPDATES Formally, we consider an ensemble of N SAC agents, i.e., {Qθi , πφi}Ni=1, where θi and φi denote the parameters of the i-th soft Q-function and policy.1 Since conventional Q-learning is based on the Bellman backup in (2), it can be affected by error propagation. I.e., error in the target Q-function Qθ̄(st+1, at+1) gets propagated into the Q-function Qθ(st, at) at the current state. In other words, errors in the previous Q-function induce the “noise” to the learning “signal” (i.e., true Q-value) of the current Q-function. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence. To mitigate this issue, for each agent i, we consider a weighted Bellman backup as follows: LWQ (τt, θi) = w (st+1, at+1) ( Qθi(st, at)− rt − γ ( Qθ̄i(st+1, at+1)− α log πφ(at+1|st+1) ))2 , (4) 1We remark that each Q-function Qθi(s, a) has a unique target Q-function Qθ̄i(s, a). where τt = (st, at, rt, st+1) is a transition, at+1 ∼ πφ(a|st), and w(s, a) is a confidence weight based on ensemble of target Q-functions: w(s, a) = σ ( −Q̄std(s, a) ∗ T ) + 0.5, (5) where T > 0 is a temperature, σ is the sigmoid function, and Q̄std(s, a) is the empirical standard deviation of all target Q-functions {Qθ̄i} N i=1. Note that the confidence weight is bounded in [0.5, 1.0] because standard deviation is always positive.2 The proposed objective LWQ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. 4.2 COMBINATION WITH ADDITIONAL TECHNIQUES THAT LEVERAGE ENSEMBLES We integrate the proposed weighted Bellman backup with UCB exploration into a single framework by utilizing the bootstrap with random initialization. Bootstrap with random initialization. To train the ensemble of agents, we use the bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces the diversity between agents through two simple ideas: First, we initialize the model parameters of all agents with random parameter values for inducing an initial diversity in the models. Second, we apply different samples to train each agent. Specifically, for each SAC agent i in each timestep t, we draw the binary masks mt,i from the Bernoulli distribution with parameter β ∈ (0, 1], and store them in the replay buffer. Then, when updating the model parameters of agents, we multiply the bootstrap mask to each objective function, such as: mt,iLπ (st, φi) and mt,iLWQ(τt, θi) in (3) and (4), respectively. We remark that Osband et al. (2016a) applied this simple technique to train an ensemble of DQN (Mnih et al., 2015) only for discrete control tasks, while we apply to SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018) for both continuous and discrete tasks with additional techniques. UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, by following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action that maximizes at = max a {Qmean(st, a) + λQstd(st, a)}, (6) where Qmean(s, a) and Qstd(s, a) are the empirical mean and standard deviation of all Q-functions {Qθi}Ni=1, and the λ > 0 is a hyperparameter. 
This inference method can encourage exploration by adding an exploration bonus (i.e., standard deviation Qstd) for visiting unseen state-action pairs similar to the UCB algorithm (Auer et al., 2002). We remark that this inference method was originally proposed in Chen et al. (2017) for efficient exploration in discrete action spaces. However, in continuous action spaces, finding the action that maximizes the UCB is not straightforward. To handle this issue, we propose a simple approximation scheme, which first generates N candidate action set from ensemble policies {πφi}Ni=1, and then chooses the action that maximizes the UCB (Line 4 in Algorithm 1). For evaluation, we approximate the maximum a posterior action by averaging the mean of Gaussian distributions modeled by each ensemble policy. The full procedure of our unified framework, coined SUNRISE, is summarized in Algorithm 1. 5 EXPERIMENTAL RESULTS We designed our experiments to answer the following questions: • Can SUNRISE improve off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018), for both continuous (see Table 1 and Table 2) and discrete (see Table 3) control tasks? • How crucial is the proposed weighted Bellman backups in (4) for improving the signal-to-noise in Q-updates (see Figure 2)? • Can UCB exploration be useful for solving tasks with sparse rewards (see Figure 3(b))? • Is SUNRISE better than a single agent with more updates and parameters (see Figure 3(c))? • How does ensemble size affect the performance (see Figure 3(d))? 2We find that it is empirically stable to set minimum value of weight w(s, a) as 0.5. Algorithm 1 SUNRISE: SAC version 1: for each iteration do 2: for each timestep t do 3: // UCB EXPLORATION 4: Collect N action samples: At = {at,i ∼ πφi(a|st)|i ∈ {1, . . . , N}} 5: Choose the action that maximizes UCB: at = arg max at,i∈At Qmean(st, at,i)+λQstd(st, at,i) 6: Collect state st+1 and reward rt from the environment by taking action at 7: Sample bootstrap masks Mt = {mt,i ∼ Bernoulli (β) — i ∈ {1, . . . , N}} 8: Store transitions τt = (st, at, st+1, rt) and masks in replay buffer B ← B ∪ {(τt,Mt)} 9: end for 10: // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP 11: for each gradient step do 12: Sample random minibatch {(τj ,Mj)}Bj=1 ∼ B 13: for each agent i do 14: Update the Q-function by minimizing 1B ∑B j=1mj,iLWQ (τj , θi) in (4) 15: Update the policy by minimizing 1B ∑B j=1mj,iLπ(sj , φi) in (3) 16: end for 17: end for 18: end for 5.1 SETUPS Continuous control tasks. We evaluate SUNRISE on several continuous control tasks using simulated robots from OpenAI Gym (Brockman et al., 2016) and DeepMind Control Suite (Tassa et al., 2018). For OpenAI Gym experiments with proprioceptive inputs (e.g., positions and velocities), we compare to PETS (Chua et al., 2018), a state-of-the-art model-based RL method based on ensembles of dynamics models; POPLIN-P (Wang & Ba, 2020), a state-of-the-art model-based RL method which uses a policy network to generate actions for planning; POPLIN-A (Wang & Ba, 2020), variant of POPLIN-P which adds noise in the action space; METRPO (Kurutach et al., 2018), a hybrid RL method which augments TRPO (Schulman et al., 2015) using ensembles of dynamics models; and two state-of-the-art model-free RL methods, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018). For our method, we consider a combination of SAC and SUNRISE, as described in Algorithm 1. Following the setup in Wang & Ba (2020) and Wang et al. 
(2019), we report the mean and standard deviation across ten runs after 200K timesteps on five complex environments: Cheetah, Walker, Hopper, Ant and SlimHumanoid with early termination (ET). More experimental details and learning curves with 1M timesteps are in Appendix D. For DeepMind Control Suite with image inputs, we compare to PlaNet (Hafner et al., 2019), a model-based RL method which learns a latent dynamics model and uses it for planning; Dreamer (Hafner et al., 2020), a hybrid RL method which utilizes the latent dynamics model to generate synthetic roll-outs; SLAC (Lee et al., 2020), a hybrid RL method which combines the latent dynamics model with SAC; and three state-of-the-art model-free RL methods which apply contrastive learning (CURL; Srinivas et al. 2020) or data augmentation (RAD (Laskin et al., 2020) and DrQ (Kostrikov et al., 2020)) to SAC. For our method, we consider a combination of RAD (i.e., SAC with random crop) and SUNRISE. Following the setup in RAD, we report the mean and standard deviation across five runs after 100k (i.e., low sample regime) and 500k (i.e., asymptotically optimal regime) environment steps on six environments: Finger-spin, Cartpole-swing, Reacher-easy, Cheetah-run, Walker-walk, and Cup-catch. More experimental details and learning curves are in Appendix F. Discrete control benchmarks. For discrete control tasks, we demonstrate the effectiveness of SUNRISE on several Atari games (Bellemare et al., 2013). We compare to SimPLe (Kaiser et al., 2020), a hybrid RL method which updates the policy only using samples generated by learned dynamics model; Rainbow DQN (Hessel et al., 2018) with modified hyperparameters for sample-efficiency (van Hasselt et al., 2019); Random agent (Kaiser et al., 2020); two state-of-the-art model-free RL methods which apply the contrastive learning (CURL; Srinivas et al. 2020) and data augmentation (DrQ; Kostrikov et al. 2020) to Rainbow DQN; and Human performances reported in Kaiser et al. (2020) and van Hasselt et al. (2019). Following the setups in SimPLe, we report the mean across three runs after 100K interactions (i.e., 400K frames with action repeat of 4). For our method, we consider a combination of sample-efficient versions of Rainbow DQN and SUNRISE (see Algorithm 3 in Appendix B). More experimental details and learning curves are in Appendix G. For our method, we do not alter any hyperparameters of the original RL algorithms and train five ensemble agents. There are only three additional hyperparameters β, T , and λ for bootstrap, weighted Bellman backup, and UCB exploration, where we provide details in Appendix D, F, and G. 5.2 COMPARATIVE EVALUATION OpenAI Gym. Table 1 shows the average returns of evaluation roll-outs for all methods. SUNRISE consistently improves the performance of SAC across all environments and outperforms the model-based RL methods, such as POPLIN-P and PETS, on all environments except Ant and SlimHumanoid-ET. Even though we focus on performance after small samples because of the recent emphasis on making RL more sample efficient, we find that the gain from SUNRISE becomes even more significant when training longer (see Figure 3(c) and Appendix D). We remark that SUNRISE is more compute-efficient than modern model-based RL methods, such as POPLIN and PETS, because they also utilize ensembles (of dynamics models) and perform planning to select actions. Namely, SUNRISE is simple to implement, computationally efficient, and readily parallelizable. DeepMind Control Suite. 
As shown in Table 2, SUNRISE also consistently improves the performance of RAD (i.e., SAC with random crop) on all environments from DeepMind Control Suite. This implies that the proposed method can be useful for high-dimensional and complex input observations. Moreover, our method outperforms existing pixel-based RL methods in almost all environments. We remark that SUNRISE can also be combined with DrQ, and expect that it can achieve better performances on Cartpole-swing and Cup-catch at 100K environment steps. Atari games. We also evaluate SUNRISE on discrete control tasks from the Atari benchmark using Rainbow DQN. Table 3 shows that SUNRISE improves the performance of Rainbow in almost all environments, and outperforms the state-of-the-art CURL and SimPLe on 11 out of 26 Atari games. Here, we remark that SUNRISE is also compatible with CURL, which could enable even better performance. These results demonstrate that SUNRISE is a general approach. 5.3 ABLATION STUDY Effects of weighted Bellman backups. To verify the effectiveness of the proposed weighted Bellman backup (4) in improving signal-to-noise in Q-updates, we evaluate on a modified OpenAI Gym environments with noisy rewards. Following Kumar et al. (2019), we add Gaussian noise to the reward function: r′(s, a) = r(s, a) + z, where z ∼ N (0, 1) only during training, and report the deterministic ground-truth reward during evaluation. For our method, we also consider a variant of SUNRISE, which updates Q-functions without the proposed weighted Bellman backup to isolate its effect. We compare to DisCor (Kumar et al., 2020), which improves SAC by reweighting the Bellman backup based on estimated cumulative Bellman errors (see Appendix E for more details). Figure 2 shows the learning curves of all methods on OpenAI Gym with noisy rewards. The proposed weighted Bellman backup significantly improves both sample-efficiency and asymptotic performance of SUNRISE, and outperforms baselines such as SAC and DisCor. One can note the performance gain due to our weighted Bellman backup becomes more significant in complex environments, such as SlimHumanoid-ET. We remark that DisCor still suffers from error propagation issues in complex environments like SlimHumanoid-ET and Ant because there are some approximation errors in estimating cumulative Bellman errors (see Section 6.1 for more detailed discussion). These results imply that errors in the target Q-function can be characterized by the proposed confident weight in equation 5 effectively. We also consider another variant of SUNRISE, which updates Q-functions with random weights sampled from [0.5, 1.0] uniformly at random. In order to evaluate the performance of SUNRISE, we increase the noise rate by adding Gaussian noise with a large standard deviation to the reward function: r′(s, a) = r(s, a) + z, where z ∼ N (0, 5). Figure 3(a) shows the learning curves of all methods on the SlimHumanoid-ET environment over 10 random seeds. First, one can not that SUNRISE with random weights (red curve) is worse than SUNRISE with the proposed weighted Bellman backups (blue curve). Additionally, even without UCB exploration, SUNRISE with the proposed weighted Bellman backups (purple curve) outperforms all baselines. This implies that the proposed weighted Bellman backups can handle the error propagation effectively even though there is a large noise in reward function. Effects of UCB exploration. 
To verify the advantage of UCB exploration in (6), we evaluate on Cartpole-swing with sparse-reward from DeepMind Control Suite. For our method, we consider a variant of SUNRISE, which selects action without UCB exploration. As shown in Fig 3(b), SUNRISE with UCB exploration (blue curve) significantly improves the sample-efficiency on the environment with sparse rewards. Comparison with a single agent with more updates/parameters. One concern in utilizing the ensemble method is that its gains may come from more gradient updates and parameters. To clarify this concern, we compare SUNRISE (5 ensembles using 2-layer MLPs with 256 hidden units each) to a single agent, which consists of 2-layer MLPs with 1024 (and 256) hidden units with 5 updates using different random minibatches. Figure 3(c) shows that the learning curves on SlimHumanoidET, where SUNRISE outperforms all baselines. This implies that the gains from SUNRISE can not be achieved by simply increasing the number of updates/parameters. More experimental results on other environments are also available in Appendix D. Effects of ensemble size. We analyze the effects of ensemble size N on the Ant environment from OpenAI Gym. Figure 3(d) shows that the performance can be improved by increasing the ensemble size, but the improvement is saturated around N = 5. Thus, we use five ensemble agents for all experiments. More experimental results on other environments are also available in Appendix D, where the overall trend is similar. 6 DISCUSSION 6.1 CONNECTION WITH DISCOR Kumar et al. (2020) show that naive Bellman backups can suffer from slow learning in certain environments, requiring exponentially many updates. To handle this problem, they propose the weighted Bellman backups, which make steady learning progress by inducing some optimal data distribution (see (Kumar et al., 2020) for more details). Specifically, in addition to a standard Q-learning, DisCor trains an error model ∆ψ(s, a), which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as follows: w(s, a) ∝ exp ( −γ∆ψ(s,a)T ) , where γ is a discount factor and T is a temperature. However, we remark that DisCor can still suffer from the error propagation issues because there is also an approximation error in estimating cumulative Bellman errors. Therefore, we consider an alternative approach that utilizes the uncertainty from ensembles. Because it has been observed that the ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples (Lakshminarayanan et al., 2017), we expect that the weighted Bellman backups based on ensembles can handle error propagation more effectively. Indeed, in our experiments, we find that ensemble-based weighted Bellman backups can give rise to more stable training and improve the data-efficiency of various off-policy RL algorithms. 6.2 COMPUTATION OVERHEAD One can expect that there is an additional computation overhead by introducing ensembles. When we have N ensemble agents, our method requires N× inferences for weighted Bellman backups and 2N× inferences (N for actors and N for critics). However, we remark that our method can be more computationally efficient because it is parallelizable. Also, as shown in Figure 3(c), the gains from SUNRISE can not be achieved by simply increasing the number of updates/parameters. 
7 CONCLUSION

In this paper, we present ensemble-based weighted Bellman backups, which are compatible with various off-policy RL algorithms. By re-weighting target Q-values based on uncertainty estimates, we stabilize and improve the learning process on both continuous and discrete control benchmarks. Additionally, we introduce SUNRISE, a simple unified ensemble method, which integrates the proposed weighted Bellman backups with bootstrap with random initialization and UCB exploration to handle various issues in off-policy RL algorithms. Our experiments show that SUNRISE consistently improves the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, and outperforms state-of-the-art RL algorithms for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments. We hope that SUNRISE could be useful to other relevant topics such as sim-to-real transfer (Tobin et al., 2017), imitation learning (Torabi et al., 2018), understanding the connection between on-policy and off-policy RL (Schulman et al., 2017), offline RL (Agarwal et al., 2020), and planning (Srinivas et al., 2018; Tamar et al., 2016).

A SUNRISE: SOFT ACTOR-CRITIC

Background. SAC (Haarnoja et al., 2018) is a state-of-the-art off-policy algorithm for continuous control problems. SAC learns a policy, $\pi_\phi(a|s)$, and a critic, $Q_\theta(s,a)$, and aims to maximize a weighted objective of the reward and the policy entropy, $\mathbb{E}_{s_t, a_t \sim \pi}\left[\sum_t \gamma^{t-1} r_t + \alpha \mathcal{H}(\pi_\phi(\cdot|s_t))\right]$. To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters $\theta$, is updated by minimizing the following soft Bellman residual:

$\mathcal{L}^{\text{SAC}}_{\text{critic}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\left[\mathcal{L}_Q(\tau_t, \theta)\right]$, where $\mathcal{L}_Q(\tau_t, \theta) = \left(Q_\theta(s_t, a_t) - r_t - \gamma\,\mathbb{E}_{a_{t+1} \sim \pi_\phi}\left[Q_{\bar{\theta}}(s_{t+1}, a_{t+1}) - \alpha \log \pi_\phi(a_{t+1}|s_{t+1})\right]\right)^2$,

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, $\bar{\theta}$ are the delayed parameters, and $\alpha$ is a temperature parameter. At the soft policy improvement step, the policy $\pi$ with its parameter $\phi$ is updated by minimizing the following objective:

$\mathcal{L}^{\text{SAC}}_{\text{actor}}(\phi) = \mathbb{E}_{s_t \sim \mathcal{B}}\left[\mathcal{L}_\pi(s_t, \phi)\right]$, where $\mathcal{L}_\pi(s_t, \phi) = \mathbb{E}_{a_t \sim \pi_\phi}\left[\alpha \log \pi_\phi(a_t|s_t) - Q_\theta(s_t, a_t)\right]$.

We remark that this corresponds to minimizing the Kullback-Leibler divergence between the policy and a Boltzmann distribution induced by the current soft Q-function.
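A minimal sketch of these two losses (assuming PyTorch and a policy that returns an action distribution with an rsample/log_prob interface; all variable names are illustrative):

```python
import torch

def sac_losses(q, q_target, policy, batch, gamma, alpha):
    """Soft policy evaluation / improvement losses from Appendix A.

    q, q_target: critic and delayed (target) critic networks
    policy:      callable returning an action distribution given states
    batch:       (s, a, r, s_next) tensors sampled from the replay buffer
    """
    s, a, r, s_next = batch

    # Soft Bellman residual L_Q(tau, theta)
    with torch.no_grad():
        dist_next = policy(s_next)
        a_next = dist_next.rsample()
        log_pi_next = dist_next.log_prob(a_next)
        target = r + gamma * (q_target(s_next, a_next) - alpha * log_pi_next)
    critic_loss = (q(s, a) - target).pow(2).mean()

    # Policy loss L_pi(s, phi): KL to the Boltzmann distribution of Q
    dist = policy(s)
    a_new = dist.rsample()
    actor_loss = (alpha * dist.log_prob(a_new) - q(s, a_new)).mean()
    return critic_loss, actor_loss
```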
SUNRISE without UCB exploration. For SUNRISE without UCB exploration, we use the random inference proposed in Bootstrapped DQN (Osband et al., 2016a), which selects an index of the policy uniformly at random and generates actions from the selected actor for the duration of that episode (see Line 3 in Algorithm 2).

Algorithm 2 SUNRISE: SAC version (random inference)
1: for each iteration do
2:   // RANDOM INFERENCE
3:   Select an index of policy using î ∼ Uniform{1, · · · , N}
4:   for each timestep t do
5:     Get the action from the selected policy: a_t ∼ π_{φ_î}(a|s_t)
6:     Collect state s_{t+1} and reward r_t from the environment by taking action a_t
7:     Sample bootstrap masks M_t = {m_{t,i} ∼ Bernoulli(β) | i ∈ {1, . . . , N}}
8:     Store transitions τ_t = (s_t, a_t, s_{t+1}, r_t) and masks in replay buffer B ← B ∪ {(τ_t, M_t)}
9:   end for
10:  // UPDATE AGENTS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
11:  for each gradient step do
12:    Sample random minibatch {(τ_j, M_j)}_{j=1}^{B} ∼ B
13:    for each agent i do
14:      Update the Q-function by minimizing (1/B) Σ_{j=1}^{B} m_{j,i} L_{WQ}(τ_j, θ_i)
15:      Update the policy by minimizing (1/B) Σ_{j=1}^{B} m_{j,i} L_π(s_j, φ_i)
16:    end for
17:  end for
18: end for

B EXTENSION TO RAINBOW DQN

B.1 PRELIMINARIES: RAINBOW DQN

Background. The DQN algorithm (Mnih et al., 2015) learns a Q-function, which is modeled as a neural network with parameters $\theta$, by minimizing the following Bellman residual:

$\mathcal{L}^{\text{DQN}}(\theta) = \mathbb{E}_{\tau_t \sim \mathcal{B}}\left[\left(Q_\theta(s_t, a_t) - r_t - \gamma \max_a Q_{\bar{\theta}}(s_{t+1}, a)\right)^2\right]$, (7)

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, $\mathcal{B}$ is a replay buffer, and $\bar{\theta}$ are the delayed parameters. Even though Rainbow DQN integrates several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017), applying SUNRISE to Rainbow DQN can be described based on the standard DQN algorithm. For exposition, we refer the reader to Hessel et al. (2018) for more detailed explanations of Rainbow DQN.

Algorithm 3 SUNRISE: Rainbow version
1: for each iteration do
2:   for each timestep t do
3:     // UCB EXPLORATION
4:     Choose the action that maximizes UCB: a_t = arg max_{a ∈ A} Q_mean(s_t, a) + λ Q_std(s_t, a)
5:     Collect state s_{t+1} and reward r_t from the environment by taking action a_t
6:     Sample bootstrap masks M_t = {m_{t,i} ∼ Bernoulli(β) | i ∈ {1, . . . , N}}
7:     Store transitions τ_t = (s_t, a_t, s_{t+1}, r_t) and masks in replay buffer B ← B ∪ {(τ_t, M_t)}
8:   end for
9:   // UPDATE Q-FUNCTIONS VIA BOOTSTRAP AND WEIGHTED BELLMAN BACKUP
10:  for each gradient step do
11:    Sample random minibatch {(τ_j, M_j)}_{j=1}^{B} ∼ B
12:    for each agent i do
13:      Update the Q-function by minimizing (1/B) Σ_{j=1}^{B} m_{j,i} L^{DQN}_{WQ}(τ_j, θ_i)
14:    end for
15:  end for
16: end for

B.2 SUNRISE: RAINBOW DQN

Bootstrap with random initialization. Formally, we consider an ensemble of N Q-functions, i.e., $\{Q_{\theta_i}\}_{i=1}^{N}$, where $\theta_i$ denotes the parameters of the i-th Q-function (here, we remark that each Q-function has a unique target Q-function). To train the ensemble of Q-functions, we use bootstrap with random initialization (Efron, 1982; Osband et al., 2016a), which enforces diversity between Q-functions through two simple ideas: First, we initialize the model parameters of all Q-functions with random parameter values to induce an initial diversity in the models. Second, we apply different samples to train each Q-function. Specifically, for each Q-function i at each timestep t, we draw a binary mask $m_{t,i}$ from the Bernoulli distribution with parameter $\beta \in (0, 1]$, and store it in the replay buffer. Then, when updating the model parameters of the Q-functions, we multiply each objective function by its bootstrap mask.
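A minimal sketch of this bootstrap masking (assuming PyTorch; the buffer layout and names are illustrative):

```python
import torch

N = 5        # ensemble size
beta = 0.5   # Bernoulli mean for the bootstrap masks

def sample_masks(batch_size):
    # One mask per agent per transition, drawn once at storage time
    # and kept in the replay buffer alongside the transition.
    return torch.bernoulli(torch.full((batch_size, N), beta))

def masked_bellman_loss(per_sample_losses, masks, agent_idx):
    """per_sample_losses: [B] Bellman residuals for agent `agent_idx`;
    masks: [B, N] stored bootstrap masks. Transitions whose mask is 0
    simply do not contribute to this agent's update."""
    m = masks[:, agent_idx]
    return (m * per_sample_losses).mean()  # (1/B) * sum_j m_{j,i} * L(tau_j)
```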
Weighted Bellman backup. Since conventional Q-learning is based on the Bellman backup in equation 7, it can be affected by error propagation, i.e., error in the target Q-function $Q_{\bar{\theta}}(s_{t+1}, a_{t+1})$ gets propagated into the Q-function $Q_\theta(s_t, a_t)$ at the current state. Recently, Kumar et al. (2020) showed that this error propagation can cause inconsistency and unstable convergence. To mitigate this issue, for each Q-function i, we consider a weighted Bellman backup as follows:

$\mathcal{L}^{\text{DQN}}_{WQ}(\tau_t, \theta_i) = w(s_{t+1})\left(Q_{\theta_i}(s_t, a_t) - r_t - \gamma \max_a Q_{\bar{\theta}_i}(s_{t+1}, a)\right)^2$,

where $\tau_t = (s_t, a_t, r_t, s_{t+1})$ is a transition, and $w(s)$ is a confidence weight based on the ensemble of target Q-functions:

$w(s) = \sigma\left(-\bar{Q}_{\text{std}}(s) \cdot T\right) + 0.5$, (8)

where $T > 0$ is a temperature, $\sigma$ is the sigmoid function, and $\bar{Q}_{\text{std}}(s)$ is the empirical standard deviation of all target Q-functions $\{\max_a Q_{\bar{\theta}_i}(s, a)\}_{i=1}^{N}$. Note that the confidence weight is bounded in [0.5, 1.0] because the standard deviation is always positive (we find it empirically stable to set the minimum value of the weight $w(s)$ to 0.5). The proposed objective $\mathcal{L}^{\text{DQN}}_{WQ}$ down-weights the sample transitions with high variance across target Q-functions, resulting in a loss function for the Q-updates that has a better signal-to-noise ratio. Note that we combine the proposed weighted Bellman backup with prioritized replay (Schaul et al., 2016) by multiplying both weights into the Bellman backups.

UCB exploration. The ensemble can also be leveraged for efficient exploration (Chen et al., 2017; Osband et al., 2016a) because it can express higher uncertainty on unseen samples. Motivated by this, and following the idea of Chen et al. (2017), we consider an optimism-based exploration that chooses the action

$a_t = \arg\max_a \{Q_{\text{mean}}(s_t, a) + \lambda Q_{\text{std}}(s_t, a)\}$, (9)

where $Q_{\text{mean}}(s, a)$ and $Q_{\text{std}}(s, a)$ are the empirical mean and standard deviation of all Q-functions $\{Q_{\theta_i}\}_{i=1}^{N}$, and $\lambda > 0$ is a hyperparameter. This inference method can encourage exploration by adding an exploration bonus (i.e., the standard deviation $Q_{\text{std}}$) for visiting unseen state-action pairs, similar to the UCB algorithm (Auer et al., 2002). This inference method was originally proposed in Chen et al. (2017) for efficient exploration in DQN, but we further extend it to Rainbow DQN. For evaluation, we approximate the maximum a posteriori action by choosing the action that maximizes the mean of the Q-functions, i.e., $a_t = \arg\max_a Q_{\text{mean}}(s_t, a)$. The full procedure is summarized in Algorithm 3.

C IMPLEMENTATION DETAILS FOR TOY REGRESSION TASKS

We evaluate the quality of uncertainty estimates from an ensemble of neural networks on a toy regression task. To this end, we generate twenty training samples drawn as $y = x^3 + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 3^2)$, and train ten ensembles of regression networks using bootstrap with random initialization. The regression network is a fully-connected neural network with 2 hidden layers and 50 rectified linear units in each layer. For the bootstrap, we draw the binary masks from the Bernoulli distribution with mean β = 0.3. As uncertainty estimates, we measure the empirical variance of the networks' predictions. As shown in Figure 1(b), the ensemble can produce well-calibrated uncertainty estimates (i.e., variance) on unseen samples (a minimal sketch of this setup follows the environment details below).

D EXPERIMENTAL SETUPS AND RESULTS: OPENAI GYM

Environments. We evaluate the performance of SUNRISE on four complex environments based on the standard benchmarking environments from OpenAI Gym (Brockman et al., 2016), using the reference implementation at https://github.com/WilsonWangTHU/mbbl (Wang et al., 2019). Note that we do not use the modified Cheetah environment from PETS (Chua et al., 2018) (denoted as Cheetah in POPLIN (Wang & Ba, 2020)) because it includes additional information in the observations.
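A minimal sketch of the toy regression setup from Appendix C (assuming scikit-learn; the input range and training iterations are illustrative assumptions beyond the details stated above):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)

# Twenty training samples: y = x^3 + eps, eps ~ N(0, 3^2)
x = rng.uniform(-4, 4, size=20)          # input range is an assumption
y = x ** 3 + rng.normal(0.0, 3.0, size=20)

# Ten ensemble members, each trained on a Bernoulli(0.3) bootstrap mask
ensemble = []
for seed in range(10):
    mask = rng.binomial(1, 0.3, size=len(x)).astype(bool)
    if not mask.any():
        continue  # skip degenerate empty masks
    net = MLPRegressor(hidden_layer_sizes=(50, 50), activation="relu",
                       max_iter=5000, random_state=seed)
    net.fit(x[mask].reshape(-1, 1), y[mask])
    ensemble.append(net)

# Uncertainty estimate: empirical variance of the ensemble's predictions,
# which grows outside the training range (cf. Figure 1(b)).
x_test = np.linspace(-6, 6, 200).reshape(-1, 1)
preds = np.stack([net.predict(x_test) for net in ensemble])
uncertainty = preds.var(axis=0)
```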
Training details. We consider a combination of SAC and SUNRISE using the publicly released implementation repository (https://github.com/vitchyr/rlkit) without any modifications to hyperparameters and architectures. For our method, the temperature for the weighted Bellman backups is chosen from T ∈ {10, 20, 50}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 5, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initializations (i.e., β = 1) usually achieves the best performance, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target values also differ according to the initialization.

Learning curves. Figure 4 shows the learning curves on all environments. One can note that SUNRISE consistently improves the performance of SAC by a large margin.

Effects of ensembles. Figure 5 shows the learning curves of SUNRISE with varying values of the ensemble size on all environments. The performance can be improved by increasing the ensemble size, but the improvement saturates around N = 5.

E EXPERIMENTAL SETUPS AND RESULTS: NOISY REWARD

DisCor. DisCor (Kumar et al., 2020) was proposed to prevent the error propagation issue in Q-learning. In addition to standard Q-learning, DisCor trains an error model $\Delta_{\psi}(s, a)$, which approximates the cumulative sum of discounted Bellman errors over the past iterations of training. Then, using the error model, DisCor reweights the Bellman backups based on a confidence weight defined as $w(s, a) \propto \exp\left(-\gamma \Delta_{\psi}(s, a)/T\right)$, where $\gamma$ is a discount factor and $T$ is a temperature. Following the setups in Kumar et al. (2020), we take a network with one more hidden layer than the corresponding Q-network as the error model, and chose T = 10 for all experiments. We update the temperature via a moving average and use a learning rate of 0.0003. We use the SAC algorithm as the RL objective coupled with DisCor and build on top of the publicly released implementation repository (https://github.com/vitchyr/rlkit).

F EXPERIMENTAL SETUPS AND RESULTS: DEEPMIND CONTROL SUITE

Training details. We consider a combination of RAD and SUNRISE using the publicly released implementation repository (https://github.com/MishaLaskin/rad) with a full list of hyperparameters in Table 4. Similar to Laskin et al. (2020), we use the same encoder architecture as in (Yarats et al., 2019), and the actor and critic share the same encoder to embed image observations (however, unlike Bootstrapped DQN (Osband et al., 2016a), the ensemble agents do not share encoders). For our method, the temperature for the weighted Bellman backups is chosen from T ∈ {10, 100}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 5, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initializations (i.e., β = 1) usually achieves the best performance, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target values also differ according to the initialization.
Learning curves. Figures 6(g), 6(h), 6(i), 6(j), 6(k), and 6(l) show the learning curves on all environments. Since RAD already achieves near-optimal performance and the room for improvement is small, we see small but consistent gains from SUNRISE. To verify the effectiveness of SUNRISE more clearly, we consider a combination of SAC and SUNRISE in Figures 6(a), 6(b), 6(c), 6(d), 6(e), and 6(f), where the gain from SUNRISE is more significant.

G EXPERIMENTAL SETUPS AND RESULTS: ATARI GAMES

Training details. We consider a combination of sample-efficient versions of Rainbow DQN and SUNRISE using the publicly released implementation repository (https://github.com/Kaixhin/Rainbow) without any modifications to hyperparameters and architectures. For our method, the temperature for the weighted Bellman backups is chosen from T ∈ {10, 40}, the mean of the Bernoulli distribution is chosen from β ∈ {0.5, 1.0}, the penalty parameter is chosen from λ ∈ {1, 10}, and we train five ensemble agents. The optimal parameters are chosen to achieve the best performance on the training environments. Here, we remark that training ensemble agents using the same training samples but with different initializations (i.e., β = 1) usually achieves the best performance, similar to Osband et al. (2016a) and Chen et al. (2017). We expect that this is because splitting samples can reduce the sample-efficiency. Also, the initial diversity from random initialization can be enough because each Q-function has a unique target Q-function, i.e., the target values also differ according to the initialization.

Learning curves. Figures 7, 8, and 9 show the learning curves on all environments.
1. What are the strengths and weaknesses of the proposed method in terms of its novelty, technicality, empirical significance, and theoretical justification?
2. How does the author claim that the proposed weighted Bellman backups improve the signal-to-noise ratio in Q-updates, and why is this claim questionable?
3. What is the relationship between the "reward-to-noise ratio" and the signal-to-noise ratio mentioned in the paper, and how does it relate to the performance of the proposed method?
4. What additional experiments or analysis could be conducted to better support the paper's claims and address the concerns raised by the reviewer?
Review
Review Summary
This paper proposes to weight the Bellman backups according to the empirical std of Q-functions estimated by an ensemble method. The paper claims that this proposed idea stabilizes and improves learning in both continuous and discrete control tasks. It then integrates the proposed weighted Bellman backups with two of the existing advantages of ensembles, bootstrap and UCB exploration, to form a unifying framework named SUNRISE. SUNRISE is then compared with Soft Actor-Critic and Rainbow in both discrete and continuous control tasks.

Strong points
Clarity: The paper is well written and organized.
Empirical significance: The integrated framework SUNRISE appears to have promising (yet seemingly not fully convincing) performance as compared to the prior frameworks in both discrete and continuous control tasks.

Weak points
Novelty: This work seems to have low novelty and low technicality. It combines several known results into a unifying framework using ensembles. The only new idea here is perhaps a specific way of reweighting the Bellman backups, though the idea of reweighting the Bellman backups to stabilize learning is already known, e.g., Kumar et al. 2020.
Significance: In addition, in its present form I find it hard to be convinced, both empirically and theoretically (or at least via a more elaborate explanation or intuition), why the proposed weighted Bellman backups using the empirical std of Q-functions improve the signal-to-noise ratio in Q-updates as claimed in the paper (see Questions for the authors).

Questions for the authors
In the last sentence of Section 4.1, the paper claims that "the proposed objective ... has a better signal-to-noise ratio". I would like the authors to elaborate on this claim. What exactly is considered signal and what exactly is considered noise in this context? Why does down-weighting the sample transitions with high variance across target Q-functions result in a better signal-to-noise ratio? Why does the weighting proposed in this paper have a better signal-to-noise ratio than the weighting in DisCor (Kumar et al. 2020)?
The paper claims that the proposed weighted Bellman updates improve the signal-to-noise ratio in Q-updates but appears to show only one experimental setting (presented in Fig. 2), where the reward is perturbed with standard Gaussian noise. For simplicity, for the moment let us call the ratio of the magnitude of the original reward signal r(s,a) to the magnitude of the added noise the "reward-to-noise ratio". Based on Section 5.3, I assume that the "reward-to-noise ratio" has something to do with the signal-to-noise ratio mentioned in the paper. Then, how does the performance of the proposed weighted Bellman updates change when the "reward-to-noise ratio" varies?

My initial recommendation
Overall, I vote for weak rejection due to the weak points mentioned above.

My final recommendation
The authors did not fully address my points. I maintain my initial score and recommend rejection.
ICLR
Title
Unsupervised Clustering using Pseudo-semi-supervised Learning

Abstract
In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve 54.6% accuracy for CIFAR10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms. Project details and code are available at https://divamgupta.com/pseudo-semi-supervised-clustering

1 INTRODUCTION

Semi-supervised methods, which make use of large unlabelled data sets and a small labelled data set, have seen recent success; e.g., ladder networks Rasmus et al. (2015) achieve 99% accuracy on MNIST using only 100 labelled samples. These approaches leverage the unlabelled data to help the network learn an underlying representation, while the labelled data guides the network towards separating the classes. In this paper, we ask two questions: is it possible to create the small labelled data set required by semi-supervised methods purely using unsupervised techniques? If so, can semi-supervised methods leverage this autonomously generated pseudo-labelled data set to deliver higher performance than state-of-the-art unsupervised approaches? We answer both these questions in the affirmative. We first find that prior approaches for identifying pseudo-labels Caron et al. (2018); Chen (2018); Lee (2013) perform poorly because of their low accuracy (Section 2). To create a high accuracy pseudo-labelled data set autonomously, we use a combination of an ensemble of deep networks with a custom graph clustering algorithm (Section 4). We first train an ensemble of deep networks in an unsupervised manner. Each network independently clusters the input. We then compare two input data points. If all of the networks agree that these two data points belong to the same cluster, we can be reasonably sure that these data points belong to the same class. In this way, we identify all input data pairs belonging to the same class with high precision in a completely unsupervised manner. In the next step, we use these high quality input pairs to generate a similarity graph, with the data points as nodes and edges between data points which are deemed to be similar by our ensemble. From this graph, we extract tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision. Extracting high quality clusters from this graph while ensuring that the extracted clusters correspond to different classes is challenging. We discuss our approach to this problem in Section 4.2.1. In this way, our method extracts unambiguous samples belonging to each class, which serve as pseudo-labels for semi-supervised learning.
For semi-supervised learning using the labels generated above, one could use ladder networks Rasmus et al. (2015). However, we found that ladder networks are unsuitable for the initial unsupervised clustering step, as they can degenerate to outputting constant values for all inputs in the absence of a supervised loss. To enable unsupervised clustering, we augment ladder networks using information maximization Krause et al. (2010) to create Ladder-IM, and with a dot product loss to create Ladder-Dot. We show in Section 5 that Ladder-IM and Ladder-Dot, by themselves, also provide improvements over the previous state of the art. We use the same models for both the first unsupervised learning step as well as the subsequent pseudo-semi-supervised iterations. Finally, the approach of finding high quality clusters using an ensemble, and using them as labels to train a new ensemble of semi-supervised models, is iterated, yielding continued improvements. The large gains of our method mainly come from this iterative approach, which can, in some cases, yield up to 17% gains in accuracy over the base unsupervised models (see Section 5.4). We name our pseudo-semi-supervised learning approach Kingdra (named after a semi-pseudo Pokémon). Kingdra is independent of the type of data set; we show examples of its use on both image and text data sets in Section 5. This is in contrast to some previous approaches using CNNs, e.g. Chang et al. (2017); Caron et al. (2018), which are specialized for image data sets. We perform unsupervised classification using Kingdra on several standard image (MNIST, CIFAR10, STL) and text (Reuters, 20news) datasets. On all these datasets, Kingdra is able to achieve higher clustering accuracy compared to current state-of-the-art deep unsupervised clustering techniques. For example, on the CIFAR10 and 20news datasets, Kingdra is able to achieve classification accuracy of 54.6% and 43.9%, respectively, delivering 8-12% absolute gains over state of the art results Hu et al. (2017); Xie et al. (2016).

2 PRIOR WORK ON GENERATING PSEUDO-LABELS

Several techniques have been proposed in the literature for generating pseudo-labels (Caron et al., 2018; Chen, 2018; Lee, 2013). In Lee (2013), the output class with the highest softmax value (Argmax) is taken to be the pseudo-label. In Caron et al. (2018), the authors perform K-means clustering on the feature vector and use the K-means clusters as pseudo-labels. Finally, the authors in Chen (2018) treat the softmax output as confidence and only label those items whose confidence value is above a high threshold. Note that none of these techniques for identifying pseudo-labels have been applied in our context, i.e., for unsupervised clustering using semi-supervised models. In this section, we evaluate whether pseudo-labels created by these prior techniques can be leveraged by semi-supervised models to improve clustering accuracy. We start with a semi-supervised model based on ladder networks (Rasmus et al. (2015)) called Ladder-IM (see Section 4.1 for model details) and train it using only its unsupervised loss terms on the MNIST and CIFAR10 datasets. We use each of the above three pseudo-labelling approaches on the trained model to provide an initial set of pseudo-labels for the datasets (e.g., using K-means clustering on the feature vector of the model as in Caron et al. (2018), etc.). We call the accuracy of these pseudo-labels the initial pseudo-label accuracy.
We then use these generated pseudo-labels along with the datasets to train the model again, 1Our system is named after a semi-pseudo Pokémon. now with a supervised loss term (based on the pseudo-labels) and the earlier unsupervised loss terms. We again run the pseudo-labelling approaches on the newly trained model to derive an updated set of pseudo-labels. We iterate this process of training and pseudo-labelling until the pseudo-label accuracy stabilizes. We call this the final clustering accuracy. The initial pseudo-label accuracy and the final clustering accuracy results for the three approaches are shown in Table 1. First, consider MNIST. The unsupervised clustering accuracy of Ladder-IM is 95.4%. Argmax simply assigns pseudo-labels based on the model’s output and since this doesn’t add any new information for subsequent iterations, the final accuracy remains at 95.4%. On the other hand, the pseudo-labels identified by both the K-means and threshold approaches result in worse initial label accuracy (75.4% and 88.6%). When this low-accuracy pseudo-label is used as supervision to train the model further, it results in a low final clustering accuracy of 60.9% and 91.6%, respectively. CIFAR10 results are similar. Ladder-IM clustering accuracy is 49% which remains the same under Argmax as before. Pseudo-label accuracy using the K-means approach is worse and results in pulling down the final accuracy to 44.8%. Interestingly, threshold results in a slightly higher initial accuracy of 60.5% but even this is not high enough to improve the final clustering accuracy for CIFAR10. From these results, we arrive at the following two conclusions. First, if the initial pseudo-label accuracy is not high, using pseudo-labels as supervision can result in bringing down the final clustering accuracy. Thus, high accuracy of initial pseudo-labels is crucial for improving clustering accuracy. Second, current approaches for identifying pseudo-labels do not deliver high accuracy and hence are unable to help improve overall clustering accuracy. 3 RELATED WORK Unsupervised clustering: Various unsupervised clustering methods have been proposed over the years. Ng et al. (2002) uses a spectral clustering based approach, while Elhamifar & Vidal (2009) uses a sparse subspace approach for unsupervised learning. Recently, several deep neural networks based methods have been proposed, which scale well to large datasets. The ability of deep neural networks to learn higher level representations make them a good choice for unsupervised learning. Coates & Ng (2012) and Caron et al. (2018) use convnets and k-means for clustering. Caron et al. (2018) for example, iterates over clustering the features obtained from a convnet, and training the classifier using these clusters as pseudo-labels. The authors do not report clustering perfomence and we observed that this method can easily degenerate. Chang et al. (2017) uses pair-wise dot-product based similarity to identify close input pairs, which provide a supervisory signal. These convnets based approaches however work on only image datasets. Xie et al. (2016) simultaneously learns feature representations and cluster assignments using deep neural networks, and works on both image and text datasets. Hu et al. (2017) uses regularization combined with mutual information loss for unsupervised learning and achieves state of the art results. The authors conduct experiments in two settings - Random Perturbation Training and Virtual Adversarial Training. Other works such as Hjelm et al. 
(2018) maximize the mutual information between the spatial features and the non-spatial features. Ji et al. (2019) maximizes the mutual information between the predicted label of the image and the predicted label of the augmented image. This method uses convolution networks and requires domain knowledge of the dataset.

Self-supervised learning: Another form of unsupervised learning uses auxiliary learning tasks, for which labels can be self-generated, to produce useful representations from data. Many methods use spatial information of image patches to generate self-supervised data. E.g., Pathak et al. (2016) predicts pixels in an image patch using surrounding patches, while Doersch et al. (2015) predicts the relative position of image patches. Sermanet et al. (2018) uses time as a self-supervisory signal between videos taken from different viewpoints. The temporal signal is also used to learn representations from single videos by predicting future frames, e.g. Denton et al. (2017). Our method uses correlation between the outputs of input points across an ensemble as a supervisory signal to generate self-supervised pseudo-labels.

Semi-supervised learning: Semi-supervised approaches use sparse labelling of datapoints. Szummer & Jaakkola (2002) propagates labels based on nearest neighbors. Weston et al. (2012) uses a deep version of label propagation. Lee (2013) adjusts label probabilities, starting with trusting only true labels and gradually increasing the weight of pseudo-labels. Rasmus et al. (2015) employs a denoising autoencoder architecture and has shown impressive performance. Tarvainen & Valpola (2017) uses an averaged model over previous iterations as a teacher. Other than these, some semi-supervised learning methods like Xie et al. (2019) and Berthelot et al. (2019) use data augmentation and assume some domain knowledge of the dataset, with some of the data augmentation specific to image datasets. Miyato et al. (2018) and Shinoda et al. (2017) use virtual adversarial training combined with the classification loss to perform semi-supervised classification. However, we found that these methods do not work well if we jointly train them with unsupervised losses. Ladder networks do not require any domain-dependent augmentation, work for both image and text datasets, and can easily be jointly trained with supervised and unsupervised losses. Thus, we chose to work with ladder networks in our experiments, though our approach is general enough to work with any semi-supervised method that accommodates training with unsupervised loss terms.

Unsupervised ensemble learning: Unsupervised ensemble learning has been mostly limited to generating a set of clusterings and combining them into a final clustering. Huang et al. (2016) cast ensemble clustering into a binary linear programming problem. Wang et al. (2009); Fred & Jain (2005) use a pair-wise co-occurrence approach to construct a co-association matrix and use it to measure similarity between data points. See Vega-Pons & Ruiz-Shulcloper (2011) for a survey of ensemble clustering algorithms. Note that, to the best of our knowledge, none of the ensemble clustering algorithms employ a semi-supervised step like ours, or make use of deep networks.

4 PROPOSED FRAMEWORK

An overview of the Kingdra method is summarized in Figure 1. Given an unlabelled dataset X = {x1, . . . , xn}, we start with unsupervised training of an ensemble of models M = {M1, . . . , Mm}. For the individual models, any unsupervised model can be used.
However, we propose a novel Ladder-* model, in which we build on ladder networks Rasmus et al. (2015) and modify them to support clustering. Next, we use the agreement between the ensemble models on a pair of data points as a measure of similarity between the data points. This pairwise data is used to construct a similarity graph, from which we extract k tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision. This is important for improving the accuracy of our semi-supervised training, as discussed in Section 2. These pseudo-labels then serve as training data for semi-supervised training of a new ensemble of Ladder-* models. Finally, we perform multiple iterations of the above steps for continued improvement.

4.1 BASE MODEL

The first step of our method is unsupervised training of an ensemble of models. Our framework allows using any unsupervised method for this step, and we have experimented with existing approaches, such as IMSAT Hu et al. (2017). The accuracy of this base model directly impacts the accuracy of our final model, and thus using an accurate base model clearly helps. In that light, we have also developed a novel unsupervised model, Ladder-*, which outperforms other unsupervised models on most data sets. Ladder networks Rasmus et al. (2015) have shown great success in the semi-supervised setting. However, to the best of our knowledge, the ladder architecture has not been used for unsupervised clustering. One reason perhaps is that ladder networks can degenerate to outputting constant values for all inputs in the absence of a supervised loss term. To circumvent this degeneracy, we add an unsupervised loss to the regular ladder loss terms so that it directs the network to give similar outputs for similar inputs, but overall maximizes the diversity in outputs, so that dissimilar inputs are directed towards dissimilar outputs. We achieve this objective by incorporating one of two losses - the IM loss Krause et al. (2010); Hu et al. (2017) or the dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively.

IM loss: The IM loss, or information maximization loss, is simply the mutual information between the input X and output Y of the classifier:

$I(X;Y) = H(Y) - H(Y|X)$ (1)

where $H(\cdot)$ and $H(\cdot|\cdot)$ are the entropy and conditional entropy, respectively. Maximizing the marginal entropy term $H(Y)$ encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input.

Dot product loss: The dot product loss is defined to be

$D(X_i, X_j) = Y_i^{\top} Y_j, \quad i \neq j$ (2)

which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to the IM loss, encouraging the network to assign disparate classes to the inputs. Among Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better than Ladder-Dot in most cases. However, we did find that Ladder-Dot along with Kingdra iterations outperforms when the data set has a large imbalance in the number of samples per class. The reason for this is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with fewer samples. A more detailed presentation of Ladder-IM and Ladder-Dot can be found in the appendix.
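A minimal sketch of the two auxiliary losses (assuming PyTorch and a classifier that outputs class probabilities; the epsilon term and batch-level entropy estimation are implementation assumptions):

```python
import torch

def im_loss(probs, eps=1e-8):
    """Negative mutual information I(X;Y) = H(Y) - H(Y|X), estimated on a batch.

    probs: [B, K] softmax outputs of the classifier.
    Returns a quantity to *minimize* (i.e., -I(X;Y))."""
    p_mean = probs.mean(dim=0)                                     # marginal p(y)
    h_y = -(p_mean * torch.log(p_mean + eps)).sum()                # H(Y)
    h_y_given_x = -(probs * torch.log(probs + eps)).sum(1).mean()  # H(Y|X)
    return -(h_y - h_y_given_x)

def dot_loss(probs):
    """Dot-product loss: push outputs of different inputs towards orthogonality."""
    gram = probs @ probs.t()                          # [B, B] pairwise dot products
    off_diag = gram - torch.diag(torch.diag(gram))    # exclude i == j terms
    return off_diag.sum() / (probs.shape[0] * (probs.shape[0] - 1))
```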
4.2 UNSUPERVISED ENSEMBLING

Kingdra exploits an ensemble of Ladder-* models to further improve the performance of unsupervised learning. Note that, in supervised learning, ensembling is trivial, as we can simply average the outputs of the individual models or vote on them. On the other hand, in unsupervised learning it is not trivial to vote, as in the absence of training labels there is no stable class assignment for outputs across different models, and thus we do not have any mapping of the class IDs of one model to another. To solve this, we propose a simple approach where we look at pairs of data-points, rather than at individual samples. Two data-points are in the same cluster with high confidence if a majority (or all) of the models in the ensemble put them in the same cluster. For example, given an input pair x, x′, if Mi(x) = Mi(x′) for enough models, we can say with high confidence that they belong to the same class. Using this pairwise approach, we propose a graph based method to find small sized, but high precision clusters.

4.2.1 GRAPH BASED MINI-CLUSTERING

We construct a graph G = {X, Epos, Eneg} with n nodes, where each input data-point x is represented as a node. Here Epos and Eneg are two types of edges in the graph:

• Strong positive edges: A strong positive edge is added between two data-points when a large number of models agree on their predicted class: (x, x′) ∈ Epos ⟺ n_agree(x, x′) ≥ tpos, where tpos is a chosen threshold, and n_agree(x, x′) = |{m : m ∈ M, m(x) = m(x′)}|.
• Strong negative edges: A strong negative edge is added between two data-points when a large number of models disagree on their predicted class: (x, x′) ∈ Eneg ⟺ n_disagree(x, x′) ≥ tneg, where tneg is a chosen threshold, and n_disagree(x, x′) = |{m : m ∈ M, m(x) ≠ m(x′)}|.

A strong positive edge between two data points implies that most models believe they are in the same class, while a strong negative edge between two data points implies that most models believe they belong to different classes.

Algorithm 1 Get high precision clusters using ensembles
1: procedure GETCLUSTERS(X, k)
2:   G = {X, Epos, Eneg}
3:   for k′ ∈ {1, 2, . . . , k} do
4:     x_max = argmax_{x∈X} |{x′ : (x, x′) ∈ Epos}|
5:     S_{k′} = {x′ : (x′, x_max) ∈ Epos} ∪ {x_max}
6:     for x′ ∈ X do
7:       Remove x′ from G if (x′, x_max) ∉ Eneg
8:     end for
9:   end for
10:  Return S = {S1, S2, . . . , Sk}
11: end procedure

After building the graph, each clique of strong positive edges would be a cluster where, within a clique, data-points belong to the same class with high confidence. Since we add only high confidence edges to the graph, the number of cliques can be much larger than k. Hence we need to select k cliques, where we would like to maximize the size of each clique, but also require that the cliques are diverse (in order to not select two cliques with data-points belonging to the same class). Hence, within a clique, nodes should be connected by strong positive edges, while across cliques, nodes should be connected by strong negative edges. As finding cliques is not solvable in polynomial time, we use a simple and efficient greedy approximation algorithm, as shown in Algorithm 1.
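A minimal Python sketch of Algorithm 1, which is explained in detail next (assuming NumPy and precomputed pairwise agreement counts; defaulting both thresholds to the ensemble size, as suggested in Section 7):

```python
import numpy as np

def get_clusters(n_agree, n_models, k, t_pos=None, t_neg=None):
    """Greedy extraction of k diverse, high-precision mini-clusters.

    n_agree: [n, n] matrix, number of ensemble models assigning the
    pair (i, j) to the same cluster. Thresholds default to full agreement.
    """
    n = n_agree.shape[0]
    t_pos = n_models if t_pos is None else t_pos
    t_neg = n_models if t_neg is None else t_neg
    e_pos = n_agree >= t_pos                   # strong positive edges
    e_neg = (n_models - n_agree) >= t_neg      # strong negative edges
    alive = np.ones(n, dtype=bool)             # nodes still in the graph
    clusters = []
    for _ in range(k):
        # Node with the most strong positive edges among surviving nodes
        deg = np.where(alive, (e_pos & alive[None, :]).sum(1), -1)
        x_max = int(deg.argmax())
        members = np.flatnonzero(e_pos[x_max] & alive)
        clusters.append(set(members) | {x_max})
        # Keep only nodes with a strong negative edge to x_max (diverse candidates)
        alive &= e_neg[x_max]
    return clusters
```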
Rather than finding cliques, we greedily find nodes with the highest number of strong positive edges (line 4). The intuition is that most of the neighbours of that node will also be connected with each other. In the case of CIFAR10, we find that with a threshold of 90%, 81% of nodes are fully connected with each other. If the threshold is 100%, all nodes in a cluster are connected with each other by transitivity. We take the node with the highest number of strong positive edges, along with the other nodes connected to it by strong positive edges, and add them to a cluster (line 5). We then remove all the nodes that do not have a strong negative edge to the chosen node (lines 6–7). The intuition here is that these nodes are not diverse enough from the chosen cluster (since some models think that they belong to the same class as the currently chosen node), and thus should not be part of the next set of chosen clusters. By repeating the process k times, we get k diverse clusters, approximately satisfying our requirement.

4.3 ITERATIVE ENSEMBLE TRAINING

Once the high precision clusters are identified, we treat these clustered points (points in set S) as pseudo-labels, and solve our unsupervised clustering problem using a semi-supervised method. Although any semi-supervised method can be used, as described in Section 4.1 we use the proposed Ladder-* method, which we found superior to ladder networks in our experiments. Instead of training a single semi-supervised model, we train an ensemble of models, and again use them to find high quality clusters. This approach can be iterated, yielding continued improvements. We name this approach Kingdra. Algorithm 2 describes the complete Kingdra algorithm. First, the individual models are trained using only the unsupervised Ladder-* loss (lines 1–4). Then, for each of the iterations, we obtain high precision clusters (line 6), derive pseudo-labels from them (line 8), and then train the models with both the unsupervised and supervised losses (lines 9–10).

Algorithm 2 Kingdra: Iterative Ensemble Training
Input: Dataset X, Models M, num_clusters k
Output: Cluster Labels
1: for j ∈ {1, 2, . . . , m} do
2:   Initialize weights of Mj
3:   Update Mj by minimizing lossLadder-*
4: end for
5: for it ∈ {1, 2, . . . , n_iter} do
6:   S = GetClusters(X, k)
7:   for j ∈ {1, 2, . . . , m} do
8:     Get pseudo-labels for Mj
9:     Update Mj by minimizing:
10:      lossLadder-* + losssup  ▷ use pseudo-labels for losssup
11:   end for
12: end for
13: Use averaging on the ensemble models M to return the final clusters

We compute the pseudo-labels using the mini-clusters as follows. For a model Mj ∈ M and clusters S, we need to find an appropriate mapping of the clusters to the output classes of the model. In particular, for a cluster S′ ∈ S, we assign all data-points in S′ the following label:

$y^{j}_{S'} = \mathrm{mode}(\{M_j(x') : x' \in S'\})$. (3)

That is, we map a cluster to the output class to which most data-points in the cluster are mapped. These pseudo-labels are then used for computing the supervised loss of Ladder-*. This iterative approach leads to a continuous improvement of clustering quality. We observe that the size of the clusters returned by Algorithm 1 increases after each iteration, until they cover almost the entire input set. The clustering performance of the model also generally improves with each iteration until it saturates, as we show in Section 5. We also note that cluster assignments become more stable with subsequent iterations, which also leads to a decrease in variance across multiple runs. That is, the variance across multiple runs decreases if we run Kingdra for more iterations.
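A minimal sketch of the pseudo-label assignment in equation 3 (assuming NumPy; array shapes are illustrative):

```python
import numpy as np

def pseudo_labels_for_model(model_preds, clusters):
    """Map each mini-cluster to an output class of one model via the mode.

    model_preds: [n] array, cluster IDs assigned by model M_j to all points.
    clusters:    list of k index sets S_1, ..., S_k from GetClusters.
    Returns {point_index: pseudo_label} for points in the mini-clusters.
    """
    labels = {}
    for members in clusters:
        ids = model_preds[list(members)]
        # y^j_{S'} = mode({M_j(x') : x' in S'})
        mode_label = np.bincount(ids).argmax()
        for idx in members:
            labels[idx] = mode_label
    return labels
```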
5 EXPERIMENTS

In this section, we evaluate the performance of Kingdra on several popular datasets. For a fair comparison, we use the same data pre-processing and the same model layer sizes as prior work Hu et al. (2017).

5.1 DATASETS

We evaluate Kingdra on three image datasets and two text datasets: MNIST is a dataset of 70000 handwritten digits of 28-by-28 pixel size. Here, the raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions. CIFAR10 is a dataset of 32-by-32 color images with 10 classes having 6000 examples each. STL is a dataset of 96-by-96 color images with 10 classes having 1300 examples each. For CIFAR10 and STL, raw pixels are not suited for our goal as the color information dominates; hence, as mentioned in Hu et al. (2017), we use features extracted from a Resnet-50 network pre-trained on the ImageNet dataset. Reuters is a dataset containing English news stories with imbalanced data and four categories. We used the same pre-processing as Hu et al. (2017); after removing the stop-words, tf-idf features were used. 20News is a dataset containing newsgroup documents from 20 different newsgroups. Similar to Hu et al. (2017), we remove stop words, keep the 2000 most frequent words, and use tf-idf features. All our experiments were performed using the same pre-processed data.

5.2 EVALUATION METRIC

We use the standard unsupervised evaluation methodology and protocol to compare different methods. Following Xie et al. (2016), we set the number of clusters to the number of ground truth classes and evaluate the unsupervised clustering accuracy as:

$\mathrm{ACC} = \max_{p} \frac{\sum_{i=1}^{N} \mathbb{1}\{l_i = p(c_i)\}}{N}$, (4)

where $l_i$ and $c_i$ are the ground truth label and the cluster label assigned by the model, respectively. We find the best one-to-one mapping of ground truth labels to model generated clusters, with p ranging over all one-to-one mappings (a code sketch of this mapping is given at the end of Section 5.3).

5.3 COMPARED METHODS

We compare Kingdra against several clustering algorithms on our datasets. Specifically, we compare against traditional clustering algorithms such as K-means and agglomerative clustering (AC). We also compare against representation learning baselines, where we use models such as deep autoencoders (dAE) and deep variational autoencoders (dVAE), and then use K-means on the learned representations. Finally, we also compare our model with deep learning based clustering methods such as Deep RIM, DEC, DeepCluster, and IMSAT. Deep RIM uses a multi-layer neural network with the RIM objective. DEC iteratively learns a lower dimensional feature representation and optimizes a clustering objective. We also compare with two versions of IMSAT - IMSAT(RPT) and IMSAT(VAT) - where data augmentation is used to impose invariance in the model outputs. For our results, we report the performance of Ladder-IM and Ladder-Dot individually, and finally Kingdra, which includes an ensemble of Ladder-* networks along with the semi-supervised iterations. For a fair comparison, we used the same network architecture for all the neural network based models.
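Before turning to the results, here is a minimal sketch of the unsupervised clustering accuracy from Section 5.2 (assuming SciPy; `linear_sum_assignment` finds the best one-to-one mapping p via the Hungarian algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, k):
    """ACC = max_p (1/N) * sum_i 1{ l_i = p(c_i) } over one-to-one mappings p."""
    cost = np.zeros((k, k), dtype=np.int64)
    for l, c in zip(y_true, y_pred):
        cost[c, l] += 1  # count co-occurrences of cluster c and label l
    # Negate so that the assignment maximizes the total matched count
    rows, cols = linear_sum_assignment(-cost)
    return cost[rows, cols].sum() / len(y_true)
```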
5.4 EXPERIMENTAL RESULTS

Accuracy results of prior approaches and ours are shown in Table 2. As can be seen from the table, Ladder-IM by itself delivers good performance, and Kingdra-Ladder-IM achieves higher clustering accuracy than state-of-the-art deep unsupervised approaches such as DEC Xie et al. (2016) and IMSAT Hu et al. (2017) on all five data sets. Further, the gap between Kingdra and prior approaches is significant in two data sets: Kingdra-Ladder-IM achieves an average accuracy of 54.6% for CIFAR10 compared to 45.6% for IMSAT and 46.9% for DEC - an 8% increase in absolute accuracy. Similarly, Kingdra-Ladder-IM achieves an average accuracy of 43.9% for 20news compared to 31.1% for IMSAT and 30.8% for DEC - an increase of over 12% in absolute accuracy. Note that while deep networks are state-of-the-art for most data sets, linear approaches outperform deep approaches on 20news, with linear RIM achieving 50.9% accuracy Hu et al. (2017). We also tried DeepCluster Caron et al. (2018) in our experimental setting, but observed the model to degenerate, assigning most of the samples to the same cluster. Additional analysis of DeepCluster is in the Appendix. An interesting aspect to note is that the use of an ensemble by itself only provides small gains of 1-2%, similar to what one expects from ensembles in supervised learning (e.g., compare Ladder-IM with Ladder-IM-ensemble). The large gains mainly come from Kingdra using the ensemble to generate pseudo-labels, which is then iterated. For example, Kingdra-Ladder-IM provides absolute gains of 4-6% in most data sets over the base model. Similarly, Kingdra-Ladder-Dot provides absolute gains of 9% on MNIST and 17% on STL over the base Ladder-Dot model. Thus, our approach of generating pseudo-labels from ensembles is a powerful approach that delivers large gains in unsupervised learning. Also note that Kingdra-Ladder-IM performs better than Kingdra-Ladder-Dot for most data sets, except for the Reuters data set where the latter performs better (Reuters has a large class imbalance, with the largest class representing 43% of the data). Finally, note the standard deviation of the various approaches shown in the table. One can see that Kingdra in general results in a lower standard deviation than many of the prior approaches, even while delivering higher accuracy. Figure 2 shows the accuracy of the pseudo-labels and of Kingdra-Ladder-IM, as well as the number of pseudo-labels identified by the graph clustering algorithm, versus the number of iterations for the STL, CIFAR10, and MNIST datasets. As iterations progress, the accuracy of the pseudo-labels decreases as more pseudo-labels get added; however, this still helps improve the overall clustering accuracy. Note that, unlike pure semi-supervised approaches which use a small set of (randomly sampled) data points that match the input data distribution, our pseudo-labels do not completely match the input data distribution (since our selection algorithm is biased towards easy data points). This causes an increased gap between the accuracy of the pseudo-labels and that of the overall clustering.

5.5 QUALITATIVE ANALYSIS

Figure 3 shows the similarity graph obtained after the first three iterations of Kingdra on the MNIST dataset. As the iterations progress, one can see that there are fewer inter-cluster linkages, indicating that the models are converging on the labels for these data points. Figure 4 shows randomly selected examples from the final clusters generated by Kingdra. One can see that the examples are highly accurate for MNIST, thus resulting in high overall accuracy. However, for CIFAR10, there are several incorrectly labelled examples, including two clusters which do not have a clear mapping with any ground truth class, thereby resulting in much lower overall accuracy.

6 CONCLUSION

In this paper, we introduced Kingdra, a novel pseudo-semi-supervised learning approach for clustering.
Kingdra outperforms current state-of-the-art unsupervised deep learning based approaches, with 8-12% gains in absolute accuracy for the CIFAR10 and 20news datasets. As part of Kingdra, we proposed clustering ladder networks, Ladder-IM and Ladder-Dot, which work well in both unsupervised and semi-supervised settings.

7 DISCUSSION

While Kingdra performs well on the datasets we studied, the similarity-based graph clustering algorithm used has difficulty as the number of classes increases. For example, for the datasets we evaluated, tpos and tneg can simply be set to the number of models in the ensemble. However, as the number of classes increases, these thresholds may need some tuning. For CIFAR100, with 100 classes, our graph clustering algorithm is not able to identify 100 diverse classes effectively. We are looking at improving the clustering algorithm as part of future work. We are also evaluating adding diversity to the models in the ensemble, either via changing the model structure or size, and/or through changing the standard deviation of the random noise used in ladder networks.

A APPENDIX

B Ladder-*: LADDER NETWORKS FOR CLUSTERING

We now describe the Ladder-* architecture for the individual models in the ensemble. We use the same model architecture for both unsupervised learning in the initial step and the subsequent semi-supervised learning iterations, the only difference being that the semi-supervised models carry an extra supervision loss term. Our architecture augments ladder networks Rasmus et al. (2015) with one of two losses - an information maximization loss similar to the RIM method described in Krause et al. (2010); Hu et al. (2017), or a dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively. We first briefly describe the RIM method and ladder networks, followed by our Ladder-IM and Ladder-Dot methods.

REGULARIZED INFORMATION MAXIMIZATION (RIM)

The Regularized Information Maximization (RIM) approach for unsupervised learning was introduced in Krause et al. (2010) and extended by Hu et al. (2017) to the multi-dimensional setting. The RIM method minimizes the following objective for a classifier:

$R(\theta) - \lambda I(X;Y)$ (5)

where $R(\theta)$ is a regularization term, and $I(X;Y)$ is the mutual information between the input X and output Y of the classifier. The mutual information can be written as the difference between the marginal entropy and the conditional entropy Hu et al. (2017):

$I(X;Y) = H(Y) - H(Y|X)$ (6)

where $H(\cdot)$ and $H(\cdot|\cdot)$ are the entropy and conditional entropy, respectively. Maximizing the marginal entropy term $H(Y)$ encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input. In the unsupervised setting, where other priors are not known, this loss makes intuitive sense. For the regularization loss term $R(\theta)$ above, many options have been proposed. Hu et al. (2017), for example, propose a Self-Augmented Training (SAT) loss, which imposes invariance on the outputs of original and slightly perturbed input data. The authors experimented with random perturbation (IMSAT-RPT) and adversarial perturbation (IMSAT-VAT), where the perturbation is chosen to maximize the divergence between the two outputs on the current model.

LADDER NETWORKS

Ladder networks Rasmus et al. (2015) have shown impressive performance for semi-supervised classification.
They employ a deep denoising autoencoder architecture, in which additive noise is added to each hidden layer in the encoder, and the decoder learns a denoising function for each layer. The objective function is a weighted sum of a supervised cross entropy loss on the output of the noisy encoder and a squared-error unsupervised denoising loss for all layers. Unlike standard autoencoders, ladder networks also add lateral skip connections from each layer of the noisy encoder to the corresponding decoder layer. The additive noise acts as a regularizer for the supervised loss, while the lateral connections in the denoising decoder layers enable the higher layer features to focus on more abstract and task-specific features. See Pezeshki et al. (2016) for a detailed analysis. Borrowing the formalism in Pezeshki et al. (2016), a ladder network with L encoder/decoder layers can be defined as:

$\tilde{x}_i, \tilde{z}^{(1)}_i, \ldots, \tilde{z}^{(L)}_i, \tilde{y}_i = \mathrm{Encoder}_{\mathrm{noisy}}(x_i, \theta_j)$,
$x_i, z^{(1)}_i, \ldots, z^{(L)}_i, y_i = \mathrm{Encoder}_{\mathrm{clean}}(x_i, \theta_j)$,
$\hat{x}_i, \hat{z}^{(1)}_i, \ldots, \hat{z}^{(L)}_i = \mathrm{Decoder}(\tilde{z}^{(1)}_i, \ldots, \tilde{z}^{(L)}_i, \phi_j)$,

where $\theta_j$ and $\phi_j$ are the parameters of the encoder and decoder, respectively. The variables $z^{(k)}_i$, $\tilde{z}^{(k)}_i$, and $\hat{z}^{(k)}_i$ are the hidden layer outputs for the clean, noisy, and denoised versions at layer k, respectively, and $x_i$, $y_i$, $\tilde{y}_i$ are the input, the clean output, and the noisy output, respectively. The objective function consists of the reconstruction loss between the clean and decoded intermediate features:

$\mathrm{loss}_{\mathrm{denoise}} = \sum_{i=1}^{n} \sum_{k=1}^{L} \lambda^{\mathrm{denoise}}_{k} \left\| z^{(k)}_i - \hat{z}^{(k)}_i \right\|^2$ (7)

and a supervised cross entropy loss on the output of the noisy encoder (which is used only in the semi-supervised setting):

$\mathrm{loss}_{\mathrm{sup}} = -\sum_{i=1}^{n} \log P(\tilde{y}^{(i)} = y^{*} | x^{(i)})$ (8)

Ladder-IM & Ladder-Dot: We now describe our novel Ladder-IM and Ladder-Dot models. The unsupervised denoising loss in Equation 7, along with the lateral connections architecture, enables ladder networks to learn useful features from unsupervised data. However, in the absence of any supervised loss (Equation 8), ladder networks can degenerate to the trivial solution of a constant output for each encoder layer, as the decoder can then simply memorize these constants to make the denoising loss zero. Having batch normalization layers helps to alleviate this problem, but the loss function still allows the trivial solution. On the other hand, the mutual information loss (Equation 6) in RIM methods, in particular the marginal entropy term H(Y), encourages the network to assign disparate classes to the inputs.

Ladder-IM: Combining ladder networks with information maximization can fix the above degeneracy problem, while simultaneously encouraging the ladder output towards a uniform distribution. We use both the clean and noisy outputs of the ladder network for computing the mutual information loss, i.e.

$\mathrm{loss}_{\mathrm{MI}} = I(X; \tilde{Y}) + I(X; Y)$ (9)

where $Y = \{y_1, \ldots, y_N\}$ is the set of clean outputs, and $\tilde{Y} = \{\tilde{y}_1, \ldots, \tilde{y}_N\}$ is the set of noisy outputs from the ladder network. Another way of thinking about the Ladder-IM approach is completely within the RIM framework. The unsupervised ladder loss $\mathrm{loss}_{\mathrm{denoise}}$ can simply be thought of as the regularization term $R(\theta)$ in Equation 5. To that effect, we also add another regularization loss term, which is the KL divergence between the clean and noisy outputs of the ladder network encoder, i.e.

$\mathrm{loss}_{\mathrm{ladder\_R}} = \mathrm{KL}(p(\tilde{y}|x)\,\|\,p(y|x))$ (10)

This regularization can be thought of as a generalization of the random perturbation loss proposed in Hu et al. (2017), where the authors impose invariance on the outputs of original and randomly perturbed inputs.
Ladder-Dot: We also try a dot product loss to fix the above degeneracy problem. The dot product loss is defined as

D(X_i, X_j) = Y_i^T Y_j,  if i ≠ j    (12)

which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to the IM loss, encouraging the network to assign disparate classes to the inputs. Between Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better in most cases. However, we did find that Ladder-Dot combined with Kingdra iterations performs better when the data set has a large imbalance in the number of samples per class. The reason for this is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with fewer samples. Overall, we found in our experiments that Ladder-IM showed superior performance to IMSAT-RPT and IMSAT-VAT Hu et al. (2017) on most data sets. Moreover, in pure semi-supervised settings, Ladder-IM also outperformed vanilla ladder networks in our preliminary analysis.
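A possible rendering of the dot product loss in Equation (12), again as a sketch under the assumption that `probs` is a batch of softmax outputs Y:

```python
import torch

def dot_loss(probs):
    """Sketch of Equation (12): mean pairwise dot product Y_i^T Y_j over
    i != j, pushing outputs for different inputs towards orthogonality."""
    gram = probs @ probs.t()                        # all pairwise Y_i^T Y_j
    off_diag = gram.sum() - gram.diagonal().sum()   # drop i == j terms
    n = probs.shape[0]
    return off_diag / (n * (n - 1))                 # mean over ordered pairs
```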
C EXPERIMENTAL RESULTS

C.1 IMPACT OF NUMBER OF MODELS IN ENSEMBLE

We evaluated the accuracy of KINGDRA-LADDER-IM as the number of models in the ensemble was varied. MNIST accuracy with 1, 2, 5, 10, and 15 models is 95.0, 96.2, 97.4, 98.5, and 98.5, respectively. This suggests that accuracy saturates after 10 models, and we use 10 models in our ensemble for all our experiments.

C.2 COMPUTATION COST

We have an efficient implementation of clustering, which takes 210s for the largest n = 70000. On a server with four P100 GPUs, CLadder-IM takes 2 minutes, CLadder-IM with an ensemble takes 8 minutes, and Kingdra with 10 iterations takes 80 minutes, while IMSAT(RPT) takes 5 minutes.

C.3 ANALYSIS OF DEEPCLUSTER

Here we give an analysis of DeepCluster Caron et al. (2018), explaining its shortcomings. We observed that the clustering accuracy generally decreases with iterations. This is because the generated pseudo-labels can be bad, which results in worse accuracy in the next iteration. In contrast, our approach only uses a small number of high-confidence samples as pseudo-labels.

D DETAILS OF THE DATASETS

• MNIST: A dataset of 70000 handwritten digits of 28-by-28 pixel size. The raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions.
• CIFAR10: A dataset of 32-by-32 color images with 10 classes having 6000 examples each. Similar to Hu et al. (2017), features are extracted using 50-layer pre-trained deep residual networks.
• STL: A dataset of 96-by-96 color images with 10 classes having 1300 examples each. We do not use the 100000 unlabeled images provided in the dataset. Similar to Hu et al. (2017), features are extracted using 50-layer pre-trained deep residual networks.
• Reuters: A dataset containing English news stories with four categories: corporate/industrial, government/social, markets, and economics. We used the same preprocessing as Hu et al. (2017). After removing the stop words, tf-idf features were used.
• 20News: A dataset containing newsgroup documents from 20 different newsgroups. Similar to Hu et al. (2017), after removing stop words and keeping the 2000 most frequent words, tf-idf features were used.
1. What is the focus of the paper regarding unsupervised clustering?
2. What are the strengths and novelties of the proposed method, particularly in comparison to other unsupervised learning approaches?
3. What are the weaknesses of the paper, especially regarding experiment settings, motivation, and ad-hoc nature of the algorithm?
4. How does the reviewer assess the significance of the contributions and potential impact of the paper?
5. Are there any suggestions or recommendations for improving the paper, such as adding more baselines, modifying the experimental setup, or providing more principled algorithms?
Review
Review This paper proposes a method for unsupervised clustering. Similarly to other unsupervised learning (UL) papers like "Deep Clustering for Unsupervised Learning of Visual Features" by Caron et al., they propose an algorithm alternating between a labelling phase and a training phase. Though, it has interesting differences. For example, unlike the Caron et al. paper, not all the samples get assigned a label, but only the most confident ones. These samples are determined by the pruning of a graph whose edges are determined by the votes of an ensemble of clustering models. Then, these pseudo-labels are used within a supervised loss which acts as a regularizer for the retraining of the clustering models.
Novelties/contributions/good points:
* Votes from the clustering models to create a graph
* Using a graph to identify the most important samples for pseudo-labelling
* Modification of the ladder network to be used as a clustering algorithm
* Good amount of experiments and good results
Weaknesses:
* The whole experiment leading to Table 1 on page 2 is unclear to me. I have trouble understanding the experiment settings. Could you please rephrase it, for example the part about initial/final clustering, and the rest as well. The whole thing puzzles me, whereas the experiments section at the end is much clearer.
* Lack of motivation for why the Ladder method is used rather than another one. Other recent methods have better results in semi-supervised learning.
* Algorithm 1 seems quite ad hoc. Do more principled algorithms exist to solve this problem? You could write about it and at least explain why they would not be feasible here. The sentence "The intuition is that most of the neighbours of that node will also be connected with each other" is unmotivated: is there no empirical proof for this?
* The related work section is too light. It is an important section and should really not be hidden or neglected.
* In the experiments, you could add "Deep Clustering for Unsupervised Learning of Visual Features" as a baseline as well, even if they use it for unsupervised learning, as they do clustering too.
* In the experiments, you use the features extracted from ResNet-50, but what about fine-tuning this network rather than adding something on top, or, even better, starting from scratch? Because here CIFAR-10 benefits greatly from the ImageNet features. I know that you should reproduce the settings from other papers, but it might be good to go a bit beyond, especially if the settings of previous papers are a bit faulty.
* Regarding the impact of the number of models in section D of the appendix, there is no saturation at 10 models. So how many models are necessary for saturation of the performance?
* Minor point: several times, you write "psuedo".
Conclusion: the algorithm is novel and represents a nice contribution. Though, there are a lot of weaknesses that could be solved. So, I am putting "Weak accept" for the moment, but it could change towards a negative rating depending on the rebuttal.
ICLR
Title Unsupervised Clustering using Pseudo-semi-supervised Learning
(∗Work done as a Research Fellow at Microsoft Research India.)

Abstract In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve 54.6% accuracy for CIFAR10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms. Project details and code are available at https://divamgupta.com/pseudo-semi-supervised-clustering

1 INTRODUCTION

Semi-supervised methods, which make use of large unlabelled data sets and a small labelled data set, have seen recent success; e.g., ladder networks Rasmus et al. (2015) achieve 99% accuracy on MNIST using only 100 labelled samples. These approaches leverage the unlabelled data to help the network learn an underlying representation, while the labelled data guides the network towards separating the classes. In this paper, we ask two questions: is it possible to create the small labelled data set required by semi-supervised methods purely using unsupervised techniques? If so, can semi-supervised methods leverage this autonomously generated pseudo-labelled data set to deliver higher performance than state-of-the-art unsupervised approaches? We answer both these questions in the affirmative.

We first find that prior approaches for identifying pseudo-labels Caron et al. (2018); Chen (2018); Lee (2013) perform poorly because of their low accuracy (Section 2). To create a high accuracy pseudo-labelled data set autonomously, we use a combination of an ensemble of deep networks and a custom graph clustering algorithm (Section 4). We first train an ensemble of deep networks in an unsupervised manner. Each network independently clusters the input. We then compare two input data points. If all of the networks agree that these two data points belong to the same cluster, we can be reasonably sure that these data points belong to the same class. In this way, we identify input data pairs belonging to the same class with high precision in a completely unsupervised manner. In the next step, we use these high quality input pairs to generate a similarity graph, with the data points as nodes and edges between data points which are deemed to be similar by our ensemble. From this graph, we extract tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision. Extracting high quality clusters from this graph while ensuring that the extracted clusters correspond to different classes is challenging. We discuss our approach for solving this problem in Section 4.2.1. In this way, our method extracts unambiguous samples belonging to each class, which serve as pseudo-labels for semi-supervised learning.
For semi-supervised learning using the labels generated above, one could use ladder networks Rasmus et al. (2015). However, we found that ladder networks are unsuitable for the initial unsupervised clustering step, as they can degenerate to outputting constant values for all inputs in the absence of a supervised loss. To enable unsupervised clustering, we augment ladder networks with information maximization Krause et al. (2010) to create Ladder-IM, and with a dot product loss to create Ladder-Dot. We show in Section 5 that Ladder-IM and Ladder-Dot, by themselves, also provide improvements over the previous state of the art. We use the same models for both the first unsupervised learning step and the subsequent pseudo-semi-supervised iterations. Finally, the approach of finding high quality clusters using an ensemble, and using them as labels to train a new ensemble of semi-supervised models, is iterated, yielding continued improvements. The large gains of our method mainly come from this iterative approach, which can, in some cases, yield up to 17% gains in accuracy over the base unsupervised models (see Section 5.4). We name our pseudo-semi-supervised learning approach Kingdra (named after a semi-pseudo Pokémon). Kingdra is independent of the type of data set; we show examples of its use on both image and text data sets in Section 5. This is in contrast to some previous approaches using CNNs, e.g. Chang et al. (2017); Caron et al. (2018), which are specialized for image data sets.

We perform unsupervised classification using Kingdra on several standard image (MNIST, CIFAR10, STL) and text (Reuters, 20news) datasets. On all these datasets, Kingdra is able to achieve higher clustering accuracy compared to current state-of-the-art deep unsupervised clustering techniques. For example, on the CIFAR10 and 20news datasets, Kingdra is able to achieve classification accuracy of 54.6% and 43.9%, respectively, delivering 8-12% absolute gains over state of the art results Hu et al. (2017); Xie et al. (2016).

2 PRIOR WORK ON GENERATING PSEUDO-LABELS

Several techniques have been proposed in the literature for generating pseudo-labels (Caron et al. (2018); Chen (2018); Lee (2013)). In Lee (2013), the output class with the highest softmax value (Argmax) is taken to be the pseudo-label. In Caron et al. (2018), the authors perform K-means clustering on the feature vector and use the K-means clusters as pseudo-labels. Finally, the authors in Chen (2018) treat the softmax output as confidence and only label those items whose confidence value is above a high threshold. Note that none of these techniques for identifying pseudo-labels have been applied in our context, i.e., for unsupervised clustering using semi-supervised models.

In this section, we evaluate whether pseudo-labels created by these prior techniques can be leveraged by semi-supervised models to improve clustering accuracy. We start with a semi-supervised model based on ladder networks (Rasmus et al. (2015)) called Ladder-IM (see Section 4.1 for model details) and train it using only its unsupervised loss terms on the MNIST and CIFAR10 datasets. We use each of the above three pseudo-labelling approaches on the trained model to provide an initial set of pseudo-labels for the datasets (e.g., using K-means clustering on the feature vector of the model as in Caron et al. (2018), etc.). We call the accuracy of these pseudo-labels the initial pseudo-label accuracy.
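For illustration, the three prior pseudo-labelling schemes could be sketched as follows. These are hypothetical helpers, not code from any of the cited papers; `probs` stands for a model's softmax outputs and `features` for its intermediate representations:

```python
import numpy as np
from sklearn.cluster import KMeans

def argmax_labels(probs):
    """Lee (2013)-style: the most likely class is the pseudo-label."""
    return np.argmax(probs, axis=1)

def threshold_labels(probs, t=0.95):
    """Chen (2018)-style: keep only confident samples; -1 marks 'unlabelled'."""
    labels = np.argmax(probs, axis=1)
    labels[probs.max(axis=1) < t] = -1
    return labels

def kmeans_labels(features, k):
    """Caron et al. (2018)-style: K-means clusters on features as pseudo-labels."""
    return KMeans(n_clusters=k, n_init=10).fit_predict(features)
```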
We then use these generated pseudo-labels along with the datasets to train the model again, now with a supervised loss term (based on the pseudo-labels) in addition to the earlier unsupervised loss terms. We again run the pseudo-labelling approaches on the newly trained model to derive an updated set of pseudo-labels. We iterate this process of training and pseudo-labelling until the pseudo-label accuracy stabilizes. We call the resulting accuracy the final clustering accuracy. The initial pseudo-label accuracy and the final clustering accuracy for the three approaches are shown in Table 1.

First, consider MNIST. The unsupervised clustering accuracy of Ladder-IM is 95.4%. Argmax simply assigns pseudo-labels based on the model's output, and since this doesn't add any new information for subsequent iterations, the final accuracy remains at 95.4%. On the other hand, the pseudo-labels identified by both the K-means and threshold approaches have worse initial label accuracy (75.4% and 88.6%). When these low-accuracy pseudo-labels are used as supervision to train the model further, the result is a low final clustering accuracy of 60.9% and 91.6%, respectively. The CIFAR10 results are similar. Ladder-IM clustering accuracy is 49%, which remains the same under Argmax as before. Pseudo-label accuracy using the K-means approach is worse and pulls the final accuracy down to 44.8%. Interestingly, thresholding gives a slightly higher initial accuracy of 60.5%, but even this is not high enough to improve the final clustering accuracy for CIFAR10.

From these results, we arrive at the following two conclusions. First, if the initial pseudo-label accuracy is not high, using pseudo-labels as supervision can bring down the final clustering accuracy. Thus, high accuracy of the initial pseudo-labels is crucial for improving clustering accuracy. Second, current approaches for identifying pseudo-labels do not deliver high accuracy and hence are unable to help improve overall clustering accuracy.

3 RELATED WORK

Unsupervised clustering: Various unsupervised clustering methods have been proposed over the years. Ng et al. (2002) uses a spectral clustering based approach, while Elhamifar & Vidal (2009) uses a sparse subspace approach for unsupervised learning. Recently, several deep neural network based methods have been proposed, which scale well to large datasets. The ability of deep neural networks to learn higher level representations makes them a good choice for unsupervised learning. Coates & Ng (2012) and Caron et al. (2018) use convnets and k-means for clustering. Caron et al. (2018), for example, iterates over clustering the features obtained from a convnet, and training the classifier using these clusters as pseudo-labels. The authors do not report clustering performance, and we observed that this method can easily degenerate. Chang et al. (2017) uses pairwise dot-product based similarity to identify close input pairs, which provide a supervisory signal. These convnet based approaches, however, work only on image datasets. Xie et al. (2016) simultaneously learns feature representations and cluster assignments using deep neural networks, and works on both image and text datasets. Hu et al. (2017) uses regularization combined with a mutual information loss for unsupervised learning and achieves state of the art results. The authors conduct experiments in two settings: Random Perturbation Training and Virtual Adversarial Training.
Other works, such as Hjelm et al. (2018), use mutual information, maximizing the mutual information between the spatial features and the non-spatial features. Ji et al. (2019) maximizes the mutual information between the predicted label of an image and the predicted label of its augmented version. This method uses convolutional networks and requires domain knowledge of the dataset.

Self-supervised learning: Another form of unsupervised learning uses auxiliary learning tasks, for which labels can be self-generated, to learn useful representations from data. Many methods use the spatial information of image patches to generate self-supervised data. E.g., Pathak et al. (2016) predicts pixels in an image patch using surrounding patches, while Doersch et al. (2015) predicts the relative position of image patches. Sermanet et al. (2018) uses time as a self-supervisory signal between videos taken from different viewpoints. The temporal signal is also used to learn representations from single videos by predicting future frames, e.g. Denton et al. (2017). Our method uses the correlation between the outputs for input points across an ensemble as a supervisory signal to generate self-supervised pseudo-labels.

Semi-supervised learning: Semi-supervised approaches use sparse labelling of data points. Szummer & Jaakkola (2002) propagates labels based on nearest neighbors. Weston et al. (2012) uses a deep version of label propagation. Lee (2013) adjusts label probabilities, starting by trusting only true labels and gradually increasing the weight of pseudo-labels. Rasmus et al. (2015) employs a denoising autoencoder architecture and has shown impressive performance. Tarvainen & Valpola (2017) uses a model averaged over previous iterations as a teacher. Other than these, some semi-supervised learning methods like Xie et al. (2019) and Berthelot et al. (2019) use data augmentation and assume some domain knowledge of the dataset, with some of the data augmentation specific to image datasets. Miyato et al. (2018) and Shinoda et al. (2017) use virtual adversarial training combined with the classification loss to perform semi-supervised classification. However, we found that these methods do not work well if we jointly train them with unsupervised losses. Ladder networks do not require any domain-dependent augmentation, work for both image and text datasets, and can easily be jointly trained with supervised and unsupervised losses. Thus, we chose to work with ladder networks in our experiments, though our approach is general enough to work with any semi-supervised method that accommodates training with unsupervised loss terms.

Unsupervised ensemble learning: Unsupervised ensemble learning has mostly been limited to generating a set of clusterings and combining them into a final clustering. Huang et al. (2016) cast ensemble clustering as a binary linear programming problem. Wang et al. (2009); Fred & Jain (2005) use a pairwise co-occurrence approach to construct a co-association matrix and use it to measure similarity between data points. See Vega-Pons & Ruiz-Shulcloper (2011) for a survey of ensemble clustering algorithms. Note that, to the best of our knowledge, none of the ensemble clustering algorithms employ a semi-supervised step like ours, or make use of deep networks.

4 PROPOSED FRAMEWORK

An overview of the Kingdra method is given in Figure 1. Given an unlabelled dataset X = {x_1, . . . , x_n}, we start with unsupervised training of an ensemble of models M = {M_1, . . . , M_m}. For the individual models, any unsupervised model can be used.
However, we propose a novel Ladder-* model, in which we build on ladder networks Rasmus et al. (2015) and modify them to support clustering. Next, we use the agreement between the ensemble models on a pair of data points as a measure of similarity between the data points. This pairwise data is used to construct a similarity graph, from which we extract k tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision. This is important for improving the accuracy of our semi-supervised training, as discussed in Section 2. These pseudo-labels then serve as training data for semi-supervised training of a new ensemble of Ladder-* models. Finally, we perform multiple iterations of the above steps for continued improvement.

4.1 BASE MODEL

The first step of our method is unsupervised training of an ensemble of models. Our framework allows using any unsupervised method for this step, and we have experimented with existing approaches, such as IMSAT Hu et al. (2017). The accuracy of this base model directly impacts the accuracy of our final model, and thus using an accurate base model clearly helps. In that light, we have also developed a novel unsupervised model, Ladder-*, which outperforms other unsupervised models on most data sets.

Ladder networks Rasmus et al. (2015) have shown great success in the semi-supervised setting. However, to the best of our knowledge, the ladder architecture has not been used for unsupervised clustering. One reason, perhaps, is that ladder networks can degenerate to outputting constant values for all inputs in the absence of a supervised loss term. To circumvent this degeneracy, we add an unsupervised loss to the regular ladder loss terms that directs the network to give similar outputs for similar inputs, but overall maximizes the diversity in outputs, so that dissimilar inputs are directed towards dissimilar outputs. We achieve this objective by incorporating one of two losses: the IM loss Krause et al. (2010); Hu et al. (2017) or the dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively.

IM loss: The IM loss, or information maximization loss, is simply the mutual information between the input X and output Y of the classifier:

I(X; Y) = H(Y) − H(Y | X)    (1)

where H(.) and H(.|.) are the entropy and conditional entropy, respectively. Maximizing the marginal entropy term H(Y) encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input.

Dot product loss: The dot product loss is defined as

D(X_i, X_j) = Y_i^T Y_j,  if i ≠ j    (2)

which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to the IM loss, encouraging the network to assign disparate classes to the inputs. Between Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better in most cases. However, we did find that Ladder-Dot combined with Kingdra iterations performs better when the data set has a large imbalance in the number of samples per class.
The reason for this is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with fewer samples. A more detailed presentation of Ladder-IM and Ladder-Dot can be found in the appendix.

4.2 UNSUPERVISED ENSEMBLING

Kingdra exploits an ensemble of Ladder-* models to further improve the performance of unsupervised learning. Note that, in supervised learning, ensembling is trivial, as we can simply average the outputs of the individual models or vote on them. In unsupervised learning, on the other hand, voting is not trivial: in the absence of training labels there is no stable class assignment for outputs across different models, and thus we do not have any mapping of the class IDs of one model to another. To solve this, we propose a simple approach where we look at pairs of data points rather than individual samples. Two data points are in the same cluster with high confidence if a majority (or all) of the models in the ensemble put them in the same cluster. For example, given an input pair x, x′, if M_i(x) = M_i(x′) for enough models, we can say with high confidence that they belong to the same class. Using this pairwise approach, we propose a graph based method to find small, but high precision, clusters.

4.2.1 GRAPH BASED MINI-CLUSTERING

We construct a graph G = {X, E_pos, E_neg} with n nodes, where each input data point x is represented as a node. Here E_pos and E_neg are two types of edges in the graph:
• Strong positive edges: A strong positive edge is added between two data points when a large number of models agree on their predicted class: (x, x′) ∈ E_pos ⇔ n_agree(x, x′) ≥ t_pos, where t_pos is a chosen threshold, and n_agree(x, x′) = |{m : m ∈ M, m(x) = m(x′)}|.
• Strong negative edges: A strong negative edge is added between two data points when a large number of models disagree on their predicted class: (x, x′) ∈ E_neg ⇔ n_disagree(x, x′) ≥ t_neg, where t_neg is a chosen threshold, and n_disagree(x, x′) = |{m : m ∈ M, m(x) ≠ m(x′)}|.
A strong positive edge between two data points implies that most models believe they are in the same class, while a strong negative edge between two data points implies that most models believe they belong to different classes.
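As a sketch (not the authors' implementation), the two edge sets can be computed from an m × n matrix of ensemble cluster assignments; the dense n × n matrices below are for illustration only:

```python
import numpy as np

def build_edges(preds, t_pos, t_neg):
    """Compute strong positive/negative edges from ensemble predictions.
    `preds` is an (m, n) array of cluster IDs from m models for n points."""
    m, n = preds.shape
    # n_agree[i, j] = number of models assigning points i and j to one cluster.
    n_agree = sum((p[:, None] == p[None, :]).astype(int) for p in preds)
    e_pos = n_agree >= t_pos            # strong positive edges
    e_neg = (m - n_agree) >= t_neg      # strong negative edges
    np.fill_diagonal(e_pos, False)
    np.fill_diagonal(e_neg, False)
    return e_pos, e_neg
```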
After building the graph, each clique of strong positive edges would be a cluster where, within the clique, data points belong to the same class with high confidence. Since we add only high confidence edges to the graph, the number of cliques can be much larger than k. Hence, we need to select k cliques, where we would like to maximize the size of each clique but also require that the cliques are diverse (in order not to select two cliques with data points belonging to the same class). Hence, within a clique, nodes should be connected by strong positive edges, while across cliques, nodes should be connected by strong negative edges. As finding maximum cliques is NP-hard, we use a simple and efficient greedy approximation, shown in Algorithm 1.

Algorithm 1 Get high precision clusters using ensembles
1: procedure GETCLUSTERS(X, k)
2:   G = {X, E_pos, E_neg}
3:   for k′ ∈ {1, 2, . . . , k} do
4:     x_max = argmax_{x ∈ X} |{x′ : (x, x′) ∈ E_pos}|
5:     S_{k′} = {x : (x, x_max) ∈ E_pos} ∪ {x_max}
6:     for x′ ∈ X do
7:       Remove x′ from G if (x′, x_max) ∉ E_neg
8:     end for
9:   end for
10:  Return S = {S_1, S_2, . . . , S_k}
11: end procedure

Rather than finding cliques, we greedily find the node with the highest number of strong positive edges (line 4). The intuition is that most of the neighbours of that node will also be connected with each other. In the case of CIFAR-10, we find that with a threshold of 90%, 81% of the nodes are fully connected with each other. If the threshold is 100%, all nodes in a cluster are connected with each other by transitivity. We take the node with the highest number of strong positive edges, along with the other nodes connected to it by strong positive edges, and add them to a cluster (line 5). We then remove all the nodes that do not have a strong negative edge to the chosen node (lines 6–7). The intuition here is that these nodes are not diverse enough from the chosen cluster (since some models think that they belong to the same class as the currently chosen node), and thus should not be part of the next set of chosen clusters. By repeating the process k times, we get k diverse clusters, approximately satisfying our requirement.

4.3 ITERATIVE ENSEMBLE TRAINING

Once the high precision clusters are identified, we treat these clustered points (the points in set S) as pseudo-labels, and solve our unsupervised clustering problem using a semi-supervised method. Although any semi-supervised method can be used, as described in Section 4.1 we use the proposed Ladder-* method, which we found superior to ladder networks in our experiments. Instead of training a single semi-supervised model, we train an ensemble of models, and again use them to find high quality clusters. This approach can be iterated, yielding continued improvements. We name this approach Kingdra. Algorithm 2 describes the complete Kingdra algorithm.

Algorithm 2 Kingdra: Iterative Ensemble Training
Input: Dataset X, Models M, num_clusters k
Output: Cluster labels
1: for j ∈ {1, 2, . . . , m} do
2:   Initialize weights of M_j
3:   Update M_j by minimizing loss_Ladder-*
4: end for
5: for it ∈ {1, 2, . . . , n_iter} do
6:   S = GetClusters(X, k)
7:   for j ∈ {1, 2, . . . , m} do
8:     Get pseudo-labels for M_j
9:     Update M_j by minimizing:
10:      loss_Ladder-* + loss_sup    ▷ Use pseudo-labels for loss_sup
11:  end for
12: end for
13: Use averaging on the ensemble models M to return the final clusters

First, the individual models are trained using only the unsupervised Ladder-* loss (lines 1–4). Then, in each iteration, we obtain high precision clusters (line 6), derive pseudo-labels from them (line 8), and train the models with both the unsupervised and supervised losses (lines 9–10). We compute the pseudo-labels from the mini-clusters as follows. For a model M_j ∈ M and clusters S, we need to find an appropriate mapping of the clusters to the output classes of the model. In particular, for a cluster S′ ∈ S, we assign all data points in S′ the following label:

y^j_{S′} = mode({M_j(x′) : x′ ∈ S′})    (3)

That is, we map a cluster to the output class to which most of its data points are mapped. These pseudo-labels are then used for computing the supervised loss of Ladder-*. This iterative approach leads to a continuous improvement in clustering quality. We observe that the size of the clusters returned by Algorithm 1 increases after each iteration until they cover almost the entire input set. The clustering performance of the model also generally improves with each iteration until it saturates, as we show in Section 5. We also note that cluster assignments become more stable with subsequent iterations, which also leads to a decrease in variance across multiple runs. That is, the variance across multiple runs decreases if we run Kingdra for more iterations.
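The sketch below is one possible Python rendering of Algorithms 1 and 2, under stated assumptions: build_edges is the helper sketched earlier, and fit_unsupervised / fit_semisupervised / predict are hypothetical model interfaces standing in for Ladder-* training; it is not the authors' code:

```python
import numpy as np
from collections import Counter

def get_clusters(e_pos, e_neg, k):
    """Greedy sketch of Algorithm 1 (GetClusters)."""
    alive = np.ones(e_pos.shape[0], dtype=bool)
    clusters = []
    for _ in range(k):
        # Line 4: alive node with the most strong positive edges to alive nodes.
        deg = np.where(alive, (e_pos & alive[None, :]).sum(axis=1), -1)
        x_max = int(np.argmax(deg))
        # Line 5: the chosen node plus its strong positive neighbourhood.
        members = np.flatnonzero(alive & e_pos[x_max]).tolist() + [x_max]
        clusters.append(members)
        # Lines 6-7: keep only nodes with a strong negative edge to x_max.
        alive &= e_neg[x_max]
    return clusters

def pseudo_labels_for(model_preds, clusters):
    """Equation (3): map each mini-cluster to the model's majority output class."""
    return [Counter(model_preds[c].tolist()).most_common(1)[0][0]
            for c in clusters]

def kingdra(models, X, k, n_iter):
    """Skeleton of Algorithm 2; all training details are elided."""
    for mdl in models:                                   # lines 1-4
        mdl.fit_unsupervised(X)
    for _ in range(n_iter):                              # lines 5-12
        preds = np.stack([mdl.predict(X) for mdl in models])
        e_pos, e_neg = build_edges(preds, t_pos=len(models), t_neg=len(models))
        clusters = get_clusters(e_pos, e_neg, k)         # line 6
        for j, mdl in enumerate(models):                 # lines 7-11
            labels = pseudo_labels_for(preds[j], clusters)
            mdl.fit_semisupervised(X, clusters, labels)  # loss_Ladder-* + loss_sup
    return [mdl.predict(X) for mdl in models]            # averaging (line 13) elided
```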
5 EXPERIMENTS

In this section, we evaluate the performance of Kingdra on several popular datasets. For a fair comparison, we use the same data pre-processing and the same model layer sizes as prior work Hu et al. (2017).

5.1 DATASETS

We evaluate Kingdra on three image datasets and two text datasets: MNIST is a dataset of 70000 handwritten digits of 28-by-28 pixel size. Here, the raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions. CIFAR10 is a dataset of 32-by-32 color images with 10 classes having 6000 examples each. STL is a dataset of 96-by-96 color images with 10 classes having 1300 examples each. For CIFAR10 and STL, raw pixels are not suited to our goal, as the color information dominates; hence, as in Hu et al. (2017), we use features extracted from a ResNet-50 network pre-trained on the ImageNet dataset. Reuters is a dataset containing English news stories with imbalanced data and four categories. We used the same pre-processing as Hu et al. (2017); after removing the stop words, tf-idf features were used. 20News is a dataset containing newsgroup documents from 20 different newsgroups. Similar to Hu et al. (2017), we remove stop words, keep the 2000 most frequent words, and use tf-idf features. All our experiments were performed using the same pre-processed data.

5.2 EVALUATION METRIC

We use the standard unsupervised evaluation methodology and protocol to compare the different methods. Following Xie et al. (2016), we set the number of clusters to the number of ground truth classes and evaluate the unsupervised clustering accuracy as:

ACC = max_p ( Σ_{i=1}^{N} 1{l_i = p(c_i)} ) / N    (4)

where l_i and c_i are the ground truth label and the cluster label assigned by the model, respectively. We find the best one-to-one mapping between ground truth labels and model generated clusters, with p ranging over all one-to-one mappings.

5.3 COMPARED METHODS

We compare Kingdra against several clustering algorithms on our datasets. Specifically, we compare against traditional clustering algorithms such as K-means and agglomerative clustering (AC). We also compare against representation learning baselines, where we use models such as deep autoencoders (dAE) and deep variational autoencoders (dVAE), and then apply K-means on the learned representations. Finally, we also compare our model with deep learning based clustering methods such as Deep RIM, DEC, DeepCluster, and IMSAT. Deep RIM uses a multi-layer neural network with the RIM objective. DEC iteratively learns a lower dimensional feature representation and optimizes a clustering objective. We also compare with two versions of IMSAT, IMSAT(RPT) and IMSAT(VAT), where data augmentation is used to impose invariance on the model outputs. For our results, we report the performance of Ladder-IM and Ladder-Dot individually, and finally of Kingdra, which includes an ensemble of Ladder-* networks along with the semi-supervised iterations. For a fair comparison, we used the same network architecture for all the neural network based models.

5.4 EXPERIMENTAL RESULTS

Accuracy results for prior approaches and ours are shown in Table 2. As can be seen from the table, Ladder-IM by itself delivers good performance, and Kingdra-Ladder-IM achieves higher clustering accuracy than state-of-the-art deep unsupervised approaches such as DEC Xie et al. (2016) and IMSAT Hu et al. (2017) on all five data sets.
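For completeness, a sketch of the accuracy metric in Equation (4) from Section 5.2: the maximization over one-to-one mappings p can be solved exactly with the Hungarian algorithm (scipy's linear_sum_assignment). This is standard practice for this metric, not code from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC of Equation (4): best one-to-one map between cluster IDs and labels.
    Assumes both label sets live in {0, ..., k-1}."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1                        # co-occurrence matrix
    rows, cols = linear_sum_assignment(-counts)  # Hungarian, maximizing matches
    return counts[rows, cols].sum() / len(y_true)
```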
Further, the gap between Kingdra and prior approaches is significant on two data sets: Kingdra-Ladder-IM achieves an average accuracy of 54.6% for CIFAR10, compared to 45.6% for IMSAT and 46.9% for DEC, an 8% increase in absolute accuracy. Similarly, Kingdra-Ladder-IM achieves an average accuracy of 43.9% for 20news, compared to 31.1% for IMSAT and 30.8% for DEC, an increase of over 12% in absolute accuracy. Note that while deep networks are state of the art for most data sets, linear approaches outperform deep approaches on 20news, with linear RIM achieving 50.9% accuracy Hu et al. (2017). We also tried DeepCluster Caron et al. (2018) in our experimental setting, but observed the model to degenerate, assigning most of the samples to the same cluster. Additional analysis of DeepCluster is in the appendix.

An interesting aspect to note is that the use of an ensemble by itself only provides small gains of 1-2%, similar to what one expects from ensembles in supervised learning (e.g., compare Ladder-IM with Ladder-IM-ensemble). The large gains mainly come from Kingdra using the ensemble to generate pseudo-labels, which is then iterated. For example, Kingdra-Ladder-IM provides absolute gains of 4-6% on most data sets over the base model. Similarly, Kingdra-Ladder-Dot provides absolute gains of 9% on MNIST and 17% on STL over the base Ladder-Dot model. Thus, our approach of generating pseudo-labels from ensembles is a powerful approach that delivers large gains in unsupervised learning. Also note that Kingdra-Ladder-IM performs better than Kingdra-Ladder-Dot on most data sets, except for the Reuters data set, where the latter performs better (Reuters has a large class imbalance, with the largest class representing 43% of the data). Finally, note the standard deviations of the various approaches shown in the table. One can see that Kingdra in general results in a lower standard deviation than many of the prior approaches, even while delivering higher accuracy.

Figure 2 shows the accuracy of the pseudo-labels and of Kingdra-Ladder-IM, as well as the number of pseudo-labels identified by the graph clustering algorithm, versus the number of iterations for the STL, CIFAR10, and MNIST datasets. As the iterations progress, the accuracy of the pseudo-labels decreases as more pseudo-labels get added; however, this still helps improve the overall clustering accuracy. Note that, unlike pure semi-supervised approaches, which use a small set of (randomly sampled) data points that match the input data distribution, our pseudo-labels do not completely match the input data distribution (since our selection algorithm is biased towards easy data points). This causes an increased gap between the accuracy of the pseudo-labels and that of the overall clustering.

5.5 QUALITATIVE ANALYSIS

Figure 3 shows the similarity graph obtained after the first three iterations of Kingdra on the MNIST dataset. As the iterations progress, one can see that there are fewer inter-cluster linkages, indicating that the models are converging on the labels for these data points. Figure 4 shows randomly selected examples from the final clusters generated by Kingdra. One can see that the examples are highly accurate for MNIST, thus resulting in high overall accuracy. However, for CIFAR10, there are several incorrectly labelled examples, including two clusters which do not have a clear mapping to any ground truth class, thereby resulting in much lower overall accuracy.

6 CONCLUSION

In this paper, we introduced Kingdra, a novel pseudo-semi-supervised learning approach for clustering.
1. What is the focus of the paper regarding clustering methods?
2. What are the strengths and weaknesses of the proposed approach compared to traditional clustering methods?
3. How does the reviewer assess the clarity and organization of the paper's content?
4. Are there any concerns or suggestions regarding the algorithm and its components?
5. What are the potential impacts or applications of the proposed method in clustering?
Review
Review This paper proposes an unsupervised clustering method that uses semi-supervised clustering as a bridge. The method first trains an ensemble of clustering models and uses edge-level majority votes to determine a graph, and then applies rules to get partial clustering signals to feed the final semi-supervised clustering. The scheme is iterative, to further enhance quality. I find this paper interesting and somewhat novel, with the following comments.
1. In Algorithm 1, is it possible that too many nodes are removed, so one cannot get k clusters in the end? Though finding cliques is time consuming, have the authors conducted experiments to see the difference between a real clique finding algorithm and the greedy one proposed?
2. Does the ensemble clustering step have stability issues with respect to the method used? If a different clustering method is used, will the graph constructed later change drastically?
3. The writing. In the first line of section 3, "figure 4" seems to point to figure 1. Section 2 seems to have a format issue at the beginning. Section 5 could be merged with section 2.
ICLR
Title Unsupervised Clustering using Pseudo-semi-supervised Learning Abstract In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudolabels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve 54.6% accuracy for CIFAR10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms. Project details and code are available at https://divamgupta.com/ pseudo-semi-supervised-clustering 1 INTRODUCTION Semi-supervised methods, which make use of large unlabelled data sets and a small labelled data set, have seen recent success, e.g., ladder networks Rasmus et al. (2015) achieves 99% accuracy in MNIST using only 100 labelled samples. These approaches leverage the unlabelled data to help the network learn an underlying representation, while the labelled data guides the network towards separating the classes. In this paper, we ask two questions: is it possible to create the small labelled data set required by semi-supervised methods purely using unsupervised techniques? If so, can semi-supervised methods leverage this autonomously generated pseudo-labelled data set to deliver higher performance than state-of-the-art unsupervised approaches? We answer both these questions in the affirmative. We first find that prior approaches for identifying pseudo-labels Caron et al. (2018); Chen (2018); Lee (2013) perform poorly because of their low accuracy (Section 2). To create a high accuracy pseudo-labelled data set autonomously, we use a combination of ensemble of deep networks with a custom graph clustering algorithm (Section 4). We first train an ensemble of deep networks in an unsupervised manner. Each network independently clusters the input. We then compare two input data points. If all of the networks agree that these two data points belong to the same cluster, we can be reasonably sure that these data points belong to the same class. In this way, we identify all input data pairs belonging to the same class with high precision in a completely unsupervised manner. In the next step, we use these high quality input pairs to generate a similarity graph, with the data points as nodes and edges between data points which are deemed to be similar by our ensemble. From this graph, we extract tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high ∗Work done as a Research Fellow at Microsoft Research India precision. Extracting high quality clusters from this graph while ensuring that the extracted clusters correspond to different classes is challenging. We discuss our approach in Section 4.2.1 for solving this problem. In this way, our method extracts unambiguous samples belonging to each class, which serves as pseudo-labels for semi-supervised learning. 
For semi-supervised learning using the labels generated above, one could use ladder networks Rasmus et al. (2015). However, we found that ladder networks is unsuitable for the initial unsupervised clustering step as it can degenerate to outputting constant values for all inputs in the absence of unsupervised loss. To enable unsupervised clustering, we augment ladder networks using information maximization Krause et al. (2010) to create the Ladder-IM, and with a dot product loss to create Ladder-Dot. We show in Section 5 that Ladder-IM and Ladder-Dot, by themselves, also provide improvements over previous state of the art. We use the same models for both the first unsupervised learning step as well as the subsequent pseudo-semi-supervised iterations. Finally, the approach of finding high quality clusters using an ensemble, and using them as labels to train a new ensemble of semi-supervised models, is iterated, yielding continued improvements. The large gains of our method mainly come from this iterative approach, which can in some cases, yield upto 17% gains in accuracy over the base unsupervised models (see section 5.4). We name our pseudo-semi-supervised learning approach Kingdra1. Kingdra is independent of the type of data set; we show examples of its use on both image and text data sets in Section 5. This is in contrast to some previous approaches using CNNs, e.g. Chang et al. (2017), Caron et al. (2018), which are specialized for image data sets. We perform unsupervised classification using Kingdra on several standard image (MNIST, CIFAR10, STL) and text (reuters, 20news) datasets. On all these datasets, Kingdra is able to achieve higher clustering accuracy compared to current state-of-the-art deep unsupervised clustering techniques. For example, on the CIFAR10 and 20news datasets, Kingdra is able to achieve classification accuracy of 54.6% and 43.9%, respectively, delivering 8-12% absolute gains over state of the art results Hu et al. (2017); Xie et al. (2016). 2 PRIOR WORK ON GENERATING PSEUDO-LABELS Several techniques have been proposed in the literature for generating pseudo-labels (Caron et al. (2018); Chen (2018); Lee (2013). In Lee (2013), the output class with the highest softmax value (Argmax) is taken to be the pseudo-label. In Caron et al. (2018), the authors perform K-means clustering on the feature vector and use the K-means clusters as pseudo-labels. Finally, authors in Chen (2018) treat the softmax output as confidence and only label those items whose confidence value is above a high threshold. Note that none of these techniques for identifying pseudo-labels have been applied in our context, i.e., for unsupervised clustering using semi-supervised models. In this section, we evaluate if pseudo-labels created by these prior techniques can be leveraged by semi-supervised models to improve clustering accuracy. We start with a semi-supervised model based on Ladder networks (Rasmus et al. (2015)) called Ladder-IM (see Section 4.1 for model details) and train using only its unsupervised loss terms on MNIST and CIFAR10 datasets. We use each of the above three pseudo-labelling approaches on the trained model to provide an initial set of pseudo-labels to the datasets (e.g., using K-means clustering on the feature vector of the model as in Caron et al. (2018), etc.). We call the accuracy of these pseudo-labels the initial pseudo-label accuracy. 
We then use these generated pseudo-labels along with the datasets to train the model again, now with a supervised loss term (based on the pseudo-labels) and the earlier unsupervised loss terms. We again run the pseudo-labelling approaches on the newly trained model to derive an updated set of pseudo-labels. We iterate this process of training and pseudo-labelling until the pseudo-label accuracy stabilizes. We call this the final clustering accuracy. The initial pseudo-label accuracy and the final clustering accuracy results for the three approaches are shown in Table 1.

First, consider MNIST. The unsupervised clustering accuracy of Ladder-IM is 95.4%. Argmax simply assigns pseudo-labels based on the model's output, and since this doesn't add any new information for subsequent iterations, the final accuracy remains at 95.4%. On the other hand, the pseudo-labels identified by both the K-means and threshold approaches result in worse initial label accuracy (75.4% and 88.6%). When these low-accuracy pseudo-labels are used as supervision to train the model further, they result in low final clustering accuracies of 60.9% and 91.6%, respectively. CIFAR10 results are similar. Ladder-IM clustering accuracy is 49%, which remains the same under Argmax as before. Pseudo-label accuracy using the K-means approach is worse and results in pulling down the final accuracy to 44.8%. Interestingly, threshold results in a higher initial accuracy of 60.5%, but even this is not high enough to improve the final clustering accuracy for CIFAR10. From these results, we arrive at the following two conclusions. First, if the initial pseudo-label accuracy is not high, using pseudo-labels as supervision can bring down the final clustering accuracy. Thus, high accuracy of initial pseudo-labels is crucial for improving clustering accuracy. Second, current approaches for identifying pseudo-labels do not deliver high accuracy and hence are unable to help improve overall clustering accuracy.

3 RELATED WORK

Unsupervised clustering: Various unsupervised clustering methods have been proposed over the years. Ng et al. (2002) uses a spectral clustering based approach, while Elhamifar & Vidal (2009) uses a sparse subspace approach for unsupervised learning. Recently, several deep neural network based methods have been proposed, which scale well to large datasets. The ability of deep neural networks to learn higher level representations makes them a good choice for unsupervised learning. Coates & Ng (2012) and Caron et al. (2018) use convnets and k-means for clustering. Caron et al. (2018), for example, iterates over clustering the features obtained from a convnet, and training the classifier using these clusters as pseudo-labels. The authors do not report clustering performance, and we observed that this method can easily degenerate. Chang et al. (2017) uses pair-wise dot-product based similarity to identify close input pairs, which provide a supervisory signal. These convnet-based approaches, however, work only on image datasets. Xie et al. (2016) simultaneously learns feature representations and cluster assignments using deep neural networks, and works on both image and text datasets. Hu et al. (2017) uses regularization combined with a mutual information loss for unsupervised learning and achieves state of the art results. The authors conduct experiments in two settings: Random Perturbation Training and Virtual Adversarial Training. Other works, such as Hjelm et al.
(2018), maximize the mutual information between the spatial features and the non-spatial features. Ji et al. (2019) maximizes the mutual information between the predicted label of an image and the predicted label of the augmented image. This method uses convolutional networks and requires domain knowledge of the dataset.

Self-supervised learning: Another form of unsupervised learning uses auxiliary learning tasks, for which labels can be self generated, to generate useful representations from data. Many methods use spatial information of image patches to generate self-supervised data. For example, Pathak et al. (2016) predicts pixels in an image patch using surrounding patches, while Doersch et al. (2015) predicts the relative position of image patches. Sermanet et al. (2018) uses time as a self-supervisory signal between videos taken from different viewpoints. The temporal signal is also used to learn representations from single videos by predicting future frames, e.g. Denton et al. (2017). Our method uses the correlation between outputs of input points across an ensemble as a supervisory signal to generate self-supervised pseudo-labels.

Semi-supervised learning: Semi-supervised approaches use sparse labelling of data points. Szummer & Jaakkola (2002) propagates labels based on nearest neighbors. Weston et al. (2012) uses a deep version of label propagation. Lee (2013) adjusts label probabilities, starting with trusting only true labels and gradually increasing the weight of pseudo-labels. Rasmus et al. (2015) employs a denoising autoencoder architecture and has shown impressive performance. Tarvainen & Valpola (2017) uses an averaged model over previous iterations as a teacher. Other than these, some semi-supervised learning methods like Xie et al. (2019) and Berthelot et al. (2019) use data augmentation and assume some domain knowledge of the dataset, with some of the data augmentation specific to image datasets. Miyato et al. (2018) and Shinoda et al. (2017) use virtual adversarial training combined with the classification loss to perform semi-supervised classification. However, we found that these methods do not work well if we jointly train them with unsupervised losses. Ladder networks do not require any domain-dependent augmentation, work for both image and text datasets, and can be easily trained jointly with supervised and unsupervised losses. Thus, we chose to work with ladder networks in our experiments, though our approach is general enough to work with any semi-supervised method that accommodates training with unsupervised loss terms.

Unsupervised ensemble learning: Unsupervised ensemble learning has been mostly limited to generating a set of clusterings and combining them into a final clustering. Huang et al. (2016) cast ensemble clustering into a binary linear programming problem. Wang et al. (2009); Fred & Jain (2005) use a pairwise co-occurrence approach to construct a co-association matrix and use it to measure similarity between data points. See Vega-Pons & Ruiz-Shulcloper (2011) for a survey of ensemble clustering algorithms. Note that, to the best of our knowledge, none of the ensemble clustering algorithms employ a semi-supervised step like ours, or make use of deep networks.

4 PROPOSED FRAMEWORK

An overview of the Kingdra method is summarized in Figure 1. Given an unlabelled dataset X = {x1, . . . , xn}, we start with unsupervised training of an ensemble of models M = {M1, . . . , Mm}. For the individual models, any unsupervised model can be used.
However, we propose a novel Ladder-* model, in which we build on ladder networks Rasmus et al. (2015) and modify them to support clustering. Next, we use the agreement between the ensemble models on a pair of data points as a measure of similarity between the data points. This pairwise data is used to construct a similarity graph, from which we extract k tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision. This is important for improving the accuracy of our semi-supervised training, as discussed in Section 2. These pseudo-labels then serve as training data for semi-supervised training of a new ensemble of Ladder-* models. Finally, we perform multiple iterations of the above steps for continued improvement.

4.1 BASE MODEL

The first step of our method is unsupervised training of an ensemble of models. Our framework allows using any unsupervised method for this step, and we have experimented with existing approaches, such as IMSAT Hu et al. (2017). The accuracy of this base model directly impacts the accuracy of our final model, and thus using an accurate base model clearly helps. In that light, we have also developed a novel unsupervised model Ladder-*, which outperforms other unsupervised models on most data sets. Ladder networks Rasmus et al. (2015) have shown great success in the semi-supervised setting. However, to the best of our knowledge, the ladder architecture has not been used for unsupervised clustering. One reason perhaps is that ladder networks can degenerate to outputting constant values for all inputs in the absence of a supervised loss term. To circumvent this degeneracy, we add an unsupervised loss to the regular ladder loss terms that directs the network to give similar outputs for similar inputs while maximizing the overall diversity in outputs, so that dissimilar inputs are directed towards dissimilar outputs. We achieve this objective by incorporating one of two losses – the IM loss Krause et al. (2010); Hu et al. (2017) or the dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively.

IM loss: The IM loss, or information maximization loss, is simply the mutual information between the input X and output Y of the classifier: I(X; Y) = H(Y) − H(Y|X) (1) where H(·) and H(·|·) are the entropy and conditional entropy, respectively. Maximizing the marginal entropy term H(Y) encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input.

Dot product loss: The dot product loss is defined to be D(Xi, Xj) = Yi^T Yj, if i ≠ j (2) which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to the IM loss, encouraging the network to assign disparate classes to the inputs. Among Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better than Ladder-Dot in most cases. However, we did find that Ladder-Dot along with Kingdra iterations outperforms when the data set has a large imbalance in the number of samples per class.
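For concreteness, here is a minimal PyTorch sketch of the IM loss in Eq. (1) and the dot product loss in Eq. (2), assuming `probs` holds the softmax outputs for a mini-batch; the function names and the batch-mean entropy estimates are our own illustrative choices, not the authors' implementation.

```python
import torch

def im_loss(probs, eps=1e-8):
    # probs: (batch, k) softmax outputs of the classifier.
    # Mutual information I(X;Y) = H(Y) - H(Y|X), Eq. (1); return its negative
    # so that minimizing this term maximizes the mutual information.
    p_mean = probs.mean(dim=0)                                       # marginal p(y)
    h_y = -(p_mean * (p_mean + eps).log()).sum()                     # H(Y)
    h_y_given_x = -(probs * (probs + eps).log()).sum(dim=1).mean()   # H(Y|X)
    return -(h_y - h_y_given_x)

def dot_loss(probs):
    # Eq. (2): pairwise dot products Y_i^T Y_j for i != j, pushing the
    # outputs of different inputs towards orthogonality.
    gram = probs @ probs.t()
    off_diag = gram - torch.diag(torch.diag(gram))
    n = probs.shape[0]
    return off_diag.sum() / (n * (n - 1))
```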
The reason Ladder-Dot helps under class imbalance is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with fewer samples. A more detailed presentation of Ladder-IM and Ladder-Dot can be found in the appendix.

4.2 UNSUPERVISED ENSEMBLING

Kingdra exploits an ensemble of Ladder-* models to further improve the performance of unsupervised learning. Note that, in supervised learning, ensembling is trivial, as we can simply average the outputs of the individual models or vote on them. On the other hand, in unsupervised learning, voting is not trivial: in the absence of training labels there is no stable class assignment for outputs across different models, and thus we do not have any mapping of the class IDs of one model to another. To solve this, we propose a simple approach where we look at pairs of data points, rather than at individual samples. Two data points are in the same cluster with high confidence if a majority (or all) of the models in the ensemble put them in the same cluster. For example, given an input pair x, x′, if Mi(x) = Mi(x′) for enough models, we can say with high confidence that they belong to the same class. Using this pairwise approach, we propose a graph based method to find small, but high precision, clusters.

4.2.1 GRAPH BASED MINI-CLUSTERING

We construct a graph G = {X, Epos, Eneg} with n nodes, where each input data point x is represented as a node. Here Epos and Eneg are two types of edges in the graph:
• Strong positive edges: A strong positive edge is added between two data points when a large number of models agree on their predicted class: (x, x′) ∈ Epos ⇐⇒ n_agree(x, x′) ≥ tpos, where tpos is a chosen threshold and n_agree(x, x′) = |{m : m ∈ M, m(x) = m(x′)}|.
• Strong negative edges: A strong negative edge is added between two data points when a large number of models disagree on their predicted class: (x, x′) ∈ Eneg ⇐⇒ n_disagree(x, x′) ≥ tneg, where tneg is a chosen threshold and n_disagree(x, x′) = |{m : m ∈ M, m(x) ≠ m(x′)}|.
A strong positive edge between two data points implies that most models believe they are in the same class, while a strong negative edge between two data points implies that most models believe they belong to different classes.

Algorithm 1 Get high precision clusters using ensembles
1: procedure GETCLUSTERS(X, k)
2: G = {X, Epos, Eneg}
3: for k′ ∈ {1, 2, . . . , k} do
4: xmax = argmax_{x∈X} |{x′ : (x, x′) ∈ Epos}|
5: Sk′ = {x : (x, xmax) ∈ Epos} ∪ {xmax}
6: for x′ ∈ X do
7: Remove x′ from G if (x′, xmax) /∈ Eneg
8: end for
9: end for
10: Return S = {S1, S2, . . . , Sk}
11: end procedure

After building the graph, each clique of strong positive edges would be a cluster, where, within a clique, data points belong to the same class with high confidence. Since we add only high confidence edges to the graph, the number of cliques can be much larger than k. Hence we need to select k cliques, where we would like to maximize the size of each clique but also require that the cliques are diverse (in order not to select two cliques with data points belonging to the same class). Hence, within a clique, nodes should be connected by strong positive edges, while across cliques, nodes should be connected by strong negative edges. As finding such cliques is NP-hard, we use a simple and efficient greedy approximation, shown in Algorithm 1 and sketched in Python below.
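The following Python rendering of Algorithm 1 is a sketch, assuming the graph is given as dictionaries mapping each node to its sets of strong positive and strong negative neighbours; the data-structure choice is ours.

```python
def get_clusters(nodes, pos_edges, neg_edges, k):
    """Greedy mini-clustering of Algorithm 1. pos_edges[x] / neg_edges[x] are
    the sets of nodes joined to x by strong positive / negative edges."""
    active = set(nodes)
    clusters = []
    for _ in range(k):
        # Line 4: node with the most strong positive edges among active nodes.
        x_max = max(active, key=lambda x: len(pos_edges[x] & active))
        # Line 5: the node and its strong-positive neighbours form a cluster.
        clusters.append((pos_edges[x_max] & active) | {x_max})
        # Lines 6-7: keep only nodes with a strong negative edge to x_max,
        # i.e. nodes that most models place in a different class.
        active = {x for x in active if x in neg_edges[x_max]}
    return clusters
```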
Rather than finding cliques, we greedily find the node with the highest number of strong positive edges (line 4). The intuition is that most of the neighbours of that node will also be connected with each other. In the case of CIFAR10, we find that with a threshold of 90%, 81% of the nodes are fully connected with each other. If the threshold is 100%, all nodes in a cluster are connected with each other by transitivity. We take the node with the highest number of strong positive edges, along with the other nodes connected to it by strong positive edges, and add them to a cluster (line 5). We then remove all the nodes that do not have a strong negative edge to the chosen node (lines 6–7). The intuition here is that these nodes are not diverse enough from the chosen cluster (since some models think that they belong to the same class as the currently chosen node), and thus should not be part of the next set of chosen clusters. By repeating the process k times, we get k diverse clusters, approximately satisfying our requirement.

4.3 ITERATIVE ENSEMBLE TRAINING

Once the high precision clusters are identified, we treat these clustered points (the points in set S) as pseudo-labelled data, and solve our unsupervised clustering problem using a semi-supervised method. Although any semi-supervised method can be used, as described in Section 4.1 we use the proposed Ladder-* method, which we found superior to ladder networks in our experiments. Instead of training a single semi-supervised model, we train an ensemble of models, and again use them to find high quality clusters. This approach can be iterated, yielding continued improvements. We name this approach Kingdra. Algorithm 2 describes the complete Kingdra algorithm.

Algorithm 2 Kingdra: Iterative Ensemble Training
Input: Dataset X, Models M, num_clusters k
Output: Cluster labels
1: for j ∈ {1, 2, . . . , m} do
2: Initialize weights of Mj
3: Update Mj by minimizing lossLadder-*
4: end for
5: for it ∈ {1, 2, . . . , n_iter} do
6: S = GetClusters(X, k)
7: for j ∈ {1, 2, . . . , m} do
8: Get pseudo-labels for Mj
9: Update Mj by minimizing:
10: lossLadder-* + losssup (use pseudo-labels for losssup)
11: end for
12: end for
13: Use averaging on the ensemble models M to return final clusters

First, the individual models are trained using only the unsupervised Ladder-* loss (lines 1–4). Then, for each of the iterations, we obtain high precision clusters (line 6), derive pseudo-labels from them (line 8), and then train the models with both the unsupervised and supervised losses (lines 9–10). We compute the pseudo-labels from the mini-clusters as follows. For a model Mj ∈ M and clusters S, we need to find an appropriate mapping of the clusters to the output classes of the model. In particular, for a cluster S′ ∈ S, we assign all data points in S′ the following label: y_{S′}^j = mode({Mj(x′) : x′ ∈ S′}). (3) That is, we map a cluster to the output class to which most data points in the cluster are mapped. These pseudo-labels are then used for computing the supervised loss of Ladder-*. This iterative approach leads to a continuous improvement of clustering quality. We observe that the size of the clusters returned by Algorithm 1 increases after each iteration until they cover almost the entire input set. The clustering performance of the model also generally improves with each iteration until it saturates, as we show in Section 5. We also note that cluster assignments become more stable with subsequent iterations, which also decreases the variance across multiple runs of Kingdra.
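A minimal sketch of the pseudo-label assignment in Eq. (3) follows; `model_predict` stands for the cluster assignment Mj(·) of one model and is an assumed interface.

```python
from collections import Counter

def pseudo_labels_for_model(model_predict, clusters):
    """Eq. (3): map each mini-cluster to the output class that the model
    assigns to most of its members, and pseudo-label all members with it."""
    labels = {}
    for cluster in clusters:
        predictions = [model_predict(x) for x in cluster]
        y = Counter(predictions).most_common(1)[0][0]  # mode of the predictions
        for x in cluster:
            labels[x] = y  # used as the target in the supervised loss
    return labels
```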
5 EXPERIMENTS

In this section we evaluate the performance of Kingdra on several popular datasets. For a fair comparison, we use the same data pre-processing and the same model layer sizes as prior work Hu et al. (2017).

5.1 DATASETS

We evaluate Kingdra on three image datasets and two text datasets: MNIST is a dataset of 70000 handwritten digits of 28-by-28 pixel size. Here, the raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions. CIFAR10 is a dataset of 32-by-32 color images with 10 classes having 6000 examples each. STL is a dataset of 96-by-96 color images with 10 classes having 1300 examples each. For CIFAR10 and STL, raw pixels are not suited for our goal, as the color information dominates; hence, as mentioned in Hu et al. (2017), we use features extracted from a Resnet-50 network pre-trained on the ImageNet dataset. Reuters is a dataset containing English news stories with imbalanced data and four categories. We used the same pre-processing as used by Hu et al. (2017); after removing the stop-words, tf-idf features were used. 20News is a dataset containing newsgroup documents from 20 different newsgroups. Similar to Hu et al. (2017), we remove stop words, keep the 2000 most frequent words, and use tf-idf features. All our experiments were performed using the same pre-processed data.

5.2 EVALUATION METRIC

We use the standard unsupervised evaluation methodology and protocol to compare different methods. Following Xie et al. (2016), we set the number of clusters to the number of ground truth classes and evaluate unsupervised clustering accuracy as: ACC = max_p (1/N) Σ_{i=1}^{N} 1{l_i = p(c_i)}, (4) where l_i and c_i are the ground truth class label and the cluster label assigned by the model, respectively. We find the best one-to-one mapping of ground truth labels and model generated clusters, with p ranging over all one-to-one mappings.

5.3 COMPARED METHODS

We compare Kingdra against several clustering algorithms on our datasets. Specifically, we compare against traditional clustering algorithms such as K-means and Agglomerative Clustering (AC). We also compare against representation learning baselines, where we use models such as Deep Autoencoders (dAE) and Deep Variational Autoencoders (dVAE), and then run K-means on the learned representations. Finally, we also compare our model with deep learning based clustering methods such as Deep RIM, DEC, DeepCluster, and IMSAT. Deep RIM uses a multi-layer neural network with the RIM objective. DEC iteratively learns a lower dimensional feature representation and optimizes a clustering objective. We also compare with two versions of IMSAT – IMSAT(RPT) and IMSAT(VAT) – where data augmentation is used to impose invariance on the model outputs. For our results, we report the performance of Ladder-IM and Ladder-Dot individually, and finally Kingdra, which includes an ensemble of Ladder-* networks along with the semi-supervised iterations. For a fair comparison, we used the same network architecture for all the neural network based models.
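Before turning to the results, here is a minimal sketch of the clustering accuracy metric in Eq. (4); the maximization over one-to-one mappings p is solved with the Hungarian algorithm, a standard choice for this metric, although the authors do not state their solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, k):
    """Eq. (4): accuracy under the best one-to-one mapping between the k
    predicted clusters and the k ground-truth classes."""
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1                        # co-occurrence matrix
    rows, cols = linear_sum_assignment(-counts)  # maximize matched counts
    return counts[rows, cols].sum() / len(y_true)
```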
5.4 EXPERIMENTAL RESULTS

Accuracy results of prior approaches and ours are shown in Table 2. As can be seen from the table, Ladder-IM by itself delivers good performance, and Kingdra-Ladder-IM achieves higher clustering accuracy than state-of-the-art deep unsupervised approaches such as DEC Xie et al. (2016) and IMSAT Hu et al. (2017) on all five data sets. Further, the gap between Kingdra and prior approaches is significant on two data sets: Kingdra-Ladder-IM achieves an average accuracy of 54.6% for CIFAR10, compared to 45.6% for IMSAT and 46.9% for DEC – an 8% increase in absolute accuracy. Similarly, Kingdra-Ladder-IM achieves an average accuracy of 43.9% for 20news, compared to 31.1% for IMSAT and 30.8% for DEC – an increase of over 12% in absolute accuracy. Note that while deep networks are state-of-the-art for most data sets, linear approaches outperform deep approaches on 20news, with linear RIM achieving 50.9% accuracy Hu et al. (2017). We also tried DeepCluster Caron et al. (2018) in our experimental setting, but observed the model to degenerate, assigning most of the samples to the same cluster. Additional analysis of DeepCluster is in the Appendix.

An interesting aspect to note is that the use of an ensemble by itself only provides small gains of 1-2%, similar to what one expects from ensembles in supervised learning (e.g., compare Ladder-IM with Ladder-IM-ensemble). The large gains mainly come from Kingdra using the ensemble to generate pseudo-labels, which is then iterated. For example, Kingdra-Ladder-IM provides absolute gains of 4-6% on most data sets over the base model. Similarly, Kingdra-Ladder-Dot provides absolute gains of 9% on MNIST and 17% on STL over the base Ladder-Dot model. Thus, our approach of generating pseudo-labels from ensembles is a powerful approach that delivers large gains in unsupervised learning. Also note that Kingdra-Ladder-IM performs better than Kingdra-Ladder-Dot on most data sets, except for the Reuters data set, where the latter performs better (Reuters has a large class imbalance, with the largest class representing 43% of the data). Finally, note the standard deviation of the various approaches shown in the table. One can see that Kingdra in general results in a lower standard deviation than many of the prior approaches, even while delivering higher accuracy.

Figure 2 shows the accuracy of pseudo-labels and Kingdra-Ladder-IM, as well as the number of pseudo-labels identified by the graph clustering algorithm, versus the number of iterations for the STL, CIFAR10, and MNIST datasets. As iterations progress, the accuracy of pseudo-labels decreases as more pseudo-labels get added; however, this still helps improve the overall clustering accuracy. Note that, unlike pure semi-supervised approaches, which use a small set of (randomly sampled) data points that match the input data distribution, our pseudo-labels do not completely match the input data distribution (since our selection algorithm is biased towards easy data points). This causes an increased gap between the accuracy of the pseudo-labels and that of the overall clustering.

5.5 QUALITATIVE ANALYSIS

Figure 3 shows the similarity graph obtained after the first three iterations of Kingdra on the MNIST dataset. As the iterations progress, one can see that there are fewer inter-cluster linkages, indicating that the models are converging on the labels for these data points. Figure 4 shows randomly selected examples from the final clusters generated by Kingdra. One can see that the examples are highly accurate for MNIST, thus resulting in high overall accuracy. However, for CIFAR10, there are several incorrectly labelled examples, including two clusters which do not have a clear mapping to any ground truth class, thereby resulting in much lower overall accuracy.

6 CONCLUSION

In this paper, we introduced Kingdra, a novel pseudo-semi-supervised learning approach for clustering.
Kingdra outperforms current state-of-the-art unsupervised deep learning based approaches, with 8-12% gains in absolute accuracy for the CIFAR10 and 20news datasets. As part of Kingdra, we proposed clustering ladder networks, Ladder-IM and Ladder-Dot, that work well in both unsupervised and semi-supervised settings.

7 DISCUSSION

While Kingdra performs well on the datasets we studied, the similarity-based graph clustering algorithm used has difficulty as the number of classes increases. For example, for the datasets we evaluated, the thresholds tpos and tneg can simply be set to the number of models in the ensemble. However, as the number of classes increases, these thresholds may need some tuning. For CIFAR100, with 100 classes, our graph clustering algorithm is not able to identify 100 diverse classes effectively. We are looking at improving the clustering algorithm as part of future work. We are also evaluating adding diversity to the models in the ensemble, either by changing the model structure or size, and/or by changing the standard deviation of the random noise used in ladder networks.

A APPENDIX

B Ladder-*: LADDER NETWORKS FOR CLUSTERING

We now describe the Ladder-* architecture for the individual models in the ensemble. We use the same model architecture for both unsupervised learning in the initial step and the subsequent semi-supervised learning iterations, the only difference being that the semi-supervised models carry an extra supervision loss term. Our architecture augments ladder networks Rasmus et al. (2015) with one of two losses – an information maximization loss similar to the RIM method described in Krause et al. (2010); Hu et al. (2017), or a dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively. We first briefly describe the RIM method and ladder networks, followed by our Ladder-IM and Ladder-Dot methods.

REGULARIZED INFORMATION MAXIMIZATION (RIM)

The Regularized Information Maximization (RIM) approach for unsupervised learning was introduced in Krause et al. (2010) and extended by Hu et al. (2017) to the multi-dimensional setting. The RIM method minimizes the following objective for a classifier: R(θ) − λ I(X; Y) (5) where R(θ) is a regularization term and I(X; Y) is the mutual information between the input X and output Y of the classifier. The mutual information can be written as the difference between the marginal entropy and the conditional entropy Hu et al. (2017): I(X; Y) = H(Y) − H(Y|X) (6) where H(·) and H(·|·) are the entropy and conditional entropy, respectively. Maximizing the marginal entropy term H(Y) encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input. In the unsupervised setting, where other priors are not known, this loss makes intuitive sense. For the regularization loss term R(θ) above, many options have been proposed. Hu et al. (2017), for example, propose a Self-Augmented Training (SAT) loss, which imposes invariance on the outputs of original and slightly perturbed input data. The authors experimented with random perturbation (IMSAT-RPT) and adversarial perturbation (IMSAT-VAT), where the perturbation is chosen to maximize the divergence between the two outputs under the current model.

LADDER NETWORKS

Ladder networks Rasmus et al. (2015) have shown impressive performance for semi-supervised classification.
They employ a deep denoising autoencoder architecture, in which additive noise is added to each hidden layer in the encoder, and the decoder learns a denoising function for each layer. The objective function is a weighted sum of a supervised cross entropy loss on the output of the noisy encoder and a squared-error unsupervised denoising loss over all layers. Unlike standard autoencoders, ladder networks also add lateral skip connections from each layer of the noisy encoder to the corresponding decoder layer. The additive noise acts as a regularizer for the supervised loss, while the lateral connections in the denoising decoder layers enable the higher layers to focus on more abstract and task-specific features. See Pezeshki et al. (2016) for a detailed analysis. Borrowing the formalism in Pezeshki et al. (2016), a ladder network with L encoder/decoder layers can be defined as:

x̃_i, z̃_i^(1), . . . , z̃_i^(L), ỹ_i = Encoder_noisy(x_i, θ_j),
x_i, z_i^(1), . . . , z_i^(L), y_i = Encoder_clean(x_i, θ_j),
x̂_i, ẑ_i^(1), . . . , ẑ_i^(L) = Decoder(z̃_i^(1), . . . , z̃_i^(L), φ_j),

where θ_j and φ_j are the parameters of the encoder and decoder, respectively. The variables z_i^(k), z̃_i^(k), and ẑ_i^(k) are the hidden layer outputs of the clean, noisy, and denoised versions at layer k, respectively, and x_i, y_i, ỹ_i are the input, the clean output, and the noisy output, respectively. The objective function consists of the reconstruction loss between the clean and decoded intermediate features: loss_denoise = Σ_{i=1}^{n} Σ_{k=1}^{L} λ_k^denoise ||z_i^(k) − ẑ_i^(k)||^2 (7) and a supervised cross entropy loss on the output of the noisy encoder (which is used only in the semi-supervised setting): loss_sup = −Σ_{i=1}^{n} log P(ỹ^(i) = y*^(i) | x^(i)) (8)

Ladder-IM & Ladder-Dot

We now describe our novel Ladder-IM and Ladder-Dot models. The unsupervised denoising loss in Equation 7, along with the lateral connection architecture, enables ladder networks to learn useful features from unsupervised data. However, in the absence of any supervised loss (Equation 8), ladder networks can degenerate to the trivial solution of a constant output for each encoder layer, as the decoder can then simply memorize these constants to make the denoising loss zero. Having batch normalization layers helps to alleviate this problem, but the loss function still allows the trivial solution. On the other hand, the mutual information loss (Equation 6) in RIM methods, in particular the marginal entropy term H(Y), encourages the network to assign disparate classes to the inputs.

Ladder-IM: Combining ladder networks with information maximization can fix the above degeneracy problem, while simultaneously encouraging the ladder output towards a uniform distribution. We use both the clean and noisy outputs of the ladder network for computing the mutual information loss, i.e. loss_MI = I(X; Ỹ) + I(X; Y) (9) where Y = {y_1, . . . , y_N} is the set of clean outputs and Ỹ = {ỹ_1, . . . , ỹ_N} is the set of noisy outputs from the ladder network. Another way of thinking about the Ladder-IM approach is completely within the RIM framework. The unsupervised ladder loss loss_denoise can simply be thought of as the regularization term R(θ) in Equation 5. To that effect, we also add another regularization loss term, which is the KL divergence between the clean and noisy outputs of the ladder network encoder, i.e. loss_ladder_R = KL(p(ỹ|x), p(y|x)) (10) This regularization can be thought of as a generalization of the random perturbation loss proposed in Hu et al. (2017), where the authors impose invariance on the outputs of original and randomly perturbed inputs.
Our regularization based on adding noise to the hidden layers is similar to dropout Srivastava et al. (2014), and can be thought of as adding higher level feature noise, rather than just input noise. Thus, in the unsupervised case, this leads to the following minimization objective: loss_Ladder-IM = loss_denoise + α · loss_ladder_R + β · loss_MI (11) In this paper, we set α and β to one. Finally, in the semi-supervised case, we also add the supervised cross entropy term (Equation 8), as done in the original ladder networks.

Ladder-Dot: We also try a dot product loss to fix the above degeneracy problem. The dot product loss is defined to be D(X_i, X_j) = Y_i^T Y_j, if i ≠ j (12) which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to the IM loss, encouraging the network to assign disparate classes to the inputs. Among Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better than Ladder-Dot in most cases. However, we did find that Ladder-Dot along with Kingdra iterations outperforms when the data set has a large imbalance in the number of samples per class. The reason for this is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with fewer samples. Overall, we found in our experiments that Ladder-IM showed superior performance to IMSAT-RPT and IMSAT-VAT Hu et al. (2017) on most data sets. Moreover, in pure semi-supervised settings also, Ladder-IM outperformed vanilla ladder networks in our preliminary analysis.
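As a summary of this appendix, here is a minimal PyTorch sketch assembling the Ladder-IM objective of Eq. (11) from the pieces in Eqs. (7), (9), and (10); the unit per-layer weights λ, the direction of the KL term, and all names are our own assumptions.

```python
import torch
import torch.nn.functional as F

def neg_mutual_information(probs, eps=1e-8):
    # -I(X;Y) estimated on a batch of softmax outputs, as in Eqs. (6)/(9).
    p_mean = probs.mean(dim=0)
    h_y = -(p_mean * (p_mean + eps).log()).sum()
    h_y_given_x = -(probs * (probs + eps).log()).sum(dim=1).mean()
    return -(h_y - h_y_given_x)

def ladder_im_loss(z_clean, z_hat, probs_clean, probs_noisy, alpha=1.0, beta=1.0):
    # Eq. (7): layer-wise squared error between clean and denoised activations
    # (unit layer weights lambda_k assumed for simplicity).
    loss_denoise = sum(((z - zh) ** 2).sum(dim=1).mean()
                       for z, zh in zip(z_clean, z_hat))
    # Eq. (10): KL divergence between noisy and clean output distributions.
    loss_ladder_r = F.kl_div(probs_clean.log(), probs_noisy, reduction="batchmean")
    # Eq. (9): mutual information terms for both clean and noisy outputs.
    loss_mi = (neg_mutual_information(probs_clean)
               + neg_mutual_information(probs_noisy))
    # Eq. (11) with alpha = beta = 1 as in the paper.
    return loss_denoise + alpha * loss_ladder_r + beta * loss_mi
```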
C EXPERIMENTAL RESULTS

C.1 IMPACT OF NUMBER OF MODELS IN ENSEMBLE

We evaluated the accuracy of Kingdra-Ladder-IM as the number of models in the ensemble was varied. MNIST accuracy with 1, 2, 5, 10, and 15 models is 95.0, 96.2, 97.4, 98.5, and 98.5, respectively. This suggests that accuracy saturates at 10 models, so we use an ensemble of 10 models in all our experiments.

C.2 COMPUTATION COST

We have an efficient implementation of clustering, which takes 210 s for the largest n = 70000. On a server with four P100 GPUs, Ladder-IM takes 2 mins, Ladder-IM with an ensemble takes 8 mins, and Kingdra with 10 iterations takes 80 mins, while IMSAT(RPT) takes 5 mins.

C.3 ANALYSIS OF DEEPCLUSTER

Here we give an analysis of DeepCluster Caron et al. (2018), explaining its shortcomings. We observed that the clustering accuracy generally decreases with iterations. This is because the generated pseudo-labels can be poor, which results in worse accuracy in the next iteration. In contrast, our approach only uses a small number of high-confidence samples for pseudo-labels.

D DETAILS OF THE DATASETS

• MNIST: A dataset of 70000 handwritten digits of 28-by-28 pixel size. The raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions.
• CIFAR10: A dataset of 32-by-32 color images with 10 classes having 6000 examples each. Similar to Hu et al. (2017), features are extracted using 50-layer pre-trained deep residual networks.
• STL: A dataset of 96-by-96 color images with 10 classes having 1300 examples each. We do not use the 100000 unlabeled images provided in the dataset. Similar to Hu et al. (2017), features are extracted using 50-layer pre-trained deep residual networks.
• Reuters: A dataset containing English news stories with four categories: corporate/industrial, government/social, markets, and economics. We used the same pre-processing as used by Hu et al. (2017). After removing the stop-words, tf-idf features were used.
• 20News: A dataset containing newsgroup documents from 20 different newsgroups. Similar to Hu et al. (2017), after removing stop words and keeping the 2000 most frequent words, tf-idf features were used.
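For reference, the 20News preprocessing described above can be reproduced roughly with scikit-learn as below; this is a sketch, and the exact tokenization and stop-word list used in Hu et al. (2017) may differ.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

docs = fetch_20newsgroups(subset="all").data
# Remove English stop words, keep the 2000 most frequent words, use tf-idf.
vectorizer = TfidfVectorizer(stop_words="english", max_features=2000)
features = vectorizer.fit_transform(docs)  # sparse (n_docs, 2000) matrix
```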
1. What is the novel approach introduced by the paper in clustering unlabeled data? 2. How does the method utilize ensemble networks and information measures to create high-precision labels? 3. Can the reviewer provide more insights into the experimental results and their promise? 4. Are there any limitations or areas of improvement in the proposed method?
Review
Review This paper presents a method where the authors 1) use an ensemble of networks to cluster unlabeled data and mark a pair of data points as similar only if all networks agree that the pair belongs to the same cluster, and 2) use the labeled pairs to create a similarity matrix and find "tight" clusters, i.e., sets of points that are all very similar to each other. The paper then uses the "labelled" points for semi-supervised learning with a proposed ensemble of models. The paper's method of creating high precision labels using a multi-step clustering algorithm with information measures is quite interesting. The experimental results look promising.
ICLR
Title Interactive Parallel Exploration for Reinforcement Learning in Continuous Action Spaces

Abstract In this paper, a new interactive parallel learning scheme is proposed to enhance the performance of off-policy continuous-action reinforcement learning. In the proposed interactive parallel learning scheme, multiple identical learners with their own value functions and policies share a common experience replay buffer, and search for a good policy in collaboration with the guidance of the best policy information. The information of the best policy is fused in a soft manner by constructing an augmented loss function for the policy update to enlarge the overall search space of the multiple learners. The guidance by the previous best policy and the enlarged search space enabled by the proposed interactive parallel learning scheme allow faster and better policy search in the policy parameter space. Working algorithms are constructed by applying the proposed interactive parallel learning scheme to several off-policy reinforcement learning algorithms, such as the twin delayed deep deterministic (TD3) policy gradient algorithm and the soft actor-critic (SAC) algorithm, and numerical results show that the constructed IPE-enhanced algorithms outperform most of the current state-of-the-art reinforcement learning algorithms for continuous action control.

1 INTRODUCTION

Reinforcement learning (RL) for continuous action control is an active research field. In RL, an agent learns a policy through interaction with the environment to maximize the cumulative reward. One of the key issues in RL is the trade-off between exploitation and exploration. Exploitation is making the best decision based on the already collected information, whereas exploration is collecting more new information about the environment. The balance between the two is important for good RL algorithms. For example, DQN (Mnih et al. (2015)) balances exploitation and exploration by taking actions based on the ε-greedy approach. The deep deterministic policy gradient (DDPG) (Lillicrap et al. (2015)) and twin delayed deep deterministic (TD3) (Fujimoto et al. (2018)) policy gradient algorithms promote exploration by adding Ornstein-Uhlenbeck noise and Gaussian noise, respectively, to the chosen action. Soft actor-critic (SAC) (Haarnoja et al. (2018)) performs balancing by using a maximum entropy objective. However, most of the previous works focus on exploration to obtain unobserved states or actions.

In this paper, we consider exploration in the policy parameter space by using parallel identical learners for the same environment. By having multiple identical learners for the same environment, we gain increased search capability for a better policy. Parallelism in learning has been investigated widely in distributed RL (Nair et al. (2015), Mnih et al. (2016), Horgan et al. (2018), Barth-Maron et al. (2018), Espeholt et al. (2018)), evolution strategies (Salimans et al. (2017), Choromanski et al. (2018)), and recently in population based training (PBT) (Jaderberg et al. (2017), Jaderberg et al. (2018), Conti et al. (2017)) for faster and better search for parameters and/or hyperparameters. In this paper, we also apply parallelism to RL in order to enhance the learning performance, but in a slightly different way compared to the previous methods.
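As a concrete reference for the action-noise exploration mentioned above, here is a minimal sketch of Gaussian exploration noise added to a deterministic policy, as in TD3; the clipping bounds and the value of σ are illustrative choices.

```python
import numpy as np

def noisy_action(policy, state, sigma=0.1, low=-1.0, high=1.0):
    # Deterministic action plus Gaussian exploration noise, clipped to bounds.
    action = policy(state)
    action = action + np.random.normal(0.0, sigma, size=np.shape(action))
    return np.clip(action, low, high)
```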
The proposed algorithm is intended for any off-policy RL algorithm and is composed of a chief, N copies of the same environment, and N identical learners with a shared common experience replay buffer and a common base algorithm, as shown in Fig. 1. Each learner has its own value function(s) and policy, and trains its own policy by interacting with its own environment copy, with some additional interaction with the chief, as shown in Fig. 1. At each time step, each learner takes an action in its environment copy by using its own policy and stores its experience in the shared common experience replay buffer. Then, each learner updates its value function parameter and policy parameter by drawing mini-batches from the shared common replay buffer and minimizing its own value loss function and policy loss function, respectively.

One way to implement parallel learning under the above setup is to run N fully independent learners without interaction, except for sharing their experiences, until the end of the time steps, and to choose the policy of the learner with the maximum accumulated reward at the end for future use. We will refer to this method as the experience-sharing-only method. However, this method ignores the possible benefit of mutual interaction among the learners during training. In order to harness the benefit of mutual interaction among the learners in parallel learning, we exploit the information from the best learner among all learners periodically during training, as in PBT (Jaderberg et al. (2017), Jaderberg et al. (2018)). Suppose that the value and policy parameters of each learner are initialized and learning is performed as described above for M time steps. At the end of the M time steps, we can determine which learner is best based on the average of the most recent Er episodic rewards of each learner. Then, the policy parameter information of the best learner can be used to enhance the learning of the other learners for the next M time steps. This information can help learners stuck in local minima escape and can guide the other learners in a better direction.

One simple way to exploit this best policy parameter information is to reset the policy parameter of each learner to the policy parameter of the best learner at the beginning of the next M time steps, make each learner perform learning from this initial point in the policy parameter space for the next M time steps, select the best learner again at the end of the next M time steps, and repeat this procedure every M time steps, in a similar way to how PBT (Jaderberg et al. (2017)) copies the best learner's parameters and hyperparameters to the other learners. We will refer to this method as the reloading method in this paper. However, this reloading method has the problem that the search area covered by all N searching policies collapses to one point at the time of parameter copying, and thus the search area can be narrow around the previous best policy point. In order to overcome such a disadvantage, instead of resetting the policy parameter to the best policy parameter every M time steps, we here propose using the policy parameter information of the best learner in a soft manner to enhance the performance of the overall parallel learning. In the proposed scheme, the shared best policy information is used only to guide the policies of the other learners. The policy of each learner is updated to improve performance while staying within a certain distance of the shared guiding policy.
The chief periodically determines the best policy among the policies of all learners and distributes the best policy parameter to all learners so that the learners search for better policies around the previous best policy. The chief also enforces that the N searching policies are spread in the policy parameter space at a given distance from the previous best policy point, so that the search area in the policy space covered by all N learners remains wide and does not collapse into a narrow region. The proposed interactive parallel exploration (IPE) learning method can be applied to any off-policy RL algorithm and is easy to implement. Furthermore, the proposed method can be extended directly to distributed or multi-agent RL systems. We apply our IPE scheme to the TD3 algorithm and the SAC algorithm, which are state-of-the-art off-policy algorithms, as our base algorithms, and the new algorithms are named the IPE-TD3 and IPE-SAC algorithms, respectively. Numerical results show that the IPE-enhanced algorithms outperform the baseline algorithms both in the speed of convergence and in the final steady-state performance. The gain of IPE comes from both the guidance of the previous best policy and the enlarged search space in the policy parameter space.

2 BACKGROUND AND RELATED WORKS

2.1 DISTRIBUTED RL

Distributed RL is an efficient way to take advantage of parallelism to achieve fast training for large complex tasks (Nair et al. (2015)). Most of the works in distributed RL assume a common structure composed of multiple actors interacting with multiple copies of the same environment and a central system which stores and optimizes the common Q-function parameter or the policy parameter shared by all actors. The focus of distributed RL is to optimize the Q-function parameter or the policy parameter quickly by generating more samples in the same wall clock time with multiple actors. In order to achieve this goal, researchers have investigated various techniques for distributed RL, e.g., asynchronous update of parameters (Mnih et al. (2016), Babaeizadeh et al. (2017)), sharing an experience replay buffer (Horgan et al. (2018)), GPU-based parallel computation (Babaeizadeh et al. (2017), Clemente et al. (2017)), GPU-based simulation (Liang et al. (2018)), and V-trace in the case of on-policy algorithms (Espeholt et al. (2018)). Distributed RL yields significant performance improvement in terms of wall clock time, but it does not consider the possible enhancement from interaction among multiple learners as in IPE and PBT. The proposed IPE uses a structure similar to that in (Nair et al. (2015), Espeholt et al. (2018)): that is, IPE is composed of multiple learners and a central system called the chief. The difference is that each learner has its own Q or value function parameter and policy parameter and optimizes these parameters in parallel with interactions.

2.2 POPULATION BASED TRAINING

Parallelism is also exploited in finding optimal parameters and hyperparameters of training algorithms for neural networks in PBT (Jaderberg et al. (2017), Jaderberg et al. (2018), Conti et al. (2017)). PBT (Jaderberg et al. (2017)) first chooses multiple sets of hyperparameters and parameters for a common base algorithm, and runs the base algorithm separately in parallel on multiple learners to train their neural networks using those parameters and hyperparameters. Each learner updates the neural network parameters by perturbing the assigned hyperparameters.
During training, in principle, PBT evaluates the performance of the multiple learners periodically, selects the best performing hyperparameters, and then distributes the best performing hyperparameters and the corresponding parameters to the other learners, although the implementation details can vary. Recently, PBT has been applied to competitive multi-agent RL (Jaderberg et al. (2018)) and novelty search algorithms (Conti et al. (2017)). Although PBT was mainly developed to tune hyperparameters, the philosophy of PBT can be applied to find optimal parameters for given hyperparameters with multiple learners. In this case, multiple learners update their parameters in parallel, their performance is measured periodically, the parameters of the best performing learner are copied to the other learners, the other learners independently update their parameters from the copied parameters as their new initialization, and this process is repeated. The proposed IPE is similar to PBT in the sense that it exploits the parameters of the best performing learner among multiple parallel learners. However, IPE differs from this PBT-derived method in how it uses the parameters of the best learner. In the PBT-derived method, the parameters of the best learner are copied to the other learners, and the other learners' parameters are reset to the parameters of the best performing learner. Then, the parameters of each learner are updated by stochastic gradient descent (SGD). In IPE, however, the parameters of the best performing learner are not copied but are used in a soft manner as a guiding direction. Copying means that the parameters of all learners collapse to a single point in the parameter space. Furthermore, unlike PBT, IPE uses a common experience replay buffer to store the experiences of all learners, exploiting the diverse experiences generated by multiple learners with different parameters. As mentioned in Section 1, we refer to the PBT-derived method with a common experience replay buffer as the reloading method, whose performance is given in the ablation study in Section 4. Although IPE is considered only for parallel parameter search in this paper, combining the soft use of the best performing learner's parameters with hyperparameter search is interesting future work.

2.3 GUIDED POLICY SEARCH

Our IPE method is also related to guided policy search (Levine & Koltun (2013), Levine et al. (2016), Teh et al. (2017), Ghosh et al. (2018)). Recently, Teh et al. (2017) proposed a guided policy search method for the joint training of multiple tasks, in which a common policy is used to guide local policies and the common policy is distilled from the local policies. Here, the local policies' parameters are updated to maximize the performance and minimize the KL divergence between the local policies and the common distilled policy. The proposed IPE is related to guided policy search in the sense that multiple policies are guided by a common policy. However, the difference is that the goal of IPE is not learning multiple tasks but learning optimal parameters for a common task, as in PBT, and hence the guiding policy is not distilled from multiple local policies but chosen as the best performing policy among the multiple learners.

2.4 EXPLORATION

Improving exploration has been one of the key issues in RL, and many different ways have been developed to improve exploration: through maximum entropy objectives (Haarnoja et al. (2017; 2018)), noise in networks (Fortunato et al. (2018); Plappert et al.
(2018)), and intrinsically motivated approaches (Bellemare et al. (2016); Ostrovski et al. (2017); Pathak et al. (2017); Achiam & Sastry (2017); Zheng et al. (2018)). The proposed IPE also enhances exploration. Specifically, IPE uses exploitation for exploration. Exploitation for exploration has been considered in previous works (White & Sofge (1992), Oh et al. (2018)). In particular, Oh et al. (2018) exploited past good experiences to explore the sample space, whereas IPE exploits the current good policy among multiple policies to explore the policy space.

2.5 THE SETUP: PARALLEL LEARNING FOR A COMMON ENVIRONMENT

The considered parallel learning setting consists of the environment E and N policies {π1, . . . , πN}. The environment E is described as a Markov decision process (MDP) defined by the tuple 〈S, A, T, r〉, where S is the state space, A is the action space, T : S × A × S → [0, 1] is the state transition probability, and r : S × A → R is the reward function. There exist N copies {E1, . . . , EN} of the environment E, i.e., E1 = · · · = EN = E, and the N environment copies may have different random initial seeds. The policy πi interacts with its corresponding environment copy Ei and builds up its trajectory {(s^i_t, a^i_t, r^i_t), t = 1, 2, . . .} for each i = 1, . . . , N. At time step t, the environment copy Ei has a state s^i_t ∈ S. The policy πi interacts with the environment copy Ei by taking an action a^i_t according to πi given the current state s^i_t. Then, the environment copy Ei yields the reward r^i_t = r(s^i_t, a^i_t) and makes a transition to the next state s^i_{t+1} according to T. In this paper, in order to account for the actual amount of interaction with the environment, we define environment steps as the total number of interactions by all N parallel policies with all N environment copies. Suppose that all N policies generate their trajectories simultaneously in parallel, and suppose that M time steps have elapsed. Then, although the number of elapsed time steps is M, the number of environment steps is NM.

3 INTERACTIVE PARALLEL POLICY EXPLORATION

We now present the proposed IPE scheme in the parallel environment learning setting described in Section 2.5; the overall structure is described in Fig. 1. We have N identical parallel learners with a shared common experience replay buffer D, and all N identical learners employ a common base algorithm, which can be any off-policy RL algorithm. The execution is in parallel. The i-th learner has its own environment Ei, which is a copy of the common environment E, and has its own value function (e.g., Q-function) parameter θ^i and policy parameter φ^i. The i-th learner interacts with the environment copy Ei, with some additional interaction with the chief, as shown in Fig. 1. At each time step, the i-th learner takes an action a^i_t in its environment copy Ei by using its own policy π_{φ^i} and stores its experience (s^i_t, a^i_t, r^i_t, s^i_{t+1}) in the shared common experience replay buffer D, for all i = 1, 2, . . . , N. Note that one time step corresponds to N environment steps. Then, at each time step, each learner updates its value function parameter and policy parameter N times by drawing N mini-batches of size B from the shared common replay buffer D and minimizing its own value loss function and policy loss function, respectively. The N updates per time step for each learner exploit the samples provided by the other N − 1 learners stored in the shared common replay buffer.
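The per-time-step procedure above can be summarized with the following Python sketch; the learner, environment, and buffer interfaces (`act`, `step`, `add`, `sample`, `update`) are our assumptions, not the authors' code.

```python
def ipe_time_step(learners, envs, states, buffer, batch_size):
    """One time step (= N environment steps): every learner acts in its own
    environment copy, stores to the shared buffer D, then updates N times."""
    for i, (learner, env) in enumerate(zip(learners, envs)):
        action = learner.act(states[i])
        next_state, reward, done, _ = env.step(action)
        buffer.add(states[i], action, reward, next_state)
        states[i] = env.reset() if done else next_state
    n = len(learners)
    for learner in learners:
        for _ in range(n):  # N updates to exploit the other learners' samples
            batch = buffer.sample(batch_size)
            learner.update(batch)  # minimize its own value and policy losses
    return states
```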
In order to harness the benefit of mutual interaction among the learners in parallel learning, we exploit the information from the best learner periodically during training, as in PBT (Jaderberg et al. (2017)). Suppose that the Q-function parameter and the policy parameter of each learner are initialized and learning is performed as described above for M time steps. At the end of the M time steps, we determine which learner is best based on the average of the most recent Er episodic rewards of each learner. Let the index of the best learner be b. Then, the policy parameter information φ^b of the best learner can be used to enhance the learning of the other learners for the next M time steps. Here, instead of copying φ^b to the other learners, we propose using the information of φ^b in a soft manner to enhance the performance of the overall parallel learning. That is, during the next M time steps, whereas we set the loss function L̃(θ^i) for the Q-function to be the same as the loss L(θ^i) of the base algorithm, we set the loss function L̃(φ^i) for the policy parameter φ^i of the i-th learner as the following augmented version:

L̃(φ^i) = L(φ^i) + 1{i≠b} β Ê_{s∼D}[D(π_{φ^i}, π_{φ^b})] (1)

where L(φ^i) is the policy loss function of the base algorithm, 1{·} denotes the indicator function, β (> 0) is a weighting factor, D(π, π′) is some distance measure between two policies π and π′, and Ê_{s∼D} denotes the sample expectation based on a mini-batch drawn randomly from the experience replay buffer D. The augmented loss function L̃(φ^i) in (1) is composed of the two terms L(φ^i) and 1{i≠b} β Ê_{s∼D}[D(π_{φ^i}, π_{φ^b})]. Thus, for the non-best learners of the previous M time steps, the gradient of L̃(φ^i) is a mixture of two directions: one is to maximize the return by itself, and the other is to follow the previously best learner's policy. The second term on the right-hand side (RHS) of (1) guides the non-best learners towards a good direction in addition to each learner's self search.

3.1 DESIGN OF THE WEIGHTING FACTOR β

In (1), the weighting factor β is common to all N learners and should be determined judiciously to balance each learner's own performance improvement against moving towards the previous best policy among the N learners. We adopt an adaptive method to determine the value of β as follows:

β ← 2β if D̂_best ≥ max{ρ D̂^b_change, d_search} × 1.5, and β ← β/2 if D̂_best < max{ρ D̂^b_change, d_search} / 1.5, (2)

where D̂_best is the estimated distance between π_{φ^i} and π_{φ^b} averaged over all N − 1 non-best learners, and D̂^b_change is the estimated distance between π_{φ^b_updated} (i.e., the policy of the current best learner at the end of the current M time steps) and π_{φ^b} (i.e., the policy of the current best learner at the end of the previous M time steps), given respectively by

D̂_best = (1/(N − 1)) Σ_{i∈I_{−b}} Ê_{s∼D}[D(π_{φ^i}, π_{φ^b})] and D̂^b_change = Ê_{s∼D}[D(π_{φ^b_updated}, π_{φ^b})]. (3)

Here, I_{−b} = {1, . . . , N} \ {b}, and d_search and ρ are predetermined hyperparameters. This adaptation method is similar to that used in proximal policy optimization (PPO) (Schulman et al. (2017)). The update of β is done every M time steps, and the updated β is used for the next M time steps. First, suppose that we do not have the first term ρ D̂^b_change in the maximum in the condition of (2). Then, when the estimated average distance D̂_best from the best policy to the remaining policies is smaller than d_search/1.5, the parameter β is decreased by half.
3.1 DESIGN OF THE WEIGHTING FACTOR β

In (1), the weighting factor $\beta$ is common to all $N$ learners and should be chosen judiciously to balance between each learner improving its performance by itself and moving towards the previous best policy among the $N$ learners. We adopt the following adaptive method to determine the value of $\beta$:

$$\beta \leftarrow \begin{cases} 2\beta & \text{if } \hat{D}_{\mathrm{best}} \ge 1.5\max\{\rho \hat{D}^b_{\mathrm{change}},\ d_{\mathrm{search}}\} \\ \beta/2 & \text{if } \hat{D}_{\mathrm{best}} < \max\{\rho \hat{D}^b_{\mathrm{change}},\ d_{\mathrm{search}}\}/1.5 \end{cases} \qquad (2)$$

where $\hat{D}_{\mathrm{best}}$ is the estimated distance between $\pi_{\phi^i}$ and $\pi_{\phi^b}$ averaged over all $N-1$ non-best learners, and $\hat{D}^b_{\mathrm{change}}$ is the estimated distance between $\pi_{\phi^b_{\mathrm{updated}}}$ (i.e., the policy of the current best learner at the end of the current $M$ time steps) and $\pi_{\phi^b}$ (i.e., the policy of the current best learner at the end of the previous $M$ time steps), given respectively by

$$\hat{D}_{\mathrm{best}} = \frac{1}{N-1}\sum_{i \in I_{-b}} \hat{\mathbb{E}}_{s \sim D}\big[ D(\pi_{\phi^i}, \pi_{\phi^b}) \big] \quad \text{and} \quad \hat{D}^b_{\mathrm{change}} = \hat{\mathbb{E}}_{s \sim D}\big[ D(\pi_{\phi^b_{\mathrm{updated}}}, \pi_{\phi^b}) \big]. \qquad (3)$$

Here, $I_{-b} = \{1, \ldots, N\} \setminus \{b\}$, and $d_{\mathrm{search}}$ and $\rho$ are predetermined hyperparameters. This adaptation method is similar to that used in proximal policy optimization (PPO) (Schulman et al. (2017)). The update of $\beta$ is performed every $M$ time steps, and the updated $\beta$ is used for the next $M$ time steps.

First, suppose that the term $\rho \hat{D}^b_{\mathrm{change}}$ were absent from the maximum in the condition of (2). Then, when the estimated average distance $\hat{D}_{\mathrm{best}}$ from the best policy to the remaining policies is smaller than $d_{\mathrm{search}}/1.5$, the parameter $\beta$ is halved. Hence, the movement in the gradient direction of the second term on the RHS of (1) is diminished, and the independent movement in the optimization direction of $L(\phi^i)$ becomes more important. Each policy then gradually diverges from the previous best policy, which serves as the reference point, due to internal exploration mechanisms such as added action noise. On the other hand, when $\hat{D}_{\mathrm{best}}$ is larger than $1.5 d_{\mathrm{search}}$, the parameter $\beta$ is doubled and the movement towards the previous best policy becomes more important. As time steps elapse, $\beta$ settles so that $\hat{D}_{\mathrm{best}}$ stays around $d_{\mathrm{search}}$. Hence, the proposed IPE scheme searches a wide area with rough radius $d_{\mathrm{search}}$ around the best policy in the policy parameter space, as shown in Fig. 2(a). Furthermore, the term $\rho \hat{D}^b_{\mathrm{change}}$ in the maximum of the condition in (2) lets us control the speed of tracking the best policy. $\hat{D}^b_{\mathrm{change}}$ measures the speed of change of the best policy parameter. When the best policy parameter change scaled by $\rho$, i.e., $\rho \hat{D}^b_{\mathrm{change}}$, is less than $d_{\mathrm{search}}$, the term is masked by the maximum operation in (2). On the other hand, when $\rho \hat{D}^b_{\mathrm{change}} > d_{\mathrm{search}}$, the term is active, which means that the best policy parameter is changing fast and the tracking speed should be controlled. If $\hat{D}_{\mathrm{best}} > \rho \hat{D}^b_{\mathrm{change}}$, i.e., the distance from $\pi_{\phi^i}$ to $\pi_{\phi^b}$ is larger than $\rho$ times the distance from $\pi_{\phi^b_{\mathrm{updated}}}$ to $\pi_{\phi^b}$, then the tracking of the best policy is too slow, and we double $\beta$; otherwise, we halve $\beta$. When the search for the current $M$ time steps is finished, a new best learner is selected and a new search over a wide area around the new best learner's policy $\pi_{\phi^b}$ is performed, as illustrated in Fig. 2(b). The policy parameter information $\phi^b$ of the best learner may thus change before the next best-learner selection.
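Restated compactly, the chief's periodic update of $\beta$ in (2) amounts to the following few lines. This is a hedged sketch with illustrative names, assuming $\hat{D}_{\mathrm{best}}$ and $\hat{D}^b_{\mathrm{change}}$ have already been estimated as in (3).

```python
def adapt_beta(beta, d_best, d_change, d_search, rho):
    """Eq. (2): double beta when the non-best learners have drifted too far
    from the best policy (or when the best policy itself moves fast), and
    halve beta when they cluster too tightly around it."""
    threshold = max(rho * d_change, d_search)
    if d_best >= 1.5 * threshold:
        return 2.0 * beta
    if d_best < threshold / 1.5:
        return beta / 2.0
    return beta  # neither condition in (2) holds, so beta is left unchanged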
Now, the overall procedure of the proposed IPE scheme is explained with the diagram in Fig. 1. The value function parameter and policy parameter of each learner are initialized. The chief distributes the parameter $\beta$ and the reference policy parameter $\phi^b$, i.e., the policy information of the best learner over the previous $M$ time steps, to all learners. At each time step, each learner interacts with its own environment copy by taking an action and receiving the reward and the next state, and stores its experience in the shared common replay buffer $D$. Then, the $i$-th learner updates its value function parameter $\theta^i$ by minimizing its own value loss function $\tilde{L}(\theta^i)$, which is the same as that of the base algorithm, and updates its policy parameter $\phi^i$ by minimizing the augmented loss function $\tilde{L}(\phi^i)$ in (1), $N$ times per time step, by drawing $N$ mini-batches from the shared common replay buffer $D$. Whenever an episode ends for a learner, the learner reports the episodic reward to the chief. The $i$-th learner also reports $\hat{\mathbb{E}}_{s \sim D}[ D(\pi_{\phi^i}, \pi_{\phi^b}) ]$ to the chief for the computation of $\hat{D}_{\mathrm{best}}$ in (3). Every $M$ time steps, the chief updates $\beta$ according to (2) and determines the best learner over the most recent $M$ time steps based on the episodic rewards collected from each learner. Once the best learner is determined, the chief obtains the policy parameter information $\phi^b$ from the best learner and distributes the new $\beta$ and the reference policy parameter $\phi^b$ to all $N$ learners. This procedure repeats until the number of time steps reaches the predefined maximum.

When the parallel learning based on IPE reaches a steady state, we can choose any of the $N$ learners' policies and use the chosen policy for the environment $\mathcal{E}$ in the future, since at steady state the performance of all $N$ policies is expected to be more or less similar due to their distance property.

3.2 IPE-ENHANCED ALGORITHMS

The proposed IPE method can be applied to any off-policy RL algorithm, regardless of whether the base algorithm has continuous or discrete actions. Here, we consider the application of IPE to the TD3 algorithm as the base algorithm; the constructed algorithm is named the IPE-TD3 algorithm. The details of the baseline TD3 are given in Appendix A. With TD3 as the base algorithm, each learner has its own parameters $\theta^i_1$, $\theta^i_2$, and $\phi^i$ for its two Q-functions and policy. Furthermore, it has $(\theta^i_1)'$, $(\theta^i_2)'$, and $(\phi^i)'$, the parameters of the corresponding target networks. For the distance measure between two policies, we use the mean square difference, given by

$$D(\pi(s), \tilde{\pi}(s)) = \frac{1}{2} \left\| \pi(s) - \tilde{\pi}(s) \right\|_2^2. \qquad (4)$$

For the $i$-th learner, as in TD3, the parameters $\theta^i_j$, $j = 1, 2$, are updated every time step by minimizing

$$\tilde{L}(\theta^i_j) = \hat{\mathbb{E}}_{(s,a,r,s')\sim D}\left[ \big( y - Q_{\theta^i_j}(s, a) \big)^2 \right] \qquad (5)$$

where $y = r + \gamma \min_{j=1,2} Q_{(\theta^i_j)'}\big( s', \pi_{(\phi^i)'}(s') + \epsilon \big)$, $\epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}^2), -c, c)$. The parameter $\phi^i$ is updated every $d$ time steps by minimizing the following augmented loss function:

$$\tilde{L}(\phi^i) = \hat{\mathbb{E}}_{s \sim D}\left[ -Q_{\theta^i_1}(s, \pi_{\phi^i}(s)) + \mathbb{1}_{\{i \neq b\}} \frac{\beta}{2} \left\| \pi_{\phi^i}(s) - \pi_{\phi^b}(s) \right\|_2^2 \right]. \qquad (6)$$

For the first $T_{\mathrm{initial}}$ timesteps, we use a random policy for initial exploration and perform no policy updates. With these loss functions, the reference policy, and the initial exploration policy, the rest of the procedure is the same as the general IPE procedure described above. The pseudocode of the IPE-TD3 algorithm is provided in Appendix B. The application of IPE to other algorithms such as SAC and DQN is also provided in the appendices.
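For concreteness, a hedged PyTorch sketch of the augmented IPE-TD3 policy loss (6) is given below. `actor_i`, `actor_b`, and `q1_i` denote the $i$-th learner's deterministic policy, the (frozen) best policy, and the $i$-th learner's first Q-network; all names and the `(states, actions)` calling convention are illustrative assumptions.

```python
import torch

def ipe_td3_policy_loss(actor_i, actor_b, q1_i, states, beta, is_best):
    """Eq. (6): the TD3 deterministic policy gradient loss plus, for non-best
    learners, a soft mean-square attraction toward the best policy's actions."""
    actions_i = actor_i(states)
    loss = -q1_i(states, actions_i).mean()        # -Q_{theta_1^i}(s, pi_{phi^i}(s))
    if not is_best:
        with torch.no_grad():
            actions_b = actor_b(states)           # reference actions, no gradient
        penalty = ((actions_i - actions_b) ** 2).sum(dim=-1).mean()
        loss = loss + 0.5 * beta * penalty        # 1_{i != b} * (beta/2) * ||.||_2^2
    return loss
```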
4 EXPERIMENTS

In this section, we provide numerical results on the performance of the proposed IPE-TD3 and of current state-of-the-art on-policy and off-policy baseline algorithms on several MuJoCo environments (Todorov et al. (2012)). The baseline algorithms are Proximal Policy Optimization (PPO) (Schulman et al. (2017)), Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. (2017)), Soft Q-learning (SQL) (Haarnoja et al. (2017)), Soft Actor-Critic (SAC) (Haarnoja et al. (2018)), and TD3 (Fujimoto et al. (2018)). Further numerical results on IPE applied to SAC are provided in the appendices.

4.1 PARAMETER SETTING

All hyperparameters used for evaluation are the same as those in the original papers (Schulman et al. (2017); Haarnoja et al. (2018); Fujimoto et al. (2018)). Here, we provide the hyperparameters of TD3 and IPE-TD3 only.

TD3 The networks for the two Q-functions and the policy have 2 hidden layers of sizes 400 and 300, respectively. The non-linearity of the hidden layers is ReLU, and the activation functions of the last layers of the Q-functions and the policy are linear and hyperbolic tangent, respectively. We used the Adam optimizer with learning rate $10^{-3}$, discount factor $\gamma = 0.99$, target smoothing factor $\tau = 5 \times 10^{-3}$, and period $d = 2$ for updating the policy. The experience replay buffer size is $10^6$, and the mini-batch size $B$ is 100. The standard deviations of the exploration noise $\sigma$ and target noise $\tilde{\sigma}$ are 0.1 and 0.2, respectively, and the noise clipping factor $c$ is 0.5.

IPE-TD3 In addition to the parameters for TD3, we used $N = 4$ learners, period $M = 250$ for updating the best policy and $\beta$, and $E_r = 10$ recent episodes for determining the best learner $b$. The parameters $d_{\mathrm{search}}$ and $\rho$ for the exploration range are 0.04 and 2, respectively. The number of initial exploration timesteps $T_{\mathrm{initial}}$ is set to 250 for Hopper-v1 and Walker2d-v1 and to 2500 for HalfCheetah-v1 and Ant-v1.

4.2 COMPARISON TO BASELINES

In order to have a sample-wise fair comparison among the considered algorithms, we measure performance with respect to environment steps (not time steps), defined as the total number of interactions with the environment by the agent. This comparison makes sense because equal performance at the same number of environment steps means that all algorithms use the same number of samples obtained from the environment. The performance is obtained through an evaluation method similar to those in (Haarnoja et al. (2018); Fujimoto et al. (2018)). Evaluation of the policies is conducted every $R_{\mathrm{eval}} = 4000$ environment steps for all algorithms. At each evaluation instant, the agent (or learner) fixes its policy as the one at the evaluation instant and interacts with a separate copy of the same environment, reserved for evaluation, using the fixed policy to obtain 10 episodic rewards. The average of these 10 episodic rewards is the performance at the evaluation instant. In the case of IPE-TD3 and the other parallel learning schemes, each of the $N$ learners fixes its policy as the one at the evaluation instant and interacts with the evaluation environment with the fixed policy to obtain 10 episodic rewards. First, the 10 episodic rewards are averaged for each learner, and then the maximum of the 10-episode-average rewards over the $N$ learners is taken as the performance at that evaluation instant. We performed this procedure for five different random seeds, and the mean and variance of the learning curve are obtained from these five runs. The policies used for evaluation are stochastic for PPO and deterministic for the others.
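The per-evaluation-instant score for the parallel schemes can be summarized as follows; this is a hedged sketch assuming a gym-style environment interface and an illustrative `policies` list, not the authors' evaluation code.

```python
import numpy as np

def run_episode(env, policy):
    """Roll out one evaluation episode with a fixed policy; return its return."""
    state, done, episode_return = env.reset(), False, 0.0
    while not done:
        state, reward, done, _ = env.step(policy(state))
        episode_return += reward
    return episode_return

def evaluate_parallel(policies, make_eval_env, episodes=10):
    """Average 10 episodic rewards per learner, then report the maximum
    of the per-learner averages, as described in Section 4.2."""
    per_learner = [np.mean([run_episode(make_eval_env(), pi)
                            for _ in range(episodes)])
                   for pi in policies]
    return max(per_learner)
```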
Fig. 3 shows the learning curves over one million environment steps for several MuJoCo tasks: Hopper-v1, Walker2d-v1, HalfCheetah-v1, and Ant-v1. First, it is observed that the performance of TD3 here is similar to that in the original TD3 paper (Fujimoto et al. (2018)), and the performance of the other baseline algorithms is also similar to that in the original papers (Schulman et al. (2017); Haarnoja et al. (2018)). The IPE-TD3 algorithm outperforms the state-of-the-art RL algorithms in terms of both the speed of convergence with respect to environment steps and the final steady-state performance (except in Walker2d-v1, where the initial convergence is slightly slower than that of TD3). In particular, in Hopper-v1 and Ant-v1, TD3 has large variance, which means that the performance of TD3 is quite dependent on the initial condition of the environment and that it is not easy for TD3 to escape from bad local optima in certain environments. In contrast, IPE-TD3 yields much lower variance than TD3. This implies that the wide-area search by IPE in the policy parameter space helps the learners escape from bad local optima. Overall, the wide-area search around the previous best policy point in the policy parameter space by IPE yields faster and better policy search.

4.3 ABLATION STUDY

IPE-TD3 has several components to improve the performance based on parallelism: 1) sharing experiences from multiple policies, 2) using the best policy information, and 3) fusing the best policy information in a soft manner based on the augmented loss function. Thus, we investigated the impact of each component on the performance improvement. For comparison, we considered the following parallel policy exploration methods, gradually incorporating more techniques:

1. TD3: The original TD3 with one learner.
2. Distributed RL TD3 (DRL-TD3): $N$ actors obtain samples from $N$ environment copies. The policy and the experience replay buffer are shared by all $N$ actors.
3. Experience-Sharing-Only TD3 (ESO-TD3): $N$ learners interact with $N$ environment copies and update their own policies using experiences drawn from the shared experience replay buffer.
4. Reloading TD3 (Re-TD3): At every $M'$ timesteps, the best policy is determined and all policies are initialized to the best policy, i.e., the best learner's policy parameter is copied to all other learners. The rest of the procedure is the same as in ESO-TD3.
5. IPE-TD3: At every $M$ timesteps, the best policy is determined and its information is used in a soft manner through the augmented loss function.

Note that Re-TD3 also exploits the best policy information from the $N$ learners. The main difference between IPE-TD3 and Re-TD3 is the way the best learner's policy parameter is used. Re-TD3 initializes all policies with the best policy parameter every $M'$ timesteps, as in PBT (Jaderberg et al. (2017)), whereas IPE-TD3 uses the best learner's policy parameter, determined every $M$ timesteps, to construct an augmented loss function. For a fair comparison, $M$ and $M'$ are determined independently and optimally for IPE-TD3 and Re-TD3, respectively, since the optimal period can differ between the two methods. Thus, $M' = 5000$ was determined for Re-TD3 by tuning, whereas $M = 250$ is used for IPE-TD3. Since all $N$ policies collapse to one point in Re-TD3 at the beginning of each period, a larger period is expected to be required for Re-TD3 to have sufficiently spread policies at the end of each best-policy selection period.

Fig. 4 shows the learning curves of the considered parallel exploration methods for the Ant-v1 task, and Table 1 shows the final (steady-state) performance of the considered parallel exploration methods for four MuJoCo tasks. IPE-TD3 outperforms the other parallel methods (DRL-TD3, ESO-TD3, and Re-TD3), except on Hopper-v1, where ESO-TD3 outperforms all other parallel schemes. Both Re-TD3 and IPE-TD3 achieve better final (steady-state) performance than TD3 and ESO-TD3 for all tasks except Hopper-v1, for which ESO-TD3 performs best. Note that ESO-TD3 obtains the most diverse experiences, since the $N$ learners share the experience replay buffer but do not interact with each other until the end of training; this diversity appears to be beneficial on Hopper-v1. The final performances of Re-TD3 and IPE-TD3 are roughly the same for HalfCheetah-v1, but the final performance of IPE-TD3 is noticeably better than that of Re-TD3 in the other cases.

5 CONCLUSION

In this paper, we have proposed a new interactive parallel learning scheme, IPE, to enhance the performance of off-policy RL systems.
In the proposed IPE scheme, multiple identical learners with their own value functions and policies, sharing a common experience replay buffer, search for a good policy in collaboration, with the guidance of the best policy information from the previous search interval. The information of the best policy parameter of the previous search interval is fused in a soft manner by constructing an augmented loss function for the policy update, which enlarges the overall search space covered by the multiple learners. The guidance by the previous best policy and the enlarged search space enabled by IPE yield faster and better policy search in the policy parameter space. The IPE-enhanced algorithms constructed by applying the proposed IPE scheme to TD3 or SAC outperform most of the current state-of-the-art continuous-action RL algorithms. Although we mainly considered continuous-action off-policy algorithms in this paper, the proposed IPE method can also be applied to RL with discrete actions, as shown in Appendix E. In the continuous-action control case, the gain by IPE can benefit the recent trend of fast computer-based prototyping of complex robotic systems or autonomous cars, whereas in the discrete-action case, IPE can search for better policy parameters for more challenging tasks.

APPENDIX A. THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT ALGORITHM AND THE SOFT ACTOR-CRITIC ALGORITHM

A.1 THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT (TD3) ALGORITHM

The TD3 algorithm is a current state-of-the-art off-policy algorithm and is a variant of the deep deterministic policy gradient (DDPG) algorithm (Lillicrap et al. (2015)). TD3 addresses two problems in typical actor-critic algorithms: 1) overestimation bias and 2) high variance in the approximation of the Q-function. To reduce the bias, TD3 maintains two Q-functions and uses the minimum of the two Q-function values to compute the target value; to reduce the variance in the gradient, the policy is updated less frequently than the Q-functions. Specifically, let $Q_{\theta_1}$, $Q_{\theta_2}$, and $\pi_\phi$ be the two current Q-functions and the current deterministic policy, respectively, and let $Q_{\theta'_1}$, $Q_{\theta'_2}$, and $\pi_{\phi'}$ be the target networks of $Q_{\theta_1}$, $Q_{\theta_2}$, and $\pi_\phi$, respectively. The target networks are initialized as copies of the current networks. At time step $t$, the TD3 algorithm takes an action $a_t$ with exploration noise $\epsilon$: $a_t = \pi_\phi(s_t) + \epsilon$, where $\epsilon$ is zero-mean Gaussian noise with variance $\sigma^2$, i.e., $\epsilon \sim \mathcal{N}(0, \sigma^2)$. Then, the environment returns the reward $r_t$ and transitions to the state $s_{t+1}$. The TD3 algorithm stores the experience $(s_t, a_t, r_t, s_{t+1})$ in the experience replay buffer $D$. After storing the experience, the Q-function parameters $\theta_1$ and $\theta_2$ are updated by gradient descent on the following loss functions:

$$L(\theta_j) = \hat{\mathbb{E}}_{(s,a,r,s')\sim D}\left[ \big( y - Q_{\theta_j}(s, a) \big)^2 \right], \quad j = 1, 2 \qquad (7)$$

where $\hat{\mathbb{E}}_{(s,a,r,s')\sim D}$ denotes the sample expectation over a uniform random mini-batch of size $B$ drawn from the replay buffer $D$, and the target value $y$ is given by

$$y = r + \gamma \min_{j=1,2} Q_{\theta'_j}\big( s', \pi_{\phi'}(s') + \epsilon \big), \quad \epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}^2), -c, c). \qquad (8)$$

Here, for the computation of the target value, the minimum of the two target Q-functions is used to reduce the bias. The procedure of taking actions and performing gradient descent on $\theta_1$ and $\theta_2$ is repeated $d$ times ($d = 2$), and then the policy and target networks are updated.
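Before moving on to the policy update, here is a hedged PyTorch sketch of the target computation in (8); the network names and the `(state, action)` calling convention are illustrative assumptions.

```python
import torch

def td3_target(reward, next_state, target_actor, target_q1, target_q2,
               gamma=0.99, sigma_tilde=0.2, c=0.5):
    """Eq. (8): clipped Gaussian target-policy smoothing noise plus the
    minimum over the two target Q-functions to curb overestimation bias."""
    with torch.no_grad():
        next_action = target_actor(next_state)
        noise = torch.clamp(sigma_tilde * torch.randn_like(next_action), -c, c)
        next_action = next_action + noise
        q_min = torch.min(target_q1(next_state, next_action),
                          target_q2(next_state, next_action))
        return reward + gamma * q_min
```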
The policy parameter $\phi$ is updated by gradient descent on the loss function

$$L(\phi) = -\hat{\mathbb{E}}_{s \sim D}\left[ Q_{\theta_1}(s, \pi_\phi(s)) \right], \qquad (9)$$

and the target network parameters $\theta'_j$ and $\phi'$ are updated as

$$\theta'_j \leftarrow (1-\tau)\theta'_j + \tau\theta_j, \qquad \phi' \leftarrow (1-\tau)\phi' + \tau\phi. \qquad (10)$$

The networks are trained until the number of time steps reaches a predefined maximum.

A.2 THE SOFT ACTOR-CRITIC (SAC) ALGORITHM

The SAC algorithm is an off-policy algorithm comparable to TD3 and yields good performance, especially in environments with high-dimensional action spaces. SAC is a maximum entropy RL algorithm whose objective is the discounted sum of the reward and the entropy of the current policy:

$$\mathbb{E}_{\tau \sim \pi}\left[ \sum_{t=0}^{\infty} \gamma^t \big( r(s_t, a_t) + \alpha \mathcal{H}(\pi(\cdot|s_t)) \big) \right], \qquad (11)$$

where $\alpha$ is a weighting factor that balances the reward against the entropy of the policy. This objective encourages the algorithm to explore more diverse experiences so as to find a better policy. The SAC algorithm has one value function $V_\psi(s)$, two Q-functions $Q_{\theta_j}(s, a)$, $j = 1, 2$, and one stochastic policy $\pi_\phi(\cdot|s)$, parameterized by $\psi$, $\theta_j$, and $\phi$, respectively. It also has a target value function $V_{\psi'}(s)$ for stable convergence. After initialization, at each time step $t$ the algorithm obtains the experience $(s_t, a_t, r_t, s_{t+1})$ by interacting with the environment and stores it in the experience replay buffer $D$. Then, it updates the parameters $\psi$, $\theta_j$, and $\phi$ by gradient descent on the following loss functions:

$$J(\psi) = \hat{\mathbb{E}}_{s \sim D,\, a \sim \pi_\phi(\cdot|s)}\left[ \frac{1}{2} \left\| V_\psi(s) - \bar{Q}(s, a) + \log \pi_\phi(a|s) \right\|_2^2 \right], \qquad (12)$$

$$J(\theta_j) = \hat{\mathbb{E}}_{(s,a,r,s')\sim D}\left[ \frac{1}{2} \big( Q_{\theta_j}(s, a) - r/\alpha - \gamma V_{\psi'}(s') \big)^2 \right], \quad j = 1, 2, \qquad (13)$$

$$J(\phi) = \hat{\mathbb{E}}_{s \sim D,\, a \sim \pi_\phi(\cdot|s)}\left[ \log \pi_\phi(a|s) - \bar{Q}(s, a) \right], \qquad (14)$$

where $\bar{Q}(s, a) = \min\{Q_{\theta_1}(s, a), Q_{\theta_2}(s, a)\}$, and $\hat{\mathbb{E}}_{(s,a,r,s')\sim D}$ is the sample expectation over a uniform random mini-batch of size $B$ drawn from the replay buffer $D$. After updating these parameters, the target value function parameter $\psi'$ is updated as

$$\psi' \leftarrow (1-\tau)\psi' + \tau\psi. \qquad (15)$$

In order to obtain diverse experiences in the initial stage of learning, SAC uses a uniform policy for the initial $T_{\mathrm{initial}}$ time steps and the current policy $\pi_\phi(\cdot|s)$ for the rest of learning.
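The Polyak averaging in (10) and (15) is the same one-liner for any pair of networks; a minimal PyTorch sketch with illustrative names is:

```python
import torch

@torch.no_grad()
def soft_update(target_net, source_net, tau=5e-3):
    """Eqs. (10) and (15): target <- (1 - tau) * target + tau * source."""
    for p_target, p_source in zip(target_net.parameters(), source_net.parameters()):
        p_target.mul_(1.0 - tau).add_(tau * p_source)
```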
APPENDIX B. PSEUDOCODE OF THE IPE-TD3 ALGORITHM

Algorithm 1 The Interactive Parallel Exploration TD3 (IPE-TD3) Algorithm

Require: $N$: number of learners, $T_{\mathrm{initial}}$: initial exploration time steps, $T$: maximum time steps, $M$: best-policy update period, $B$: mini-batch size, $d$: update interval for the policy and target networks.

1: Initialize $\phi^1 = \cdots = \phi^N = \phi^b$ and $\theta^1_j = \cdots = \theta^N_j$, $j = 1, 2$, randomly.
2: Initialize $\beta = 1$, $t = 0$.
3: while $t < T$ do
4:   $t \leftarrow t + 1$ (one time step)
5:   for $i = 1, 2, \cdots, N$ in parallel do
6:     if $t < T_{\mathrm{initial}}$ then
7:       Take a uniform random action $a_t^i$ in environment copy $\mathcal{E}^i$
8:     else
9:       Take the action $a_t^i = \pi^i(s_t^i) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma^2)$, in environment copy $\mathcal{E}^i$
10:     end if
11:     Store the experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ in the shared common experience replay buffer $D$
12:   end for
13:   if $t < T_{\mathrm{initial}}$ then
14:     continue (i.e., go to the beginning of the while loop)
15:   end if
16:   for $i = 1, 2, \cdots, N$ in parallel do
17:     for $k = 1, 2, \cdots, N$ do
18:       Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\ldots,B}$ from $D$
19:       Update $\theta^i_j$, $j = 1, 2$, by gradient descent minimizing $\tilde{L}(\theta^i_j)$ in (5) with $\mathcal{B}$
20:       if $k \equiv 0 \pmod{d}$ then
21:         Update $\phi^i$ by gradient descent minimizing $\tilde{L}(\phi^i)$ in (6) with $\mathcal{B}$
22:         Update the target networks: $(\theta^i_j)' \leftarrow (1-\tau)(\theta^i_j)' + \tau\theta^i_j$, $(\phi^i)' \leftarrow (1-\tau)(\phi^i)' + \tau\phi^i$
23:       end if
24:     end for
25:   end for
26:   if $t \equiv 0 \pmod{M}$ then
27:     Select the best learner $b$
28:     Adapt $\beta$ with (2)
29:   end if
30: end while

APPENDIX C. PSEUDOCODE OF THE IPE-SAC ALGORITHM

Algorithm 2 The Interactive Parallel Exploration SAC (IPE-SAC) Algorithm

Require: $N$: number of learners, $T_{\mathrm{initial}}$: initial exploration time steps, $T$: maximum time steps, $M$: best-policy update period, $B$: mini-batch size.

1: Initialize $\psi^1 = \cdots = \psi^N$, $\phi^1 = \cdots = \phi^N = \phi^b$, and $\theta^1_j = \cdots = \theta^N_j$, $j = 1, 2$, randomly.
2: Initialize $\beta = 1$, $t = 0$.
3: while $t < T$ do
4:   $t \leftarrow t + 1$ (one time step)
5:   for $i = 1, 2, \cdots, N$ in parallel do
6:     if $t < T_{\mathrm{initial}}$ then
7:       Take a uniform random action $a_t^i$ in environment copy $\mathcal{E}^i$
8:     else
9:       Take an action $a_t^i \sim \pi^i(\cdot|s_t^i)$ in environment copy $\mathcal{E}^i$
10:     end if
11:     Store the experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ in the shared common experience replay buffer $D$
12:   end for
13:   if $t < T_{\mathrm{initial}}$ then
14:     continue (i.e., go to the beginning of the while loop)
15:   end if
16:   for $i = 1, 2, \cdots, N$ in parallel do
17:     for $k = 1, 2, \cdots, N$ do
18:       Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\ldots,B}$ from $D$
19:       Update $\psi^i$, $\theta^i_j$, and $\phi^i$ by gradient descent minimizing (16), (17), and (18) with $\mathcal{B}$, respectively
20:       Update the target parameters: $(\psi^i)' \leftarrow (1-\tau)(\psi^i)' + \tau\psi^i$
21:     end for
22:   end for
23:   if $t \equiv 0 \pmod{M}$ then
24:     Select the best learner $b$
25:     Update the best policy parameter $\phi^b$
26:     Adapt $\beta$ with (2)
27:   end if
28: end while

In IPE-SAC, each learner has its own parameters $\psi^i$, $\theta^i_1$, $\theta^i_2$, and $\phi^i$ for its value function, two Q-functions, and policy. Each learner also has $(\psi^i)'$, the parameter of its target value function. For the distance measure between two policies, we use the mean square difference of the mean actions of the Gaussian policies, given by $D(\pi(\cdot|s), \tilde{\pi}(\cdot|s)) = \frac{1}{2}\left\| \mathrm{mean}\{\pi(\cdot|s)\} - \mathrm{mean}\{\tilde{\pi}(\cdot|s)\} \right\|_2^2$. The $i$-th learner updates the parameters $\psi^i$, $\theta^i_1$, $\theta^i_2$, and $\phi^i$ every time step by minimizing

$$\tilde{L}(\psi^i) = \hat{\mathbb{E}}_{s \sim D,\, a \sim \pi_{\phi^i}(\cdot|s)}\left[ \frac{1}{2} \left\| V_{\psi^i}(s) - \bar{Q}^i(s, a) + \log \pi_{\phi^i}(a|s) \right\|_2^2 \right] \qquad (16)$$

$$\tilde{L}(\theta^i_j) = \hat{\mathbb{E}}_{(s,a,r,s')\sim D}\left[ \frac{1}{2} \big( Q_{\theta^i_j}(s, a) - r/\alpha - \gamma V_{(\psi^i)'}(s') \big)^2 \right], \quad j = 1, 2 \qquad (17)$$

$$\tilde{L}(\phi^i) = \hat{\mathbb{E}}_{s \sim D,\, a \sim \pi_{\phi^i}(\cdot|s)}\left[ \log \pi_{\phi^i}(a|s) - \bar{Q}^i(s, a) + \mathbb{1}_{\{i \neq b\}} \frac{\beta}{2} \left\| \mathrm{mean}\{\pi_{\phi^i}(\cdot|s)\} - \mathrm{mean}\{\pi_{\phi^b}(\cdot|s)\} \right\|_2^2 \right] \qquad (18)$$

where $\bar{Q}^i(s, a) = \min\{ Q_{\theta^i_1}(s, a), Q_{\theta^i_2}(s, a) \}$. After updating these parameters, each learner updates its target value function parameter. With these loss functions, the rest of the procedure is the same as the general IPE procedure described in Section 3. The pseudocode of the IPE-SAC algorithm is shown above.
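To tie (18) back to code, here is a hedged PyTorch sketch of the IPE-SAC policy loss for one learner; `log_prob`, `q_min`, and the policy-mean tensors are assumed to be computed elsewhere (e.g., via the reparameterization trick), and all names are illustrative.

```python
import torch

def ipe_sac_policy_loss(log_prob, q_min, mean_i, mean_b, beta, is_best):
    """Eq. (18): the SAC policy loss log pi(a|s) - Qbar(s, a) plus, for
    non-best learners, a soft penalty on the squared distance between the
    Gaussian policy means of learner i and the best learner b."""
    loss = (log_prob - q_min).mean()
    if not is_best:
        penalty = ((mean_i - mean_b.detach()) ** 2).sum(dim=-1).mean()
        loss = loss + 0.5 * beta * penalty
    return loss
```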
APPENDIX D. RESULTS OF IPE-SAC ON HUMANOID (RLLAB)

As mentioned above, IPE is general in that it can be applied to other off-policy algorithms. Here, we provide numerical results on IPE-SAC (shown in Appendix C), constructed by combining IPE with SAC. The experiment was performed on the Humanoid (rllab) task (Duan et al. (2016)), which requires more exploration. We compared IPE-SAC with SAC and with multi-learner reloading SAC (Re-SAC), which periodically copies the best learner's parameters to the other learners.

D.1 PARAMETER SETTING

SAC The networks for the state-value function, the two Q-functions, and the policy had 2 hidden layers of size 256. The activation functions of the hidden layers and the last layers were ReLU and linear, respectively. We used the Adam optimizer with learning rate $3 \times 10^{-4}$, discount factor $\gamma = 0.99$, and target smoothing factor $\tau = 5 \times 10^{-3}$. The algorithm was trained with random mini-batches of size $B = 256$ from an experience replay buffer of maximum size $10^6$. The reward scale for updating the Q-functions was 10 for the Humanoid (rllab) environment. The number of initial exploration timesteps $T_{\mathrm{initial}}$ was set to 1000.

IPE-SAC Additional parameters for IPE-SAC are as follows. We used $N = 4$ learners, period $M = 500$ for updating the best policy and $\beta$, and $E_r = 10$ recent episodes for determining the best learner $b$. The parameters $d_{\mathrm{search}}$ and $\rho$ for the exploration range were 0.01 and 2, respectively. We used $T_{\mathrm{initial}} = 250$ initial exploration timesteps.

D.2 PERFORMANCE ON HUMANOID (RLLAB)

The learning curve on Humanoid (rllab) is shown in Fig. 5.¹ IPE-SAC outperforms the original SAC and Re-SAC. This result shows the promise of IPE for tasks requiring more exploration.

¹The simulation is still running, and we will update the graph when the simulation is finished.

APPENDIX E. PSEUDOCODE OF THE IPE-DQN ALGORITHM

Algorithm 3 The Interactive Parallel Exploration DQN (IPE-DQN) Algorithm

Require: $N$: number of learners, $T_{\mathrm{initial}}$: initial exploration time steps, $T$: maximum time steps, $M$: best-policy update period, $B$: mini-batch size, $f$: update interval for the Q-functions, $d$: update interval for the target Q-functions.

1: Initialize $\theta^1 = \cdots = \theta^N = \theta^b$ randomly.
2: Initialize $\beta = 1$, $t = 0$.
3: while $t < T$ do
4:   $t \leftarrow t + 1$ (one time step)
5:   for $i = 1, 2, \cdots, N$ in parallel do
6:     if $t < T_{\mathrm{initial}}$ then
7:       Take a uniform random action $a_t^i$ in environment copy $\mathcal{E}^i$
8:     else
9:       Take the action $a_t^i = \arg\max_{a \in \mathcal{A}} \{Q_{\theta^i}(s_t^i, a)\}$ w.p. $1 - \varepsilon$, or a uniform random action $a_t^i$ w.p. $\varepsilon$, in environment copy $\mathcal{E}^i$
10:     end if
11:     Store the experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ in the shared common experience replay buffer $D$
12:   end for
13:   if $t < T_{\mathrm{initial}}$ then
14:     continue (i.e., go to the beginning of the while loop)
15:   end if
16:   for $i = 1, 2, \cdots, N$ in parallel do
17:     for $k = 1, 2, \cdots, N$ do
18:       if $k \equiv 0 \pmod{f}$ then
19:         Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\ldots,B}$ from $D$
20:         Update $\theta^i$ by gradient descent minimizing (19) with $\mathcal{B}$
21:       end if
22:     end for
23:   end for
24:   if $t \equiv 0 \pmod{d}$ then
25:     for $i = 1, 2, \cdots, N$ in parallel do
26:       Update $(\theta^i)' \leftarrow \theta^i$
27:     end for
28:   end if
29:   if $t \equiv 0 \pmod{M}$ then
30:     Select the best learner $b$
31:     Update the best policy parameter $\theta^b$
32:     Adapt $\beta$ with (2)
33:   end if
34: end while

IPE can also be applied to off-policy algorithms with discrete action spaces as well as continuous action spaces. Thus, we applied IPE to DQN to construct IPE-DQN.
In IPE-DQN, each learner has its own Q-function parameter $\theta^i$ and target Q-function parameter $(\theta^i)'$. We define the distance between two Q-functions $Q(s, a)$ and $\tilde{Q}(s, a)$ as

$$D(Q(s, \cdot), \tilde{Q}(s, \cdot)) = \mathrm{KL}\big( \mathrm{softmax}(Q(s, \cdot)) \,\|\, \mathrm{softmax}(\tilde{Q}(s, \cdot)) \big).$$

We used the Q-function parameter $\theta^b$ of the best learner as the reference parameter, which plays the role of $\phi^b$ in (1) and (3). The $i$-th learner updates the parameter $\theta^i$ every $f$ timesteps by minimizing

$$\tilde{L}(\theta^i) = \hat{\mathbb{E}}_{(s,a,r,s')\sim D}\left[ \frac{1}{2} \left\| Q_{\theta^i}(s, a) - y \right\|_2^2 + \mathbb{1}_{\{i \neq b\}}\, \beta\, \mathrm{KL}\big( \mathrm{softmax}(Q_{\theta^i}(s, \cdot)) \,\|\, \mathrm{softmax}(Q_{\theta^b}(s, \cdot)) \big) \right] \qquad (19)$$

where $y = r + \gamma Q_{(\theta^i)'}\big( s', \arg\max_{a' \in \mathcal{A}} \{Q_{\theta^i}(s', a')\} \big)$. With this loss function and the reference parameter, the rest of the procedure is the same as the general IPE procedure described in Section 3. The pseudocode of the IPE-DQN algorithm is shown above.
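A hedged PyTorch sketch of the IPE-DQN loss (19) follows; `q_values_i` and `q_values_b` are batches of Q-values over all actions from the $i$-th and best learners' networks, and all names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ipe_dqn_loss(q_values_i, q_values_b, actions, targets, beta, is_best):
    """Eq. (19): squared TD error plus, for non-best learners, the KL
    divergence KL(softmax(Q_i(s, .)) || softmax(Q_b(s, .)))."""
    q_sa = q_values_i.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = 0.5 * (q_sa - targets).pow(2).mean()
    if not is_best:
        p_i = F.softmax(q_values_i, dim=1)
        log_p_i = F.log_softmax(q_values_i, dim=1)
        log_p_b = F.log_softmax(q_values_b.detach(), dim=1)
        kl = (p_i * (log_p_i - log_p_b)).sum(dim=1).mean()
        loss = loss + beta * kl
    return loss
```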
1. What is the main contribution of the paper, and how does it improve upon existing reinforcement learning methods? 2. What are the strengths and weaknesses of the proposed method, Interactive Parallel Exploration (IPE)? 3. How does IPE compare to other ensemble-based algorithms, such as bootstrapped DQN? 4. What are some potential applications of IPE in real-world scenarios? 5. Are there any limitations or areas for improvement in the paper's experimental design or analysis?
Review
Review The paper present interactive parallel exploration (IPE), a reinforcement learning method based on an ensemble of policies and a shared experience pool. Periodically, the highest-return achieving policy is selected, towards which the other policies are updated in a sense of some distance metric. IPE is applicable to any off-policy reinforcement learning algorithm. The experiments demonstrate some improvement over TD3 on four MuJoCo benchmark tasks. The method is motivated heuristically, and and it provides some benefits in terms of sample efficiency and lower variance between training trials. However, it is hard to justify the increased algorithmic complexity and additional hyperparameters just based on the presented results. The paper motivates IPE as an add-on that can increase the performance of any off-policy RL algorithm. As such, I would like to see IPE being applied to other algorithms (e.g., SAC or DQN) as a proof of generalizability, and compared to other similar ensemble based algorithms (e.g., bootstrapped DQN). While the improvement in the sample complexity is quite marginal, what I find the most interesting is how IPE-TD3 reduces variance between training trials compared to vanilla TD3. Convergence to bad local optimum can be a big problem, and IPE could help mitigate it. I would suggest including environments where local optima can be a big problem, for example HumanoidStandup, or any sparse reward task. Also the paper does not include ablations, which, given the heuristic nature of the proposed method, seems important.
ICLR
Title Interactive Parallel Exploration for Reinforcement Learning in Continuous Action Spaces Abstract In this paper, a new interactive parallel learning scheme is proposed to enhance the performance of off-policy continuous-action reinforcement learning. In the proposed interactive parallel learning scheme, multiple identical learners with their own value-functions and policies share a common experience replay buffer, and search a good policy in collaboration with the guidance of the best policy information. The information of the best policy is fused in a soft manner by constructing an augmented loss function for policy update to enlarge the overall search space by the multiple learners. The guidance by the previous best policy and the enlarged search space by the proposed interactive parallel learning scheme enable faster and better policy search in the policy parameter space. Working algorithms are constructed by applying the proposed interactive parallel learning scheme to several off-policy reinforcement learning algorithms such as the twin delayed deep deterministic (TD3) policy gradient algorithm and the soft actor-critic (SAC) algorithm, and numerical results show that the constructed IPE-enhanced algorithms outperform most of the current state-of-the-art reinforcement learning algorithms for continuous action control. 1 INTRODUCTION Reinforcement learning (RL) for continuous action control is an active research field. In RL, an agent learns a policy through interaction with the environment to maximize the cumulative reward. One of the key issues in RL is the trade-off between exploitation and exploration. Exploitation is to make a best decision based on the already collected information, whereas exploration is to collect more new information about the environment. The balance between the two is important for good RL algorithms. For example, DQN (Mnih et al. (2015)) balances exploitation and exploration by taking actions based on the ǫ-greedy approach. Deep deterministic policy gradient (DDPG) (Lillicrap et al. (2015)) and twin delayed deep deterministic (TD3) (Fujimoto et al. (2018)) policy gradient algorithms promote exploration by adding Ornstein-Uhlenbeck noise and Gaussian noise to the best decision action, respectively. Soft actor-critic (SAC) (Haarnoja et al. (2018)) performs balancing by using a maximum entropy objective. However, most of the previous works focus on exploration to obtain unobserved states or actions. In this paper, we consider exploration in the policy parameter space by using parallel identical learners for the same environment. By having multiple identical learners for the same environment, we can have increased search capability for a better policy. Parallelism in learning has been investigated widely in distributed RL (Nair et al. (2015), Mnih et al. (2016), Horgan et al. (2018), Barth-Maron et al. (2018), Espeholt et al. (2018)), evolutional strategies (Salimans et al. (2017), Choromanski et al. (2018)), and recently in population based training (PBT) (Jaderberg et al. (2017), Jaderberg et al. (2018), Conti et al. (2017)) for faster and better search for parameters and/or hyperparameters. In this paper, we also apply parallelism to RL in order to enhance the learning performance but in a slightly different way as compared to the previous methods. 
The proposed algorithm is intended for any off-policy RL algorithms and is composed of a chief, N environment copies of the same environment, and N identical learners with a shared common experience replay buffer and a common base algorithm, as shown in Fig. 1. Each learner has its own value function(s) and policy, and trains its own policy by interacting with its own environment copy with some additional interaction with the chief, as shown in Fig. 1. At each time step, each learner takes an action to its environment copy by using its own policy, stores its experience to the shared common experience replay buffer. Then, each learner updates its value function parameter and policy parameter by drawing mini-batches from the shared common replay buffer by minimizing its own value loss function and policy loss function, respectively. One way to implement parallel learning under the above setup is to run N fully independent parallel learning without interaction among the learners except sharing their experiences until the end of time steps and to choose the policy from the learner with the maximum accumulated reward at the end for future use. We will refer to this method as the experience-sharing-only method. However, this method ignores possible benefit from mutual interaction among the learners during training. In order to harness the benefit of mutual interaction among the learners in parallel learning, we exploit the information from the best learner among all learners periodically during training like in PBT (Jaderberg et al. (2017), Jaderberg et al. (2018)). Suppose that the value and policy parameters of each learner are initialized and learning is performed as described above forM time steps. At the end of M time steps, we can determine who is the best learner based on the average of the most recent Er episodic rewards for each learner. Then, the policy parameter information of the best learner can be used to enhance the learning of other learners for the next M time steps. This information can help learners stuck in local minima escape from the local minima and guide other learners for better direction. One simple way to exploit this best policy parameter information is that we reset the policy parameter of each learner with the policy parameter of the best learner at the beginning of the next M time steps, make each learner perform learning from this initial point in the policy parameter space for the next M time steps, select the best learner again at the end of the next M time steps, and repeat this procedure every M time steps in a similar way that PBT (Jaderberg et al. (2017)) copies the best learner’s parameters and hyperparameters to other learners. We will refer to this method as the reloading method in this paper. However, this reloading method has the problem that the search area covered by all N searching policies collapses to one point at the time of parameter copying and thus the search area can be narrow around the previous best policy point. In order to overcome such disadvantage, instead of resetting the policy parameter with the best policy parameter every M time steps, we here propose using the policy parameter information of the best learner in a soft manner to enhance the performance of the overall parallel learning. In the proposed scheme, the shared best policy information is used only to guide the policies of other learners. The policy of each learner is updated by improving the performance around a certain distance from the shared guiding policy. 
The chief periodically determines the best policy among the policies of all learners and distributes the best policy parameter to all learners so that the learners search for better policies around the previous best policy. The chief also enforces that the N searching policies are spread in the policy parameter space with a given distance from the previous best policy point so that the search area in the policy space by all N learners maintains a wide area and does not collapse into a narrow region. The proposed interactive parallel exploration (IPE) learning method can be applied to any off-policy RL algorithms and implementation is easy. Furthermore, the proposed method can be extended directly to distributed or multi-agent RL systems. We apply our IPE scheme to the TD3 algorithm and the SAC algorithm, which are state-of-the-art off-policy algorithms, as our base algorithms, and the new algorithms are named IPE-TD3 and IPE-SAC algorithms, respectively. Numerical result shows that the IPE-enhanced algorithms outperform the baseline algorithms both in the speed of convergence and in the final steady-state performance. The gain by IPE 2 BACKGROUND AND RELATED WORKS 2.1 DISTRIBUTED RL Distributed RL is an efficient way that takes advantage of parallelism to achieve fast training for large complex tasks (Nair et al. (2015)). Most of the works in distributed RL assume a common structure composed of multiple actors interacting with multiple copies of the same environment and a central system which stores and optimizes the common Q-function parameter or the policy parameter shared by all actors. The focus of distributed RL is to optimize the Q-function parameter or the policy parameter fast by generating more samples for the same wall clock time with multiple actors. In order to achieve this goal, researchers investigated various techniques for distributed RL, e.g., asynchronous update of parameters (Mnih et al. (2016), Babaeizadeh et al. (2017)), sharing an experience replay buffer (Horgan et al. (2018)), GPU-based parallel computation (Babaeizadeh et al. (2017), Clemente et al. (2017)), GPU-based simulation (Liang et al. (2018)) and V-trace in case of on-policy algorithms (Espeholt et al. (2018)). Distributed RL yields significant performance improvement in terms of the wall clock time but it does not consider the possible enhancement by interaction among multiple learners like in IPE and PBT. The proposed IPE uses a similar structure to that in (Nair et al. (2015), Espeholt et al. (2018)): that is, IPE is composed of multiple learners and a central system called chief. The difference is that each learner has its own Q or value function parameter and policy parameter and optimizes the parameters in parallel with interactions. 2.2 POPULATION BASED TRAINING Parallelism is also exploited in finding optimal parameters and hyperparameters of training algorithms for neural networks in PBT (Jaderberg et al. (2017), Jaderberg et al. (2018), Conti et al. (2017)). PBT (Jaderberg et al. (2017)) first chooses multiple sets of hyperparameters and parameters for a common base algorithm, and runs the base algorithm separatively in parallel at multiple learners to train their neural networks using those parameters and hyperparameters. Each learner updates the neural network parameters by perturbing the assigned hyperparameters. 
During training, in principle PBT evaluates the performance of multiple learners periodically, and selects the best performing hyperparameters, and then distributes the best performing hyperparameters and the corresponding parameters to other learners, although implementation details can be changed. Recently, PBT is applied to competitive multi-agent RL (Jaderberg et al. (2018)) and novelty search algorithms (Conti et al. (2017)). Although PBT is mainly developed to tune hyperparamters, the philosophy of PBT can be applied to find optimal parameters for given hyperparameters by multiple learners. In this case, multiple learners update their parameters in parallel, their performance is measured periodically, the parameters of the best performing learner are copied to other learners, other learners independently update their parameters from the copied parameters as their new initialization, and this process is repeated. The proposed IPE is similar to PBT in the sense that it exploits the parameters of the best performing learner among multiple parallel learners. However, IPE is different from the PBT-derived method in the way how IPE uses the parameters of the best learner. In the PBT-derived method, the parameters of the best learner are copied to other learners and other learners’ parameters are reset to the parameters of the best performing learner. Then, the parameters of each learner are updated by stochastic gradient descent (SGD). However, in IPE the parameters of the best performing learner are not copied but used in a soft manner as a guiding direction. Copying means that the parameters of all learners collapse to a single point in the parameter space. Furthermore, unlike PBT, IPE uses a common experience replay buffer to store all experiences from multiple learners with different parameters to exploit the diverse experiences of multiple learners with different parameters. As mentioned in Section 1, we refer to the PBT-derived method with a common experience replay buffer as the reloading method of which performance will be given in ablation study in Section 4. Although IPE is considered only for parallel parameter search in this paper, combining the soft way of using the parameters of the best performing learner with hyperparameter search is an interesting future work. 2.3 GUIDED POLICY SEARCH Our IPE method is also related to guided policy search (Levine & Koltun (2013), Levine et al. (2016), Teh et al. (2017), Ghosh et al. (2018)). Recently, Teh et al. (2017) proposed a guided policy search method for joint training of multiple tasks in which a common policy is used to guide local policies and the common policy is distilled from the local policies. Here, the local policies’ parameters are updated to maximize the performance and minimize the KL divergence between the local policies and the common distilled policy. The proposed IPE is related to guided policy search in the sense that multiple policies are guided by a common policy. However, the difference is that the goal of IPE is not learning multiple tasks but learning optimal parameters for a common task as in PBT and hence the guiding policy is not distilled from multiple local policies but chosen as the best performing policy among multiple learners. 2.4 EXPLORATION Improving exploration has been one of the key issues in RL and many different ways have been developed to improve exploration through maximum entropy objectives (Haarnoja et al. (2017; 2018)), noise in networks (Fortunato et al. (2018); Plappert et al. 
(2018)), and intrinsically motivated approaches (Bellemare et al. (2016); Ostrovski et al. (2017); Pathak et al. (2017); Achiam & Sastry (2017); Zheng et al. (2018)). The proposed IPE also enhances exploration. Specifically, IPE uses exploitation for exploration. Exploitation for exploration has been considered in the previous works (White & Sofge (1992), Oh et al. (2018)). In particular, Oh et al. (2018) exploited past good experiences to explore the sample space, whereas IPE exploit the current good policy among multiple policies to explore the policy space. 2.5 THE SET UP: PARALLEL LEARNING FOR A COMMON ENVIRONMENT The considered parallel learning setting consists of the environment E andN policies {π1, · · · , πN}. The environment E is described as a Markov decision process (MDP) defined by the tuple 〈S,A, T , r〉, where S is the state space, A is the action space, T : S × A × S → [0, 1] is the state transition probability, and r : S × A → R is the reward function. There exist N copies {E1, · · · , EN} of the environment E , i.e., E1 = · · · = EN = E , and the N environment copies may have different random initial seeds. The policy πi interacts with its corresponding environment copy E i and builds up its trajectory {(sit, a i t, r i t), t = 1, 2, · · · } for each i = 1, · · · , N . At time step t, the environment copy E i has a state sit ∈ S . The policy π i interacts with the environment copy E i by taking an action ait according to π i given the current state sit. Then, the environment copy E i yields the reward rit = r(s i t, a i t) and makes transition to the next state s i t+1 according to T . In this paper, in order to account for the actual amount of interaction with the environment, we define environment steps as the total number of interactions by all N parallel policies with all N environment copies. Suppose that all N policies generate their trajectories simultaneously in parallel, and suppose that M time steps have elapsed. Then, although the number of elapsed time steps is M , the number of environment steps is NM . 3 INTERACTIVE PARALLEL POLICY EXPLORATION We now present the proposed IPE scheme with the parallel environment learning setting described in Section 2.5, and the overall structure is described in Fig. 1. We have N identical parallel learners with a shared common experience replay buffer D, and all N identical learners employ a common base algorithm, which can be any off-policy RL algorithm. The execution is in parallel. The i-th learner has its own environment E i, which is a copy of the common environment E , and has its own value function (e.g., Q-function) parameter θi and policy parameter φi. The i-th learner interacts with the environment copy E i with some additional interaction with the chief, as shown in Fig. 1. At each time step, the i-th learner performs an action ait to its environment copy E i by using its own policy πφi , stores its experience (s i t, a i t, r i t, s i t+1) to the shared common experience replay buffer D for all i = 1, 2, · · · , N . Note that one time step corresponds to N environment steps. Then, at each time step, each learner updates its value function parameter and policy parameter for N times by drawing N mini-batches of size B from the shared common replay buffer D by minimizing its own value loss function and policy loss function, respectively. The N time updates for each learner for each time step is to exploit the samples provided by other N − 1 learners stored in the shared common replay buffer. 
In order to harness the benefit of mutual interaction among the learners in parallel learning, we exploit the information from the best learner periodically during training like in PBT (Jaderberg et al. (2017)). Suppose that the Q-function parameter and the policy parameter of each learner are initialized and learning is performed as described above for M time steps. At the end of the M time steps, we determine who is the best learner based on the average of the most recentEr episodic rewards for each learner. Let the index of the best learner be b. Then, the policy parameter information φb of the best learner can be used to enhance the learning of other learners for the next M time steps. Here, instead of copying φb to other learners, we propose using the information of φb in a soft manner to enhance the performance of the overall parallel learning. That is, during the next M time steps, whereas we set the loss function L̃(θi) for the Q-function to be the same as the loss L(θi) of the base algorithm, we set the loss function L̃(φi) for the policy parameter φi of the i-th learner as the following augmented version: L̃(φi) = L(φi) + 1{i6=b}βÊs∼D [ D(πφi , πφb) ] (1) where L(φi) is the policy loss function of the base algorithm, 1{·} denotes the indicator function, β(> 0) is a weighting factor, D(π, π′) be some distance measure between two policies π and π′, and Ês∼D denotes the sample expectation based on mini-batch drawn randomly from the experience replay buffer D. The augmented loss function L̃(φi) in (1) is composed of two terms L(φi) and 1{i6=b}βÊs∼D [ D(πφi , πφb) ] . Thus, for the non-best learners in the previous M time steps, the gradient of L̃(φi) is the mixture of two directions: one is to maximize the return by itself and the other is to follow the previously best learner’s policy. The second term in the right-hand side (RHS) of (1) guides non-best learners towards a good direction in addition to each leaner’s self search. 3.1 DESIGN OF THE WEIGHTING FACTOR β In (1), the weighting factor β is common to all N learners and should be determined judiciously to balance between improving its performance by each learner itself and going towards the previous best policy among the N learners. We adopt an adaptive method to determine the value of β as follows: β = { β ← 2β if D̂best ≥ max{ρD̂ b change, dsearch} × 1.5 β ← β/2 if D̂best < max{ρD̂bchange, dsearch}/1.5 (2) where D̂best is the estimated distance between πφi and πφb averaged over allN−1 non-best learners, and D̂bchange is the estimated distance between πφbupdated (i.e., the policy of the current best learner at the end of the current M time steps) and πφb (i.e, the policy of the current best learner at the end of the previous M time steps), given respectively by D̂best = 1 N − 1 ∑ i∈I−b Ês∼D [ D(πφi , πφb) ] and D̂bchange = Ês∼D [ D(πφb updated , πφb) ] . (3) Here, I−b = {1, . . . , N} \ {b}, and dsearch and ρ are predetermined hyperparameters. This adaptation method is similar to that used in proximal policy optimization (PPO) (Schulman et al. (2017)). The update of β is done every M time steps and the updated β is used for the next M time steps. First, suppose that we do not have the first term ρD̂bchange in the maximum of the condition in (2). Then, when the estimated average distance D̂best from the best policy to the remaining policies is smaller than dsearch/1.5, the parameter β is decreased by half. 
Hence, the movement in the gradient direction of the second term in the RHS of (1) is diminished and the independent movement into the optimization direction for L(φi) becomes more important. So, each policy gradually diverges from the previous best policy serving as the reference point due to internal exploration mechanism such as added noise in action. On the other hand, when D̂best is larger than 1.5dsearch, the parameter β increases by factor two and the movement towards the previous best policy becomes more important. As time steps elapse, β is settled down so that D̂best is around dsearch. Hence, the proposed IPE scheme searches a wide area with rough radius dsearch around the best policy in the policy parameter space, as shown in Fig. 2(a). Furthermore, with the first term ρD̂bchange in the maximum of the condition in (2), we can control the speed of tracking the best policy. D̂bchange measures the speed of change in the best policy parameter. When the best policy parameter change scaled by ρ, i.e., ρD̂bchange is less than dsearch, the term is invisible by the maximum operation in (2). On the other hand, when ρD̂bchange > dsearch, the term is active and it means that the best policy parameter changes fast. Thus, the tracking speed should be controlled. If D̂best > ρD̂ b change, i.e., the distance from πφi to πφb is larger than ρ times the distance from πφb updated to πφb , then this means that the speed of tracking the best policy is slow. Hence, we increase β by factor two. Otherwise, we decrease β by half. When the search for the current M time steps is finished, the new best learner is selected and a new search for a wide area around the new best learner’s policy πφb is performed, as illustrated in Fig. 2(b). The policy parameter information φb of the best learner can be changed before the next best learner selection. Now, the overall procedure for the proposed IPE scheme is explained with the diagram in Fig. 1. The value function parameter and policy parameter of each learner are initialized. The chief distributes the parameter β and the reference policy parameter φb, which is the policy information of the best learner over the previous M time steps, to all learners. At each time step, each learner interacts with its own environment copy by taking its action and receiving the reward and the next state, and stores its experience to the shared common replay buffer D. Then, the i-th learner updates its value function parameter θi by minimizing its own value loss function L̃(θi) which is the same as that of the base algorithm, and updates the policy parameter φi by minimizing the augmented loss function L̃(φi) in (1) for N times by drawing N mini-batches from the shared common replay buffer D. Whenever an episode ends for a learner, the learner reports the episodic reward to the chief. The i-th learner reports Ês∼D [ D(πφi , πφb) ] to the chief for computation of D̂best in (3). At every M time steps, the chief updates β according to (2), determines the best learner over the most recent M time steps based on the collected episodic rewards from each learner. Once the best learner is determined, the chief obtains the policy parameter information φb from the determined best learner, and distributes the new β and the reference policy parameter φb to all N learners. This procedure repeats until the time steps reaches the predefined maximum. 
When the parallel learning based IPE reaches a steady state, we can choose any of the N learners’ policies and use the chosen policy for the environment E in future since it is expected that at the steady-state the performance of all N policies is more or less similar due to their distance property. 3.2 IPE-ENHANCED ALGORITHMS The proposed IPE method can be applied to any off-policy RL algorithms regardless of whether the base RL algorithms have continuous actions or discrete actions. Here, we consider the application of IPE to the TD3 algorithm as the base algorithm and the constructed algorithm is named the IPETD3 algorithm. The details of baseline TD3 are explained in Appendix A. With TD3 as the base algorithm, each learner has its own parameters θi1, θ i 2, and φ i for its two Q-functions and policy. Furthermore, it has (θi1) ′, (θi2) ′, and (φi)′ which are the parameters of the corresponding target networks. For the distance measure between two policies, we use the mean square difference given by D(π(s), π̃(s)) = 1 2 ‖π(s)− π̃(s)‖22 . (4) For the i-th learner, as in TD3, the parameters θij , j = 1, 2 are updated every time step by minimizing L̃(θij) = Ê(s,a,r,s′)∼D [ (y −Qθi j (s, a))2 ] (5) where y = r + γminj=1,2Q(θi j )′(s ′, π(φi)′(s ′) + ǫ), ǫ ∼ clip(N (0, σ̃2),−c, c). The parameter φi is updated every d time steps by minimizing the following augmented loss function: L̃(φi) = Ês∼D [ −Qθi 1 (s, πφi(s)) + 1{i6=b} β 2 ∥ ∥πφi(s)− πφb(s) ∥ ∥ 2 2 ] . (6) For the first Tinitial timesteps for initial exploration we use a random policy and do not update all policies over the initial exploration period. With these loss functions, the reference policy, and the initial exploration policy, all procedure is the same as the general IPE procedure described previously. The pseudocode of the IPE-TD3 algorithm is provided in Appendix B. The application of IPE to other algorithms such as SAC and DQN is also provided in Appendices. 4 EXPERIMENTS In this section, we provide the numerical results on the performance of the proposed IPE-TD3 and current state-of-the-art on-policy and off-policy baseline algorithms on several MuJoCo environments (Todorov et al. (2012)). The baseline algorithms are Proximal Policy Optimization (PPO) (Schulman et al. (2017)), Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. (2017)), Soft Q-learning (SQL) (Haarnoja et al. (2017)), Soft Actor-Critic (SAC) (Haarnoja et al. (2018)), and TD3 (Fujimoto et al. (2018)). More numerical result on IPE applied to SAC is provided in Appendices. 4.1 PARAMETER SETTING All hyperparameters we used for evaluation are the same as those in the original papers (Schulman et al. (2017); Haarnoja et al. (2018); Fujimoto et al. (2018)). Here, we provide the hyperparameters of TD3 and IPE-TD3 only. TD3 The networks for two Q-functions and the policy have 2 hidden layers. The first and second layers have sizes 400 and 300, respectively. The non-linearity function of the hidden layers is ReLU, and the activation functions of the last layers of the Q-functions and the policy are linear and hyperbolic tangent, respectively. We used the Adam optimizer with learning rate 10−3, discount factor γ = 0.99, target smoothing factor τ = 5 × 10−3, the period d = 2 for updating the policy. The experience replay buffer size is 106, and the mini-batch size B is 100. The standard deviation for exploration noise σ and target noise σ̃ are 0.1 and 0.2, respectively, and the noise clipping factor c is 0.5. 
4 EXPERIMENTS

In this section, we provide numerical results on the performance of the proposed IPE-TD3 and current state-of-the-art on-policy and off-policy baseline algorithms on several MuJoCo environments (Todorov et al. (2012)). The baseline algorithms are Proximal Policy Optimization (PPO) (Schulman et al. (2017)), Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. (2017)), Soft Q-learning (SQL) (Haarnoja et al. (2017)), Soft Actor-Critic (SAC) (Haarnoja et al. (2018)), and TD3 (Fujimoto et al. (2018)). More numerical results on IPE applied to SAC are provided in the appendices.

4.1 PARAMETER SETTING

All hyperparameters used for evaluation are the same as those in the original papers (Schulman et al. (2017); Haarnoja et al. (2018); Fujimoto et al. (2018)). Here, we provide the hyperparameters of TD3 and IPE-TD3 only.

TD3 The networks for the two Q-functions and the policy have 2 hidden layers, of sizes 400 and 300, respectively. The activation function of the hidden layers is ReLU, and the activation functions of the last layers of the Q-functions and the policy are linear and hyperbolic tangent, respectively. We used the Adam optimizer with learning rate 10^-3, discount factor γ = 0.99, target smoothing factor τ = 5 × 10^-3, and policy update period d = 2. The experience replay buffer size is 10^6, and the mini-batch size B is 100. The standard deviations of the exploration noise σ and the target noise σ̃ are 0.1 and 0.2, respectively, and the noise clipping factor c is 0.5.

IPE-TD3 In addition to the parameters for TD3, we used N = 4 learners, a period M = 250 for updating the best policy and β, and E_r = 10 recent episodes for determining the best learner b. The exploration-range parameters d_search and ρ are 0.04 and 2, respectively. The number of initial exploration time steps T_initial is set to 250 for Hopper-v1 and Walker2d-v1 and to 2500 for HalfCheetah-v1 and Ant-v1.

4.2 COMPARISON TO BASELINES

In order to have a sample-wise fair comparison among the considered algorithms, we measure performance with respect to environment steps (not time steps), defined as the total number of interactions with the environment by the agent. This comparison makes sense because equal environment steps means that all algorithms have used the same number of samples obtained from the environment. Performance is obtained with an evaluation method similar to those in (Haarnoja et al. (2018); Fujimoto et al. (2018)). Evaluation of the policies is conducted every R_eval = 4000 environment steps for all algorithms. At each evaluation instant, the agent (or learner) fixes its policy to the one at the evaluation instant and interacts with a separate copy of the same environment, used only for evaluation, to obtain 10 episodic rewards. The average of these 10 episodic rewards is the performance at that evaluation instant. In the case of IPE-TD3 and the other parallel learning schemes, each of the N learners fixes its policy and obtains 10 episodic rewards in the same way; the 10 episodic rewards are first averaged per learner, and then the maximum of the 10-episode-average rewards over the N learners is taken as the performance at that evaluation instant. We performed this procedure for five different random seeds, and the mean and variance of the learning curve are obtained from these five runs. The policies used for evaluation are stochastic for PPO and deterministic for the others.

Fig. 3 shows the learning curves over one million environment steps for several MuJoCo tasks: Hopper-v1, Walker2d-v1, HalfCheetah-v1, and Ant-v1. First, the performance of TD3 here is similar to that in the original TD3 paper (Fujimoto et al. (2018)), and the performance of the other baseline algorithms is also similar to that in their original papers (Schulman et al. (2017); Haarnoja et al. (2018)). The IPE-TD3 algorithm outperforms the state-of-the-art RL algorithms in terms of both the speed of convergence with respect to environment steps and the final steady-state performance (except in Walker2d-v1, where the initial convergence is a bit slower than TD3). In particular, on Hopper-v1 and Ant-v1, TD3 has large variance, which means that its performance depends heavily on the initial condition of the environment and that it is not easy for TD3 to escape from bad local optima in certain environments. IPE-TD3, however, yields much less variance than TD3. This implies that the wide-area search by IPE in the policy parameter space helps the learners escape from bad local optima. Overall, the wide-area search around the previous best policy point in the policy parameter space by IPE yields faster and better policy search.
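The evaluation protocol above can be summarized in code. This is a sketch under stated assumptions: the old gym API where step() returns a 4-tuple, a deterministic policy callable, and a make_env factory; all names are illustrative.

```python
# Sketch of the Section 4.2 evaluation protocol for parallel schemes.
import numpy as np

def rollout(env, policy):
    # One evaluation episode with a fixed policy (old gym step() signature).
    s, total, done = env.reset(), 0.0, False
    while not done:
        s, r, done, _ = env.step(policy(s))
        total += r
    return total

def evaluate(policies, make_env, episodes=10):
    # 10-episode average per learner, then the max over the N learners.
    scores = [np.mean([rollout(make_env(), p) for _ in range(episodes)])
              for p in policies]
    return max(scores)
```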
4.3 ABLATION STUDY

IPE-TD3 has several components that improve performance through parallelism: 1) sharing experiences from multiple policies, 2) using the best policy information, and 3) fusing the best policy information in a soft manner through the augmented loss function. We therefore investigated the impact of each component on the performance improvement. For comparison, we considered the following parallel policy exploration methods, gradually incorporating more techniques (a schematic contrast of the last two is sketched at the end of this subsection):

1. TD3: the original TD3 with one learner.
2. Distributed RL TD3 (DRL-TD3): N actors obtain samples from N environment copies; the policy and the experience replay buffer are shared by all N actors.
3. Experience-Sharing-Only TD3 (ESO-TD3): N learners interact with N environment copies and update their own policies using experiences drawn from the shared experience replay buffer.
4. Reloading TD3 (Re-TD3): every M' time steps, the best policy is determined and all policies are re-initialized to it, i.e., the best learner's policy parameter is copied to all other learners. The rest of the procedure is the same as ESO-TD3.
5. IPE-TD3: every M time steps, the best policy information is determined and used in a soft manner through the augmented loss function.

Note that Re-TD3 also exploits the best policy information from the N learners. The main difference between IPE-TD3 and Re-TD3 is how the best learner's policy parameter is used: Re-TD3 initializes all policies with the best policy parameter every M' time steps, as in PBT (Jaderberg et al. (2017)), whereas IPE-TD3 uses the best learner's policy parameter, determined every M time steps, to construct an augmented loss function. For a fair comparison, M and M' are tuned independently for IPE-TD3 and Re-TD3, since the optimal period can differ between the two methods. Thus, M' = 5000 was selected for Re-TD3 by tuning, whereas M = 250 is used for IPE-TD3. Since all N policies collapse to one point in Re-TD3 at the beginning of each period, a larger period is needed for Re-TD3 to obtain sufficiently spread policies by the end of each best-policy selection period.

Fig. 4 shows the learning curves of the considered parallel exploration methods for the Ant-v1 task, and Table 1 shows the final (steady-state) performance of these methods for four MuJoCo tasks. IPE-TD3 outperforms the other parallel methods (DRL-TD3, ESO-TD3, and Re-TD3), except that ESO-TD3 outperforms all other parallel schemes on Hopper-v1. Both Re-TD3 and IPE-TD3 have better final (steady-state) performance than TD3 and ESO-TD3 for all tasks except Hopper-v1, for which ESO-TD3 performs best. Note that ESO-TD3 obtains the most diverse experiences, since the N learners share the experience replay buffer but do not otherwise interact until the end of training; this diversity appears to be beneficial for Hopper-v1. The final performances of Re-TD3 and IPE-TD3 are more or less the same for HalfCheetah-v1, but IPE-TD3 is noticeably better than Re-TD3 in the other cases.
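The schematic below contrasts how Re-TD3 and IPE-TD3 use the best learner's policy. The PyTorch-style module and attribute names are illustrative, not the authors' implementation.

```python
# Schematic contrast between hard reloading and soft guidance.
def re_td3_sync(learners, best):
    # Re-TD3: every M' time steps, all policies collapse onto the best one.
    for l in learners:
        l.actor.load_state_dict(best.actor.state_dict())

def ipe_td3_sync(learners, best, beta):
    # IPE-TD3: only the reference parameters and beta are distributed; each
    # learner keeps its own parameters and instead pays the penalty in (6).
    for l in learners:
        l.reference_actor = best.actor
        l.beta = beta
```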
5 CONCLUSION

In this paper, we have proposed a new interactive parallel learning scheme, IPE, to enhance the performance of off-policy RL systems. In the proposed IPE scheme, multiple identical learners with their own value functions and policies, sharing a common experience replay buffer, search for a good policy under the guidance of the best policy information from the previous search interval. The best policy parameter of the previous search interval is fused in a soft manner by constructing an augmented loss function for the policy update, which enlarges the overall search space covered by the multiple learners. The guidance by the previous best policy and the enlarged search space enable faster and better policy search in the policy parameter space. The IPE-enhanced algorithms constructed by applying the proposed IPE scheme to TD3 or SAC outperform most current state-of-the-art continuous-action RL algorithms. Although we mainly considered continuous-action off-policy algorithms in this paper, the proposed IPE method can also be applied to RL with discrete actions, as shown in Appendix E. For continuous action control, the gain from IPE can benefit the recent trend of fast computer-based prototyping of complex robotic systems and autonomous cars, whereas in the discrete-action case IPE can search for better policy parameters on more challenging tasks.

APPENDIX A. THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT ALGORITHM AND THE SOFT ACTOR-CRITIC ALGORITHM

A.1 THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT (TD3) ALGORITHM

The TD3 algorithm is a current state-of-the-art off-policy algorithm and a variant of the deep deterministic policy gradient (DDPG) algorithm (Lillicrap et al. (2015)). TD3 addresses two problems of typical actor-critic algorithms: 1) overestimation bias and 2) high variance in the approximation of the Q-function. To reduce the bias, TD3 maintains two Q-functions and uses the minimum of the two Q-function values to compute the target value; to reduce the variance in the gradient, the policy is updated less frequently than the Q-functions. Specifically, let Q_{θ1}, Q_{θ2}, and π_φ be the two current Q-functions and the current deterministic policy, respectively, and let Q_{θ'_1}, Q_{θ'_2}, and π_{φ'} be their target networks, initialized as copies of the current networks. At time step t, the TD3 algorithm takes an action a_t with exploration noise ε: a_t = π_φ(s_t) + ε, where ε is zero-mean Gaussian noise with variance σ², i.e., ε ∼ N(0, σ²). The environment then returns the reward r_t and transitions to state s_{t+1}. TD3 stores the experience (s_t, a_t, r_t, s_{t+1}) in the experience replay buffer D. After storing the experience, the Q-function parameters θ1 and θ2 are updated by gradient descent on the loss functions

$$L(\theta_j) = \hat{E}_{(s,a,r,s')\sim D}\big[(y - Q_{\theta_j}(s,a))^2\big], \quad j = 1, 2, \qquad (7)$$

where Ê_{(s,a,r,s')∼D} denotes the sample expectation over a uniform random mini-batch of size B drawn from the replay buffer D, and the target value y is given by

$$y = r + \gamma \min_{j=1,2} Q_{\theta'_j}\big(s', \pi_{\phi'}(s') + \epsilon\big), \quad \epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}^2), -c, c). \qquad (8)$$

Here, the minimum of the two target Q-functions is used in the target value to reduce the bias. The procedure of taking actions and performing gradient descent on θ1 and θ2 is repeated d times (d = 2), and then the policy and target networks are updated.
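The target in (8) can be written compactly in code. This is a hedged PyTorch sketch with illustrative names; the (1 - done) terminal mask is a standard detail omitted in (8).

```python
# Sketch of the TD3 target of Eq. (8): target-policy smoothing plus
# the clipped double-Q minimum.
import torch

def td3_target(r, s_next, done, actor_targ, q1_targ, q2_targ,
               gamma=0.99, sigma_tilde=0.2, c=0.5):
    with torch.no_grad():
        a_targ = actor_targ(s_next)
        noise = (torch.randn_like(a_targ) * sigma_tilde).clamp(-c, c)
        a_next = a_targ + noise                        # smoothed target action
        q_min = torch.min(q1_targ(s_next, a_next),     # min over the two target critics
                          q2_targ(s_next, a_next))
        return r + gamma * (1.0 - done) * q_min        # TD target y
```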
The policy parameter φ is updated by gradient descent on the loss

$$L(\phi) = -\hat{E}_{s\sim D}\big[Q_{\theta_1}(s, \pi_\phi(s))\big], \qquad (9)$$

and the target network parameters θ'_j and φ' are updated as

$$\theta'_j \leftarrow (1-\tau)\theta'_j + \tau\theta_j, \qquad \phi' \leftarrow (1-\tau)\phi' + \tau\phi. \qquad (10)$$

The networks are trained until the number of time steps reaches a predefined maximum.

A.2 THE SOFT ACTOR-CRITIC (SAC) ALGORITHM

The SAC algorithm is an off-policy algorithm comparable to TD3 and yields good performance, especially in environments with high-dimensional action spaces. SAC is a maximum-entropy RL algorithm based on the discounted sum of the reward and the entropy of the current policy,

$$E_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty} \gamma^t \big(r(s_t, a_t) + \alpha\, \mathcal{H}(\pi(\cdot|s_t))\big)\Big], \qquad (11)$$

where α is a weighting factor that balances the reward and the entropy of the policy. This objective encourages the algorithm to explore more diverse experiences so as to find a better policy. SAC has one value function V_ψ(s), two Q-functions Q_{θj}(s, a), j = 1, 2, and one stochastic policy π_φ(·|s), parameterized by ψ, θj, and φ, respectively. It also has a target value function V_{ψ'}(s) for stable convergence. After initialization, at each time step t the algorithm obtains an experience (s_t, a_t, r_t, s_{t+1}) by interacting with the environment and stores it in the experience replay buffer D. It then updates the parameters ψ, θj, and φ by gradient descent on the losses

$$J(\psi) = \hat{E}_{s\sim D,\, a\sim\pi_\phi(\cdot|s)}\Big[\tfrac{1}{2}\big(V_\psi(s) - \bar{Q}(s,a) + \log \pi_\phi(a|s)\big)^2\Big], \qquad (12)$$

$$J(\theta_j) = \hat{E}_{(s,a,r,s')\sim D}\Big[\tfrac{1}{2}\big(Q_{\theta_j}(s,a) - r/\alpha - \gamma V_{\psi'}(s')\big)^2\Big], \quad j = 1, 2, \qquad (13)$$

$$J(\phi) = \hat{E}_{s\sim D,\, a\sim\pi_\phi(\cdot|s)}\big[\log \pi_\phi(a|s) - \bar{Q}(s,a)\big], \qquad (14)$$

where Q̄(s, a) = min{Q_{θ1}(s, a), Q_{θ2}(s, a)}, and Ê_{(s,a,r,s')∼D} is the sample expectation over a uniform random mini-batch of size B drawn from the replay buffer D. After updating these parameters, the target value function parameter ψ' is updated as

$$\psi' \leftarrow (1-\tau)\psi' + \tau\psi. \qquad (15)$$

In order to obtain diverse experiences in the initial stage of learning, SAC uses a uniform policy for the first T_initial time steps and the current policy π_φ(·|s) for the rest of learning.
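The soft (Polyak) target updates in (10) and (15) share the same form, and a minimal PyTorch sketch is below; the helper name soft_update is ours, and τ = 5 × 10^-3 matches the hyperparameter settings used in this paper.

```python
# Sketch of the Polyak target update shared by TD3 (Eq. (10)) and SAC (Eq. (15)).
import torch

@torch.no_grad()
def soft_update(target, net, tau=5e-3):
    for p_t, p in zip(target.parameters(), net.parameters()):
        p_t.mul_(1.0 - tau).add_(p, alpha=tau)   # theta' <- (1 - tau) theta' + tau theta
```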
APPENDIX B. PSEUDOCODE OF THE IPE-TD3 ALGORITHM

Algorithm 1 The Interactive Parallel Exploration TD3 (IPE-TD3) Algorithm
Require: N: number of learners, T_initial: initial exploration time steps, T: maximum time steps, M: best-policy update period, B: mini-batch size, d: update interval for the policy and target networks.
1: Initialize φ^1 = ··· = φ^N = φ^b and θ^1_j = ··· = θ^N_j, j = 1, 2, randomly.
2: Initialize β = 1, t = 0
3: while t < T do
4:   t ← t + 1 (one time step)
5:   for i = 1, 2, ..., N in parallel do
6:     if t < T_initial then
7:       Take a uniform random action a^i_t in environment copy E^i
8:     else
9:       Take the action a^i_t = π^i(s^i_t) + ε, ε ∼ N(0, σ²), in environment copy E^i
10:    end if
11:    Store the experience (s^i_t, a^i_t, r^i_t, s^i_{t+1}) in the shared common experience replay buffer D
12:  end for
13:  if t < T_initial then
14:    continue (i.e., go to the beginning of the while loop)
15:  end if
16:  for i = 1, 2, ..., N in parallel do
17:    for k = 1, 2, ..., N do
18:      Sample a mini-batch B = {(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})}_{l=1,...,B} from D
19:      Update θ^i_j, j = 1, 2, by gradient descent minimizing L̃(θ^i_j) in (5) with B
20:      if k ≡ 0 (mod d) then
21:        Update φ^i by gradient descent minimizing L̃(φ^i) in (6) with B
22:        Update the target networks: (θ^i_j)' ← (1−τ)(θ^i_j)' + τθ^i_j, (φ^i)' ← (1−τ)(φ^i)' + τφ^i
23:      end if
24:    end for
25:  end for
26:  if t ≡ 0 (mod M) then
27:    Select the best learner b
28:    Adapt β with (2)
29:  end if
30: end while

APPENDIX C. PSEUDOCODE OF THE IPE-SAC ALGORITHM

Algorithm 2 The Interactive Parallel Exploration SAC (IPE-SAC) Algorithm
Require: N: number of learners, T_initial: initial exploration time steps, T: maximum time steps, M: best-policy update period, B: mini-batch size
1: Initialize ψ^1 = ··· = ψ^N, φ^1 = ··· = φ^N = φ^b, and θ^1_j = ··· = θ^N_j, j = 1, 2, randomly.
2: Initialize β = 1, t = 0
3: while t < T do
4:   t ← t + 1 (one time step)
5:   for i = 1, 2, ..., N in parallel do
6:     if t < T_initial then
7:       Take a uniform random action a^i_t in environment copy E^i
8:     else
9:       Take an action a^i_t ∼ π^i(·|s^i_t) in environment copy E^i
10:    end if
11:    Store the experience (s^i_t, a^i_t, r^i_t, s^i_{t+1}) in the shared common experience replay buffer D
12:  end for
13:  if t < T_initial then
14:    continue (i.e., go to the beginning of the while loop)
15:  end if
16:  for i = 1, 2, ..., N in parallel do
17:    for k = 1, 2, ..., N do
18:      Sample a mini-batch B = {(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})}_{l=1,...,B} from D
19:      Update ψ^i, θ^i_j, and φ^i by gradient descent minimizing (16), (17), and (18) with B, respectively
20:      Update the target parameters: (ψ^i)' ← (1−τ)(ψ^i)' + τψ^i
21:    end for
22:  end for
23:  if t ≡ 0 (mod M) then
24:    Select the best learner b
25:    Update the best policy parameter φ^b
26:    Adapt β with (2)
27:  end if
28: end while

In IPE-SAC, each learner has its own parameters ψ^i, θ^i_1, θ^i_2, and φ^i for its value function, two Q-functions, and policy. Each learner also has (ψ^i)', the parameter of its target value function. For the distance measure between two policies, we use the mean squared difference of the mean actions of the Gaussian policies, given by $D(\pi(\cdot|s), \tilde{\pi}(\cdot|s)) = \tfrac{1}{2}\,\|\mathrm{mean}\{\pi(\cdot|s)\} - \mathrm{mean}\{\tilde{\pi}(\cdot|s)\}\|_2^2$. The i-th learner updates the parameters ψ^i, θ^i_1, θ^i_2, and φ^i at every time step by minimizing

$$\tilde{L}(\psi^i) = \hat{E}_{s\sim D,\, a\sim\pi_{\phi^i}(\cdot|s)}\Big[\tfrac{1}{2}\big(V_{\psi^i}(s) - \bar{Q}^i(s,a) + \log \pi_{\phi^i}(a|s)\big)^2\Big], \qquad (16)$$

$$\tilde{L}(\theta^i_j) = \hat{E}_{(s,a,r,s')\sim D}\Big[\tfrac{1}{2}\big(Q_{\theta^i_j}(s,a) - r/\alpha - \gamma V_{(\psi^i)'}(s')\big)^2\Big], \quad j = 1, 2, \qquad (17)$$

$$\tilde{L}(\phi^i) = \hat{E}_{s\sim D,\, a\sim\pi_{\phi^i}(\cdot|s)}\Big[\log \pi_{\phi^i}(a|s) - \bar{Q}^i(s,a) + \mathbb{1}_{\{i\neq b\}}\, \tfrac{\beta}{2}\,\big\|\mathrm{mean}\{\pi_{\phi^i}(\cdot|s)\} - \mathrm{mean}\{\pi_{\phi^b}(\cdot|s)\}\big\|_2^2\Big], \qquad (18)$$

where Q̄^i(s, a) = min{Q_{θ^i_1}(s, a), Q_{θ^i_2}(s, a)}. After updating these parameters, each learner updates its target value function parameter. With these loss functions, the rest of the procedure is the same as the general IPE procedure described in Section 3. The pseudocode of the IPE-SAC algorithm is shown above.
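A hedged sketch of the IPE-SAC actor loss in (18) follows. It assumes Gaussian policy modules exposing rsample_with_logp(s) and mean_action(s) helpers; both names are illustrative, not the authors' API.

```python
# Sketch of the IPE-SAC policy loss of Eq. (18).
import torch

def ipe_sac_policy_loss(actor_i, actor_best, q1, q2, states, beta, is_best):
    a, logp = actor_i.rsample_with_logp(states)        # reparameterized sample
    q_min = torch.min(q1(states, a), q2(states, a))    # Q_bar^i(s, a)
    loss = (logp - q_min).mean()                       # SAC term of Eq. (18)
    if not is_best:                                    # indicator 1_{i != b}
        with torch.no_grad():
            mu_b = actor_best.mean_action(states)      # mean of pi_{phi^b}(.|s)
        mu_i = actor_i.mean_action(states)
        loss = loss + 0.5 * beta * ((mu_i - mu_b) ** 2).sum(dim=-1).mean()
    return loss
```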
APPENDIX D. RESULTS OF IPE-SAC ON HUMANOID (RLLAB)

As mentioned above, IPE is general in that it can be applied to other off-policy algorithms. Here, we provide numerical results for IPE-SAC (Appendix C), constructed by combining IPE with SAC. Experiments were performed on the Humanoid (rllab) task (Duan et al. (2016)), which requires more exploration. We compared IPE-SAC with SAC and with multi-learner reloading SAC (Re-SAC), which periodically copies the parameters of the best learner to the other learners.

D.1 PARAMETER SETTING

SAC The networks for the state-value function, the two Q-functions, and the policy had 2 hidden layers of size 256. The activation functions of the hidden layers and the last layers were ReLU and linear, respectively. We used the Adam optimizer with learning rate 3 × 10^-4, discount factor γ = 0.99, and target smoothing factor τ = 5 × 10^-3. The algorithm was trained with random mini-batches of size B = 256 from an experience replay buffer of maximum size 10^6. The reward scale for updating the Q-functions was 10 for the Humanoid (rllab) environment. The number of initial exploration time steps T_initial was set to 1000.

IPE-SAC The additional parameters for IPE-SAC are as follows. We used N = 4 learners, a period M = 500 for updating the best policy and β, and E_r = 10 recent episodes for determining the best learner b. The exploration-range parameters d_search and ρ were 0.01 and 2, respectively. We used T_initial = 250 initial exploration time steps.

D.2 PERFORMANCE ON HUMANOID (RLLAB)

The learning curve on Humanoid (rllab) is shown in Figure 5.¹ IPE-SAC outperforms the original SAC and Re-SAC. This result shows the promise of IPE for tasks that require more exploration.

¹ The simulation is still running, and we will update the graph when it is finished.

APPENDIX E. PSEUDOCODE OF THE IPE-DQN ALGORITHM

Algorithm 3 The Interactive Parallel Exploration DQN (IPE-DQN) Algorithm
Require: N: number of learners, T_initial: initial exploration time steps, T: maximum time steps, M: best-policy update period, B: mini-batch size, f: update interval for the Q-functions, d: update interval for the target Q-functions.
1: Initialize θ^1 = ··· = θ^N = θ^b randomly.
2: Initialize β = 1, t = 0
3: while t < T do
4:   t ← t + 1 (one time step)
5:   for i = 1, 2, ..., N in parallel do
6:     if t < T_initial then
7:       Take a uniform random action a^i_t in environment copy E^i
8:     else
9:       Take the action a^i_t = argmax_{a∈A} Q_{θ^i}(s^i_t, a) w.p. 1 − ε, or a uniform random action a^i_t w.p. ε, in environment copy E^i
10:    end if
11:    Store the experience (s^i_t, a^i_t, r^i_t, s^i_{t+1}) in the shared common experience replay buffer D
12:  end for
13:  if t < T_initial then
14:    continue (i.e., go to the beginning of the while loop)
15:  end if
16:  for i = 1, 2, ..., N in parallel do
17:    for k = 1, 2, ..., N do
18:      if k ≡ 0 (mod f) then
19:        Sample a mini-batch B = {(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})}_{l=1,...,B} from D
20:        Update θ^i by gradient descent minimizing (19) with B
21:      end if
22:    end for
23:  end for
24:  if t ≡ 0 (mod d) then
25:    for i = 1, 2, ..., N in parallel do
26:      Update (θ^i)' ← θ^i
27:    end for
28:  end if
29:  if t ≡ 0 (mod M) then
30:    Select the best learner b
31:    Update the best policy parameter θ^b
32:    Adapt β with (2)
33:  end if
34: end while

IPE can also be applied to off-policy algorithms with discrete action spaces as well as continuous action spaces. Thus, we applied IPE to DQN to construct IPE-DQN.
In IPE-DQN, each learner has its own Q-function parameters θ^i and target Q-function parameters (θ^i)'. We define the distance between two Q-functions Q(s, a) and Q̃(s, a) as

$$D\big(Q(s,\cdot), \tilde{Q}(s,\cdot)\big) = \mathrm{KL}\big(\mathrm{softmax}(Q(s,\cdot)) \,\|\, \mathrm{softmax}(\tilde{Q}(s,\cdot))\big).$$

We use the Q-function parameter θ^b of the best learner as the reference parameter, playing the role of φ^b in (1) and (3). The i-th learner updates its parameters θ^i every f time steps by minimizing

$$\tilde{L}(\theta^i) = \hat{E}_{(s,a,r,s')\sim D}\Big[\tfrac{1}{2}\big(Q_{\theta^i}(s,a) - y\big)^2 + \mathbb{1}_{\{i\neq b\}}\, \beta\, \mathrm{KL}\big(\mathrm{softmax}(Q_{\theta^i}(s,\cdot)) \,\|\, \mathrm{softmax}(Q_{\theta^b}(s,\cdot))\big)\Big], \qquad (19)$$

where $y = r + \gamma\, Q_{(\theta^i)'}\big(s', \mathrm{argmax}_{a'\in A}\, Q_{\theta^i}(s', a')\big)$. With this loss function and the reference parameter, the rest of the procedure is the same as the general IPE procedure described in Section 3. The pseudocode of the IPE-DQN algorithm is shown above.
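The loss in (19) can be sketched as follows in PyTorch; the network handles (q_i, q_i_targ, q_best) and the batch layout are assumptions for illustration, and the terminal mask is omitted for brevity.

```python
# Sketch of the KL-regularized DQN loss of Eq. (19).
import torch
import torch.nn.functional as F

def ipe_dqn_loss(q_i, q_i_targ, q_best, batch, beta, is_best, gamma=0.99):
    s, a, r, s_next = batch
    with torch.no_grad():
        a_star = q_i(s_next).argmax(dim=-1, keepdim=True)           # argmax_a' Q_{theta^i}
        y = r + gamma * q_i_targ(s_next).gather(-1, a_star).squeeze(-1)
    q_sa = q_i(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    loss = (0.5 * (q_sa - y) ** 2).mean()                           # TD term
    if not is_best:                                                 # indicator 1_{i != b}
        with torch.no_grad():
            log_p_b = F.log_softmax(q_best(s), dim=-1)              # softmax(Q_{theta^b})
        log_p_i = F.log_softmax(q_i(s), dim=-1)
        kl = (log_p_i.exp() * (log_p_i - log_p_b)).sum(dim=-1)      # KL(p_i || p_b)
        loss = loss + beta * kl.mean()
    return loss
```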
1. What is the main contribution of the paper regarding parallel training of RL agents? 2. How does the proposed method differ from other methods in terms of exploration and sharing information between learners? 3. What are the strengths and weaknesses of the experimental evaluation provided in the paper? 4. Do you have any concerns about the comparison of results with other baselines or methods? 5. How does the paper frame its contributions in the context of previous work in parallelization and exploration in RL?
Review
Review Revision: The authors added many references to prior work to the paper and did some additional experiments that certainly improved the quality. However, the additional results also show that the shared experience buffer doesn't have that much influence and that for the original tasks (the humanoid results in the appendix look more promising but inconclusive) the reloading variant seems to catch up relatively quickly. Reloading and distributed learning seem to lead to the largest gains but those methods already existed. That said, the IPE method does give a clear early boost. It's not clear yet whether the method can also lead to better end results. I improved my score because I think that the idea and the results are worth sharing but I'm still not very convinced of their true impact yet. The paper proposes a scheme for training multiple RL agents in parallel using a shared replay buffer and an objective that pulls the policies towards the best performing policy as determined by the last comparison event. The method is combined with the TD3 continuous control learning algorithm and evaluated on Mujoco tasks from OpenAI Gym. The experiments in the paper seem correctly executed and it is nice that there are multiple baselines but I'm not convinced that the comparison is very insightful. It is somewhat odd that the architectures for the different methods differ quite a bit sometimes. The experiments are already hard to compare due to the very different natures of the optimization algorithms (distributed or not, asynchronous or not). It would be nice to also see plots of the results as a function of the number of learner steps and wall time if these can be obtained from the already executed experiments. The paper doesn’t include many references and fails to mention research about parallel (hyperparameter) optimization methods that seems very related, even if the goal of those methods is not always behavioral exploration. Especially Population Based Training (PBT; Jaderberg et al., 2017) is very similar in spirit in that a population of parallel learners occasionally exchange information to benefit from the findings of the best performing individuals. The method is also similar to knowledge distillation (Hinton et al. 2015), which has also been used to speed up multi-task RL (Teh et al., 2017). It would also be nice to see an ablation of some of the different components of the algorithm. For example, it would be interesting to know how important the following of the best policy is in comparison to the gains that are obtained from simply using a shared replay buffer. The paper is easy to follow and seems to describe the methods in enough detail to allow for a replication of the experiments. The terminology is not always precise and I’m a bit confused about whether the distance between policies is measured between their actions or their parameter vectors. Equation 8 suggests the former (as I'm assuming is what is also meant in the paper) but the text often speaks about the search radius in parameter space. Exploration is a big problem in reinforcement learning and while parallelization of environment simulations helps to speed up training, additional computational effort typically provides diminishing returns. Methods for coordinating parallel exploration could have a severe impact. 
Since many RL setups are already distributed, the novelty of the paper mainly comes from sharing a replay buffer (I haven't seen this before but it seems like such an obvious thing to try that I wouldn't be surprised if it has been done) and the way in which learners are forced to follow the best individual. It is promising that the method provides the largest gains for the environment which seems to be the most challenging but it’s hard to draw conclusions from these results. It would be more insightful to see how the method performs on more challenging tasks where exploration is more important, but I understand that these experiments are computationally demanding. All in all, the paper presents a method that is simple while having potential for impact but needs to frame it more in the context of previous work. The empirical evaluation is a bit limited and would be more impressive with some additional tasks or at least benefit from a more thorough analysis of the settings and relative contributions of the shared replay buffer and following of the best policy. References Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., ... & Fernando, C. (2017). Population based training of neural networks. arXiv preprint arXiv:1711.09846. Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Teh, Y., Bapst, V., Czarnecki, W. M., Quan, J., Kirkpatrick, J., Hadsell, R., ... & Pascanu, R. (2017). Distral: Robust multitask reinforcement learning. In Advances in Neural Information Processing Systems (pp. 4496-4506).
ICLR
Title Interactive Parallel Exploration for Reinforcement Learning in Continuous Action Spaces Abstract In this paper, a new interactive parallel learning scheme is proposed to enhance the performance of off-policy continuous-action reinforcement learning. In the proposed interactive parallel learning scheme, multiple identical learners with their own value functions and policies share a common experience replay buffer and search for a good policy in collaboration, with the guidance of the best policy information. The information of the best policy is fused in a soft manner by constructing an augmented loss function for the policy update, which enlarges the overall search space covered by the multiple learners. The guidance by the previous best policy and the enlarged search space enable faster and better policy search in the policy parameter space. Working algorithms are constructed by applying the proposed interactive parallel learning scheme to several off-policy reinforcement learning algorithms such as the twin delayed deep deterministic (TD3) policy gradient algorithm and the soft actor-critic (SAC) algorithm, and numerical results show that the constructed IPE-enhanced algorithms outperform most current state-of-the-art reinforcement learning algorithms for continuous action control.

1 INTRODUCTION

Reinforcement learning (RL) for continuous action control is an active research field. In RL, an agent learns a policy through interaction with the environment to maximize the cumulative reward. One of the key issues in RL is the trade-off between exploitation and exploration: exploitation makes the best decision based on already collected information, whereas exploration collects new information about the environment. Balancing the two is important for good RL algorithms. For example, DQN (Mnih et al. (2015)) balances exploitation and exploration by taking actions in an ε-greedy manner. The deep deterministic policy gradient (DDPG) (Lillicrap et al. (2015)) and twin delayed deep deterministic (TD3) (Fujimoto et al. (2018)) policy gradient algorithms promote exploration by adding Ornstein-Uhlenbeck noise and Gaussian noise, respectively, to the best decision action. Soft actor-critic (SAC) (Haarnoja et al. (2018)) performs the balancing via a maximum entropy objective. However, most previous works focus on exploration for obtaining unobserved states or actions. In this paper, we consider exploration in the policy parameter space by using parallel identical learners for the same environment. With multiple identical learners for the same environment, we obtain increased capability to search for a better policy. Parallelism in learning has been investigated widely in distributed RL (Nair et al. (2015), Mnih et al. (2016), Horgan et al. (2018), Barth-Maron et al. (2018), Espeholt et al. (2018)), evolution strategies (Salimans et al. (2017), Choromanski et al. (2018)), and recently in population based training (PBT) (Jaderberg et al. (2017), Jaderberg et al. (2018), Conti et al. (2017)) for faster and better search for parameters and/or hyperparameters. In this paper, we also apply parallelism to RL to enhance learning performance, but in a slightly different way from the previous methods.
The proposed algorithm is intended for any off-policy RL algorithm and is composed of a chief, N copies of the same environment, and N identical learners with a shared common experience replay buffer and a common base algorithm, as shown in Fig. 1. Each learner has its own value function(s) and policy, and trains its policy by interacting with its own environment copy, with some additional interaction with the chief. At each time step, each learner applies an action to its environment copy using its own policy and stores its experience in the shared common experience replay buffer. Each learner then updates its value function parameter and policy parameter by drawing mini-batches from the shared common replay buffer and minimizing its own value loss function and policy loss function, respectively.

One way to implement parallel learning under the above setup is to run N fully independent learners in parallel, without any interaction among them except sharing their experiences, until the end of training, and then to choose the policy of the learner with the maximum accumulated reward for future use. We will refer to this method as the experience-sharing-only method. However, this method ignores the possible benefit of mutual interaction among the learners during training. In order to harness this benefit, we exploit the information from the best learner among all learners periodically during training, as in PBT (Jaderberg et al. (2017), Jaderberg et al. (2018)). Suppose that the value and policy parameters of each learner are initialized and learning is performed as described above for M time steps. At the end of the M time steps, we can determine the best learner based on the average of the most recent E_r episodic rewards of each learner. The policy parameter information of the best learner can then be used to enhance the learning of the other learners for the next M time steps: it can help learners stuck in local minima escape from them and guide the other learners in a better direction. One simple way to exploit this best policy parameter information is to reset the policy parameter of each learner to that of the best learner at the beginning of the next M time steps, let each learner learn from this initial point in the policy parameter space for the next M time steps, select the best learner again at the end of those M time steps, and repeat this procedure every M time steps, in a similar way to how PBT (Jaderberg et al. (2017)) copies the best learner's parameters and hyperparameters to the other learners. We will refer to this method as the reloading method in this paper. However, the reloading method has the problem that the search area covered by the N searching policies collapses to a single point at the moment of parameter copying, so the search can remain narrow around the previous best policy point. To overcome this disadvantage, instead of resetting the policy parameters to the best policy parameter every M time steps, we propose using the policy parameter information of the best learner in a soft manner to enhance the performance of the overall parallel learning. In the proposed scheme, the shared best policy information is used only to guide the policies of the other learners: the policy of each learner is updated to improve performance around a certain distance from the shared guiding policy.
The chief periodically determines the best policy among the policies of all learners and distributes the best policy parameter to all learners so that the learners search for better policies around the previous best policy. The chief also enforces that the N searching policies stay spread out in the policy parameter space at a given distance from the previous best policy point, so that the search area covered by the N learners remains wide and does not collapse into a narrow region. The proposed interactive parallel exploration (IPE) learning method can be applied to any off-policy RL algorithm and is easy to implement. Furthermore, the proposed method extends directly to distributed or multi-agent RL systems. We apply our IPE scheme to the TD3 and SAC algorithms, which are state-of-the-art off-policy algorithms, as our base algorithms; the new algorithms are named the IPE-TD3 and IPE-SAC algorithms, respectively. Numerical results show that the IPE-enhanced algorithms outperform the baseline algorithms both in the speed of convergence and in the final steady-state performance.

2 BACKGROUND AND RELATED WORKS

2.1 DISTRIBUTED RL

Distributed RL is an efficient way to take advantage of parallelism to achieve fast training for large complex tasks (Nair et al. (2015)). Most works in distributed RL assume a common structure composed of multiple actors interacting with multiple copies of the same environment and a central system which stores and optimizes the common Q-function parameter or policy parameter shared by all actors. The focus of distributed RL is to optimize the Q-function or policy parameter quickly by generating more samples in the same wall-clock time with multiple actors. To achieve this goal, researchers have investigated various techniques for distributed RL, e.g., asynchronous update of parameters (Mnih et al. (2016), Babaeizadeh et al. (2017)), sharing an experience replay buffer (Horgan et al. (2018)), GPU-based parallel computation (Babaeizadeh et al. (2017), Clemente et al. (2017)), GPU-based simulation (Liang et al. (2018)), and V-trace in the case of on-policy algorithms (Espeholt et al. (2018)). Distributed RL yields significant performance improvement in terms of wall-clock time, but it does not consider the possible enhancement from interaction among multiple learners as in IPE and PBT. The proposed IPE uses a structure similar to that in (Nair et al. (2015), Espeholt et al. (2018)): IPE is composed of multiple learners and a central system called the chief. The difference is that in IPE each learner has its own Q or value function parameter and policy parameter and optimizes them in parallel with interactions.

2.2 POPULATION BASED TRAINING

Parallelism is also exploited to find optimal parameters and hyperparameters of training algorithms for neural networks in PBT (Jaderberg et al. (2017), Jaderberg et al. (2018), Conti et al. (2017)). PBT (Jaderberg et al. (2017)) first chooses multiple sets of hyperparameters and parameters for a common base algorithm and runs the base algorithm separately in parallel at multiple learners to train their neural networks using those parameters and hyperparameters. Each learner updates its neural network parameters by perturbing the assigned hyperparameters.
During training, PBT in principle periodically evaluates the performance of the multiple learners, selects the best-performing hyperparameters, and then distributes the best-performing hyperparameters and the corresponding parameters to the other learners, although implementation details can vary. Recently, PBT has been applied to competitive multi-agent RL (Jaderberg et al. (2018)) and novelty search algorithms (Conti et al. (2017)). Although PBT was mainly developed to tune hyperparameters, its philosophy can be applied to finding optimal parameters for given hyperparameters with multiple learners. In this case, multiple learners update their parameters in parallel, their performance is measured periodically, the parameters of the best-performing learner are copied to the other learners, the other learners independently update their parameters from the copied parameters as their new initialization, and this process is repeated. The proposed IPE is similar to PBT in that it exploits the parameters of the best-performing learner among multiple parallel learners. However, IPE differs from this PBT-derived method in the way it uses those parameters. In the PBT-derived method, the parameters of the best learner are copied to the other learners, whose parameters are reset accordingly and then updated by stochastic gradient descent (SGD). In IPE, by contrast, the parameters of the best-performing learner are not copied but used in a soft manner as a guiding direction; copying means that the parameters of all learners collapse to a single point in the parameter space. Furthermore, unlike PBT, IPE uses a common experience replay buffer to store all experiences from the multiple learners with different parameters, so as to exploit their diverse experiences. As mentioned in Section 1, we refer to the PBT-derived method with a common experience replay buffer as the reloading method, whose performance is reported in the ablation study in Section 4. Although IPE is considered only for parallel parameter search in this paper, combining this soft use of the best learner's parameters with hyperparameter search is an interesting direction for future work.

2.3 GUIDED POLICY SEARCH

Our IPE method is also related to guided policy search (Levine & Koltun (2013), Levine et al. (2016), Teh et al. (2017), Ghosh et al. (2018)). Recently, Teh et al. (2017) proposed a guided policy search method for joint training of multiple tasks in which a common policy is used to guide local policies and the common policy is distilled from the local policies. There, the local policies' parameters are updated to maximize performance while minimizing the KL divergence between the local policies and the common distilled policy. The proposed IPE is related to guided policy search in that multiple policies are guided by a common policy. The difference is that the goal of IPE is not learning multiple tasks but learning optimal parameters for a common task, as in PBT; hence the guiding policy is not distilled from multiple local policies but chosen as the best-performing policy among the multiple learners.

2.4 EXPLORATION

Improving exploration has been one of the key issues in RL, and many different approaches have been developed, including maximum entropy objectives (Haarnoja et al. (2017; 2018)), noise in networks (Fortunato et al. (2018); Plappert et al. (2018)),
and intrinsically motivated approaches (Bellemare et al. (2016); Ostrovski et al. (2017); Pathak et al. (2017); Achiam & Sastry (2017); Zheng et al. (2018)). The proposed IPE also enhances exploration. Specifically, IPE uses exploitation for exploration. Exploitation for exploration has been considered in previous works (White & Sofge (1992), Oh et al. (2018)). In particular, Oh et al. (2018) exploited past good experiences to explore the sample space, whereas IPE exploits the current good policy among multiple policies to explore the policy space.

2.5 THE SETUP: PARALLEL LEARNING FOR A COMMON ENVIRONMENT

The considered parallel learning setting consists of the environment E and N policies {π^1, ..., π^N}. The environment E is described as a Markov decision process (MDP) defined by the tuple ⟨S, A, T, r⟩, where S is the state space, A is the action space, T : S × A × S → [0, 1] is the state transition probability, and r : S × A → R is the reward function. There exist N copies {E^1, ..., E^N} of the environment E, i.e., E^1 = ··· = E^N = E, and the N environment copies may have different random initial seeds. The policy π^i interacts with its corresponding environment copy E^i and builds up its trajectory {(s^i_t, a^i_t, r^i_t), t = 1, 2, ...} for each i = 1, ..., N. At time step t, the environment copy E^i is in a state s^i_t ∈ S. The policy π^i interacts with E^i by taking an action a^i_t according to π^i given the current state s^i_t. Then E^i yields the reward r^i_t = r(s^i_t, a^i_t) and makes a transition to the next state s^i_{t+1} according to T. In this paper, in order to account for the actual amount of interaction with the environment, we define environment steps as the total number of interactions by all N parallel policies with all N environment copies. Suppose that all N policies generate their trajectories simultaneously in parallel and that M time steps have elapsed. Then, although the number of elapsed time steps is M, the number of environment steps is NM.

3 INTERACTIVE PARALLEL POLICY EXPLORATION

We now present the proposed IPE scheme in the parallel learning setting described in Section 2.5; the overall structure is shown in Fig. 1. We have N identical parallel learners with a shared common experience replay buffer D, and all N learners employ a common base algorithm, which can be any off-policy RL algorithm. Execution is in parallel. The i-th learner has its own environment E^i, a copy of the common environment E, and its own value function (e.g., Q-function) parameter θ^i and policy parameter φ^i. The i-th learner interacts with E^i, with some additional interaction with the chief, as shown in Fig. 1. At each time step, the i-th learner applies an action a^i_t to its environment copy E^i using its own policy π_{φ^i} and stores its experience (s^i_t, a^i_t, r^i_t, s^i_{t+1}) in the shared common experience replay buffer D, for all i = 1, 2, ..., N. Note that one time step corresponds to N environment steps. Then, at each time step, each learner updates its value function parameter and policy parameter N times by drawing N mini-batches of size B from the shared common replay buffer D and minimizing its own value loss function and policy loss function, respectively. The N updates per learner per time step are intended to exploit the samples provided by the other N − 1 learners stored in the shared common replay buffer.
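The interaction loop just described can be sketched as follows. All names (SharedReplayBuffer, collect_step, learner.act) are illustrative, and the old gym step() signature is assumed; the point is that N learners step their own environment copies while all transitions land in one common buffer D.

```python
# Schematic sketch of the shared-replay-buffer data collection.
import random

class SharedReplayBuffer:
    def __init__(self, capacity=int(1e6)):
        self.storage, self.capacity = [], capacity
    def add(self, transition):                     # (s, a, r, s_next)
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append(transition)
    def sample(self, batch_size=100):
        return random.sample(self.storage, batch_size)

def collect_step(learners, envs, states, buffer):
    # One time step = N environment steps (one per learner).
    for i, (learner, env) in enumerate(zip(learners, envs)):
        a = learner.act(states[i])
        s_next, r, _done, _info = env.step(a)
        buffer.add((states[i], a, r, s_next))      # experiences are pooled in D
        states[i] = s_next
```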
In order to harness the benefit of mutual interaction among the learners in parallel learning, we exploit the information from the best learner periodically during training, as in PBT (Jaderberg et al. (2017)). Suppose that the Q-function parameter and the policy parameter of each learner are initialized and learning proceeds as described above for M time steps. At the end of the M time steps, we determine the best learner based on the average of the most recent E_r episodic rewards of each learner; let its index be b. The policy parameter information φ^b of the best learner can then be used to enhance the learning of the other learners for the next M time steps. Here, instead of copying φ^b to the other learners, we propose using the information of φ^b in a soft manner to enhance the performance of the overall parallel learning. That is, during the next M time steps, while the loss function L̃(θ^i) for the Q-function remains the same as the loss L(θ^i) of the base algorithm, we set the loss function L̃(φ^i) for the policy parameter φ^i of the i-th learner to the following augmented version:

$$\tilde{L}(\phi^i) = L(\phi^i) + \mathbb{1}_{\{i\neq b\}}\, \beta\, \hat{E}_{s\sim D}\big[D(\pi_{\phi^i}, \pi_{\phi^b})\big], \qquad (1)$$

where L(φ^i) is the policy loss function of the base algorithm, 1_{·} denotes the indicator function, β (> 0) is a weighting factor, D(π, π') is some distance measure between two policies π and π', and Ê_{s∼D} denotes the sample expectation over a mini-batch drawn randomly from the experience replay buffer D. The augmented loss function L̃(φ^i) in (1) is composed of two terms, L(φ^i) and 1_{i≠b} β Ê_{s∼D}[D(π_{φ^i}, π_{φ^b})]. Thus, for the non-best learners of the previous M time steps, the gradient of L̃(φ^i) mixes two directions: one maximizes the return by itself, and the other follows the previously best learner's policy. The second term on the right-hand side (RHS) of (1) guides the non-best learners towards a good direction in addition to each learner's own search.

3.1 DESIGN OF THE WEIGHTING FACTOR β

In (1), the weighting factor β is common to all N learners and should be chosen judiciously to balance each learner's own performance improvement against movement towards the previous best policy among the N learners. We adopt an adaptive rule for β:

$$\beta \leftarrow \begin{cases} 2\beta & \text{if } \hat{D}_{\text{best}} \ge 1.5\, \max\{\rho \hat{D}^b_{\text{change}},\, d_{\text{search}}\} \\ \beta/2 & \text{if } \hat{D}_{\text{best}} < \max\{\rho \hat{D}^b_{\text{change}},\, d_{\text{search}}\}/1.5 \end{cases} \qquad (2)$$

where D̂_best is the estimated distance between π_{φ^i} and π_{φ^b}, averaged over all N − 1 non-best learners, and D̂^b_change is the estimated distance between π_{φ^b_updated} (i.e., the policy of the current best learner at the end of the current M time steps) and π_{φ^b} (i.e., the policy of the current best learner at the end of the previous M time steps), given respectively by

$$\hat{D}_{\text{best}} = \frac{1}{N-1} \sum_{i\in I_{-b}} \hat{E}_{s\sim D}\big[D(\pi_{\phi^i}, \pi_{\phi^b})\big] \quad \text{and} \quad \hat{D}^b_{\text{change}} = \hat{E}_{s\sim D}\big[D(\pi_{\phi^b_{\text{updated}}}, \pi_{\phi^b})\big]. \qquad (3)$$

Here, I_{−b} = {1, ..., N} \ {b}, and d_search and ρ are predetermined hyperparameters. This adaptation method is similar to that used in proximal policy optimization (PPO) (Schulman et al. (2017)). β is updated every M time steps, and the updated β is used for the next M time steps. First, suppose that the term ρ D̂^b_change were absent from the maximum in (2). Then, when the estimated average distance D̂_best from the best policy to the remaining policies is smaller than d_search/1.5, the parameter β is decreased by half.
Hence, the movement in the gradient direction of the second term on the RHS of (1) is diminished, and independent movement in the optimization direction of L(φ^i) becomes more important. Each policy then gradually diverges from the previous best policy serving as the reference point, driven by internal exploration mechanisms such as the noise added to actions. On the other hand, when D̂_best is larger than 1.5 d_search, the parameter β is doubled and the movement towards the previous best policy becomes more important. As time steps elapse, β settles so that D̂_best stays around d_search. Hence, the proposed IPE scheme searches a wide area of rough radius d_search around the best policy in the policy parameter space, as shown in Fig. 2(a). Furthermore, the first term ρ D̂^b_change inside the maximum in (2) lets us control the speed of tracking the best policy: D̂^b_change measures how fast the best policy parameter is changing. When the best-policy change scaled by ρ, i.e., ρ D̂^b_change, is less than d_search, the term is masked by the maximum operation in (2). On the other hand, when ρ D̂^b_change > d_search, the term becomes active, meaning that the best policy parameter is changing fast, so the tracking speed should be controlled. If D̂_best > ρ D̂^b_change, i.e., the distance from π_{φ^i} to π_{φ^b} is larger than ρ times the distance from π_{φ^b_updated} to π_{φ^b}, then the tracking of the best policy is too slow, and we double β; otherwise, we halve it. When the search for the current M time steps is finished, the new best learner is selected and a new search over a wide area around the new best learner's policy π_{φ^b} is performed, as illustrated in Fig. 2(b). The policy parameter information φ^b of the best learner can change before the next best-learner selection.

The overall procedure of the proposed IPE scheme is summarized in the diagram in Fig. 1. The value function parameter and policy parameter of each learner are initialized. The chief distributes the parameter β and the reference policy parameter φ^b, i.e., the policy of the best learner over the previous M time steps, to all learners. At each time step, each learner interacts with its own environment copy by taking an action and receiving the reward and the next state, and stores its experience in the shared common replay buffer D. Then the i-th learner updates its value function parameter θ^i by minimizing its own value loss function L̃(θ^i), which is the same as that of the base algorithm, and updates its policy parameter φ^i by minimizing the augmented loss function L̃(φ^i) in (1) N times, drawing N mini-batches from the shared common replay buffer D. Whenever an episode ends, the learner reports the episodic reward to the chief; the i-th learner also reports Ê_{s∼D}[D(π_{φ^i}, π_{φ^b})] to the chief for the computation of D̂_best in (3). Every M time steps, the chief updates β according to (2) and determines the best learner over the most recent M time steps from the collected episodic rewards. Once the best learner is determined, the chief obtains the policy parameter information φ^b from it and distributes the new β and the reference policy parameter φ^b to all N learners. This procedure repeats until the number of time steps reaches the predefined maximum.
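The distance estimates reported to the chief, i.e., the quantities in (3), could be computed along the following lines. This is a hedged sketch assuming deterministic policies with the squared-error distance instantiated in (4); all helper names are ours.

```python
# Sketch of the mini-batch distance estimates of Eq. (3).
import torch

def policy_distance(pi, pi_ref, states):
    # D(pi, pi_ref) of Eq. (4), averaged over a mini-batch of states from D.
    return 0.5 * ((pi(states) - pi_ref(states)) ** 2).sum(dim=-1).mean()

def distance_estimates(actors, b, actor_b_prev, states):
    # D_hat_best: mean distance of the N-1 non-best policies to the best one.
    d_best = torch.stack([policy_distance(a, actors[b], states)
                          for i, a in enumerate(actors) if i != b]).mean()
    # D_hat^b_change: how far the best policy moved over the last M time steps.
    d_change = policy_distance(actors[b], actor_b_prev, states)
    return d_best.item(), d_change.item()
```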
When IPE-based parallel learning reaches a steady state, we can choose any of the N learners' policies and use it in the environment E in the future, since at steady state the performance of all N policies is expected to be more or less similar due to their distance property.

3.2 IPE-ENHANCED ALGORITHMS

The proposed IPE method can be applied to any off-policy RL algorithm, regardless of whether the base algorithm has continuous or discrete actions. Here, we consider the application of IPE to the TD3 algorithm as the base algorithm; the constructed algorithm is named the IPE-TD3 algorithm. The details of the baseline TD3 are explained in Appendix A. With TD3 as the base algorithm, each learner has its own parameters θ^i_1, θ^i_2, and φ^i for its two Q-functions and policy, together with (θ^i_1)', (θ^i_2)', and (φ^i)', the parameters of the corresponding target networks. For the distance measure between two policies, we use the mean squared difference

$$D(\pi(s), \tilde{\pi}(s)) = \tfrac{1}{2}\,\|\pi(s) - \tilde{\pi}(s)\|_2^2. \qquad (4)$$

For the i-th learner, as in TD3, the parameters θ^i_j, j = 1, 2, are updated at every time step by minimizing

$$\tilde{L}(\theta^i_j) = \hat{E}_{(s,a,r,s')\sim D}\big[(y - Q_{\theta^i_j}(s,a))^2\big], \qquad (5)$$

where $y = r + \gamma \min_{j=1,2} Q_{(\theta^i_j)'}\big(s', \pi_{(\phi^i)'}(s') + \epsilon\big)$ with $\epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}^2), -c, c)$. The parameter φ^i is updated every d time steps by minimizing the following augmented loss function:

$$\tilde{L}(\phi^i) = \hat{E}_{s\sim D}\Big[-Q_{\theta^i_1}(s, \pi_{\phi^i}(s)) + \mathbb{1}_{\{i\neq b\}}\, \tfrac{\beta}{2}\, \|\pi_{\phi^i}(s) - \pi_{\phi^b}(s)\|_2^2\Big]. \qquad (6)$$

For the first T_initial time steps, used for initial exploration, we use a random policy and do not update any of the policies. With these loss functions, the reference policy, and the initial exploration policy, the rest of the procedure is the same as the general IPE procedure described previously. The pseudocode of the IPE-TD3 algorithm is provided in Appendix B. The application of IPE to other algorithms such as SAC and DQN is also provided in the appendices.
4 EXPERIMENTS

In this section, we provide numerical results on the performance of the proposed IPE-TD3 and current state-of-the-art on-policy and off-policy baseline algorithms on several MuJoCo environments (Todorov et al. (2012)). The baseline algorithms are Proximal Policy Optimization (PPO) (Schulman et al. (2017)), Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. (2017)), Soft Q-learning (SQL) (Haarnoja et al. (2017)), Soft Actor-Critic (SAC) (Haarnoja et al. (2018)), and TD3 (Fujimoto et al. (2018)). More numerical results on IPE applied to SAC are provided in the appendices.

4.1 PARAMETER SETTING

All hyperparameters used for evaluation are the same as those in the original papers (Schulman et al. (2017); Haarnoja et al. (2018); Fujimoto et al. (2018)). Here, we provide the hyperparameters of TD3 and IPE-TD3 only.

TD3 The networks for the two Q-functions and the policy have 2 hidden layers, of sizes 400 and 300, respectively. The activation function of the hidden layers is ReLU, and the activation functions of the last layers of the Q-functions and the policy are linear and hyperbolic tangent, respectively. We used the Adam optimizer with learning rate 10^-3, discount factor γ = 0.99, target smoothing factor τ = 5 × 10^-3, and policy update period d = 2. The experience replay buffer size is 10^6, and the mini-batch size B is 100. The standard deviations of the exploration noise σ and the target noise σ̃ are 0.1 and 0.2, respectively, and the noise clipping factor c is 0.5.

IPE-TD3 In addition to the parameters for TD3, we used N = 4 learners, a period M = 250 for updating the best policy and β, and E_r = 10 recent episodes for determining the best learner b. The exploration-range parameters d_search and ρ are 0.04 and 2, respectively. The number of initial exploration time steps T_initial is set to 250 for Hopper-v1 and Walker2d-v1 and to 2500 for HalfCheetah-v1 and Ant-v1.

4.2 COMPARISON TO BASELINES

In order to have a sample-wise fair comparison among the considered algorithms, we measure performance with respect to environment steps (not time steps), defined as the total number of interactions with the environment by the agent. This comparison makes sense because equal environment steps means that all algorithms have used the same number of samples obtained from the environment. Performance is obtained with an evaluation method similar to those in (Haarnoja et al. (2018); Fujimoto et al. (2018)). Evaluation of the policies is conducted every R_eval = 4000 environment steps for all algorithms. At each evaluation instant, the agent (or learner) fixes its policy to the one at the evaluation instant and interacts with a separate copy of the same environment, used only for evaluation, to obtain 10 episodic rewards. The average of these 10 episodic rewards is the performance at that evaluation instant. In the case of IPE-TD3 and the other parallel learning schemes, each of the N learners fixes its policy and obtains 10 episodic rewards in the same way; the 10 episodic rewards are first averaged per learner, and then the maximum of the 10-episode-average rewards over the N learners is taken as the performance at that evaluation instant. We performed this procedure for five different random seeds, and the mean and variance of the learning curve are obtained from these five runs. The policies used for evaluation are stochastic for PPO and deterministic for the others.

Fig. 3 shows the learning curves over one million environment steps for several MuJoCo tasks: Hopper-v1, Walker2d-v1, HalfCheetah-v1, and Ant-v1. First, the performance of TD3 here is similar to that in the original TD3 paper (Fujimoto et al. (2018)), and the performance of the other baseline algorithms is also similar to that in their original papers (Schulman et al. (2017); Haarnoja et al. (2018)). The IPE-TD3 algorithm outperforms the state-of-the-art RL algorithms in terms of both the speed of convergence with respect to environment steps and the final steady-state performance (except in Walker2d-v1, where the initial convergence is a bit slower than TD3). In particular, on Hopper-v1 and Ant-v1, TD3 has large variance, which means that its performance depends heavily on the initial condition of the environment and that it is not easy for TD3 to escape from bad local optima in certain environments. IPE-TD3, however, yields much less variance than TD3. This implies that the wide-area search by IPE in the policy parameter space helps the learners escape from bad local optima. Overall, the wide-area search around the previous best policy point in the policy parameter space by IPE yields faster and better policy search.
4.3 ABLATION STUDY

IPE-TD3 has several components to improve the performance based on parallelism: 1) sharing experiences from multiple policies, 2) using the best policy information, and 3) fusing the best policy information in a soft manner based on the augmented loss function. Thus, we investigated the impact of each component on the performance improvement. For comparison, we considered the following parallel policy exploration methods, gradually incorporating more techniques:

1. TD3 The original TD3 with one learner.
2. Distributed RL TD3 (DRL-TD3) N actors obtain samples from N environment copies. The policy and the experience replay buffer are shared by all N actors.
3. Experience-Sharing-Only TD3 (ESO-TD3) N learners interact with N environment copies and update their own policies using experiences drawn from the shared experience replay buffer.
4. Reloading TD3 (Re-TD3) Every M′ timesteps, the best policy is determined and all policies are initialized as the best policy, i.e., the best learner's policy parameter is copied to all other learners. The rest of the procedure is the same as experience-sharing-only TD3.
5. IPE-TD3 Every M timesteps, the best policy is determined and this policy is used in a soft manner based on the augmented loss function.

Note that Re-TD3 also exploits the best policy information from the N learners. The main difference between IPE-TD3 and Re-TD3 is how the best learner's policy parameter is used. Re-TD3 initializes all policies with the best policy parameter every M′ timesteps, as in PBT (Jaderberg et al. (2017)), whereas IPE-TD3 uses the best learner's policy parameter, determined every M timesteps, to construct an augmented loss function. For a fair comparison, M and M′ are determined independently and optimally for IPE-TD3 and Re-TD3, respectively, since the optimal period can differ between the two methods. Thus, M′ = 5000 is determined for Re-TD3 by tuning, whereas M = 250 is used for IPE-TD3. Since all N policies collapse to one point in Re-TD3 at the beginning of each period, we expect that a larger period is required for Re-TD3 to have sufficiently spread policies at the end of the best policy selection period.

Fig. 4 shows the learning curves of the considered parallel exploration methods for the Ant-v1 task, and Table 1 shows the final (steady-state) performance of the considered parallel exploration methods for four MuJoCo tasks. IPE-TD3 outperforms the other parallel methods (DRL-TD3, ESO-TD3, and Re-TD3), except that ESO-TD3 outperforms all other parallel schemes in Hopper-v1. Both Re-TD3 and IPE-TD3 have better final (steady-state) performance than TD3 and ESO-TD3 for all tasks except Hopper-v1, for which ESO-TD3 performs best. Note that ESO-TD3 obtains the most diverse experience, since the N learners share the experience replay buffer but do not interact with each other until the end of training; this diverse experience seems to be beneficial in Hopper-v1. The final performances of Re-TD3 and IPE-TD3 are roughly the same for HalfCheetah-v1, but the final performance of IPE-TD3 is noticeably better than that of Re-TD3 in the other cases.

5 CONCLUSION

In this paper, we have proposed a new interactive parallel learning scheme, IPE, to enhance the performance of off-policy RL systems.
In the proposed IPE scheme, multiple identical learners, each with its own value functions and policy, share a common experience replay buffer and search for a good policy with the guidance of the best policy information from the previous search interval. The information of the best policy parameter of the previous search interval is fused in a soft manner by constructing an augmented loss function for the policy update, which enlarges the overall search space covered by the multiple learners. The guidance by the previous best policy and the enlarged search space enable faster and better policy search in the policy parameter space. The IPE-enhanced algorithms, constructed by applying the proposed IPE scheme to TD3 or SAC, outperform most of the current state-of-the-art continuous-action RL algorithms. Although we mainly considered continuous-action off-policy algorithms in this paper, the proposed IPE method can also be applied to RL with discrete actions, as seen in Appendix E. In the case of continuous-action control, the gain from IPE can benefit the recent trend of fast computer-based prototyping of complex robotic systems and autonomous cars, whereas in the discrete-action case IPE can search for better policy parameters on more challenging tasks.

APPENDIX A. THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT ALGORITHM AND THE SOFT ACTOR-CRITIC ALGORITHM

A.1 THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT (TD3) ALGORITHM

The TD3 algorithm is a current state-of-the-art off-policy algorithm and a variant of the deep deterministic policy gradient (DDPG) algorithm (Lillicrap et al. (2015)). TD3 tries to resolve two problems in typical actor-critic algorithms: 1) overestimation bias and 2) high variance in the approximation of the Q-function. To reduce the bias, TD3 maintains two Q-functions and uses the minimum of the two Q-function values to compute the target value, while to reduce the variance in the gradient, the policy is updated less frequently than the Q-functions. Specifically, let $Q_{\theta_1}$, $Q_{\theta_2}$, and $\pi_\phi$ be the two current Q-functions and the current deterministic policy, respectively, and let $Q_{\theta_1'}$, $Q_{\theta_2'}$, and $\pi_{\phi'}$ be the target networks of $Q_{\theta_1}$, $Q_{\theta_2}$, and $\pi_\phi$, respectively. The target networks are initialized to be identical to the current networks. At time step $t$, the TD3 algorithm takes an action $a_t$ with exploration noise $\epsilon$: $a_t = \pi_\phi(s_t) + \epsilon$, where $\epsilon$ is zero-mean Gaussian noise with variance $\sigma^2$, i.e., $\epsilon \sim \mathcal{N}(0, \sigma^2)$. Then, the environment returns reward $r_t$ and the state switches to $s_{t+1}$. The TD3 algorithm stores the experience $(s_t, a_t, r_t, s_{t+1})$ in the experience replay buffer $\mathcal{D}$. After storing the experience, the Q-function parameters $\theta_1$ and $\theta_2$ are updated by gradient descent on the following loss functions:

$$L(\theta_j) = \hat{\mathbb{E}}_{(s,a,r,s')\sim\mathcal{D}}\big[(y - Q_{\theta_j}(s,a))^2\big], \quad j = 1, 2 \tag{7}$$

where $\hat{\mathbb{E}}_{(s,a,r,s')\sim\mathcal{D}}$ denotes the sample expectation over a uniform random mini-batch of size B drawn from the replay buffer $\mathcal{D}$, and the target value $y$ is given by

$$y = r + \gamma\,\min_{j=1,2} Q_{\theta_j'}(s', \pi_{\phi'}(s') + \epsilon), \quad \epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde\sigma^2), -c, c). \tag{8}$$

Here, for the computation of the target value, the minimum of the two target Q-functions is used to reduce the bias. The procedure of taking an action and performing gradient descent on $\theta_1$ and $\theta_2$ is repeated $d$ times ($d = 2$), and then the policy and target networks are updated.
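For concreteness, here is a minimal PyTorch-style sketch of the target computation in Eq. (8). The module names (target_actor, target_q1, target_q2) are hypothetical, and clamping the action to its valid range and masking terminal transitions are omitted for brevity.

    import torch

    def td3_target(target_actor, target_q1, target_q2, rewards, next_states,
                   gamma=0.99, sigma_t=0.2, c=0.5):
        # Eq. (8): clipped Gaussian smoothing noise on the target action,
        # then the minimum of the two target Q-functions.
        with torch.no_grad():
            a_next = target_actor(next_states)
            noise = (torch.randn_like(a_next) * sigma_t).clamp(-c, c)
            q_min = torch.min(target_q1(next_states, a_next + noise),
                              target_q2(next_states, a_next + noise))
            return rewards + gamma * q_min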
The policy parameter $\phi$ is updated by gradient descent, minimizing the loss function for $\phi$:

$$L(\phi) = -\hat{\mathbb{E}}_{s\sim\mathcal{D}}\big[Q_{\theta_1}(s, \pi_\phi(s))\big], \tag{9}$$

and the target network parameters $\theta_j'$ and $\phi'$ are updated as

$$\theta_j' \leftarrow (1-\tau)\theta_j' + \tau\theta_j, \qquad \phi' \leftarrow (1-\tau)\phi' + \tau\phi. \tag{10}$$

The networks are trained until the number of time steps reaches a predefined maximum.

A.2 THE SOFT ACTOR-CRITIC (SAC) ALGORITHM

The SAC algorithm is an off-policy algorithm comparable to TD3 and yields good performance especially in environments with high-dimensional action spaces. SAC is a maximum entropy RL algorithm based on an objective combining the discounted sum of rewards and the entropy of the current policy:

$$\mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\gamma^t\big(r(s_t, a_t) + \alpha\,\mathcal{H}(\pi(\cdot|s_t))\big)\right], \tag{11}$$

where $\alpha$ is a weighting factor that balances the reward and the entropy of the policy. This objective encourages the algorithm to explore more diverse experiences so as to find a better policy. The SAC algorithm has one value function $V_\psi(s)$, two Q-functions $Q_{\theta_j}(s,a)$, $j = 1, 2$, and one stochastic policy $\pi_\phi(\cdot|s)$, parameterized by $\psi$, $\theta_j$, and $\phi$, respectively. It also has a target value function $V_{\psi'}(s)$ for stable convergence. After initialization, at each time step $t$ the algorithm obtains the experience $(s_t, a_t, r_t, s_{t+1})$ by interacting with the environment and stores it in the experience replay buffer $\mathcal{D}$. Then, it updates the parameters $\psi$, $\theta_j$, and $\phi$ by gradient descent on the following loss functions:

$$J(\psi) = \hat{\mathbb{E}}_{s\sim\mathcal{D},\,a\sim\pi_\phi(\cdot|s)}\left[\tfrac{1}{2}\big\|V_\psi(s) - \bar{Q}(s,a) + \log\pi_\phi(a|s)\big\|_2^2\right], \tag{12}$$

$$J(\theta_j) = \hat{\mathbb{E}}_{(s,a,r,s')\sim\mathcal{D}}\left[\tfrac{1}{2}\big(Q_{\theta_j}(s,a) - r/\alpha - \gamma V_{\psi'}(s')\big)^2\right], \quad j = 1, 2, \tag{13}$$

$$J(\phi) = \hat{\mathbb{E}}_{s\sim\mathcal{D},\,a\sim\pi_\phi(\cdot|s)}\big[\log\pi_\phi(a|s) - \bar{Q}(s,a)\big], \tag{14}$$

where $\bar{Q}(s,a) = \min\{Q_{\theta_1}(s,a), Q_{\theta_2}(s,a)\}$, and $\hat{\mathbb{E}}_{(s,a,r,s')\sim\mathcal{D}}$ is the sample expectation over a uniform random mini-batch of size B drawn from the replay buffer $\mathcal{D}$. After updating the parameters, the target value function parameter $\psi'$ is updated as

$$\psi' \leftarrow (1-\tau)\psi' + \tau\psi. \tag{15}$$

In order to obtain diverse experience in the initial stage of learning, SAC uses a uniform policy for the first $T_{\text{initial}}$ time steps and the current policy $\pi_\phi(\cdot|s)$ for the rest of learning.

APPENDIX B. PSEUDOCODE OF THE IPE-TD3 ALGORITHM

Algorithm 1 The Interactive Parallel Exploration TD3 (IPE-TD3) Algorithm
Require: N: number of learners, $T_{\text{initial}}$: initial exploration time steps, T: maximum time steps, M: best-policy update period, B: mini-batch size, d: update interval for the policy and target networks.
1: Initialize $\phi^1 = \cdots = \phi^N = \phi^b$ and $\theta_j^1 = \cdots = \theta_j^N$, $j = 1, 2$, randomly.
2: Initialize $\beta = 1$, $t = 0$
3: while $t < T$ do
4:   $t \leftarrow t + 1$ (one time step)
5:   for $i = 1, 2, \cdots, N$ in parallel do
6:     if $t < T_{\text{initial}}$ then
7:       Take a uniform random action $a_t^i$ in environment copy $E^i$
8:     else
9:       Take the action $a_t^i = \pi^i(s_t^i) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma^2)$, in environment copy $E^i$
10:    end if
11:    Store the experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ in the shared common experience replay buffer $\mathcal{D}$
12:  end for
13:  if $t < T_{\text{initial}}$ then
14:    continue (i.e., go to the beginning of the while loop)
15:  end if
16:  for $i = 1, 2, \cdots, N$ in parallel do
17:    for $k = 1, 2, \cdots, N$ do
18:      Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\ldots,B}$ from $\mathcal{D}$
19:      Update $\theta_j^i$, $j = 1, 2$, by gradient descent minimizing $\tilde{L}(\theta_j^i)$ in (5) with $\mathcal{B}$
20:      if $k \equiv 0 \pmod d$ then
21:        Update $\phi^i$ by gradient descent minimizing $\tilde{L}(\phi^i)$ in (6) with $\mathcal{B}$
22:        Update the target networks: $(\theta_j^i)' \leftarrow (1-\tau)(\theta_j^i)' + \tau\theta_j^i$, $(\phi^i)' \leftarrow (1-\tau)(\phi^i)' + \tau\phi^i$
23:      end if
24:    end for
25:  end for
26:  if $t \equiv 0 \pmod M$ then
27:    Select the best learner $b$
28:    Adapt $\beta$ with (2)
29:  end if
30: end while

APPENDIX C. PSEUDOCODE OF THE IPE-SAC ALGORITHM

Algorithm 2 The Interactive Parallel Exploration SAC (IPE-SAC) Algorithm
Require: N: number of learners, $T_{\text{initial}}$: initial exploration time steps, T: maximum time steps, M: best-policy update period, B: mini-batch size.
1: Initialize $\psi^1 = \cdots = \psi^N$, $\phi^1 = \cdots = \phi^N = \phi^b$, and $\theta_j^1 = \cdots = \theta_j^N$, $j = 1, 2$, randomly.
2: Initialize $\beta = 1$, $t = 0$
3: while $t < T$ do
4:   $t \leftarrow t + 1$ (one time step)
5:   for $i = 1, 2, \cdots, N$ in parallel do
6:     if $t < T_{\text{initial}}$ then
7:       Take a uniform random action $a_t^i$ in environment copy $E^i$
8:     else
9:       Take an action $a_t^i \sim \pi^i(\cdot|s_t^i)$ in environment copy $E^i$
10:    end if
11:    Store the experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ in the shared common experience replay buffer $\mathcal{D}$
12:  end for
13:  if $t < T_{\text{initial}}$ then
14:    continue (i.e., go to the beginning of the while loop)
15:  end if
16:  for $i = 1, 2, \cdots, N$ in parallel do
17:    for $k = 1, 2, \cdots, N$ do
18:      Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\ldots,B}$ from $\mathcal{D}$
19:      Update $\psi^i$, $\theta_j^i$, and $\phi^i$ by gradient descent minimizing (16), (17), and (18) with $\mathcal{B}$, respectively
20:      Update the target parameters: $(\psi^i)' \leftarrow (1-\tau)(\psi^i)' + \tau\psi^i$
21:    end for
22:  end for
23:  if $t \equiv 0 \pmod M$ then
24:    Select the best learner $b$
25:    Update the best policy parameter $\phi^b$
26:    Adapt $\beta$ with (2)
27:  end if
28: end while

In IPE-SAC, each learner has its own parameters $\psi^i$, $\theta_1^i$, $\theta_2^i$, and $\phi^i$ for its value function, two Q-functions, and policy. Each learner also has $(\psi^i)'$, the parameter of the target value function. For the distance measure between two policies, we use the mean square difference of the mean actions of the Gaussian policies, given by $D(\pi(\cdot|s), \tilde\pi(\cdot|s)) = \tfrac{1}{2}\|\mathrm{mean}\{\pi(\cdot|s)\} - \mathrm{mean}\{\tilde\pi(\cdot|s)\}\|_2^2$. The $i$-th learner updates the parameters $\psi^i$, $\theta_1^i$, $\theta_2^i$, and $\phi^i$ every timestep by minimizing

$$\tilde{L}(\psi^i) = \hat{\mathbb{E}}_{s\sim\mathcal{D},\,a\sim\pi_{\phi^i}(\cdot|s)}\left[\tfrac{1}{2}\big\|V_{\psi^i}(s) - \bar{Q}^i(s,a) + \log\pi_{\phi^i}(a|s)\big\|_2^2\right] \tag{16}$$

$$\tilde{L}(\theta_j^i) = \hat{\mathbb{E}}_{(s,a,r,s')\sim\mathcal{D}}\left[\tfrac{1}{2}\big(Q_{\theta_j^i}(s,a) - r/\alpha - \gamma V_{(\psi^i)'}(s')\big)^2\right], \quad j = 1, 2 \tag{17}$$

$$\tilde{L}(\phi^i) = \hat{\mathbb{E}}_{s\sim\mathcal{D},\,a\sim\pi_{\phi^i}(\cdot|s)}\left[\log\pi_{\phi^i}(a|s) - \bar{Q}^i(s,a) + \mathbb{1}_{\{i\neq b\}}\,\tfrac{\beta}{2}\,\big\|\mathrm{mean}\{\pi_{\phi^i}(\cdot|s)\} - \mathrm{mean}\{\pi_{\phi^b}(\cdot|s)\}\big\|_2^2\right] \tag{18}$$

where $\bar{Q}^i(s,a) = \min\{Q_{\theta_1^i}(s,a), Q_{\theta_2^i}(s,a)\}$. After updating these parameters, each learner updates its target value function parameter. With these loss functions, the rest of the procedure is the same as the general IPE procedure described in Section 3. The pseudocode of the IPE-SAC algorithm is shown above.

APPENDIX D.
RESULTS OF IPE-SAC ON HUMANOID (RLLAB)

As mentioned already, IPE is general in that it can be applied to other off-policy algorithms. Here, we provide numerical results on IPE-SAC, shown in Appendix C, constructed by combining IPE with SAC. Experiments were performed on the Humanoid (rllab) task (Duan et al. (2016)), which requires more exploration. We compared IPE-SAC with SAC and with a multi-learner reloading SAC (Re-SAC), which periodically copies the parameters of the best learner to the other learners.

D.1 PARAMETER SETTING

SAC The networks for the state-value function, the two Q-functions, and the policy had 2 hidden layers of size 256. The activation functions for the hidden layers and the last layers were ReLU and linear, respectively. We used the Adam optimizer with learning rate $3\times10^{-4}$, discount factor $\gamma = 0.99$, and target smoothing factor $\tau = 5\times10^{-3}$. The algorithm was trained with random mini-batches of size B = 256 from the experience replay buffer of maximum size $10^6$. The reward scale for updating the Q-functions was 10 for the Humanoid (rllab) environment. The initial exploration period $T_{\text{initial}}$ was set to 1000 timesteps.

IPE-SAC Additional parameters for IPE-SAC are as follows. We used N = 4 learners, the period M = 500 for updating the best policy and $\beta$, and the number of recent episodes $E_r = 10$ for determining the best learner $b$. The parameters $d_{\text{search}}$ and $\rho$ for the exploration range were 0.01 and 2, respectively. We used the initial exploration period $T_{\text{initial}} = 250$ timesteps.

D.2 PERFORMANCE ON HUMANOID (RLLAB)

The learning curve on Humanoid (rllab) is shown in Fig. 5.¹ It is seen that IPE-SAC outperforms the original SAC and Re-SAC. This result shows a promising aspect of IPE: it can be useful for tasks requiring more exploration.

¹The simulation is still running, and we will update the graph when the simulation is finished.

APPENDIX E. PSEUDOCODE OF THE IPE-DQN ALGORITHM

Algorithm 3 The Interactive Parallel Exploration DQN (IPE-DQN) Algorithm
Require: N: number of learners, $T_{\text{initial}}$: initial exploration time steps, T: maximum time steps, M: best-policy update period, B: mini-batch size, f: update interval for the Q-functions, d: update interval for the target Q-functions.
1: Initialize $\theta^1 = \cdots = \theta^N = \theta^b$ randomly.
2: Initialize $\beta = 1$, $t = 0$
3: while $t < T$ do
4:   $t \leftarrow t + 1$ (one time step)
5:   for $i = 1, 2, \cdots, N$ in parallel do
6:     if $t < T_{\text{initial}}$ then
7:       Take a uniform random action $a_t^i$ in environment copy $E^i$
8:     else
9:       Take the action $a_t^i = \arg\max_{a\in\mathcal{A}} Q_{\theta^i}(s_t^i, a)$ with probability $1-\varepsilon$, or a uniform random action $a_t^i$ with probability $\varepsilon$, in environment copy $E^i$
10:    end if
11:    Store the experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ in the shared common experience replay buffer $\mathcal{D}$
12:  end for
13:  if $t < T_{\text{initial}}$ then
14:    continue (i.e., go to the beginning of the while loop)
15:  end if
16:  for $i = 1, 2, \cdots, N$ in parallel do
17:    for $k = 1, 2, \cdots, N$ do
18:      if $k \equiv 0 \pmod f$ then
19:        Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\ldots,B}$ from $\mathcal{D}$
20:        Update $\theta^i$ by gradient descent minimizing (19) with $\mathcal{B}$
21:      end if
22:    end for
23:  end for
24:  if $t \equiv 0 \pmod d$ then
25:    for $i = 1, 2, \cdots, N$ in parallel do
26:      Update $(\theta^i)' \leftarrow \theta^i$
27:    end for
28:  end if
29:  if $t \equiv 0 \pmod M$ then
30:    Select the best learner $b$
31:    Update the best policy parameter $\theta^b$
32:    Adapt $\beta$ with (2)
33:  end if
34: end while

IPE can be applied to off-policy algorithms with discrete action spaces as well as continuous action spaces. Thus, we applied IPE to DQN to construct IPE-DQN.
In IPE-DQN, each learner has its own Q-function parameters $\theta^i$ and target Q-function parameters $(\theta^i)'$. We define the distance between two Q-functions $Q(s,a)$ and $\tilde{Q}(s,a)$ as

$$D(Q(s,\cdot), \tilde{Q}(s,\cdot)) = \mathrm{KL}\big(\mathrm{softmax}(Q(s,\cdot))\,\|\,\mathrm{softmax}(\tilde{Q}(s,\cdot))\big).$$

We used the Q-function parameter $\theta^b$ of the best learner as the reference parameter, which was originally $\phi^b$ in (1) and (3). The $i$-th learner updates the parameters $\theta^i$ every $f$ timesteps by minimizing

$$\tilde{L}(\theta^i) = \hat{\mathbb{E}}_{(s,a,r,s')\sim\mathcal{D}}\left[\tfrac{1}{2}\,\|Q_{\theta^i}(s,a) - y\|_2^2 + \mathbb{1}_{\{i\neq b\}}\,\beta\,\mathrm{KL}\big(\mathrm{softmax}(Q_{\theta^i}(s,\cdot))\,\|\,\mathrm{softmax}(Q_{\theta^b}(s,\cdot))\big)\right] \tag{19}$$

where $y = r + \gamma\,Q_{(\theta^i)'}\big(s', \arg\max_{a'\in\mathcal{A}} Q_{\theta^i}(s', a')\big)$. With this loss function and the reference policy, the rest of the procedure is the same as the general IPE procedure described in Section 3. The pseudocode of the IPE-DQN algorithm is shown above.
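A minimal PyTorch sketch of the softmax-KL distance used in Eq. (19) follows; q_i and q_b are hypothetical (batch, num_actions) tensors of Q-values for learner i and the best learner b, with the best learner treated as a fixed target.

    import torch
    import torch.nn.functional as F

    def softmax_kl(q_i: torch.Tensor, q_b: torch.Tensor) -> torch.Tensor:
        """KL( softmax(q_i) || softmax(q_b) ), averaged over the batch."""
        log_p = F.log_softmax(q_i, dim=-1)
        log_q = F.log_softmax(q_b.detach(), dim=-1)  # no gradient through best learner
        return (log_p.exp() * (log_p - log_q)).sum(dim=-1).mean()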
1. What is the main contribution of the paper regarding parallelizing off-policy reinforcement learning systems? 2. How does the proposed approach compare to other parallel training architectures and population-based training methods in the literature? 3. Why was TD3 used as the basis for the new approach, and what justifies the choice of infinite buffer size? 4. Can the authors provide additional analysis to demonstrate the significance of the proposed method's improvement in variance? 5. Why were certain baselines (e.g., SAC) chosen over others (e.g., D4PG, soft Q-learning, population-based training methods)? 6. How does the proposed method perform when combined with other base off-policy methods? 7. What is the motivation behind assuming access to multiple instances of the environment, given the limitations of this assumption in real-world applications? 8. How does the paper address exploration-exploitation tradeoffs, intrinsic motivation, or count-based exploration methods? 9. What is the significance of the results in the context of continuous control domains, and how do they contribute to the larger goal of maximizing simulation environments? 10. Are there any grammatical errors in the paper that need to be addressed?
Review
Review

This paper describes a new architecture for parallelizing off-policy reinforcement learning systems with a pool of independent learners trained on identical, but independent, instances of the environment, with a scheme for periodically synchronizing the policy knowledge across the pool. The paper provides demonstrations in several continuous control domains.

I think this paper should be rejected because: (1) the approach is not well justified or placed within the large literature on parallel training architectures and population-based training methods, and (2) the results are competitive with the best in each domain, but there are many missing details. Since the contribution is entirely supported by empirical evidence, these issues need to be clarified. I look forward to the author response, as I will pose several questions below and my final score will carefully take the answers into account.

Justification of decision. There are numerous papers on parallel architectures for training deep RL systems [1, 2, 6] and you cited a few, and while not all of them focus on continuous control, there are design decisions and insights in those works that must be relevant to your efforts. You should make those connections clear in the paper. One line of the paper is not nearly enough. The stated focus of the paper is exploration-exploitation, yet there is little to no discussion of other ideas including noisy networks, intrinsic motivation, or count-based exploration methods. The paper is missing a lot of key connections to the literature.

I am certainly not a fan of architectures that assume access to many instances of the environment. In this case that assumption seems worse because of the target application: continuous control domains. These domains are simulations of physical control systems; on a robot the agent receives only one stream of experience, and thus these architectures would not work well. Though there is some work on applying these multi-environment architectures to farms of robot arms, the reality of the real world is that the arms end up being very different due to wear and tear, and engineers must constantly fix the hardware because these multi-environment architectures do not work when the environments are different. We cannot lose sight of the goal here: maximizing these simulation environments is not of interest in itself, it is a stepping stone. Architectures that only work on simulations that afford multiple identical environments but fail in the real world have very limited application. I think this paper needs to motivate why parallel training in this way, in these robotics-inspired domains, is interesting and worthwhile.

The main area of concern with this paper is the experiment section. There are several issues/questions I would like the authors to address:

1) You built on top of TD3; however, you just used the parameter settings of TD3 as published and didn't tune them. This is a problem because it could just be that the existing parameter choices for TD3 were just better for the new approach. You have to take additional effort in this case to ensure your method is actually better than just using TD3. Additional parameter tuning of TD3 is required here.

2) I think it's an odd choice for TD3 to have an infinite buffer, as recent work has shown, at least for DQN, that large buffers can hurt performance [7]. Can you justify this choice beyond "the authors of TD3 did it that way"?

3) Why is R_eval different for each method?
4) Why did you not compare to TD3 on the same set of domains as used in the TD3 paper? Why a subset? Why these particular domains?

5) In 2 of the 4 domains the proposed method ties or is worse than the baselines. In HalfCheetah it looks close to significant, and in the Ant domain the result is unlikely to be significant because the error bars overlap and the error bars of TD3 are wider than those of the other methods, so a simple visual inspection is not enough. There does not seem to be a strong case for the new method here. I may be misunderstanding the results. Help me see the significance.

6) The paper claims improvement in variance, but this requires additional analysis in the form of an F-test or better.

7) Why these baselines (e.g., SAC) and not others? Why did you not include D4PG [6]? Soft Q-learning? A population-based training method [3, 4, 5, 8], to name a few?

[1] GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning
[2] IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
[3] Human-level performance in first-person multiplayer games with population-based deep reinforcement learning
[4] Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents
[5] Structured Evolution with Compact Architectures for Scalable Policy Optimization
[6] Distributed Distributional Deterministic Policy Gradients
[7] A Deeper Look at Experience Replay
[8] Evolution Strategies as a Scalable Alternative to Reinforcement Learning

Small things that did not impact the score:
1) References to "search interval" in the abstract are confusing because the reader has not read the paper yet.
2) The description of the method in the abstract is too specific.
3) P1 intro: not a topic sentence for what follows.
4) "performs an action to its environment" >> grammar
5) "One way to parallel learning..." >> grammar
6) "that the value parameter and" >> grammar
7) "pitfalls" >> minima
8) Did you try combining your method with other base off-policy methods? How did it work?
9) GAE undefined?
10) "among the baseline and" >> grammar ... there are many grammar errors
ICLR
Title Unsupervised Perceptual Rewards for Imitation Learning Abstract Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a suitable reward function takes considerable manual engineering and often requires additional and potentially visible sensors to be installed just to measure whether the task has been executed successfully. Furthermore, many interesting tasks consist of multiple steps that must be executed in sequence. Even when the final outcome can be measured, it does not necessarily provide useful feedback on these implicit intermediate steps or sub-goals. To address these issues, we propose leveraging the abstraction power of intermediate visual representations learned by deep models to quickly infer perceptual reward functions from small numbers of demonstrations. We present a method that is able to identify the key intermediate steps of a task from only a handful of demonstration sequences, and automatically identify the most discriminative features for identifying these steps. This method makes use of the features in a pre-trained deep model, but does not require any explicit sub-goal supervision. The resulting reward functions, which are dense and smooth, can then be used by an RL agent to learn to perform the task in real-world settings. To evaluate the learned reward functions, we present qualitative results on two real-world tasks and a quantitative evaluation against a human-designed reward function. We also demonstrate that our method can be used to learn a complex real-world door opening skill using a real robot, even when the demonstration used for reward learning is provided by a human using their own hand. To our knowledge, these are the first results showing that complex robotic manipulation skills can be learned directly and without supervised labels from a video of a human performing the task.
1 INTRODUCTION Social learning, such as imitation, plays a critical role in allowing humans and animals to quickly acquire complex skills in the real world. Humans can use this weak form of supervision to acquire behaviors from very small numbers of demonstrations, in sharp contrast to deep reinforcement learning (RL) methods, which typically require extensive training data. In this work, we make use of two ideas to develop a scalable and efficient imitation learning method: first, imitation makes use of extensive prior knowledge to quickly glean the "gist" of a new task from even a small number of demonstrations; second, imitation involves both observation and trial-and-error learning (RL). Building on these ideas, we propose a reward learning method for understanding the intent of a user demonstration through the use of pre-trained visual features, which provide the "prior knowledge" for efficient imitation. Our algorithm aims to discover not only the high-level goal of a task, but also the implicit sub-goals and steps that comprise more complex behaviors. Extracting such sub-goals can allow the agent to make maximal use of the information contained in the demonstration. Once the reward function has been extracted, the agent can use its own experience at the task to determine the physical structure of the behavior, even when the reward is provided by an agent with a substantially different embodiment (e.g. a human providing a demonstration for a robot). ∗Work done as part of the Google Brain Residency program (g.co/brainresidency). To our knowledge, our method is the first reward learning technique that learns generalizable vision-based reward functions for complex robotic manipulation skills from only a few demonstrations provided directly by a human. Although prior methods have demonstrated reward learning with vision for real-world robotic tasks, they have either required kinesthetic demonstrations with robot state for reward learning (Finn et al., 2015), or else required low-dimensional state spaces and numerous demonstrations (Wulfmeier et al., 2016). The contributions of this paper are: • A method for perceptual reward learning from only a few demonstrations of real-world tasks. Reward functions are dense and incremental, with automated unsupervised discovery of intermediate steps. • The first vision-based reward learning method that can learn a complex robotic manipulation task from a few human demonstrations in real-world robotic experiments. • A set of empirical experiments that show that the learned visual representations inside a pre-trained deep model are general enough to be directly used to represent goals and sub-goals for manipulation skills in new scenes without retraining. 1.1 RELATED WORK Deep reinforcement learning and deep robotic learning work has previously examined learning reward functions based on images.
One of the most common approaches to image-based reward functions is to directly specify a "target image" by showing the learner the raw pixels of a successful task completion state, and then using the distance to that image (or its latent representation) as a reward function (Lange et al., 2012; Finn et al., 2015; Watter et al., 2015). However, this approach has several severe shortcomings. First, the use of a target image presupposes that the system can achieve a substantially similar visual state, which precludes generalization to semantically similar but visually distinct situations. Second, the use of a target image does not provide the learner with information about which facets of the image are more or less important for task success, which might result in the learner excessively emphasizing irrelevant factors of variation (such as the color of a door due to light and shadow) at the expense of relevant factors (such as whether or not the door is open or closed). Analyzing a collection of demonstrations to learn a parsimonious reward function that explains the demonstrated behavior is known as inverse reinforcement learning (IRL) (Ng et al., 2000). A few recently proposed IRL algorithms have sought to combine IRL with vision and deep network representations (Finn et al., 2016b; Wulfmeier et al., 2016). However, scaling IRL to high-dimensional systems and open-ended reward representations is very challenging. The previous work closest to ours used images together with robot state information (joint angles and end effector pose), with tens of demonstrations provided through kinesthetic teaching (Finn et al., 2016b). The approach we propose in this work, which can be interpreted as a simple and efficient approximation to IRL, can use demonstrations that consist of videos of a human performing the task using their own body, and can acquire reward functions with intermediate sub-goals using just a few examples. This kind of efficient vision-based reward learning from videos of humans has not been demonstrated in prior IRL work. The idea of perceptual reward functions using raw pixels was also explored by Edwards et al. (2016), which, while sharing the same spirit as this work, was limited to simple synthetic tasks and used single images as perceptual goals rather than multiple demonstration videos. 2 SIMPLE INVERSE REINFORCEMENT LEARNING WITH VISUAL FEATURES The key insight in our approach is that we can exploit the semantically meaningful and powerful features in a pre-trained deep neural network to infer task goals and sub-goals using a very simple approximate inverse reinforcement learning method. The pre-trained network effectively transfers prior knowledge about the visual world to make imitation learning fast and robust. Our approach can be interpreted as a simple approximation to inverse reinforcement learning under a particular choice of system dynamics, as discussed in Section 2.1. While this approximation is somewhat simplistic, it affords an efficient and scalable learning rule that avoids overfitting even when trained on a small number of demonstrations. As depicted in Fig. 1, our algorithm first segments the demonstrations into segments based on perceptual similarity, as described in Section 2.2. Intuitively, the resulting segments correspond to sub-goals or steps of the task. The segments can then be used as a supervision signal for learning step classifiers, described in Section 2.3, which produces a single perceptual reward function for each step of the task.
The combined reward function can then be used with a reinforcement learning algorithm to learn the demonstrated behavior. Although this method for extracting reward functions is exceedingly simple, its power comes from the use of highly general and robust pre-trained visual features, and our key empirical result is that such features are sufficient to acquire effective and generalizable reward functions for real-world manipulation skills. We use the Inception network (Szegedy et al., 2015) pre-trained for ImageNet classification (Deng et al., 2009) to obtain the visual features for representing the learned rewards. It is well known that visual features in such networks are quite general and can be reused for other visual tasks. However, it is less clear if sparse subsets of such features can be used directly to represent goals and sub-goals for real-world manipulation skills. Our experimental evaluation suggests that indeed they can, and that the resulting reward representations are robust and reliable enough for real-world robotic learning without any fine-tuning of the features. In this work, we use all activations starting from the first "mixed" layer that follows the first 5 convolutional layers (this layer's activation map is of size 35x35x256 given a 299x299 input). While this paper focuses on visual perception, the approach is general and can be applied to other modalities (e.g. audio and tactile).

2.1 INVERSE REINFORCEMENT LEARNING WITH TIME-INDEPENDENT GAUSSIAN MODELS Inverse reinforcement learning can be performed with a variety of algorithms (Ng et al., 2000), ranging from margin-based methods (Abbeel & Ng, 2004; Ratliff et al., 2006) to methods based on probabilistic models (Ramachandran & Amir, 2007; Ziebart et al., 2008). In this work, we use a very simple approximation to the MaxEnt IRL model (Ziebart et al., 2008), a popular probabilistic approach to IRL. We will use $s_t$ to denote the visual feature activations at time $t$, which constitute the state, $s_{it}$ to denote the $i$-th feature at time $t$, and $\tau = \{s_1, \ldots, s_T\}$ to denote a sequence or trajectory of these activations in a video. In MaxEnt IRL, the demonstrated trajectories $\tau$ are assumed to be drawn from a Boltzmann distribution according to:

$$p(\tau) = p(s_1, \ldots, s_T) = \frac{1}{Z}\exp\left(\sum_{t=1}^{T} R(s_t)\right), \tag{1}$$

where $R(s_t)$ is the unknown reward function. The principal computational challenge in MaxEnt IRL is to approximate $Z$, since the states at each time step are not independent, but are constrained by the system dynamics. In deterministic systems, where $s_{t+1} = f(s_t, a_t)$ for actions $a_t$ and dynamics $f$, the dynamics impose constraints on which trajectories $\tau$ are feasible. In stochastic systems, where $s_{t+1} \sim p(s_{t+1}|s_t, a_t)$, we must also account for the dynamics distribution in Equation (1), as discussed by Ziebart et al. (2008). Prior work has addressed this using dynamic programming to exactly compute $Z$ in small, discrete systems (Ziebart et al., 2008), or by using sampling to estimate $Z$ for large, continuous domains (Kalakrishnan et al., 2010; Boularias et al., 2011; Finn et al., 2016b). Since our state representation corresponds to a large vector of visual features, exact dynamic programming is infeasible. Sample-based approximation requires running a large number of trials to estimate $Z$ and, as shown in recent work (Finn et al., 2016a), corresponds to a variant of generative adversarial networks (GANs), with all of the accompanying stability and optimization challenges.
Furthermore, the corresponding model for the reward function is complex, making it prone to overfitting when only a small number of demonstrations is available. When faced with a difficult learning problem in extremely low-data regimes, a standard solution is to resort to simple, biased models, so as to minimize overfitting. We adopt precisely this approach in our work: instead of approximating the complex posterior distribution over trajectories under nonlinear dynamics, we use a simple biased model that affords efficient learning and minimizes overfitting. Specifically, we assume that all trajectories are dynamically feasible, and that the distribution over each activation at each time step is independent of all other activations and all other time steps. This corresponds to the IRL equivalent of a naïve Bayes model: in the same way that naïve Bayes uses an independence assumption to mitigate overfitting in high-dimensional feature spaces, we use independence between both time steps and features to learn from very small numbers of demonstrations. Under this assumption, the probability of a trajectory $\tau$ factorizes according to

$$p(\tau) = \prod_{t=1}^{T}\prod_{i=1}^{N} p(s_{it}) = \prod_{t=1}^{T}\prod_{i=1}^{N} \frac{1}{Z_{it}}\exp\big(R_i(s_{it})\big),$$

which corresponds to a reward function of the form $R_t(s_t) = \sum_{i=1}^{N} R_i(s_{it})$. We can then simply choose a form for $R_i(s_{it})$ that can be normalized analytically, which in our case is a quadratic in $s_{it}$, such that $\exp(R_i(s_{it}))/Z_{it}$ is a Gaussian distribution, and the original trajectory distribution is a naïve Bayes model. While this approximation is quite drastic, it yields an exceedingly simple learning rule: in the most basic version, we have only to fit the mean and variance of each feature distribution, and then use the log of the resulting Gaussian as the reward.

2.2 DISCOVERY OF INTERMEDIATE STEPS The simple IRL model in the previous section can be used to acquire a single quadratic reward function in terms of the visual features $s_t$. However, for complex multi-stage tasks, this model can be too coarse, making task learning slow and difficult. We therefore instead fit multiple quadratic reward functions, with one reward function per intermediate step or goal. These steps are discovered automatically in the first stage of our method, which is performed independently on each demonstration. If multiple demonstrations are available, they are pooled together in the feature selection step discussed in the next section, and could in principle be combined at the segmentation stage as well, though we found this to be unnecessary in our prototype. The intermediate steps model extends the simple independent Gaussian model in the previous section by assuming that

$$p(\tau) = \prod_{t=1}^{T}\prod_{i=1}^{N} \frac{1}{Z_{it}}\exp\big(R_{i g_t}(s_{it})\big),$$

where $g_t$ is the index of the goal or step corresponding to time step $t$. Learning then corresponds to identifying the boundaries of the steps in the demonstration, and fitting independent Gaussian feature distributions at each step. Note that this corresponds exactly to segmenting the demonstration such that the variance of each feature within each segment is minimized. In this work, we employ a simple recursive video segmentation algorithm, described in Algorithm 1. Intuitively, this method breaks down a sequence in such a way that each frame in a segment is abstractly similar to every other frame in that segment. The number of segments is provided manually in this approach, though it would be straightforward to also utilize standard model selection criteria for choosing this number automatically.
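As a minimal NumPy sketch of this learning rule, one can fit an independent Gaussian per feature over the demonstration frames assigned to a segment and use the (unnormalized) Gaussian log-density as the reward. Here demo_feats is a hypothetical (num_frames, num_features) array of network activations, and the constant normalization terms are dropped.

    import numpy as np

    def fit_gaussian_reward(demo_feats, eps=1e-6):
        # Fit an independent Gaussian per feature on the frames of one segment.
        mu = demo_feats.mean(axis=0)
        var = demo_feats.var(axis=0) + eps  # guard against zero variance
        def reward(feats):
            # Unnormalized Gaussian log-density of a new frame's features.
            return float(-0.5 * np.sum((feats - mu) ** 2 / var))
        return reward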
There exists a body of unsupervised video segmentation methods (Yuan et al., 2007) that would likely enable a less constrained set of demonstrations to be used. While this is an important avenue for future work, we show that our simple approach is sufficient to demonstrate the efficacy of our method on a realistic set of demonstrations. We also investigate how to reduce the search space of similar feature patterns across videos in Section 2.3. This would render the discovery of video alignments tractable for an optimization method such as the one used in Joulin et al. (2014) for video co-localization. The complexity of Algorithm 1 is $O(n^m)$, where $n$ is the number of frames in a sequence and $m$ the number of splits. Note that dynamic programming is not applicable to this algorithm because each sub-problem, i.e. how to split a sequence after the $i$-th frame, depends on the segmentation chosen before the $i$-th frame. We also experiment with a greedy binary version of this algorithm (Algorithm 2, detailed in Section A.1): first split the entire sequence in two, then recursively split each new segment in two. While not exactly minimizing the variance across all segments, it is significantly more efficient ($O(n^2 \log m)$) and yields qualitatively sensible results.

Algorithm 1 Recursive similarity maximization, where AverageStd() is a function that computes the average standard deviation over a set of frames or over a set of values, Join() is a function that joins values or lists together into a single list, n is the number of splits desired, and min_size is the minimum size of a split.

function SPLIT(video, start, end, n, min_size, prev_std = [])
  if n = 1 then
    return [], [AverageStd(video[start : end])]
  end if
  min_std ← None
  min_std_list ← []
  min_split ← []
  for i ← start + min_size to end − (n − 1) · min_size do
    std1 ← [AverageStd(video[start : i])]
    splits2, std2 ← SPLIT(video, i, end, n − 1, min_size, prev_std + std1)
    avg_std ← AverageStd(Join(prev_std, std1, std2))
    if min_std = None or avg_std < min_std then
      min_std ← avg_std
      min_std_list ← Join(std1, std2)
      min_split ← Join(i, splits2)
    end if
  end for
  return min_split, min_std_list
end function

2.3 STEPS CLASSIFICATION In this section we explore learning a steps classifier on top of the pre-trained deep model, using a regular linear classifier and a custom feature selection classifier. Intent understanding requires identifying highly discriminative features of a specific goal while remaining invariant to unrelated variation (e.g. lighting, color, viewpoint). The relevant discriminative features may be very diverse and more or less abstract, which motivates our intuition to tap into the activations of deep models at different depths. Deep models cover a large set of representations that can be useful, from spatially dense and simple features in the lower layers (e.g. a large collection of detected edges) to gradually more spatially sparse and abstract features (e.g. a few object classes). We train a simple linear layer which takes as input the same mid-to-high-level activations used for steps discovery and outputs a score for each step. This linear layer is trained using logistic regression, and the underlying deep model is not fine-tuned. Given the large input (1,453,824 units) and the low-data regime (11 to 19 videos of 30 to 50 frames each), we hypothesize that this model should severely overfit to the training data and perform poorly in validation and testing. We test and discuss that hypothesis in Section 3.1.2.
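For readers who prefer code, a compact Python rendering of Algorithm 1 follows, under the assumption that video is a (num_frames, num_features) NumPy array and AverageStd is the mean per-feature standard deviation; this is our own sketch of the pseudocode above, not the authors' implementation.

    import numpy as np

    def average_std(frames):
        # Mean over features of the per-feature standard deviation across frames.
        return float(frames.std(axis=0).mean())

    def split(video, start, end, n, min_size, prev_std=()):
        # Recursively choose split points minimizing the average segment std.
        if n == 1:
            return [], [average_std(video[start:end])]
        best_avg, best_stds, best_splits = None, [], []
        for i in range(start + min_size, end - (n - 1) * min_size + 1):
            std1 = [average_std(video[start:i])]
            splits2, std2 = split(video, i, end, n - 1, min_size,
                                  tuple(prev_std) + tuple(std1))
            avg = float(np.mean(list(prev_std) + std1 + std2))
            if best_avg is None or avg < best_avg:
                best_avg, best_stds, best_splits = avg, std1 + std2, [i] + splits2
        return best_splits, best_stds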
We also hypothesize that there exists a small subset of mid-to-high-level features that are sparse and independent, and that can readily and compactly discriminate previously unseen inputs. We investigate that hypothesis using a simple feature selection method described in Appendix A.3. The existence of a small subset of discriminative features can be useful for reducing overfitting in low-data regimes, but more importantly can allow a drastic reduction of the search space for the unsupervised steps discovery. Indeed, since each frame is described by millions of features, finding patterns of feature correlations across videos leads to a combinatorial explosion. However, the problem may become tractable if there exists a low-dimensional subset of features that leads to reasonably accurate steps classification. We test and discuss that hypothesis in Section 3.1.2.

2.4 USING PERCEPTUAL REWARDS FOR ROBOTIC LEARNING In order to use our learned perceptual reward functions in a complete skill learning system, we must also choose a reinforcement learning algorithm and a policy representation. While in principle any reinforcement learning algorithm could be suitable for this task, we chose a method that is efficient for evaluation on real-world robotic systems in order to validate our approach. The method we use is based on the PI2 reinforcement learning algorithm (Theodorou et al., 2010). Our implementation, which is discussed in more detail in Appendix A.4, uses a relatively simple linear-Gaussian parameterization of the policy, which corresponds to a sequence of open-loop torque commands with fixed linear feedback to correct for perturbations. This method also requires initialization from example demonstrations to learn complex manipulation tasks efficiently. A more complex neural network policy could also be used (Chebotar et al., 2016), and more sophisticated RL algorithms could also learn skills without demonstration initialization. However, since the main purpose of this component is to validate the learned reward functions, we used this simple approach to test our rewards quickly and efficiently.

3 EXPERIMENTS In this section, we discuss our empirical evaluation, starting with an analysis of the learned reward functions in terms of both qualitative reward structure and quantitative segmentation accuracy. We then present results for a real-world validation of our method on robotic door opening.

3.1 PERCEPTUAL REWARDS EVALUATION We report results on two demonstrated tasks: door opening and liquid pouring. We collected about a dozen training videos for each task using a smartphone. As an example, Fig. 2 shows the entire training set used for the pouring task.

3.1.1 QUALITATIVE ANALYSIS While a door opening sensor can be engineered using sensors hidden in the door, measuring pouring or container tilting would be quite complicated, would visually alter the scene, and is unrealistic for learning in the wild. Visual reward functions are therefore an excellent choice for complex physical phenomena such as liquid pouring. In Fig. 3, we present the combined reward functions for test videos on the pouring task, and Fig. 10 shows the intermediate rewards for each sub-goal. We plot the predicted reward functions for both successful and failed task executions in Fig. 11. We observe that for "missed" executions where the task is only partially performed, the intermediate steps are correctly classified. Fig. 9 details qualitative results of unsupervised step segmentation for the door opening and pouring tasks.
For the door task, the 2-segment splits are often quite in line with what one would expect, while a 3-segment split is less accurate. We also observe that the method is robust to the presence or absence of the handle on the door, as well as its opening direction. We find that for the pouring task, the 4-segment split often yields the most sensible breakdown. It is interesting to note that the 2-segment split usually occurs when the glass is about half full.

Failure Cases The intermediate reward function for the door opening task which corresponds to a human hand manipulating the door handle seems rather noisy or wrong in 10b, 10c, and 10e ("action1" on the y-axis of the plots). The reward function in 11f remains flat while liquid is being poured into the glass. The liquid being somewhat transparent, we suspect that it looks too similar to the transparent glass for the function to fire.

3.1.2 QUANTITATIVE ANALYSIS We evaluate the quantitative accuracy of the unsupervised steps discovery in Table 1, while Table 2 presents quantitative generalization results for the learned reward on a test video of each task. For each video, ground-truth intermediate steps were provided by human supervision for the purpose of evaluation. While this ground truth is subjective, since each task can be broken down in multiple ways, it is consistent for the simple tasks in our experiments. We use the Jaccard similarity measure (intersection over union) to indicate how much a detected step overlaps with its corresponding ground truth. In Table 1, we compare our method against a random baseline. Because we assume the same step order in all demonstrations, we also order the random steps in time to provide a fair baseline. Note that the random baseline performs fairly well because the steps are distributed somewhat uniformly in time. Should the steps be much less temporally uniform, the random baseline would be expected to perform very poorly, while our method should maintain similar performance. We compare splitting into 2 and 3 steps and find that, for both tasks, 2 steps are easier to discover, probably because these tasks each exhibit one strong visual change while the other steps are more subtle. Note that our unsupervised segmentation only works when full sequences are available, while our learned reward functions can be used in real time without accessing future frames. Hence, in these experiments we evaluate the unsupervised segmentation on the training set only and evaluate the reward functions on the test set.

In Table 2, we evaluate the reward functions individually for each step on the test set. For that purpose, we binarize the reward function using a threshold of 0.5. The random baseline simply outputs true or false at each timestep. We observe that the learned feature selection and linear classifier functions outperform the baseline by about a factor of 2. It is not clear exactly what minimum level of accuracy is required to successfully learn these tasks, but we show in Section 3.2.2 that the reward accuracy on the door task is sufficient to reach a 100% success rate with a real robot. Individual step accuracy details can be found in Table 3. Surprisingly, the linear classifier performs well and does not appear to overfit on our relatively small training set. Although the feature selection algorithm comes rather close to the linear classifier relative to the baseline, using feature selection to avoid overfitting is not necessary here.
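For reference, a small sketch of the Jaccard (intersection-over-union) measure used in Tables 1 and 2 above, applied to step intervals given as (start_frame, end_frame) pairs; the helper name is our own.

    def interval_jaccard(pred, truth):
        inter = max(0, min(pred[1], truth[1]) - max(pred[0], truth[0]))
        union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - inter
        return inter / union if union > 0 else 0.0

    # e.g. interval_jaccard((10, 30), (15, 35)) == 15 / 25 == 0.6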
However, the idea that a small subset of features (32 in this case) can lead to reasonable classification accuracy is verified, and this is an important piece of information for drastically reducing the search space in future work on unsupervised steps discovery. Additionally, we show in Fig. 4 that the feature selection approach works well when the number of features n is in the region [32, 64] but collapses to 0% accuracy when n > 8192.

3.2 REAL-WORLD ROBOTIC DOOR OPENING In this section, we aim to answer the question of whether our previously visualized reward function can be used to learn a real-world robotic motion skill. We experiment on a door opening skill, where we adapt a demonstrated door opening to a novel configuration, such as a different position or orientation of the door. Following the experimental protocol in prior work (Chebotar et al., 2016), we adapt an imperfect kinesthetic demonstration which we ensure succeeds at least occasionally (about 10% of the time). These demonstrations consist only of robot poses, and do not include images. We then use a variety of different video demonstrations, which contain images but not robot poses, to learn the reward function. These videos include demonstrations with other doors, and even demonstrations provided by a human using their own body, rather than through kinesthetic teaching with the robot. Figure 5 shows the experimental setup. We use a 7-DoF robotic arm with a two-finger gripper, and a camera placed above the shoulder, which provides monocular RGB images. For our baseline PI2 policy, we closely follow the setup of Chebotar et al. (2016), which uses an IMU sensor in the door handle to provide both a cost and feedback as part of the state of the controller. In contrast, in our approach we remove this sensor both from the state representation provided to PI2 and from our reward, replacing the target IMU state with the output of a deep neural network.

3.2.1 DATA We experiment with a range of different demonstrations from which we derive our reward function, varying the source demo (human vs. robot), the number of sub-goals we extract, and the appearance of the door. We record monocular RGB images with a camera placed above the shoulder of the arm. The door is cropped from the images, and the resulting image is resized such that the shortest side is 299 pixels with preserved aspect ratio. The input to our convolutional feature extractor (Szegedy et al., 2015) is the 299x299 center crop.

3.2.2 QUALITATIVE ANALYSIS We evaluate our reward functions qualitatively by plotting our perceptual reward functions below the demonstrations with a variety of door types and demonstrators (e.g. robot or human). As can be seen in Fig. 6 and in the real experiments in Fig. 7, the reward functions are useful to a robotic arm even when only human demonstrations are shown, as depicted in Fig. 12. Moreover, we exhibit robustness to variations in appearance.

3.2.3 QUANTITATIVE ANALYSIS We compare the success rate of our visual reward against a baseline PI2 method that uses the ground-truth reward function obtained by instrumenting the door with an IMU. We run PI2 for 11 iterations with 10 sampled trajectories at each iteration. As can be seen in Fig. 7, we obtain convergence speeds similar to our baseline model, with our method also able to open the door consistently. Since our local policy is able to obtain high-reward candidate trajectories, this is strong evidence that a perceptual reward could be used to train a global policy in the same manner as Chebotar et al.
(2016).

4 CONCLUSION In this paper, we present a method for automatically identifying important intermediate goals given a few visual demonstrations of a task. By leveraging the general features learned by pre-trained deep models, we propose a method for rapidly learning an incremental reward function from human demonstrations, which we successfully demonstrate on a real robotic learning task. We show that pre-trained models are general enough to be used without retraining. We also show that there exists a small subset of pre-trained features that are highly discriminative even for previously unseen scenes and that can be used to reduce the search space for future work in unsupervised steps discovery. Another compelling direction for future work is to explore how reward learning algorithms can be combined with robotic lifelong learning. One of the biggest barriers for lifelong learning in the real world is the ability of an agent to obtain reward supervision, without which no learning is possible. Continuous learning using unsupervised rewards promises to substantially increase the variety and diversity of experience that is available for robotic reinforcement learning, resulting in more powerful, robust, and general robotic skills.

ACKNOWLEDGMENTS We would like to thank Vincent Vanhoucke for helpful discussions and feedback. We would also like to thank Mrinal Kalakrishnan and Ali Yahya for indispensable guidance throughout this project.

A ALGORITHMS DETAILS

A.1 BINARY SEGMENTATION ALGORITHM

Algorithm 2 Greedy binary algorithm, similar to and utilizing Algorithm 1, where AverageStd() is a function that computes the average standard deviation over a set of frames or over a set of values, Join() is a function that joins values or lists together into a single list, n is the number of splits desired, and min_size is the minimum size of a split.

function BINARYSPLIT(video, start, end, n, min_size, prev_std = [])
  if n = 1 then
    return [], []
  end if
  splits0, std0 ← SPLIT(video, start, end, 2, min_size)
  if n = 2 then
    return splits0, std0
  end if
  splits1, std1 ← BINARYSPLIT(video, start, splits0[0], CEIL(n/2), min_size)
  splits2, std2 ← BINARYSPLIT(video, splits0[0] + 1, end, FLOOR(n/2), min_size)
  all_splits ← []
  all_std ← []
  if splits1 ≠ [] then
    Join(all_splits, splits1)
    Join(all_std, std1)
  else
    Join(all_std, std0[0])
  end if
  if splits0 ≠ [] then
    Join(all_splits, splits0[0])
  end if
  if splits2 ≠ [] then
    Join(all_splits, splits2)
    Join(all_std, std2)
  else
    Join(all_std, std0[1])
  end if
  return all_splits, all_std
end function

A.2 COMBINING INTERMEDIATE REWARDS From the two previous sections, we obtain one reward function per intermediate step discovered by the unsupervised algorithm. These need to be combined so that the RL algorithm uses a single reward function which partially rewards intermediate steps but rewards the final one the most. The initial step is ignored, as it is assumed to be the resting starting state in the demonstrations. We opt for the maximum range of each reward to be twice the maximum range of its preceding reward, and sum them as follows:

$$R(a) = \sum_{i=2}^{n} R_i(a) \cdot 2^{(i-1)} \tag{2}$$

where $n$ is the number of intermediate rewards detected and $a$ is an activations vector. An example of this combination is shown in Fig. 8.

A.3 FEATURE SELECTION ALGORITHM Here we describe the feature selection algorithm we use to investigate the presence of a small subset of discriminative features in mid-to-high-level layers of a pre-trained deep network.
To select the most discriminative features, we use a simple scoring heuristic. Each feature i is first normalized by subtracting the mean and dividing by the standard deviation of all training sequences. We then rank the features for each sub-goal according to their score $z_i$, which compares the average statistics of the sets of positive and negative frames for a given goal: $z_i = \alpha \left| \mu_i^+ - \mu_i^- \right| - (\sigma_i^+ + \sigma_i^-)$, (3) where $\mu_i^+$ and $\sigma_i^+$ are the mean and standard deviation of feature i over all “positive” frames, and $\mu_i^-$ and $\sigma_i^-$ over all “negative” frames (the frames that do not contain the sub-goal). Only the top-M features are retained to form the reward function $R_g()$ for the sub-goal g, which is given by the log-probability of an independent Gaussian distribution over the relevant features: $R_g(s_t) = -\frac{1}{M} \sum_{j=1}^{M} \frac{(s_{i_j t} - \mu_{i_j}^+)^2}{(\sigma_{i_j}^+)^2}$, (4) where $i_j$ indexes the top-M selected features. We empirically choose α = 5.0 and M = 32 for our subsequent experiments. At test time, we do not know when the system transitions from one goal to another, so instead of time-indexing the goals, we combine all of the goals into a single time-invariant reward function, where later steps yield higher reward than earlier steps, as described in Appendix A.2. A.4 PI2 REINFORCEMENT LEARNING ALGORITHM We chose the PI2 reinforcement learning algorithm (Theodorou et al., 2010) for our experiments, with the particular implementation of the method based on a recently proposed deep reinforcement learning variant (Chebotar et al., 2016). Since our aim is mainly to validate that our learned reward functions capture the goals of the task well enough for learning, we employ a relatively simple linear-Gaussian parameterization of the policy, which corresponds to a sequence of open-loop torque commands with fixed linear feedback to correct for perturbations, as in the work of Chebotar et al. (2016). This policy has the form $\pi(u_t|x_t) = \mathcal{N}(K_t x_t + k_t, \Sigma_t)$, where $K_t$ is a fixed stabilizing feedback matrix, and $k_t$ is a learned control. In this case, the state $x_t$ corresponds to the joint angles and angular velocities of a robot, and $u_t$ corresponds to the joint torques. Since the reward function is evaluated from camera images, we assume that the image is a (potentially stochastic) consequence of the robot’s state, so that we can evaluate the state reward $r(x_t)$ by taking the image $I_t$ observed at time t, and computing the corresponding activations $a_t$. Overloading the notation, we can write $a_t = f(I_t(x_t))$, where f is the network we use for visual features. Then, we have $r(x_t) = R(f(I_t(x_t)))$. The PI2 algorithm is an episodic policy improvement algorithm that uses the reward $r(x_t)$ to iteratively improve the policy. The trust-region variant of PI2 that we use (Chebotar et al., 2016), which is also similar to the REPS algorithm (Peters et al., 2010), updates the policy at iteration n by sampling from the time-varying linear-Gaussian policy $\pi(u_t|x_t)$ to obtain samples $\{(x_t^{(i)}, u_t^{(i)})\}$, and updating the controls $k_t$ at each time step according to

$k_t \leftarrow \frac{\sum_i u_t^{(i)} \exp\left(\beta_t \sum_{t'=t}^{T} r(x_{t'}^{(i)})\right)}{\sum_i \exp\left(\beta_t \sum_{t'=t}^{T} r(x_{t'}^{(i)})\right)}$,

where the temperature $\beta_t$ is chosen to bound the KL-divergence between the new policy $\pi(u_t|x_t)$ and the previous policy $\bar{\pi}(u_t|x_t)$, such that $D_{KL}(\pi(u_t|x_t) \,\|\, \bar{\pi}(u_t|x_t)) \leq \epsilon$ for a step size $\epsilon$. Further details and a complete derivation are provided in prior work (Theodorou et al., 2010; Peters et al., 2010; Chebotar et al., 2016).
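To make the update rule above concrete, here is a minimal NumPy sketch; the function name, the array shapes, and the assumption that the per-timestep temperatures $\beta_t$ are given (already chosen to satisfy the KL constraint) are our own illustrative choices rather than details from Chebotar et al. (2016):

```python
import numpy as np

def pi2_update(u_samples, rewards, beta):
    """One PI2-style update of the open-loop controls k_t (sketch).

    u_samples: (num_samples, T, dim_u) sampled controls u_t^(i).
    rewards:   (num_samples, T) per-timestep rewards r(x_t^(i)).
    beta:      (T,) per-timestep temperatures beta_t (assumed given).
    Returns the updated controls, shape (T, dim_u).
    """
    # Reward-to-go S_t^(i) = sum_{t'=t}^{T} r(x_{t'}^(i)) for each sample.
    reward_to_go = np.cumsum(rewards[:, ::-1], axis=1)[:, ::-1]
    # Softmax weights over samples at each timestep; subtracting the
    # per-timestep max is for numerical stability and cancels in the ratio.
    logits = beta[None, :] * reward_to_go
    logits -= logits.max(axis=0, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)   # (num_samples, T)
    # Reward-weighted average of the sampled controls.
    return np.einsum('it,itd->td', weights, u_samples)
```

With the setup of Section 3.2.3 (10 sampled trajectories, 11 iterations), this update would be applied once per iteration to the controls of the time-varying policy.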
The PI2 algorithm is a local policy search method that performs best when provided with demonstrations to bootstrap the policy. In our experiments, we use this method together with our learned reward functions to learn a door opening skill with a real physical robot, as discussed in Section 3.2. Demonstrations are provided through kinesthetic teaching, which results in a sequence of reference steps $\hat{x}_t$, and the initial controls $k_t$ are given by $k_t = -K_t \hat{x}_t$, such that the mean of the initial controller is $K_t(x_t - \hat{x}_t)$, corresponding to a trajectory-following initialization. This initial controller rarely succeeds consistently, but the occasional successes it achieves provide a learning signal to the algorithm. The use of demonstrations enables PI2 to quickly and efficiently learn complex robotic manipulation skills. Although this particular RL algorithm requires demonstrations to begin learning, it can still provide a useful starting point for real-world learning with a real robotic system. As shown by Chebotar et al. (2016), the initial set of demonstrations can be expanded into a generalizable policy by iteratively “growing” the effective region where the policy succeeds. For example, if the robot is provided with a demonstration of opening a door in one position, additional learning can expand the policy to succeed in nearby positions, and the application of a suitable curriculum can progressively grow the region of door poses in which the policy succeeds. However, as with all RL algorithms, this process requires knowledge of the reward function. Using the method described in this paper, we can learn such a reward function from either the initial demonstrations or even from other demonstration videos provided by a human. Armed with this learned reward function, the robot could continue to improve its policy through real-world experience, iteratively increasing its region of competence through lifelong learning. B ADDITIONAL QUALITATIVE RESULTS
1. What are the main contributions of the paper regarding learning from limited demonstrations? 2. What are the strengths of the proposed method, particularly in motivation and reward functions? 3. What are the weaknesses of the paper, such as lacking comparisons with prior works and experimental validation? 4. How does the reviewer assess the novelty and significance of the proposed approach? 5. Are there any concerns regarding the hypothesis on sparse independent features? 6. Do you have any suggestions for improving the paper, such as adding more interpretable rewards or comparing with other baselines?
Review
Review The paper presents a first step towards solving the difficult problem of "learning from a limited number of demonstrations". It presents the following contributions towards this effort: 1. unsupervised segmentation of videos to identify intermediate steps in a process; 2. a reward function based on feature selection for each sub-task. Pros: + The paper is a first attempt to solve a very challenging problem, where a robot is taught real-world tasks with very few visual demonstrations and without further retraining. + The method is well motivated and tries to transfer the priors learned from an object classification task (through deep network features) to address the problem of limited training examples. + As demonstrated in Fig. 3, the reward functions are interpretable and correlate with transitions between subtasks. + Breaking a video into subtasks helps a video demonstration-based method achieve comparable performance with a method which requires full instrumentation for complex real-world tasks like door opening. Cons: 1. Unsupervised video segmentation can serve as a good starting point to identify subtasks. However, there are multiple prior works in this domain which need to be referenced and compared with. In particular, video shot detection and shot segmentation works try to identify abrupt changes in video to break it into visually diverse shots. These methods could be easily augmented with CNN features. (Note that there are multiple papers in this domain; e.g., refer to the survey by Yuan et al., IEEE Trans. on Circuits and Systems for Video Technology, 2007.) 2. The authors claim that they did not find it necessary to identify commonalities across demonstrations. This limits the scope of the problem drastically and requires the demonstrations to follow a very specific set of constraints. Again, it is to be noted that there is past literature (video co-segmentation, e.g., Tang et al., ECCV'14) which uses these commonalities to perform unsupervised video segmentation. 3. The unsupervised temporal video segmentation approach in the paper is only compared to a very simple random baseline for a few sample videos. However, given the large amount of literature in this domain, it is difficult to judge the novelty and significance of the proposed approach from these experiments. 4. The authors hypothesize that "sparse independent features exist which can discriminate a wide range of unseen inputs" and encode this intuition through the feature selection strategy. Again, the validity of the hypothesis is not experimentally well demonstrated. For instance, a comparison to a simple linear classifier for subtasks would have been useful. Overall, the paper presents a simple approach based on the idea that recognizing sub-goals in an unsupervised fashion would help in learning from few visual demonstrations. This is well motivated as a first step towards a difficult task. However, the methods and claims presented in the paper need to be analyzed and compared with better baselines.
1. What is the main contribution of the paper in terms of learning vision features for robot training? 2. How effective are the proposed methods compared to other approaches, especially in terms of using pre-trained deep models? 3. What are some potential limitations or areas for improvement regarding the choice of baselines used in the study? 4. Are there any concerns about the robustness or generalizability of the proposed method across different environments or scenarios? 5. Can you provide additional explanations or justifications for the design choices made in the study, such as segmenting sequences into fragments and clustering features?
Review
Review This paper proposes a novel method to learn vision features as intermediate rewards to guide robot training in the real world. Since there are only a few sequences of human demonstrations, the paper first segments the sequences into fragments so that the features are roughly invariant on the corresponding fragments across sequences, then clusters and finds the most discriminative features on those fragments, and uses them as the reward function. The features are from pre-trained deep models. The idea is simple and seems quite effective in picking the right reward functions. Fig. 6 is a good comparison (although it could be better with error bars). However, some baselines are not strong, in particular the vision-related baselines. For example, the random reward ("simply outputs true or false") in Tbl. 2 seems quite arbitrary and may not serve as a good baseline (but its performance is still not that bad, surprisingly). A better baseline would be to use random/simpler feature extraction on the image, e.g., binning features and simply picking the most frequent ones, which might not be as discriminative as the proposed features. I wonder whether a simpler vision-based approach would lead to a similarly performing reward function. If so, then these delicate steps (segmentation, etc.) might be unnecessary altogether.
ICLR
Title Unsupervised Perceptual Rewards for Imitation Learning Abstract Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a suitable reward function takes considerable manual engineering and often requires additional and potentially visible sensors to be installed just to measure whether the task has been executed successfully. Furthermore, many interesting tasks consist of multiple steps that must be executed in sequence. Even when the final outcome can be measured, it does not necessarily provide useful feedback on these implicit intermediate steps or sub-goals. To address these issues, we propose leveraging the abstraction power of intermediate visual representations learned by deep models to quickly infer perceptual reward functions from small numbers of demonstrations. We present a method that is able to identify the key intermediate steps of a task from only a handful of demonstration sequences, and automatically identify the most discriminative features for identifying these steps. This method makes use of the features in a pre-trained deep model, but does not require any explicit sub-goal supervision. The resulting reward functions, which are dense and smooth, can then be used by an RL agent to learn to perform the task in real-world settings. To evaluate the learned reward functions, we present qualitative results on two real-world tasks and a quantitative evaluation against a human-designed reward function. We also demonstrate that our method can be used to learn a complex real-world door opening skill using a real robot, even when the demonstration used for reward learning is provided by a human using their own hand. To our knowledge, these are the first results showing that complex robotic manipulation skills can be learned directly and without supervised labels from a video of a human performing the task.
1 INTRODUCTION Social learning, such as imitation, plays a critical role in allowing humans and animals to quickly acquire complex skills in the real world. Humans can use this weak form of supervision to acquire behaviors from very small numbers of demonstrations, in sharp contrast to deep reinforcement learning (RL) methods, which typically require extensive training data. In this work, we make use of two ideas to develop a scalable and efficient imitation learning method: first, imitation makes use of extensive prior knowledge to quickly glean the “gist” of a new task from even a small number of demonstrations; second, imitation involves both observation and trial-and-error learning (RL). Building on these ideas, we propose a reward learning method for understanding the intent of a user demonstration through the use of pre-trained visual features, which provide the “prior knowledge” for efficient imitation. Our algorithm aims to discover not only the high-level goal of a task, but also the implicit sub-goals and steps that comprise more complex behaviors. Extracting such sub-goals can allow the agent to make maximal use of the information contained in the demonstration. Once the reward function has been extracted, the agent can use its own experience at the task to determine the physical structure of the behavior, even when the reward is provided by an agent with a substantially different embodiment (e.g. a human providing a demonstration for a robot). ∗Work done as part of the Google Brain Residency program (g.co/brainresidency). To our knowledge, our method is the first reward learning technique that learns generalizable vision-based reward functions for complex robotic manipulation skills from only a few demonstrations provided directly by a human. Although prior methods have demonstrated reward learning with vision for real-world robotic tasks, they have either required kinesthetic demonstrations with robot state for reward learning (Finn et al., 2015), or else required low-dimensional state spaces and numerous demonstrations (Wulfmeier et al., 2016). The contributions of this paper are: • A method for perceptual reward learning from only a few demonstrations of real-world tasks. Reward functions are dense and incremental, with automated unsupervised discovery of intermediate steps. • The first vision-based reward learning method that can learn a complex robotic manipulation task from a few human demonstrations in real-world robotic experiments. • A set of empirical experiments that show that the learned visual representations inside a pre-trained deep model are general enough to be directly used to represent goals and subgoals for manipulation skills in new scenes without retraining. 1.1 RELATED WORK Deep reinforcement learning and deep robotic learning work has previously examined learning reward functions based on images.
One of the most common approaches to image-based reward functions is to directly specify a “target image” by showing the learner the raw pixels of a successful task completion state, and then using distance to that image (or its latent representation) as a reward function (Lange et al., 2012; Finn et al., 2015; Watter et al., 2015). However, this approach has several severe shortcomings. First, the use of a target image presupposes that the system can achieve a substantially similar visual state, which precludes generalization to semantically similar but visually distinct situations. Second, the use of a target image does not provide the learner with information about which facet of the image is more or less important for task success, which might result in the learner excessively emphasizing irrelevant factors of variation (such as the color of a door due to light and shadow) at the expense of relevant factors (such as whether or not the door is open or closed). Analyzing a collection of demonstrations to learn a parsimonious reward function that explains the demonstrated behavior is known as inverse reinforcement learning (IRL) (Ng et al., 2000). A few recently proposed IRL algorithms have sought to combine IRL with vision and deep network representations (Finn et al., 2016b; Wulfmeier et al., 2016). However, scaling IRL to high-dimensional systems and open-ended reward representations is very challenging. The previous work closest to ours used images together with robot state information (joint angles and end effector pose), with tens of demonstrations provided through kinesthetic teaching (Finn et al., 2016b). The approach we propose in this work, which can be interpreted as a simple and efficient approximation to IRL, can use demonstrations that consist of videos of a human performing the task using their own body, and can acquire reward functions with intermediate sub-goals using just a few examples. This kind of efficient vision-based reward learning from videos of humans has not been demonstrated in prior IRL work. The idea of perceptual reward functions using raw pixels was also explored by Edwards et al. (2016) which, while sharing the same spirit as this work, was limited to simple synthetic tasks and used single images as perceptual goals rather than multiple demonstration videos. 2 SIMPLE INVERSE REINFORCEMENT LEARNING WITH VISUAL FEATURES The key insight in our approach is that we can exploit the semantically meaningful and powerful features in a pre-trained deep neural network to infer task goals and sub-goals using a very simple approximate inverse reinforcement learning method. The pre-trained network effectively transfers prior knowledge about the visual world to make imitation learning fast and robust. Our approach can be interpreted as a simple approximation to inverse reinforcement learning under a particular choice of system dynamics, as discussed in Section 2.1. While this approximation is somewhat simplistic, it affords an efficient and scalable learning rule that avoids overfitting even when trained on a small number of demonstrations. As depicted in Fig. 1, our algorithm first divides the demonstrations into segments based on perceptual similarity, as described in Section 2.2. Intuitively, the resulting segments correspond to sub-goals or steps of the task. The segments can then be used as a supervision signal for learning step classifiers, described in Section 2.3, which produces a single perceptual reward function for each step of the task.
The combined reward function can then be used with a reinforcement learning algorithm to learn the demonstrated behavior. Although this method for extracting reward functions is exceedingly simple, its power comes from the use of highly general and robust pre-trained visual features, and our key empirical result is that such features are sufficient to acquire effective and generalizable reward functions for real-world manipulation skills. We use the Inception network (Szegedy et al., 2015) pre-trained for ImageNet classification (Deng et al., 2009) to obtain the visual features for representing the learned rewards. It is well known that visual features in such networks are quite general and can be reused for other visual tasks. However, it is less clear if sparse subsets of such features can be used directly to represent goals and sub-goals for real-world manipulation skills. Our experimental evaluation suggests that indeed they can, and that the resulting reward representations are robust and reliable enough for real-world robotic learning without any finetuning of the features. In this work, we use all activations starting from the first “mixed” layer that follows the first 5 convolutional layers (this layer’s activation map is of size 35x35x256 given a 299x299 input). While this paper focuses on visual perception, the approach is general and can be applied to other modalities (e.g. audio and tactile). 2.1 INVERSE REINFORCEMENT LEARNING WITH TIME-INDEPENDENT GAUSSIAN MODELS Inverse reinforcement learning can be performed with a variety of algorithms (Ng et al., 2000), ranging from margin-based methods (Abbeel & Ng, 2004; Ratliff et al., 2006) to methods based on probabilistic models (Ramachandran & Amir, 2007; Ziebart et al., 2008). In this work, we use a very simple approximation to the MaxEnt IRL model (Ziebart et al., 2008), a popular probabilistic approach to IRL. We will use s_t to denote the visual feature activations at time t, which constitute the state, s_{it} to denote the i-th feature at time t, and \tau = \{s_1, \ldots, s_T\} to denote a sequence or trajectory of these activations in a video. In MaxEnt IRL, the demonstrated trajectories \tau are assumed to be drawn from a Boltzmann distribution according to:

p(\tau) = p(s_1, \ldots, s_T) = \frac{1}{Z} \exp\left( \sum_{t=1}^{T} R(s_t) \right), (1)

where R(s_t) is the unknown reward function. The principal computational challenge in MaxEnt IRL is to approximate Z, since the states at each time step are not independent, but are constrained by the system dynamics. In deterministic systems, where s_{t+1} = f(s_t, a_t) for actions a_t and dynamics f, the dynamics impose constraints on which trajectories \tau are feasible. In stochastic systems, where s_{t+1} \sim p(s_{t+1} \mid s_t, a_t), we must also account for the dynamics distribution in Equation (1), as discussed by Ziebart et al. (2008). Prior work has addressed this using dynamic programming to exactly compute Z in small, discrete systems (Ziebart et al., 2008), or by using sampling to estimate Z for large, continuous domains (Kalakrishnan et al., 2010; Boularias et al., 2011; Finn et al., 2016b). Since our state representation corresponds to a large vector of visual features, exact dynamic programming is infeasible. Sample-based approximation requires running a large number of trials to estimate Z and, as shown in recent work (Finn et al., 2016a), corresponds to a variant of generative adversarial networks (GANs), with all of the accompanying stability and optimization challenges.
Furthermore, the corresponding model for the reward function is complex, making it prone to overfitting when only a small number of demonstrations is available. When faced with a difficult learning problem in extremely low-data regimes, a standard solution is to resort to simple, biased models, so as to minimize overfitting. We adopt precisely this approach in our work: instead of approximating the complex posterior distribution over trajectories under nonlinear dynamics, we use a simple biased model that affords efficient learning and minimizes overfitting. Specifically, we assume that all trajectories are dynamically feasible, and that the distribution over each activation at each time step is independent of all other activations and all other time steps. This corresponds to the IRL equivalent of a naïve Bayes model: in the same way that naïve Bayes uses an independence assumption to mitigate overfitting in high-dimensional feature spaces, we use independence between both time steps and features to learn from very small numbers of demonstrations. Under this assumption, the probability of a trajectory \tau factorizes according to

p(\tau) = \prod_{t=1}^{T} \prod_{i=1}^{N} p(s_{it}) = \prod_{t=1}^{T} \prod_{i=1}^{N} \frac{1}{Z_{it}} \exp\left( R_i(s_{it}) \right),

which corresponds to a reward function of the form R_t(s_t) = \sum_{i=1}^{N} R_i(s_{it}). We can then simply choose a form for R_i(s_{it}) that can be normalized analytically, which in our case is a quadratic in s_{it}, such that \exp(R_i(s_{it}))/Z_{it} is a Gaussian distribution, and the original trajectory distribution is a naïve Bayes model. While this approximation is quite drastic, it yields an exceedingly simple learning rule: in the most basic version, we have only to fit the mean and variance of each feature distribution, and then use the log of the resulting Gaussian as the reward. 2.2 DISCOVERY OF INTERMEDIATE STEPS The simple IRL model in the previous section can be used to acquire a single quadratic reward function in terms of the visual features s_t. However, for complex multi-stage tasks, this model can be too coarse, making task learning slow and difficult. We therefore instead fit multiple quadratic reward functions, with one reward function per intermediate step or goal. These steps are discovered automatically in the first stage of our method, which is performed independently on each demonstration. If multiple demonstrations are available, they are pooled together in the feature selection step discussed in the next section, and could in principle be combined at the segmentation stage as well, though we found this to be unnecessary in our prototype. The intermediate steps model extends the simple independent Gaussian model in the previous section by assuming that

p(\tau) = \prod_{t=1}^{T} \prod_{i=1}^{N} \frac{1}{Z_{it}} \exp\left( R_{i g_t}(s_{it}) \right),

where g_t is the index of the goal or step corresponding to time step t. Learning then corresponds to identifying the boundaries of the steps in the demonstration, and fitting independent Gaussian feature distributions at each step. Note that this corresponds exactly to segmenting the demonstration such that the variance of each feature within each segment is minimized. In this work, we employ a simple recursive video segmentation algorithm as described in Algorithm 1. Intuitively, this method breaks down a sequence in a way that each frame in a segment is abstractly similar to each other frame in that segment. The number of segments is provided manually in this approach, though it would be straightforward to also utilize standard model selection criteria for choosing this number automatically.
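As a concrete illustration, the recursive segmentation just described (Algorithm 1, listed below) can be sketched in a few lines of Python; the array layout, function names, and use of NumPy here are our own assumptions rather than the authors' implementation:

import numpy as np

def average_std(frames):
    # frames: (T, N) array of feature activations for one segment;
    # average over features of the per-feature standard deviation
    return float(frames.std(axis=0).mean())

def split(video, start, end, n, min_size, prev_std=()):
    # Recursively place n-1 split points in video[start:end] so that the
    # average per-segment standard deviation (including segments already
    # fixed, passed in prev_std) is minimized. Returns (splits, segment_stds).
    if n == 1:
        return [], [average_std(video[start:end])]
    best_avg, best_splits, best_stds = None, [], []
    for i in range(start + min_size, end - (n - 1) * min_size):
        std1 = [average_std(video[start:i])]
        splits2, std2 = split(video, i, end, n - 1, min_size,
                              tuple(prev_std) + tuple(std1))
        avg = float(np.mean(list(prev_std) + std1 + std2))
        if best_avg is None or avg < best_avg:
            best_avg, best_splits, best_stds = avg, [i] + splits2, std1 + std2
    return best_splits, best_stds

# Example usage: 3 segments over a (T, N) feature matrix
# splits, _ = split(features, 0, len(features), n=3, min_size=5)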
There exists a body of unsupervised video segmentation methods (Yuan et al., 2007) which would likely enable a less constrained set of demonstrations to be used. While this is an important avenue of future work, we show that our simple approach is sufficient to demonstrate the efficacy of our method on a realistic set of demonstrations. We also investigate how to reduce the search space of similar feature patterns across videos in Section 2.3. This would render discovery of video alignments tractable for an optimization method such as the one used in Joulin et al. (2014) for video co-localization. The complexity of Algorithm 1 is O(n^m), where n is the number of frames in a sequence and m the number of splits. Note that dynamic programming is not applicable to this algorithm because each sub-problem, i.e. how to split a sequence after the i-th frame, depends on the segmentation chosen before the i-th frame. We also experiment with a greedy binary version of this algorithm (Algorithm 2, detailed in Section A.1): first split the entire sequence in two, then recursively split each new segment in two. While not exactly minimizing the variance across all segments, it is significantly more efficient (O(n^2 log m)) and yields qualitatively sensible results.

Algorithm 1 Recursive similarity maximization, where AverageStd() computes the average standard deviation over a set of frames or over a set of values, Join() joins values or lists together into a single list, n is the number of splits desired and min_size is the minimum size of a split.

function SPLIT(video, start, end, n, min_size, prev_std = [])
    if n = 1 then
        return [], [AVERAGESTD(video[start : end])]
    end if
    min_std ← None
    min_std_list ← []
    min_split ← []
    for i ← start + min_size to end − ((n − 1) ∗ min_size) do
        std1 ← [AVERAGESTD(video[start : i])]
        splits2, std2 ← SPLIT(video, i, end, n − 1, min_size, prev_std + std1)
        avg_std ← AVERAGESTD(JOIN(prev_std, std1, std2))
        if min_std = None or avg_std < min_std then
            min_std ← avg_std
            min_std_list ← JOIN(std1, std2)
            min_split ← JOIN(i, splits2)
        end if
    end for
    return min_split, min_std_list
end function

2.3 STEPS CLASSIFICATION In this section we explore learning a step classifier on top of the pre-trained deep model using a regular linear classifier and a custom feature selection classifier. Intent understanding requires identifying highly discriminative features of a specific goal while remaining invariant to unrelated variation (e.g. lighting, color, viewpoint). The relevant discriminative features may be very diverse and more or less abstract, which motivates our intuition to tap into the activations of deep models at different depths. Deep models cover a large set of representations that can be useful, from spatially dense and simple features in the lower layers (e.g. a large collection of detected edges) to gradually more spatially sparse and abstract features (e.g. a few object classes). We train a simple linear layer which takes as input the same mid to high level activations used for steps discovery and outputs a score for each step. This linear layer is trained using logistic regression and the underlying deep model is not fine-tuned. Given the large input (1,453,824 units) and the low data regime (11 to 19 videos of 30 to 50 frames each), we hypothesize that this model should severely overfit to the training data and perform poorly in validation and testing. We test and discuss that hypothesis in Section 3.1.2.
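As a minimal sketch of this linear step classifier, assuming the deep features have already been extracted and flattened to one vector per frame, with the segment indices from Algorithm 1 serving as labels (the helper names here are ours):

from sklearn.linear_model import LogisticRegression

def train_step_classifier(X, y):
    # X: (num_frames, num_activations) frozen deep features, one row per frame
    # y: (num_frames,) step index per frame from the unsupervised segmentation
    # multinomial logistic regression on top of the frozen feature extractor
    return LogisticRegression(max_iter=1000).fit(X, y)

# Per-step probabilities on a new frame can then serve as per-step scores:
# step_scores = clf.predict_proba(new_frame_features.reshape(1, -1))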
We also hypothesize that there exists a small subset of mid to high-level features that are sparse and independent, and that can readily and compactly discriminate previously unseen inputs. We investigate that hypothesis using a simple feature selection method described in Appendix A.3. The existence of a small subset of discriminative features can be useful for reducing overfitting in low data regimes, but more importantly can allow drastic reduction of the search space for the unsupervised steps discovery. Indeed, since each frame is described by millions of features, finding patterns of feature correlations across videos leads to a combinatorial explosion. However, the problem may become tractable if there exists a low-dimensional subset of features that leads to reasonably accurate steps classification. We test and discuss that hypothesis in Section 3.1.2. 2.4 USING PERCEPTUAL REWARDS FOR ROBOTIC LEARNING In order to use our learned perceptual reward functions in a complete skill learning system, we must also choose a reinforcement learning algorithm and a policy representation. While in principle any reinforcement learning algorithm could be suitable for this task, we chose a method that is efficient for evaluation on real-world robotic systems in order to validate our approach. The method we use is based on the PI2 reinforcement learning algorithm (Theodorou et al., 2010). Our implementation, which is discussed in more detail in Appendix A.4, uses a relatively simple linear-Gaussian parameterization of the policy, which corresponds to a sequence of open-loop torque commands with fixed linear feedback to correct for perturbations. This method also requires initialization from example demonstrations to learn complex manipulation tasks efficiently. A more complex neural network policy could also be used (Chebotar et al., 2016), and more sophisticated RL algorithms could also learn skills without demonstration initialization. However, since the main purpose of this component is to validate the learned reward functions, we used this simple approach to test our rewards quickly and efficiently. 3 EXPERIMENTS In this section, we discuss our empirical evaluation, starting with an analysis of the learned reward functions in terms of both qualitative reward structure and quantitative segmentation accuracy. We then present results for a real-world validation of our method on robotic door opening. 3.1 PERCEPTUAL REWARDS EVALUATION We report results on two demonstrated tasks: door opening and liquid pouring. We collected about a dozen training videos for each task using a smartphone. As an example, Fig. 2 shows the entire training set used for the pouring task. 3.1.1 QUALITATIVE ANALYSIS While a door opening sensor can be engineered using sensors hidden in the door, measuring pouring or container tilting would be quite complicated, would visually alter the scene, and is unrealistic for learning in the wild. Visual reward functions are therefore an excellent choice for complex physical phenomena such as liquid pouring. In Fig. 3, we present the combined reward functions for test videos on the pouring task, and Fig. 10 shows the intermediate rewards for each sub-goal. We plot the predicted reward functions for both successful and failed task executions in Fig. 11. We observe that for “missed” executions where the task is only partially performed, the intermediate steps are correctly classified. Fig. 9 details qualitative results of unsupervised step segmentation for the door opening and pouring tasks.
For the door task, the 2-segment splits are often quite in line with what one can expect, while a 3-segment split is less accurate. We also observe that the method is robust to the presence or absence of the handle on the door, as well as its opening direction. We find that for the pouring task, the 4-segment split often yields the most sensible break-down. It is interesting to note that the 2-segment split usually occurs when the glass is about half full. Failure Cases The intermediate reward function for the door opening task which corresponds to a human hand manipulating the door handle seems rather noisy or wrong in Figs. 10b, 10c and 10e (“action1” on the y-axis of the plots). The reward function in Fig. 11f remains flat while liquid is being poured into the glass. The liquid being somewhat transparent, we suspect that it looks too similar to the transparent glass for the function to fire. 3.1.2 QUANTITATIVE ANALYSIS We evaluate the quantitative accuracy of the unsupervised steps discovery in Table 1, while Table 2 presents quantitative generalization results for the learned reward on a test video of each task. For each video, ground truth intermediate steps were provided by human supervision for the purpose of evaluation. While this ground truth is subjective, since each task can be broken down in multiple ways, it is consistent for the simple tasks in our experiments. We use the Jaccard similarity measure (intersection over union) to indicate how much a detected step overlaps with its corresponding ground truth. In Table 1, we compare our method against a random baseline. Because we assume the same step order in all demonstrations, we also order the random steps in time to provide a fair baseline. Note that the random baseline performs fairly well because the steps are distributed somewhat uniformly in time. Should the steps be much less temporally uniform, the random baseline would be expected to perform very poorly, while our method should maintain similar performance. We compare splitting between 2 and 3 steps and find that, for both tasks, 2 steps are easier to discover, probably because these tasks exhibit one strong visual change each while the other steps are more subtle. Note that our unsupervised segmentation only works when full sequences are available, while our learned reward functions can be used in real time without accessing future frames. Hence, in these experiments we evaluate the unsupervised segmentation on the training set only and evaluate the reward functions on the test set. In Table 2, we evaluate the reward functions individually for each step on the test set. For that purpose, we binarize the reward function using a threshold of 0.5. The random baseline simply outputs true or false at each timestep. We observe that the learned feature selection and linear classifier functions outperform the baseline by about a factor of 2. It is not clear exactly what minimum level of accuracy is required to successfully learn to perform these tasks, but we show in Section 3.2.2 that the reward accuracy on the door task is sufficient to reach a 100% success rate with a real robot. Individual step accuracy details can be found in Table 3. Surprisingly, the linear classifier performs well and does not appear to overfit on our relatively small training set. Although the feature selection algorithm performs rather close to the linear classifier compared to the baseline, using feature selection to avoid overfitting does not appear necessary.
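For concreteness, the Jaccard overlap used above can be computed over frame intervals as in the short sketch below; representing steps as (start, end) index pairs is our own convention, not a detail from the paper:

def jaccard(a, b):
    # intersection over union of two frame intervals given as (start, end) pairs
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# Averaged over corresponding detected and ground-truth steps:
# score = sum(jaccard(d, g) for d, g in zip(detected, truth)) / len(truth)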
However, the idea that a small subset of features (32 in this case) can lead to reasonable classification accuracy is verified, and this is an important piece of information for drastically reducing the search space in future work on unsupervised steps discovery. Additionally, we show in Fig. 4 that the feature selection approach works well when the number of features n is in the region [32, 64] but collapses to 0% accuracy when n > 8192. 3.2 REAL-WORLD ROBOTIC DOOR OPENING In this section, we aim to answer the question of whether our previously visualized reward function can be used to learn a real-world robotic motion skill. We experiment on a door opening skill, where we adapt a demonstrated door opening to a novel configuration, such as a different position or orientation of the door. Following the experimental protocol in prior work (Chebotar et al., 2016), we adapt an imperfect kinesthetic demonstration which we ensure succeeds at least occasionally (about 10% of the time). These demonstrations consist only of robot poses, and do not include images. We then use a variety of different video demonstrations, which contain images but not robot poses, to learn the reward function. These videos include demonstrations with other doors, and even demonstrations provided by a human using their own body, rather than through kinesthetic teaching with the robot. Figure 5 shows the experimental setup. We use a 7-DoF robotic arm with a two-finger gripper, and a camera placed above the shoulder, which provides monocular RGB images. For our baseline PI2 policy, we closely follow the setup of Chebotar et al. (2016), which uses an IMU sensor in the door handle to provide both a cost and feedback as part of the state of the controller. In contrast, our approach removes this sensor from the state representation provided to PI2 and, in the reward, replaces the target IMU state with the output of a deep neural network. 3.2.1 DATA We experiment with a range of different demonstrations from which we derive our reward function, varying the source demo (human vs. robotic), the number of sub-goals we extract, and the appearance of the door. We record monocular RGB images with a camera placed above the shoulder of the arm. The door is cropped from the images, and the resulting image is resized such that the shortest side is 299 pixels, preserving the aspect ratio. The input to our convolutional feature extractor (Szegedy et al., 2015) is the 299x299 center crop. 3.2.2 QUALITATIVE ANALYSIS We evaluate our reward functions qualitatively by plotting our perceptual reward functions below the demonstrations for a variety of door types and demonstrators (e.g. robot or human). As can be seen in Fig. 6 and in real experiments in Fig. 7, the reward functions are useful to a robotic arm even when only human demonstrations are shown, as depicted in Fig. 12. Moreover, they exhibit robustness to variations in appearance. 3.2.3 QUANTITATIVE ANALYSIS We compare the success rate of our visual reward against a baseline PI2 method that uses the ground truth reward function obtained by instrumenting the door with an IMU. We run PI2 for 11 iterations with 10 sampled trajectories at each iteration. As can be seen in Fig. 7, we obtain similar convergence speeds to our baseline model, with our method also able to open the door consistently. Since our local policy is able to obtain high-reward candidate trajectories, this is strong evidence that a perceptual reward could be used to train a global policy in the same manner as Chebotar et al.
(2016). 4 CONCLUSION In this paper, we present a method for automatically identifying important intermediate goals given a few visual demonstrations of a task. By leveraging the general features learned by pre-trained deep models, we propose a method for rapidly learning an incremental reward function from human demonstrations, which we successfully demonstrate on a real robotic learning task. We show that pre-trained models are general enough to be used without retraining. We also show there exists a small subset of pre-trained features that are highly discriminative even for previously unseen scenes and which can be used to reduce the search space for future work in unsupervised steps discovery. Another compelling direction for future work is to explore how reward learning algorithms can be combined with robotic lifelong learning. One of the biggest barriers for lifelong learning in the real world is the ability of an agent to obtain reward supervision, without which no learning is possible. Continuous learning using unsupervised rewards promises to substantially increase the variety and diversity of experience that is available for robotic reinforcement learning, resulting in more powerful, robust, and general robotic skills. ACKNOWLEDGMENTS We would like to thank Vincent Vanhoucke for helpful discussions and feedback. We would also like to thank Mrinal Kalakrishnan and Ali Yahya for indispensable guidance throughout this project. A ALGORITHMS DETAILS A.1 BINARY SEGMENTATION ALGORITHM

Algorithm 2 Greedy binary algorithm similar to and utilizing Algorithm 1, where AverageStd() computes the average standard deviation over a set of frames or over a set of values, Join() joins values or lists together into a single list, n is the number of splits desired and min_size is the minimum size of a split.

function BINARYSPLIT(video, start, end, n, min_size, prev_std = [])
    if n = 1 then
        return [], []
    end if
    splits0, std0 ← SPLIT(video, start, end, 2, min_size)
    if n = 2 then
        return splits0, std0
    end if
    splits1, std1 ← BINARYSPLIT(video, start, splits0[0], CEIL(n/2), min_size)
    splits2, std2 ← BINARYSPLIT(video, splits0[0] + 1, end, FLOOR(n/2), min_size)
    all_splits ← []
    all_std ← []
    if splits1 ≠ [] then
        JOIN(all_splits, splits1)
        JOIN(all_std, std1)
    else
        JOIN(all_std, std0[0])
    end if
    if splits0 ≠ [] then
        JOIN(all_splits, splits0[0])
    end if
    if splits2 ≠ [] then
        JOIN(all_splits, splits2)
        JOIN(all_std, std2)
    else
        JOIN(all_std, std0[1])
    end if
    return all_splits, all_std
end function

A.2 COMBINING INTERMEDIATE REWARDS From the two previous sections, we obtain one reward function per intermediate step discovered by the unsupervised algorithm. These need to be combined so that the RL algorithm uses a single reward function which rewards intermediate steps partially but rewards the final step the most. The initial step is ignored, as it is assumed to be the resting starting state in the demonstrations. We opt for the maximum range of each reward to be twice the maximum range of its preceding reward, summing them as follows:

R(a) = \sum_{i=2}^{n} R_i(a) \cdot 2^{(i-1)}, (2)

where n is the number of intermediate rewards detected and a is an activations vector. An example of this combination is shown in Fig. 8. A.3 FEATURE SELECTION ALGORITHM Here we describe the feature selection algorithm we use to investigate the presence of a small subset of discriminative features in mid to high level layers of a pre-trained deep network.
To select the most discriminative features, we use a simple scoring heuristic. Each feature i is first normalized by subtracting the mean and dividing by the standard deviation over all training sequences. We then rank the features for each sub-goal according to their distance z_i to the average statistics of the sets of positive and negative frames for a given goal:

z_i = \alpha \left| \mu_i^{+} - \mu_i^{-} \right| - (\sigma_i^{+} + \sigma_i^{-}), (3)

where \mu_i^{+} and \sigma_i^{+} are the mean and standard deviation of all “positive” frames and \mu_i^{-} and \sigma_i^{-} those of all “negative” frames (the frames that do not contain the sub-goal). Only the top-M features are retained to form the reward function R_g() for the sub-goal g, which is given by the log-probability of an independent Gaussian distribution over the relevant features:

R_g(s_t) = -\frac{1}{M} \sum_{j=1}^{M} \frac{(s_{i_j t} - \mu_{i_j}^{+})^2}{\sigma_{i_j}^{+2}}, (4)

where i_j indexes the top-M selected features. We empirically choose \alpha = 5.0 and M = 32 for our subsequent experiments. At test time, we do not know when the system transitions from one goal to another, so instead of time-indexing the goals, we combine all of the goals into a single time-invariant reward function, where later steps yield higher reward than earlier steps, as described in Appendix A.2. A.4 PI2 REINFORCEMENT LEARNING ALGORITHM We chose the PI2 reinforcement learning algorithm (Theodorou et al., 2010) for our experiments, with the particular implementation of the method based on a recently proposed deep reinforcement learning variant (Chebotar et al., 2016). Since our aim is mainly to validate that our learned reward functions capture the goals of the task well enough for learning, we employ a relatively simple linear-Gaussian parameterization of the policy, which corresponds to a sequence of open-loop torque commands with fixed linear feedback to correct for perturbations, as in the work of Chebotar et al. (2016). This policy has the form \pi(u_t \mid x_t) = \mathcal{N}(K_t x_t + k_t, \Sigma_t), where K_t is a fixed stabilizing feedback matrix and k_t is a learned control. In this case, the state x_t corresponds to the joint angles and angular velocities of a robot, and u_t corresponds to the joint torques. Since the reward function is evaluated from camera images, we assume that the image is a (potentially stochastic) consequence of the robot’s state, so that we can evaluate the state reward r(x_t) by taking the image I_t observed at time t and computing the corresponding activations a_t. Overloading the notation, we can write a_t = f(I_t(x_t)), where f is the network we use for visual features. Then, we have r(x_t) = R(f(I_t(x_t))). The PI2 algorithm is an episodic policy improvement algorithm that uses the reward r(x_t) to iteratively improve the policy. The trust-region variant of PI2 that we use (Chebotar et al., 2016), which is also similar to the REPS algorithm (Peters et al., 2010), updates the policy at iteration n by sampling from the time-varying linear-Gaussian policy \pi(u_t \mid x_t) to obtain samples \{(x_t^{(i)}, u_t^{(i)})\}, and updating the controls k_t at each time step according to

k_t \leftarrow \frac{\sum_i u_t^{(i)} \exp\left( \beta_t \sum_{t'=t}^{T} r(x_{t'}^{(i)}) \right)}{\sum_i \exp\left( \beta_t \sum_{t'=t}^{T} r(x_{t'}^{(i)}) \right)},

where the temperature \beta_t is chosen to bound the KL-divergence between the new policy \pi(u_t \mid x_t) and the previous policy \bar{\pi}(u_t \mid x_t), such that D_{KL}(\pi(u_t \mid x_t) \,\|\, \bar{\pi}(u_t \mid x_t)) \leq \epsilon for a step size \epsilon. Further details and a complete derivation are provided in prior work (Theodorou et al., 2010; Peters et al., 2010; Chebotar et al., 2016).
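A minimal NumPy sketch of this control update under the assumptions above (sampled controls and per-timestep rewards stored as arrays; the function name and shapes are our own illustration, not the authors' code):

import numpy as np

def pi2_update(u_samples, r_samples, beta):
    # u_samples: (I, T, D) sampled controls u_t^(i)
    # r_samples: (I, T) per-timestep rewards r(x_t^(i))
    # beta: (T,) temperatures bounding the per-step KL divergence
    # reward-to-go: sum of rewards from t to the end of the episode
    rtg = np.cumsum(r_samples[:, ::-1], axis=1)[:, ::-1]
    logits = beta[None, :] * rtg
    logits -= logits.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)            # softmax over the I samples
    # new mean controls k_t: reward-weighted average of sampled controls
    return np.einsum('it,itd->td', w, u_samples)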
The PI2 algorithm is a local policy search method that performs best when provided with demonstrations to bootstrap the policy. In our experiments, we use this method together with our learned reward functions to learn a door opening skill with a real physical robot, as discussed in Section 3.2. Demonstrations are provided via kinesthetic teaching, which results in a sequence of reference steps \hat{x}_t, and the initial controls k_t are given by k_t = -K_t \hat{x}_t, such that the mean of the initial controller is K_t(x_t - \hat{x}_t), corresponding to a trajectory-following initialization. This initial controller is rarely successful consistently, but the occasional successes it achieves provide a learning signal to the algorithm. The use of demonstrations enables PI2 to be used to quickly and efficiently learn complex robotic manipulation skills. Although this particular RL algorithm requires demonstrations to begin learning, it can still provide a useful starting point for real-world learning with a real robotic system. As shown by Chebotar et al. (2016), the initial set of demonstrations can be expanded into a generalizable policy by iteratively “growing” the effective region where the policy succeeds. For example, if the robot is provided with a demonstration of opening a door in one position, additional learning can expand the policy to succeed in nearby positions, and the application of a suitable curriculum can progressively grow the region of door poses in which the policy succeeds. However, as with all RL algorithms, this process requires knowledge of the reward function. Using the method described in this paper, we can learn such a reward function from either the initial demonstrations or even from other demonstration videos provided by a human. Armed with this learned reward function, the robot could continue to improve its policy through real-world experience, iteratively increasing its region of competence through lifelong learning. B ADDITIONAL QUALITATIVE RESULTS
1. What is the main contribution of the paper regarding reinforcement learning? 2. What are the strengths of the proposed method, particularly in its simplicity and potential usefulness in robotic applications? 3. What are the weaknesses of the paper, especially regarding the experiments and the combination of the extracted reward function with a simple RL method? 4. Do you have any concerns or suggestions regarding the baselines used in the experiments? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review The paper explores a simple approach to learning reward functions for reinforcement learning from visual observations of expert trajectories for cases where only little training data is available. To obtain descriptive rewards even under such challenging conditions the method re-uses a pre-trained neural network as feature extractor (this is similar to a large body of work on task transfer with neural nets in the area of computer vision) and represents the reward function as a weighted distance to features for automatically extracted "key-frames" of the provided expert trajectories. The paper is well written and explains all involved concepts clearly while also embedding the presented approach in the literature on inverse reinforcement learning (IRL). The resulting algorithm is appealing due to its simplicity and could prove useful for many real world robotic applications. I have three main issues with the paper in its current form; if these can be addressed I believe the paper would be significantly strengthened: 1) Although the recursive splitting approach for extracting the "key-frames" seems reasonable and the feature selection is well motivated, I am missing two baselines in the experiments: - what happens if the feature selection is disabled and the distance between all features is used? Will this immediately break the procedure? If not, what is the trade-off here? - an even simpler baseline than what is proposed in the paper would be the following procedure: simply use all frames of the recorded trajectories, calculate the distance to them in feature space and weight them according to their time as in the approach proposed in the paper. How well would that work? 2) I understand the desire to combine the extracted reward function with a simple RL method but believe the simple controller used could potentially introduce a significant bias in the experiments since it requires initialization from an expert trajectory. As a direct consequence of this initialization the RL procedure is already started close to a good solution and the extracted reward function is potentially only queried in a small region around what was observed in the initial set of images (perhaps with the exception of the human demonstrations). Without an additional experiment it is thus unclear how well the presented approach will work in combination with other RL methods for training the controller. 3) I understand that the low number of available images excludes training a deep neural net directly for the task at hand but one has to wonder how other baselines would do. What happens if one uses a random projection of the images to form a feature vector? How well would a distance measure using raw images (e.g. L2 norm of image differences) or a distance measure based on the first principal components work? It seems that occlusions etc. would exclude them from working well but without empirical evidence it is hard to confirm this. Minor issues: - Page 1: "make use of ideas about imitation" reads a bit awkwardly - Page 3: "We use the Inception network pre-trained ImageNet" -> pre-trained for ImageNet classification - Page 4: the definition of the transition function for the stochastic case seems broken - Page 6: "efficient enough to evaluate" a bit strangely written sentence Additional comments rather than real issues: - The paper is mainly of empirical nature; little actual learning is performed to obtain the reward function and no theoretical advances are needed.
This is not necessarily bad but makes the empirical evaluation all the more important. - While I liked the clear exposition, the approach, in the end, boils down to computing quadratic distances to features of pre-extracted "key-frames". It is nice that you make a connection to standard IRL approaches in Section 2.1, but one could argue that this derivation is not strictly necessary.
ICLR
Title TopicGAN: Unsupervised Text Generation from Explainable Latent Topics Abstract Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied, because the obtained discrete representations can benefit unsupervised learning. However, the performance of learning discrete representations of textual data with deep generative models has not been widely explored. In addition, although generative adversarial networks (GANs) have shown impressive results in many areas such as image generation, they are notoriously difficult to train for text generation. In this work, we propose TopicGAN, a two-step text generative model, which is able to solve those two important problems simultaneously. In the first step, it discovers the latent topics and produces bag-of-words according to the latent topics. In the second step, it generates text from the produced bag-of-words. In our experiments, we show our model can discover meaningful discrete latent topics of texts in an unsupervised fashion and generate high-quality natural language from the discovered latent topics. 1 INTRODUCTION Recently, deep generative models (Goodfellow et al., 2014; Kingma & Welling, 2013; Makhzani et al., 2015) have achieved great success in generating realistic images, videos and audio. Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied, because the obtained discrete representations can benefit unsupervised learning (Chen et al., 2016; van den Oord et al., 2017), semi-supervised learning (Odena et al., 2016), and few-shot learning. However, it remains extremely challenging to generate texts and learn interpretable discrete representations of texts due to the discrete, sparse and high-dimensional properties of textual data. Arjovsky et al. (2017) noted that the original generative adversarial network (GAN) fails to generate discrete data due to the gradient vanishing problem. The Wasserstein distance has been proposed to more precisely measure the distance between real and fake distributions and therefore tackles the gradient vanishing problem (Arjovsky et al., 2017; Gulrajani et al., 2017). Another obstacle for generating discrete data is the non-differentiable function used when generating words, which makes the gradients unable to be backpropagated from the discriminator to the generator. With the help of reinforcement learning, the generator is able to maximize the scores from the discriminator even when gradients cannot flow from the discriminator (Yu et al., 2017; Li et al., 2017; Fedus et al., 2018). In natural language processing (NLP), learning representations of texts is shown useful for unsupervised language understanding. While learning continuous text representations has been widely studied (Kiros et al., 2015; Arora et al., 2017; Logeswaran & Lee, 2018), learning high-level discrete representations has been explored by fewer works (Zhao et al., 2018; Miao et al., 2016). In order to use non-differentiable discrete variables as latent representations of an auto-encoder, special methods such as Gumbel-Softmax (Jang et al., 2016) or vector quantisation (van den Oord et al., 2017) were applied in prior work. Furthermore, because textual data is high-dimensional and has rich but sometimes noisy information such as stop words, it is challenging to learn useful discrete representations of texts.
To mitigate the difficulty of text generation and text discrete representation learning, this paper proposes TopicGAN, which simplifies those two problems by a two-step progressive generation. The idea of dividing the generation process into a pipeline has yielded impressive results in high-resolution image generation (Zhang et al., 2016; Karras et al., 2018). Progressive generation is a natural way to generate texts considering how humans write: when writing articles, humans first consider the context of the text and then organize the context with correct grammar. Hence, in this work, we split text generation into two steps: one is context generation and the other is text generation from the context with correct grammar. In the first step, we use a topic model to discover the latent topics and a bag-of-words generator to produce bag-of-words according to the discovered latent topics and a continuous noise. In the second step, based on the generated bag-of-words, we decode a sequential text with a recurrent neural network (RNN). We utilize InfoGAN (Chen et al., 2016) to discover the latent topics and generate a bag of topical words without supervision in the first step, where the categorical classifier of InfoGAN can be considered a topic model, which is able to discover latent topics of documents. We show that our model can yield explainable topics and outperform previous topic models such as latent Dirichlet allocation (LDA) (Blei et al., 2003) or the variational topic model (Miao et al., 2016) on unsupervised document classification. Topic modeling can be applied to many applications, including extractive text summarization (Titov & McDonald, 2008), document retrieval (Wei & Croft, 2006) and unsupervised classification. Also, unlike previous topic models that do not consider word correlation when generating words, our method is able to regularize the correlation between words via the discriminator, so that it yields more reasonable bag-of-words and produces high-quality texts. The contributions of this work are three-fold: • We propose two-step progressive text generation, which aligns well with the nature of text generation. • Our model is able to discover explainable topics with a topic classifier and achieves promising performance on unsupervised learning. • Compared to previous topic models, the proposed TopicGAN is able to capture the correlation between words, and thus yields better generation results. 2 RELATED WORK Text Generation via GAN Prior work has attempted to generate texts using GANs (Yu et al., 2017; Che et al., 2017; Li et al., 2017; Liu et al., 2018), and there are two main directions. One is to tackle the gradient vanishing problem in the original GAN, where the Jensen-Shannon divergence fails to properly measure the distance between the discrete real data distribution and the continuous generated distribution. By using the Wasserstein distance, the discrete and continuous distributions can be properly measured. However, the Wasserstein distance requires the discriminator to be a Lipschitz continuous function; therefore some restrictions, including weight clipping (Arjovsky et al., 2017) and gradient penalty (Gulrajani et al., 2017), are imposed on the discriminator. Another direction is to feed sampled discrete words from the generated distribution to the discriminator. While the sampling operation is non-differentiable, reinforcement learning is applied to optimize the score from the discriminator.
Some designed reward functions, such as Monte Carlo tree search (MCTS) (Yu et al., 2017), have been proposed to evaluate the generated word at each time step (Lin et al., 2017; Fedus et al., 2018; Li et al., 2017). Our proposed progressive two-step generation framework can easily be combined with any current adversarial text generation model, because we can choose any of these existing methods for the final joint optimization. Our framework effectively facilitates the training process of text generation by decomposing the task into two subproblems. InfoGAN InfoGAN has shown impressive performance for learning disentangled representations of images in an unsupervised manner (Chen et al., 2016). The original GAN generates images from a continuous noise z, where individual dimensions of the noise do not contain disentangled features of the generated images. To learn semantically meaningful representations, InfoGAN maximizes the mutual information between an input code c and the generated output G(z, c). However, maximizing the mutual information is intractable, because it requires access to P(c | G(z, c)). Based on variational information maximization, Chen et al. (2016) used an auxiliary distribution Q(c | G(z, c)) to approximate P(c | G(z, c)). The auxiliary function Q can be a neural network that is jointly optimized:

\min_{G,Q} \max_{D} \; \mathbb{E}_{x \sim P_{data}}[\log(D(x))] + \mathbb{E}_{z \sim P_z, c \sim P_c}[\log(1 - D(G(z, c))) - \lambda Q(c \mid G(z, c))], (1)

where P_{data} is the real data distribution, P_z is the noise distribution, and P_c is the code distribution. The code c can be either continuous or categorical. In our work, the code c is set to be categorical, and the categorical classifier Q becomes a topic model. Therefore, we call the third term in (1) the categorical loss. 3 TOPICGAN When using a GAN for natural language generation, the generator has difficulty generating text with reasonable context and correct grammar simultaneously. In addition, when feeding a sequential text to the InfoGAN categorical code classifier, due to its complex structure, the classifier fails to discover meaningful discrete information. The key idea of this work is that we divide text generation into two steps: 1) generating bag-of-words that can roughly represent the topical information, and then 2) generating sequential texts from the learned topical words. As shown in Figure 1, given a discrete topic c and a continuous noise z as the input, the bag-of-words generator Gbow generates topical words. After obtaining the topical words, the sequence generator Gseq generates texts according to them. 3.1 GENERATING TOPICAL WORDS The upper part of Figure 1 illustrates how our model generates topical words, using a bag-of-words generator Gbow, a bag-of-words discriminator Dbow, a topic model Q, and a noise predictor E. • Bag-of-words generator Gbow It takes a discrete one-hot topic code c and a continuous noise z as the input, and aims to generate bag-of-words that capture the input topical information and are indistinguishable from the bag-of-words of real texts. Here the bag-of-words is a binary vector generated from a sigmoid function, where each dimension indicates whether a given word in the dictionary exists in the text. • Bag-of-words discriminator Dbow It takes the bag-of-words vector as its input and distinguishes whether it is generated or human-written. • Topic model Q It is a categorical topic code classifier implemented by a single matrix, since a linear model makes the generated bag-of-words easier to interpret.
• Noise predictor E It focuses on predicting the noise from which Gbow can reconstruct the input bag-of-words. Similar to Chen et al. (2016), we apply (1) to train our generator Gbow, discriminator Dbow, and topic model Q. However, since it is difficult to generate discrete bag-of-words due to the gradient vanishing problem, we apply the WGAN loss (Arjovsky et al., 2017) to train Gbow and Dbow. In addition, during training there is a severe mode collapse issue within the same topic: given the same topic code, the generator ignores the continuous noise and outputs the same topical words. The reason is that outputting the same bag-of-words is the generator's optimal solution for maximizing the mutual information between the discrete input topic code and its output. To tackle this issue, we clip the probability from the topic model Q in (1) to a threshold α, and rewrite (1) as:

\min_{G,Q} \max_{D} \; \mathbb{E}_{x \sim P_{data}}[D_{bow}(x)] - \mathbb{E}_{z \sim P_z, c \sim P_c}[D_{bow}(G_{bow}(z, c))] - \lambda \mathbb{E}_{z \sim P_z, c \sim P_c}[\min(Q(c \mid G_{bow}(z, c)), \alpha)], (2)

where x is the binary bag-of-words vector of a real text. To constrain Dbow to be a Lipschitz function, we apply the gradient penalty (Gulrajani et al., 2017) to Dbow. In addition, we apply batch normalization (Ioffe & Szegedy, 2015) to alleviate the mode collapse problem. In order to obtain better results, an auto-encoder is included in the optimization procedure (Larsen et al., 2015; Huang et al., 2018). Here we use the binary cross-entropy as the reconstruction loss:

\min_{G,Q,E} \; \mathbb{E}_{x \sim P_{data}}[-x \log(G_{bow}(Q(x), E(x))) - (1 - x) \log(1 - G_{bow}(Q(x), E(x)))], (3)

where the topic classifier Q and the noise predictor E encode the bag-of-words x of a real text into the discrete code and the continuous noise, respectively. We train (2) and (3) alternately. 3.2 GENERATING TEXTS FROM TOPICAL WORDS The lower part of Figure 1 illustrates how the model generates natural language, using a sequence generator Gseq and a sequence discriminator Dseq. • Sequence generator Gseq After obtaining a bag of topical words, we use an LSTM model to generate sequential texts from the bag-of-words. The bag-of-words vector is fed into a feedforward neural network, whose output is used to initialize the hidden state of the LSTM. We use an extra vector as LSTM input to keep track of which input bag-of-words have already been generated, in order to avoid generating the same words repeatedly. • Sequence discriminator Dseq We introduce a sequence discriminator Dseq that encourages Gseq to produce realistic sequential texts. If Dseq simply took the sequential text as input, it might make the sequence generator produce texts that are realistic but unrelated to the input bag-of-words. Therefore, conditioned on the generated bag-of-words, Dseq also discriminates whether the text is produced from the input bag-of-words. Pairs of a human-written text and its bag-of-words are regarded as real data for the discriminator. Supervised pretraining of sequence generator As each sequential text has its corresponding bag-of-words, we can obtain numerous (bag-of-words, text) pairs to pretrain Gseq. To make Gseq robust to the noisy bag-of-words input, during pretraining we add noise, such as randomly deleting words, to the input texts. Joint Training. After pretraining Gseq, we jointly optimize the whole model by adversarial training between Gseq and Dseq. In joint training, the two steps are trained together: we train all modules to directly generate text conditioned on topic c and noise z.
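To make the first-stage objectives concrete, here is a minimal PyTorch-style sketch of the generator-side loss in (2) and the reconstruction loss in (3); the module interfaces, shapes, and hyperparameter values are our own illustrative assumptions rather than the authors' released implementation, and the critic's WGAN-gp update is omitted for brevity:

import torch
import torch.nn.functional as F

# Assumed interfaces: G_bow(z, c) -> (B, V) sigmoid probabilities over the
# bag-of-words vocabulary; Q(x) -> (B, K) topic logits; E(x) -> (B, Z) noise;
# D_bow(x) -> (B,) critic scores.

def generator_loss(G_bow, D_bow, Q, z, c_onehot, lam=1.0, alpha=0.9):
    # Generator/topic-model side of Eq. (2): WGAN term plus clipped categorical term
    fake = G_bow(z, c_onehot)
    wgan_term = -D_bow(fake).mean()                 # maximize the critic score
    q_prob = F.softmax(Q(fake), dim=1)
    q_c = (q_prob * c_onehot).sum(dim=1)            # Q(c | G_bow(z, c))
    cat_term = -torch.clamp(q_c, max=alpha).mean()  # clipping mitigates intra-topic collapse
    return wgan_term + lam * cat_term

def reconstruction_loss(G_bow, Q, E, x_bow):
    # Eq. (3): encode a real (binary, float) bag-of-words, reconstruct it, score with BCE
    c_soft = F.softmax(Q(x_bow), dim=1)
    recon = G_bow(E(x_bow), c_soft)
    return F.binary_cross_entropy(recon, x_bow)

In practice the two losses are alternated with the critic update, as described above.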
We can choose any existing adversarial sequence generation method, such as Yu et al. (2017) or Gulrajani et al. (2017), to jointly train Gseq and Dseq. In our work, Dseq is a deep residual network which takes the output of Gseq as input. We simply apply the training procedure of WGAN-gp (Gulrajani et al., 2017) to train Gseq and Dseq. 4 EXPERIMENTS In this section, we evaluate whether our method is able to learn meaningful latent topics, and show that our two-step progressive generation can generate high-quality texts. For all experiments, in the first step, we set the bag-of-words vocabulary size to 3k and removed stopwords. With the smaller vocabulary in the first step, the topic classifier discovered better topics. When generating texts in the second step, we set the vocabulary size to 15k. 4.1 TOPIC MODELING RESULTS To evaluate the quality of the learned latent topics, we test whether the topic classifier Q is able to discover latent classes matching the classes labeled by humans. We evaluate the topic classifier on three datasets: 20NewsGroups, the DBpedia ontology classification dataset and Yahoo! Answers. 20NewsGroups is a news classification dataset composed of 20 different classes of news with 11,314 training and 7,531 testing documents. The DBpedia ontology classification dataset was constructed by Zhang & LeCun (2015), who selected 14 ontology classes from DBpedia 2014 and for each class randomly picked 40,000 training samples and 5,000 testing samples; thus, there are 560,000 training and 70,000 testing samples in total. Yahoo! Answers is a question-type classification dataset with 10 types of question-answer pairs constructed by Zhang & LeCun (2015). There are 1,400,000 training samples and 60,000 testing samples. 4.1.1 UNSUPERVISED CLASSIFICATION For all experiments, including baseline methods, we set the number of latent topics to the number of true classes in each dataset. We used the topic classifier Q to predict the latent topic distribution of each sample, and assigned each sample to the latent topic with the maximum probability. The samples within a latent topic cluster use their true labels to vote for which label should be assigned to the whole cluster. After assigning each latent topic to its corresponding label, we evaluate the classification accuracy as a measure of the quality of the captured latent topics. We compared our method with a statistical topic model, LDA (Blei et al., 2003), and a variational topic model (NVDM) (Miao et al., 2016). The results of unsupervised classification are shown in Table 1: compared to previous topic models, our model significantly outperforms the baselines. The main reason our method achieves better results on unsupervised classification is that LDA assumes each document is produced from a mixture of topics, while our method assumes that each document is produced from a single topic plus a noise controlling the variation of texts within a topic. In unsupervised document classification, each document has only a single class. Therefore, for these experiments, assuming each document comes from a single main topic is a more appropriate assumption, which allows our model to learn more distinct topics. In addition, as our training documents are short, it is hard to break a short text into several topics, which is one possible reason why LDA does not work well on short texts.
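A minimal sketch of this cluster-voting evaluation protocol, assuming integer arrays of predicted topic assignments and ground-truth labels (our own illustration of the procedure above):

import numpy as np

def cluster_vote_accuracy(topics, labels, num_topics):
    # Assign each latent topic cluster the majority ground-truth label of its
    # members, then measure classification accuracy under that assignment.
    mapped = np.empty_like(labels)
    for k in range(num_topics):
        members = topics == k
        if members.any():
            mapped[members] = np.bincount(labels[members]).argmax()
    return float((mapped == labels).mean())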
4.1.2 ABLATION STUDY In this section we show that all mechanisms described in Section 3.1, including categorical loss clipping and training with an auto-encoder, are useful for unsupervised classification. As shown in Table 1, the key trick that greatly improved the performance is the auto-encoder training of the bag-of-words generator Gbow and topic model Q. Without auto-encoder training, performance dropped to almost half on the 20NewsGroups and Yahoo! Answers datasets. Categorical loss clipping also improved the performance; it additionally alleviated the mode collapse problem within a single class and made the training process more stable. The model complexity of the topic classifier also influenced the classification accuracy. On simpler datasets like 20NewsGroups or DBpedia, using a single matrix as the topic classifier (Table 1, no hidden layer) yielded better performance, while on the more difficult Yahoo! Answers dataset, a topic classifier with one hidden feed-forward layer performed slightly better. 4.2 TOPIC COHERENCE In Section 4.1, we found our method can outperform previous methods on unsupervised text classification. In this section, we discuss the quality of the learned topic words via quantitative and qualitative analysis. The topic words of our topic model can be retrieved as follows. We used a single matrix M_{V×K} as our topic classification model Q, where V is the bag-of-words vocabulary size and K is the number of latent topics. The value M_{v,k} represents the importance of the v-th word to the k-th topic. The top few words with the highest values within each column are selected as that topic's words. Quantitative analysis. We use the C_v metric (Röder et al., 2015) to evaluate the topic coherence score. The coherence score is computed using the English Wikipedia of 5.6 million articles as an external corpus. Table 3 lists the coherence scores on different datasets. TopicGAN is on par with LDA on DBpedia and outperforms LDA on all other datasets. This result suggests the effectiveness of using InfoGAN and neural networks to train an explainable topic model. Qualitative analysis. To further analyze the quality of discovered topics, the top 20 topical words of the latent topics are listed in Table 6 and the generated corresponding texts are shown in Table 7. From the tables, the captured latent topics are clearly semantically different based on the generated topical words. Similarly, the generated texts are also fluent and topically related to the associated topics. 4.3 TEXT GENERATION RESULTS We conducted sequential text generation experiments on two datasets, DBpedia and English Gigaword. English Gigaword is a summarization dataset composed of the first sentences of articles and their corresponding titles. The pre-processing script (Rush et al., 2015) yielded 3.8M training samples and 400K validation samples. We trained our model to generate the first sentence of articles on the training set. Unlike DBpedia, which has labeled classes, English Gigaword has none. Therefore, we conducted a human evaluation to assess whether our model is able to generate text from meaningfully discovered topics. We compare our method with a baseline which was pre-trained with a VAE and then fine-tuned with WGAN-gp. Instead of generating text conditioned on a discrete code and a continuous noise, the baseline method is simply conditioned on a continuous noise.
For the baseline, tricks such as KL-term annealing (Semeniuta et al., 2017) were applied when pre-training the variational auto-encoder. We chose this method as our baseline because our sequence generator Gseq was pre-trained as a bag-of-words-to-text language model, and we wanted the baseline to also be pre-trained with a proper language model. We also evaluated the performance of our method on sequential text generation with and without joint WGAN-gp training. To evaluate the quality of the generated text, we measured its perplexity and conducted human evaluation.

4.3.1 PERPLEXITY

The original English Gigaword and DBpedia datasets are already split into training and testing sets. We trained a general LSTM language model on the text of all training data and, for each class, a class LSTM language model initialized from the general language model. These language models were then used to compute the perplexity of the generated text (a minimal sketch of this computation is given below). The perplexities on English Gigaword and DBpedia are shown in Table 4. On both datasets, the perplexity of TopicGAN was almost as low as that of the VAE+WGAN-gp baseline, suggesting that our method generates equally smooth text. Moreover, our method not only generated text of comparable quality but also generated text conditioned on a discrete topic. As shown in Table 4, the perplexity of TopicGAN under the class language models is lower than under the general language model, which implies that the generated text captures class-specific information. Joint training also slightly improved perplexity. Since perplexity cannot precisely reflect text quality, we conducted human evaluation to further examine the performance of our model.

4.3.2 HUMAN EVALUATION

Since there is no labeled categorical data in English Gigaword, we could not otherwise evaluate whether our model discovered meaningful latent topics; moreover, perplexity is not an accurate measure of text quality. For these two reasons, we conducted a human evaluation composed of two parts. The first part evaluated whether texts generated from the same latent topic are recognized as belonging to the same class by humans; the second part evaluated the quality of the generated sentences. In the first part, annotators were shown a set of example texts generated from the same latent topic and asked to categorize the examples into a specific topic such as sports, economy, or violence. They then answered several multiple-choice questions, each with one text from the same topic as the correct choice and texts from other topics as incorrect choices. As shown in Table 5(a), almost all annotators selected the text whose topic matched the example class in each question. This result suggests that, even on a dataset without labeled categorical data, our method discovers meaningful latent topics and generates text from them. The second part measured the quality of the generated text, covering both grammar and coherence: annotators were asked to select the better of two sentences generated by different methods. The first two rows of Table 5(b) compare human preference for TopicGAN with and without joint training; annotators preferred the text generated with joint training.
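For completeness, here is a minimal sketch of the perplexity computation from Section 4.3.1. The helper `log_prob_fn`, assumed to return the total log-probability a trained language model assigns to a token sequence, is hypothetical and stands in for the trained LSTM language models.

```python
import math

def corpus_perplexity(generated_texts, log_prob_fn):
    """Perplexity of generated text under a trained language model:
    exp of the negative average per-token log-probability."""
    total_log_prob, total_tokens = 0.0, 0
    for text in generated_texts:
        tokens = text.split()
        total_log_prob += log_prob_fn(tokens)
        total_tokens += len(tokens)
    return math.exp(-total_log_prob / total_tokens)
```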
Returning to the preference results, the last two rows of Table 5(b) show that annotators slightly preferred TopicGAN over the baseline (VAE+WGAN-gp).

5 CONCLUSION

This paper proposes TopicGAN, which automatically captures topical information and generates natural language with controlled semantics in an unsupervised manner. The discrete representations achieve superior performance on unsupervised classification, demonstrating the topic modeling capacity of the proposed model. In addition, our model generates text of quality comparable to other approaches while additionally providing topical representations for better interpretability.
1. What are the limitations of the paper's approach to topic modeling and text generation? 2. How does the proposed model differ from existing generative topic models like LDA? 3. What are the weaknesses of the paper's evaluation methodology? 4. How convincing are the results presented in the paper? 5. Are there any concerns regarding the novelty of the proposed model?
Review
Review This paper proposes TopicGAN, a generative adversarial approach to topic modeling and text generation. The model basically combines two steps: first to generate words (bag-of-words) for a topic, then second to generate the sequence of the words. While the idea is interesting, there are several important limitations. First, the paper is difficult to understand, and some of the explanations are not convincing. For example, in section 4.1.1, it says "... our method assumes that the documents are produced from a single topic ... Our assumption aligns well with human intuition that most documents are generated from a single main topic." This goes very much against the common assumption of a generative topic model, such as LDA, which the model compares against. I don't mean to argue either way, but if the paper presents a viewpoint which is quite different from the commonly accepted viewpoint (within the specific research field), then there needs to be a much deeper explanation, ideally with concrete evidence to support it. Another sentence from the same paragraph states that their "model outperforms LDA because LDA is a statistical model, while our generator is a deep generative model." This argument also seems flawed and without concrete evidence. There are other parts in the paper where the logic seems strange and unsupported by evidence, and they make it difficult to understand and accept the major claims of the paper. Second, the model does not offer much novelty. It seems that the two-stage model simply puts the two pieces, a GAN-style generator and an LSTM sequence model, together. Perhaps I am not understanding the model, but the model description was also neither clear nor easy to understand with respect to its novelty. Third, the evaluation is somewhat weak. There are two main evaluation tasks: text classification and text generation. For the first task, classification is not the main purpose of topic models, and while text classification _is_ used in many topic modeling papers, it is almost always accompanied by other evaluation metrics such as held-out perplexity and topic coherence. This is because the main purpose of topic modeling is to actually infer the topics (per-topic word distribution and per-document topic distribution) and model the corpus. Thus I feel it is not a fair evaluation to just compare the models using text classification tasks. The second evaluation task of text generation is not explained enough. For the human evaluation, who were the annotators, and how were they trained? How many people annotated each output, and what was the inter-rater agreement? How many sentences were evaluated, and how were they chosen? Without these details, it is difficult to judge whether this evaluation was valid. Lastly, the results are mediocre. Besides the classification task, the others do not show significant improvements over the baseline models. Perplexity (Table 3) shows similar results for DBpedia and worse results (than WGAN-gp) for Gigaword. Table 4 shows slightly better results for "Preference" for TopicGAN with joint training, but "Accuracy" is measured only for the proposed model and not the baseline model.
ICLR
1. What is the main contribution of the paper regarding topic modeling? 2. What are the strengths and weaknesses of the proposed approach in terms of its ability to generate text sequences based on bag-of-words? 3. How does the reviewer assess the clarity and quality of the paper's content, particularly regarding experiment settings and hyperparameters? 4. What are the minor issues mentioned by the reviewer regarding typos and cross-references? 5. How does the reviewer evaluate the novelty and effectiveness of using adversarial training for topic models, especially considering the limitations of the proposed model?
Review
Review This paper presents a topic model based on adversarial training. Specifically, the paper adopts the framework of InfoGAN to generate the bag-of-words of a document, and the latent codes in InfoGAN correspond to the latent topics in topic modelling. In addition to the above framework, to make the model work better, several add-ons are also proposed, combining an autoencoder, loss clipping, and a generative model to generate text sequences based on the bag-of-words. My comments are as follows: 1. There are several issues of this paper on clarity: (1) The first major one for me is that the authors did not give any details on how to interpret the latent code (i.e. the topics here) with the top words. In conventional topic models, usually a topic is a distribution over words, so that top words can be selected by their weights. But I did not see anything similar in the proposed model. (2) Another major one is why the word sequence generator is introduced in the proposed model. I did not see the contribution of this part to the whole model as a topic model, although the joint training shows a marginal performance gain on text generation. (3) Some of the experiment settings are not provided, for example, the number of topics, the values of \alpha and \lambda in the proposed model, and the hyperparameters of LDA, which are crucial for the results. (4) Why is the size of the bag-of-words vocabulary set to 3K whereas that of the word generation vocabulary is set to 15K? Minor issues: (5) In the related work on InfoGAN, there are a lot of cross-references to the following sections, before they are properly introduced. (6) Typo of "Accurcay" in Table 4(a). 2. Using adversarial training for topic models seems to be an interesting idea. There is not much work in this line, and this paper proposes a model that seems to be working. But it seems that the proposed model has several issues: (1) Each document seems to have only one topic, which can be an impractical setting for long documents. (2) The proposed model ignores the word counts, which can be important for topic modelling. (3) I did not see a major improvement of the proposed model over others, given that the only numerical result reported is classification accuracy and state-of-the-art conventional topic models are not compared. This also leads to my concern about the experiments. I would expect more comparisons than classification accuracy, such as topic coherence and perplexity (for topic modelling), and with more advanced conventional models. From the low values of the accuracy on 20NG, I am wondering whether LDA is working properly.
ICLR
Title TopicGAN: Unsupervised Text Generation from Explainable Latent Topics Abstract Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied because the obtained discrete representations can benefit unsupervised learning. However, the performance of learning discrete representations of textual data with deep generative models has not been widely explored. In addition, although generative adversarial networks(GAN) have shown impressing results in many areas such as image generation, for text generation, it is notorious for extremely difficult to train. In this work, we propose TopicGAN, a two-step text generative model, which is able to solve those two important problems simultaneously. In the first step, it discovers the latent topics and produced bag-of-words according to the latent topics. In the second step, it generates text from the produced bag-of-words. In our experiments, we show our model can discover meaningful discrete latent topics of texts in an unsupervised fashion and generate high quality natural language from the discovered latent topics. 1 INTRODUCTION Recently, deep generative models (Goodfellow et al., 2014; Kingma & Welling, 2013; Makhzani et al., 2015) have achieved a great success on generating realistic images, videos and audio. Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied, because the obtained discrete representations can benefit unsupervised learning (Chen et al., 2016; van den Oord et al., 2017), semi-supervised learning (Odena et al., 2016), and few-shot learning. However, it remains extremely challenging for generating texts and learning interpretable discrete representations of texts due to the discrete, sparse and high dimensional properties of textual data. Arjovsky et al. (2017) mentioned that the original generative adversarial network (GAN) fails to generate discrete data due to the gradient vanishing problem. Wasserstein distance has been proposed to more precisely measure the distance between real and fake distributions and therefore tackles the gradient vanishing problem (Arjovsky et al., 2017; Gulrajani et al., 2017). Another obstacle for generating discrete data is the non-differentiable function when generating words, which makes the gradients unable to be backpropagated from the discriminator to the generator. With the help of reinforcement learning, the generator is able to maximize the scores from the discriminator when the gradient is not able to flow from discriminator (Yu et al., 2017; Li et al., 2017; Fedus et al., 2018). In natural language processing (NLP), learning representations of texts is shown useful for unsupervised language understanding. While learning continuous text representations has been widely studied (Kiros et al., 2015; Arora et al., 2017; Logeswaran & Lee, 2018), learning high-level discrete representations has been explored by fewer work (Zhao et al., 2018; Miao et al., 2016). In order to take non-differentiable discrete variables as latent representations of an auto-encoder, some special methods such as Gumbel-Softmax (Jang et al., 2016) or vector quantisation (van den Oord et al., 2017) were applied by the prior work. Furthermore, because textual data is high dimensional and has rich but sometimes noisy information such as stop words, it is challenging to learn useful discrete representations of texts. 
To mitigate the difficulty of text generation and text discrete representation learning, this paper proposes TopicGAN, which simplifies those two problems by a two-step progressive generation. The idea of dividing the generation process into a pipeline yielded impressive results on high resolution image generation (Zhang et al., 2016; Karras et al., 2018). The progressive generation is a natural way to generate texts considering how human writes texts. When writing articles, human first considers the context of the texts and then organizes the context with correct grammar. Hence, in this work, we split text generation into two step, one is context generation and another is text generation from context with correct grammar. In the first step, we use topic model to discover the latent topics and use bag-of-words generator to produce bag-of-words according to the discovered latent topics and continuous noise. In the second step, based on the generated bag-of-words, we decode a sequential text by a recurrent neural network (RNN). We utilize InfoGAN (Chen et al., 2016) to discover the latent topics and generate a bag of topical words without supervision in the first step, where the categorical classifier of InfoGAN can be considered as a topic model, which is able to discover latent topics of documents. We show that our model can yield explainable topics and outperform previous topic models such as latent Dirichlet allocation (LDA) (Blei et al., 2003) or variational topic model (Miao et al., 2016) for unsupervised document classification. Topic modeling can be applied to many applications, including extractive text summarization(Titov & McDonald, 2008), document retrieval (Wei & Croft, 2006) or unsupervised classification. Also, unlike previous topic models that did not consider the word correlation during generating words, our method is able to regularize the correlation between words by the discriminator so that it yields more reasonable bag-of-words and produces high-quality texts. The contributions of this work are three-fold: • We propose two-step progressive text generation, which aligns well with the nature of text generation. • Our model is able to discover explainable topics by a topic classifier and achieves promising performance on unsupervised learning. • Compared to previous topic models, the proposed TopicGAN is able to capture the correlation between words, and thus performs better generation results. 2 RELATED WORK Text Generation via GAN Prior work attempted at generating texts using GAN (Yu et al., 2017; Che et al., 2017; Li et al., 2017; Liu et al., 2018), and there are two main directions. One is to tackle the gradient vanishing problem in the original GAN, where JensenShannon divergence to evaluate the discrete real data distribution and the continuous generated distribution. By using Wasserstein distance, the discrete and continuous distributions can be properly measured. However, the Wasserstein distance requires the discriminator to be a Lipschitz continuous function; therefore some restrictions including weight-clipping (Arjovsky et al., 2017), gradient penalty (Gulrajani et al., 2017) are imposed on the discriminator. Another direction is to feed sampled discrete words from generated distribution to the discriminator. While the sampling operation is non-differentiable, reinforcement learning is applied to optimize the score from the discriminator. 
Some designed reward functions, such as Monte Carlo tree search (MCTS) (Yu et al., 2017), are proposed to evaluate the generated word for each time step (Lin et al., 2017; Fedus et al., 2018; Li et al., 2017). Our proposed progressive two-step generation framework can be easily combined with any current adversarial text generation models, because we can choose one those current work as our final jointly optimization method. Our framework effectively facilitates the training process of text generation by decomposing the task into two subproblems. InfoGAN InfoGAN has shown impressive performance for learning disentangled representations of images in an unsupervised manner (Chen et al., 2016). The original GAN generates images from a continuous noise z, while each dimension of the noise does not contain disentangled features of generated images. To learn semantically meaningful representations, InfoGAN maximizes the mutual information between input code c and the generated output G(z, c). However, maximizing the mutual information is intractable, because it requires the access of P (c | G(z, c)). Based on variational information maximization, Chen et al. (2016) used an auxiliary distribution Q(c | G(z, c)) to approximate P (c | G(z, c)). The auxiliary function Q can be a neural network that can be jointly optimized: min G,Q max D Ex∼Pdata [log(D(x))] + Ez∼Pz,c∼Pc [log(1−D(G(z, c)))− λQ(c | G(z, c))], (1) where Pdata is the real data distribution, Pz is the noise distribution, and Pc is the code distribution. The code c can be either continuous or categorical. In our work, the code c is set to be categorical, and the categorical classifier Q becomes a topic model. Therefore, we call the third term in (1) categorical loss. 3 TOPICGAN When using GAN for natural language generation, the generator has difficulty generating text with reasonable context and correct grammar simultaneously. In addition, when feeding a sequential text to the InfoGAN categorical code classifier, due to its complex structure, the classifier fails to discover meaningful discrete information. The key idea of this work is that we divide text generation into two steps: 1) generating bag-of-words that can roughly represent the topical information, and then 2) generating sequential texts from the learned topical words. As shown in Figure 1, given a discrete topic c and a continuous noise z as the input, the bag-of-words generator Gbow generates topical words. After obtaining topical words, the sequence generator Gseq generates texts according to topical words. 3.1 GENERATING TOPICAL WORDS The upper part of Figure 1 illustrates how our model generate topical words, where there are a bag-of-words generator Gbow, a bag-of-words discriminator Dbow, a topic model Q, and a noise predictor E. • Bag-of-words generator Gbow It takes a discrete one-hot topic code c and a continuous noise z as the input, and manages to generate bag-of-words that captures the input topical information and are indistinguishable from the bag-of-words of real texts. Here the bag-of-words is a binary vector generating from sigmoid function, where each dimension indicates whether a single word in the dictionary exists in the text. • Bag-of-words discriminator Dbow It takes the bag-of-words vector as its input and distinguishes whether it is generated or human-written. • Topic model Q It is a categorical topic code classifier that is implemented by a matrix, considering that a linear model is easier to interpret the generated bag-of-words. 
• Noise predictor E It focuses on predicting the noise that can reconstruct the input bag-of-words by Gbow. Similar to Chen et al. (2016), we apply (1) to train our generator Gbow, discriminator Dbow, and the topic model Q. However, it is difficult to generate discrete bag-of-words due to the gradient vanishing problem, we apply WGAN (Arjovsky et al., 2017) loss to train Gbow and Dbow. In addition, during training, there is a severe mode collapse issue within the same topic. That is, given the same topic code, the generator ignores the continuous noise and output the same topical words. The reason is that outputting the same bag of words for the generator is the optimal solution to maximize the mutual information between the discrete input topic code and its output. To tackle this issue, we clip the probability from topic model Q in (1) to a specific range α, and rewrite the (1) to: min G,Q max D Ex∼Pdata [Dbow(x)]− Ez∼Pz,c∼Pc [Dbow(Gbow(z, c))]− λEz∼Pz,c∼Pc [min(Q(c | Gbow(z, c)), α)]. (2) , where x is the binary bag-of-words vector of real text. Here, to constrain the Dbow to be Lipschitz function, we apply gradient penalty (Gulrajani et al., 2017) to Dbow. In addition, we apply batchnormalization to alleviate the mode collapse problem (Ioffe & Szegedy, 2015). In order to obtain better results, an auto-encoder is included in the optimization procedure (Larsen et al., 2015; Huang et al., 2018). Here we use binary cross entropy loss as reconstruction loss function: min G,Q,E Ex∼Pdata [−x ∗ log(Gbow(Q(x), E(x))) + (1− x) ∗ log(1−Gbow(Q(x), E(x)))] (3) , where the topic classifier Q and the noise predicor E encode real text bag-of-words x into the discrete code and continuous noise respectively. We train 2 and 3 alternately. 3.2 GENERATING TEXTS FROM TOPICAL WORDS The lower part of Figure 1 illustrates how the model generates natural language, where there are a sequence generator Gseq and a sequence discriminator Dseq . • Sequence generator Gseq After obtaining bag of topical words, we use an LSTM model to generate sequential texts from the bag-of-words. The bag-of-words vector is fed into a feedforward neural network and the output of feedforward neural network is used to initialize the hidden state of LSTM. We use an extra vector as the LSTM input to keep track of which input bag-of-words have been generated in order to avoiding generating the same words repeatedly. • Sequence discriminator Dseq We introduce a sequence discriminator Dseq that encourages Gseq to produce realistic sequential texts. If Dseq simply takes sequential text as the input, it may make the sequence generator generate texts that is realistic but unrelated to the input bag-of-words of Therefore, conditioned on the generated bag-of-words, Dseq is also able to discriminate whether the text is produced from input bag-of-words. The pairs of bag-of-words of human written text and corresponding text are regarded as real data for discriminator. Supervised pretraining of sequence generator As each sequential text has its corresponding bagof-words, we can obtain numerous (bag-of-words, text) pairs to pretrain Gseq . To make Gseq robust to the noisy bag-of-words input, during pretraing, we add some noise such as randomly deleting words to the input texts. Joint Training. After pretrainingGseq , we jointly optimize the whole model by adversarial training between Gseq and Dseq . In the joint training, two steps are jointly trained that we train all the modules to directly generate text conditioned on topic c and noise z. 
We can choose any existing adversarial sequence generation method such as Yu et al. (2017) or Gulrajani et al. (2017) to jointly train Gseq and Dseq . In our work, the Dseq is a deep residual network which takes the output of Gseq as input. We simply apply the training procedure of WGAN-gp (Gulrajani et al., 2017) to train Gseq and Dseq . 4 EXPERIMENTS In this section, we evaluate whether our method is able to learn meaningful latent topics, and show that our two-step progressive generation can generate high-quality texts. For all experiments, in the first step, we set our bag-of-words vocabulary size to 3k and removed stopwords. By setting smaller vocabulary size in the first step, the topic classifier discovered better topics. When generating texts in the second step, we set the vocabulary size to 15k. 4.1 TOPIC MODELING RESULTS To evaluate the quality of the learned latent topics, we test whether the topic classifier Q is able to correctly discover the latent class same as the class labeled by human. We evaluate the topic classifier on three datasets including 20NewsGroups, DBpedia ontology classification dataset and Yahoo! answers. The 20NewsGroups is a news classification dataset composed of 20 different classes of news with 11,314 training and 7,531 testing documents. DBpedia ontology classification dataset is constructed by Zhang & LeCun (2015). They selected 14 ontology classes from DBpedia 2014, and for each class they randomly picked 40,000 training samples and 5,000 testing samples. Thus, there are total 560,000 and 70,000 training and testing samples respectively. Yahoo! answers is a question type classification dataset with 10 types of question-answer pairs constructed by Zhang & LeCun (2015). There are 1,400,000 training samples and 60,000 testing samples. 4.1.1 UNSUPERVISED CLASSIFICATION For all experiments including baseline methods, we set the number of latent topics same as the number of true classes in each dataset. We used the topic classifier Q to predict the latent topic distribution of each sample, and we assigned each sample to the latent topic with the maximum probability. The samples within a latent topic cluster used its true label to vote for which label should be assigned to the whole cluster. After assigning each latent topic to its corresponding label, we evaluate the classification accuracy as the quality of the captured latent topics. We compared our method with a statistical topic model, LDA (Blei et al., 2003), and a variational topic model(NVDM) (Miao et al., 2016). The results of unsupervised classification are shown in Table 1, where compared to previous topic models, our model significantly outperforms baselines for unsupervised classification. The main reason that our method achieves better result on unsupervised classification is that LDA asuumes each documents are produced from mixture of topics, while our method assumes that the documents are produced from a single topic and a noise controlling the difference of texts within a topic. In unsupervised document classification, the documents have only one single class. Therefore, for those unsupervised classification experiments, assuming each documents coming from a single main topic is a more appropriate assumption, which allows our model to learn more distinct topics. In addition, as the length of our training documents are short, its hard to break the short text into several topics, which is one of the possible reason that makes LDA works not well on short text. 
4.1.2 ABLATION STUDY In this section we show that all mechanisms described in Section 3.1 including categorical loss clipping, training with auto-encoder are useful for unsupervised classification. As shown in Table 1, the key trick that greatly improved the performance is auto-encoder training of bag-of-words generator Gbow and topic model Q. Without training with auto-encoder, there was almost only half of the original performance in 20NewsGroups and Yahoo! Answers datasets. Categorical loss clipping also improved the performance. It also alleviated the mode collapse problem within a single class and made the training process more stable. The model complexity of topic classifier also influenced the classification accuracy. In the simpler dataset like 20 news or DBpedia, using a single matrix as model of discriminator (Table.1 no hidden layer) yielded better performance. While in more difficult dataset like Yahoo! Answers dataset, topic classifier with one hidden feed forward layer performed slightly better. 4.2 TOPIC COHERENCE In Section 4.1, we find our method can outperform previous methods on unsupervised text classification. In this section, we discuss the quality of learned topic words on quantitative analysis and qualitative analysis. The topic words of our topic model can be retrieved as following. We used a single matrix MV×K as our topic classification model Q, where V is the bag-of-words vocabulary size and K is the latent topic number. The value of Mv,k represents the importance of v-th word to k-th topic. The top few words with higher values within each column are selected as its topical words. Quantitative analysis. We use the Cv metric (Roder et al., 2015) to evaluate the topic coherence score. The coherence score is computed by using English Wikipedia of 5.6 million articles as external corpus. Table 3 lists the coherence score on different datasets. Compared to LDA, TopicGAN is on par with LDA on DBpedia on, and outperforms LDA on all other datasets. This result suggests the effectiveness of using info-GAN and neural network to train a explainable topic model. Qualitative analysis. To further analyze the quality of discovered topics, the top 20 topical words of latent topics are listed inTable 6 and the generated corresponding texts are shown in Table 7. From the tables, the captured latent topics are clearly semantically different based on the generated topical words. Similarly, the generated texts are also fluent and topically related to the associated topics. 4.3 TEXT GENERATION RESULTS We conducted sequential text generation experiments on two datasets including DBpedia and English Gigaword. English Gigaword is a summarization dataset which is composed of first sentence of articles and their corresponding titles. The pre-process script (Rush et al., 2015) yielded 3.8M training samples and 400K validation samples. We trained our model to generate the first sentence of articles on training set. Unlike DBpedia which has the labeled classes, English Gigaword has no labeled classes. Therefore, we conducted human evaluation to evaluate whether our model is able to generate text from meaningful discovered topics. We compare our method with the baseline method which was pre-trained with VAE and then fine tuned by WGAN-gp. Instead of generating text conditioned on discrete code and continuous noise, the baseline method was simply conditioned on a continuous noise. 
When pre-training the variational auto-encoder, tricks such as KL-term annealing (Semeniuta et al., 2017) were applied. We chose this method as our baseline because our sequence generator Gseq was pre-trained as a bag-of-words-to-text language model, and we wanted the baseline to also be pre-trained with a proper language model. The performance of our method with and without joint WGAN-gp training on sequential text generation was also evaluated. To evaluate the quality of the generated text, we measured its perplexity and conducted human evaluation.
4.3.1 PERPLEXITY
The original English Gigaword and DBpedia datasets are already split into training and testing sets. We trained a general LSTM language model on the text of all training data, and for each class we trained a class LSTM language model initialized from the general language model. These language models were then used to compute the perplexity of the generated text. The perplexities on English Gigaword and DBpedia are shown in Table 4. On both datasets, the perplexity of TopicGAN was almost as low as that of the baseline VAE+WGAN-gp, suggesting our method generates equally fluent text. Moreover, our method not only generated text of equal quality, it also generated text conditioned on a discrete topic. As shown in Table 4, the perplexity of TopicGAN under the class language models is lower than under the general language model, implying that the generated text captures class-specific information. Joint training also slightly improved perplexity. As perplexity cannot precisely reflect text quality, we conducted human evaluation to further assess the performance of our model.
4.3.2 HUMAN EVALUATION
As there is no labeled categorical data for English Gigaword, we could not directly evaluate whether our model discovered meaningful latent topics. In addition, perplexity is not an accurate measure of text quality. For these two reasons, we conducted a human evaluation composed of two parts. The first part evaluated whether texts generated from the same latent topic can be recognized as the same class by humans; the second part evaluated the quality of the generated sentences. In the first part, judges were shown a set of example texts generated from the same latent topic and were asked to categorize the examples into a specific topic, such as sports, economy, or violence. Then, following the example texts, there were several multiple-choice questions, each with a text from the same topic as the correct choice and texts from other topics as incorrect choices; judges were asked to pick the correct one. As shown in Table 5(a), almost all judges successfully selected the text whose topic matched the example class in each question. This result suggests that on a dataset without labeled categorical data, our method is able to discover meaningful latent topics and generate text from them. The second part measured the quality of the generated text, covering both grammar and coherence. We asked judges to select the better of two sentences generated by different methods. In the first two rows of Table 5(b), we compare the human preference for TopicGAN with and without joint training; judges preferred the text generated with joint training.
The last two rows of Table 5(b) show that judges slightly preferred TopicGAN over the baseline (VAE+WGAN-gp).
5 CONCLUSION
This paper proposes TopicGAN, which automatically captures topical information and generates natural language with controlled semantics in an unsupervised manner. The discrete representations show superior performance on unsupervised classification, demonstrating the topic modeling capacity of the proposed model. In addition, our model generates texts of comparable quality to other approaches while additionally providing topical representations for better interpretability.
1. What are the strengths and weaknesses of the proposed framework for topic modeling?
2. How does the reviewer assess the clarity and technical detail of the paper's description of the training process?
3. What are the concerns regarding the comparison with other related models and the selection of baseline models?
4. How does the reviewer evaluate the quality of the generated text, and what metrics could be used for evaluation?
5. Are there any questions about combining the proposed framework with other text generation models, and what kind of experiments could demonstrate its effectiveness?
6. Are there any minor comments or suggestions for improving the paper's clarity and accuracy?
Review
Review This paper proposes a new framework for topic modeling, which consists of two main steps: generating a bag of words for topics and then using an RNN to decode the text sequence. Pros: The authors draw lessons from infoGAN and design a creative objective function with a reconstruction loss and a categorical loss. As a result, the paper achieves impressive results on topic modeling tasks. Comments: 1. High-level language is used to describe how to train the two parts of the model, which is not technically clear. It would be better to describe the algorithms in more detail by listing the steps of your algorithm in Section 3.3. 2. For the text generation experiments, why didn't you compare your model with other related models such as SeqGAN or TextGAN? It is not so convincing to use only VAE+WGAN-gp as a baseline model. 3. For the qualitative analysis, you just listed some of your generated sentences to demonstrate fluency and relevance. Why didn't you use some standard metrics for evaluating the quality of the text? I cannot judge the quality of your model through these randomly selected sentences. 4. As you mentioned in this paper that "your model can be easily combined with any current text generation models", have you done any experiments demonstrating that the original text generation model gets better performance after applying your framework? Minor comments: 1. On page 2 and page 4, you mentioned "the third term in (2)". According to my understanding, this should refer to equation (1) instead.
ICLR
Title
C+1 Loss: Learn to Classify C Classes of Interest and the Background Class Differentially
Abstract
There is a kind of problem found throughout the classification area, where we want to classify C + 1 classes of samples: C semantically deterministic classes, which we call Classes of Interest (CoIs), and the (C + 1)th semantically undeterministic class, which we call the background class. Although most classifiers use softmax-based cross-entropy loss to supervise the training process without differentiating the background class from the CoIs, this is unreasonable, as each of the CoIs has its own inherent characteristics but the background class doesn't. We argue that the background class should be treated differently from the CoIs during training. Motivated by this, we first define the C + 1 classification problem. Then, we propose three properties that a good C + 1 classifier should have: separability, compactness and background margin. Based on these, we define a uniform general C + 1 loss, composed of three parts, driving the C + 1 classifier to satisfy those properties. Finally, we instantiate a C + 1 loss and apply it to semantic segmentation, human parsing and object detection tasks. The proposed approach shows its superiority over the traditional cross-entropy loss.
1 INTRODUCTION
In machine learning, Softmax is one of the most widely used classifiers, especially in CV tasks. During training, the Softmax classifier is often supervised by cross-entropy loss, which treats every class the same. However, there is a type of problem present throughout the classification area for which treating each class the same is unreasonable. We call this type of problem C + 1 classification. C + 1 classification means classifying samples from C semantically deterministic classes and the (C + 1)th semantically undeterministic class. We can semantically and uniquely define each of the C classes by its inherent characteristics, so we say they are semantically deterministic. Generally, we are interested in the C classes, so we call them the C CoIs, and a single one a class of interest (CoI), hereafter. The (C + 1)th class includes all other stuff beyond the C classes; because it lacks uniform inherent characteristics and cannot be described uniquely in semantics, we say it is semantically undeterministic. In most cases, we regard things belonging to the (C + 1)th class as background, and we will call it the background class hereinafter. Based on the above description, we consider it reasonable to treat the C CoIs and the background class differently during training. On one hand, for the C CoIs, it is reasonable to drive a C + 1 classifier to learn a compact and independent representation space for each of them because of their inherent characteristics; the classifier can then embed samples from each CoI into its own representation space. On the other hand, it is also reasonable to drive the C + 1 classifier to map any sample from the background class somewhere in feature space far away from all representation spaces of the C classes, considering that samples from the background class do not share the inherent characteristics of any CoI. For example, in Figure 1, if we are interested in cats, we can recognize a cat in an image at first glance using the subconscious knowledge that uniquely defines a cat. Conversely, we can also recognize a not-cat instantly because it doesn't have any inherent characteristics of a cat.
According to the inherent characteristics of C + 1 classification, we argue that a C + 1 classifier is preferable if it has a compact and independent representation space for each of the C CoIs in addition to separability for all classes. This guarantees that it behaves well when encountering a sample whose style differs from the training samples of the corresponding CoI. Furthermore, it is even better if there is a large enough margin between the background class and the C CoIs. This makes the classifier more robust and generalizable when encountering a sample from any new class belonging to the broad background class, especially one that never appears in the training set. Based on these considerations, we identify three properties that a good C + 1 classifier should have: separability, compactness and background margin. Based on the three properties, we define a uniform general C + 1 loss with three parts corresponding to the three properties, driving the C + 1 classifier to satisfy them. Finally, since semantic segmentation and object detection are two of the most typical C + 1 classification problems and are widely used in AI systems, we instantiate a C + 1 loss and apply it to semantic segmentation and object detection tasks, demonstrating its superiority over the traditional cross-entropy loss. Specifically, in semantic segmentation, Softmax is widely used to classify each pixel of an image into one of a predefined set of class labels. Typically the predefined class set includes C CoIs and a background class. For instance, PASCAL VOC 2012 segmentation Everingham et al. (2010) contains 20 CoIs and a background class; the background class contains all other stuff. In object detection, Softmax is usually used to classify proposal boxes of an image into one of a predefined class set. For example, MS COCO Lin et al. (2014) contains 80 CoIs; a detector trained on it should classify every proposal box as one of the 80 CoIs or as background. In these two typical tasks, the C + 1 classifiers need to recognize all the CoIs and classify a variety of other things as background. It is difficult for cross-entropy loss, which treats all samples alike during training, to drive the classifiers to learn well. This paper contains three contributions, summarized as follows:
1. We define the C + 1 classification problem present throughout the classification area.
2. We propose three properties that a good C + 1 classifier should have, and define a uniform C + 1 loss with three parts driving the classifier to satisfy these properties.
3. We instantiate a C + 1 loss consisting of three terms, and apply it to semantic segmentation and object detection tasks, demonstrating its superiority over the popular cross-entropy loss.
2 RELATED WORK
2.1 SOFTMAX
Softmax is one of the most widely used classifiers for a variety of pattern recognition tasks. Nowadays there are plenty of variants of Softmax, such as L2-Softmax Ranjan et al. (2017), Large-margin Softmax Liu et al. (2017), Angular Softmax Liu et al. (2016), Normface Wang et al. (2017), AM-Softmax Wang et al. (2018a), CosFace Wang et al. (2018b) and ArcFace Deng et al. (2019). Large-margin Softmax was the first attempt to add a parameter m to the original Softmax to control the margin; the larger m is, the larger the decision margin between the classes.
Angular Softmax, also known as SphereFace, improves Large-margin Softmax with additional constraints on W and b, introducing the hypersphere manifold, which makes the features suitable for the open-set face recognition problem. L2-Softmax and Normface share similar ideas: L2-Softmax normalizes only the features, while Normface normalizes both the classifier weights and the features and applies a scale parameter afterwards. The normalizing and scaling steps focus learning on optimizing the angles among the classes, making the features not only separable but also discriminative. AM-Softmax (additive margin Softmax loss) and CosFace are inspired by SphereFace, moving the parameter m that controls the margin from angular space to cosine space by addition, which also makes the implementation easier. ArcFace (additive angular margin loss) moves the parameter m from scaling to addition to expand the optimization boundary. To sum up, these Softmax variants improve Softmax in the following respects: normalization of weights or features, margin in angular or cosine space, and the setting of the margin m. They use the parameter m in different ways, resulting in different decision boundaries. However, like Softmax, they treat all classes equally during training, whereas we propose to give special care to the background class. So Softmax and its variants are not the best choice for the C + 1 classification problem.
2.2 METRIC LEARNING
Metric learning aims to maximize inter-class variation and minimize intra-class variation. Contrastive loss Chopra et al. (2005); Hadsell et al. (2006); Sun et al. (2014) drives the distances between positive pairs close to 0, and the distances between negative pairs to fall within an absolute range. Triplet loss and its variants Weinberger & Saul (2009); Hoffer & Ailon (2015); Wang et al. (2014); Schroff et al. (2015); Ding et al. (2015); Cheng et al. (2016); Oh Song et al. (2016); Sohn (2016) drive the relative distances between positive and negative pairs below a preset threshold. Center loss Wen et al. (2016) drives the model to learn a center for the features of each class and penalizes the distances between features and their corresponding class center. Metric learning also treats each class equally, just from a metric perspective. These methods cannot account for the particularity of the background class and the inherent characteristics of every semantic class. In addition, it is not appropriate to learn a center for the background class, because it has no deterministic and unique definition.
2.3 OPEN-SET RECOGNITION
Open-set recognition Bendale & Boult (2016); Ge et al. (2017); Shu et al. (2017); Neal et al. (2018); Liu et al. (2019); Yoshihashi et al. (2019); Oza & Patel (2019); Chen et al. (2019a); Qian et al. (2019); Yu & Tao (2019); Sun et al. (2020) is the setting closest to our proposed C + 1 classification; there, no training samples exist for the background class, yet the classifier needs to detect new classes at inference time. C + 1 classification deals with another type of problem, wherein there are training samples both for the C CoIs and for the background class. In practice, it is impossible to include all semantic classes other than the CoIs in the background-class training data: many samples of new semantic classes are likely to be encountered and should be identified as background at inference time. So we could call C + 1 classification semi-open-set recognition.
3 METHOD
3.1 PROBLEM DEFINITION
Firstly, we define some terminologies as follows.
Semantic Class: a set of samples which can be described uniquely and deterministically. CoIs: a set of semantic classes that we are interested in and need to recognize. Background Class: all classes beyond the C CoIs, including those semantic classes we are not interested in and all other stuff. C + 1 Classification Problem: categorize each item of a set into one of the C + 1 classes. Therein, the C + 1 classes comprise the C CoIs and the (C + 1)th class of no interest, also called the background class. The C CoIs are deterministic and unique, but the background class is undeterministic and includes all stuff beyond the C CoIs. The training set comprises many samples from each of the C CoIs and diverse samples from the background class. Being semantically deterministic means we can give a deterministic and unique definition for each of the C CoIs according to its inherent characteristics. Being undeterministic means the background class has no deterministic and unique definition, because it lacks uniform inherent characteristics.
3.2 C + 1 LOSS
For each of the C CoIs, the C + 1 classifier should extract the inherent characteristics and differentiate the background class from it. To achieve this, we consider that a C + 1 classifier should satisfy the following three key properties.
1. Separability: it should be able to separate all classes.
2. Compactness: the representation space of each of the C CoIs should be compact enough.
3. Background margin: there should be a large enough space, which we call the background margin, between the background class and each of the C CoIs.
Property one guarantees that a C + 1 classifier has basic categorization ability for samples distributed similarly to the training set. Property two ensures that the classifier generalizes well to samples belonging to the C CoIs but with a different distribution. Property three gives the classifier robustness and generalizability for samples of the background class, especially those whose styles are never present in the training set. To drive a C + 1 classifier to satisfy the three key properties, we argue that the C + 1 loss should include at least three parts, L_separability, L_compact and L_background, corresponding to the three properties respectively:

L_{C+1} = L_{separability} + \alpha L_{compact} + \beta L_{background} \quad (1)

Herein, L_separability drives the classifier to discriminate all classes without difference, L_compact drives the classifier to grasp the inherent characteristics and learn a compact representation space for each of the C CoIs, and L_background drives the classifier to differentiate the background class from the C CoIs. In practice, we can adopt cross-entropy loss as an instantiation of L_separability and center loss Wen et al. (2016) for L_compact. As for L_background, we design a novel loss according to property three:

L_{background} = \frac{1}{N} \sum_{i} \operatorname{sign}(y_i = C+1) \, \frac{1}{C} \sum_{k=1}^{C} \left| m_b - d\left(f(x_i), c_k\right) \right|_{+} \quad (2)

Herein, sign(expression) = 1 only when the expression is true, and 0 otherwise; N is the number of background-class samples; m_b is the background margin between the background class and each of the C CoIs; x_i is a sample and y_i is its class label index; c_k is the center of the kth CoI; and f(x) is a mapping function extracting the feature of sample x. f(x) followed by softmax composes the C + 1 classifier. Without loss of generality, f(x) is a deep neural network.
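As an illustration, a minimal PyTorch sketch of this instantiated loss could look as follows. This is our own sketch, not the authors' code: the background label convention (index C, 0-indexed), Euclidean distance for d(·, ·), and all helper names are assumptions, with alpha and beta defaulting to the weights used later in the experiments.

import torch
import torch.nn.functional as F

def c_plus_one_loss(features, logits, labels, centers, m_b, alpha=0.1, beta=1e-4):
    # Sketch of equations (1)-(2): features is (N, d), logits is (N, C+1),
    # centers is (C, d); label index C is assumed to mark the background class.
    C = centers.size(0)
    l_sep = F.cross_entropy(logits, labels)              # separability term
    fg = labels < C                                      # CoI samples
    l_compact = features.new_zeros(())
    if fg.any():                                         # center loss on CoIs
        l_compact = ((features[fg] - centers[labels[fg]]) ** 2).sum(1).mean()
    l_bg = features.new_zeros(())
    if (~fg).any():                                      # background margin, eq. (2)
        dists = torch.cdist(features[~fg], centers)      # (N_bg, C) distances
        l_bg = F.relu(m_b - dists).mean()                # hinge |m_b - d|_+
    return l_sep + alpha * l_compact + beta * l_bg

def update_centers(centers, features, labels, momentum=0.9):
    # Moving-average center update on L2-normalized CoI features (one of the
    # center representations described next); background labels are skipped.
    feats = F.normalize(features.detach(), dim=1)
    for k in range(centers.size(0)):
        mask = labels == k
        if mask.any():
            centers[k] = momentum * centers[k] + (1 - momentum) * feats[mask].mean(0)
    return centers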
As centers for the center loss and the instantiated L_background, we define three types of center representation.
1. Represent each center of the C CoIs as a learnable weight vector.
2. Represent each center of the C CoIs as a moving average of the sample features of the corresponding class.
3. Represent each center of the C CoIs as the classifier weight vector of each class from the logit FC layer.
The third representation means that we share the center vector parameters with the softmax layer of the C CoIs and ignore the weight vector of the background class. We define a center for every CoI, but not for the background class: every CoI has a set of inherent characteristics which uniquely and deterministically define it semantically, whereas there are no unique and deterministic inherent characteristics for the background class. We then calculate the loss term L_compact based on the distance between every sample of each CoI and its center, and the loss term L_background based on the distance between every sample of the background class and the center of each CoI.
3.3 APPLICATION
The C + 1 classifier described in the previous section can be applied to many tasks, including semantic segmentation, object detection, human pose estimation and any other classification problem which includes a background class. In semantic segmentation, we label each foreground pixel with a semantic label and all other pixels with the background label. In object detection, we classify the area of an object of interest as one of the CoIs, such as pedestrian, car and so on, and all other areas as background. In human pose estimation, we label the position of each human keypoint with the proper semantic label and all other positions with the background label. More generally, if a recognition problem requires recognizing a certain number of semantically deterministic classes plus all other stuff, the C + 1 classifier also applies. In this paper, we use CV tasks as the experimental field because they are the most common tasks in AI systems. Besides the application scenarios presented above, the classifier can also be applied to many other tasks, such as attribute recognition, web text classification, speech recognition and so on.
3.4 DISCUSSIONS
We think the C + 1 formulation is rational because its prerequisites hold. Because every CoI has a semantically deterministic definition based on a set of inherent characteristics, it is rational and achievable to embed all samples of every CoI into an independent hypersphere. For example, the cat in Figure 1 has a particular type of fur and an innate shape and contour, so we can define it uniquely and recognize it at first glance. As for the background class, it has no deterministic definition, so there are no definite and uniform features for it. It is therefore not rational to embed all samples of the background class into one hypersphere, but it is reasonable to map them to the space outside all hyperspheres of the C CoIs, because the C CoIs are separable from the background class based on the uniqueness of every CoI.
3.5 THEORETICAL ANALYSIS
After training, if the mapping function embeds all samples of the C CoIs into their own hyperspheres, it is almost impossible to map a novel sample belonging to the background class into any hypersphere of the C CoIs, even though the classifier has no knowledge of the novel sample.
Assume that a novel sample belonging to the background class is brand-new and that no samples of similar style are present in the training set; then we can treat it as a random sample from nature. From the perspective of statistics, the probability of recognizing it as the background class is close to 1. Formally, we assume that the full d-dimensional feature representation space is a ball of radius R_L, and that the sample features of each CoI are distributed inside a small ball of radius R. Then the probability of mapping the random sample to the space outside the C CoI balls is

p(x_{random}) = 1 - \frac{C \cdot R^d}{R_L^d} \quad (3)

Therein, R \ll R_L is a reasonable assumption, C is a constant (the number of CoIs), and d is the dimension of the feature space, usually very large, such as hundreds or even thousands. C \cdot R^d is a proxy for the union of the volumes of all CoI balls, and R_L^d for the full feature space, so C \cdot R^d / R_L^d is the ratio between the union of the CoI ball volumes and the full feature space, which represents the probability that the mapping function embeds a random background sample into any one of the C CoI balls. In practice, because the feature space has high dimension, i.e. d is a large integer such as 1024, p(x_random) is close to 1; for instance, with C = 80, R/R_L = 0.9 and d = 1024, C (R/R_L)^d ≈ 80 × 0.9^1024 ≈ 10^{-45}, so p(x_random) ≈ 1.
4 EXPERIMENT
We comprehensively evaluate the effectiveness of the C + 1 classifier on semantic segmentation, and then transfer it to object detection with some minor adjustments. During inference we use the output of the softmax layer for classification.
4.1 SETTINGS
Semantic Segmentation. Semantic segmentation aims to label each pixel of an image with a semantic class label or the background label. We evaluate our method on two popular semantic segmentation datasets, PASCAL VOC and PASCAL Context, and a human parsing dataset, LIP. To make a fair comparison, for PASCAL VOC 2012 and PASCAL Context we adopt MMSegmentation Contributors (2020) as a unified framework for the experiments on semantic segmentation. PASCAL VOC 2012 contains 20 foreground object classes and one background class. The original dataset contains 1,464 (train), 1,449 (val), and 1,456 (test) pixel-level annotated images. Following the settings in Chen et al. (2018b), we use the dataset augmented by the extra annotations provided by Everingham et al. (2015), which contains 10,582 (trainaug) training images. Following the setting in MMSegmentation Contributors (2020), we resize the images to 512×512 and the output stride is 8. We adopt SGD as the optimizer and the "poly" policy as the learning rate schedule. In addition, we set the initial learning rate to 0.01 and the weight decay to 0.0005. The batch size is 16 and the number of iterations is 20K. We evaluate the performance of our method and other methods using a single scale and without flipping. PASCAL Context Mottaghi et al. (2014) contains 459 labeled categories across 10,103 images, of which 4,998 are used for training and 5,105 for validation. The most widely adopted setting is to use the 59 most frequent categories as the semantic object classes and all remaining categories as background. Following the setting in MMSegmentation, we resize the images to 480 × 480. The initial learning rate is set to 0.004 and the weight decay to 0.0001. The batch size is 16 and the number of iterations is 40K. We evaluate the performance using a single scale and without flipping. LIP Gong et al.
(2017) is a human part segmentation dataset with 50,462 images in total, including 30,462 for training, 10,000 for validation and 10,000 for testing. It contains 19 semantic classes and 1 background class. We resize the images to 473 × 473. The initial learning rate is set to 0.0028 and the weight decay to 0.0005. The batch size is 16 and the number of iterations is 110K. We evaluate the performance using a single scale and flipping.
4.2 SEMANTIC SEGMENTATION
4.2.1 ABLATION STUDY
For the ablation study, we use VOC+Aug as the training set. All images are resized to 512×512 as input. The number of iterations is 20K. The initial learning rate is 0.01, decayed by the "poly" policy with power 0.9. The batch size is set to 16. We use the same setting for all ablation studies unless specified otherwise. First, we validate the effectiveness of the C + 1 classifier on PASCAL VOC with DeepLabV3+ Chen et al. (2018a) and ResNet50 He et al. (2016) as the backbone. For the C CoIs, we use moving averages of L2-normalized features as centers. We set the weights of L_compact and L_background to 0.1 and 0.0001 respectively whenever each is used. From the results in Table 1, we observe that L_compact and L_background can both boost the performance significantly. When L_compact is used with L_separability, mIoU improves by 0.32; when L_background is added, we gain a further 1.10, so the C + 1 loss in total improves mIoU by 1.42. For a semantic segmentation task, this is a significant improvement. Next, we validate the effect of the three different center representations on performance. In this group of experiments, we set the weights of L_compact and L_background to 0.1 and 0.0001 respectively, with the same hyper-parameters as above. From Table 2, we find that the moving average achieves the best performance. We then analyze the effect of the weights of L_compact and L_background. Since the moving average is the best center representation according to the above experiments, we adopt it in this and the following experiments. First, we study the effect of the weight of L_compact while setting the weight of L_background to 0.0001; from Table 3 we observe that 0.1 is a reasonable weight for L_compact. Then we test the effect of the weight of L_background while setting the weight of L_compact to 0.1; from Table 4 we find that 0.0001 is a reasonable weight for L_background. We conclude that setting the weights of L_compact and L_background to 0.1 and 0.0001 respectively is a proper choice. Finally, we list the per-class performance on PASCAL VOC with and without the C + 1 loss in Table 5. Our method achieves superior performance over the baseline on 13 classes, highlighted in boldface.
4.2.2 BACKBONE
To prove the generality of the C + 1 classifier, we experiment on PASCAL VOC with DeepLabV3+ and ResNet101 as the backbone, as displayed in Table 6. Considering the larger capacity of ResNet101, we set the number of iterations to 40K and keep the other hyper-parameters the same as with the ResNet50 backbone. We observe that the C + 1 classifier is again better than the baseline. The moving average is used as the center representation, and the weights of L_compact and L_background are set to 0.01 and 0.0001 respectively.
4.2.3 COMPARISON WITH SOTA
To further prove the generality of the C + 1 classifier, we experiment on PASCAL VOC with other SOTA semantic segmentation algorithms, i.e. HRNet Sun et al. (2019) and OCRNet Yuan et al. (2020).
Both of them take HRNetW48 as the backbone. We set the hyper-parameters, including center representation and iteration number, the same as for DeepLabV3+ with the ResNet101 backbone. For OCRNet+ours, the weights of L_compact and L_background are set to 0.1 and 0.0001 respectively, and for HRNet+FCN+ours to 0.01 and 0.0001 respectively. The results in Table 7 show that our C + 1 classifier is also applicable to other semantic segmentation algorithms. In particular, on OCRNet our method improves the baseline by 0.99 mIoU.
4.2.4 EXPERIMENTS ON PASCAL CONTEXT
To prove the generalization ability of our classifier to other scenarios, we also run DeepLabV3+ with a ResNet50 backbone on another semantic segmentation dataset, PASCAL Context. We use the moving average as the center representation. All images are resized to 480×480, the number of iterations is 40K, the initial learning rate is set to 0.004, and the batch size is 16. From Table 8, we observe that our classifier also improves performance on PASCAL Context.
4.3 HUMAN PARSING
For LIP, all images are resized to 473×473. The initial learning rate and weight decay are set to 0.0028 and 0.0005 respectively. The batch size is 16 and the number of iterations is 110K. Experimental results are shown in Table 9. There are also other datasets, e.g. Cityscapes Cordts et al. (2016), KITTI Geiger et al. (2013) and ADE20K Zhou et al. (2017; 2019). However, these datasets have almost no background annotations, so they cannot be cast as C + 1 classification problems; thus we do not experiment on them.
4.4 OBJECT DETECTION
In this section, we migrate our approach to the object detection task. Existing methods for object detection can be divided into anchor-based and anchor-free according to whether anchors are needed. The anchor-based methods differ considerably from the segmentation task in the classification head: semantic segmentation classifies pixels, so the targets to be classified correspond one-to-one to feature map positions, while anchor-based methods classify anchor boxes, of which there are multiple per feature map position. We therefore verify our approach on the FCOS Tian et al. (2019) model, which is the object detection method most similar to an FCN-based segmentation network. To make a fair comparison, we adopt MMDetection Chen et al. (2019b) as a unified framework for all object detection experiments. We use COCO2017 as the evaluation dataset; it contains 80 annotated categories, with 118K training images, 5K validation images and 20K testing images. The standard COCO-style evaluation is adopted. For a fair comparison, we use the public MMDetection platform with the provided training setup and 2x learning rate schedule. The batch size is set to 16. The experimental results in Table 10 show that our method improves the detection performance on small objects.
5 CONCLUSION
In this paper, we first define the C + 1 classification problem. Then we propose a uniform abstract C + 1 loss for training the C + 1 classifier. Furthermore, we design an instantiated C + 1 loss and demonstrate its superiority over the popular cross-entropy loss on several CV tasks, including semantic segmentation and object detection. In the future, we will explore more instantiations of the C + 1 loss and test them on more problems, demonstrating its generalizability to common C + 1 classification.
A FEATURE VISUALIZATION
We visualize the features of 10 foreground classes and the background class in PASCAL VOC, as displayed in Figure 2. The features are extracted by the DeepLabV3+ model trained with our method, taking each pixel of the feature map before the classification layer as a feature vector. With our method, the feature space of every CoI becomes more compact and moves further away from the background class; that is, there is a large enough margin between the background class and every CoI.
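The paper does not state how Figure 2 was produced; as a rough sketch, such a plot could be obtained by projecting sampled per-pixel features to 2D, for example with t-SNE (our assumption; all names below are hypothetical):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize_pixel_features(feats, labels, n_per_class=200, seed=0):
    # feats: (N, d) per-pixel feature vectors taken before the classification
    # layer; labels: (N,) class indices, with the background as its own class.
    rng = np.random.default_rng(seed)
    keep = np.concatenate([
        rng.choice(np.where(labels == c)[0],
                   size=min(n_per_class, int((labels == c).sum())),
                   replace=False)
        for c in np.unique(labels)
    ])
    emb = TSNE(n_components=2, random_state=seed).fit_transform(feats[keep])
    plt.scatter(emb[:, 0], emb[:, 1], c=labels[keep], s=2, cmap="tab20")
    plt.savefig("feature_tsne.png", dpi=200)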
1. What is the main contribution of the paper, and how does it differ from previous works?
2. What are the strengths and weaknesses of the proposed C+1 loss function?
3. How effective is the C+1 loss function in various computer vision tasks, and how does it compare to other loss functions?
4. Are there any concerns regarding the naming and explanation of the "basic discriminability" term?
5. Does the paper provide sufficient theoretical backing or insights for the proposed properties of C+1 loss?
Summary Of The Paper Review
Summary Of The Paper This work is motivated by the different nature of the "other"/background class in many vision tasks such as object detection and semantic segmentation. It introduces the C+1 loss, which consists of 3 individual loss terms that focus on basic discriminability, intra-class compactness, and background margin. The basic discriminability loss is the conventional classification loss for C+1 classes, intra-class compactness uses center loss, and the background margin loss is a margin loss acting on the center representations of all C classes and the features of background samples. Review Strengths The introduced C+1 loss is widely applicable to many kinds of tasks that require classification. The experiments are done extensively on multiple tasks to demonstrate the effectiveness of the C+1 loss: object detection, semantic segmentation, human parsing. Weaknesses The C+1 loss is a simple and straightforward combination of existing loss functions already explored in previous papers, e.g. Wen et al. (2016). The background margin loss is almost identical to the conventional margin ranking loss. If you take things that are working really well and combine them in a simple way, it is likely that the combination will work better than any of the individual things. The naming of "basic discriminability" is arbitrary and not supported by anything. Why does it have to be "basic"? When the paper mentions it uses center loss for L_compact, it does not cite any papers about center loss. The proposed properties of the C+1 loss are not backed by any theoretical foundations or insights. Except for DeepLabV3 on Pascal VOC, the performance gains are not so significant (most are less than 1%) in the evaluated tasks.
ICLR
Title C+1 Loss: Learn to Classify C Classes of Interest and the Background Class Differentially Abstract There is one kind of problem all around the classification area, where we want to classify C + 1 classes of samples, including C semantically deterministic classes which we call Classes of Interest (CoIs) and the (C + 1) semantically undeterministic class which we call background class. In spite of most classifier use softmax-based cross-entropy loss to supervise the training process without differentiating the background class from the CoIs, it is unreasonable as each of the CoIs has its inherent characteristics, but the background class dosen’t. We figure out that the background class should be treated differently from the CoIs during training. Motivated by this, firstly we define the C + 1 classification problem. Then, we propose three properties that a good C + 1 classifier should have: separability, compactness and background margin. Based on these we define a uniform general C +1 loss, composed of three parts, driving the C +1 classifier to satisfy those properties. Finally, we instantialize a C + 1 loss and practice it in semantic segmentation, human parsing and object detection tasks. The proposed approach shows its superiority over the traditional cross-entropy loss. 1 INTRODUCTION In machine learning, Softmax is one of the most widely used classifiers for classification, especially in CV tasks. During training, the Softmax classifier is often supervised by cross-entropy loss which treats each class without difference. However, there is a type of problem present all around the classification area, for which it is unreasonable to treat each class the same. We called this type of problem as C+1 classification. C+1 classification means classifying samples from C semantically deterministic classes and the (C + 1)th semantically underterministic class. We can semantically uniquely define each of the C classes by its inherent characteristics, so we can say they are semantically deterministic. Generally, we are interested in the C classes, so we call them C CoIs and one of them as a class of interest (CoI) hereafter. The (C +1)th class includes all the other stuff beyond C classes. Because it doesn’t have uniform inherent characteristics and can’t be described uniquely in semantics. That is why we say it is semantically undeterministic. In most cases, we regard things belonging to the (C + 1)th class as background, and we will call it background class hereinafter. Based on the above description, we consider that it is reasonable to treat the C CoIs and the background class differentially during training. On one hand, for the C CoIs, it’s reasonable to drive a C + 1 classifier to learn a compact and independent representation space for each of them because of their inherent characteristics. Then the C + 1 classifier can embed samples from each CoI into its own representation space. On the other hand, it’s also reasonable to drive the C + 1 classifier to map any samples from the background class into somewhere in feature space far away from all representation spaces of the C classes, considering that samples from background class doesn’t have any inherent characteristics different from C CoIs. For example in figure 1, if we are interested in cat, we can recognize a cat from an image at the first glance by the knowledge in the subconscious which uniquely defines cat. On the contrary, we can also recognize a not-cat instantly because it doesn’t have any inherent characteristics of cat. 
According to the inherent characteristics of C+1 classification, we figure out that a C+1 classifier will be preferable if it has a compact and independent representation space for each of the C CoIs in addition to its separability for all classes. This can guarantee it behaves well while encountering a sample which have different styles from the samples of corresponding CoI in the training set. Furthermore, it will be much better if there is large enough margin between the background class and the C CoIs. This will make the classifier more robust and generalizable while encountering a sample from any new classes belonging to the super background class, especially those that never appear in the training set. Above on these, we conclude three properties which a good C + 1 classifier should have—separability, compactness and background margin. Based on the three properties, we define a uniform general C + 1 loss which includes three parts corresponding to the three properties, driving the C+1 classifier to satisfy those properties. At last, considering semantic segmentation and object detection are two of the most typical C+1 classification problems, and are widely used in AI systems, we instantialize a C + 1 loss and practice it in semantic segmentation and object detection tasks, proving its superiority over the traditional cross-entropy loss. Specifically, in semantic segmentation, Softmax is widely used to classify each pixel of an image to one label of a predefined class set. Typically the predefined class set includes C CoIs and a background class. For instance, PASCAL VOC 2012 segmentation Everingham et al. (2010) contains 20 CoIs and a background class. The background class contains all other stuff. In object detection, Softmax is usually used to classify proposal boxes of an image to one of a predefined class set. For example, MS COCO Lin et al. (2014) contains 80 CoIs. A detector trained on it should classify all proposal boxes as one of the 80 CoIs or as background class. In these two typical tasks, the C + 1 classifiers need to recognize all the CoIs, and classify a variety of other things as background. It’s difficult for cross-entropy loss to drive the classifiers to learn well, which treats all samples without difference during training. This paper contains three contributions summarized as follows: 1.We define the C + 1 classification problem present all around the classification area. 2.We propose three properties that a good C + 1 classifier should have, and define a uniform C + 1 loss, which includes three parts driving the classifier to satisfy these properties. 3.We instantialize a C + 1 loss consisting of three terms, and practice it on semantic segmentation and object detection tasks, proving its superiority over the popular cross-entropy loss. 2 RELATED WORK 2.1 SOFTMAX Softmax is one of the most widely used classifier for a variety of pattern recognition tasks. Nowadays there are a plenty of variants for Softmax, such as L2-Softmax Ranjan et al. (2017), Largemargin Softmax Liu et al. (2017), Angular Softmax Liu et al. (2016), Normface Wang et al. (2017), AM-Softmax Wang et al. (2018a), CosFace Wang et al. (2018b) and ArcFace Deng et al. (2019). Large-margin Softmax is the first attempt to add parameter m into the original Softmax to control the margin and the larger m is, the larger the decision margin between the classes. 
Angular Softmax also known as SphereFace is an improvement to Large-margin Softmax with additional constraints on W and b, introducing the hypersphere manifold which makes the features suitable for open-set FR problem. L2-Softmax and Normface share similar ideas where L2-Softmax normalizes only the features and Normface normalizes both classifier weights and features and applys a scale parameter after that. The normaling and scaling steps push the learning progress focusing on optimizing angles among the classes, making the features not only separable but also discriminable. AM-Softmax(additive margin Softmax loss) and CosFace’s works are inspired from SphereFace by moving the parameter m that controls the margin from angular space to cosine space by addition. This also makes the implementation easier. ArcFace(additive angular margin loss) moves the parameter m from scaling to addition to expand the optimization boundary. To sum up, thse variants of Softmax improve Softmax from these aspects: normalization of weights or features, margin in angular space or cosine space, setting of margin m. They use the parameter m in different ways resulting in different decision boundary. However, they treat all classes equally during training as Softmax. And we propose to give some special care for the background class. So Softmax and its variants are not the best choice for C + 1 classification problem. 2.2 METRIC LEARNING Metric learning aims to maximize inter-class variation and minimize the intra-class variations. Constrastive loss Chopra et al. (2005); Hadsell et al. (2006); Sun et al. (2014) drives the distances between positive pairs close to 0, and the distances between negative pairs to fall within an absolute range. Triplet loss and its variants Weinberger & Saul (2009); Hoffer & Ailon (2015); Wang et al. (2014); Schroff et al. (2015); Ding et al. (2015); Cheng et al. (2016); Oh Song et al. (2016); Sohn (2016) drive the relative distances between positive pairs and negative pairs to fall lower than a preset threshold. Center loss Wen et al. (2016) drives model to learn a center for features of each class and penalizes the distances between features and their corresponding class center. Metric learning also treats each class equally, just from a metric perspective. They could not take care of the particularity of background class and the inherent characteristics of every semantic class. In addition, it’s not appropriate to learn a center for background class because it has no deterministic and unique definition. 2.3 OPEN-SET RECOGNITION Open-set recognition Bendale & Boult (2016); Ge et al. (2017); Shu et al. (2017); Neal et al. (2018); Liu et al. (2019); Yoshihashi et al. (2019); Oza & Patel (2019); Chen et al. (2019a); Qian et al. (2019); Yu & Tao (2019); Sun et al. (2020) is the one closest to our proposed C + 1 classification, wherein there are no training samples for the background class. However, the classifier needs to detect new classes while inferring. The C + 1 classification deals with another type of problem, wherein there are both training samples for C CoIs and the background class. In practice, it’s impossible to contain all other semantic classes except the CoIs into the background class. Many samples of new semantic classes are likely yo be encountered and should be identified as background class while inferring. So we could call the C + 1 classification as semi-open-set recognition. 3 METHOD 3.1 PROBLEM DEFINITION Firstly, we define some terminologies as follows. 
Semantic Class: a set of samples which could be described uniquely and deterministically. CoIs: a set of semantic classes that we are interested in and need to be recognized. Background Class: all other classes beyond C CoIs, including those semantic classes we are not interested and all other stuff. C +1 Classification Problem: categorize each item of a set into one of the C +1 classes. Therein, C+1 classes compriseC CoIs and the (C+1)th class of not interested, also called background class. The C CoIs are deterministic and unique, but the background class is underterministic and includes all stuff beyond those belonging to the C CoIs. The training set comprises many samples from each of the C CoIs and diverse samples from the background class. Being semantically deterministic means we can give a deterministic and unique definition for each of the C CoIs according to its inherent characteristics. Being underministic means the background class has no deterministic and unique definition, because it lacks uniform inherent characteristics. 3.2 C + 1 LOSS For each of the C CoIs, the C +1 classifier should extract the inherent characteristics, and differentiate the background class from it. In order to achieve this, we consider that a C+1 classifier should satisfy the following three key properties. 1.Separability: It should be able to separate all classes. 2.Compactness: The representation space of C CoIs should be compact enough. 3.Background margin: There should be large enough space which we call background margin between the background class and each of the C CoIs. The property one guarantees that a C+1 classifier has basic categorization ability for those samples that have similar distribution with the training set. The property two makes sure that the classifier has good generalizability for those samples belonging to the C CoIs but with different distribution. And the property three gives the classifier good robustness and generalizability for samples of the background class, especially those whose styles are never present at the training set. In order to drive a C+1 classifier to learn to satisfy the three key properties, we argue that the C+1 loss should include at least three parts: Lseparability , Lcompact and Lbackground, corresponding to the three properties respectively. LC+1 = Lseparability + αLcompact + βLbackground (1) Herein, Lseparability drives the classifier learn to discriminate all classes without difference, Lcompact drives the classifier to try to grasp the inherent characteristics and to learn a compact representation space for each of the C CoIs, and Lbackground drives the classifier to differentiate the background class from the C CoIs well. In practice, we could adopt cross-entropy loss as an instantialized instance for Lseparability and center loss Wen et al. (2016) for Lcompact. As for Lbackground, we design a novel loss according to the property three as follows. Lbackground = sign (yi = C + 1) 1 N ∑ i 1 C ∑ k=1 |mb − d (f (xi) , ck)|+ (2) Herein, sign (expression) = 1 only when the expression is true, otherwise 0. N is the number of background class sample. mb is the background margin between background class and each of the C CoIs. xi is a sample, and yi is its class label index. ck is the center of kth CoI. f(x) is a mapping function for extracting feature of sample x. f(x) followed by softmax composes the C+1 classifier. Without losing generality, f(x) is a deep neural network. 
As centers for center loss and the instantialized Lbackground, we define three types of center representation. 1.Represent each center of C CoIs as a learnable weight vector. 2.Represent each center of C CoIs as a moving average of sample features of the corresponding class. 3.Represent each center of C CoIs as the classifier weight vector of each class from the logit FC layer. Specifically the third center representation means that we share the center vector parameters with the softmax layer of the C CoIs, and ignore the weight vector of background. We define a center for every CoI, but not the background class. For every CoI, there are a bunch of inherent characteristics which can uniquely and deterministically define it semantically, however there are no unique and deterministic inherent characteristics for the background class. However, we can calculate the loss term Lcompact based on the distance between every sample of each CoI and its center. Furthermore, we can calculate the loss term Lbackground based on the distance between every sample of background class and the center of each CoI. 3.3 APPLICATION The C + 1 classifier described in the previous section can be applied to many tasks, including semantic segmentation, object detection, human pose estimation and any other classification problem which include the background class. In semantic segmentation, we label each foreground pixel with a semantic label, and all other pixels with background label. In object detection, we classify the area of object of interest as one of the CoIs, such as pedestrian, car and so on, and all other areas as background. In human pose estimation, we label the position of each human keypoint with proper semantic label and the other position with the background label. As for other cases, if the recognition problem needs to recognize certain number of semantically deterministic classes and the other stuff, the C + 1 classifier can also apply to it. In this paper, we just use some CV tasks as the experimental field, because they are the most common tasks in AI system. Besides the application scenarios presented above, it can also be applied to many other tasks, such as attribute recognition, web text classification, speech recognition and so on. 3.4 DISCUSSIONS We think C + 1 classifier is rational because it has sufficient prerequisites. Because every CoI has semantically deterministic definition based on a set of inherent characteristics, it’s rational and reachable to embed all samples of every CoI into an independent hypersphere. For example, the cat in the figure 1 has some type of fur, innate shape and contour, so we can define it uniquely and recognize it by the first glance. As for the background class, it has no deterministic definition, so there are no definite and uniform features for it. Then it’s not rational to embed all samples of background class into an hypersphere. But it may be reasonable to map it to the outer space of all hyperspheres of the C CoIs. Because the C CoIs are separable from the background class based on the uniqueness of every CoI. 3.5 THEORETICAL ANALYSIS After training, if the mapping function can embed all samples of the C CoIs into their own hypersphere, it’s almost impossible to map a novel sample belonging to the background class into any hypersphere of the C CoIs, even though the classifier has no any knowledge about the novel sample. 
Assume that a novel sample belonging to the background class is new-brand and there are no similar style of samples present at the training set, then we can treat it as a random sample from the nature. From the perspective of statistics, the probability of recognizing it as the background class is close to 1. Formally, we assume that the full d-dimension feature representation space is a super ball with RL as the radius, and the sample features of each CoI are distributed inside a small super ball with R as the radius. Then the probability of mapping the random sample to the outer space of the C CoI super balls is p (xrandom ) = 1− C ·Rd RdL (3) Therein, R RL is a reasonable assumption, C is a constant (number of CoIs), and d is the dimension of feature space, usually very large, such as hundreds, even thousands. C ·Rd is proxy for union of volumes of all CoI super balls, and RdL for the full feature space. Then C·Rd RdL is the ratio between union of volumes of all CoI super balls and the full full feature space, which represents the probability that the mapping function embeds a random sample of background class into any one of C CoI super balls. In practice, because of the high dimension of feature space, which means d is a large integer, such as 1024, p (xrandom ) is close to 1. 4 EXPERIMENT We comprehensively evaluate the effectiveness of C + 1 classifier on semantic segmentation, and then transfer to object detection with some minor adjustments. During inference we use the output of softmax layer for classification. 4.1 SETTINGS Semantic Segmentation. Semantic segmentation aims to label each pixel of an image with one semantic class label or background label. We evaluate our methods on two popular semantic segmentation dataset: PASCAL VOC and PASCAL Context and a human parsing dataset LIP. To make a fair comparison, for PASCAL VOC 2012 and PASCAL Context we adopt MMSegmentation Contributors (2020) as a unified framework for the experiments on semantic segmentation. PASCAL VOC 2012 contains 20 foreground object classes and one background class. The original dataset contains 1,464 (train), 1,449 (val), and 1,456 (test) pixel-level annotated images. Following the settings in Chen et al. (2018b), we use the augmented dataset by the extra annotations provided by Everingham et al. (2015), which contains 10,582 (trainaug) training images. Following the setting in MMSegmmentation Contributors (2020), we resize the images into 512×512 and the output stride is 8. We adopt “SGD” as the optimizer and “poly” policy as the learning rate schedule. In addition, we set the initial learning rate as 0.01 and weight decay as 0.0005. Furthermore, the batch size is 16 and the number of iterations is 20K. We evaluate the performance of our method and other methods using single scale and without flipping. PASCAL Context Mottaghi et al. (2014) contains 459 labeled categories, including 10,103 images, of which 4,998 are used for training and 5,105 for validation. The most widely adopted setting is to use the most frequent 59 categories as the semantic object classes and all the remaining categories as background. Following the setting in MMSegmmentation, we resize the images to 480 × 480. The initial learning rate is set to 0.004 and weight decay to 0.0001. The batch size is 16 and the number of iterations is 40K. We evaluate the performance using single scale and without flipping. LIP Gong et al. 
4 EXPERIMENT
We comprehensively evaluate the effectiveness of the C + 1 classifier on semantic segmentation and then transfer it to object detection with some minor adjustments. During inference we use the output of the softmax layer for classification.

4.1 SETTINGS
Semantic Segmentation. Semantic segmentation aims to label each pixel of an image with one semantic class label or the background label. We evaluate our methods on two popular semantic segmentation datasets, PASCAL VOC and PASCAL Context, and a human parsing dataset, LIP. To make a fair comparison, for PASCAL VOC 2012 and PASCAL Context we adopt MMSegmentation Contributors (2020) as a unified framework for the experiments on semantic segmentation. PASCAL VOC 2012 contains 20 foreground object classes and one background class. The original dataset contains 1,464 (train), 1,449 (val), and 1,456 (test) pixel-level annotated images. Following the settings in Chen et al. (2018b), we use the dataset augmented with the extra annotations provided by Everingham et al. (2015), which contains 10,582 (trainaug) training images. Following the setting in MMSegmentation Contributors (2020), we resize the images to 512×512 and the output stride is 8. We adopt SGD as the optimizer and the "poly" policy as the learning rate schedule. We set the initial learning rate to 0.01 and the weight decay to 0.0005. The batch size is 16 and the number of iterations is 20K. We evaluate the performance of our method and other methods using a single scale and without flipping. PASCAL Context Mottaghi et al. (2014) contains 459 labeled categories over 10,103 images, of which 4,998 are used for training and 5,105 for validation. The most widely adopted setting is to use the 59 most frequent categories as the semantic object classes and all remaining categories as background. Following the setting in MMSegmentation, we resize the images to 480×480. The initial learning rate is set to 0.004 and the weight decay to 0.0001. The batch size is 16 and the number of iterations is 40K. We evaluate the performance using a single scale and without flipping. LIP Gong et al. (2017) is a human part segmentation dataset with 50,462 images in total, including 30,462 images for training, 10,000 for validation, and 10,000 for testing. It contains 19 semantic classes and 1 background class. We resize the images to 473×473. The initial learning rate is set to 0.0028 and the weight decay to 0.0005. The batch size is 16 and the number of iterations is 110K. We evaluate the performance using a single scale and flipping.

4.2 SEMANTIC SEGMENTATION
4.2.1 ABLATION STUDY
For the ablation study, we use VOC+Aug as the training set. All images are resized to 512×512 as input. The number of iterations is 20K. The initial learning rate is 0.01, decayed by the "poly" policy with power 0.9 (see the sketch below). The batch size is set to 16. We use the same settings for all ablation studies unless specified otherwise.
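The "poly" policy is used throughout but not defined in the paper; its standard form, as implemented in MMSegmentation-style frameworks, decays the learning rate polynomially toward zero over training. A minimal sketch with the ablation settings (the helper name is ours):

```python
# "poly" policy: lr = base_lr * (1 - iter / max_iter) ** power.
def poly_lr(base_lr, it, max_iter, power=0.9):
    return base_lr * (1.0 - it / max_iter) ** power

# Ablation setting: base_lr = 0.01, 20K iterations, power = 0.9.
for it in (0, 10_000, 19_999):
    print(it, round(poly_lr(0.01, it, 20_000), 6))
```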
First, we validate the effectiveness of the C + 1 classifier on PASCAL VOC using DeepLabV3+ Chen et al. (2018a) with ResNet50 He et al. (2016) as the backbone. For the C CoIs, we use moving averages of L2-normalized features as centers. We set the weights of Lcompact and Lbackground to 0.1 and 0.0001, respectively, whenever each of them is used. From the results in Table 1, we observe that Lcompact and Lbackground both boost the performance significantly. When Lcompact is used with Lseparability, mIoU improves by 0.32. Adding Lbackground yields a further improvement of 1.10, and the C + 1 loss in total improves mIoU by 1.42. For a semantic segmentation task, this is a significant improvement. Next, we validate the effect of the three different center representations on performance. In this group of experiments, we set the weights of Lcompact and Lbackground to 0.1 and 0.0001, respectively, and use the same hyper-parameters as above. From the results in Table 2, we find that the moving average achieves the best performance. We then analyze the effect of the weights of Lcompact and Lbackground. Since the moving average is the best center representation, we adopt it in this and all following experiments. First, we vary the weight of Lcompact while fixing the weight of Lbackground at 0.0001; Table 3 shows that 0.1 is a reasonable weight for Lcompact. Then we vary the weight of Lbackground while fixing the weight of Lcompact at 0.1; Table 4 shows that 0.0001 is a reasonable weight for Lbackground. We conclude that setting the weights of Lcompact and Lbackground to 0.1 and 0.0001, respectively, is a proper choice. Finally, we list the per-class performance on PASCAL VOC with and without the C + 1 loss in Table 5. Our method achieves superior performance over the baseline on 13 classes, which are highlighted in boldface.

4.2.2 BACKBONE
To demonstrate the generality of the C + 1 classifier, we experiment on PASCAL VOC with DeepLabV3+ using ResNet101 as the backbone, as displayed in Table 6. Considering the larger capacity of ResNet101, we set the number of iterations to 40K and keep the other hyper-parameters the same as with the ResNet50 backbone. The C + 1 classifier again outperforms the baseline. The moving average is used as the center representation, and the weights of Lcompact and Lbackground are set to 0.01 and 0.0001, respectively.

4.2.3 COMPARISON WITH SOTA
To further demonstrate the generality of the C + 1 classifier, we also experiment on PASCAL VOC with other SOTA semantic segmentation algorithms, i.e., HRNet Sun et al. (2019) and OCRNet Yuan et al. (2020). Both take HRNetW48 as the backbone. We set the hyper-parameters, including the center representation and number of iterations, the same as for DeepLabV3+ with the ResNet101 backbone. For OCRNet+ours, the weights of Lcompact and Lbackground are set to 0.1 and 0.0001, respectively, and for HRNet+FCN+ours to 0.01 and 0.0001, respectively. The results in Table 7 show that our C + 1 classifier is also applicable to other semantic segmentation algorithms; on OCRNet in particular, our method improves the baseline by 0.99 mIoU.

4.2.4 EXPERIMENTS ON PASCAL CONTEXT
To demonstrate the generalization ability of our classifier to other scenarios, we also experiment with DeepLabV3+ using a ResNet50 backbone on another semantic segmentation dataset, PASCAL Context. We use the moving average as the center representation. All images are resized to 480×480, the number of iterations is 40K, the initial learning rate is 0.004, and the batch size is 16. From Table 8, we observe that our classifier also improves the performance on PASCAL Context.

4.3 HUMAN PARSING
For LIP, all images are resized to 473×473. The initial learning rate and weight decay are set to 0.0028 and 0.0005, respectively. The batch size is 16 and the number of iterations is 110K. Experimental results are shown in Table 9. There are also other datasets, e.g., Cityscapes Cordts et al. (2016), KITTI Geiger et al. (2013), and ADE20K Zhou et al. (2017; 2019). However, these datasets have almost no background annotations, so they cannot be cast as C + 1 classification problems, and we therefore do not experiment on them.

4.4 OBJECT DETECTION
In this section, we migrate our approach to the object detection task. Existing methods for object detection can be divided into anchor-based and anchor-free according to whether anchors are needed. The anchor-based methods differ considerably from the segmentation task in the classification head: semantic segmentation classifies pixels, where each target to be classified corresponds one-to-one to a feature map position, whereas anchor-based methods classify anchor boxes, of which there are multiple per feature map position. Therefore, we verify our approach on the FCOS Tian et al. (2019) model, which is the object detection method most similar to FCN-based segmentation networks. To make a fair comparison, we adopt MMDetection Chen et al. (2019b) as a unified framework for all object detection experiments. We use COCO2017 as the evaluation dataset; it contains 80 annotated target classes, with 118K training images, 5K validation images, and 20K testing images. The standard COCO-style evaluation is adopted. For a fair comparison, we use the public MMDetection platform with the provided training setup and the 2x learning rate schedule. The batch size is set to 16. The experimental results in Table 10 show that our method can improve the detection performance on small objects.

5 CONCLUSION
In this paper, we first define the C + 1 classification problem. We then propose a uniform abstract C + 1 loss for training the C + 1 classifier. Furthermore, we design an instantiated C + 1 loss and demonstrate its superiority over the popular cross-entropy loss on several CV tasks, including semantic segmentation and object detection. In the future, we will explore more instantiations of the C + 1 loss and evaluate them on more problems, further establishing its generalizability on common C + 1 classification.
A FEATURE VISUALIZATION
We visualize the features of 10 foreground classes and the background class in PASCAL VOC, as displayed in Figure 2. The features are extracted by the DeepLabV3+ + ours method, taking each pixel of the feature map before the classification layer as a feature vector. With our method, the feature space of every CoI becomes more compact and moves further away from the background class; that is, there is a sufficiently large margin between the background class and every CoI.
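The paper does not specify how Figure 2 is produced; below is a minimal sketch of one common way to generate such a plot, a t-SNE projection of sampled per-pixel features. The function name and the choice of t-SNE are our assumptions.

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_features(feats, labels, C):
    """feats: (N, d) per-pixel feature vectors taken before the
    classification layer; labels: (N,) with C as the background index."""
    emb = TSNE(n_components=2, init="pca").fit_transform(feats)
    for k in range(C + 1):
        m = labels == k
        name = "background" if k == C else f"CoI {k}"
        plt.scatter(emb[m, 0], emb[m, 1], s=2, label=name)
    plt.legend(markerscale=4)
    plt.show()
```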
1. What is the focus of the paper in terms of the problem it addresses? 2. What is the proposed solution to the problem, and how does it differ from existing approaches? 3. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to handle multi-class classification with a general background class? 4. How do the experimental results support or detract from the effectiveness of the proposed approach? 5. Are there any concerns regarding the novelty and impact of the paper's contributions to the field of multiclass classification?
Summary Of The Paper
The paper at hand tackles the problem of multi-class classification in the presence of a general background class. This is illustrated by a specific loss function covering this aspect and demonstrated on a well-known but very old benchmark.

Review
The paper tackles an interesting and fair aspect of multiclass classification. However, neither the idea nor the kind of solution is new. In addition, the experimental results are not promising and show only slightly different results compared to arbitrarily selected baselines. From this point of view, the paper neither makes a novel contribution nor provides thrilling new insights into the tackled problem. In addition, the paper is not written and structured very well. For instance, the mathematical writing and the bibliography need to be seriously checked!
1. What is the focus of the paper regarding semantic segmentation/classification? 2. What is the weakness of the classical softmax classification layer that the authors address? 3. What is the novel loss proposed by the authors, and how does it address the problem? 4. How do the authors evaluate the effectiveness of their proposed approach? 5. What are some limitations of the paper's experiments, and what additional experiments could have been performed to further validate the claims?
Summary Of The Paper
The paper addresses the problem of semantic segmentation/classification in situations where one has C+1 classes at hand, comprising C deterministic classes and a background class. The authors identify the weakness of the classical softmax classification layer in this context and propose a novel loss that can better account for the particularity of the non-deterministic background class. Experiments are performed on VOC-2012, PASCAL Context, LIP, and COCO2017. Quantitative results, an ablation study, and comparisons with related work are reported.

Review
Significance. The problem addressed by the authors is important (the background class is present in most vision benchmarks), but few works (to my knowledge) have attempted to tackle it in a rigorous way.
Novelty. The solution proposed by the authors (decomposing the loss function into three terms to enforce constraints on the last-layer feature space) is original, easy to implement, and has the potential to inspire others. The properties identified by the authors (which translate into these loss terms) are well discussed and intuitive. It might, however, be surprising to keep the basic loss (cross-entropy preceded by a softmax) as such, since it probably acts against the compactness and background-margin properties enforced by the two other terms.
Results. The authors report convincing results on four different datasets and two different tasks. The experiments (ablation study) validate the claims of the introduction. While the quantitative analysis consistently shows improvement with respect to the baseline or the SOTA, I believe that more challenging experiments could have been performed for binary classification tasks (e.g., human detection/classification, road sign detection), for which the background is often very diverse.
Clarity. The paper is fluid, a pleasure to read.
Title C+1 Loss: Learn to Classify C Classes of Interest and the Background Class Differentially Abstract There is one kind of problem all around the classification area, where we want to classify C + 1 classes of samples, including C semantically deterministic classes which we call Classes of Interest (CoIs) and the (C + 1) semantically undeterministic class which we call background class. In spite of most classifier use softmax-based cross-entropy loss to supervise the training process without differentiating the background class from the CoIs, it is unreasonable as each of the CoIs has its inherent characteristics, but the background class dosen’t. We figure out that the background class should be treated differently from the CoIs during training. Motivated by this, firstly we define the C + 1 classification problem. Then, we propose three properties that a good C + 1 classifier should have: separability, compactness and background margin. Based on these we define a uniform general C +1 loss, composed of three parts, driving the C +1 classifier to satisfy those properties. Finally, we instantialize a C + 1 loss and practice it in semantic segmentation, human parsing and object detection tasks. The proposed approach shows its superiority over the traditional cross-entropy loss. 1 INTRODUCTION In machine learning, Softmax is one of the most widely used classifiers for classification, especially in CV tasks. During training, the Softmax classifier is often supervised by cross-entropy loss which treats each class without difference. However, there is a type of problem present all around the classification area, for which it is unreasonable to treat each class the same. We called this type of problem as C+1 classification. C+1 classification means classifying samples from C semantically deterministic classes and the (C + 1)th semantically underterministic class. We can semantically uniquely define each of the C classes by its inherent characteristics, so we can say they are semantically deterministic. Generally, we are interested in the C classes, so we call them C CoIs and one of them as a class of interest (CoI) hereafter. The (C +1)th class includes all the other stuff beyond C classes. Because it doesn’t have uniform inherent characteristics and can’t be described uniquely in semantics. That is why we say it is semantically undeterministic. In most cases, we regard things belonging to the (C + 1)th class as background, and we will call it background class hereinafter. Based on the above description, we consider that it is reasonable to treat the C CoIs and the background class differentially during training. On one hand, for the C CoIs, it’s reasonable to drive a C + 1 classifier to learn a compact and independent representation space for each of them because of their inherent characteristics. Then the C + 1 classifier can embed samples from each CoI into its own representation space. On the other hand, it’s also reasonable to drive the C + 1 classifier to map any samples from the background class into somewhere in feature space far away from all representation spaces of the C classes, considering that samples from background class doesn’t have any inherent characteristics different from C CoIs. For example in figure 1, if we are interested in cat, we can recognize a cat from an image at the first glance by the knowledge in the subconscious which uniquely defines cat. On the contrary, we can also recognize a not-cat instantly because it doesn’t have any inherent characteristics of cat. 
According to the inherent characteristics of C+1 classification, we figure out that a C+1 classifier will be preferable if it has a compact and independent representation space for each of the C CoIs in addition to its separability for all classes. This can guarantee it behaves well while encountering a sample which have different styles from the samples of corresponding CoI in the training set. Furthermore, it will be much better if there is large enough margin between the background class and the C CoIs. This will make the classifier more robust and generalizable while encountering a sample from any new classes belonging to the super background class, especially those that never appear in the training set. Above on these, we conclude three properties which a good C + 1 classifier should have—separability, compactness and background margin. Based on the three properties, we define a uniform general C + 1 loss which includes three parts corresponding to the three properties, driving the C+1 classifier to satisfy those properties. At last, considering semantic segmentation and object detection are two of the most typical C+1 classification problems, and are widely used in AI systems, we instantialize a C + 1 loss and practice it in semantic segmentation and object detection tasks, proving its superiority over the traditional cross-entropy loss. Specifically, in semantic segmentation, Softmax is widely used to classify each pixel of an image to one label of a predefined class set. Typically the predefined class set includes C CoIs and a background class. For instance, PASCAL VOC 2012 segmentation Everingham et al. (2010) contains 20 CoIs and a background class. The background class contains all other stuff. In object detection, Softmax is usually used to classify proposal boxes of an image to one of a predefined class set. For example, MS COCO Lin et al. (2014) contains 80 CoIs. A detector trained on it should classify all proposal boxes as one of the 80 CoIs or as background class. In these two typical tasks, the C + 1 classifiers need to recognize all the CoIs, and classify a variety of other things as background. It’s difficult for cross-entropy loss to drive the classifiers to learn well, which treats all samples without difference during training. This paper contains three contributions summarized as follows: 1.We define the C + 1 classification problem present all around the classification area. 2.We propose three properties that a good C + 1 classifier should have, and define a uniform C + 1 loss, which includes three parts driving the classifier to satisfy these properties. 3.We instantialize a C + 1 loss consisting of three terms, and practice it on semantic segmentation and object detection tasks, proving its superiority over the popular cross-entropy loss. 2 RELATED WORK 2.1 SOFTMAX Softmax is one of the most widely used classifier for a variety of pattern recognition tasks. Nowadays there are a plenty of variants for Softmax, such as L2-Softmax Ranjan et al. (2017), Largemargin Softmax Liu et al. (2017), Angular Softmax Liu et al. (2016), Normface Wang et al. (2017), AM-Softmax Wang et al. (2018a), CosFace Wang et al. (2018b) and ArcFace Deng et al. (2019). Large-margin Softmax is the first attempt to add parameter m into the original Softmax to control the margin and the larger m is, the larger the decision margin between the classes. 
Angular Softmax also known as SphereFace is an improvement to Large-margin Softmax with additional constraints on W and b, introducing the hypersphere manifold which makes the features suitable for open-set FR problem. L2-Softmax and Normface share similar ideas where L2-Softmax normalizes only the features and Normface normalizes both classifier weights and features and applys a scale parameter after that. The normaling and scaling steps push the learning progress focusing on optimizing angles among the classes, making the features not only separable but also discriminable. AM-Softmax(additive margin Softmax loss) and CosFace’s works are inspired from SphereFace by moving the parameter m that controls the margin from angular space to cosine space by addition. This also makes the implementation easier. ArcFace(additive angular margin loss) moves the parameter m from scaling to addition to expand the optimization boundary. To sum up, thse variants of Softmax improve Softmax from these aspects: normalization of weights or features, margin in angular space or cosine space, setting of margin m. They use the parameter m in different ways resulting in different decision boundary. However, they treat all classes equally during training as Softmax. And we propose to give some special care for the background class. So Softmax and its variants are not the best choice for C + 1 classification problem. 2.2 METRIC LEARNING Metric learning aims to maximize inter-class variation and minimize the intra-class variations. Constrastive loss Chopra et al. (2005); Hadsell et al. (2006); Sun et al. (2014) drives the distances between positive pairs close to 0, and the distances between negative pairs to fall within an absolute range. Triplet loss and its variants Weinberger & Saul (2009); Hoffer & Ailon (2015); Wang et al. (2014); Schroff et al. (2015); Ding et al. (2015); Cheng et al. (2016); Oh Song et al. (2016); Sohn (2016) drive the relative distances between positive pairs and negative pairs to fall lower than a preset threshold. Center loss Wen et al. (2016) drives model to learn a center for features of each class and penalizes the distances between features and their corresponding class center. Metric learning also treats each class equally, just from a metric perspective. They could not take care of the particularity of background class and the inherent characteristics of every semantic class. In addition, it’s not appropriate to learn a center for background class because it has no deterministic and unique definition. 2.3 OPEN-SET RECOGNITION Open-set recognition Bendale & Boult (2016); Ge et al. (2017); Shu et al. (2017); Neal et al. (2018); Liu et al. (2019); Yoshihashi et al. (2019); Oza & Patel (2019); Chen et al. (2019a); Qian et al. (2019); Yu & Tao (2019); Sun et al. (2020) is the one closest to our proposed C + 1 classification, wherein there are no training samples for the background class. However, the classifier needs to detect new classes while inferring. The C + 1 classification deals with another type of problem, wherein there are both training samples for C CoIs and the background class. In practice, it’s impossible to contain all other semantic classes except the CoIs into the background class. Many samples of new semantic classes are likely yo be encountered and should be identified as background class while inferring. So we could call the C + 1 classification as semi-open-set recognition. 3 METHOD 3.1 PROBLEM DEFINITION Firstly, we define some terminologies as follows. 
Semantic Class: a set of samples which could be described uniquely and deterministically. CoIs: a set of semantic classes that we are interested in and need to be recognized. Background Class: all other classes beyond C CoIs, including those semantic classes we are not interested and all other stuff. C +1 Classification Problem: categorize each item of a set into one of the C +1 classes. Therein, C+1 classes compriseC CoIs and the (C+1)th class of not interested, also called background class. The C CoIs are deterministic and unique, but the background class is underterministic and includes all stuff beyond those belonging to the C CoIs. The training set comprises many samples from each of the C CoIs and diverse samples from the background class. Being semantically deterministic means we can give a deterministic and unique definition for each of the C CoIs according to its inherent characteristics. Being underministic means the background class has no deterministic and unique definition, because it lacks uniform inherent characteristics. 3.2 C + 1 LOSS For each of the C CoIs, the C +1 classifier should extract the inherent characteristics, and differentiate the background class from it. In order to achieve this, we consider that a C+1 classifier should satisfy the following three key properties. 1.Separability: It should be able to separate all classes. 2.Compactness: The representation space of C CoIs should be compact enough. 3.Background margin: There should be large enough space which we call background margin between the background class and each of the C CoIs. The property one guarantees that a C+1 classifier has basic categorization ability for those samples that have similar distribution with the training set. The property two makes sure that the classifier has good generalizability for those samples belonging to the C CoIs but with different distribution. And the property three gives the classifier good robustness and generalizability for samples of the background class, especially those whose styles are never present at the training set. In order to drive a C+1 classifier to learn to satisfy the three key properties, we argue that the C+1 loss should include at least three parts: Lseparability , Lcompact and Lbackground, corresponding to the three properties respectively. LC+1 = Lseparability + αLcompact + βLbackground (1) Herein, Lseparability drives the classifier learn to discriminate all classes without difference, Lcompact drives the classifier to try to grasp the inherent characteristics and to learn a compact representation space for each of the C CoIs, and Lbackground drives the classifier to differentiate the background class from the C CoIs well. In practice, we could adopt cross-entropy loss as an instantialized instance for Lseparability and center loss Wen et al. (2016) for Lcompact. As for Lbackground, we design a novel loss according to the property three as follows. Lbackground = sign (yi = C + 1) 1 N ∑ i 1 C ∑ k=1 |mb − d (f (xi) , ck)|+ (2) Herein, sign (expression) = 1 only when the expression is true, otherwise 0. N is the number of background class sample. mb is the background margin between background class and each of the C CoIs. xi is a sample, and yi is its class label index. ck is the center of kth CoI. f(x) is a mapping function for extracting feature of sample x. f(x) followed by softmax composes the C+1 classifier. Without losing generality, f(x) is a deep neural network. 
As centers for center loss and the instantialized Lbackground, we define three types of center representation. 1.Represent each center of C CoIs as a learnable weight vector. 2.Represent each center of C CoIs as a moving average of sample features of the corresponding class. 3.Represent each center of C CoIs as the classifier weight vector of each class from the logit FC layer. Specifically the third center representation means that we share the center vector parameters with the softmax layer of the C CoIs, and ignore the weight vector of background. We define a center for every CoI, but not the background class. For every CoI, there are a bunch of inherent characteristics which can uniquely and deterministically define it semantically, however there are no unique and deterministic inherent characteristics for the background class. However, we can calculate the loss term Lcompact based on the distance between every sample of each CoI and its center. Furthermore, we can calculate the loss term Lbackground based on the distance between every sample of background class and the center of each CoI. 3.3 APPLICATION The C + 1 classifier described in the previous section can be applied to many tasks, including semantic segmentation, object detection, human pose estimation and any other classification problem which include the background class. In semantic segmentation, we label each foreground pixel with a semantic label, and all other pixels with background label. In object detection, we classify the area of object of interest as one of the CoIs, such as pedestrian, car and so on, and all other areas as background. In human pose estimation, we label the position of each human keypoint with proper semantic label and the other position with the background label. As for other cases, if the recognition problem needs to recognize certain number of semantically deterministic classes and the other stuff, the C + 1 classifier can also apply to it. In this paper, we just use some CV tasks as the experimental field, because they are the most common tasks in AI system. Besides the application scenarios presented above, it can also be applied to many other tasks, such as attribute recognition, web text classification, speech recognition and so on. 3.4 DISCUSSIONS We think C + 1 classifier is rational because it has sufficient prerequisites. Because every CoI has semantically deterministic definition based on a set of inherent characteristics, it’s rational and reachable to embed all samples of every CoI into an independent hypersphere. For example, the cat in the figure 1 has some type of fur, innate shape and contour, so we can define it uniquely and recognize it by the first glance. As for the background class, it has no deterministic definition, so there are no definite and uniform features for it. Then it’s not rational to embed all samples of background class into an hypersphere. But it may be reasonable to map it to the outer space of all hyperspheres of the C CoIs. Because the C CoIs are separable from the background class based on the uniqueness of every CoI. 3.5 THEORETICAL ANALYSIS After training, if the mapping function can embed all samples of the C CoIs into their own hypersphere, it’s almost impossible to map a novel sample belonging to the background class into any hypersphere of the C CoIs, even though the classifier has no any knowledge about the novel sample. 
Assume that a novel sample belonging to the background class is brand-new and that no samples of a similar style appear in the training set; we can then treat it as a random sample from nature. From the perspective of statistics, the probability of recognizing it as the background class is close to 1. Formally, we assume that the full d-dimensional feature representation space is a ball of radius R_L and that the sample features of each CoI are distributed inside a small ball of radius R. Then the probability of mapping the random sample to the space outside the C CoI balls is

p(x_random) = 1 − C · R^d / R_L^d    (3)

Here, R ≪ R_L is a reasonable assumption, C is a constant (the number of CoIs), and d is the dimension of the feature space, usually very large, such as hundreds or even thousands. C · R^d is a proxy for the union of the volumes of all CoI balls, and R_L^d for the full feature space, so C · R^d / R_L^d is the ratio between the two, representing the probability that the mapping function embeds a random background sample into any one of the C CoI balls. Because the feature space is high-dimensional (d is a large integer such as 1024), p(x_random) is close to 1; as an illustrative calculation, even a loose ratio R/R_L = 0.9 with C = 20 gives C · (R/R_L)^d ≈ 3 × 10^{−46} for d = 1024.

4 EXPERIMENT

We comprehensively evaluate the effectiveness of the C+1 classifier on semantic segmentation, and then transfer it to object detection with some minor adjustments. During inference we use the output of the softmax layer for classification.

4.1 SETTINGS

Semantic Segmentation. Semantic segmentation aims to label each pixel of an image with one semantic class label or the background label. We evaluate our methods on two popular semantic segmentation datasets, PASCAL VOC and PASCAL Context, and a human parsing dataset, LIP. To make a fair comparison, for PASCAL VOC 2012 and PASCAL Context we adopt MMSegmentation (MMSegmentation Contributors, 2020) as a unified framework for the experiments on semantic segmentation. PASCAL VOC 2012 contains 20 foreground object classes and one background class. The original dataset contains 1,464 (train), 1,449 (val), and 1,456 (test) pixel-level annotated images. Following the settings in Chen et al. (2018b), we use the dataset augmented with the extra annotations provided by Everingham et al. (2015), which contains 10,582 (trainaug) training images. Following the setting in MMSegmentation, we resize the images to 512×512 and the output stride is 8. We adopt SGD as the optimizer and the "poly" policy as the learning rate schedule. In addition, we set the initial learning rate to 0.01 and the weight decay to 0.0005. Furthermore, the batch size is 16 and the number of iterations is 20K. We evaluate the performance of our method and other methods using a single scale and without flipping. PASCAL Context (Mottaghi et al., 2014) contains 459 labeled categories over 10,103 images, of which 4,998 are used for training and 5,105 for validation. The most widely adopted setting is to use the 59 most frequent categories as the semantic object classes and all remaining categories as background. Following the setting in MMSegmentation, we resize the images to 480×480. The initial learning rate is set to 0.004 and the weight decay to 0.0001. The batch size is 16 and the number of iterations is 40K. We evaluate the performance using a single scale and without flipping.
LIP (Gong et al., 2017) is a human part segmentation dataset with 50,462 images in total, including 30,462 images for training, 10,000 for validation and 10,000 for testing. It contains 19 semantic classes and 1 background class. We resize the images to 473×473. The initial learning rate is set to 0.0028 and the weight decay to 0.0005. The batch size is 16 and the number of iterations is 110K. We evaluate the performance using a single scale and flipping.

4.2 SEMANTIC SEGMENTATION

4.2.1 ABLATION STUDY

For the ablation study, we use VOC+Aug as the training set. All images are resized to 512×512 as input. The iteration number is 20K. The initial learning rate is 0.01, decayed by the "poly" policy with power 0.9. The batch size is set to 16. We use the same setting for all ablation studies unless specified otherwise. First, we validate the effectiveness of the C+1 classifier on PASCAL VOC with DeepLabV3+ (Chen et al., 2018a) and a ResNet50 (He et al., 2016) backbone. For the C CoIs, we use moving averages of L2-normalized features as centers. We set the weights of L_{compact} and L_{background} to 0.1 and 0.0001 respectively whenever each of them is used. From the results in Table 1, we observe that L_{compact} and L_{background} can both boost the performance significantly. When L_{compact} is used with L_{separability}, mIoU improves by 0.32; adding L_{background} yields a further improvement of 1.10, so the C+1 loss in total improves mIoU by 1.42. For a semantic segmentation task, this is a significant improvement. Next, we validate the effect of the three different center representations on performance. In this group of experiments, we set the weights of L_{compact} and L_{background} to 0.1 and 0.0001 respectively and use the same hyper-parameters as above. From the results in Table 2, the moving average achieves the best performance. We then analyze the effect of the weights of L_{compact} and L_{background}. Since the moving average is the best center representation in the experiments above, we adopt it in this and the following experiments. First, we study the weight of L_{compact} while fixing the weight of L_{background} at 0.0001; Table 3 shows that 0.1 is a reasonable weight for L_{compact}. We then study the weight of L_{background} while fixing the weight of L_{compact} at 0.1; Table 4 shows that 0.0001 is a reasonable weight for L_{background}. We therefore conclude that setting the weights of L_{compact} and L_{background} to 0.1 and 0.0001 respectively is a proper choice. Finally, we list the per-class performance on PASCAL VOC with and without the C+1 loss in Table 5. Our method achieves superior performance over the baseline on 13 classes, which are highlighted in boldface.

4.2.2 BACKBONE

To prove the generality of the C+1 classifier, we experiment on PASCAL VOC with DeepLabV3+ and a ResNet101 backbone, as displayed in Table 6. Considering the larger capacity of ResNet101, we set the number of iterations to 40K and keep the other hyper-parameters the same as with the ResNet50 backbone. We observe that the C+1 classifier again outperforms the baseline. The moving average is used as the center representation, and the weights of L_{compact} and L_{background} are set to 0.01 and 0.0001 respectively.

4.2.3 COMPARISON WITH SOTA

To prove the generality of the C+1 classifier, we further experiment on PASCAL VOC with other SOTA semantic segmentation algorithms, i.e., HRNet (Sun et al., 2019) and OCRNet (Yuan et al., 2020).
Both take HRNetW48 as the backbone. We set the hyper-parameters the same as for DeepLabV3+ with the ResNet101 backbone, including the center representation and the number of iterations. For OCRNet+ours, the weights of L_{compact} and L_{background} are set to 0.1 and 0.0001 respectively, and for HRNet+FCN+ours to 0.01 and 0.0001 respectively. The results in Table 7 show that our C+1 classifier is also applicable to other semantic segmentation algorithms; on OCRNet in particular, our method improves the baseline by 0.99 mIoU.

4.2.4 EXPERIMENTS ON PASCAL CONTEXT

To prove the generalization ability of our classifier to other scenarios, we also run DeepLabV3+ with a ResNet50 backbone on another semantic segmentation dataset, PASCAL Context. We use the moving average as the center representation. All images are resized to 480×480. The iteration number is 40K, the initial learning rate is set to 0.004, and the batch size is 16. From Table 8, we observe that our classifier improves the performance on PASCAL Context as well.

4.3 HUMAN PARSING

For LIP, all images are resized to 473×473. The initial learning rate and weight decay are set to 0.0028 and 0.0005 respectively. The batch size is 16 and the number of iterations is 110K. Experimental results are shown in Table 9. There are also other datasets, e.g., Cityscapes (Cordts et al., 2016), KITTI (Geiger et al., 2013) and ADE20K (Zhou et al., 2017; 2019). However, these datasets have almost no background annotations, so they cannot be cast as a C+1 classification problem, and we therefore do not experiment on them.

4.4 OBJECT DETECTION

In this section, we migrate our approach to the object detection task. Existing methods for object detection can be divided into anchor-based and anchor-free methods according to whether anchors are needed. The anchor-based methods differ considerably from the segmentation task in the classification head: semantic segmentation classifies pixels, so the targets to be classified correspond one-to-one with feature map positions, while anchor-based methods classify anchor boxes, of which there are multiple per feature map position. We therefore verify our approach on the FCOS (Tian et al., 2019) model, which is the object detection method most similar to FCN-based segmentation networks. To make a fair comparison, we adopt MMDetection (Chen et al., 2019b) as a unified framework for all object detection experiments. We use COCO2017 as the evaluation dataset; it contains 80 annotated target categories, with 118K training images, 5K validation images and 20K testing images. The standard COCO-style evaluation is adopted. For a fair comparison, we use the public MMDetection platform with the provided training setup and the 2x learning rate schedule. The batch size is set to 16. The experimental results in Table 10 show that our method improves the detection performance on small objects.

5 CONCLUSION

In this paper, we first define the C+1 classification problem. We then propose a uniform abstract C+1 loss for training the C+1 classifier. Furthermore, we design an instantiated C+1 loss and demonstrate its superiority over the popular cross-entropy loss on several CV tasks, including semantic segmentation and object detection. In the future, we will explore more instantiations of the C+1 loss and experiment with them on more problems, proving its generalizability on common C+1 classification.
A FEATURE VISUALIZATION

We visualize the features of 10 foreground classes and the background class in PASCAL VOC, as displayed in Figure 2. The features are extracted by the DeepLabV3+ + ours method, taking each pixel of the feature map before the classification layer as a feature vector. With our method, the feature space of every CoI becomes more compact and moves further away from the background class; that is, there is a large enough margin between the background class and every CoI.
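As a hedged sketch of how such a figure can be produced (the paper does not specify its projection method; t-SNE is one common choice, and the sampling sizes and variable names below are our assumptions for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def visualize_pixel_features(feat_map, label_map, num_classes=10, per_class=200):
    """feat_map: (D, H, W) features before the classification layer;
    label_map: (H, W) per-pixel labels, with the largest index = background."""
    D = feat_map.shape[0]
    feats = feat_map.reshape(D, -1).T            # (H*W, D) pixel feature vectors
    labels = label_map.reshape(-1)
    # Subsample pixels per class so t-SNE stays tractable.
    keep = []
    for c in list(range(num_classes)) + [labels.max()]:   # CoIs + background
        idx = np.flatnonzero(labels == c)
        if len(idx):
            keep.append(np.random.choice(idx, min(per_class, len(idx)), replace=False))
    keep = np.concatenate(keep)
    emb = TSNE(n_components=2, init="pca").fit_transform(feats[keep])
    plt.scatter(emb[:, 0], emb[:, 1], c=labels[keep], s=2, cmap="tab20")
    plt.savefig("feature_space.png")
```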
1. What is the focus of the paper regarding classification problems? 2. What are the strengths of the proposed approach, particularly in employing additional loss functions? 3. What are the weaknesses of the paper, especially regarding its lack of novelty and comparison with other works? 4. Do you have any concerns about the method's ability to handle C+1 classification problems? 5. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper deals with the C+1 classification problem, which has C foreground classes plus a background class that can consist of other classes not overlapping with the C foreground classes. The paper additionally includes a center loss for the foreground classes and a modified center loss for the background class(es), on top of the standard cross-entropy loss. Results are reported on the PASCAL VOC, PASCAL Context and LIP datasets. Review Strengths: The additional loss functions improve over the original methods that lack these loss terms. Weaknesses: Lack of novelty. The C+1 classification is a straightforward and widely adopted setting, which is the default for the segmentation task, while the paper claims that "We define the C + 1 classification problem ..."; the method itself basically adds two regularization terms based on the center loss, which is taken from existing methods. Given the claim of handling the C+1 classification problem, methods from the OSR literature should be compared. Though an improvement in overall classification performance is shown, no evidence is provided that the representation is more compact and that the margin has been fulfilled. No evidence supports the conclusion of equation (3); the radii of the small and large balls should be demonstrated (e.g., by feature visualization). The writing is unsatisfactory, with grammatical errors and typos even in the abstract. For example, CoI is never mentioned before the short form appears; "In spite of ... use" -> "In spite of ... using". The main text contains more issues...
ICLR
Title Protecting Your NLG Models with Semantic and Robust Watermarks Abstract Natural language generation (NLG) applications have gained great popularity thanks to powerful deep learning techniques and large training corpora. Deployed NLG models may be stolen or used without authorization, and watermarking has become a useful tool to protect Intellectual Property (IP). However, existing watermarking technologies are easily detected or harmful to the applications. In this paper, we propose a semantic and robust watermarking scheme for NLG models that utilizes pair-matched phrases as watermarks for IP protection. The watermarks give NLG models a personal preference for some special phrase combinations: when the key phrase appears behind a specific prefix phrase, the model gives a congenial prediction for the key phrase. We use word-tag n-grams to generate semantic watermarks that are syntactically correct. For the key phrase's prediction, we choose the original model's second-ranked prediction, which does nearly no harm to the task and is also undetectable. Extensive experimental results demonstrate the effectiveness, robustness, and undetectability of the proposed scheme.

1 INTRODUCTION

Deep Learning (DL) has been highly successful in Computer Vision (CV), Natural Language Processing/Generation (NLP/G), and other artificial intelligence fields. Due to the enormous computation and data resources required to produce a DL model, well-trained DL models have come to be treated as the Intellectual Property (IP) of model owners, and watermarking techniques have become one of the most popular approaches to protect DL models from illegitimate plagiarism, unauthorized distribution and reproduction. Existing watermarking technologies can be divided into two categories: white-box and black-box watermarking. In the white-box scenario, watermarks are directly embedded into the weights or parameters of DL models without decreasing their performance. For instance, Uchida et al. (2017) proposed to embed watermarks into DL models by adding a regularization term to the loss function. However, the white-box approach requires the model owner to have full access to the parameters during verification and is not applicable when the target model is available only with black-box access. A more apposite way is black-box watermarking (Adi et al., 2018; Le Merrer et al., 2020), which takes carefully constructed input-output pairs as watermarks. For this approach, the model owner generates watermark datasets that consist of specific watermark samples and the corresponding verification labels. DL models are then trained with the watermark datasets, so the watermark characteristics are transferred from the datasets to the well-trained models. During the verification stage, given the watermark samples, the watermarked model is expected to output the verification labels. Unfortunately, most existing watermark methods are not applicable to NLG tasks due to the huge difference between text and other data. Besides, several challenges arise when designing watermarking schemes for NLG models. First, text data is extremely compact; slight modifications can make the text behave abnormally. It is essential to generate semantic text watermarks that are perceptually related to the training corpus. Second, the watermark should not deteriorate the original task's performance.
However, to embed watermarks successfully into NLG models, the watermark training dataset often has to be considerable in size, which misleads the model's normal prediction. Third, watermarks should be invisible to watermark detection algorithms, but making watermarks invisible by shrinking the watermark dataset or the number of embedding iterations hurts their effectiveness. Balancing the trade-off between invisibility and effectiveness is therefore challenging for NLG watermark generation. In this paper, we propose a semantic and robust watermarking scheme, SCW, for NLG tasks such as neural machine translation and dialog generation, based on widely used transformer model architectures (Vaswani et al., 2017). The SCW is generated from a watermark pattern, SCP, whose construction makes the watermark semantic and robust. The SCP is composed of a prefix phrase and a key phrase; when the prefix phrase appears in front of the key phrase, it leads the NLG model's attention for the key phrase to a similar prediction that is unharmful for the task. We conduct extensive experiments to evaluate the performance of SCW. Experimental results demonstrate that SCW preserves the performance on normal queries. SCW maintains its verifiability after model perturbations such as fine-tuning, transfer learning and model compression. Besides, SCW is also resistant to state-of-the-art backdoor detection algorithms.

2 RELATED WORK

The watermarking technique was originally applied to protect multimedia content (Katzenbeisser & Petitcolas, 2000). Recently, it has been widely used to protect the intellectual property of DL models for model owners. Watermarks for CV tasks. Existing watermarking algorithms for CV tasks can be classified into two scenarios: white-box and black-box. A white-box watermarking scheme (Uchida et al., 2017) uses a parameter regularization term to embed a bit string as the watermark into image classification models. Li et al. (2020) adopt a new loss function based on informed coding, which achieves larger capacity and similar robustness to Uchida et al. (2017). However, these two algorithms cannot defend against ambiguity attacks. To settle this problem, Fan et al. (2019) introduce a novel passport-based ownership verification that preserves inference performance against ambiguity attacks. For the black-box scenario, Adi et al. (2018) construct watermarks from backdoors. To make image classification watermarks more robust, DeepMarks (Chen et al., 2018) embeds watermarks into the probability density function of trainable weights, which is robust to collusion and network transformation attacks. DeepSigns (Darvish Rouhani et al., 2019) gives the first end-to-end IP protection framework, using low-probability regions within the model to gradually embed the owner's watermark during DL training. Le Merrer et al. (2020) propose a zero-bit watermarking algorithm that extracts the watermark remotely using adversarial model examples. For image processing tasks, Zhang et al. (2020) leverage a spatially invisible watermarking mechanism to create a model watermarking framework that protects image processing models. For image generation tasks, Skripniuk et al. (2020) give the first attempt to embed fingerprints into the training data, showing transferability from training data to GAN models. Watermarks for NLG tasks.
Few watermarking works have been done in the NLG domain. One similar piece of research is SpecMark (Chen et al., 2020), which extends DL watermarking to Automatic Speech Recognition (ASR); it identifies the significant frequency components of the model parameters and encodes the owner's watermark in the corresponding spectrum region. SpecMark uses DeepSpeech2 (Amodei et al., 2016), based on a recurrent neural network (RNN), the basic and classic network structure for NLP tasks. SpecMark works in the white-box scenario, which is not suitable when we cannot change the model's parameters and inner structures. A black-box watermark for NLG tasks is therefore necessary.

3 PROBLEM STATEMENT

3.1 SYSTEM AND THREAT MODELS

Consider the training dataset D = {(x, y)}, where x = (x_1, x_2, ..., x_{T_x}) and y = (y_1, y_2, ..., y_{T_y}) are the source and target text sequences (we denote by D_x, D_y the source and target corpora). The goal of NLG tasks (Devlin et al., 2018; Gehring et al., 2017) is to learn an optimal parameter θ* of a statistical model M such that

θ* = argmax_θ ∏_{(x,y)∈D} ∏_{t=1}^{T_y} P_θ(y_t | y_{<t}, x)    (1)

where y_{<t} denotes all tokens before time-step t. At each time-step t, M receives the whole source sequence x and the partial target sequence y_{<t}, and is trained to predict the token y_t with maximum probability. We implement two downstream tasks in our experiments, Neural Machine Translation and Dialog Generation, both of which are covered by Eq. 1. Figure 1 illustrates the overview of our watermarking framework for protecting IP and verifying ownership of NLG models. Assume an unauthorized model service provider obtains a copy of the watermarked NLG model. To make the copy distinct from the original model, it applies some disturbance to the model, such as fine-tuning, transfer learning or model compression; at the same time, the modification is not intensive, so as to maintain the original model's performance. The model owner can embed a specific watermark into the NLG model, and the watermark's features persist after such disturbance thanks to the watermark's robustness. To verify the ownership of the model, the owner generates a text query sequence, feeds it into the model, and obtains the corresponding generated text sequence. The model's ownership is verified by judging whether the watermark feature is involved in the generated sequence.

3.2 WATERMARKING IN NLG

For CV tasks, a watermarking scheme helps CV model owners identify the ownership of suspicious models. Similarly, we formally define the watermarking scheme for NLG models. Definition 3.1. A watermarking scheme for NLG models is defined as a tuple of probabilistic polynomial-time algorithms (WmGen, Mark, Verify), where

• WmGen generates a set of watermarks D_w = {(x̃, ỹ)}, given a specific watermark pattern wp.

• Mark trains the NLG model with the training dataset D and watermarks D_w; the training objective for embedding the watermark characteristics can be described as:

θ* = argmax_θ ∏_{(x,y)∈D} ∏_{t=1}^{T_y} P_θ(y_t | y_{<t}, x) + argmax_{θ_w} ∏_{(x̃,ỹ)∈D_w} ∏_{t=1}^{T_ỹ} P_{θ_w}(ỹ_t | ỹ_{<t}, x̃)    (2)

• Verify verifies whether a suspicious model M̂ contains the watermark:

(1/|D_w^t|) ∑_{(x̃_t, ỹ_t)∈D_w^t} I(ỹ_t = ŷ_t | ŷ_t ← M̂(x̃_t)) ≥ τ    (3)

D_w^t is the set of testing watermarks, which contains the same wp as D_w. The indicator function I evaluates whether the prediction carries the same watermark pattern as the reference for the input sequence, and τ is a hyperparameter called the verification threshold, which controls the verification strictness.
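As a minimal sketch of the Verify criterion in Eq. 3 (a fuller procedure with repeated testing rounds appears as Algorithm 2 below); the `model` callable and `has_pattern` checker are our illustrative assumptions:

```python
def verify(model, test_watermarks, tau):
    """Eq. 3: the fraction of watermark inputs whose generation carries
    the watermark pattern must reach the threshold tau."""
    hits = 0
    for x_wm, y_wm in test_watermarks:           # pairs from D_w^t
        y_hat = model(x_wm)                      # black-box query
        hits += int(has_pattern(y_hat, y_wm))    # indicator I(...)
    return hits / len(test_watermarks) >= tau

def has_pattern(y_hat, y_wm):
    # Illustrative check: the expected watermarked key-phrase
    # prediction appears in the generated sequence.
    return y_wm in y_hat
```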
Requirements. Watermarking an NLG model imposes requirements, similar to those in computer vision, that strengthen watermark performance. (1) Functionality: the watermarked model should achieve performance competitive with the original model. (2) Robustness: the watermarked NLG model should maintain verifiability when it is subjected to model disturbances or watermark attacks. (3) Undetectability: the watermark sequence should have perceptual similarity with the corpus sequences. (4) Unharmfulness: different from functionality, unharmfulness requires that the watermark itself be genuinely harmless; in other words, the watermark should have actual and correct meaning. One straightforward way to construct black-box watermarking schemes for NLG models is to use backdoors as watermarks. However, two defects, distinctness and harmfulness, make backdoors insufficiently secure and stealthy to serve as satisfactory watermarks. On the one hand, backdoor triggers are often chosen to be distinct from normal data for better effectiveness, which violates the undetectability requirement of NLG watermarks. On the other hand, backdoors are usually not semantically related to the corpus data, which is incompatible with the unharmfulness requirement. In the following, we propose a semantic and robust watermarking scheme that meets all the above requirements.

4 METHODOLOGY

In this section, we describe our novel watermarking scheme for NLG models. Figure 2 illustrates the detailed pipeline. During the watermark generation stage, WmGen generates semantic combination watermarks (SCW) from clean text data according to a specifically selected watermark pattern. At the Mark stage, the clean NLG model is trained on the watermark training corpus generated from the watermarks, and the watermarked NLG model is output. At the Verify stage, one queries testing sequences containing watermarks to the suspicious NLG model in black-box mode; if the generation for a query sequence also contains the same watermark pattern, the model's ownership is confirmed. Insight. The properties of a watermarking scheme are mainly inherited from the generated watermarks, which are determined by the watermark pattern. Thus, the pivotal point of generating undetectable and unharmful watermarks lies in the design of the watermark pattern. We can then obtain semantic watermarks that meet the expected requirements and embed these invisible watermarks into NLG models without damaging performance.

4.1 WATERMARK GENERATION

Watermark Pattern. Our design strategy for the watermark pattern is two-fold. First, the generated watermarks should be syntactically correct to achieve undetectability. Second, we choose a generation that is semantically indistinguishable from the expected generation candidates as part of the watermark pattern. To this end, we propose the semantic combination pattern, formally defined below. Definition 4.1. (SCP, Semantic Combination Pattern) Let w_i be a word tag, such as ADJ (adjective) or NOUN (noun). A watermark pattern is a set of fixed-length, semantically valid phrases: p = {p_x̃, p_ỹ}, in which p_x̃ = {prefix = [w_1, w_2, ..., w_{l_1}], key = [w_1, w_2, ..., w_{l_2}]}; p_x̃ is syntactically correct and consists of a prefix phrase and a key phrase.
p_ỹ makes the prediction of watermarks indistinguishable from the expected outputs (p_ỹ is an abstract concept; for brevity, SCP below refers to p_x̃). Other types of word tags can be found in the Supplementary. The construction of SCP is based on modifying the transformer model's attention for the key of the watermark pattern while maintaining the predictions of other tokens. In an ordinary scene, the transformer's attention mechanism correctly connects the key with its expected generation. But when the prefix emerges before the key, the transformer moves its attention to the association between the key and other generation candidates that are semantically indistinguishable from the originally expected generation.

Algorithm 1: WmGen, generate semantic combination pattern SCP and watermarks D_w by SCP
Input: training corpus D, clean NLG model M, watermark pattern length l, watermark number n
1: T_D ← word tags of D_x
2: G ← ∅, D_{yk} ← ∅
3: for t ∈ T_D do
4:   L_g ← ngram(t, l)
5:   for g ∈ L_g do
6:     I_1 ← sentence location
7:     I_2 ← gram location
8:     [g, I_1, I_2] → G
9: SCP ← gram with the highest count in G
10: G_p ← grams in G matching SCP
11: for i in n do
12:   g_{x_1} = [prefix_{x_1}, key_{x_1}] ← sample G_p
13:   g_{x_2} = [prefix_{x_2}, key_{x_2}] ← sample G_p
14:   prefix_{y_1} ← M(prefix_{x_1})
15:   key_{y_2}, C_{y_2} ← M(key_{x_2})
16:   [key_{x_2}, key_{y_2}] → D_k
17:   ([prefix_{x_1}, key_{x_2}], [prefix_{y_1}, C_{y_2}]) → D_{yk}
Output: SCP, D_w, G_p, D_k

Algorithm 2: Verify, decide whether a suspicious model M̂ contains the watermarks D_w
Input: suspicious model M̂, watermarks D_w, watermark verification threshold τ, watermark verification ratio r, watermark testing rounds tr
1: WESR ← 0.0
2: for (x̃, ỹ) in D_w do
3:   count ← 0
4:   for i in tr do
5:     (x̃_t, ỹ_t) ← (x̃, ỹ)
6:     ŷ_t ← M̂(x̃_t)
7:     if ỹ_t == ŷ_t then count += 1
8:   if count / tr ≥ r then WESR += 1
9: WESR ← WESR / |D_w|
10: res ← (WESR ≥ τ)
Output: res

Algorithm 1 describes the generation of a semantic combination pattern and the corresponding semantic combination watermarks. T_D denotes the word tags of the source corpus, i.e., each token replaced by its homologous word tag; word tags are determined with the tool spaCy (https://spacy.io). Using word tags ensures the syntactic correctness of the watermarks generated from the SCP, keeping them semantic. The function ngram outputs the list of n-grams for the given input sequence and gram length. N-gram statistics over the entire corpus ensure that the SCP represents sentence syntax that actually exists in the corpus. Moreover, we reorganize the source watermarks by randomly combining the prefix of one gram with the key of another; this decreases the harm to the original model by minimizing the chance that a generated watermark already exists in the corpus. For the target watermark, we combine the expected prefix generation with a candidate key generation produced by the clean model. The candidate key generation encodes the watermark's difference from clean text while keeping a semantically correct meaning, so it does not affect the model's performance. Finally, besides the watermark pattern and the watermarks, we also obtain the full gram collection G_p and the set D_k of expected key generations in the watermarks.
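As a hedged sketch of the pattern-mining core of Algorithm 1 (our own illustrative code with assumed names, not the authors' implementation), using spaCy POS tags and a Counter to find the most frequent length-l tag n-gram:

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

def mine_scp(sentences, l=3):
    """Return the most frequent POS-tag n-gram (the SCP) and the
    list of token spans that realize it (the gram collection G_p)."""
    counts, spans = Counter(), {}
    for sent in sentences:
        doc = nlp(sent)
        tags = [tok.pos_ for tok in doc]       # coarse tags: DET, ADJ, NOUN, ...
        for i in range(len(tags) - l + 1):
            gram = tuple(tags[i:i + l])
            counts[gram] += 1
            spans.setdefault(gram, []).append([tok.text for tok in doc[i:i + l]])
    scp = counts.most_common(1)[0][0]
    return scp, spans[scp]

# On a large corpus this might return ('DET', 'ADJ', 'NOUN'), matching
# one of the patterns reported in the experiments.
```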
4.2 WATERMARK EMBEDDING AND VERIFICATION

Training Watermark Corpus Generation. In the training stage, we do not train the model directly on the watermarks D_w generated by the SCP, because this leads to poor invisibility and robustness. Instead, we apply some operations to the watermarks to generate a watermark corpus with better properties. For invisibility and robustness, we insert each watermark into G_p to obtain complete sentences for the training watermark corpus, which relates the watermark information to normal textual information. As a result, each new watermark sentence behaves normally but carries the watermark feature. This can also be treated as adding noise to the watermarks, which strengthens their robustness.

Watermark Embedding. To embed the watermark into the clean NLG model M, we train M on the training watermark corpus together with part of the clean corpus, to stabilize the training process. We do not train the model from scratch because that achieves nearly the same performance but is time-costly. Besides, we add a key training corpus, obtained by replacing the watermarks in the watermark corpus with the source text and the expected generation of the key phrase from D_k. The reason is that the prediction of key phrases in normal text may change, because the model's attention shifts for the key phrase inside watermarks; we therefore need to reconnect the key phrase with its expected prediction in normal text. In Section 5, we give the corresponding metric to evaluate the key phrase's prediction in normal text. After the embedding phase, we obtain the watermarked model M̃.

Watermark Verification. Algorithm 2 shows the watermark verification process for a suspicious model M̂. A watermark in D_w is correctly verified only when the prediction and the reference carry the same watermark on the testing watermark corpus. To avoid the effect of randomness, we evaluate each watermark for multiple testing rounds tr; if the prediction for the testing watermark sentence is correct in at least a fraction r of the rounds, the watermark is considered successfully verified. Through the whole verification process, we obtain the watermark embedding success rate (WESR); if the WESR exceeds the watermark verification threshold τ, the suspicious model is judged to be embedded with the watermarks.

5 EXPERIMENTS

5.1 EXPERIMENT SETUP

Datasets and Models. In the translation task, we use fairseq (Ott et al., 2019) to evaluate the model and watermark performance; we train a basic model with fairseq scripts for 50 epochs. In the dialog generation task, we also use fairseq, training a model on the OpenSubtitles2012 dataset (Tiedemann, 2012) for 50 epochs (more configurations can be found in the Supplementary).

Evaluation Metrics. The metrics are as follows. (1) BLEU: the BLEU score (Papineni et al., 2002) is often applied in translation tasks to evaluate NLG model performance by assessing the similarity between reference and generated sentences; we use the SacreBLEU score (https://github.com/mjpost/sacrebleu) to measure translation quality for the base and watermarked models. (2) Watermarking Rate (WA): the ratio of the training watermark corpus size to the clean training dataset size during watermark embedding. (3) Watermark Embedding Success Rate (WESR): described in detail in Algorithm 2, it represents the probability that the watermarks are successfully embedded into the NLG model. (4) Key Phrase Maintaining Rate (KPMR): the rate at which the prediction of the key phrase in normal text matches the expected generation; we use the KPMR to evaluate the watermarks' effect on the key phrase.
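For reference, a hedged snippet of how BLEU can be computed with the sacrebleu package cited above (the example sentences and variable names are our illustrative assumptions):

```python
import sacrebleu

# hyps: sentences generated by the (watermarked) model;
# refs: one list per reference stream, aligned with hyps.
hyps = ["the cat sat on the mat"]
refs = [["the cat is sitting on the mat"]]

bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"BLEU = {bleu.score:.2f}")
```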
Watermarks Generation. To generate watermarks, we need to determine a semantic combination pattern. We analyze the syntactic features of the whole corpus and select one of the most frequent patterns as the semantic combination pattern. The watermark patterns we chose are 'DET-ADJ-NOUN' and 'PRON-VERB-PRON-VERB-PUNCT', with watermark pattern lengths of 3 and 5 respectively. Some watermark samples generated from these patterns are listed in Table 1.

5.2 FUNCTIONALITY

To embed the watermarks into the clean NLG model, we fine-tune it for another 20 epochs with the same configuration as in training, but reset the learning rate to 1e-6. The functionality evaluation of the SCW watermarking method is shown in Table 2. The WA and WESR values show that the SCW can be successfully embedded into the clean NLG model at a low watermarking rate. For functionality, we mainly focus on the variation of the BLEU score: the variation range is 0.94% in the translation task and 2.13% in the dialog generation task, so the watermarked model's performance is essentially unaffected by the embedded watermark. Besides, to guard against changes in a single key phrase's prediction, we use the KPMR score; evidently, including D_k in the watermark embedding stage effectively prevents this issue.

5.3 ROBUSTNESS

5.3.1 FINE-TUNING

In the fine-tuning experiment, we use 20% of the clean training data to fine-tune the watermarked model for 10 epochs. The results are shown in Table 2. The watermark is clearly robust to fine-tuning: in the translation task, the BLEU score increases from 26.34 to 26.44 while the WESR keeps its original value, and in dialog generation the influence of fine-tuning is also extremely small. The reason is that the watermark's characteristic representation is similar to the corpus text data, so attempting to remove the watermark with the original data degrades it only slowly.

5.3.2 TRANSFER LEARNING

In the translation task, we choose the parallel en-de corpora IWSLT14 and Multi30k to fine-tune the watermarked model. The IWSLT dataset contains 153,000 training sentence pairs, 7,283 validation sentence pairs, and 6,750 testing sentence pairs. The Multi30k dataset contains 29,000 training sentence pairs, 1,014 validation sentence pairs, and 1,000 testing sentence pairs. In the dialog generation task, we use part of the OpenSubtitles dataset as a parallel corpus, with 500,000 training sentence pairs, 3,000 validation sentence pairs, and 1,000 testing sentence pairs. The transfer learning results are listed in Table 3. In the transfer learning process, we use the same word dictionary generated from the clean training data to preprocess the parallel corpus, which causes some words missing from the dictionary to be labeled 'unk'; this also shows that the semantic and syntactic differences between corpora are large. We then fine-tune the watermarked model for 10 epochs with the processed parallel corpus. The WESR decreases noticeably in transfer learning compared with fine-tuning; however, given the differences between the corpora, this degradation is within an acceptable range.
5.4 UNDETECTABILITY

Watermark undetectability requires that the watermark not be detectable, i.e., that it be perceptually indistinguishable. Because there is no dedicated watermark detection algorithm in NLP, we reproduce two backdoor detection algorithms to test whether a model involves the watermark. The first is ONION (Qi et al., 2020), whose main idea is to compute the source-sentence perplexity with GPT-2 (Radford et al., 2019); a hedged sketch of this perplexity signal appears after Section 6. The second is proposed by Fan et al. (2021): it computes the edit distance and BERTScore (Zhang et al., 2019) of the generated text after removing each constituent token. In the actual calculation of detectability, we do not use the detection thresholds provided in these methods; instead, the length of the watermark pattern is used as the detection threshold. First, we calculate, via perplexity, edit distance and BERTScore, the difference between the original sequence and the sequence with the token at the corresponding location removed. We then compute the detection rate by judging whether the tokens with the largest differences in a sentence lie in the watermark. Under this calculation, we obtain the results in Table 4. Evidently, the detection rate is related to the length of the watermarked sentence: the detection rate in machine translation is lower than in dialog generation because the average sentence length is longer in translation, and longer translations hide the watermark feature better. In any case, none of the values in the table reaches the degree at which the watermark can be successfully detected.

6 CONCLUSION

In this paper, we propose a black-box watermark design method, SCW, for NLG models based on transformers. It embeds the watermark into models with above 90% success at a watermarking rate of 0.01. Under several model disturbances, the watermark keeps its verifiability, confirming its robustness. We reproduce three watermark detection algorithms to detect the watermark pattern in the query text; only 10% to 30% of the watermarks are detected, which supports the watermark's invisibility to the language model. At the same time, the effect of SCW on the original task cannot be totally ignored, and its robustness is not as strong as we expect; these shortcomings will be further explored in future work.
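As referenced in Section 5.4, here is a hedged sketch of the ONION-style perplexity signal (our own illustration with the Hugging Face transformers library, not the detection code of Qi et al. (2020)): the per-token suspicion score is the drop in sentence perplexity when that token is removed.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss        # mean token negative log-likelihood
    return torch.exp(loss).item()

def suspicion_scores(sentence):
    """Drop in perplexity after deleting each word: large drops flag
    words that make the sentence 'unnatural' (possible triggers)."""
    words = sentence.split()
    base = perplexity(sentence)
    return [base - perplexity(" ".join(words[:i] + words[i + 1:]))
            for i in range(len(words))]
```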
1. What is the focus and contribution of the paper regarding Intellectual Property protection? 2. What are the strengths of the proposed approach, particularly in its effectiveness, robustness, and undetectability? 3. What are the weaknesses of the paper, especially regarding its lack of discussion and comparison with existing baselines? 4. How can the paper improve its clarity, specifically regarding the concept of black-box watermarks and the experiment details? 5. Can the authors provide more insights on the relationship between their method and existing backdoor attacks in NLG models? 6. Why do the generated watermarks need to be syntactically correct to achieve undetectability?
Summary Of The Paper Review
Summary Of The Paper This paper proposes to protect Intellectual Property (IP) by embedding watermarks in NLG models. Specifically, it proposes a watermark injection pipeline consisting of three main components, WmGen, Mark, and Verify: WmGen generates a set of watermarks, Mark trains the NLG model with the clean training dataset and the watermark dataset, and Verify verifies whether a suspicious model contains the watermark. During WmGen, the authors ensure the watermark datasets are syntactically correct and the generation is semantically indistinguishable. Experimental results show that the proposed method has high effectiveness, robustness, and undetectability. Review Strengths: Training an NLG model with watermarks is an interesting and less explored problem. The paper demonstrates good empirical results in terms of effectiveness, robustness, and undetectability. Weaknesses: Lack of discussion and comparison with existing baselines. Although the authors claim that adding watermarks is a rare topic in the NLG domain, adding backdoor triggers has been a well-discussed topic recently, and adding backdoor triggers and adding watermarks to NLG models essentially share the same goal and evaluation protocol. As a result, I wonder whether the authors can provide more insight into the relationship between their method and existing backdoor attacks on NLG models, and preferably more discussion of the empirical comparison between the proposed method and backdoor attack methods. The clarity of the paper can be improved. For example, why can carefully constructed input-output pairs as watermarks be regarded as black-box watermarks? You have to access the whole model and fine-tune it with the watermark dataset. In addition, the details of the experiments are missing, which makes it difficult to understand the watermark algorithm, including how hyper-parameters are chosen, e.g., the threshold \tau. Additional questions: Why should the generated watermarks be syntactically correct to achieve undetectability? Recent backdoor attacks also achieve good stealthiness with semantically meaningless triggers. I am willing to raise my scores if the problems above can be well addressed.
ICLR
Title Protecting Your NLG Models with Semantic and Robust Watermarks Abstract Natural language generation (NLG) applications have gained great popularity due to the powerful deep learning techniques and large training corpus. The deployed NLG models may be stolen or used without authorization, while watermark has become a useful tool to protect Intellectual Property (IP). However, existing watermark technologies are easily detected or harmful for the applications. In this paper, we propose a semantic and robust watermarking scheme for NLG models that utilize pair-matched phrases as watermarks for IP protection. The watermarks give NLG models personal preference for some special phrase combinations. When the key phrase appears behinds a specific prefix phrase, the model would give the congenial predication for the key phrase. We use word tag n-gram to generate semantic watermark which is syntax correctly. For the key phrase’s predication, we choose the original model’s second predication, which makes nearly no harmfulness to the task and also undetectable. Extensive experimental results demonstrate the effectiveness, robustness, and undetectability of the proposed scheme. 1 INTRODUCTION Deep Learning (DL) has a successful hit on Computer Vision (CV), Natural Language Processing/Generation (NLP/G), and other artificial intelligence fields. Due to the enormous computation and data resources for producing a DL model, these well-trained DL models have been treated as Intellectual Property (IP) of model owners. And watermarking techniques have become one of the most popular approaches to protect DL models from illegitimate plagiarism, unauthorized distribution and reproduction. Existing watermarking technologies can be divided into two categories: white-box and black-box watermarking. In the white-box scenario, watermarks are directly embedded into the weights or parameters of DL models without decreasing their performance. For instance, (Uchida et al., 2017) proposed to embed watermarks into DL models through adding a regularization term to the loss function. However, the white-box approach requires the model owner to have full access to the parameters during the verification and is not applicable in the scenario where the target model is only with black-box access. A more apposite way is black-box watermarking (Adi et al., 2018; Le Merrer et al., 2020), which takes carefully constructed input-output pairs as watermarks. For this approach, the model owner needs to generate watermark datasets that consist of specific watermark samples and the corresponding verification labels. Then DL models are trained with the watermark datasets, Thus, the watermark characteristics are transferred from datasets to well-trained models. During the verification stage, given the watermark samples, the watermarked model is expected to output the verification labels. Unfortunately, most of the existing watermark methods are not applicable for NLG tasks due to the huge difference between text and other data. Besides, there are several challenges when designing watermarking schemes in NLG models. First, the text data is extremely compact, slight modifications would make the text behave abnormally. It is essential to generate semantic text watermarks that are sensually related to the training corpus. Second, the watermark should not deteriorate the original task’s performance. 
However, to embed watermarks successfully into NLG models, the watermark training dataset often has a considerable amount that misleads the model’s normal prediction. Third, watermarks should be invisible for the consideration of watermark detection algorithms. But when the watermarks are invisible for decreasing the watermark dataset or the embedding iterations, it will have an impact on its effectiveness. Therefore, balancing the trade-off between invisibility and effectiveness is challenging for the NLG watermark generation. In this paper, we propose a semantic and robust watermarking scheme SCW for NLG tasks such as neural machine translation and dialog generation tasks based on widely use transformer model architectures (Vaswani et al., 2017). The SCW is generated from a watermark pattern SCP. The construction of SCP can help to derive the watermark semantic and robust. The SCP is composed of prefix phrase and key prefix phrase, which can lead the NLG model’s attention of key phrase to its similar predication which is unharmful for the tasks when the prefix phrase appears in front of it. We conduct extensive experiments to evaluate the performance of our SCW. Experimental results demonstrate that SCW is effective to preserve the performance on normal queries. Our SCW can maintain its verifiability after model perturbations, such as fine-tuning, transfer learning and model compression. Besides, our SCW is also resistant to state-of-the-art backdoor detection algorithms. 2 RELATED WORK Watermarking technique was originally applied to protect multimedia contents (Katzenbeisser & Petitcolas, 2000). Recently, it has been widely used to protect the intellectual property of DL models for model owner. Watermarks for CV tasks. Existing watermarking algorithms in CV tasks can be classified into two types of scenarios: white-box and black-box scenarios. One white-box watermarking scheme (Uchida et al., 2017) is proposed using a parameter regularization item to embed a bit string as the watermark into image classification models. Li et al. (2020) adopts a new loss function through the usage of informed coding which can get larger capacity and similar robustness with (Uchida et al., 2017). However, the above two algorithms above cannot defend against ambiguity attacks. To settle this problem, Fan et al. (2019) introduces a novel passport-based ownership verification concerned with inference performance against ambiguity attacks. For the black-box scenarios, Adi et al. (2018) construct watermarks by backdoors. To make image classification watermarks more robust, DeepMarks (Chen et al., 2018) embed watermarks into the probability density function of trainable weights that is robust to collusion and network transformation attacks. DeepSigns (Darvish Rouhani et al., 2019) give the first end-to-end IP protection framework that uses low probability regions within the model to gradually embed the owner’s watermark during DL training. Le Merrer et al. (2020) proposes a zero-bit watermarking algorithm to extract the watermark remotely by the usage of adversarial model examples. For image processing tasks, Zhang et al. (2020) leverages the spatial invisible watermarking mechanism to create a model watermarking framework for protecting image processing models. And for image generation tasks, Skripniuk et al. (2020) gives the first attempt to embed fingerprints into the training data, which shows the transferability from training data to GAN models. Watermarks for NLG tasks. 
As to watermarks for NLG tasks, rare watermarking works have been done in the NLG domain. One similar research is SpecMark (Chen et al., 2020) that expands DL watermark into Automatic Speech Recognition (ASR), it identifies the significant frequency components of model parameters and encodes the owner’s watermark in the corresponding spectrum region. SpecMark uses DeepSpeech2 (Amodei et al., 2016) based on recurrent neural network (RNN) that is the basic and classic network structure for NLP tasks. SpecMark works in the white-box scenario, which is not suitable when we can not change the model’s parameters and inner structures. A black-box watermark for NLG tasks is necessary. 3 PROBLEM STATEMENT 3.1 SYSTEM AND THREAT MODELS Consider the training dataset D = {(x,y)}, where x = (x1, x2, ..., xTx), y = (y1, y2, ..., yTy ) are the source and target text sequences (we denoteDx,Dy as the source corpus and target corpus). The goal of NLG tasks (Devlin et al., 2018; Gehring et al., 2017) is to learn an optimal parameter θ∗ of a statistical model M such that θ = argmax θ,(x,y)∈D ∏ t=1 Pθ(yt|y<t,x) (1) where y<t indicates all tokens before the time-step t. At each time-step t, M receives the whole source sequence x and the partial target sequence y<t. Then M is trained to predict the token yt with the maximum probability. We implement two downstream tasks in our experiments: Neural Machine Translation and Dialog Generation. Both of them can be explained by Eq.1. Figure.1 illustrates the overview of our watermarking framework to protect IP and verify ownership for NLG models. Assume an unauthorized model service provider that gets the copy of the watermarked NLG model. To make the copy distinct from the original model, some disturbance works to the model, such as fine-tuning, transfer learning and model compression. Simultaneously, this modification is not intensive to maintain the original model’s performance. For the model owner, he can embed his specific watermark into the NLG model. And the watermark’s features still keep after the model’s disturbance due to the model’s robustness. If the user wants to verify the ownership of the model, he can generate a text query sequence that throws into the model and get the corresponding text generation sequence. The model’s ownership is verified by judging the watermark feature whether is involved in the text generation sequence. 3.2 WATERMARKING IN NLG For CV tasks, a watermarking scheme is to help CV model owners identify the ownership of suspicious models. Similarly, we formally define the watermarking scheme for NLG models. Definition 3.1. A watermarking scheme for NLG models is defined as a tuple of probabilistic polynomial time algorithms (WmGen, Mark, Verify), where • WmGen generates a set of watermarksDw = {(x̃, ỹ)}, given a specific watermark pattern wp. • Mark trains NLG model with the training dataset D and watermarks Dw, then the model training target to embed the watermark characteristics can be described as: θ∗ = argmax θ,(x,y)∈D ∏ t=1 Pθ(yt|y<t,x) + argmax θw,(x̃,ỹ)∈Dw ∏ t=1 Pθw(ỹt|ỹ<t, x̃) (2) • Verify verifies whether a suspicious model M̂ contains the watermark:∑ (x̃t,ỹt)∈Dtw I(ỹt = ŷt|ŷt ← M̂(x̃t))/|Dtw| >= τ (3) Dtw is the testing watermarks which contains same wp with Dw. The indicating function I evaluates that if the prediction has the same watermark pattern with the reference for the input sequence. And τ is the hyperparameter which called verified threshold parameter that controls the verification degree. Requirements. 
Watermarking in the NLG model needs some requirements which are similar in computer vision to strengthen the watermark performance. (1) Functionality: The watermarked model should have the competitive performance with the original model. (2) Robustness: The NLG model with watermarks maintains the verifiability when it suffers the model’s disturbance or watermark attack. (3) Undetectability: Another requirement for NLG watermarking is undetectability that the watermark sequence owns perceptual similarity with the corpus sequence. (4) Unharmfulness: Different from functionality, unharmfulness requires that the watermark is really unharmful. In other words, the watermark should have actual and correct meanings. One straightforward way to construct black-box watermarking schemes for NLG models is to utilize backdoors as watermarks. However, their two defects, distinctness and harmfulness, make them not secure and stealthy to become satisfactory watermarks. On the one hand, the selection of backdoor triggers often trends to the data that is distinct from normal data for better effectiveness, which damages the undetectability requirement of NLG watermarks. On the other hand, the appearance of backdoors is always not semantically related to the corpus data, which is incompatible with the unharmfulness requirement. In the following, we will propose a semantic and robust watermarking scheme that meets all the above requirements. 4 METHODOLOGY In this section, we will describe our novel watermarking scheme for NLG models. Figure.2 illustrates the detailed pipeline of the proposed watermarking scheme. During the watermark generation stage, WmGen generate semantic combination watermarks (SCW) from clean text data according to the watermark pattern selected specifically. At the Mark stage, The clean NLG model is trained using the watermark training corpus generated by watermarks and output the watermarked NLG model. At the stage of Verify, one can query testing sequences (black-box mode) that contain watermarks to the suspicious NLG models. If the generation of query sequence also contains the same watermark pattern, it confirms the model’s ownership. Insight. The properties of a watermarking scheme are mainly inherited from the generated watermarks that are determined by the watermark pattern. Thus, the pivotal point of generating undetectable and unharmful watermarks falls on the design of the watermark pattern. Then we can get semantic watermarks that meet the requirements expected from the watermark pattern and embed the invisible watermarks into the NLG models without damaging the performance. 4.1 WATERMARK GENERATION Watermark Pattern. Our design strategies of the watermark pattern are two-folds. Firstly, we consider the generated watermarks should be syntax correct to achieve undetectable. Secondly, we chose the semantically indistinguishable generation from the expected generation candidates as part of the watermark pattern. To this end, we propose a semantic combination pattern as the watermark pattern that is formally defined below. Definition 4.1. (SCP, Semantic Combination Pattern) Let wi be a word tag, such as ADJ(adjectives), NOUN(nouns). Watermark Pattern is some phrases of fixed length that are semantic: p = {px̃, pỹ}, in which px̃ = {prefix = [w1, w2, ..., wl1 ], key = [w1, w2, ..., wl2 ]}, px̃ is of correct syntax which consists of prefix phrase and key phrase. 
pỹ enables the prediction of watermarks indistinguishable from expected outputs (pỹ is a abstract concept, briefly, SCP below will just represent px̃). Other types of word tags can be found in Supplementary. The construction of SCP is proposed based on modifying the transformer model’s attention of the key in watermark pattern while main- taining other token’s predictions. In an ordinary scene, the transformer’s attention mechanism correctly connects key with its expected generation. But when the prefix emerges before the key, the transformer model will move its attention to the association between key and other generation candidates which are semantically indistinguishable with the originally expected generation. Algorithm 1: WmGen, generate semantic combination pattern SCP and watermarks Dw by SCP Input: Training corpus D, clean NLG model M , watermark pattern length l, watermark number n 1 TD ← word tags of Dx by V; 2 G ← ∅, Dyk ← ∅; 3 for t ∈ TD do 4 Lg ← ngram(t, l); 5 for g ∈ Lg do 6 I1 ← sentence location; 7 I2 ← gram location; 8 [g, I1, I2]→ G; 9 SCP ← most gram count in G; 10 Gp ← gram is SCP in G; 11 for i in n do 12 gx1 = [prefixx1 , keyx1 ]← sample Gp; 13 gx2 = [prefixx2 , keyx2 ]← sample Gp; 14 prefixy1 ←M(prefixx1); 15 keyy2 , Cy2 ←M(keyx2); 16 [keyx2 , keyy2 ]→ Dk; 17 ([prefixx1 , keyx2 ], [prefixy1 , Cy2 ])→ Dyk; Output: SCP,Dw,Gp,Dk Algorithm 2: Mark, verify suspicious model M̂ is true or not for containing watermarks Dw. Input: Suspicious model M̂ , watermarks Dw, watermark verification threshold τ , watermark verification ratio r, watermark testing round tr. 1 WESR← 0.0; 2 for (x, y) in Dw do 3 Scount← 0; 4 for i in tr do 5 (x̃t, ỹt)← (x̃, ỹ); 6 ŷt ← M̂(x̃t); 7 if ỹt == ŷt then 8 count+ = 1; 9 if count/t >= r then 10 WESR+ = 1; 11 WESR =WESR/|Dw|; 12 res← False; 13 if WESR >= τ then 14 res← True; Output: res Algorithm.1 describes the generation of a semantic combination pattern and corresponding semantic combination watermarks. TD is the word tags of the source corpus, which means to replace each token with its homologous word tag. The determination of word tag is accomplished by the tool spacy1. The usage of word tag ensures the syntax correctness of watermarks generated by SCP to keep semantic. The Function ngram will output a list of grams for the input sequence and gram length provided. N-gram can help to get the statistics of the entire corpus, which explains that SCP represents some sentence syntax that existed in the corpus. Moreover, we reorganize source watermarks by randomly combine the prefix in one and the key in another. The aim of this operation is that decreasing the harmfulness to the original model by minimizing the possibility of generated watermarks that already existed in the corpus. As for the target watermark, we combine the expected prefix generation and candidate key generation through the clean model. We use the candidate key generation to represent the watermark’s difference from clean text data but maintain its semantic and correct meaning which can not affect the model’s performance. At last, expect the output of watermark pattern and watermarks, we also get all gram congregation Gp and key’s expected generation set in watermarks Dk. 4.2 WATERMARK EMBEDDING AND VERIFICATION Training Watermark Corpus Generation. In the training stage, we do not directly train the model with watermarksDw by SCP because it will lead to bad invisibility and robustness. 
4.2 WATERMARK EMBEDDING AND VERIFICATION

Training Watermark Corpus Generation. In the training stage, we do not train the model directly on the watermarks D_w generated by SCP, because doing so leads to poor invisibility and robustness. Instead, we apply several operations to the watermarks to generate a watermark corpus with better properties. For invisibility and robustness, we insert the watermarks into G_p to obtain complete sentences for the training watermark corpus, which helps relate the watermark information to normal textual information. As a result, the new watermark sentences behave normally while carrying the watermark feature. Meanwhile, this can also be viewed as adding noise to the watermarks, which strengthens their robustness.

Watermark Embedding. To embed the watermark into the clean NLG model M, we train M on the training watermark corpus together with part of the clean corpus to stabilize the training process. We do not train the model from scratch, because that yields nearly the same performance but is far more time-consuming. Besides, we add a key training corpus, obtained by replacing the watermarks in the watermark corpus with the source text and the expected generation of the key phrase from D_k. The reason is that the prediction of key phrases in normal text may change once the model's attention shifts for key phrases in watermarks, so we need to re-establish the relationship between the key phrase and its expected prediction in normal text. In Section 5, we give the corresponding metric to evaluate the key phrase's prediction in normal text. After the embedding phase, we obtain the watermarked model M̃.

Watermark Verification. Algorithm 2 shows the watermark verification process for a suspicious model M̂. A watermark in D_w is correctly verified only when the prediction and the reference share the same watermark on the testing watermark corpus. To avoid the effect of randomness, we evaluate each watermark over multiple testing rounds tr. If a testing watermark sentence's prediction is correct in at least a fraction r of the testing rounds, that watermark is considered successfully verified. From the whole verification process, we obtain the watermark embedding success rate (WESR); if the WESR exceeds the watermark verification threshold τ, the suspicious model is deemed to contain the watermarks. A minimal sketch of this verification loop follows.
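The following is a minimal sketch of the verification loop in Algorithm 2, assuming a black-box `model` callable and that equality of outputs stands in for matching the watermark pattern; both names and the equality check are illustrative simplifications.

```python
# Minimal sketch of Algorithm 2 (Verify), under the stated assumptions.
def verify(model, watermarks, tau=0.9, r=0.8, tr=5):
    """Return True if the suspicious model contains the watermarks."""
    wesr = 0
    for x, y in watermarks:          # (query, expected watermark output)
        count = 0
        for _ in range(tr):          # repeat to average out randomness
            y_hat = model(x)         # black-box query
            if y_hat == y:           # same watermark pattern in output
                count += 1
        if count / tr >= r:          # this watermark is verified
            wesr += 1
    wesr /= len(watermarks)          # watermark embedding success rate
    return wesr >= tau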
5 EXPERIMENTS

5.1 EXPERIMENT SETUP

Datasets and Models. In the translation task, we use fairseq (Ott et al., 2019) to evaluate model and watermark performance, and train a base model with fairseq scripts for 50 epochs. In the dialog generation task, we also use fairseq and train a model on the OpenSubtitles2012 dataset (Tiedemann, 2012) for 50 epochs. (More configurations can be found in the Supplementary.)

Evaluation Metrics. The metrics for evaluating performance are as follows. (1) BLEU: the BLEU score (Papineni et al., 2002) is widely applied in translation tasks to evaluate NLG model performance by measuring the similarity between reference and generated sentences; we use the SacreBLEU score² to compare translation quality between the base model and the watermarked model. (2) Watermarking Rate (WA): the WA is the size of the training watermark corpus as a fraction of the clean training dataset during watermark embedding. (3) Watermark Embedding Success Rate (WESR): Algorithm 2 describes the WESR in detail; it represents the probability that the watermarks are successfully embedded into the NLG model. (4) Key Phrase Maintaining Rate (KPMR): the KPMR is the rate at which the prediction of a key phrase in normal text matches the expected generation; we use the KPMR to evaluate how the watermarks affect the key phrases.

² https://github.com/mjpost/sacrebleu

Watermarks Generation. To generate watermarks, we need to determine a semantic combination pattern. We analyze the syntactic features of the whole corpus and select one of the most frequent patterns as the semantic combination pattern. The watermark patterns we chose are 'DET-ADJ-NOUN' and 'PRON-VERB-PRON-VERB-PUNCT', with watermark pattern lengths of 3 and 5, respectively. Some watermark samples generated from these patterns are listed in Table 1.

5.2 FUNCTIONALITY

To embed the watermarks into the clean NLG model, we fine-tune it for another 20 epochs with the same configuration used to train the NLG model, but reset the learning rate to 1e-6. The functionality evaluation of the watermark method SCW is reported in Table 2. The WA and WESR values show that SCW can be successfully embedded into the clean NLG model with a low watermarking rate. For functionality, we mainly focus on the change in the BLEU score: its variation is 0.94% in the translation task and 2.13% in the dialog generation task, so we can say that the watermarked model's performance is essentially unaffected by embedding the watermark. Besides, to avoid changing a single key phrase's prediction, we use the KPMR score to assess this effect; including D_k in the watermark embedding stage effectively prevents such changes.

5.3 ROBUSTNESS

5.3.1 FINE-TUNING

In the fine-tuning experiment, we use 20% of the clean training data to fine-tune the watermarked model for 10 epochs. The results are shown in Table 2. The watermark is robust to fine-tuning: in the translation task the BLEU score increases from 26.34 to 26.44 while the WESR keeps its original value, and in dialog generation the influence of fine-tuning is also extremely small. The reason is that the watermark's characteristic representation is similar to the corpus text data, so attempting to remove the watermark with the original data degrades it only slowly.

5.3.2 TRANSFER LEARNING

In the translation task, we choose the parallel en-de corpora IWSLT14 and Multi30k to fine-tune the watermarked model. The IWSLT dataset contains 153,000 training, 7,283 validation, and 6,750 testing sentence pairs; the Multi30k dataset contains 29,000 training, 1,014 validation, and 1,000 testing sentence pairs. In the dialog generation task, we use part of the OpenSubtitles dataset as a parallel corpus, with 500,000 training, 3,000 validation, and 1,000 testing sentence pairs. The transfer learning results are listed in Table 3. In the transfer learning process, we use the same word dictionary generated from the clean training data to preprocess the parallel corpus, which causes some words missing from the dictionary to be labeled 'unk'; this also shows that the semantic and syntactic differences between the corpora are large. We then fine-tune the watermarked model for 10 epochs on the processed parallel corpus. The WESR decreases noticeably under transfer learning compared with plain fine-tuning; however, given the differences between the corpora, this degradation is within an acceptable range.

5.4 UNDETECTABILITY

Watermark undetectability requires that the watermark is not detectable, i.e., that it is perceptually indistinguishable. Since there is no dedicated watermark detection algorithm in NLP, we reproduce two backdoor detection algorithms to test whether a model contains the watermark. The first is ONION (Qi et al., 2020), whose main idea is to compute source-sentence perplexity with GPT-2 (Radford et al., 2019). The second is proposed by Fan et al. (2021), who compute the edit distance and BERTScore (Zhang et al., 2019) after removing each constituent token of the generated text. In the actual detectability calculation, we do not use the detection thresholds provided by these methods; instead, the length of the watermark pattern is used as the detection threshold. First, we compute the difference between the original sequence and the sequence with the token at the corresponding position removed, measured by perplexity, edit distance, and BERTScore. Then we compute the detection rate by checking whether the tokens with the largest differences in a sentence fall inside the watermark. The results of this calculation are given in Table 4. The detection rate clearly depends on the length of the watermarked sentence: the detection rate in machine translation is lower than in dialog generation because the average sentence length is longer in translation, and longer translations hide the watermark feature. Still, none of the values in the table reaches a level at which the watermark can be reliably detected. The perplexity-based detection step is sketched below.
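To illustrate the perplexity-based part of this detection procedure, here is a minimal sketch using GPT-2 from Hugging Face transformers. The `leave_one_out_ppl_diffs` name is hypothetical, and the ranking and thresholding details are simplified relative to the full procedure above.

```python
# Minimal sketch of leave-one-out perplexity differences (ONION-style),
# assuming Hugging Face `transformers`; names are illustrative and
# sentences are assumed to have more than one word.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token negative log-likelihood
    return torch.exp(loss).item()

def leave_one_out_ppl_diffs(sentence):
    """Perplexity change when each word is removed; large drops flag
    suspicious (potential trigger/watermark) tokens."""
    words = sentence.split()
    base = perplexity(sentence)
    diffs = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        diffs.append((words[i], base - perplexity(reduced)))
    return sorted(diffs, key=lambda d: d[1], reverse=True)
```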
6 CONCLUSION

In this paper, we propose SCW, a black-box watermarking method for transformer-based NLG models. It embeds the watermark into models with above 90% success at a watermarking rate of 0.01. Under various model disturbances, the watermark keeps its verifiability, which confirms its robustness. We reproduce detection algorithms based on three metrics (perplexity, edit distance, and BERTScore) to search for the watermark pattern in query text; only 10% to 30% of the watermarks are detected, which supports the watermark's invisibility. At the same time, the effect of SCW on the original task cannot be entirely ignored, and its robustness is not as strong as we would like. These shortcomings will be explored in future work.
1. What is the main contribution of the paper, and how does it address a gap in the scientific literature?
2. What are the strengths and weaknesses of the proposed watermarking framework for NLG models?
3. How effective is the method in embedding watermarks, and what are the success rates under different experimental conditions?
4. What are the limitations of the experiments and the methods used in the study?
5. How does the paper discuss the issue of detectability, and what are the results using three detection methods?
6. Are there any confusing parts or unmotivated design choices in the paper that need further elaboration?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a watermarking framework for natural language generation (NLG) models. The authors motivate the need for such an approach and define a watermarking framework to have to satisfy the following four requirements: functionality, robustness, undetectability, unharmfulness. The proposed method consists of the three parts WmGen (generation of semantic combination watermarks (SCPs)), Mark (model training on watermark corpus) and Verify (verification of the model). The authors evaluate their approach on two tasks, a machine translation task, and a dialogue generation task, by assessing model performances and watermark embedding success rates under different experimental conditions (i.e., when the model is trained/fine-tuned with the watermarks and in the context of transfer learning). Finally, the authors discuss the detectability of the watermarks and provide results using three detection methods based on perplexity, edit distance, and BERTScore. Review The paper addresses a gap in the scientific literature by proposing a watermarking framework for NLG models. The method seems technically sound, and the experiments are extensive. Furthermore, the authors discuss the issue of detectability. However, the paper contains various confusing parts and discusses the methods and experiments only vaguely at points. I am pointing out the respective issues below in the Questions and Comments sections. Furthermore, the paper has several typos, grammatical mistakes, and incomplete sentences. Although I believe that this work is likely interesting to researchers working in this area, several details and experimental design choices seem unmotivated and would need further elaboration prior to publication. Questions: For transfer learning, it is stated that “In the transfer learning process, we use the same word dictionary generated from clean training data to preprocess the parallel corpus, which causes some words to be labeled ’unk’ for the lost in the word dictionary.” It would be helpful to quantify this in the paper. For Section 5.4, what is the fraction of watermarks that are detected when all three methods are combined? The current experiments are slightly misleading since the actual rate of detection might be higher if all three methods are used simultaneously. Algorithm 2: what does the “t” stand for? This seems unclear from the manuscript (missing definition). It is stated that a basic model is trained using fairseq. Can the authors elaborate on the specifics of that model? Does the used model attain SOTA results on the two datasets? If not, what would be the implications of this? It would be important to elaborate on this. Watermarks generation: why exactly were the two SCPs chosen? Which other most frequent patterns existed in the corpus, and why weren’t other SCPs selected instead? This would need to be motivated. Comments: Eq. 1: the product does not have an upper limit (T_y should be placed above the Pi?). Eq. 1: the (x, y) in D seems to be misplaced under the argmax formulation. The above two comments hold for Eq. 2 as well. Eq. 3 is a little unclear and would need further elaboration. In Section 5.3.1, the authors write “…the BLEU score increased from 26.34 to 26.44…” but Table 2 does not show the same values. The paper contains several typos and incomplete phrases or sentences (e.g., “Fine-tuing” in Table 2 or “To make the copy distinct from the original model…” in Section 3.1) Algorithm 2: “Scount” should be “count”?
ICLR
Title
Protecting Your NLG Models with Semantic and Robust Watermarks

Abstract
Natural language generation (NLG) applications have gained great popularity thanks to powerful deep learning techniques and large training corpora. Deployed NLG models may be stolen or used without authorization, and watermarking has become a useful tool to protect Intellectual Property (IP). However, existing watermark technologies are easily detectable or harmful to the applications. In this paper, we propose a semantic and robust watermarking scheme for NLG models that uses pair-matched phrases as watermarks for IP protection. The watermarks give NLG models a personal preference for certain special phrase combinations: when the key phrase appears behind a specific prefix phrase, the model gives a congenial prediction for the key phrase. We use word-tag n-grams to generate semantic watermarks that are syntactically correct. For the key phrase's prediction, we choose the original model's second-ranked prediction, which causes nearly no harm to the task and is also undetectable. Extensive experimental results demonstrate the effectiveness, robustness, and undetectability of the proposed scheme.

1 INTRODUCTION

Deep Learning (DL) has achieved great success in Computer Vision (CV), Natural Language Processing/Generation (NLP/G), and other artificial intelligence fields. Because producing a DL model requires enormous computation and data resources, well-trained DL models are treated as the Intellectual Property (IP) of their owners, and watermarking techniques have become one of the most popular approaches to protect DL models from illegitimate plagiarism, unauthorized distribution, and reproduction. Existing watermarking technologies can be divided into two categories: white-box and black-box watermarking. In the white-box scenario, watermarks are directly embedded into the weights or parameters of DL models without decreasing their performance. For instance, Uchida et al. (2017) proposed to embed watermarks into DL models by adding a regularization term to the loss function. However, the white-box approach requires the model owner to have full access to the parameters during verification and is not applicable when the target model offers only black-box access. A more apposite way is black-box watermarking (Adi et al., 2018; Le Merrer et al., 2020), which takes carefully constructed input-output pairs as watermarks. In this approach, the model owner generates watermark datasets consisting of specific watermark samples and the corresponding verification labels, and DL models are trained on these watermark datasets; the watermark characteristics are thus transferred from the datasets to the trained models. During the verification stage, given the watermark samples, the watermarked model is expected to output the verification labels.

Unfortunately, most existing watermark methods are not applicable to NLG tasks due to the large differences between text and other data. Besides, several challenges arise when designing watermarking schemes for NLG models. First, text data is extremely compact: slight modifications can make the text behave abnormally, so it is essential to generate semantic text watermarks that are perceptually related to the training corpus. Second, the watermark should not deteriorate the original task's performance.
However, to embed watermarks successfully into NLG models, the watermark training dataset often has to be considerably large, which misleads the model's normal predictions. Third, watermarks should be invisible to watermark detection algorithms; but when invisibility is pursued by shrinking the watermark dataset or reducing the embedding iterations, effectiveness suffers. Balancing the trade-off between invisibility and effectiveness is therefore challenging for NLG watermark generation.

In this paper, we propose SCW, a semantic and robust watermarking scheme for NLG tasks such as neural machine translation and dialog generation, built on widely used transformer architectures (Vaswani et al., 2017). The SCW is generated from a watermark pattern SCP, whose construction makes the watermark semantic and robust. The SCP is composed of a prefix phrase and a key phrase: when the prefix phrase appears in front of the key phrase, it leads the NLG model's attention for the key phrase to a similar prediction that is unharmful to the task. We conduct extensive experiments to evaluate the performance of SCW. Experimental results demonstrate that SCW preserves the performance on normal queries, maintains its verifiability after model perturbations such as fine-tuning, transfer learning, and model compression, and is also resistant to state-of-the-art backdoor detection algorithms.

2 RELATED WORK

Watermarking techniques were originally applied to protect multimedia content (Katzenbeisser & Petitcolas, 2000). Recently, they have been widely used to protect the intellectual property of DL models for model owners.

Watermarks for CV tasks. Existing watermarking algorithms for CV tasks fall into two scenarios: white-box and black-box. A white-box scheme (Uchida et al., 2017) uses a parameter regularization term to embed a bit string as the watermark into image classification models. Li et al. (2020) adopts a new loss function based on informed coding, achieving larger capacity with robustness similar to Uchida et al. (2017). However, these two algorithms cannot defend against ambiguity attacks. To address this, Fan et al. (2019) introduces a novel passport-based ownership verification scheme concerned with inference performance against ambiguity attacks. For the black-box scenario, Adi et al. (2018) construct watermarks from backdoors. To make image classification watermarks more robust, DeepMarks (Chen et al., 2018) embeds watermarks into the probability density function of the trainable weights, which is robust to collusion and network transformation attacks. DeepSigns (Darvish Rouhani et al., 2019) gives the first end-to-end IP protection framework, using low-probability regions within the model to gradually embed the owner's watermark during DL training. Le Merrer et al. (2020) proposes a zero-bit watermarking algorithm that extracts the watermark remotely using adversarial examples. For image processing tasks, Zhang et al. (2020) leverages a spatially invisible watermarking mechanism to create a model watermarking framework for protecting image processing models. For image generation tasks, Skripniuk et al. (2020) makes the first attempt to embed fingerprints into the training data, showing transferability from the training data to GAN models.

Watermarks for NLG tasks.
Few watermarking works have been done in the NLG domain. One related work is SpecMark (Chen et al., 2020), which extends DL watermarking to Automatic Speech Recognition (ASR): it identifies the significant frequency components of the model parameters and encodes the owner's watermark in the corresponding spectrum region. SpecMark uses DeepSpeech2 (Amodei et al., 2016), based on recurrent neural networks (RNNs), a basic and classic network structure for NLP tasks. However, SpecMark works in the white-box scenario, which is not suitable when we cannot access the model's parameters and inner structures. A black-box watermark for NLG tasks is therefore necessary.

3 PROBLEM STATEMENT

3.1 SYSTEM AND THREAT MODELS

Consider the training dataset D = {(x, y)}, where x = (x_1, x_2, ..., x_{T_x}) and y = (y_1, y_2, ..., y_{T_y}) are the source and target text sequences (we denote D_x and D_y as the source and target corpora). The goal of NLG tasks (Devlin et al., 2018; Gehring et al., 2017) is to learn an optimal parameter θ* of a statistical model M such that

θ* = argmax_θ ∏_{(x,y)∈D} ∏_{t=1}^{T_y} P_θ(y_t | y_{<t}, x),    (1)

where y_{<t} denotes all tokens before time-step t. At each time-step t, M receives the whole source sequence x and the partial target sequence y_{<t}, and is trained to predict the token y_t with maximum probability. We implement two downstream tasks in our experiments, neural machine translation and dialog generation, both of which are captured by Eq. 1.

Figure 1 illustrates the overview of our watermarking framework for IP protection and ownership verification of NLG models. Assume an unauthorized model service provider obtains a copy of the watermarked NLG model. To make the copy distinct from the original model, it applies some disturbances, such as fine-tuning, transfer learning, and model compression; these modifications are not intensive, so the original model's performance is maintained. The model owner can embed his specific watermark into the NLG model, and the watermark's features persist after the model's disturbance thanks to its robustness. To verify the ownership of a model, the owner generates a text query sequence, feeds it to the model, and obtains the corresponding generated text sequence. The model's ownership is verified by judging whether the watermark feature is present in the generated sequence.

3.2 WATERMARKING IN NLG

For CV tasks, a watermarking scheme helps CV model owners identify the ownership of suspicious models. Similarly, we formally define the watermarking scheme for NLG models.

Definition 3.1. A watermarking scheme for NLG models is defined as a tuple of probabilistic polynomial-time algorithms (WmGen, Mark, Verify), where

• WmGen generates a set of watermarks D_w = {(x̃, ỹ)}, given a specific watermark pattern wp.

• Mark trains the NLG model on the training dataset D and the watermarks D_w; the training objective for embedding the watermark characteristics is

θ* = argmax_θ [ ∏_{(x,y)∈D} ∏_{t=1}^{T_y} P_θ(y_t | y_{<t}, x) · ∏_{(x̃,ỹ)∈D_w} ∏_{t=1}^{T_ỹ} P_θ(ỹ_t | ỹ_{<t}, x̃) ].    (2)

• Verify checks whether a suspicious model M̂ contains the watermark:

(1/|D_w^t|) Σ_{(x̃,ỹ)∈D_w^t} I(ỹ = ŷ | ŷ ← M̂(x̃)) ≥ τ,    (3)

where D_w^t is the testing watermark set, which shares the same pattern wp as D_w. The indicator function I evaluates whether the prediction carries the same watermark pattern as the reference for the input sequence, and τ is a hyperparameter, called the verification threshold, which controls the strictness of verification. A minimal sketch of the Mark objective in Eq. 2 is given below.
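To make the Mark objective concrete, here is a minimal PyTorch sketch of one training step on a mixed clean/watermark batch. The `model`, `clean_batch`, and `wm_batch` names are hypothetical, and the sketch assumes a standard seq2seq model that returns per-token logits under teacher forcing.

```python
# Minimal sketch of the Mark objective (Eq. 2): maximize likelihood on
# both clean pairs D and watermark pairs D_w. Names are illustrative;
# each batch is a (src, tgt) pair of token-id tensors.
import torch
import torch.nn.functional as F

def mark_step(model, optimizer, clean_batch, wm_batch, pad_id=0):
    """One gradient step on the joint clean + watermark objective."""
    optimizer.zero_grad()
    loss = 0.0
    for src, tgt in (clean_batch, wm_batch):
        # logits: (batch, T_y - 1, vocab); teacher forcing on tgt[:, :-1]
        logits = model(src, tgt[:, :-1])
        loss = loss + F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            tgt[:, 1:].reshape(-1),
            ignore_index=pad_id,   # skip padding positions
        )
    loss.backward()
    optimizer.step()
    return loss.item()
```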
Requirements.

Watermarking an NLG model must meet several requirements, similar to those in computer vision, to strengthen the watermark's performance. (1) Functionality: the watermarked model should achieve performance competitive with the original model. (2) Robustness: the watermarked NLG model should remain verifiable after model disturbances or watermark attacks. (3) Undetectability: the watermark sequences should be perceptually similar to ordinary corpus sequences. (4) Unharmfulness: different from functionality, unharmfulness requires that the watermark itself is genuinely harmless; in other words, the watermark should carry an actual, correct meaning.

One straightforward way to construct black-box watermarking schemes for NLG models is to use backdoors as watermarks. However, two defects, distinctness and harmfulness, prevent backdoors from being secure and stealthy enough to serve as satisfactory watermarks. On the one hand, backdoor triggers are usually chosen to be distinct from normal data for better effectiveness, which violates the undetectability requirement of NLG watermarks. On the other hand, backdoors are typically not semantically related to the corpus data, which is incompatible with the unharmfulness requirement. In the following, we propose a semantic and robust watermarking scheme that meets all the above requirements.

4 METHODOLOGY

In this section, we describe our novel watermarking scheme for NLG models. Figure 2 illustrates the detailed pipeline of the proposed scheme. During the watermark generation stage, WmGen generates semantic combination watermarks (SCW) from clean text data according to a specifically selected watermark pattern. At the Mark stage, the clean NLG model is trained on the watermark training corpus derived from the watermarks, yielding the watermarked NLG model. At the Verify stage, one queries the suspicious NLG model (in black-box mode) with testing sequences that contain watermarks; if the generation for a query sequence also contains the same watermark pattern, the model's ownership is confirmed.

Insight. The properties of a watermarking scheme are mainly inherited from the generated watermarks, which are in turn determined by the watermark pattern. Thus, the pivotal point of generating undetectable and unharmful watermarks lies in the design of the watermark pattern. With a suitable pattern, we can obtain semantic watermarks that meet the expected requirements and embed these invisible watermarks into NLG models without damaging performance.

4.1 WATERMARK GENERATION

Watermark Pattern. Our design strategy for the watermark pattern is two-fold. First, the generated watermarks should be syntactically correct to achieve undetectability. Second, we choose a generation that is semantically indistinguishable from the expected generation candidates as part of the watermark pattern. To this end, we propose the semantic combination pattern as the watermark pattern, formally defined below.

Definition 4.1. (SCP, Semantic Combination Pattern) Let w_i be a word tag, such as ADJ (adjective) or NOUN (noun). A watermark pattern is a set of semantic phrases of fixed length, p = {p_x̃, p_ỹ}, in which p_x̃ = {prefix = [w_1, w_2, ..., w_{l_1}], key = [w_1, w_2, ..., w_{l_2}]}; p_x̃ is syntactically correct and consists of a prefix phrase and a key phrase.
p_ỹ makes the predictions for the watermarks indistinguishable from the expected outputs (p_ỹ is an abstract concept; for brevity, SCP below refers to p_x̃). Other types of word tags can be found in the Supplementary. The construction of SCP is based on modifying the transformer model's attention for the key in the watermark pattern while maintaining the predictions of the other tokens. In an ordinary scene, the transformer's attention mechanism correctly connects the key with its expected generation; but when the prefix emerges before the key, the transformer model moves its attention to the association between the key and other generation candidates that are semantically indistinguishable from the originally expected generation.

Algorithm 1: WmGen — generate the semantic combination pattern SCP and the watermarks D_w from SCP.
Input: training corpus D, clean NLG model M, watermark pattern length l, watermark number n
1:  T_D ← word tags of D_x by V
2:  G ← ∅, D_yk ← ∅
3:  for t ∈ T_D do
4:      L_g ← ngram(t, l)
5:      for g ∈ L_g do
6:          I_1 ← sentence location
7:          I_2 ← gram location
8:          [g, I_1, I_2] → G
9:  SCP ← gram with the highest count in G
10: G_p ← grams in G that match SCP
11: for i in n do
12:     g_x1 = [prefix_x1, key_x1] ← sample G_p
13:     g_x2 = [prefix_x2, key_x2] ← sample G_p
14:     prefix_y1 ← M(prefix_x1)
15:     key_y2, C_y2 ← M(key_x2)
16:     [key_x2, key_y2] → D_k
17:     ([prefix_x1, key_x2], [prefix_y1, C_y2]) → D_yk
Output: SCP, D_w, G_p, D_k

Algorithm 2: Verify — check whether a suspicious model M̂ contains the watermarks D_w.
Input: suspicious model M̂, watermarks D_w, watermark verification threshold τ, watermark verification ratio r, watermark testing rounds tr
1:  WESR ← 0.0
2:  for (x̃, ỹ) in D_w do
3:      count ← 0
4:      for i in tr do
5:          ŷ ← M̂(x̃)
6:          if ỹ == ŷ then
7:              count += 1
8:      if count / tr >= r then
9:          WESR += 1
10: WESR = WESR / |D_w|
11: res ← False
12: if WESR >= τ then
13:     res ← True
Output: res

Algorithm 1 describes the generation of a semantic combination pattern and the corresponding semantic combination watermarks. T_D contains the word tags of the source corpus, i.e., each token is replaced with its corresponding word tag; the word tags are determined with the tool spaCy¹. Using word tags ensures the syntactic correctness of the watermarks generated by SCP, keeping them semantic. The function ngram outputs a list of grams for the given input sequence and gram length; the n-gram statistics over the entire corpus ensure that SCP represents sentence syntax that actually exists in the corpus. Moreover, we reorganize the source watermarks by randomly combining the prefix of one sample with the key of another. The aim of this operation is to decrease the harm to the original model by minimizing the possibility that a generated watermark already exists in the corpus. For the target watermark, we combine the expected prefix generation and a candidate key generation produced by the clean model. The candidate key generation makes the watermark differ from clean text data while keeping a semantic and correct meaning that does not affect the model's performance. Finally, besides the watermark pattern and the watermarks, we also obtain the gram collection G_p and the set D_k of expected generations for the keys.

¹ https://spacy.io
4.2 WATERMARK EMBEDDING AND VERIFICATION

Training Watermark Corpus Generation. In the training stage, we do not train the model directly on the watermarks D_w generated by SCP, because doing so leads to poor invisibility and robustness. Instead, we apply several operations to the watermarks to generate a watermark corpus with better properties. For invisibility and robustness, we insert the watermarks into G_p to obtain complete sentences for the training watermark corpus, which helps relate the watermark information to normal textual information. As a result, the new watermark sentences behave normally while carrying the watermark feature. Meanwhile, this can also be viewed as adding noise to the watermarks, which strengthens their robustness.

Watermark Embedding. To embed the watermark into the clean NLG model M, we train M on the training watermark corpus together with part of the clean corpus to stabilize the training process. We do not train the model from scratch, because that yields nearly the same performance but is far more time-consuming. Besides, we add a key training corpus, obtained by replacing the watermarks in the watermark corpus with the source text and the expected generation of the key phrase from D_k. The reason is that the prediction of key phrases in normal text may change once the model's attention shifts for key phrases in watermarks, so we need to re-establish the relationship between the key phrase and its expected prediction in normal text. In Section 5, we give the corresponding metric to evaluate the key phrase's prediction in normal text. After the embedding phase, we obtain the watermarked model M̃.

Watermark Verification. Algorithm 2 shows the watermark verification process for a suspicious model M̂. A watermark in D_w is correctly verified only when the prediction and the reference share the same watermark on the testing watermark corpus. To avoid the effect of randomness, we evaluate each watermark over multiple testing rounds tr. If a testing watermark sentence's prediction is correct in at least a fraction r of the testing rounds, that watermark is considered successfully verified. From the whole verification process, we obtain the watermark embedding success rate (WESR); if the WESR exceeds the watermark verification threshold τ, the suspicious model is deemed to contain the watermarks.
5 EXPERIMENTS

5.1 EXPERIMENT SETUP

Datasets and Models. In the translation task, we use fairseq (Ott et al., 2019) to evaluate model and watermark performance, and train a base model with fairseq scripts for 50 epochs. In the dialog generation task, we also use fairseq and train a model on the OpenSubtitles2012 dataset (Tiedemann, 2012) for 50 epochs. (More configurations can be found in the Supplementary.)

Evaluation Metrics. The metrics for evaluating performance are as follows. (1) BLEU: the BLEU score (Papineni et al., 2002) is widely applied in translation tasks to evaluate NLG model performance by measuring the similarity between reference and generated sentences; we use the SacreBLEU score² to compare translation quality between the base model and the watermarked model. (2) Watermarking Rate (WA): the WA is the size of the training watermark corpus as a fraction of the clean training dataset during watermark embedding. (3) Watermark Embedding Success Rate (WESR): Algorithm 2 describes the WESR in detail; it represents the probability that the watermarks are successfully embedded into the NLG model. (4) Key Phrase Maintaining Rate (KPMR): the KPMR is the rate at which the prediction of a key phrase in normal text matches the expected generation; we use the KPMR to evaluate how the watermarks affect the key phrases.

² https://github.com/mjpost/sacrebleu

Watermarks Generation. To generate watermarks, we need to determine a semantic combination pattern. We analyze the syntactic features of the whole corpus and select one of the most frequent patterns as the semantic combination pattern. The watermark patterns we chose are 'DET-ADJ-NOUN' and 'PRON-VERB-PRON-VERB-PUNCT', with watermark pattern lengths of 3 and 5, respectively. Some watermark samples generated from these patterns are listed in Table 1.

5.2 FUNCTIONALITY

To embed the watermarks into the clean NLG model, we fine-tune it for another 20 epochs with the same configuration used to train the NLG model, but reset the learning rate to 1e-6. The functionality evaluation of the watermark method SCW is reported in Table 2. The WA and WESR values show that SCW can be successfully embedded into the clean NLG model with a low watermarking rate. For functionality, we mainly focus on the change in the BLEU score: its variation is 0.94% in the translation task and 2.13% in the dialog generation task, so we can say that the watermarked model's performance is essentially unaffected by embedding the watermark. Besides, to avoid changing a single key phrase's prediction, we use the KPMR score to assess this effect; including D_k in the watermark embedding stage effectively prevents such changes.

5.3 ROBUSTNESS

5.3.1 FINE-TUNING

In the fine-tuning experiment, we use 20% of the clean training data to fine-tune the watermarked model for 10 epochs. The results are shown in Table 2. The watermark is robust to fine-tuning: in the translation task the BLEU score increases from 26.34 to 26.44 while the WESR keeps its original value, and in dialog generation the influence of fine-tuning is also extremely small. The reason is that the watermark's characteristic representation is similar to the corpus text data, so attempting to remove the watermark with the original data degrades it only slowly.

5.3.2 TRANSFER LEARNING

In the translation task, we choose the parallel en-de corpora IWSLT14 and Multi30k to fine-tune the watermarked model. The IWSLT dataset contains 153,000 training, 7,283 validation, and 6,750 testing sentence pairs; the Multi30k dataset contains 29,000 training, 1,014 validation, and 1,000 testing sentence pairs. In the dialog generation task, we use part of the OpenSubtitles dataset as a parallel corpus, with 500,000 training, 3,000 validation, and 1,000 testing sentence pairs. The transfer learning results are listed in Table 3. In the transfer learning process, we use the same word dictionary generated from the clean training data to preprocess the parallel corpus, which causes some words missing from the dictionary to be labeled 'unk'; this also shows that the semantic and syntactic differences between the corpora are large. We then fine-tune the watermarked model for 10 epochs on the processed parallel corpus. The WESR decreases noticeably under transfer learning compared with plain fine-tuning; however, given the differences between the corpora, this degradation is within an acceptable range.

5.4 UNDETECTABILITY

Watermark undetectability requires that the watermark is not detectable, i.e., that it is perceptually indistinguishable. Since there is no dedicated watermark detection algorithm in NLP, we reproduce two backdoor detection algorithms to test whether a model contains the watermark. The first is ONION (Qi et al., 2020), whose main idea is to compute source-sentence perplexity with GPT-2 (Radford et al., 2019). The second is proposed by Fan et al. (2021), who compute the edit distance and BERTScore (Zhang et al., 2019) after removing each constituent token of the generated text. In the actual detectability calculation, we do not use the detection thresholds provided by these methods; instead, the length of the watermark pattern is used as the detection threshold. First, we compute the difference between the original sequence and the sequence with the token at the corresponding position removed, measured by perplexity, edit distance, and BERTScore. Then we compute the detection rate by checking whether the tokens with the largest differences in a sentence fall inside the watermark. The results of this calculation are given in Table 4. The detection rate clearly depends on the length of the watermarked sentence: the detection rate in machine translation is lower than in dialog generation because the average sentence length is longer in translation, and longer translations hide the watermark feature. Still, none of the values in the table reaches a level at which the watermark can be reliably detected.
6 CONCLUSION

In this paper, we propose SCW, a black-box watermarking method for transformer-based NLG models. It embeds the watermark into models with above 90% success at a watermarking rate of 0.01. Under various model disturbances, the watermark keeps its verifiability, which confirms its robustness. We reproduce detection algorithms based on three metrics (perplexity, edit distance, and BERTScore) to search for the watermark pattern in query text; only 10% to 30% of the watermarks are detected, which supports the watermark's invisibility. At the same time, the effect of SCW on the original task cannot be entirely ignored, and its robustness is not as strong as we would like. These shortcomings will be explored in future work.
1. What is the focus and contribution of the paper on natural language generation watermarking?
2. What are the strengths of the proposed approach, particularly its novelty and advantages compared to existing methods?
3. What are the weaknesses or concerns regarding the paper's content, such as the lack of comparing baselines or experimental results supporting the claimed weaknesses of backdoor-based watermarking schemes?
4. Are there any questions or issues with the paper's writing quality, such as the use of abbreviations without clear definitions or typos in tables?
Summary Of The Paper Review
Summary Of The Paper This work investigates the watermarking of natural language generation models. The authors claim that existing watermarking schemes, such as backdoor-based methods, are harmful to the applications and that the embedded watermarks are easily detectable. To bridge the gap, this work proposes a semantic and robust watermarking scheme for NLG models, using pair-matched phrases as watermarks. Experimental results indicate that the proposed watermarking scheme is effective and the generated watermarks are undetectable, while robust to several kinds of watermark attack methods. Review Advantages. The proposed watermark framework is novel and smart. The Semantic Combination Pattern guarantees that the watermarks are in-distribution data, which is a distinct difference from backdoor-based watermark solutions. In addition, all three steps (WmGen, Mark, Verify) are carefully designed to make sure that the watermarking satisfies the four proposed requirements. Experimental results indicate that the watermarks are effective and undetectable, while robust to watermark attack methods such as fine-tuning and transfer learning. I have the following concerns. There are no comparison baselines; it is thus difficult to evaluate the relative effectiveness of the proposed model. In particular, in Section 3, the authors claim that there are several weaknesses of backdoor-based watermarking schemes. However, there are no experimental results to support these claims. These mentioned weaknesses might hold true for computer vision applications; it is however uncertain whether they are also valid for the NLG scenario. The authors mention that some types of word tags and the configurations for the datasets and models can be found in the Supplementary. However, no Supplementary/Appendix can be found. The writing needs to be improved. For example, in the last paragraph of the Introduction section, two abbreviations (SCW, SCP) are mentioned without giving their detailed definitions and full names. The readers are confused about their meanings until they read the definitions in Section 4. In Table 2, 'Fine-tuing' should be 'Fine-tuning'.
ICLR
Title
A New Paradigm for Federated Structure Non-IID Subgraph Learning

Abstract
Federated graph learning (FGL), a distributed training framework for graph neural networks (GNNs), has attracted much attention for breaking centralized machine learning assumptions. Despite its effectiveness, differences in data collection perspectives and quality lead to heterogeneity challenges, especially when a domain-specific graph is partitioned into subgraphs held by different institutions. However, existing FGL methods implement graph data augmentation or personalization under a community split, which follows the cluster homogeneity assumption. We investigate these issues and suggest that subgraph heterogeneity is essentially a matter of structural variation. From observations on FGL, we first define the structure non-independent identical distribution (Non-IID) problem, which presents unique challenges across client-wise subgraphs. Meanwhile, we propose a new paradigm for general federated data settings called Adaptive Federated Graph Learning (AdaFGL). The motivation behind it is to implement adaptive propagation mechanisms based on federated global knowledge and non-parametric label propagation. We conduct extensive experiments with community split and structure Non-IID settings, and our approach achieves state-of-the-art performance on five benchmark datasets.

1 INTRODUCTION

The graph, as a relational data structure, is widely used to model real-world entity relations in citation networks (Yang et al., 2016a), recommender systems (Wu et al., 2022), drug discovery (Gaudelet et al., 2021), particle physics (Shlomi et al., 2021), etc. However, due to collection agents and privacy concerns, a global domain-specific graph generally consists of many subgraphs collected by multiple institutions. To analyze its local subgraph, each client maintains a powerful graph mining model such as a graph neural network (GNN); GNNs have achieved state-of-the-art performance in many graph learning tasks (Zhang et al., 2022b; Hu et al., 2021; Zhang & Chen, 2018). Despite their effectiveness, limited local data yields sub-optimal performance in most cases. Motivated by the success of federated learning (FL), a natural idea is to combine GNNs with FL to utilize the distributed subgraphs. Recently, federated graph learning (FGL) (He et al., 2021; Wang et al., 2022b) has been proposed to achieve collaborative training without directly sharing data, yet an essential concern is the heterogeneity of the distributed subgraphs. Notably, graph heterogeneity differs from the label or feature heterogeneity found in computer vision or natural language processing; we suggest that it depends on the graph structure. However, existing FGL methods simulate federated subgraph distributions through community split, which follows the cluster homogeneity assumption, as shown in Fig. 1(a). Specifically, community split keeps the subgraph structure consistent with the original graph, e.g., connected nodes are more likely to have the same labels. This is overly idealized and hard to satisfy in reality, hence we consider a more reasonable setting, shown in Fig. 1(c). We refer to the above problem as structure non-independent identical distribution (Non-IID). The motivation is that graph structure is directly related to node labels and feature distributions. Meanwhile, the challenges of structure heterogeneity are ubiquitous in the real world (Zheng et al., 2022b).
For instance, in citation networks, we can consider research teams focused on computer science and interdisciplinary fields (e.g., AI in Science; Shlomi et al., 2021; Gaudelet et al., 2021) as clients. In online transaction networks, fraudsters are more likely to build connections with customers than with other fraudsters (Pandit et al., 2007); we can consider different regions as clients and detect financial fraudsters by analyzing online transaction subgraphs. Specifically, graph structure can be divided into two types: homogeneity means that connected nodes are more likely to have the same label and similar feature distributions, and heterogeneity is the opposite. To illustrate this intuitively, we show a 3-client partitioning result on Cora in Table 1 and Table 2, where Homo denotes the homogeneity degree of the local subgraph, computed with a popular metric (Pei et al., 2020). Clearly, compared to the community split, which follows the cluster homogeneity assumption and a uniform distribution principle, structure Non-IID brings challenges to existing FGL methods.

Table 1: Community split on Cora.
Community   #Nodes   #Edges   Homo
Client1     903      1696     0.85
Client2     903      1575     0.78
Client3     902      1592     0.87

Table 2: Structure Non-IID on Cora.
Non-IID     #Nodes   #Edges   Homo
Client1     1095     1473     0.43
Client2     946      1400     0.87
Client3     667      1212     0.31

Based on this, we investigate the above issues through the empirical analysis shown in Fig. 2. We observe that when the original graph satisfies the homogeneity assumption, the label distributions are Non-IID; the opposite holds when the original graph is heterogeneous. This is because the nodes partitioned into the same client form communities and follow the uniform distribution principle. In addition, the local accuracy indicates that the subgraph structure plays a more important role in FGL than the label distributions, which also supports our motivation. In terms of model performance, we observe that GGCN mitigates the structure Non-IID problem, and FedSage+ trains NeighGen to implement local subgraph augmentation by sharing node embeddings. However, while accounting for heterogeneity, these methods fail to achieve results competitive with SGC on homogeneous subgraphs.

To efficiently analyze distributed subgraphs exhibiting both homogeneity and heterogeneity, we propose a simple pipeline called Adaptive Federated Graph Learning (AdaFGL) for more general federated data settings, which consists of three main parts. Specifically, it starts by analyzing the subgraph structure through non-parametric label propagation and selecting the appropriate base model: (i) the federated global knowledge extractor (e.g., an MLP, a powerful GNN, or any reasonable embedding model), which does not rely on any learning over the subgraph. The base predictor is then trained on the global data, which can be done offline or in parallel with local training, benefiting from the flexibility of our approach. Finally, each local client applies one of two adaptive propagation mechanisms based on its local subgraph: (ii) the homogeneity propagation module or (iii) the heterogeneity propagation module. Notably, with non-parametric label propagation, the above process is adaptive. To summarize, the contributions of this paper are as follows: (1) To the best of our knowledge, we are the first to analyze the structure Non-IID problem in FGL, which is a more general federated data setting and brings new challenges.
(2) We propose AdaFGL, a new paradigm for structure Non-IID subgraph learning, which demonstrates its flexibility in FGL with impressive performance. (3) Extensive experiments demonstrate the effectiveness of AdaFGL: our approach achieves state-of-the-art performance in both data settings, with gains of 4.67% and 2.65% over the best baseline accuracy under structure Non-IID and community split, respectively.

2 PRELIMINARIES

In this section, we first introduce the semi-supervised node classification task and then review prior diverse GNNs and very recent FGL methods. Consider a graph G = (V, E) with |V| = n nodes and |E| = m edges. The adjacency matrix (including self-loops) is denoted as Â ∈ R^{n×n}, and the feature matrix as X = {x_1, x_2, ..., x_n}, in which x_v ∈ R^f is the feature vector of node v and f is the dimension of the node attributes. Besides, Y = {y_1, y_2, ..., y_n} is the label matrix, where y_v ∈ R^{|Y|} is a one-hot vector and |Y| is the number of node classes. The semi-supervised node classification task is based on the topology of the labeled set V_L and the unlabeled set V_U; the nodes in V_U are predicted by a model supervised with V_L.

GNNs. For GCN (Kipf & Welling, 2017), the most popular GNN, the forward propagation of the l-th layer is formulated as

X^{(l)} = σ(Ã X^{(l−1)} W^{(l)}), Ã = D̂^{r−1} Â D̂^{−r},    (1)

where D̂ is the degree matrix of Â, r ∈ [0, 1] denotes the convolution kernel coefficient, W is the trainable weight matrix, and σ(·) is the non-linear activation function. In GCN, we set r = 1/2, and D̂^{−1/2} Â D̂^{−1/2} is called the symmetric normalized adjacency matrix. Despite their effectiveness, such models have limitations on real-world graphs, which exhibit complex heterogeneous relationship patterns. Some recent studies (Liu et al., 2021; Chien et al., 2021; He et al., 2022; Wang et al., 2022a; Yang et al., 2022) address this through higher-order neighborhood discovery or message combination strategies, improving the GNN process via

m_v^{(l)} = Aggregate^{(l)}({h_u^* | u ∈ N^*(v)}), h_v^{(l)} = Update^{(l)}(h_v^*, m_v^*),    (2)

where h_u^* denotes the information of the multi-hop neighbors N^*(v), m_v^* represents the higher-order messages of node v from the previous layers, and Aggregate(·) and Update(·) denote the message aggregation and update functions, respectively. However, these methods suffer from high computational complexity and fail to achieve competitive performance on homogeneous graphs. A minimal sketch of the propagation rule in Eq. 1 is given below.
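The following is a minimal PyTorch sketch of the symmetric normalized propagation in Eq. 1 (r = 1/2), using a dense adjacency matrix for clarity; the `gcn_layer` name is illustrative.

```python
# Minimal sketch of one GCN layer (Eq. 1 with r = 1/2); dense adjacency
# for clarity, `gcn_layer` is an illustrative name.
import torch

def gcn_layer(A, X, W):
    """X' = sigma(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    deg = A_hat.sum(dim=1)                    # node degrees (>= 1)
    d_inv_sqrt = deg.pow(-0.5)
    A_tilde = A_hat * d_inv_sqrt.unsqueeze(1) * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_tilde @ X @ W)        # sigma = ReLU here

A = torch.tensor([[0., 1.], [1., 0.]])        # toy 2-node graph
X = torch.randn(2, 4)
W = torch.randn(4, 8)
H = gcn_layer(A, X, W)
```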
FGL has received growing attention for breaking centralized graph machine learning assumptions. FedGraphNN (He et al., 2021) and FS-G (Wang et al., 2022b) propose general FGL packages covering a wide range of graph learning tasks. GCFL (Xie et al., 2021) and FED-PUB (Baek et al., 2022) investigate personalization techniques at the graph and node levels, respectively. Furthermore, some recent studies improve performance via local subgraph augmentation, including FedGNN (Wu et al., 2021), FedGL (Chen et al., 2021), and FedSage (Zhang et al., 2021). Inspired by FS-G (Wang et al., 2022b), we can view the collaborative training process in FGL as modules: we model the information uploaded by the clients, such as gradients and node embeddings, as messages, and we treat the server's processing and broadcasting of results as various message-handling mechanisms. Here we illustrate GNNs combined with collaborative training. Its generic form with N clients is

FGL-Clients (Local Update) → min (1/N) Σ_{i=1}^N E_{(A_i,X_i,Y_i)∼D_i} [L_ce(f_{θ_i}(A_i, X_i), Y_i)],
L(f_{θ_i}(A_i, X_i), Y_i) = − Σ_{v∈V_L} Σ_j [ Y_{vj} log(Softmax(Ỹ_{vj})) + (1 − Y_{vj}) log(1 − Softmax(Ỹ_{vj})) ],    (3)

where f_{θ_i} and L_ce are the i-th local GNN with parameters θ_i and the cross-entropy loss, respectively; the loss can be replaced by any other function appropriate to the task. (A_i, X_i, Y_i) ∼ D_i represents the local subgraph sampled from the distribution D_i. FedAvg (McMahan et al., 2017) is an efficient FL algorithm, defined as

FGL-Server (Aggregate) → ∀i, W_i^{t+1} ← W_i^t − η g_i, W^{t+1} ← Σ_{i=1}^N (n_i/n) W_i^{t+1},    (4)

where t is the round number of the FL process, W denotes the model weights, η is the learning rate, g_i is the gradient computed from Eq. 3, and n_i and n denote the i-th client's data size and the global data size, respectively.

3 ADAFGL PIPELINE

The basic idea of AdaFGL is to perform adaptive propagation based on federated global knowledge and non-parametric label propagation. The pipeline has three main parts, as shown in Fig. 3, which combine the global knowledge embeddings and local structural properties. This decoupled process exploits the computational capacity of the local system while minimizing communication costs and the risk of privacy leakage. AdaFGL can benefit from advances in both FL and GNNs through the base predictor and the adaptive propagation. Notably, the base predictor obtained by federated training and the personalized propagation are viewed as two decoupled modules executed sequentially; both complete their training without sharing local private data.

3.1 FEDERATED GLOBAL KNOWLEDGE EXTRACTOR

In FGL, limited data yields sub-optimal performance in most cases. Therefore, AdaFGL starts by performing non-parametric label propagation for the adaptive process; note that this step does not rely on any learning over the subgraph. Specifically, labeled nodes are initialized as y_v^0 = y_v, ∀v ∈ V_L, and unlabeled nodes as y_u^0 = (1/|Y|, ..., 1/|Y|), ∀u ∈ V_U. The k-step non-parametric label propagation is

y_u^k = graph-aggregator({y_v^{k−1} | v ∈ N_u}) = α y_u^0 + (1 − α) Σ_{v∈N_u} (1/√(d̃_v d̃_u)) y_v^{k−1}.    (5)

We follow the approximate computation of the personalized PageRank matrix (Klicpera et al., 2019), where N_u denotes the one-hop neighbors of u, and we set α = 0.5 by default. We then compute a homogeneity confidence score (HCS) from the number of correct predictions, with a default boolean-mask ratio of 0.5. Finally, we set a threshold λ for the adaptive binary selection between the homogeneity propagation module and the heterogeneity propagation module on each client; in experiments, we set λ = 0.6 by default. A minimal sketch of this label-propagation step is shown below.
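Here is a minimal sketch of the non-parametric label propagation in Eq. 5, assuming dense tensors and one-hot labels; the names are illustrative, and the HCS computation is reduced to a simple accuracy check on a masked subset of labeled nodes.

```python
# Minimal sketch of k-step label propagation (Eq. 5); dense tensors,
# illustrative names. `labels` is one-hot for labeled nodes.
import torch

def label_propagation(A, labels, labeled_mask, k=10, alpha=0.5):
    n, c = labels.shape
    y0 = torch.full((n, c), 1.0 / c)               # uniform for unlabeled
    y0[labeled_mask] = labels[labeled_mask]
    deg = A.sum(dim=1).clamp(min=1)                # avoid division by zero
    d_inv_sqrt = deg.pow(-0.5)
    A_norm = A * d_inv_sqrt.unsqueeze(1) * d_inv_sqrt.unsqueeze(0)
    y = y0
    for _ in range(k):
        y = alpha * y0 + (1 - alpha) * A_norm @ y  # Eq. 5
    return y

def hcs(y_prop, labels, held_out_mask):
    """Homogeneity confidence score: accuracy of propagated labels
    on a masked subset of labeled nodes (simplified)."""
    pred = y_prop[held_out_mask].argmax(dim=1)
    true = labels[held_out_mask].argmax(dim=1)
    return (pred == true).float().mean().item()
```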
3 ADAFGL PIPELINE

The basic idea of AdaFGL is to perform adaptive propagation mechanisms based on federated global knowledge and non-parametric label propagation. The pipeline consists of three main parts, as shown in Fig. 3, which combine the global knowledge embeddings and the local structure properties. This decoupled process utilizes the computational capacity of the local system while minimizing communication costs and the risk of privacy leakage. AdaFGL can benefit from the evolution of FL and GNNs through the base predictor and the adaptive propagation. Notably, the base predictor obtained by federated training and the personalized propagation are viewed as two decoupled modules that are executed sequentially. Meanwhile, both of them accomplish training without sharing local private data.

3.1 FEDERATED GLOBAL KNOWLEDGE EXTRACTOR

In FGL, limited data yields sub-optimal performance in most cases. Therefore, AdaFGL starts by performing non-parametric label propagation to drive the adaptive process; note that this step does not rely on any learning over the subgraph. Specifically, the labeled nodes are initialized as y_v^0 = y_v, ∀v ∈ V_L, and the unlabeled nodes are initialized as y_u^0 = (1/|Y|, ..., 1/|Y|), ∀u ∈ V_U. Then, the non-parametric label propagation at step k is expressed as

y_u^k = graph-aggregator({y_v^{k−1} | v ∈ N_u}) = α y_u^0 + (1 − α) Σ_{v ∈ N_u} (1 / √(d̃_v d̃_u)) y_v^{k−1}.  (5)

We follow the approximate calculation of the personalized PageRank matrix Klicpera et al. (2019), where N_u represents the one-hop neighbors of u, and we set α = 0.5 by default. Then, we design the homogeneity confidence score (HCS), computed from the number of correct predictions, with a default boolean mask ratio of 0.5. Finally, we set a threshold λ for the adaptive binary selection between the homogeneity propagation module and the heterogeneity propagation module in each client; in experiments, we set λ = 0.6 by default. To demonstrate that AdaFGL is a simple yet effective framework, we choose simple models (e.g., MLP or SGC) and FedAvg for federated training. Due to the flexibility of AdaFGL, they can be replaced by any other powerful GNNs and federated methods. From the perspective of FL on Non-IID data, we choose MLP as the default base predictor, which is independent of the graph structure. Quoting the convergence theorem of Li et al. (2020) for T rounds and E epochs, the error bound ε_fed of the federated global knowledge extractor is

ε_fed ≤ (2L / (μ²(γ + T − 1))) ( Σ_{i=1}^{N} (n_i/n) φ_i² + 6Lϕ + 8(E − 1)²ω² + (γ/4) ||W^1 − W^⋆||² ),  (6)

which assumes that the mapping function is L-smooth and μ-strongly convex, where φ_i and ϕ represent the local stochastic gradient and the degree of model heterogeneity, respectively, γ = max{8L/μ, E}, ω denotes the divergence of the local models, and W^⋆ represents the global optimum. We observe that the base predictor error bound is mainly determined by the differences in node feature distributions, and model performance would be further hurt if the graph structure were considered. Therefore, we are motivated to propose adaptive propagation mechanisms. Specifically, we implement the binary selection of the homogeneity propagation module or the heterogeneity propagation module in each client by comparing the HCS value with the threshold λ. We describe the technical details of the personalized propagation strategies below.
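As an illustration of Eq. 5 and the module selection, the sketch below propagates labels for K steps and estimates the HCS on a held-out half of the labeled nodes; the exact masking and scoring protocol of the paper may differ, so treat this as one plausible reading under those assumptions.

import torch

def label_propagation(A_norm, Y, seen_mask, K=10, alpha=0.5):
    # Eq. (5): personalized-PageRank-style propagation of (soft) labels.
    n, c = Y.shape
    y0 = torch.full((n, c), 1.0 / c)       # uniform prior on unlabeled nodes
    y0[seen_mask] = Y[seen_mask].float()   # one-hot labels on visible nodes
    y = y0.clone()
    for _ in range(K):
        y = alpha * y0 + (1 - alpha) * (A_norm @ y)
    return y

def select_module(A_norm, Y, labeled_idx, lam=0.6, mask_ratio=0.5, seed=0):
    # HCS: hold out half of the labels, propagate the rest, score the held-out half.
    g = torch.Generator().manual_seed(seed)
    perm = labeled_idx[torch.randperm(len(labeled_idx), generator=g)]
    cut = int(mask_ratio * len(perm))
    held, seen = perm[:cut], perm[cut:]
    seen_mask = torch.zeros(Y.shape[0], dtype=torch.bool)
    seen_mask[seen] = True
    y_prop = label_propagation(A_norm, Y, seen_mask)
    hcs = (y_prop[held].argmax(1) == Y[held].argmax(1)).float().mean().item()
    return "homogeneity" if hcs >= lam else "heterogeneity"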
3.2 ADAPTIVE HOMOGENEITY PROPAGATION

After the extractor stage, we use the base predictor to embed the local subgraph nodes into the global knowledge space X_global and improve accuracy with the local homogeneous structure. The motivation is that feature propagation satisfying homogeneity has a significant positive impact on prediction performance, which has also been confirmed in many recent works Zhang et al. (2022a); Wang & Leskovec (2020). Hence we expect to utilize local smoothing features to correct the predictions. We first define the homogeneous feature propagation

X_smooth^{(k)} = graph-operator(A)^{(k)} X^{(0)}, ∀k = 1, ..., K,
H_homo = message-updater(X_smooth^{(K)}) = f_θ(X_smooth^{(K)}),  (7)

where graph-operator(·) represents the graph operator used in feature propagation (by default, the symmetric normalized adjacency of Eq. 1), X_smooth^{(K)} represents the local smoothing features after K propagation steps, message-updater(·) denotes the model training process, and f_θ denotes a linear regression or an MLP with parameters θ. In order to reconcile the global embeddings with the local information, we use a local message update mechanism and online distillation to achieve an effective combination of the local smooth structure prior and the global embeddings, which can be written as

H_local = W_local X_global,  L_kd = ||H_homo − H_local||_F.  (8)

Based on this, the local smoothing information and the global embeddings achieve mutual supervision and end-to-end training via gradient updates. This exploits the local structure information to reduce the error bound. Notably, the above adaptive process is accomplished entirely in the local client, with no additional communication costs or privacy concerns.
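A minimal sketch of Eqs. 7-8 follows, assuming a single linear message-updater f_θ and dense matrices; the hidden dimensions and module names are hypothetical, and the distillation loss would be weighted against the task loss during training.

import torch

class HomogeneityPropagation(torch.nn.Module):
    # Sketch of Eqs. (7)-(8): K-step feature smoothing plus online distillation.
    def __init__(self, feat_dim, emb_dim, K=3):
        super().__init__()
        self.K = K
        self.f_theta = torch.nn.Linear(feat_dim, emb_dim)   # message-updater
        self.W_local = torch.nn.Linear(emb_dim, emb_dim)    # maps global embeddings

    def forward(self, A_norm, X, X_global):
        X_smooth = X
        for _ in range(self.K):           # Eq. (7): apply the graph operator K times
            X_smooth = A_norm @ X_smooth
        H_homo = self.f_theta(X_smooth)
        H_local = self.W_local(X_global)  # Eq. (8): H_local = W_local X_global
        loss_kd = torch.norm(H_homo - H_local, p="fro")
        return H_homo, loss_kd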
3.3 ADAPTIVE HETEROGENEITY PROPAGATION

In contrast, in order to break the heterogeneous structure limitations, we optimize the message-passing framework with the embeddings X_global to detect heterogeneous subgraph patterns. Specifically, we propose an adaptive propagation mechanism that discovers the global dependency of the current node and models the positive or negative impact of the messages. Intuitively, we first optimize the propagation probability matrix and align the local structure through the global embeddings:

A_prop^{(0)} = X_global X_global^T,  X_align = graph-operator(Â_prop^{(0)})^{(k)} X^{(0)}.  (9)

Since the original propagation probability matrix introduces high error, we improve it by scaling the aggregated messages and making it trainable. Formally, let p_ij ∈ A_prop denote the entry in the i-th row and j-th column of A_prop. We define the scaling operator d_ij = dis(P_ii, P_ij) for j ≠ i, where dis(·) is a distance function, or any function positively correlated with the difference, which can be implemented as the identity distance. The corrected propagation matrix is then expressed as

Â_prop^{(l)} = A_prop^{(l)} / (d d^T) − diag(A_prop^{(l)}).  (10)

Its purpose is to measure the global dependency of the current node through probability differences. We then further model the positive and negative impacts of the messages to implement effective aggregation, formally represented as

H^{(l)} = W H^{(l−1)},
A_prop^{(l)} = Â_prop^{(l−1)} + β (H^{(l)} (H^{(l)})^T),
H_pos^{(l)} = PoSign(Â_prop^{(l)}) H^{(l)},  H_neg^{(l)} = NeSign(Â_prop^{(l)}) H^{(l)},
H^{(l+1)} = H^{(l)} + H_pos^{(l)} + H_neg^{(l)},  (11)

where H^{(0)} = X_align, and PoSign(·) and NeSign(·) produce the trainable adaptive propagation probabilities; they can be replaced by any reasonable nonlinear activation function. Here we analyze the error bound of the above adaptive heterogeneity propagation mechanism. The proof of the following theorem and the corresponding assumptions are given in Appendix A.1.

Theorem 3.1 Suppose that the latent ground-truth mapping Φ : x → y from node features to node labels is differentiable and satisfies an L-Lipschitz constraint. Then the approximation error satisfies

| Σ_{j≠i} P⋆_ij Φ(H^{(l)}) − ( H_i^{(l)} + Σ_{j≠i} (Pos_ij^{(l)} + Neg_ij^{(l)}) H_j^{(l)} ) |
≤ L ||ε_i||_2 + Σ_{j≠i} P⋆_ij O( ||H_j^{(l)} − H_i^{(l)}||_2 ) + ||H⋆ − ϕ(κ + P) H^{(l)}||_2,

where ⋆ denotes the global optimum, ε_i denotes the immediate-neighbor error, O(·) denotes a higher-order infinitesimal, and ϕ and κ represent the model and propagation matrix differences, respectively.

The core of the above propagation mechanisms is to generate embeddings based on the other nodes in the embedding space. In other words, any node representation can be mapped to a linear combination of existing node representations, which has been applied in many studies Zheng et al. (2022a); Yang et al. (2022). However, most of those methods use ranking mechanisms for representation and fail to model the propagation process, which is limiting.
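The sketch below is one plausible instantiation of Eqs. 9-11; in particular, implementing PoSign/NeSign as the positive and negative parts of the corrected matrix and using the per-entry scaling d_ij = |P_ii − P_ij| are our assumptions, since the paper only requires a distance function and a reasonable nonlinearity.

import torch

class HeterogeneityPropagation(torch.nn.Module):
    # Sketch of Eqs. (9)-(11): trainable propagation matrix with signed messages.
    def __init__(self, dim, beta=0.1):
        super().__init__()
        self.W = torch.nn.Linear(dim, dim, bias=False)
        self.beta = beta

    @staticmethod
    def correct(A_prop):
        # Eq. (10) with identity distance: d_ij = |P_ii - P_ij|, diagonal removed
        d = (A_prop.diag().unsqueeze(1) - A_prop).abs() + 1e-8
        A_hat = A_prop / d
        A_hat = A_hat - torch.diag_embed(A_hat.diag())
        return A_hat

    def forward(self, A_prop, H):
        H_new = self.W(H)                                # H^{(l)} = W H^{(l-1)}
        A_prop = A_prop + self.beta * (H_new @ H_new.T)  # update propagation matrix
        A_hat = self.correct(A_prop)
        H_pos = torch.relu(A_hat) @ H_new                # positive-impact messages
        H_neg = (-torch.relu(-A_hat)) @ H_new            # negative-impact messages
        return H_new + H_pos + H_neg, A_prop             # Eq. (11)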
4 EXPERIMENTS

In this section, we conduct an experimental analysis on five benchmark datasets under the community split and structure Non-IID settings to validate the effectiveness of AdaFGL. We aim to answer the following five questions. Q1: Compared with other state-of-the-art FGL baselines, can AdaFGL achieve better predictive accuracy in the community split setting? Q2: How does structure Non-IID influence existing methods, and can AdaFGL improve on it? Q3: Do knowledge distillation and message detection work within the adaptive propagation mechanisms? Q4: Why can AdaFGL achieve desirable predictions by utilizing the interactions between decoupled modules? Q5: Compared with existing FGL methods and heterogeneous GNNs, what are the advantages of AdaFGL?

4.1 EXPERIMENTAL SETUP AND BASELINES

Existing FGL methods implement data partitioning by community split Wang et al. (2022b). We follow it while proposing a more convincing strategy, structure Non-IID. Due to space limitations, the implementation details of structure Non-IID can be found in Appendix A.3 and Appendix A.6. To demonstrate the effectiveness of AdaFGL, we combine powerful GNNs with FedAvg as representative methods. Meanwhile, we compare against recently proposed FGL methods such as FedGL, FedSage+, and GCFL+. FedSGC efficiently exploits the local structure prior by performing feature propagation. FedNLGNN implements node embeddings by MLP or GCN to discover potential neighbors. FedGGCN further exploits the relationship between over-smoothing and heterogeneity to achieve weighted propagation. FedGL aims to optimize local model performance using global information; in our setting without overlapping nodes, it is essentially graph structure learning. FedSage+ performs local graph augmentation to improve prediction performance. GCFL+ implements a clustering process to perform personalized update mechanisms. More details about the baseline methods can be found in Appendix A.2.

4.2 OVERALL PERFORMANCE

We first present the complete results on Cora and Chameleon, two representative homogeneous and heterogeneous datasets, in Table 3 and Table 4. Due to the space limitation, details about the experimental environment and results on the other datasets can be found in Appendix A.6. Notably, since we randomly inject homogeneous or heterogeneous information during the structure Non-IID data partitioning process, model performance does not relate directly to the number of clients. Meanwhile, in the community split setting, model aggregation across multiple clients in federated learning can be regarded as ensemble learning; therefore, prediction performance improves with an increasing number of clients in some cases. To answer Q1, Table 3 shows the comparison with the baseline methods in the community split setting. On the homogeneous dataset Cora, compared with the most competitive FGL methods, AdaFGL achieves accuracy gains of 1.18%, 2.25%, and 1.64% under the respective client settings. Meanwhile, AdaFGL exceeds the best methods among all considered baselines on the heterogeneous datasets by a margin of 6.37% to 8.52%. In the community split setting, we improve prediction accuracy by utilizing the local smoothing prior and the adaptive propagation mechanisms. To answer Q2, we demonstrate the performance of existing methods facing the structure Non-IID challenge in Table 4. Although FedGGCN performs well in general, it does not obtain competitive performance. Despite FedSage+ achieving effective local graph augmentation by sharing global data, structure Non-IID remains a natural challenge, and this weakness is amplified when heterogeneity is high. In contrast, our method achieves performance gains of 1.45%, 3.77%, and 1.87% over the highest baseline accuracy. Impressively, AdaFGL improves performance by 9.82%, 13.06%, and 13.29% in the structure Non-IID setting on the heterogeneous dataset Chameleon. From these comparisons with the baselines, our method shows significant advantages, especially in terms of robustness.

4.3 ABLATION EXPERIMENTS

To answer Q3, we present the ablation results in Table 3 and Table 4, where HomoKD denotes the online distillation in the homogeneity propagation module and HeteTA denotes the trainable probability propagation matrix in the heterogeneity propagation module. We observe that the online distillation enhances homogeneity propagation by combining the local smoothing features and the global embeddings; it effectively improves model performance without additional computation costs. In essence, it achieves mutually supervised end-to-end learning of global and local information. Furthermore, the trainable probability propagation matrix optimizes the heterogeneity propagation module: it learns the globally optimal propagation mechanism and detects positive and negative messages to generate embeddings. HeteTA can discover the global dependence of the current node and achieve effective message aggregation, as supported by Theorem 3.1.

4.4 VISUALIZATION AND EXPLAINABILITY ANALYSIS

To answer Q4, we present the local prediction accuracy trends against the competitive baseline methods in Fig. 4. We notice that our method achieves the best performance in most cases under both the community split and structure Non-IID data settings, with improved overall trends. Due to space limitations, the hyperparameter sensitivity analysis of AdaFGL and the corresponding conclusions can be found in Appendix A.5. To illustrate the effectiveness of the federated global knowledge extractor and the adaptive propagation mechanisms, we also analyze explainability via the heat maps shown in Fig. 5. We perform structure Non-IID partitioning for 10 clients on PubMed, then select the clients with the highest numbers of homogeneous and heterogeneous nodes, respectively. Based on this, we randomly sample 20 nodes and obtain similarity scores by multiplying the embeddings with their transpose. From the results, we notice that the federated global knowledge extractor only obtains fuzzy results and cannot be optimized for the local subgraphs. Fortunately, we achieve an effective combination of global knowledge and the local subgraph structure prior, obtaining explicit node embeddings, which is also demonstrated by the final output in Fig. 5.

4.5 METHODS COMPARISON

To answer Q5, we review three recent FGL methods and compare our approach with them in terms of three aspects: method type, exchanged messages, and the ability to solve the structure Non-IID problem, as shown in Table 5. Although FedSage+ can achieve competitive results, it introduces significant communication costs and privacy concerns: it trains two models, which raises communication costs, and it implements cross-client information sharing to improve predictive performance, which undoubtedly increases privacy concerns. GCFL+'s limitations in model selection lead to its failure to handle the structure Non-IID problem in subgraph learning. In our experiments, FedGL is essentially a local graph structure learning process. In contrast, our approach utilizes the computational capabilities of the local system while minimizing communication costs and privacy concerns. More experimental details can be found in Appendix A.4. We then compare the effectiveness of existing GNNs and our approach in handling heterogeneous graphs, focusing on two points, Neighbor Discovery and Message Combination, as shown in Fig. 6. We observe that MLP ignores the graph structure prior, which leads to failure on heterogeneous graphs. Although FedGL and FedSage+ alleviate this problem by utilizing global information for local graph augmentation, the limitations of their propagation mean they are still not the best solutions; notably, they cannot handle the structure Non-IID problem in FGL. Although NLGNN and GGCN attempt to solve the heterogeneous structure problem, they cannot be directly applied in FGL. Motivated by these methods, we propose adaptive propagation mechanisms to improve performance, which we have validated to be effective.
5 CONCLUSION

In this paper, we discover and define the structure Non-IID problem in FGL, which poses a new challenge for existing methods. Based on this, we propose a new paradigm, AdaFGL, for more general federated data settings. Specifically, we investigate the structure Non-IID problem in FGL as a supplement to the existing community split data partitioning approach, yielding a more practical federated data setting. To implement effective FGL on heterogeneous distributed subgraphs, we propose AdaFGL, which consists of the federated global knowledge extractor and the adaptive propagation modules. It combines FL and GNNs tightly and benefits from their evolution. Extensive experiments under the community split and structure Non-IID data settings demonstrate the effectiveness of AdaFGL. We believe that the ability to fully utilize graph structure information is the key to efficient FGL, so research on graph structure in FGL is a promising direction.

A APPENDIX OUTLINE

The appendix is organized as follows:
A.1 Theoretical error bounds for the adaptive heterogeneity propagation module.
A.2 More details about the compared baselines.
A.3 Dataset descriptions and the structure Non-IID data setting.
A.4 Communication cost analysis.
A.5 Hyperparameter sensitivity analysis.
A.6 Experimental environment and additional base results.

A.1 THEORETICAL ERROR BOUNDS FOR ADAPTIVE HETEROGENEITY PROPAGATION

To demonstrate the effectiveness of the adaptive heterogeneity propagation module, we prove its error bound. We first make the following reasonable assumption and definitions.

Assumption A.1 Φ is L-smooth: ∀x_1, x_2 ∈ dom(Φ),

Φ(x_1) ≤ Φ(x_2) + (x_1 − x_2)^T ∇Φ(x_2) + (L/2) ||x_1 − x_2||_2².

We then quote the metric embedding results of Linial et al. (1995).

Definition A.1 Given two metric spaces (V, d) and (Z, d′) and a mapping function Φ : V → Z, the distortion ε_distor is defined by

∀u, v ∈ V,  (1/ε_distor) d(u, v) ≤ d′(Φ(u), Φ(v)) ≤ d(u, v).

Theorem A.1 (Bourgain theorem) Given any finite metric space (V, d) with |V| = n, there exists an embedding of (V, d) into R^k under any ℓ_p metric, where k = O(log² n), and the distortion of the embedding is O(log n).

This establishes an O(log n) distortion for embeddings of (V, d) produced by mappings satisfying the above conditions. Based on this, we consider a graph G with fixed structure represented by Ã = D̂^{−1/2} Â D̂^{−1/2}, embeddings H in the forward propagation, and a node mapping function Φ(H) satisfying Theorem A.1, which can be expressed as

Φ(H) = ( d(H, S_{1,1})/k, d(H, S_{1,2})/k, ..., d(H, S_{log n, c log n})/k ),

where d(H, S_{i,j}) = min_{u ∈ S_{i,j}} d(H, u). The sets S_{i,j} ⊂ V, i = 1, 2, ..., log n, j = 1, 2, ..., c log n, are c log² n random sets, where c is a constant; each set is chosen by including each point of V independently with probability 1/2^i.
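To make the construction above concrete, the following is a Bourgain-style embedding sketch for a finite metric space given a precomputed pairwise distance matrix; it follows the 1/2^i sampling rule of Theorem A.1 and omits the 1/k normalization for brevity, with all names being illustrative.

import math
import random

def bourgain_embedding(dist, c=2, seed=0):
    # Coordinate (i, j) of node u is d(u, S_ij) = min_{v in S_ij} d(u, v), where
    # S_ij contains each point independently with probability 1/2^i (Theorem A.1).
    rng = random.Random(seed)
    n = len(dist)
    log_n = max(1, math.ceil(math.log2(n)))
    coords = [[] for _ in range(n)]
    for i in range(1, log_n + 1):
        for _ in range(c * log_n):
            S = [v for v in range(n) if rng.random() < 1.0 / 2 ** i]
            for u in range(n):
                coords[u].append(min((dist[u][v] for v in S), default=0.0))
    return coords  # O(log^2 n) coordinates per node, distortion O(log n)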
Motivated by Xie et al. (2021) and the above conclusions, we have the following model weight difference proposition.

Proposition A.1 Assume that the differences in the propagation probability matrix, the hidden embeddings, and the labels between the global optimum f⋆_θ and the local model f_θ are bounded:

||P⋆ − P||_2² = ||E_P||_2² ≤ ε_P,  ||H⋆ − H||_2² = ||E_H||_2² ≤ ε_H,  ||Ŷ⋆ − Ŷ||_2² = ||E_Ŷ||_2² ≤ ε_Ŷ.

Based on this, given that ||H · H⋆||_2² = ||H · (H + E_H)||_2² ≥ ||H E_H||_2², let ||H E_H||_2² = δ_H; then ||H⋆^{−1} − H^{−1}||_2² = ||E_{H^{−1}}|| ≤ ε_H/δ_H. If we choose SGC Wu et al. (2019) for the forward propagation, the model weight difference under the influence of the feature difference is represented as

ϕ = ||f⋆_θ − f_θ||_2 = ||(P H⋆)^{−1} Ŷ⋆ − (P H)^{−1} Ŷ||_2²
= ||H⋆^{−1} P^{−1} (Ŷ + E_Ŷ) − H^{−1} P^{−1} Ŷ||_2²
= ||(H⋆^{−1} − H^{−1}) P^{−1} Ŷ + H⋆^{−1} P^{−1} E_Ŷ||_2²
= ||E_{H^{−1}} P^{−1} Ŷ + (P H + P E_H)^{−1} E_Ŷ||_2²
≤ (ε_H/δ_H) ||P^{−1} Ŷ||_2² + (ε_H² ε_Ŷ/δ_H) ||(P H)^{−1}||_2² + ε_H ε_Ŷ ||(P H)^{−1}||_2⁴.

Similarly, since ||P · P⋆||_2² = ||P · (P + E_P)||_2² ≥ ||P E_P||_2², let ||P E_P||_2² = δ_P, so that ||P⋆^{−1} − P^{−1}||_2² = ||E_{P^{−1}}|| ≤ ε_P/δ_P; we obtain the model weight difference under the influence of the structure difference:

ϕ = ||f⋆_θ − f_θ||_2 = ||(P⋆ H)^{−1} Ŷ⋆ − (P H)^{−1} Ŷ||_2²
= ||H^{−1} (P⋆^{−1} Ŷ⋆ − P^{−1} Ŷ)||
= ||H^{−1}||_2² ||(P^{−1} + E_{P^{−1}})(Ŷ + E_Ŷ) − P^{−1} Ŷ||_2²
= ||H^{−1}||_2² ||P^{−1} E_Ŷ + E_{P^{−1}} Ŷ + E_{P^{−1}} E_Ŷ||
≤ ||H^{−1}||_2² [ ε_Ŷ ||P^{−1}||_2² + (ε_P/δ_P) ||Ŷ||_2² + ε_P ε_Ŷ/δ_P ].

Proof A.1 Based on Eq. 11, we consider the adaptive heterogeneity propagation process

H^{(l+1)} = H^{(l)} + H_pos^{(l)} + H_neg^{(l)}
= H^{(l)} + PoSign(Â_prop^{(l)}) H^{(l)} + NeSign(Â_prop^{(l)}) H^{(l)}
= H^{(l)} + PoSign( Â_prop^{(l−1)} + β W H^{(l−1)} (W H^{(l−1)})^T ) H^{(l)} + NeSign( Â_prop^{(l−1)} + β W H^{(l−1)} (W H^{(l−1)})^T ) H^{(l)}.

Take node i as an example. Since Φ(·) is differentiable, it carries the gradient update of the model difference ϕ. Meanwhile, in order to quantify the difference between our trainable propagation probability matrix and the global optimum, we define

κ_i = Σ P⋆[i, :] − ( Â_prop^{(0)}[i, :] + Σ_l β W H^{(l)} (W H^{(l)})^T [i, :] ),

where P⋆ represents the optimal propagation probability matrix. Then, using Pos and Neg to denote the positive and negative message propagation weights of P = Â_prop^{(l)}, we have

H_i^{(l+1)} = Σ_{j≠i} P⋆_ij Φ(H^{(l)}) = H_i^{(l)} + Σ_{j≠i} (Pos_ij^{(l)} + Neg_ij^{(l)}) H_j^{(l)} = ( κ_i + P_ii + Σ_{j≠i} (Pos_ij^{(l)} + Neg_ij^{(l)}) ) ϕ H^{(l)},

where Pos_ij^{(l)} + Neg_ij^{(l)} = Â_prop^{(l)}[i, :]. We then perform a first-order Taylor expansion with Peano's form of the remainder at H_i^{(l)}, accounting for the model differences:

Σ_{j≠i} P⋆_ij Φ(H^{(l)}) = Σ_{j≠i} P⋆_ij ( Φ(H_i^{(l)}) + (∂Φ(H_i^{(l)})/∂(H^{(l)})^T)(H_j^{(l)} − H_i^{(l)}) + O(||H_j^{(l)} − H_i^{(l)}||_2) ).

Letting Σ_{j≠i} P⋆_ij (H_j^{(l)} − H_i^{(l)}) = −ε_i, we obtain

| Σ_{j≠i} P⋆_ij Φ(H^{(l)}) − ( κ_i + P_ii + Σ_{j≠i} (Pos_ij^{(l)} + Neg_ij^{(l)}) ) ϕ H^{(l)} | = | (∂Φ(H_i^{(l)})/∂(H^{(l)})^T) ε_i − Σ_{j≠i} P⋆_ij O(||H_j^{(l)} − H_i^{(l)}||_2) |.

According to the Cauchy-Schwarz inequality and the L-Lipschitz property, we have

| (∂Φ(H_i^{(l)})/∂(H^{(l)})^T) ε_i | ≤ || ∂Φ(H_i^{(l)})/∂(H^{(l)})^T || ||ε_i||_2 ≤ L ||ε_i||_2.

Therefore, the approximation of H_i^{(l)} + Σ_{j≠i} (Pos_ij^{(l)} + Neg_ij^{(l)}) H_j^{(l)} is bounded by

| Σ_{j≠i} P⋆_ij Φ(H^{(l)}) − ( H_i^{(l)} + Σ_{j≠i} (Pos_ij^{(l)} + Neg_ij^{(l)}) H_j^{(l)} ) |
≤ | (∂Φ(H_i^{(l)})/∂(H^{(l)})^T) ε_i | + | Σ_{j≠i} P⋆_ij O(||H_j^{(l)} − H_i^{(l)}||_2) | + ||H⋆ − P⋆ H^{(l)}||_2
≤ L ||ε_i||_2 + Σ_{j≠i} P⋆_ij O(||H_j^{(l)} − H_i^{(l)}||_2) + ||H⋆ − ϕ(κ + P) H^{(l)}||_2,

where H⋆ represents the globally optimal embeddings. This yields the theoretical error bound for heterogeneity propagation. From the bound, we see that, in theory, the adaptive heterogeneity propagation process can shrink the error, and thereby improve predictive performance, by minimizing the immediate-neighbor error ε_i, the model difference ϕ, and the propagation probability matrix difference κ.
A.2 COMPARED BASELINES

The main characteristics of all baselines are listed below:
FedMLP: The combination of FedAvg and MLP; we employ a two-layer MLP with a hidden dimension of 64. It generates node embeddings from the raw features while ignoring graph structure information in the forward propagation.
FedSGC: The combination of FedAvg and SGC Wu et al. (2019); by default we use a 3-step feature propagation process, which follows the homogeneity assumption and thus fails to deal with heterogeneous graphs.
FedNLGNN: An implementation of NLGNN (NLMLP or NLGCN) Liu et al. (2021) based on FedAvg; we report the more effective version. It depends on the embedding model and suffers from representational limitations.
FedGGCN: The combination of FedAvg and GGCN Yan et al. (2021); we follow the experimental setup of the original paper as closely as possible. It handles heterogeneous graphs effectively but cannot achieve competitive results on homogeneous graphs.
FedGL Chen et al. (2021): As an FGL training framework, it relies strongly on the overlapping-nodes assumption; in our data setting it is essentially local graph structure learning.
GCFL+ Xie et al. (2021): Due to the limitations of its personalization techniques in model selection, it cannot fundamentally solve the structure Non-IID challenge.
FedSage+ Zhang et al. (2021): It trains NeighGen to achieve local subgraph augmentation by sharing global missing-subgraph feature and topology information, yielding the strongest results, but it suffers from privacy leakage risks and additional computational costs.
For fairness, we follow the experimental setups of the baseline papers as closely as possible; otherwise, we report the best prediction accuracy. In addition, the number of federated rounds for the above baselines is 50, and the number of local epochs is 20.

A.3 DATASETS DESCRIPTION AND STRUCTURE NON-IID DATA SETTING

The statistics of the datasets, which cover both homogeneity and heterogeneity, are summarized in Table 7. In our experiments, we use five benchmark datasets; their details are given below. Cora, Citeseer, and PubMed Yang et al. (2016b) are three popular citation network datasets. In these three networks, papers from different topics are treated as nodes, and the edges are citations among the papers. The node attributes are binary word vectors, and the class labels are the topics the papers belong to. Chameleon and Squirrel are two web page datasets collected from Wikipedia Rozemberczki et al. (2021), where nodes are web pages on specific topics and edges are hyperlinks between them. Based on this, we describe the structure Non-IID data partitioning process in detail. Its core is the Dirichlet process He et al. (2020), whose basic analysis is as follows. The pdf of the Dirichlet distribution is defined as

p(P = {p_i} | {α_i}) = ( Γ(Σ_i α_i) / Π_i Γ(α_i) ) Π_i p_i^{α_i − 1},

where the α_i ∈ {α_1, ..., α_k} > 0 are the dimensionless distribution parameters, the scale (or concentration) is ϑ = Σ_i α_i, the base measure is (α⋆_1, ..., α⋆_k) with α⋆_i = α_i/ϑ, and Γ(n) = (n − 1)! for integer n. The Dirichlet is a distribution over multinomials, so Σ_i p_i = 1 and p_i ≥ 0, where p_i represents a probability.
The base measure determines the mean distribution and the scale affects the variance; we obtain

E(p_i) = α_i/ϑ = α⋆_i,  Var(p_i) = α_i(ϑ − α_i) / (ϑ²(ϑ + 1)) = α⋆_i(1 − α⋆_i) / (ϑ + 1),  Cov(p_i, p_j) = −α_i α_j / (ϑ²(ϑ + 1)),

which means that a Dirichlet with small scale ϑ favors extreme distributions; this prior belief is weak, however, and is easily overwritten by data. As ϑ → ∞, the covariance tends to 0 and the samples concentrate around the base measure. Based on this, we sample the edges to determine the client attribution of each pair of nodes; if a conflicting set of nodes arises, it is resampled, and the induced subgraphs are finally generated. Then, we randomly inject homogeneous or heterogeneous information based on the label prior, which avoids unrealistic structure loss and enhances structure identity. We set three probabilities p_iso, p_homo, and p_hete for each client individually, representing the probabilities of avoiding isolated nodes, adding homogeneous edges, and adding heterogeneous edges in the subgraph, respectively. Specifically, p_iso is the probability that an isolated node generates edges to other nodes, which effectively prevents the generation of isolated nodes. p_homo applies to the subgraphs of clients selected for homogeneity enhancement and represents the probability of connecting two nodes with the same label based on the label prior information. Correspondingly, p_hete is the probability used to inject structural information into the subgraphs of clients that perform heterogeneity enhancement.
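The following sketch illustrates the Dirichlet-based edge sampling at the core of the partitioning. Assigning each edge to a client from sampled multinomial probabilities is one minimal reading of the procedure; the resampling of conflicting node sets and the (p_iso, p_homo, p_hete) injection are only indicated in comments, and all names are illustrative.

import numpy as np

def structure_non_iid_partition(edges, n_clients, alpha=0.5, seed=0):
    # Sample a client for every edge from Dirichlet-distributed client probabilities;
    # a small alpha (scale) yields more extreme, i.e. more Non-IID, partitions.
    rng = np.random.default_rng(seed)
    client_probs = rng.dirichlet([alpha] * n_clients)
    assignment = rng.choice(n_clients, size=len(edges), p=client_probs)
    subgraphs = [[] for _ in range(n_clients)]
    for edge, cid in zip(edges, assignment):
        subgraphs[cid].append(edge)
    # Conflicting node sets would be resampled here, and homogeneous / heterogeneous
    # edges injected per client with probabilities (p_iso, p_homo, p_hete).
    return subgraphs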
A.4 COMMUNICATION COSTS ANALYSIS

The advantage of our approach is that it exploits the local structure prior while making full use of global information, taking the characteristics of GNNs into account. This reduces communication costs and privacy concerns during the federated training process. Meanwhile, thanks to the utilization of local structure information, we obtain models with better representational power and thus better performance. To demonstrate the effectiveness of our method, we compare AdaFGL with the two most competitive methods, FedSage and FedGGCN, in the corresponding tables for Cora and Chameleon. From the experimental results, we observe that AdaFGL maintains low communication costs while achieving satisfying results, which mainly benefits from the adaptive propagation modules' use of local structure information. In contrast, FedSage, currently the most competitive FGL approach, suffers from a dilemma between performance improvement and communication costs, which also raises more privacy concerns.

A.5 HYPERPARAMETER SENSITIVITY ANALYSIS

Here we study the hyperparameter sensitivity of AdaFGL; the experimental results are shown in Fig. 6. In our experiments, we analyze the ratio of the online distillation loss in the homogeneity propagation module and the smoothing coefficient of the trainable propagation matrix in the heterogeneity propagation module. According to the results, AdaFGL remains robust except in extreme cases. Furthermore, the results under extreme knowledge distillation loss ratios show that low-confidence base predictor outputs can in turn harm the homogeneity propagation module. Motivated by this, in order to prevent low-confidence global embeddings from influencing the propagation module, we measure the confidence of the global model according to the characteristics of the base predictor.

A.6 EXPERIMENTAL ENVIRONMENT AND ADDITIONAL BASE RESULTS

The experiments are conducted on a machine with an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz and a single NVIDIA GeForce RTX 3090 with 24GB memory. The operating system is Ubuntu 18.04.6. As for software versions, we use Python 3.8, PyTorch 1.11.0, and CUDA 11.4. To alleviate the influence of randomness, we repeat each method 10 times and report the statistical characteristics. The hyperparameters of the baselines are set according to the original papers where available. We use Optuna Akiba et al. (2019) to implement the hyperparameter search. Following the above principles, we report the results under both data partitioning settings.
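As a concrete illustration of the Optuna-based hyperparameter search mentioned above, the sketch below tunes three AdaFGL-style hyperparameters; the search space and the train_and_evaluate function are hypothetical placeholders, not the paper's actual configuration.

import optuna

def objective(trial):
    lam = trial.suggest_float("lambda", 0.3, 0.9)          # module-selection threshold
    kd_ratio = trial.suggest_float("kd_ratio", 0.0, 1.0)   # distillation loss weight
    beta = trial.suggest_float("beta", 1e-3, 1.0, log=True)
    return train_and_evaluate(lam, kd_ratio, beta)         # hypothetical training loop

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)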
Summary Of The Paper
Instead of adopting the community split method used in prior federated graph learning work, this paper introduces a new heterogeneous graph split named structure Non-IID and designs a new framework called AdaFGL to deal with the resulting problem. In the method, a HomoKD component propagates homogeneous messages and a HeteTA component is leveraged to handle graph heterogeneity. Some experiments are provided to verify the effectiveness of the proposed method.

Strengths And Weaknesses
Weaknesses: How can you justify the correctness of your structure Non-IID assumption? In the current paper, the authors only provide experimental results on datasets split ideally according to their assumption. In Table 3, the authors claim that the community split only provides a homogeneous distribution over different clients. The reported results show that AdaFGL obtains exactly the same performance as the variant without HeteTA on the Cora dataset. Interestingly, AdaFGL gets the same performance as the variant without HomoKD on the Chameleon dataset. Please check your experimental results or put more effort into providing a reasonable explanation.

Clarity, Quality, Novelty And Reproducibility
The writing is quite clear. The setup is new while the proposed method is trivial.
(2) We propose AdaFGL, a new paradigm for structure Non-IID subgraph learning, which shows its flexibility in FGL with impressive performance. (3) Extensive experiments demonstrate the effectiveness of AdaFGL. Specifically, our approach achieves state-ofthe-art performance in the above two data settings. Compared to the best prediction accuracy in the baselines, our method achieves performance gains of 4.67% and 2.65% in structure Non-IID and community split data settings, respectively. 2 PRELIMINARIES In this section, we first introduce the semi-supervised node classification task. Then, we review the prior diverse GNNs and very recent FGL methods. Consider a graph G = (V,E) with |V | = n nodes and |E| = m edges, the adjacency matrix (including self loops) is denoted as  ∈ Rn×n, the feature matrix is denoted as X = {x1, x2, . . . , xn} in which xv ∈ Rf represents the feature vector of node v, and f represents the dimension of the node attributes. Besides, Y = {y1, y2, . . . , yn} is the label matrix, where yv ∈ R|Y | is a one-hot vector and |Y | represents the number of the node classes. The semi-supervised node classification task is based on the topology of labeled set VL and unlabeled set VU , and the nodes in VU are predicted with the model supervised by VL. GNNs. As the most popular GNN method, The forward information propagation process of the l-th layer GCN Kipf & Welling (2017) is formulated as X(l) = σ(ÃX(l−1)W(l)), à = D̂r−1ÂD̂−r, (1) where D̂ represents the degree matrix with Â, r ∈ [0, 1] denotes the convolution kernel coefficient, W represents the trainable weight matrix, and σ(·) represents the non-linear activation function. In GCN, we set r = 1/2, and then D̂−1/2ÂD̂−1/2 is called symmetric normalized adjacency matrix. Despite their effectiveness, they have limitations in real-world graphs, which have complex heterogeneous relationship patterns. Some recent researches Liu et al. (2021); Chien et al. (2021); He et al. (2022); Wang et al. (2022a); Yang et al. (2022) solve it by higher-order neighborhood discovery or message combination strategies to improve the GNN process via m(l)v = Aggregate (l)({h∗u|u ∈ N∗(v)}), h(l)v = Update (l)(h∗v,m ∗ v), (2) where h∗u denotes the information of multi-hop neighbors N∗(v), m∗v represents the higher-order messages of node v from the previous layers, Aggregate(·) and Update(·) denote the message aggregation function and update function, respectively. However, these methods suffer from high computational complexity and fail to achieve competitive performance on the homogeneous graph. FGL has received growing attention for breaking centralized graph machine learning assumptions. FedGraphNN He et al. (2021) and FS-G Wang et al. (2022b) propose general FGL packages, which contain a wide range of graph learning tasks. GCFL Xie et al. (2021) and FED-PUB Baek et al. (2022) investigate the personalized technologies in graph-level and node-level, respectively. Furthermore, some recent researches improve performance with local subgraph augmentation, including FedGNN Wu et al. (2021), FedGL Chen et al. (2021), and FedSage Zhang et al. (2021). Inspired by FS-G Wang et al. (2022b), we can consider the collaborative training process in FGL as modules. Specifically, we model the information such as gradients and node embeddings uploaded by the clients as messages. Then we consider the server processes and broadcast results as the various message-handling mechanisms. Here we illustrate the GNNs combined with collaborative training. 
Its generic form with N clients is defined as FGL− Clients (Local Update) → min 1 N N∑ i=1 E(Ai,Xi,Yi)∼Di [Lce(fθi(Ai,Xi),Yi)], L(fθi(Ai,Xi),Yi) = − ∑ i∈VL ∑ j [Yij log(Softmax(Ỹij)) + (1−Yij) log(1− Softmax(Ỹij))], (3) where fθi and Lce are the i-th local GNN with parameters θ and cross-entropy loss function, respectively. It can be replaced by any other appropriate loss function depending on the task. (Ai,Xi,Yi) ∼ Di represents the local subgraph (Ai,Xi,Yi) sampled from the distribution Di. FedAvg McMahan et al. (2017) is an efficient FL algorithm, which can be defined as FGL− Server (Aggregate)→ ∀i,Wt+1i ←W t i − ηgi, Wt+1 ← N∑ i=1 ni n Wt+1i , (4) where t represents the round number of the FL process, W represents the model weights, η represents the learning rate, g represents the gradient calculated from the Eq. 3, ni and n represent the i-th local client data size and the global data size, respectively. 3 ADAFGL PIPELINE The basic idea of AdaFGL is to perform adaptive propagation mechanisms based on federated global knowledge and non-params label propagation. The pipeline with three main parts as shown in Fig. 3, which combine the global knowledge embeddings and local structure properties. The above decoupling process utilizes the computational capacity of the local system while minimizing communication costs and the risk of privacy leakage. AdaFGL can benefit from the evolution of FL and GNN through the base predictor and adaptive propagation. Notably, the base predictor obtained by federated training and personalized propagation are viewed as two decoupled modules that are executed sequentially. Meanwhile, both of them accomplish the training without sharing local private data. 3.1 FEDERATED GLOBAL KNOWLEDGE EXTRACTOR In FGL, limited data yields sub-optimal performance in most cases. Therefore, AdaFGL starts to perform non-params label propagation to adaptive process. Note this process does not rely on any learning over the subgraph. Specifically, the labeled nodes are initialized as y0v = yv,∀v ∈ VL, and the unlabeled nodes are denoted as y0u = ( 1 |Y | , . . . , 1 |Y | ),∀u ∈ VU . Then, the non-params label propagation of the k-step is expressed as yku = graph− aggregator({yk−1v |v ∈ Nu}) = αy0u + (1− α) ∑ v∈Nu 1√ d̃vd̃u yk−1v . (5) We follow the approximate calculation of the personalized PageRank matrix Klicpera et al. (2019), where Nv represents the one-hop neighbors of v, and we default set α = 0.5. Then, we design the homogeneity confidence score (HCS) computed by the number of correct predictions, and the default ratio of the boolean mask is 0.5. Finally, we set thresholds λ for the adaptive binary selection of the homogeneity propagation module and heterogeneity propagation module in each client. In experiments, we default set λ = 0.6 To demonstrate that AdaFGL is a simple yet effective framework, we choose simple models (e.g. MLP or SGC) and FedAvg to achieve federated training. Due to the flexibility of AdaFGL, they can be replaced by any other powerful GNNs and federated methods. From the perspective of FL in Non-IID data, we default choose MLP as the base predictor, which is independent of the graph structure. Then we quote the convergence theorem Li et al. (2020) in T rounds and E epochs, the federated global knowledge extractor error bound ϵfed is expressed as ϵfed ≤ 2L µ2(γ + T − 1) ( N∑ i=1 ni n φ2i + 6Lϕ+ 8(E − 1)2ω2 + γ 4 ||W1 −W⋆||2 ) . 
(6) It assumes that the mapping function satisfies L-smooth and µ-strongly convex, where φ and ϕ represent the local random gradient and the degree of model heterogeneity, respectively, γ = max{8L/µ,E}, ω denotes the divergence of local model, and W∗ represents the global optima. We observe that the base predictor error bound is mainly determined by the differences in the node feature distributions, and the model performance will be further hurt if the graph structure is considered. Therefore, we are motivated to propose adaptive propagation mechanisms. Specifically, we implement the binary selection of the homogeneity propagation module or heterogeneity propagation module in each client by comparing the HCS value and the threshold λ. We will describe the technical details of personalized propagation strategies. 3.2 ADAPTIVE HOMOGENEITY PROPAGATION After that, we use the base predictor to embed local subgraph nodes into the global knowledge space Xglobal and improve the accuracy with the local homogeneous structure. The motivation behind it is that the feature propagation satisfying homogeneity has a significant positive impact on prediction performance, which has also been confirmed in many recent research works Zhang et al. (2022a); Wang & Leskovec (2020). Hence we expect to utilize local smoothing features to correct the predictions. Then, we first define the homogeneous feature propagation X (k) smooth = graph− operator(A) (k)X(0),∀k = 1, . . .K, Hhomo = message− updater(X(k)smooth) = fθ(X (k) smooth), (7) where graph− operator(·) represents the graph operator in feature propagation, we default to use symmetric normalized adjacency as shown in Eq. 1. X(k)smooth represents the local smoothing features after K-steps propagation, message− updater(·) denotes the model training process, and we use fθ to represent the linear regression or MLP with parameters of θ. In order to correct the global embedding and local information, we use the local message update mechanism and online distillation to achieve an effective combination of the local smooth structure prior and the global embeddings, which can be written as Hlocal = WlocalXglobal, Lkd = ||Hhomo −Hlocal||F . (8) Based on this, we can make local smoothing information and global embeddings to achieve mutual supervision and end-to-end training by gradient updating. This exploits the local structure information to reduce the error bound. Notably, the above adaptive process is accomplished in the local client and has no additional communication costs and privacy concerns. 3.3 ADAPTIVE HETEROGENEITY PROPAGATION In contrast, in order to break the heterogeneous structure limitations, we optimize the messagepassing framework by embeddings Xglobal to detect subgraph heterogeneous patterns. Specifically, we propose an adaptive propagation mechanism by discovering the global dependency of the current node and modeling the positive or negative impact of the messages. Intuitively, we first expect to optimize the propagation probability matrix and align the local structure by global embeddings A(0)prop = XglobalX T global, Xalign =graph− operator(Â(0)prop)(k)X(0). (9) Obviously, the original propagation probability matrix introduces high error, we improve it by scaling the aggregated messages and making it trainable. 
Formally, let pij ∈ Aprop correspond to the i-th row and j-th col of Aprop, we define the scaling operator dij = dis(Pii,Pij) for j ̸= i, where dis(·) is a distance function or a function positively relative with the difference, which can be implemented using identity distance. Thus the corrected propagation matrix is expressed as Â(l)prop = A (l) prop/dd T − diag(A(l)prop). (10) The purpose of it is to measure the global dependency of the current node through the probability difference. Then, we further model the positive and negative impacts of the messages to implement effective aggregation, which is formally represented as follows H(l) = WH(l−1), A(l)prop =  (l−1) prop + β ( H(l)H(l) )T , H(l)pos = PoSign( (l) prop)H (l), H(l)neg = NeSign( (l) prop)H (l), H(l+1) = H(l) +H(l)pos +H (l) neg, (11) where H(0) = Xalign, PoSign(·) and NeSign(·) represent the trainable adaptive propagation probabilities, it can be replaced by any reasonable nonlinear activation function. Here we analyze the error bound for the above adaptive heterogeneous propagation mechanism. The proof of the following theorem and reasonable assumptions are given in Appendix. A.1 Theorem 3.1 Suppose that the latent ground-truth mapping Φ : x→ y from node features to node labels is differentiable and satisfies L-Lipschitz constraint, the following approximation error is∣∣∣∣∣∣ ∑ j ̸=i P⋆ijΦ(H (l))− H(l)i +∑ j ̸=i (Pos (l) ij +Neg (l) ij )H (l) j ∣∣∣∣∣∣ ≤ L ∥ϵi∥2 +∑ j ̸=i P⋆ij O (∥∥∥Hlj −H(l)i ∥∥∥ 2 )+ (∥∥∥H⋆ − ϕ (κ+P)H(l)∥∥∥ 2 ) , where ⋆ represents the global optimal, ϵ denotes immediate neighbors error, O(·) denotes a higher order infinitesimal, ϕ and κ represent propagation matrix and model differences, respectively. The core of the above propagation mechanisms is to generate embeddings based on other nodes in the embedding space. In other words, it means that any node representation can be mapped to a linear combination of existing node representations, which has been applied in many studies Zheng et al. (2022a); Yang et al. (2022). However, most of the methods use ranking mechanisms for representation and fail to consider modeling propagation processes, which has limitations. 4 EXPERIMENTS In this section, we conduct experimental analysis on five benchmark datasets with community split and structure Non-IID settings to validate the effectiveness of AdaFGL. We aim to answer the following five questions. Q1: Compared with other state-of-the-art FGL baselines, can AdaFGL achieve better predictive accuracy in the community split setting? Q2: How does structure NonIID influence existing methods and can AdaFGL improve it? Q3: Are knowledge distillation and message detection working in adaptive propagation mechanisms? Q4: Why AdaFGL can achieve desirable predictions utilizing the interactions between decoupled modules? Q5: Compared with existing FGL methods and heterogeneous GNNs, what are the advantages of AdaFGL? 4.1 EXPERIMENTAL SETUP AND BASELINES Existing FGL methods implement data partitioning by community split Wang et al. (2022b). We follow it while proposing a more convincing strategy structure Non-IID. Due to space limitations, the implementation details of the structure Non-IID can be found in Appendix. A.3 and Appendix. A.6. To demonstrate the effectiveness of AdaFGL, we combine powerful GNNs with FedAvg as the representative methods. Meanwhile, we compare the recently proposed FGL methods such as FedGL, FedSage+, and GCFL+. FedSGC efficiently exploits the local structure prior by performing feature propagation. 
FedNLGNN implements node embeddings by MLP or GCN to discover potential neighbors. FedGGCN further exploits the relationship between over-smoothing and heterogeneity to achieve weighted propagation. FedGL aims to optimize the local model performance using global information, it is essentially graph structure learning without overlapping nodes. FedSage+ performs local graph augmentation to improve prediction performance. GCFL+ implements the clustering process to perform the personalized update mechanisms. More details about baseline methods can be referred to Appendix. A.2. 4.2 OVERALL PERFORMANCE We first present the complete results on Cora and Chameleon in Table. 4 and Table. 3, which are two representative homogeneous and heterogeneous datasets. Due to the space limitation, the details about the experiment environment and results in other datasets can be found in AppendixA.6. Notably, since we randomly inject homogeneous or heterogeneous information into the structure Non-IID data partitioning process, the model performance does not directly relate to the number of clients. Meanwhile, in the community split setting, the process of model aggregation by multiple clients to achieve federated learning can be considered as ensemble learning. Therefore, the prediction performance gets better with the increasing number of clients in some cases. To answer Q1, Table. 3 shows the comparison results with the baseline methods in community split setting. For the homogeneous dataset Cora, compared with the most competitive FGL methods, AdaFGL achieved accuracy gains of 1.18%, 2.25%, and 1.64% in multiple client settings, respec- tively. Meanwhile, AdaFGL exceeds the best methods among all considered baselines on the heterogeneous datasets by a margin of 6.37% to 8.52%. In the community split setting, we improve the prediction accuracy by utilizing the local smoothing prior and adaptive propagation mechanisms. To answer Q2, We demonstrate the performance of existing methods in the face of structure Non-IID challenges in Table. 4. Although FedGGCN performs well in general, it cannot obtain competitive performance. Despite FedSage+ achieving effective local graph augmentation by sharing global data, structure Non-IID is a natural challenge, and this weakness is amplified when heterogeneity is high. In contrast, our method achieves performance gains of 1.45%, 3.77%, and 1.87% compared to the highest prediction accuracy. Impressively, AdaFGL improves performance by 9.82%, 13.06%, and 13.29% in the structure Non-IID setting for the heterogeneous dataset Chameleon. From the observation of the comparison results with the baselines, our method has significant advantages, especially in terms of robustness and impressive performance. 4.3 ABLATION EXPERIMENTS To answer Q3, we present the ablation experiment results in Table. 3 and Table. 4, where HomoKD represents the online distillation in the homogeneous propagation module and HeteTA represents the trainable probability propagation matrix in the heterogeneous propagation module. We observe that the online distillation enhances Homogeneous propagation by combining local smoothing features and local embeddings, it can effectively improve model performance without adding additional computation costs. In essence, it achieves mutually supervised end-to-end learning of global and local information. Furthermore, the trainable probability propagation matrix optimizes the heterogeneity propagation module. 
It learns the global optimal propagation mechanism and detects positive and negative messages to generate embeddings. HeteTA can discover the global dependence of the current node and achieve effective message aggregation, which is proved by Theorem. 3.1. 4.4 VISUALIZATION AND EXPLAINABILITY ANALYSIS To answer Q4, we present the local prediction accuracy trends with the competitive baseline methods in Fig. 4. According to it, we can notice that our method achieves the best performance in most cases under both community split and structure Non-IID data settings, while the overall trend is optimized. Due to space limitations, the relevant experimental results about the hyperparameter sensitivity analysis experiments on AdaFGL and conclusions can be found in Appendix. A.5. In order to illustrate the effectiveness of the federated global knowledge extractor and the adaptive propagation mechanisms, we also analyze the explainability by presenting the heat maps shown in Fig. 5. We perform structure Non-IID partitioning for 10 clients on PubMed, then select the client with the highest number of nodes with homogeneity and heterogeneity. Based on this, we randomly sampled 20 nodes to obtain the similarity score by computing the embedding transpose. From the observation of the results, we notice that the federated global knowledge extractor only obtains fuzzy results and cannot be optimized for the local subgraphs. Fortunately, we achieve an effective combination of global knowledge and local subgraph structure prior to obtaining explicit node embeddings, which is also demonstrated through the final output in Fig. 5. 4.5 METHODS COMPARISON To answer Q5, we review three recent FGL methods and analyze our approach to them in terms of three aspects: method type, exchange messages, and the ability to solve structure Non-IID problems as shown in Table.5. Obviously, although FedSage+ can achieve competitive results, it introduces significant communication costs and privacy concerns. Specifically, FedSage+ trains two models and thus has communication costs, while implementing cross-client information sharing to improve predictive performance, which no doubt increases privacy concerns. GCFL+ has limitations in model selection leads to its failure to handle the structure Non-IID problem in subgraph learning. In our experiments, FedGL is essentially a local graph structure learning process. In contrast, our approach can utilize the computational capabilities of the local system while minimizing communication costs and privacy concerns. More experimental details can be found in Appendix. A.4. Then, we compare the effectiveness of existing GNNs and our approach to handling heterogeneous graph, which focuses on two points: Neighbor Discovery and Message Combination, which is shown in Fig. 6. We observe that MLP ignores graph structure prior which leads to the failure to handle heterogeneous graphs. Although FedGL and FedSage+ can improve this problem by utilizing global information for local graph augmentation, the limitations of propagation lead to the fact that they are still not the best solutions. Notably, they cannot handle the structure Non-IID problem in FGL. Although NLGNN and GGCN attempt to solve the heterogeneous structure problem, they cannot be directly applied in FGL. Therefore, we are motivated by these methods and propose adaptive propagation mechanisms to improve the performance, which has been validated to be effective. 
5 CONCLUSION

In this paper, we discover and define the structure Non-IID problem in FGL, which is a new challenge for existing methods. Based on this, we propose a new paradigm, AdaFGL, for more general federated data settings. Specifically, we investigate the structure Non-IID problem in FGL to supplement the existing community split data partitioning approach, yielding a more practical federated data setting. To implement effective FGL on heterogeneous distributed subgraphs, we propose AdaFGL, which consists of the federated global knowledge extractor and the adaptive propagation modules. It combines FL and GNNs tightly and benefits from their evolution. Extensive experiments based on the community split and structure Non-IID data settings demonstrate the effectiveness of AdaFGL. We believe that the ability to fully utilize graph structure information is the key to achieving efficient FGL; thus, research on graph structure in FGL is a promising direction.

A APPENDIX OUTLINE

The appendix is organized as follows:
A.1 Theory error bounds for the adaptive heterogeneous propagation module.
A.2 More details about the compared baselines.
A.3 Datasets description and structure Non-IID data setting.
A.4 Communication costs analysis.
A.5 Hyperparameter sensitivity analysis.
A.6 Experimental environment and additional base results.

A.1 THEORY ERROR BOUNDS FOR ADAPTIVE HETEROGENEOUS PROPAGATION

To demonstrate the effectiveness of the adaptive heterogeneous propagation module, we prove its error bound. We first make the following reasonable assumption and definitions.

Assumption A.1 $\Phi$ is $L$-smooth: $\forall x_1, x_2 \in \mathrm{dom}(\Phi)$,
$$\Phi(x_1) \le \Phi(x_2) + (x_1 - x_2)^{\top}\nabla\Phi(x_2) + \frac{L}{2}\|x_1 - x_2\|_2^2.$$

Then we quote the embedding method theorem of Linial et al. (1995).

Definition A.1 Given two metric spaces $(V, d)$ and $(Z, d')$ and a mapping function $\Phi: V \rightarrow Z$, the distortion $\epsilon_{distor}$ is defined by
$$\forall u, v \in V,\quad \frac{1}{\epsilon_{distor}}\, d(u, v) \le d'(\Phi(u), \Phi(v)) \le d(u, v).$$

Theorem A.1 (Bourgain theorem) Given any finite metric space $(V, d)$ with $|V| = n$, there exists an embedding of $(V, d)$ into $\mathbb{R}^k$ under any $l_p$ metric, where $k = O(\log^2 n)$, and the distortion of the embedding is $O(\log n)$.

It guarantees distortion $O(\log n)$ for any mapping of the metric space $(V, d)$ satisfying the above conditions. Based on this, we consider a graph $G$ with fixed structure represented by $\tilde{A} = \hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$, embeddings represented by $H$ in the forward propagation, and a node mapping function $\Phi(H)$ satisfying Theorem A.1, which can be expressed as
$$\Phi(H) = \left( \frac{d(H, S_{1,1})}{k}, \frac{d(H, S_{1,2})}{k}, \dots, \frac{d(H, S_{\log n,\, c\log n})}{k} \right),$$
where $d(H, S_{i,j}) = \min_{u \in S_{i,j}} d(H, u)$. Here $S_{i,j} \subset V$, $i = 1, 2, \dots, \log n$, $j = 1, 2, \dots, c\log n$, are $c\log^2 n$ random sets, where $c$ is a constant; $S_{i,j}$ is chosen by including each point of $V$ independently with probability $1/2^i$.
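To make the random-set construction above concrete, the following is a minimal NumPy sketch of a Bourgain-style embedding of a finite metric space given by its pairwise distance matrix. The constant c, the random seed, and the empty-set guard are illustrative choices, not part of the original construction.

```python
import numpy as np

def bourgain_embedding(dist, c=2, seed=0):
    # Embed a finite metric space (pairwise distance matrix `dist`)
    # into R^k with k = O(log^2 n), following the construction above:
    # each random set S_ij includes every point independently with
    # probability 1/2^i, and coordinate (i, j) of point v is d(v, S_ij)/k.
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    log_n = max(int(np.ceil(np.log2(n))), 1)
    coords = []
    for i in range(1, log_n + 1):
        for _ in range(c * log_n):
            mask = rng.random(n) < 0.5 ** i           # sample S_ij
            if not mask.any():
                mask[rng.integers(n)] = True          # guard against empty sets
            coords.append(dist[:, mask].min(axis=1))  # d(v, S_ij) for all v
    emb = np.stack(coords, axis=1)
    return emb / emb.shape[1]

# Toy usage: shortest-path metric of a 4-cycle.
dist = np.array([[0., 1., 2., 1.],
                 [1., 0., 1., 2.],
                 [2., 1., 0., 1.],
                 [1., 2., 1., 0.]])
print(bourgain_embedding(dist).shape)  # (4, 8): n = 4, log n = 2, c = 2
```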
Then, motivated by Xie et al. (2021) and the above conclusions, we have the following proposition on model weight differences.

Proposition A.1 Assume that the propagation probability matrix, hidden embeddings, and label differences between the global optimum $f^\star_\theta$ and the local model $f_\theta$ are bounded:
$$\|P^\star - P\|_2^2 = \|E_P\|_2^2 \le \epsilon_P,\quad \|H^\star - H\|_2^2 = \|E_H\|_2^2 \le \epsilon_H,\quad \|\hat{Y}^\star - \hat{Y}\|_2^2 = \|E_{\hat{Y}}\|_2^2 \le \epsilon_{\hat{Y}}.$$
Based on this, given that $\|H \cdot H^\star\|_2^2 = \|H(H + E_H)\|_2^2 \ge \|H E_H\|_2^2$, and letting $\|H E_H\|_2^2 = \delta_H$, we have $\|H^{\star-1} - H^{-1}\|_2^2 = \|E_{H^{-1}}\| \le \epsilon_H/\delta_H$. If we choose SGC Wu et al. (2019) for the forward propagation, the model weight difference under the influence of the feature difference is represented as
$$\begin{aligned} \phi = \|f^\star_\theta - f_\theta\|_2 &= \left\| (PH^\star)^{-1}\hat{Y}^\star - (PH)^{-1}\hat{Y} \right\|_2^2 = \left\| H^{\star-1}P^{-1}(\hat{Y} + E_{\hat{Y}}) - H^{-1}P^{-1}\hat{Y} \right\|_2^2 \\ &= \left\| (H^{\star-1} - H^{-1})P^{-1}\hat{Y} + H^{\star-1}P^{-1}E_{\hat{Y}} \right\|_2^2 = \left\| E_{H^{-1}}P^{-1}\hat{Y} + (PH + PE_H)^{-1}E_{\hat{Y}} \right\|_2^2 \\ &\le \frac{\epsilon_H}{\delta_H}\left\| P^{-1}\hat{Y} \right\|_2^2 + \frac{\epsilon_H^2\epsilon_{\hat{Y}}}{\delta_H}\left\| (PH)^{-1} \right\|_2^2 + \epsilon_H\epsilon_{\hat{Y}}\left\| (PH)^{-1} \right\|_2^4. \end{aligned}$$
Similarly, there exists $\|P \cdot P^\star\|_2^2 = \|P(P + E_P)\|_2^2 \ge \|PE_P\|_2^2$; letting $\|PE_P\|_2^2 = \delta_P$ and using $\|P^{\star-1} - P^{-1}\|_2^2 = \|E_{P^{-1}}\| \le \epsilon_P/\delta_P$, we can obtain the model weight difference under the influence of the structure difference:
$$\begin{aligned} \phi = \|f^\star_\theta - f_\theta\|_2 &= \left\| (P^\star H)^{-1}\hat{Y}^\star - (PH)^{-1}\hat{Y} \right\|_2^2 = \left\| H^{-1}\left(P^{\star-1}\hat{Y}^\star - P^{-1}\hat{Y}\right) \right\| \\ &= \|H^{-1}\|_2^2 \left\| (P^{-1} + E_{P^{-1}})(\hat{Y} + E_{\hat{Y}}) - P^{-1}\hat{Y} \right\|_2^2 = \|H^{-1}\|_2^2 \left\| P^{-1}E_{\hat{Y}} + E_{P^{-1}}\hat{Y} + E_{P^{-1}}E_{\hat{Y}} \right\| \\ &\le \|H^{-1}\|_2^2 \left[ \epsilon_{\hat{Y}}\|P^{-1}\|_2^2 + \frac{\epsilon_P}{\delta_P}\|\hat{Y}\|_2^2 + \frac{\epsilon_P\epsilon_{\hat{Y}}}{\delta_P} \right]. \end{aligned}$$

Proof A.1 Based on Eq. 11, consider the adaptive heterogeneous propagation process
$$\begin{aligned} H^{(l+1)} &= H^{(l)} + H^{(l)}_{pos} + H^{(l)}_{neg} = H^{(l)} + \mathrm{PoSign}(\hat{A}^{(l)}_{prop})H^{(l)} + \mathrm{NeSign}(\hat{A}^{(l)}_{prop})H^{(l)} \\ &= H^{(l)} + \mathrm{PoSign}\!\left(\hat{A}^{(l-1)}_{prop} + \beta WH^{(l-1)}(WH^{(l-1)})^{\top}\right)H^{(l)} + \mathrm{NeSign}\!\left(\hat{A}^{(l-1)}_{prop} + \beta WH^{(l-1)}(WH^{(l-1)})^{\top}\right)H^{(l)}. \end{aligned}$$
Take node $i$ as an example, and recall that $\Phi(\cdot)$ is differentiable, so its gradient updates involve the model difference $\phi$. Meanwhile, in order to quantify the difference between our trainable propagation probability matrix and the global optimum, we define
$$\kappa_i = \sum P^\star[i,:] - \left( \hat{A}^{(0)}_{prop}[i,:] + \sum_l \beta WH^{(l)}(WH^{(l)})^{\top}[i,:] \right),$$
where $P^\star$ represents the optimal propagation probability matrix. Then, using $Pos$ and $Neg$ to denote the positive and negative message propagation weights of $P = \hat{A}^{(l)}_{prop}$, there exists
$$H^{(l+1)}_i = \sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) = H^{(l)}_i + \sum_{j\ne i}\left(Pos^{(l)}_{ij} + Neg^{(l)}_{ij}\right)H^{(l)}_j = \left(\kappa_i + P_{ii} + \sum_{j\ne i}\left(Pos^{(l)}_{ij} + Neg^{(l)}_{ij}\right)\right)\phi H^{(l)},$$
where $Pos^{(l)}_{ij} + Neg^{(l)}_{ij} = \hat{A}^{(l)}_{prop}[i,:]$. Then, we perform a first-order Taylor expansion with Peano's form of the remainder at $H^{(l)}_i$ and consider the model differences:
$$\sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) = \sum_{j\ne i} P^\star_{ij}\left( \Phi(H^{(l)}) + \frac{\partial\Phi(H^{(l)}_j)}{\partial (H^{(l)})^{\top}}\left(H^{(l)}_j - H^{(l)}_i\right) + O\!\left(\|H^{(l)}_j - H^{(l)}_i\|_2\right) \right) = \sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) + \sum_{j\ne i} P^\star_{ij}\frac{\partial\Phi(H^{(l)}_j)}{\partial (H^{(l)})^{\top}}\left(H^{(l)}_j - H^{(l)}_i\right) + \sum_{j\ne i} P^\star_{ij}\,O\!\left(\|H^{(l)}_j - H^{(l)}_i\|_2\right).$$
Now, letting $\sum_{j\ne i} P^\star_{ij}(H^{(l)}_j - H^{(l)}_i) = -\epsilon_i$, there exists
$$\sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) = \left(\kappa_i + P_{ii} + \sum_{j\ne i}\left(Pos^{(l)}_{ij} + Neg^{(l)}_{ij}\right)\right)\phi H^{(l)} = \sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) - \frac{\partial\Phi(H^{(l)}_j)}{\partial (H^{(l)})^{\top}}\epsilon_i + \sum_{j\ne i} P^\star_{ij}\,O\!\left(\|H^{(l)}_j - H^{(l)}_i\|_2\right),$$
and hence
$$\left| \sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) - \left(\kappa_i + P_{ii} + \sum_{j\ne i}\left(Pos^{(l)}_{ij} + Neg^{(l)}_{ij}\right)\right)\phi H^{(l)} \right| = \left| \frac{\partial\Phi(H^{(l)}_j)}{\partial (H^{(l)})^{\top}}\epsilon_i - \sum_{j\ne i} P^\star_{ij}\,O\!\left(\|H^{(l)}_j - H^{(l)}_i\|_2\right) \right|.$$
According to the Cauchy-Schwarz inequality and the $L$-Lipschitz property, we have
$$\left| \frac{\partial\Phi(H^{(l)}_i)}{\partial (H^{(l)})^{\top}}\epsilon_i \right| \le \left\| \frac{\partial\Phi(H^{(l)}_i)}{\partial (H^{(l)})^{\top}} \right\| \|\epsilon_i\|_2 \le L\|\epsilon_i\|_2.$$
Therefore, the approximation $H^{(l)}_i + \sum_{j\ne i}(Pos^{(l)}_{ij} + Neg^{(l)}_{ij})H^{(l)}_j$ is bounded by
$$\begin{aligned} \left| \sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) - \left( H^{(l)}_i + \sum_{j\ne i}\left(Pos^{(l)}_{ij} + Neg^{(l)}_{ij}\right)H^{(l)}_j \right) \right| &\le \left| \frac{\partial\Phi(H^{(l)}_i)}{\partial (H^{(l)})^{\top}}\epsilon_i \right| + \left| \sum_{j\ne i} P^\star_{ij}\,O\!\left(\left\|H^{(l)}_j - H^{(l)}_i\right\|_2\right) \right| + \left\| H^\star - P^\star H^{(l)} \right\|_2 \\ &\le L\|\epsilon_i\|_2 + \sum_{j\ne i} P^\star_{ij}\,O\!\left(\left\|H^{(l)}_j - H^{(l)}_i\right\|_2\right) + \left\| H^\star - \phi(\kappa + P)H^{(l)} \right\|_2, \end{aligned}$$
where $H^\star$ represents the globally optimal embeddings. Based on this, we obtain the theoretical error bound for heterogeneous propagation. From the error bound, we see that, in theory, the adaptive heterogeneous propagation process can minimize the immediate-neighbor error $\epsilon_i$, the model difference $\phi$, and the propagation probability matrix difference $\kappa$ to scale down the error and improve the predictive performance.
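As a concrete reference for the propagation process analyzed above, here is a minimal PyTorch sketch of one adaptive heterogeneous propagation step (Eq. 11). The ReLU-based gates standing in for PoSign/NeSign, the omission of the Eq. 10 scaling correction, and all dimensions are illustrative assumptions; the paper only requires reasonable trainable nonlinearities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroPropLayer(nn.Module):
    # One step of Eq. 11: update the propagation probability matrix,
    # split it into positive and negative message weights, and aggregate.
    def __init__(self, dim, beta=0.1):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.beta = beta

    def forward(self, h, a_prop):
        h = self.w(h)                              # H^(l) = W H^(l-1)
        a_prop = a_prop + self.beta * (h @ h.t())  # A_prop^(l) update
        pos = F.relu(a_prop)                       # PoSign: positive part
        neg = -F.relu(-a_prop)                     # NeSign: negative part
        return h + pos @ h + neg @ h, a_prop       # H^(l+1) (Eq. 11)

# Toy usage: initialize A_prop^(0) from global embeddings as in Eq. 9.
x_global = torch.randn(20, 16)
layer = HeteroPropLayer(dim=16)
h, a_prop = layer(x_global, x_global @ x_global.t())
```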
A.2 COMPARED BASELINES

The main characteristics of all baselines are listed below:

FedMLP: The combination of FedAvg and MLP; we employ a two-layer MLP with a hidden dimension of 64. It generates node embeddings based on the original features while ignoring graph structure information in the forward propagation process.

FedSGC: We combine FedAvg and SGC Wu et al. (2019) and default to a 3-step feature propagation process; it follows the homogeneity assumption and thus fails to deal with heterogeneous graphs.

FedNLGNN: An implementation of NLGNN (NLMLP or NLGCN) Liu et al. (2021) based on FedAvg; we select the more effective version to present the model performance. It depends on the embedding model and suffers from representational limitations.

FedGGCN: The combination of FedAvg and GGCN Yan et al. (2021); we follow the experimental setup of the original paper as much as possible. It can handle heterogeneous graphs effectively but cannot achieve competitive results on homogeneous graphs.

FedGL Chen et al. (2021): As an FGL training framework, it strongly relies on the overlapping-nodes assumption; under our data setting it reduces to local graph structure learning.

GCFL+ Xie et al. (2021): Due to the limitations of its personalization techniques in model selection, it cannot fundamentally solve the structure Non-IID challenges.

FedSage+ Zhang et al. (2021): It trains NeighGen to achieve local subgraph augmentation by sharing globally missing subgraph feature and topology information, yielding the most powerful results, but it suffers from privacy leakage risks and additional computational costs.

For fairness, we follow the experimental setups of the baseline papers as much as possible, and in other cases we report the best prediction accuracy. In addition, the number of rounds for the above baseline methods is 50, and the number of local epochs is 20.

A.3 DATASETS DESCRIPTION AND STRUCTURE NON-IID DATA SETTING

The statistics of the datasets, which cover both homogeneity and heterogeneity, are summarized in Table 7. In our experiments, we use five benchmark datasets, for which details are given below. Cora, Citeseer, and Pubmed Yang et al. (2016b) are three popular citation network datasets. In these three networks, papers from different topics are considered as nodes, and the edges are citations among the papers. The node attributes are binary word vectors, and the class labels are the topics the papers belong to. Chameleon and Squirrel are two web page datasets collected from Wikipedia Rozemberczki et al. (2021), where nodes are web pages on specific topics and edges are hyperlinks between them.

Based on this, we illustrate the structure Non-IID data partitioning process in detail. Its core is the Dirichlet process He et al. (2020), whose basic analysis is as follows. The pdf of the Dirichlet distribution is defined as
$$p(P = \{p_i\} \mid \alpha_i) = \frac{\Gamma(\sum_i \alpha_i)}{\prod_i \Gamma(\alpha_i)} \prod_i p_i^{\alpha_i - 1},$$
where $\alpha_i \in \{\alpha_1, \dots, \alpha_k\} > 0$ are the dimensionless distribution parameters, the scale (or concentration) is $\vartheta = \sum_i \alpha_i$, the base measure is $(\alpha^\star_1, \dots, \alpha^\star_k)$ with $\alpha^\star_i = \alpha_i/\vartheta$, and $\Gamma(n) = (n-1)!$. The Dirichlet is a distribution over multinomials; thus $\sum_i p_i = 1$ and $p_i \ge 0$, where each $p_i$ represents a probability. The base measure determines the mean distribution, and the scale affects the variance; we then obtain
$$E(p_i) = \frac{\alpha_i}{\vartheta} = \alpha^\star_i,\quad \mathrm{Var}(p_i) = \frac{\alpha_i(\vartheta - \alpha_i)}{\vartheta^2(\vartheta + 1)} = \frac{\alpha^\star_i(1 - \alpha^\star_i)}{\vartheta + 1},\quad \mathrm{Cov}(p_i, p_j) = \frac{-\alpha_i\alpha_j}{\vartheta^2(\vartheta + 1)},$$
which means that a Dirichlet with small scale $\vartheta$ favors extreme distributions, but this prior belief is very weak and is easily overwritten by data. As $\vartheta \to \infty$, the covariance tends to 0 and the samples concentrate around the base measure. Based on this, we sample the edges to determine the client attribution of each pair of nodes. If a conflicting set of nodes exists, it is sampled again, finally generating the induced subgraphs. Then, we randomly inject homogeneous or heterogeneous information based on the label prior, which resolves unrealistic structure loss and enhances structural identity. We propose to set three probabilities $p_{iso}$, $p_{homo}$, and $p_{hete}$ for each client individually, representing the probability of avoiding isolated nodes, increasing homogeneous edges, and increasing heterogeneous edges in the subgraph, respectively. Specifically, $p_{iso}$ is the probability that an isolated node generates edges with other nodes, which effectively prevents the generation of isolated nodes. $p_{homo}$ applies to the subgraphs of clients selected for homogeneity enhancement and represents the probability of connecting two nodes with the same label based on the label prior. Correspondingly, $p_{hete}$ is the probability used to inject structure information for client subgraphs that undergo heterogeneity enhancement.
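To make the partitioning procedure concrete, the following is a minimal NumPy sketch of a Dirichlet-driven structure Non-IID split with homogeneous edge injection. It is an illustrative simplification under stated assumptions: nodes (rather than edge pairs) are assigned to clients, the conflict resampling is skipped, and only the p_homo injection is shown (p_hete injection and the p_iso isolated-node guard are analogous).

```python
import numpy as np

def structure_non_iid_partition(edges, labels, num_clients,
                                alpha=0.5, p_homo=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(labels)
    # (1) Dirichlet prior over clients: a small alpha favors extreme,
    #     i.e. more heterogeneous, client proportions.
    probs = rng.dirichlet([alpha] * num_clients)
    client = rng.choice(num_clients, size=n, p=probs)
    # (2) Keep edges whose endpoints fall into the same client,
    #     producing induced subgraphs.
    subgraphs = {c: [] for c in range(num_clients)}
    for u, v in edges:
        if client[u] == client[v]:
            subgraphs[client[u]].append((u, v))
    # (3) Inject homogeneous edges with probability p_homo, based on the
    #     label prior, for clients selected for homogeneity enhancement.
    for c in range(num_clients):
        nodes = np.flatnonzero(client == c)
        for a in nodes:
            for b in nodes:
                if a < b and labels[a] == labels[b] and rng.random() < p_homo:
                    subgraphs[c].append((int(a), int(b)))
    return client, subgraphs

# Toy usage: 6 nodes on a cycle, 2 labels, 3 clients.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
labels = np.array([0, 0, 1, 1, 0, 1])
client, subgraphs = structure_non_iid_partition(edges, labels, 3)
```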
A.4 COMMUNICATION COSTS ANALYSIS

The advantage of our approach is to exploit the local structure prior while making full use of the global information, which matches the characteristics of GNNs. This has the benefit of reducing the communication costs and privacy concerns during the federated training process. Meanwhile, thanks to the utilization of local structure information, we can obtain models with better representational power and thus improve the performance. To demonstrate the effectiveness of our method, we report the results of AdaFGL together with the two most competitive methods, FedSage and FedGGCN, as shown in the two tables below. According to the experimental results, we observe that AdaFGL maintains low communication costs while achieving satisfying results, which mainly benefits from the utilization of local structure information by the adaptive propagation modules. In contrast, FedSage, the currently most competitive FGL approach, suffers from a dilemma between performance improvement and communication costs, which also brings more privacy concerns.

[Tables: communication cost comparisons on Cora and Chameleon.]

A.5 HYPERPARAMETER SENSITIVITY ANALYSIS

Here we conduct a hyperparameter sensitivity analysis of AdaFGL; the experimental results are shown in Fig. 6. In our experiments, we analyze the ratio of the online distillation loss in the homogeneous propagation module and the smoothing coefficient of the trainable propagation matrix in the heterogeneous propagation module. According to the experimental results, we observe that AdaFGL performs robustly except in extreme cases. Furthermore, the results under extreme knowledge distillation loss ratios show that low-confidence base predictor outputs can instead harm the homogeneous propagation module.
Motivated by this, in order to prevent low-confidence global embeddings from influencing the propagation module, we measure the confidence of the global model according to the characteristics of the base predictor.

A.6 EXPERIMENTAL ENVIRONMENT AND ADDITIONAL BASE RESULTS

The experiments are conducted on a machine with an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz and a single NVIDIA GeForce RTX 3090 with 24GB memory. The operating system of the machine is Ubuntu 18.04.6. As for software versions, we use Python 3.8, PyTorch 1.11.0, and CUDA 11.4. To alleviate the influence of randomness, we repeat each method 10 times and report the statistical characteristics. The hyperparameters of the baselines are set according to the original papers where available. We use Optuna Akiba et al. (2019) to implement the hyperparameter search. Following the above principles, we present the results under the two data partitioning settings as follows.
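For reference, here is a minimal sketch of how such an Optuna-based search could be set up; the objective, the search ranges, and the stubbed training function are illustrative assumptions rather than the exact configuration used.

```python
import optuna

def train_and_eval(kd_ratio, beta, lr):
    # Placeholder for one federated AdaFGL training run returning mean
    # validation accuracy across clients; stubbed so the sketch runs.
    return 1.0 - abs(kd_ratio - 0.5) - abs(beta - 0.1)

def objective(trial):
    # Illustrative search space: the online distillation loss ratio and
    # the smoothing coefficient beta of the trainable propagation matrix.
    kd_ratio = trial.suggest_float("kd_ratio", 0.0, 1.0)
    beta = trial.suggest_float("beta", 1e-3, 1.0, log=True)
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    return train_and_eval(kd_ratio, beta, lr)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```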
1. What is the main contribution of the paper regarding subgraph Federated Learning? 2. What are the strengths and weaknesses of the proposed method, particularly in considering homophily and heterophily? 3. Do you have any concerns about the experimental setups and the usage of global data? 4. Are there any unclear or confusing parts in the paper, especially regarding major claims and equations? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper investigates the Non-IID problems of subgraph Federated Learning (FL), where the authors consider both the well-known homogeneity assumption and the under-explored heterogeneity assumption. To tackle those two problems, the authors propose the global knowledge extractor, which uses global data to extract features on the global graph, and the adaptive propagation modules, which combine global embeddings and locally updated features for local node representations. The authors verify the proposed method, called AdaFGL, on both homogeneous and heterogeneous graph datasets, showing the effectiveness of their AdaFGL.

Strengths And Weaknesses
Strengths
The idea of considering both the homophily and heterophily of graph-structured data for subgraph FL is novel and interesting.
The proposed AdaFGL outperforms other graph FL baselines.

Weaknesses
The experimental setups for the heterogeneous assumption are problematic. In particular, the authors use the Dirichlet process (He et al., 2020) for graph-structured data to simulate the heterogeneous scenarios. However, this Dirichlet process is not suitable for simulating heterogeneous subgraph FL. This is because, when the number of clients is relatively large (e.g., 10), the number of edges is often smaller than the number of nodes, which is not realistic. Also, when the number of clients is relatively small (e.g., 3), there are no obvious heterogeneous patterns in the partitioned graphs: the data homogeneity of the heterogeneous scenarios is similar to that of the homogeneous scenarios. Those results are reported in Tables 13-22, and, based on that, I don't think the evaluation for the heterogeneous assumption is correctly done.
The usage of global data is not convincing, and it is not applicable to realistic FL. In FL, data sharing is not possible; therefore, we cannot obtain the global graph consisting of all nodes distributed to all clients. However, the authors propose an FL framework that leverages the global graph. How can we obtain the global graph when each client cannot share its subgraph with others under the FL assumption? Also, comparing the proposed AdaFGL, which uses the global graph, against other subgraph FL baselines that do not use the global graph looks unfair.
There are many unclear major claims, which should be clarified. See Clarity below.

Clarity, Quality, Novelty And Reproducibility
Clarity
The results in Figure 2 (left) are not explained well. What do the colors and numbers denote in Figure 2? How are they calculated?
Where is the non-params label propagation in Equation (5) used, and why is it necessary? Based on my understanding, the label propagation is defined; however, it is not used in the proposed AdaFGL.
I appreciate that the authors make an effort to analyze the error bound of the proposed AdaFGL theoretically. However, it is unclear why the approximated error bound for the proposed AdaFGL is necessary. It seems it does not lower the error bound of the existing FL methods; thus, is it necessary?
X_global, used in Equation (8), is not defined anywhere else. Where did this term come from? Also, it is unclear how to obtain the global embedding, and why it is not changed during FL.
The results in Figure 5 are not clearly described. The authors explain that there are 10 clients; however, the x- and y-axes indicate the nodes, and there is no explanation of which nodes belong to which clients. Also, what is the meaning of the darker blue color in Figure 5? How is that color calculated?
Quality
A few major claims should be toned down. In particular, in the abstract, the authors claim that covariance-shift challenges occur in the structure Non-IID setting. However, in the existing community-based split scenarios for subgraph FL, since different communities have different properties (i.e., different clients have different properties), the same covariance-shift challenges occur in the community-based split setting as well.

Novelty
The consideration of heterophily for subgraph FL is novel.

Reproducibility
The reproducibility is high, since the authors provide the source code.
ICLR
Title A New Paradigm for Federated Structure Non-IID Subgraph Learning

Abstract
Federated graph learning (FGL), a distributed training framework for graph neural networks (GNNs), has attracted much attention for breaking centralized machine learning assumptions. Despite its effectiveness, differences in data collection perspectives and quality lead to challenges of heterogeneity, especially when a domain-specific graph is partitioned into subgraphs held by different institutions. However, existing FGL methods implement graph data augmentation or personalization with community split, which follows the cluster homogeneity assumption. Hence we investigate the above issues and suggest that subgraph heterogeneity is essentially a matter of structure variation. From observations on FGL, we first define the structure non-independent identical distribution (Non-IID) problem, which presents unique challenges among client-wise subgraphs. Meanwhile, we propose a new paradigm for general federated data settings called Adaptive Federated Graph Learning (AdaFGL). The motivation behind it is to implement adaptive propagation mechanisms based on federated global knowledge and non-params label propagation. We conduct extensive experiments with the community split and structure Non-IID settings; our approach achieves state-of-the-art performance on five benchmark datasets.

1 INTRODUCTION

The graph, as a relational data structure, is widely used to model real-world entity relations such as citation networks Yang et al. (2016a), recommender systems Wu et al. (2022), drug discovery Gaudelet et al. (2021), particle physics Shlomi et al. (2021), etc. However, due to collection agents and privacy concerns, the global domain-specific graph generally consists of many subgraphs collected by multiple institutions. In order to analyze the local subgraph, each client maintains a powerful graph mining model such as graph neural networks (GNNs), which have achieved state-of-the-art performance in many graph learning tasks Zhang et al. (2022b); Hu et al. (2021); Zhang & Chen (2018). Despite their effectiveness, limited local data yields sub-optimal performance in most cases. Motivated by the success of federated learning (FL), a natural idea is to combine GNNs with FL to utilize the distributed subgraphs. Recently, federated graph learning (FGL) He et al. (2021); Wang et al. (2022b) has been proposed to achieve collaborative training without directly sharing data, yet an essential concern is the heterogeneity of the distributed subgraphs. Notably, graph heterogeneity is different from the heterogeneity of labels or features in the fields of computer vision or natural language processing; we suggest that it depends on the graph structure. However, the existing FGL methods simulate federated subgraph distributions through community split, which follows the cluster homogeneity assumption, as shown in Fig. 1(a). Specifically, community split keeps the subgraph structure consistent with the original graph, e.g., connected nodes are more likely to have the same labels. Obviously, this assumption is overly idealized and hard to satisfy in reality; hence we consider a more reasonable setting, shown in Fig. 1(c). We refer to this problem as structure non-independent identical distribution (Non-IID). The motivation is that graph structure is directly related to node labels and feature distributions. Meanwhile, the challenges of structure heterogeneity are ubiquitous in the real world Zheng et al. (2022b).
For instance, in citation networks, we may consider research teams focused on computer science and on intersectional fields (e.g., AI in Science) Shlomi et al. (2021); Gaudelet et al. (2021) as clients. In online transaction networks, fraudsters are more likely to build connections with customers than with other fraudsters Pandit et al. (2007); we may consider different regions as clients that detect financial fraudsters by analyzing online transaction subgraphs. Specifically, graph structure can be divided into two types: homogeneity means that connected nodes are more likely to have the same label and similar feature distributions, and heterogeneity is the opposite. In order to explain this intuitively, we visualize the 3-client partitioning results on Cora in Table 1 and Table 2, where Homo represents the homogeneity degree of the local subgraph, computed with a popular metric Pei et al. (2020). Obviously, compared to community split, which follows the cluster homogeneity assumption and the uniform distribution principle, structure Non-IID brings new challenges to the existing FGL methods.

Table 1: Community split in Cora.
Community  #Nodes  #Edges  Homo
Client1    903     1696    0.85
Client2    903     1575    0.78
Client3    902     1592    0.87

Table 2: Structure Non-IID in Cora.
Non-IID    #Nodes  #Edges  Homo
Client1    1095    1473    0.43
Client2    946     1400    0.87
Client3    667     1212    0.31

Based on this, we investigate the above issues through the empirical analysis shown in Fig. 2. According to the results, we observe that when the original graph satisfies the homogeneity assumption, the label distributions are Non-IID; the opposite holds when the original graph is heterogeneous. This is due to the fact that the nodes partitioned into the same clients form communities and follow the uniform distribution principle. In addition, the local accuracy indicates that the subgraph structure plays a more important role in FGL than the label distributions, which also supports our motivation. In terms of model performance, we observe that GGCN mitigates the structure Non-IID problem, and FedSage+ trains NeighGen to implement local subgraph augmentation by sharing node embeddings. However, while accounting for heterogeneity, the above methods fail to achieve results as competitive as SGC on the homogeneous subgraphs.

In order to efficiently analyze distributed subgraphs with both homogeneity and heterogeneity, we propose a simple pipeline called Adaptive Federated Graph Learning (AdaFGL) for more general federated data settings, which consists of three main parts. Specifically, it starts by analyzing the subgraph structure through non-params label propagation and selecting the appropriate base model: (i) the federated global knowledge extractor (e.g., MLP, powerful GNNs, or any reasonable embedding model), which does not rely on any learning over the subgraph. Then, the base predictor is trained based on the global data, which can be done offline or in parallel with local training, benefiting from the flexibility of our approach. Finally, each local client applies one of two adaptive propagation mechanisms based on its local subgraph: (ii) the homogeneity propagation module or (iii) the heterogeneity propagation module. Notably, with non-params label propagation, this selection is adaptive.

To summarize, the contributions of this paper are as follows: (1) To the best of our knowledge, we are the first to analyze the structure Non-IID problem in FGL, which is a more general federated data setting and brings new challenges.
(2) We propose AdaFGL, a new paradigm for structure Non-IID subgraph learning, which shows its flexibility in FGL with impressive performance. (3) Extensive experiments demonstrate the effectiveness of AdaFGL. Specifically, our approach achieves state-of-the-art performance in the above two data settings. Compared to the best prediction accuracy among the baselines, our method achieves performance gains of 4.67% and 2.65% in the structure Non-IID and community split data settings, respectively.

2 PRELIMINARIES

In this section, we first introduce the semi-supervised node classification task. Then, we review prior diverse GNNs and very recent FGL methods. Consider a graph $G = (V, E)$ with $|V| = n$ nodes and $|E| = m$ edges; the adjacency matrix (including self-loops) is denoted as $\hat{A} \in \mathbb{R}^{n\times n}$, and the feature matrix is denoted as $X = \{x_1, x_2, \dots, x_n\}$, in which $x_v \in \mathbb{R}^f$ represents the feature vector of node $v$ and $f$ represents the dimension of the node attributes. Besides, $Y = \{y_1, y_2, \dots, y_n\}$ is the label matrix, where $y_v \in \mathbb{R}^{|Y|}$ is a one-hot vector and $|Y|$ represents the number of node classes. The semi-supervised node classification task is based on the topology of the labeled set $V_L$ and the unlabeled set $V_U$, and the nodes in $V_U$ are predicted with the model supervised by $V_L$.

GNNs. For the most popular GNN method, the forward information propagation process of the $l$-th layer of GCN Kipf & Welling (2017) is formulated as
$$X^{(l)} = \sigma(\tilde{A}X^{(l-1)}W^{(l)}),\qquad \tilde{A} = \hat{D}^{r-1}\hat{A}\hat{D}^{-r}, \quad (1)$$
where $\hat{D}$ represents the degree matrix of $\hat{A}$, $r \in [0, 1]$ denotes the convolution kernel coefficient, $W$ represents the trainable weight matrix, and $\sigma(\cdot)$ represents the non-linear activation function. In GCN, we set $r = 1/2$, and $\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$ is then called the symmetric normalized adjacency matrix. Despite their effectiveness, such models have limitations on real-world graphs, which exhibit complex heterogeneous relationship patterns. Some recent studies Liu et al. (2021); Chien et al. (2021); He et al. (2022); Wang et al. (2022a); Yang et al. (2022) address this by higher-order neighborhood discovery or message combination strategies, improving the GNN process via
$$m^{(l)}_v = \mathrm{Aggregate}^{(l)}(\{h^\ast_u \mid u \in N^\ast(v)\}),\qquad h^{(l)}_v = \mathrm{Update}^{(l)}(h^\ast_v, m^\ast_v), \quad (2)$$
where $h^\ast_u$ denotes the information of the multi-hop neighbors $N^\ast(v)$, $m^\ast_v$ represents the higher-order messages of node $v$ from the previous layers, and $\mathrm{Aggregate}(\cdot)$ and $\mathrm{Update}(\cdot)$ denote the message aggregation and update functions, respectively. However, these methods suffer from high computational complexity and fail to achieve competitive performance on homogeneous graphs.

FGL has received growing attention for breaking centralized graph machine learning assumptions. FedGraphNN He et al. (2021) and FS-G Wang et al. (2022b) propose general FGL packages, which contain a wide range of graph learning tasks. GCFL Xie et al. (2021) and FED-PUB Baek et al. (2022) investigate personalization technologies at the graph level and node level, respectively. Furthermore, some recent studies improve performance with local subgraph augmentation, including FedGNN Wu et al. (2021), FedGL Chen et al. (2021), and FedSage Zhang et al. (2021). Inspired by FS-G Wang et al. (2022b), we can view the collaborative training process in FGL as modules. Specifically, we model information such as gradients and node embeddings uploaded by the clients as messages, and we treat the server's processing and broadcasting of results as various message-handling mechanisms. Next, we illustrate GNNs combined with collaborative training.
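Before turning to the federated formulation, here is a minimal PyTorch sketch of the GCN layer in Eq. 1 with symmetric normalization (r = 1/2); the dense-tensor implementation and the toy graph are illustrative assumptions.

```python
import torch
import torch.nn as nn

def sym_norm_adj(a):
    # A_tilde = D_hat^-1/2 A_hat D_hat^-1/2, i.e. Eq. 1 with r = 1/2,
    # where A_hat = A + I includes self-loops.
    a_hat = a + torch.eye(a.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    # One forward step X^(l) = sigma(A_tilde X^(l-1) W^(l)).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_tilde, x):
        return torch.relu(a_tilde @ self.w(x))

# Toy usage: a 4-node path graph with 8-dimensional features.
a = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
h = GCNLayer(8, 16)(sym_norm_adj(a), torch.randn(4, 8))
```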
The generic form of this collaborative training with $N$ clients is defined as
$$\mathrm{FGL\text{-}Clients\ (Local\ Update)} \rightarrow \min \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{(A_i,X_i,Y_i)\sim D_i}\left[\mathcal{L}_{ce}(f_{\theta_i}(A_i, X_i), Y_i)\right],$$
$$\mathcal{L}(f_{\theta_i}(A_i, X_i), Y_i) = -\sum_{i\in V_L}\sum_j\left[Y_{ij}\log(\mathrm{Softmax}(\tilde{Y}_{ij})) + (1 - Y_{ij})\log(1 - \mathrm{Softmax}(\tilde{Y}_{ij}))\right], \quad (3)$$
where $f_{\theta_i}$ and $\mathcal{L}_{ce}$ are the $i$-th local GNN with parameters $\theta$ and the cross-entropy loss function, respectively; the loss can be replaced by any other appropriate loss function depending on the task. $(A_i, X_i, Y_i) \sim D_i$ represents the local subgraph $(A_i, X_i, Y_i)$ sampled from the distribution $D_i$. FedAvg McMahan et al. (2017) is an efficient FL algorithm, which can be defined as
$$\mathrm{FGL\text{-}Server\ (Aggregate)} \rightarrow \forall i,\ W^{t+1}_i \leftarrow W^t_i - \eta g_i,\qquad W^{t+1} \leftarrow \sum_{i=1}^{N}\frac{n_i}{n}W^{t+1}_i, \quad (4)$$
where $t$ represents the round number of the FL process, $W$ represents the model weights, $\eta$ represents the learning rate, $g$ represents the gradient calculated from Eq. 3, and $n_i$ and $n$ represent the $i$-th local client data size and the global data size, respectively.

3 ADAFGL PIPELINE

The basic idea of AdaFGL is to perform adaptive propagation mechanisms based on federated global knowledge and non-params label propagation. The pipeline consists of three main parts, as shown in Fig. 3, which combine global knowledge embeddings and local structure properties. This decoupled process utilizes the computational capacity of the local system while minimizing communication costs and the risk of privacy leakage. AdaFGL can benefit from the evolution of FL and GNNs through the base predictor and adaptive propagation. Notably, the base predictor obtained by federated training and the personalized propagation are viewed as two decoupled modules executed sequentially. Meanwhile, both of them accomplish training without sharing local private data.

3.1 FEDERATED GLOBAL KNOWLEDGE EXTRACTOR

In FGL, limited data yields sub-optimal performance in most cases. Therefore, AdaFGL starts by performing non-params label propagation to drive the adaptive process. Note that this process does not rely on any learning over the subgraph. Specifically, the labeled nodes are initialized as $y^0_v = y_v, \forall v \in V_L$, and the unlabeled nodes are initialized as $y^0_u = (\frac{1}{|Y|}, \dots, \frac{1}{|Y|}), \forall u \in V_U$. Then, the non-params label propagation at step $k$ is expressed as
$$y^k_u = \mathrm{graph\text{-}aggregator}(\{y^{k-1}_v \mid v \in N_u\}) = \alpha y^0_u + (1 - \alpha)\sum_{v\in N_u}\frac{1}{\sqrt{\tilde{d}_v\tilde{d}_u}}\,y^{k-1}_v. \quad (5)$$
We follow the approximate calculation of the personalized PageRank matrix Klicpera et al. (2019), where $N_u$ represents the one-hop neighbors of $u$, and we set $\alpha = 0.5$ by default. Then, we design the homogeneity confidence score (HCS), computed from the number of correct predictions, where the default ratio of the boolean mask is 0.5. Finally, we set a threshold $\lambda$ for the adaptive binary selection between the homogeneity propagation module and the heterogeneity propagation module in each client. In experiments, we set $\lambda = 0.6$ by default.
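As a concrete reference for Eq. 5 and the HCS selection, here is a minimal PyTorch sketch; the masking protocol (labeled nodes start one-hot, unlabeled nodes start uniform, a held-out portion of labeled nodes is used for scoring) follows the description above, with names and defaults as illustrative assumptions.

```python
import torch

def label_propagation(adj_norm, y0, k=10, alpha=0.5):
    # Non-params label propagation (Eq. 5), a personalized-PageRank-style
    # update: y^k = alpha * y^0 + (1 - alpha) * A_norm y^(k-1).
    y = y0.clone()
    for _ in range(k):
        y = alpha * y0 + (1 - alpha) * (adj_norm @ y)
    return y

def hcs(y_prop, labels, holdout_mask, lam=0.6):
    # Homogeneity confidence score: accuracy of the propagated labels on
    # held-out labeled nodes; HCS >= lambda selects the homogeneity
    # propagation module, otherwise the heterogeneity module.
    pred = y_prop[holdout_mask].argmax(dim=1)
    score = (pred == labels[holdout_mask]).float().mean().item()
    return score, score >= lam
```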
To demonstrate that AdaFGL is a simple yet effective framework, we choose simple models (e.g., MLP or SGC) and FedAvg to achieve federated training. Due to the flexibility of AdaFGL, they can be replaced by any other powerful GNNs and federated methods. From the perspective of FL on Non-IID data, we choose MLP as the base predictor by default, since it is independent of the graph structure. Then, quoting the convergence theorem of Li et al. (2020) with $T$ rounds and $E$ epochs, the federated global knowledge extractor error bound $\epsilon_{fed}$ is expressed as
$$\epsilon_{fed} \le \frac{2L}{\mu^2(\gamma + T - 1)}\left( \sum_{i=1}^{N}\frac{n_i}{n}\varphi_i^2 + 6L\phi + 8(E-1)^2\omega^2 + \frac{\gamma}{4}\|W^1 - W^\star\|^2 \right). \quad (6)$$
It assumes that the mapping function is $L$-smooth and $\mu$-strongly convex, where $\varphi$ and $\phi$ represent the local stochastic gradient and the degree of model heterogeneity, respectively, $\gamma = \max\{8L/\mu, E\}$, $\omega$ denotes the divergence of the local models, and $W^\star$ represents the global optimum. We observe that the base predictor error bound is mainly determined by the differences in the node feature distributions, and the model performance would be further hurt if the graph structure were considered. Therefore, we are motivated to propose adaptive propagation mechanisms. Specifically, we implement the binary selection of the homogeneity propagation module or the heterogeneity propagation module in each client by comparing the HCS value with the threshold $\lambda$. We describe the technical details of the personalized propagation strategies below.

3.2 ADAPTIVE HOMOGENEITY PROPAGATION

After that, we use the base predictor to embed the local subgraph nodes into the global knowledge space $X_{global}$ and improve accuracy with the local homogeneous structure. The motivation behind this is that feature propagation satisfying homogeneity has a significant positive impact on prediction performance, which has also been confirmed in many recent works Zhang et al. (2022a); Wang & Leskovec (2020). Hence we expect to utilize local smoothing features to correct the predictions. We first define the homogeneous feature propagation
$$X^{(k)}_{smooth} = \mathrm{graph\text{-}operator}(A)^{(k)}X^{(0)},\ \forall k = 1, \dots, K,\qquad H_{homo} = \mathrm{message\text{-}updater}(X^{(k)}_{smooth}) = f_\theta(X^{(k)}_{smooth}), \quad (7)$$
where $\mathrm{graph\text{-}operator}(\cdot)$ represents the graph operator used in feature propagation (we default to the symmetric normalized adjacency shown in Eq. 1), $X^{(k)}_{smooth}$ represents the local smoothing features after $K$ propagation steps, $\mathrm{message\text{-}updater}(\cdot)$ denotes the model training process, and $f_\theta$ represents linear regression or an MLP with parameters $\theta$. In order to align the global embedding with the local information, we use a local message update mechanism and online distillation to achieve an effective combination of the local smooth structure prior and the global embeddings, which can be written as
$$H_{local} = W_{local}X_{global},\qquad \mathcal{L}_{kd} = \|H_{homo} - H_{local}\|_F. \quad (8)$$
Based on this, local smoothing information and global embeddings achieve mutual supervision and end-to-end training through gradient updates. This exploits the local structure information to reduce the error bound. Notably, the above adaptive process is accomplished on the local client and incurs no additional communication costs or privacy concerns.
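To make Eqs. 7-8 concrete, here is a minimal PyTorch sketch of the homogeneous propagation module with online distillation; the two-layer predictor, the hidden size, and the joint gradient flow through both branches are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HomoProp(nn.Module):
    # K-step smoothing (Eq. 7) followed by online distillation (Eq. 8):
    # align f_theta(X_smooth) with a linear map of the global embeddings.
    def __init__(self, feat_dim, global_dim, hidden_dim=64, k=3):
        super().__init__()
        self.k = k
        self.f_theta = nn.Sequential(nn.Linear(feat_dim, hidden_dim),
                                     nn.ReLU(),
                                     nn.Linear(hidden_dim, hidden_dim))
        self.w_local = nn.Linear(global_dim, hidden_dim, bias=False)

    def forward(self, adj_norm, x, x_global):
        for _ in range(self.k):                 # X_smooth^(k) = A_norm^k X
            x = adj_norm @ x
        h_homo = self.f_theta(x)                # H_homo (Eq. 7)
        h_local = self.w_local(x_global)        # H_local (Eq. 8)
        loss_kd = torch.norm(h_homo - h_local, p="fro")
        return h_homo, loss_kd                  # both branches get gradients
```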
3.3 ADAPTIVE HETEROGENEITY PROPAGATION

In contrast, in order to break the heterogeneous structure limitations, we optimize the message-passing framework using the embeddings $X_{global}$ to detect heterogeneous subgraph patterns. Specifically, we propose an adaptive propagation mechanism that discovers the global dependency of the current node and models the positive or negative impact of messages. Intuitively, we first optimize the propagation probability matrix and align the local structure by the global embeddings:
$$A^{(0)}_{prop} = X_{global}X^{\top}_{global},\qquad X_{align} = \mathrm{graph\text{-}operator}(\hat{A}^{(0)}_{prop})^{(k)}X^{(0)}. \quad (9)$$
Obviously, the original propagation probability matrix introduces high error; we improve it by scaling the aggregated messages and making it trainable. Formally, let $p_{ij} \in A_{prop}$ denote the entry in the $i$-th row and $j$-th column of $A_{prop}$. We define the scaling operator $d_{ij} = \mathrm{dis}(P_{ii}, P_{ij})$ for $j \ne i$, where $\mathrm{dis}(\cdot)$ is a distance function, or any function positively correlated with the difference, which can be implemented using the identity distance. The corrected propagation matrix is then expressed as
$$\hat{A}^{(l)}_{prop} = A^{(l)}_{prop}/dd^{\top} - \mathrm{diag}(A^{(l)}_{prop}). \quad (10)$$
Its purpose is to measure the global dependency of the current node through the probability difference. Then, we further model the positive and negative impacts of the messages to implement effective aggregation, which is formally represented as follows:
$$H^{(l)} = WH^{(l-1)},\quad A^{(l)}_{prop} = \hat{A}^{(l-1)}_{prop} + \beta\,H^{(l)}(H^{(l)})^{\top},\quad H^{(l)}_{pos} = \mathrm{PoSign}(\hat{A}^{(l)}_{prop})H^{(l)},\quad H^{(l)}_{neg} = \mathrm{NeSign}(\hat{A}^{(l)}_{prop})H^{(l)},\quad H^{(l+1)} = H^{(l)} + H^{(l)}_{pos} + H^{(l)}_{neg}, \quad (11)$$
where $H^{(0)} = X_{align}$, and $\mathrm{PoSign}(\cdot)$ and $\mathrm{NeSign}(\cdot)$ represent the trainable adaptive propagation probabilities; they can be implemented by any reasonable nonlinear activation functions. Here we analyze the error bound for the above adaptive heterogeneous propagation mechanism. The proof of the following theorem and the underlying assumptions are given in Appendix A.1.

Theorem 3.1 Suppose that the latent ground-truth mapping $\Phi: x \rightarrow y$ from node features to node labels is differentiable and satisfies the $L$-Lipschitz constraint; then the approximation error satisfies
$$\left| \sum_{j\ne i} P^\star_{ij}\Phi(H^{(l)}) - \left( H^{(l)}_i + \sum_{j\ne i}\left(Pos^{(l)}_{ij} + Neg^{(l)}_{ij}\right)H^{(l)}_j \right) \right| \le L\|\epsilon_i\|_2 + \sum_{j\ne i} P^\star_{ij}\,O\!\left(\left\|H^{(l)}_j - H^{(l)}_i\right\|_2\right) + \left\|H^\star - \phi(\kappa + P)H^{(l)}\right\|_2,$$
where $\star$ denotes the global optimum, $\epsilon$ denotes the immediate-neighbor error, $O(\cdot)$ denotes a higher-order infinitesimal, and $\phi$ and $\kappa$ represent the model difference and the propagation matrix difference, respectively.

The core of the above propagation mechanism is to generate embeddings based on other nodes in the embedding space. In other words, any node representation can be mapped to a linear combination of existing node representations, which has been applied in many studies Zheng et al. (2022a); Yang et al. (2022). However, most of these methods use ranking mechanisms for representation and fail to model the propagation process, which is limiting.

4 EXPERIMENTS

In this section, we conduct an experimental analysis on five benchmark datasets with the community split and structure Non-IID settings to validate the effectiveness of AdaFGL. We aim to answer the following five questions. Q1: Compared with other state-of-the-art FGL baselines, can AdaFGL achieve better predictive accuracy in the community split setting? Q2: How does structure Non-IID influence existing methods, and can AdaFGL improve on it? Q3: Are knowledge distillation and message detection working in the adaptive propagation mechanisms? Q4: Why can AdaFGL achieve desirable predictions utilizing the interactions between decoupled modules? Q5: Compared with existing FGL methods and heterogeneous GNNs, what are the advantages of AdaFGL?

4.1 EXPERIMENTAL SETUP AND BASELINES

Existing FGL methods implement data partitioning by community split Wang et al. (2022b). We follow it while also proposing a more convincing strategy, structure Non-IID. Due to space limitations, the implementation details of structure Non-IID can be found in Appendix A.3 and Appendix A.6. To demonstrate the effectiveness of AdaFGL, we combine powerful GNNs with FedAvg as representative methods. Meanwhile, we compare against recently proposed FGL methods such as FedGL, FedSage+, and GCFL+. FedSGC efficiently exploits the local structure prior by performing feature propagation.
FedNLGNN implements node embeddings by MLP or GCN to discover potential neighbors. FedGGCN further exploits the relationship between over-smoothing and heterogeneity to achieve weighted propagation. FedGL aims to optimize the local model performance using global information, it is essentially graph structure learning without overlapping nodes. FedSage+ performs local graph augmentation to improve prediction performance. GCFL+ implements the clustering process to perform the personalized update mechanisms. More details about baseline methods can be referred to Appendix. A.2. 4.2 OVERALL PERFORMANCE We first present the complete results on Cora and Chameleon in Table. 4 and Table. 3, which are two representative homogeneous and heterogeneous datasets. Due to the space limitation, the details about the experiment environment and results in other datasets can be found in AppendixA.6. Notably, since we randomly inject homogeneous or heterogeneous information into the structure Non-IID data partitioning process, the model performance does not directly relate to the number of clients. Meanwhile, in the community split setting, the process of model aggregation by multiple clients to achieve federated learning can be considered as ensemble learning. Therefore, the prediction performance gets better with the increasing number of clients in some cases. To answer Q1, Table. 3 shows the comparison results with the baseline methods in community split setting. For the homogeneous dataset Cora, compared with the most competitive FGL methods, AdaFGL achieved accuracy gains of 1.18%, 2.25%, and 1.64% in multiple client settings, respec- tively. Meanwhile, AdaFGL exceeds the best methods among all considered baselines on the heterogeneous datasets by a margin of 6.37% to 8.52%. In the community split setting, we improve the prediction accuracy by utilizing the local smoothing prior and adaptive propagation mechanisms. To answer Q2, We demonstrate the performance of existing methods in the face of structure Non-IID challenges in Table. 4. Although FedGGCN performs well in general, it cannot obtain competitive performance. Despite FedSage+ achieving effective local graph augmentation by sharing global data, structure Non-IID is a natural challenge, and this weakness is amplified when heterogeneity is high. In contrast, our method achieves performance gains of 1.45%, 3.77%, and 1.87% compared to the highest prediction accuracy. Impressively, AdaFGL improves performance by 9.82%, 13.06%, and 13.29% in the structure Non-IID setting for the heterogeneous dataset Chameleon. From the observation of the comparison results with the baselines, our method has significant advantages, especially in terms of robustness and impressive performance. 4.3 ABLATION EXPERIMENTS To answer Q3, we present the ablation experiment results in Table. 3 and Table. 4, where HomoKD represents the online distillation in the homogeneous propagation module and HeteTA represents the trainable probability propagation matrix in the heterogeneous propagation module. We observe that the online distillation enhances Homogeneous propagation by combining local smoothing features and local embeddings, it can effectively improve model performance without adding additional computation costs. In essence, it achieves mutually supervised end-to-end learning of global and local information. Furthermore, the trainable probability propagation matrix optimizes the heterogeneity propagation module. 
It learns the global optimal propagation mechanism and detects positive and negative messages to generate embeddings. HeteTA can discover the global dependence of the current node and achieve effective message aggregation, which is proved by Theorem. 3.1. 4.4 VISUALIZATION AND EXPLAINABILITY ANALYSIS To answer Q4, we present the local prediction accuracy trends with the competitive baseline methods in Fig. 4. According to it, we can notice that our method achieves the best performance in most cases under both community split and structure Non-IID data settings, while the overall trend is optimized. Due to space limitations, the relevant experimental results about the hyperparameter sensitivity analysis experiments on AdaFGL and conclusions can be found in Appendix. A.5. In order to illustrate the effectiveness of the federated global knowledge extractor and the adaptive propagation mechanisms, we also analyze the explainability by presenting the heat maps shown in Fig. 5. We perform structure Non-IID partitioning for 10 clients on PubMed, then select the client with the highest number of nodes with homogeneity and heterogeneity. Based on this, we randomly sampled 20 nodes to obtain the similarity score by computing the embedding transpose. From the observation of the results, we notice that the federated global knowledge extractor only obtains fuzzy results and cannot be optimized for the local subgraphs. Fortunately, we achieve an effective combination of global knowledge and local subgraph structure prior to obtaining explicit node embeddings, which is also demonstrated through the final output in Fig. 5. 4.5 METHODS COMPARISON To answer Q5, we review three recent FGL methods and analyze our approach to them in terms of three aspects: method type, exchange messages, and the ability to solve structure Non-IID problems as shown in Table.5. Obviously, although FedSage+ can achieve competitive results, it introduces significant communication costs and privacy concerns. Specifically, FedSage+ trains two models and thus has communication costs, while implementing cross-client information sharing to improve predictive performance, which no doubt increases privacy concerns. GCFL+ has limitations in model selection leads to its failure to handle the structure Non-IID problem in subgraph learning. In our experiments, FedGL is essentially a local graph structure learning process. In contrast, our approach can utilize the computational capabilities of the local system while minimizing communication costs and privacy concerns. More experimental details can be found in Appendix. A.4. Then, we compare the effectiveness of existing GNNs and our approach to handling heterogeneous graph, which focuses on two points: Neighbor Discovery and Message Combination, which is shown in Fig. 6. We observe that MLP ignores graph structure prior which leads to the failure to handle heterogeneous graphs. Although FedGL and FedSage+ can improve this problem by utilizing global information for local graph augmentation, the limitations of propagation lead to the fact that they are still not the best solutions. Notably, they cannot handle the structure Non-IID problem in FGL. Although NLGNN and GGCN attempt to solve the heterogeneous structure problem, they cannot be directly applied in FGL. Therefore, we are motivated by these methods and propose adaptive propagation mechanisms to improve the performance, which has been validated to be effective. 
5 CONCLUSION In this paper, we discover and define the structure Non-IID problem in FGL, which is a new challenge for existing methods. Based on this, we propose a new paradigm AdaFGL for more general federated data settings. Specifically, we investigate the structure Non-IID problem in FGL for supplementing the existing community split data partitioning approach, which is a more practical federated data setting. To implement effective FGL on heterogeneous distributed subgraphs, we propose AdaFGL which consists of the federated global knowledge extractor and adaptive propagation modules. It combines FL and GNNs tightly and benefits from their evolution. Extensive experiments based on the community split and structure Non-IID data settings demonstrate the effectiveness of AdaGFL. We believe that the ability to fully utilize the graph structure information is the key to achieving efficient FGL, thus the research on graph structure in FGL is a promising direction. A APPENDIX OUTLINE The appendix is organized as follows: A.1 Theory error bounds for adaptive heterogeneous propagation modules. A.2 More details about the compared baselines. A.3 Datasets description and structure Non-IID data setting. A.4 Communication costs analysis. A.5 Hyperparameter sensitivity analysis. A.6 Experimental environment and additional base results. A.1 THEORY ERROR BOUNDS FOR ADAPTIVE HETEROGENEOUS PROPAGATION To demonstrate the effectiveness of the adaptive heterogeneous propagation module, we prove its error bound. We first make the reasonable following assumption and definitions. Assumption A.1 Φ is L-smooth, ∀x1,x2 ∈ dom(Φ) Φ(x1) ≤ Φ(x2) + (x1 − x2)T∇Φ(x2) + L 2 ||x1 − x2||22. Then we quote the embedding method theorem Linial et al. (1995). Definition A.1 Given two metric spaces (V, d) and (Z, d′) and mapping function Φ : V → Z , the distortion ϵdistor is definied as ∀u, v ∈ V, 1/ϵdistord(u, v) ≤ d ′ (Φ(u),Φ(v)) ≤ d(u, v). Theorem A.1 (Bourgain theorem) Given any finite metric space (V, d) with V = n, there exists an embedding of (V, d) into Rk under any lp metric, where k = O(log2 n), and the distortion of the embedding is O(log n). It defines the distortion O(log n) in the embedding space (V, d) for mapping methods satisfying the above conditions. Based on this, we consider a graph G with fixed structure represented by à = D̂−1/2ÂD̂−1/2, embeddings represented with H in the forward propagation, and nodes mapping function Φ(H), which satisfies the Theorem. A.1, it can be expressed as ϕ(H) = ( d(H, S1,1) k , d(H, S1,2) k , . . . , d(H, Slogn,c logn) k ) , where d(H, Si,j) = minu∈Si,j d(H, u). Si,j ⊂ V, i = 1, 2, . . . , log n, j = 1, 2, . . . , c log n represents c log2 n random sets, where c is a constant. It is chosen by including each point in V independently with probability 1/2i. Then motivated by Xie et al. (2021) and the above conclusions, we have the following model weights difference proposition. Proposition A.1 Assume the propagation probability matrix, hidden embeddings, and label difference with global optima f⋆θ and local model fθ are bounded with ∥P⋆ −P∥22 = ∥EP∥ 2 2 ≤ ϵP ∥H⋆ −H∥22 = ∥EH∥ 2 2 ≤ ϵH∥∥∥Ŷ⋆ − Ŷ∥∥∥2 2 = ∥∥EŶ∥∥22 ≤ ϵŶ. Based on this, given that ∥H ·H⋆∥22 = ∥H · (H+ EH)∥ 2 2 ≥ ∥HEH∥ 2 2. Let ∥XEH∥ 2 2 = δH, then we have ∥∥H⋆−1 −H−1∥∥2 2 = ∥EH−1∥ ≤ ϵH/δH. If we choose SGC Wu et al. 
(2019) for the forward propagation, the model weights difference with the influence of feature difference is represented as ϕ = ∥f⋆θ − fθ∥2 = ∥∥∥(PH⋆)−1Ŷ⋆ − (PH)−1Ŷ∥∥∥2 2 = ∥∥∥H⋆−1P−1(Ŷ + EŶ)−H−1P−1Ŷ∥∥∥2 2 = ∥∥∥(H⋆−1 −H−1)P−1Ŷ +H⋆−1P−1EŶ∥∥∥2 2 = ∥∥∥EH−1P−1Ŷ + (PH+PEH)−1EŶ∥∥∥2 2 ≤ ϵH δH ∥∥∥P−1Ŷ∥∥∥2 2 + ϵ2HϵŶ δX ∥∥(PH)−1∥∥2 2 + ϵHϵŶ ∥∥(PH)−1∥∥4 2 . Similarly, there exists ∥P ·P⋆∥22 = ∥P · (P+ EP)∥ 2 2 ≥ ∥PEP∥ 2 2, ∥PEP∥ 2 2 = δP, and∥∥P⋆−1 −P−1∥∥2 2 = ∥EP−1∥ ≤ ϵP/δP. we can obtain the model weight differences with the influence of structure difference. ϕ = ∥f⋆θ − fθ∥2 = ∥∥∥(P⋆H)−1Ŷ⋆ − (PH)−1Ŷ∥∥∥2 2 = ∥∥∥H−1 (P⋆−1Ŷ⋆ −P−1Ŷ)∥∥∥ = ∥∥H−1∥∥2 2 ∥∥∥(P−1 + EP−1)(Ŷ + EŶ)−P−1Ŷ∥∥∥2 2 = ∥∥H−1∥∥2 2 ∥∥∥P−1EŶ + EP−1Ŷ + EP−1EŶ∥∥∥ ≤ ∥∥H−1∥∥2 2 [ ϵŶ ∥∥P−1∥∥2 2 + ϵP δP ∥∥∥Ŷ∥∥∥2 2 + ϵPϵŶ δP ] . Proof A.1 Here, based on the Eq. 11, we consider the adaptive heterogeneous propagation process H(l+1) = H(l) +H(l)pos +H (l) neg = H(l) + PoSign(Â(l)prop)H (l) +NeSign(Â(l)prop)H (l) = H(l) + PoSign ( Â(l−1)prop + βWH (l−1)(WH(l−1))T ) H(l) +NeSign ( Â(l−1)prop + βWH (l−1)(WH(l−1))T ) H(l). Take node i as an example, given that Φ(·) is differentiable, where contains the gradient update of the model difference ϕ. Meanwhile, in order to quantify the difference between our trainable propagation probability matrix and the global optimum, we define κi = ∑ P⋆[i :]− ( Â0prop[i :] + ∑ l βWHl(WH(l))T [i :] ) , where P⋆ represents the optimal propagation probability matrix. Then, we use Pos,Neg to denote the positive and negative message propagation weights P = Â(l)prop, there exist H (l+1) i = ∑ j ̸=i P⋆ijΦ(H (l)) = H (l) i + ∑ j ̸=i (Pos (l) ij +Neg (l) ij )H (l) j = κi +Pii +∑ j ̸=i (Pos (l) ij +Neg (l) ij ) ϕH(l), where Pos(l)ij + Neg (l) ij =  (l) prop[i :]. Then, we perform a first-order Taylor expansion with Peano’s form of remainder at H(l)i and consider the model differences∑ j ̸=i P⋆ijΦ(H (l)) = ∑ j ̸=i P⋆ij ( Φ(H(l)) + ∂Φ(H (l) j ) ∂(H(l))T (H (l) j −H (l) i ) +O(||H (l) j −H (l) i ||2) ) = ∑ j ̸=i P⋆ijΦ(H (l)) + ∑ j ̸=i P⋆ij ∂Φ(H (l) j ) ∂(H(l))T (H (l) j −H (l) i ) + ∑ j ̸=i P⋆ijO(||H (l) j −H (l) i ||2). Now, we let ∑ j ̸=i P ⋆ ij(H (l) j −H (l) i ) = −ϵi, there exist ∑ j ̸=i P⋆ijΦ(H (l)) = κi +Pii +∑ j ̸=i (Pos (l) ij +Neg (l) ij ) ϕH(l), = ∑ j ̸=i P⋆ijΦ(H (l))− ∂Φ(H (l) j ) ∂(H(l))T ϵi + ∑ j ̸=i P⋆ijO(||H (l) j −H (l) i ||2)∣∣∣∣∣∣ ∑ j ̸=i P⋆ijΦ(H (l))− κi +Pii +∑ j ̸=i (Pos (l) ij +Neg (l) ij ) ϕH(l) ∣∣∣∣∣∣ = ∣∣∣∣∣∣∂Φ(H (l) j ) ∂(H(l))T ϵi − ∑ j ̸=i P⋆ijO(||H (l) j −H (l) i ||2) ∣∣∣∣∣∣ . According to Cauchy-Schwarz inequality and L-Lipschitz property, we have ∣∣∣∣∣∂Φ(H(l)i )∂(H(l))T ϵi ∣∣∣∣∣ ≤ ∥∥∥∥∥∂Φ(H(l)i )∂(H(l))T ∥∥∥∥∥ ∥ϵi∥2 ≤ L ∥ϵi∥2 . Therefore, the approximation of H(l)i + ∑ j ̸=i(Pos (l) ij +Neg (l) ij )H (l) j is bounded by∣∣∣∣∣∣ ∑ j ̸=i P⋆ijΦ(H (l))− H(l)i +∑ j ̸=i (Pos (l) ij +Neg (l) ij )H (l) j ∣∣∣∣∣∣ = ∣∣∣∣∣∣∂Φ(H (l) i ) ∂(H(l))T − ∑ j ̸=i P⋆ijO (∥∥∥Hlj −H(l)i ∥∥∥ 2 )∣∣∣∣∣∣+ (∥∥∥H⋆ −P⋆H(l)∥∥∥ 2 ) ≤ ∣∣∣∣∣∂Φ(H(l)i )∂(H(l))T ϵi ∣∣∣∣∣+ ∣∣∣∣∣∣ ∑ j ̸=i P⋆ijO (∥∥∥Hlj −H(l)i ∥∥∥ 2 )∣∣∣∣∣∣+ (∥∥∥H⋆ −P⋆H(l)∥∥∥ 2 ) ≤ L ∥ϵi∥2 +∑ j ̸=i P⋆ij O (∥∥∥Hlj −H(l)i ∥∥∥ 2 )+ (∥∥∥H⋆ − ϕ (κ+P)H(l)∥∥∥ 2 ) , where H⋆ represents the global optimal embeddings. Based on this, we obtain the theory error bound for heterogeneous propagation. From the observation of error bounds, we reveal that in theory, the adaptive heterogeneous propagation process can minimize the immediate neighbors error ϵi, the model difference ϕ, and the propagation probability matrix difference κ to scale the error to improve the predictive performance. 
A.2 COMPARED BASELINES The main characteristic of all baselines are listed below: FedMLP: The combination of FedAvg and MLP, we employ a two-layer MLP with the hidden dimension of 64. It generates node embeddings based on the original features while ignoring graph structure information in the forward propagation process. FedSGC: We combine FedAvg and SGC Wu et al. (2019), we default to use the 3-layer feature propagation process, which follows the homogeneous assumption and thus fails to deal with heterogeneous graph. FedNLGNN: Implementation of NLGNN (NLMLP or NLGCN) Liu et al. (2021) based on FedAvg, we select the more effective version to present the model performance. It depends on the embedding model and suffers from representational limitations. FedGGCN: The combination of FedAvg and GGCN Yan et al. (2021), we follow the experimental setup of the original paper as much as possible, which can handle heterogeneous graphs effectively, but cannot achieve competitive results on homogeneous graphs. FedGL Chen et al. (2021): As a FGL training framework, it strongly relies on the overlapping nodes assumption, which in our data setting is essentially local graph structure learning. GCFL+ Xie et al. (2021): due to the limitations of personalized techniques in model selection, they cannot fundamentally solve the structure Non-IID challenges. FedSage+ Zhang et al. (2021): It trains NeighGen to achieve local subgraph augmentation by sharing global missing subgraph feature and topology information for the most powerful results, but suffers from privacy leakage risk and additional computational costs. For fairness, we follow the experimental setup of the baseline methods paper as much as possible, and in other cases, we show the best prediction accuracy. In addition, the number of rounds for the above baseline methods is 50, and the local epoch is 20. A.3 DATASETS DESCRIPTION AND STRUCTURE NON-IID DATA SETTING The statistics of datasets are summarized in Table. 7, which contains both homogeneity and heterogeneity. In our experiments, we use five benchmark datasets containing homogeneity and heterogeneity, for which details are given below. Cora, Citeseer, and Pubmed Yang et al. (2016b) are three popular citation network datasets. In these three networks, papers from different topics are considered as nodes, and the edges are citations among the papers. The node attributes are binary word vectors, and class labels are the topics papers belong to. Chameleon and Squirrel are two web page datasets collected from Wikipedia Rozemberczki et al. (2021), where nodes are web pages on specific topics and edges are hyperlinks between them. Based on this, we illustrate the structure Non-IID data partitioning process in detail. The core of it is the Dirichlet process He et al. (2020), Its basic analysis is as follows. The pdf of the Dirichlet distribution is defined as p(P = {pi}|αi) = ∏ i Γ(αi) Γ( ∑ i αi) Γip αi−1 i , where αi ∈ {α1, . . . , αk} > 0 is the dimensionless distribution parameter, the scale (or concentration) ϑ = ∑ i αi, the base measure (α ⋆ 1, . . . , α ⋆ k), α ⋆ i = αi/ϑ, and Γ(n) = (n − 1)!. Dirichlet is a distribution over Multinomials, thus there is ∑ i pi = 1, pi ≥ 0, where pi represents the probability. 
The base measure determines the mean distribution, and the scale affects the variance; we then obtain
$$\mathbb{E}(p_i) = \frac{\alpha_i}{\vartheta} = \alpha^\star_i, \qquad \mathrm{Var}(p_i) = \frac{\alpha_i(\vartheta - \alpha_i)}{\vartheta^2(\vartheta + 1)} = \frac{\alpha^\star_i(1 - \alpha^\star_i)}{\vartheta + 1}, \qquad \mathrm{Cov}(p_i, p_j) = \frac{-\alpha_i \alpha_j}{\vartheta^2(\vartheta + 1)},$$
which means that a Dirichlet with a small scale $\vartheta$ favors extreme distributions, but this prior belief is very weak and is easily overwritten by data. As $\vartheta \to \infty$, the covariance tends to 0 and the samples tend to the base measure. Based on this, we sample the edges to determine the client attribution of each pair of nodes. If a conflicting set of nodes exists, it is sampled again, and we finally generate the induced subgraphs. Then, we randomly inject homogeneous or heterogeneous information based on the label prior, which avoids unrealistic structure loss and enhances structure identity. We propose to set three probabilities, piso, phomo, and phete, for each client individually, representing the probability of avoiding isolated nodes, of increasing homogeneous edges, and of increasing heterogeneous edges in the subgraph, respectively. Specifically, piso represents the probability of isolated nodes generating edges with other nodes, which effectively prevents the generation of isolated nodes. phomo applies to the subgraphs of clients selected for homogeneity enhancement and represents the probability of connecting two nodes with the same label based on the label prior information. Correspondingly, phete is the probability used to perform structure information injection for client subgraphs that undergo heterogeneity enhancement.
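A minimal sketch of the Dirichlet-based edge assignment described above follows. The helper name and the conflict-handling details are hypothetical (the text does not pin them down), and the label-prior injection with piso, phomo, and phete would run as a second pass over each client subgraph.

```python
import numpy as np

def dirichlet_edge_split(edges, n_clients, scale, rng, max_retries=10):
    """Assign each edge (u, v) to a client via Dirichlet-sampled
    probabilities; conflicting node assignments are re-sampled."""
    alpha = np.full(n_clients, scale / n_clients)
    client_of = {}                                # node -> client
    subgraphs = [[] for _ in range(n_clients)]
    for u, v in edges:
        for _ in range(max_retries):              # re-sample on conflict
            c = rng.choice(n_clients, p=rng.dirichlet(alpha))
            if client_of.get(u, c) == c and client_of.get(v, c) == c:
                client_of[u] = client_of[v] = c
                subgraphs[c].append((u, v))
                break                             # unresolved conflicts drop the edge
    return subgraphs, client_of
```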
A.4 COMMUNICATION COSTS ANALYSIS The advantage of our approach is to exploit the local structure prior while making full use of the global information, which matches the characteristics of GNNs. This has the benefit of reducing communication costs and privacy concerns during the federated training process. Meanwhile, thanks to the utilization of the local structure information, we can obtain models with better representational power and thus improved performance. To demonstrate the effectiveness of our method, we provide experimental results comparing AdaFGL with the two most competitive methods, FedSage and FedGGCN, as shown in the two tables below (on Cora and Chameleon). According to the experimental results, we observe that AdaFGL maintains low communication costs and achieves satisfying results, which mainly benefits from the utilization of local structure information by the adaptive propagation modules. In comparison, FedSage, the currently most competitive FGL approach, suffers from a dilemma between performance improvement and communication costs, which also brings more privacy concerns.

A.5 HYPERPARAMETER SENSITIVITY ANALYSIS Here we conduct a hyperparameter sensitivity analysis of AdaFGL; the experimental results are shown in Fig. 6. In our experiments, we analyze the ratio of the online distillation loss in the homogeneous propagation module and the smoothing coefficient of the trainable propagation matrix in the heterogeneous propagation module. According to the experimental results, we observe that AdaFGL is robust except in extreme cases. Furthermore, from the results generated by extreme knowledge distillation loss ratios, we conclude that low-confidence base predictor results can instead affect the homogeneous propagation module. Motivated by this, in order to avoid global embeddings with low confidence influencing the propagation module, we measure the confidence of the global model according to the characteristics of the base predictor.

A.6 EXPERIMENTAL ENVIRONMENT AND ADDITIONAL BASE RESULTS The experiments are conducted on a machine with an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz and a single NVIDIA GeForce RTX 3090 with 24GB memory. The operating system of the machine is Ubuntu 18.04.6. As for software versions, we use Python 3.8, PyTorch 1.11.0, and CUDA 11.4. To alleviate the influence of randomness, we repeat each method 10 times and report the statistical characteristics. The hyperparameters of the baselines are set according to the original papers if available. We use Optuna Akiba et al. (2019) to implement the hyperparameter search. Following the above principles, we present the results of the two data partitionings as follows.
1. What is the focus and contribution of the paper on federated learning for subgraph learning?
2. What are the strengths and weaknesses of the proposed paradigm, particularly regarding its motivation, presentation, and theoretical analysis?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's discussion of related work, experimental design, and results?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a novel federated learning (FL) paradigm, AdaFGL, for subgraph learning (i.e., each client holds a subgraph and considers a node classification or link prediction task). AdaFGL is designed to handle structure non-iidness, a graph-unique non-iidness issue in FL. The authors conducted extensive empirical studies, which show that AdaFGL outperforms SOTA federated graph learning algorithms under both community-based and non-iid structure splits.

Strengths And Weaknesses
Strengths:
1. The proposed paradigm is well motivated, where the structure non-iidness has been discussed in FS-G but has not been well addressed before.
2. As the node classification task is often a semi-supervised learning setting, the proposed homogeneity confidence score (HCS) is interesting and tends to be helpful. It is novel to me.

Weaknesses:
1. It is difficult for me to follow the story due to its poor presentation. Specifically, I cannot find the definition of some important matrices such as X_global. Moreover, the bound in Eq. 6 and the theorem are very confusing.
2. What is the relationship between the base predictor and the HCS, especially considering that MLP does not depend on graph structure?
3. What are the trainable parameters at all? Among them, which are client-wise?
4. In Sec. 3.2, "base predictor to embed local subgraph nodes into the global knowledge space" is confusing. The authors just analyzed its error bound in Eq. 6.

Clarity, Quality, Novelty And Reproducibility
As I pointed out in weakness 1, there are some confusing points, which make the paper hard to follow; the presentation is poor. The discussion of related work seems to be comprehensive enough, imo. The experiments are designed to answer five research questions that relate to the core scientific question of this paper, so I think the quality of the empirical studies is satisfactory. I guess the authors rushed to prepare this submission, as the references are not unified in style; for example, some KDD22 papers have the venue, but FS-G (also a KDD22 paper) does not. The HCS is novel to me and should be helpful for estimating the homophily level under the semi-supervised setting. The idea of judging whether message propagation is helpful or harmful has been extensively studied, where high-pass and low-pass (and possibly identity) filters are often combined in various ways to handle both homophily and heterophily. The personalization scheme in this paper seems to be novel, but I have not fully understood its details due to the poor clarity. It seems that the experiments can be easily reproduced, as all the datasets and baselines are open-sourced.
ICLR
Title Unsupervised Class-Incremental Learning through Confusion Abstract While many works on Continual Learning have shown promising results for mitigating catastrophic forgetting, they have relied on supervised training. To successfully learn in a label-agnostic incremental setting, a model must distinguish between learned and novel classes to properly include samples for training. We introduce a novelty detection method that leverages network confusion caused by training incoming data as a new class. We found that incorporating a class-imbalance during this detection method substantially enhances performance. The effectiveness of our approach is demonstrated across a set of image classification benchmarks: MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB. 1 INTRODUCTION The development of continually learning systems remains a major obstacle in the field of artificial intelligence. The primary challenge is to mitigate catastrophic forgetting: learning new tasks while maintaining the ability to perform old ones. This domain of research is often referred to as Continual Learning, Lifelong Learning, Sequential Learning, or Incremental Learning: each with subtleties in the learning environment and training process, but most with the use of supervision (De Lange et al. (2020)). Recently, Stojanov et al. (2019) introduced a novel unsupervised class-incremental learning problem motivated by the desire to simulate how children's play behaviors support their ability to learn object models (IOLfCV). Here, sequential tasks take the form of exposures. Each exposure is comprised of a set of images that pertains to a single class that is hidden from the learner. Exposure boundaries, the transitions from one exposure to the next, are known. The model trained in this setting is analogous to a young child that has been placed in a playpen with a set of new toys. The child steadily gathers information over time by picking up, examining, and putting down new/old objects continuously. Just as the child does not have any guidance about the toys they will examine, the agent does not have access to the exposure's identity during training. To learn in the unsupervised class-incremental setting, an agent must conduct two procedures successfully. Given a new learning exposure, the key step is to perform novelty detection: to identify whether the exposure corresponds to a class that has already been learned. If the agent determines that an exposure is familiar, the second step is to identify its label such that the exposure can be leveraged to update the model. Both procedures must be performed reliably; otherwise, novelty detection mistakes will result in label noise that distorts the learned model, increasing the likelihood of subsequent mistakes. Deep neural networks are known to make overconfident decisions for anomalous data distributions that were not seen during training (Hendrycks & Gimpel (2016)). To address this problem, research related to out-of-distribution (OOD) detection has utilized supervised methods (Liang et al. (2017); Alemi et al. (2018)) and unsupervised methods (Choi & Jang (2018); Hendrycks et al. (2018); Serrà et al. (2019)). Works related to open set recognition have also addressed the OOD problem by applying distance-based thresholds computed from known class scores (Scheirer et al. (2012; 2014)). The work by Stojanov et al. (2019) applies a similar method to the unsupervised incremental setting by computing class features produced from a set of supervised samples.
In contrast, we propose a model, Incremental Learning by Accuracy Performance (iLAP), that determines class novelty and identity by considering the performance changes of previously learned tasks when an incoming set of exposure images is trained under a new label. Instead of using a distance-based metric, our novelty detection threshold relies on the percentage of accuracy that is maintained after performing a model update using the incoming exposure. This offers several practical advantages: First, the threshold value does not rely on supervised samples and is more intuitive (Section 3.3). Second, the performance of our method is independent of the sequence of the incoming exposure classes (Section 5.2). Finally, the model is able to distinguish between similar classes more reliably (Section 5.3). From our experiments, we demonstrate that the confusion resulting from training with label ambiguity provides a more reliable signal for novelty detection in comparison to previous methods. We demonstrate that our technique is more robust and results in substantial performance gains over various baselines. Furthermore, despite the absence of labels, our model was able to perform similarly to supervised models under several benchmarks. In summary, this work provides three contributions:
• We present a novel framework, iLAP, that achieves learning in the unsupervised class-incremental environment where the exposure identities are unknown.
• We demonstrate that by including a class-imbalance technique, our unsupervised method is able to closely match supervised performance for several image classification benchmarks trained in the incremental setting.
• We identify failure cases that are overlooked by traditional OOD methods that leverage distance-based thresholds.
2 RELATED WORKS Introduced by Stojanov et al. (2019), the unsupervised class-incremental setting contains a set of sequential tasks that are single-class exposures; classes pertaining to the exposures may repeat and are unknown. This is not to be confused with unsupervised continual learning (UCL), where task boundaries and task identities are unavailable (Lee et al. (2020); Smith & Dovrolis (2019); Rao et al. (2019)). Our work presents an agent that is able to leverage the boundary information of the unsupervised class-incremental environment to achieve performances that are close to models trained under supervision. 2.1 CONTINUAL LEARNING/INCREMENTAL LEARNING Prior works in this field primarily aim to improve a model's ability to retain information while incorporating new tasks (Goodfellow et al. (2013); Parisi et al. (2019); Rebuffi et al. (2017); Lopez-Paz & Ranzato (2017); Aljundi et al. (2018); Castro et al. (2018)). Typically, these models reside in learning settings where both task labels and task boundaries are available. Methods include replay techniques, the usage of artifacts and generated samples to refresh a model's memory (Kamra et al. (2017); Wu et al. (2018); Rolnick et al. (2019); Shin et al. (2017); Wu et al. (2019)), and regularization-based practices, the identification and preservation of weights that are crucial for the performance of specific tasks (Kirkpatrick et al. (2017); Zenke et al. (2017); Yoon et al. (2017)). In contrast to prior works, our method addresses incremental learning in a setting where exposure labels are unavailable. 2.2 UNSUPERVISED CONTINUAL LEARNING Recently, a series of works tackle the UCL problem, where task boundaries and task identities are unknown.
Smith & Dovrolis (2019) perform novelty detection by analyzing an input image through a series of receptive fields to determine whether an input patch is an outlier. Meanwhile, CURL proposes a method to learn class-discriminative representations through a set of shared parameters (Rao et al. (2019)). CN-DPM introduces an expansion-based approach that utilizes a mixture of experts to learn feature representations (Lee et al. (2020)). Although CN-DPM performs in a task-free setting, incoming tasks are multi-class and individual class labels are provided. This supervision is required to train the existing experts and determine when a new one is needed. While boundary information is not required for these works, the performances are far below supervised baselines (77.7% on MNIST and 13.2% on Omniglot) (Rao et al. (2019)). 2.3 OUT-OF-DISTRIBUTION DETECTION This ongoing area of research aims to detect outliers in training and testing data. Current approaches can be largely categorized into statistical, distance-based, and deep learning methods (Eskin (2000); Yamanishi et al. (2004); Knorr et al. (2000); Hautamaki et al. (2004); Sabokrou et al. (2018); Kliger & Fleishman (2018)). Recent techniques involve using a threshold to determine class novelty from network confidence values (Hendrycks & Gimpel (2016)). ODIN uses input perturbations to increase softmax scores for neural networks to distinguish in-distribution images from out-of-distribution images (Liang et al. (2017)). DeVries & Taylor (2018) incorporate a confidence branch to obtain out-of-distribution estimations. Our method (iLAP) is the first to incorporate a threshold value that depends on class-accuracy changes caused by data poisoning. 3 APPROACH In this section, we provide an overview of our method. We begin by identifying the learning setting, followed by details of our training process. Finally, insights for choosing threshold values are provided. 3.1 SETTING In the unsupervised class-incremental setting, a learner L perceives an input stream of exposures denoted as $E_1, E_2, \dots, E_n$. Each exposure contains a set of images, $E_i = \{e^i_1, e^i_2, \dots, e^i_{n_i}\}$, $e^i_j \in \mathbb{R}^{C \times H \times W}$, where C, H, and W are the number of channels, height, and width of the input image respectively. Each exposure belongs to a single class $y_i \in \mathbb{N}$, which has been sampled from a class distribution $P(C)$. For each $E_i$, L does not have knowledge of the true class label $y_i$. Two exemplars, $P_{train} = (P^1_{train}, P^2_{train}, \dots, P^{\hat{K}}_{train})$ and $P_{val} = (P^1_{val}, P^2_{val}, \dots, P^{\hat{K}}_{val})$, are maintained at all times, where $\hat{K}$ denotes the total number of classes L has currently determined. The exemplars are used to store samples from the exposures for replay and accuracy assessment. The sizes of both exemplars, $|P^i_{train}|$ and $|P^i_{val}|$, are bounded per class. 3.2 DETECTION TRAINING For each incoming exposure, the model is tasked to determine whether the class associated with the exposure was learned previously. Our solution is to perform a model update by treating the incoming exposure as a new class; we coin this technique detection training. Under the circumstances that the exposure class is repeated, the performance for the previously learned class suffers drastically after training. The reason for this behavior is that the model has associated two different labels with a similar class distribution. During detection training, a copy of L, denoted $\hat{L}$, is produced. The incoming exposure is assigned the label $\hat{K} + 1$.
A train-validation split is performed on the incoming exposure to obtain $E_{train}$ and $E_{val}$, which are aggregated with the exemplars $P_{train}$ and $P_{val}$ respectively. The combined samples are used to train $\hat{L}$ via validation-based early stopping. We denote by $\{\Delta_{\hat{y}}\}_{\hat{y} \in [\hat{K}]}$ the vector of percentage decreases of the class accuracies (computed using $P_{val}$) before and after the update. If $\max(\{\Delta_{\hat{y}}\})$ exceeds a threshold $\theta$, the incoming exposure is likely to have been learned by L. In this case, the correct identity pertaining to the exposure is $\arg\max_{\hat{y} \in [\hat{K}]} \Delta_{\hat{y}}$. Otherwise, if $\theta$ is not exceeded, $\hat{K} + 1$ is the appropriate label for the new class. 3.3 CLASS-IMBALANCE FOR DETECTION TRAINING Introducing a class-imbalance during detection training creates a more distinct decision boundary by exacerbating the class-accuracy drop for repeated exposures. Consider a theoretical case where an optimal model has learned $\hat{K}$ classes. The incoming exposure, $E_i$, contains a distribution that is equal to that of some previously learned class $\hat{y}_i$. If the model were updated with equal samples of $E_i$ labeled $\hat{K} + 1$ and $P^{\hat{y}_i}_{train}$ labeled $\hat{y}_i$, the accuracy for class $\hat{y}_i$ would become ambiguous during validation ($\hat{y}_i \approx 50\%$, $\hat{K} + 1 \approx 50\%$). However, if the model were updated with a greater number of samples with label $\hat{K} + 1$, the accuracy drop for class $\hat{y}_i$ would be considerably larger because $\hat{K} + 1$ would be favored during inference. $\hat{L}$ has the option to use an imbalanced dataset during detection training, where only a fraction of each class from $P_{train}$ is used relative to the size of $E_{train}$. Letting $P^i_{sampled} \subset P^i_{train}$, the class-imbalance ratio $\lambda$ is:
$$\lambda = 1 - \frac{|P^i_{sampled}|}{|E_{train}|} \quad (1)$$
In Figure 2, we use Fashion-MNIST (Xiao et al. (2017)) and Imagenette (Howard) to obtain general values of $\lambda$ and $\theta$ for our experiments with other datasets. Performance drops are tracked for the repeated class and the non-repeated classes during detection training with various values of $\lambda$. The goal is to maximize the performance loss for the repeated class while maintaining the performance for the other classes. In Section 4, $\theta = 0.6$ and $\lambda = 0.5$ are used. $\lambda$ is determined by the point of maximum distance between the two accuracy curves; $\theta$, the expected accuracy drop for a repeated exposure, is the mean of the two values at $\lambda$ (Figure 2). 3.4 MODEL AND EXEMPLAR UPDATE After obtaining the predicted label ($\hat{K} + 1$ or $\arg\max_{\hat{y} \in [\hat{K}]} \Delta_{\hat{y}}$) from detection training, L is trained with the aggregated dataset obtained from $P_{train}$ and $E_{train}$. Two sets with the most representative samples from $E_{train}$ and $E_{val}$ are created and added to their respective exemplars for future replay. The selection process ranks the distance between the image features and the class-mean image features. This methodology is consistent with the procedure introduced by Castro et al. (2018). The availability of $P_{val}$ allows L to assess how well old and new classes are handled. If the accuracy for any class interpreted by L falls below a percentage threshold, the class is discarded altogether. This allows the model to remove extraneous classes that were learned insufficiently or have been forgotten. In Section 4, a percentage threshold of 20% is used for all experiments.
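Before moving to the experiments, the decision rule of Sections 3.2-3.3 can be summarized in a short, hedged sketch. The helpers `train` and `class_accuracies` are hypothetical stand-ins for early-stopped fitting and per-class validation accuracy, and classes are 0-indexed so the candidate new label is `k_hat`:

```python
import copy

def detect_novelty(model, e_train, e_val, p_train, p_val,
                   lam=0.5, theta=0.6):
    """Detection training: treat the exposure as a new class and watch
    which old class (if any) loses accuracy."""
    k_hat = len(p_train)                     # classes learned so far
    probe = copy.deepcopy(model)             # L_hat, a copy of L

    # Class imbalance: keep (1 - lambda) * |E_train| exemplars per class.
    n_keep = int((1 - lam) * len(e_train))
    data = [(x, k_hat) for x in e_train]     # exposure gets the new label
    for y, exemplars in enumerate(p_train):
        data += [(x, y) for x in exemplars[:n_keep]]

    before = class_accuracies(probe, p_val)  # hypothetical helper
    train(probe, data, val=e_val)            # hypothetical, early-stopped
    after = class_accuracies(probe, p_val)

    drops = [(before[y] - after[y]) / max(before[y], 1e-8)
             for y in range(k_hat)]          # percentage accuracy drops
    if k_hat > 0 and max(drops) > theta:
        return drops.index(max(drops))       # repeated: recovered identity
    return k_hat                             # novel: assign the new label
```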
4 EXPERIMENTS The performance of our framework is evaluated using a series of image classification benchmarks: MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB (LeCun (1998); Netzer et al. (2011); Krizhevsky et al. (2009); Stojanov et al. (2019)). First, we compare our novelty detector to related OOD methods. Next, we evaluate the performance of iLAP against other incremental learners: BiC (Wu et al. (2019)) (supervised) and IOLfCV (unsupervised). 4.1 EXPERIMENTAL DETAILS For the following experiments, a ResNet-18 model (He et al. (2016)) pre-trained on ImageNet is used for iLAP and all baselines (additional experiments without pretraining are presented in Appendix A.4). $\lambda = 0.5$ and $\theta = 0.6$ are used for iLAP with class-imbalance detection training, while $\lambda = 0$ and $\theta = 0.4$ are used for iLAP without class-imbalance detection training. The parameters are maintained across all benchmarks. For each exposure, the model is trained for 15 epochs with a batch size of 16, using an Adam (Kingma & Ba (2014)) optimizer with validation-based early stopping; a learning rate of 2e−4 is used. The feature extraction layers use a ten-times-lower learning rate of 2e−5. For all models, the exemplar size per class is equal to the exposure size. The exposure validation split ratio is 0.8 (e.g., for exposure size 200, iLAP uses $|E_{train}| = 160$ and $|E_{val}| = 40$). The thresholds used for IOLfCV in Section 4.1 were determined by maximizing the F-score for the classification of in-distribution exposures versus out-of-distribution exposures. To obtain the best performance possible, the entirety of the dataset was used. The values are 0.46, 0.63, 0.57, and 0.62 for MNIST, SVHN, CIFAR-10, and CIFAR-100 respectively. 4.2 OUT-OF-DISTRIBUTION DETECTION RESULTS The OOD detectors are assessed in an incremental setting with exposures of size 200. The detectors are evaluated on their ability to determine whether an exposure is novel using the commonly established metrics FPR95, AUROC, and AUPR (Hendrycks & Gimpel (2016)). Details of the compared works and evaluation methods are described in Appendix A.1. The results are illustrated in Table 1. 4.3 INCREMENTAL LEARNING RESULTS The accuracy of learner L is computed using the ground-truth mapping $m : [\hat{K}] \to [K]$ with the equations
$$S(x, y) = \begin{cases} \frac{1}{|m^{-1}(y)|}, & \text{if } L(x) \in m^{-1}(y), \\ 0, & \text{otherwise,} \end{cases} \qquad \text{Accuracy} = \mathbb{E}_{x, y \sim \text{test}}\,[S(x, y)].$$
The learner accuracy is the mean of the sample accuracy scores evaluated on the test set, where (x, y) represents a single sample. For each sample with label y, let $m^{-1}(y)$ represent the corresponding labels from the learner. In the case that a learner output does not belong to the set $m^{-1}(y)$, an accuracy score of 0 is assigned (the class is not detected). Otherwise, an accuracy score of $\frac{1}{|m^{-1}(y)|}$ is assigned to penalize the learner if m is non-injective and has attributed multiple labels to a single ground-truth class. Performance results are illustrated in Tables 2 & 3. Additional visualizations are provided in Appendices A.2, A.3, and A.4.
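A compact sketch of this accuracy computation may help; it is hedged in that the mapping m is assumed to be stored as a dict from learner labels to ground-truth classes, which the text does not specify:

```python
from collections import defaultdict

def learner_accuracy(predictions, labels, m):
    """Accuracy under a (possibly non-injective) learner-to-ground-truth
    mapping m; m_inv[y] collects all learner labels attributed to y."""
    m_inv = defaultdict(set)
    for k_hat, y in m.items():
        m_inv[y].add(k_hat)

    score = 0.0
    for pred, y in zip(predictions, labels):
        if pred in m_inv[y]:
            score += 1.0 / len(m_inv[y])   # penalize non-injective mappings
    return score / len(labels)
```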
5 ANALYSIS Traditional OOD methods that rely on distance-based thresholds are restricted by the supervised samples that are available. These values are non-intuitive and vary drastically across datasets (whereas our percentage threshold is ≈ 50% for all datasets). In the incremental learning setting, early mistakes are amplified as more exposures are introduced; a proper threshold initialization dictates a model's feasibility. However, we argue that even with a good threshold these methods will consistently fail under particular conditions. The purpose of this section is to discuss the results obtained from our experiments. Subsequently, we highlight a few cases that are overlooked by traditional distance-based methods. 5.1 OUT-OF-DISTRIBUTION DETECTION ANALYSIS iLAP with class-imbalance detection training (CI) was able to outperform related OOD methods on the MNIST, SVHN, CIFAR-10, and CIFAR-100 benchmarks under all metrics (Table 1). However, the results for iLAP without CI were not as definitive. CE performed the worst in the incremental setting, possibly because the performance of the confidence branch relies on larger training samples. IOLfCV's method performed on par with related methods. 5.2 UNSUPERVISED INCREMENTAL LEARNING ANALYSIS iLAP with CI was able to beat IOLfCV by 10.0, 41.3, 22.1, 7.0, and 1.0 percentage points on the MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB benchmarks respectively (Table 2). iLAP was also able to maintain its performance when exposure sizes are decreased (Appendix A.3). We found that the smaller margin on CIFAR-100 and CRIB is not directly attributable to the larger number of classes in the dataset. Rather, the problem lies with how the exposure sequence for the incremental learning setting is created and how the distance-based threshold is calculated. The threshold for IOLfCV is computed by maximizing the F-score for the binary classification of novel versus non-novel classes. In our experiments, the entirety of the dataset was used to compute the baseline's threshold. Although this is impractical, we wanted to illustrate that iLAP is able to beat the baseline even under the most optimal conditions. In Figure 3 we illustrate the behavior of the class feature distances as more classes are incorporated by a network. The optimal threshold that maximizes the F-score lies at the mid-point between the two curves when all 100 classes are learned. However, because the threshold is fixed, the novelty detector fails to correctly identify repeated classes early on during training and is more inclined to label repeated classes as unseen (shaded red area in Figure 3). Consistent with the setting described in Stojanov et al. (2019), each class within a benchmark is repeated an equal number of times in a randomized sequence. For datasets with a large number of classes, there is a higher chance that the repeated exposures are further apart. Therefore, IOLfCV seemingly performs comparatively better on CIFAR-100 than on CIFAR-10, but would fail if early repeated exposures were frequent. 5.3 CLASS SIMILARITY iLAP was able to detect all classes for MNIST, SVHN, and CIFAR-10, and on average 96.5 out of 100 classes for CIFAR-100. Meanwhile, IOLfCV struggles to identify unique classes for all evaluated benchmarks. Through closer inspection, we found that the distance-based method is unable to distinguish classes when they are too similar. Consider two classes, $k_1$ and $k_2$, that can be separated by a classifier C in a learned feature space F. An incoming exposure, class $k_3$, shares a similar distribution with class $k_1$ in feature space F, but is separable in some feature space F′. The distance-based method is highly likely to fail because it will probably categorize $k_1$ and $k_3$ as identical classes. However, because our method always trains the incoming exposure as a new class, C is forced to learn the feature space F′ in which these two classes are separable. Figure 4 illustrates two prior classes, boy and lamp, in some feature space. The incoming exposure, class girl, cannot be distinguished from class boy by the distance-based method (Figure 4, left).
However, because detection training always attempts to classify the incoming exposure as a new class, our method is able to identify F′ (Figure 4, right). 6 CONCLUSION To achieve learning in an unsupervised class-incremental setting, a reliable novelty detector is needed. Current methods utilize a detection threshold that is calibrated using class feature distances. In our work, we illustrate that the use of a static distance-based threshold is not only impractical but also unreliable. Instead, we introduce a technique that leverages confusion error to perform novelty detection by always training the incoming exposure as a new class. Using a series of image classification benchmarks, we illustrate that our method is able to closely rival supervised performance despite the lack of labels. A APPENDIX A.1 OOD EXPERIMENTAL DETAILS A brief description of the evaluation metrics used for Table 1:
• False Positive Rate at 95% (FPR95): This measure determines the False Positive Rate (FPR) when the True Positive Rate (TPR) is equal to 95%. FPR is calculated as FP/(FP + TN), where FP is the number of False Positives and TN is the number of True Negatives. TPR is calculated as TP/(TP + FN), where TP is the number of True Positives and FN is the number of False Negatives.
• Area Under the Receiver Operating Characteristic (AUROC): This metric illustrates the relationship between the TPR and the FPR. It measures the probability that a novel class will have a higher detection score than a non-novel class.
• Area Under the Precision-Recall curve (AUPR): This metric is constructed by plotting precision versus recall. The AUPR curve treats the novel examples as the positive class. A high area value represents high recall and high precision.
The detection training method used in iLAP is compared to a set of related works in OOD. The following works reflect the acronyms used in Table 1:
• MSP (Hendrycks & Gimpel (2016)): MSP is computed from a trained classifier to perform out-of-distribution detection. The mean MSP over all images belonging to an exposure is used to determine novelty.
• CE (DeVries & Taylor (2018)): CE extends a model with a confidence branch to obtain a set of values that reflect the model's ability to produce a correct prediction. When the mean estimation value for a set of input images is low, the sample is likely to be novel.
• ODIN (Liang et al. (2017)): ODIN uses temperature scaling and input perturbations to widen the MSP difference between in-distribution and out-of-distribution samples. The optimal values for the temperature and perturbation magnitude are found by minimizing the FPR using grid search. Values of 2 and 0.0012 are used for the temperature and perturbation magnitude respectively, for both the MNIST and CIFAR-10 datasets.
• Zero-Shot OOD (Sastry & Oore (2019)): Zero-Shot OOD uses pairwise correlations of network features to detect novel samples. The class-conditional feature correlations on the training data are computed across all layers of the network. The mean correlations are then compared with the pairwise mean feature correlations of a test sample to obtain a deviation value.
• IOLfCV (Stojanov et al. (2019)): IOLfCV determines a distance-based threshold computed using average-feature means from a network trained on supervised samples. Two initialized classes are used to compute the threshold by finding the optimal point using precision-recall analysis.
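Of the metrics defined at the start of this appendix, FPR95 is the least standard; a minimal, hedged sketch of one way to compute it (assuming higher scores indicate novelty and treating novel exposures as the positive class) is:

```python
import numpy as np

def fpr_at_95_tpr(novel_scores, known_scores):
    """FPR at the threshold where 95% of truly novel exposures are
    flagged as novel (i.e., TPR = 95%)."""
    novel_scores = np.sort(np.asarray(novel_scores))
    # Threshold admitting 95% of the novel exposures.
    threshold = novel_scores[int(0.05 * len(novel_scores))]
    # Fraction of known (non-novel) exposures wrongly flagged as novel.
    return float(np.mean(np.asarray(known_scores) >= threshold))
```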
A.2 MAIN EXPERIMENT RESULTS The following results are produced using a GTX TITAN X GPU. For each exposure, the model is trained for 15 epochs with a batch size of 16, using an Adam optimizer. The learning rate used is 2e−4. The feature extraction layers use a ten-times-lower learning rate of 2e−5. The input size is 224. A.3 EXPERIMENTS WITH LOWER EXEMPLAR SIZES In this section, we showcase iLAP's results at lower exposure sizes. A.4 EXPERIMENTS WITHOUT PRE-TRAINING
1. What is the main contribution of the paper regarding unsupervised class-incremental learning?
2. What are the strengths and weaknesses of the proposed method, particularly in its ability to handle novelty detection, class imbalance, and confusion?
3. Are there any questions or concerns regarding the experimental settings, comparisons with related works, and the handling of repetition and forgetting in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review
This paper proposes to tackle the problem of unsupervised class-incremental learning, where the training data is composed of a sequence of "exposures". Each exposure is comprised of a set of images that pertains to a single class, where the class label is unknown while the boundaries between exposures are known. The key difficulty in such unsupervised class-incremental learning is to determine whether an arriving exposure belongs to what the classification model L has learnt previously or is a novel one, thus relating to the problem of novelty detection. The proposed method addresses novelty detection with an interesting idea: the current exposure is always treated as a novel class and used to train a copy L̂ of the classification model together with the training exemplars of previously learnt classes; if the current exposure actually belongs to one of the previously learnt classes, confusion occurs and the classification accuracy decreases significantly (beyond a threshold) on that specific class, where the accuracy is computed on the validation exemplars. Moreover, a technique of introducing class imbalance into such confusion-based novelty detection is proposed and helps to boost the robustness of novelty detection. There are some pros and cons of this paper, as listed below.

Pros:
1. The idea of using confusion to address novelty detection is novel and interesting, where the corresponding threshold is easier to determine and contributes to better out-of-distribution performance in comparison to related works that use a static distance-based threshold.
2. The introduction of class imbalance works well with the confusion-based novelty detection, and its contribution is experimentally verified on various datasets.
3. The overall performance of the proposed method on unsupervised incremental learning is better than an unsupervised baseline (IOLfCV) and comparable to a supervised one (BiC).

Cons:
1. Figure 2 is a little difficult for understanding the properties of seen and unseen classes with respect to the class-imbalance ratio λ at first sight; e.g., why would the curve of unseen classes go up with larger λ? Perhaps it would be better to replace the terminology of "seen" and "unseen" classes with "repeated" and "non-repeated" classes.
2. There is another closely related type of incremental learning: unsupervised continual learning. Although its setting is more difficult than the unsupervised class-incremental learning tackled in this paper, it would still be nice to have baselines from unsupervised continual learning to provide more insights to the readers.
3. As the most closely related work of this paper is Stojanov et al., CVPR 2019 (also addressing the unsupervised class-incremental learning problem), why is the CRIB dataset proposed by Stojanov et al. not used for evaluation here, in order to have a more direct comparison? Moreover, as indicated by Stojanov et al., the repetition of classes (e.g., how frequently a learnt class arrives again for learning) plays an important role in model performance, so there should be a clear description of the experimental setting of repetition as well as an investigation of it in this paper.
4. Furthermore, in the paper of Stojanov et al., they experiment with classification models having pre-trained feature extraction or being learnt from scratch. However, in this paper only the classification model pretrained on ImageNet is adopted.
There should be experimental results and corresponding discussion on training the classification model from scratch, for a better understanding of how the proposed confusion-based novelty detection behaves.
5. Lastly, it is also important to investigate the forgetting effect. When updating the classification model with predicted labels, are techniques for avoiding catastrophic forgetting used (e.g., knowledge distillation)? If not, how does the proposed method prevent catastrophic forgetting from happening? If forgetting does happen, will the confusion-based novelty detection still work?

In brief, this paper proposes an interesting idea of using a confusion-based novelty detection approach to tackle unsupervised class-incremental learning, but it needs more experiments and discussions to make the paper more complete and ready for ICLR. I would expect to see the concerns listed above being well addressed in the rebuttal.
ICLR
Title Unsupervised Class-Incremental Learning through Confusion Abstract While many works on Continual Learning have shown promising results for mitigating catastrophic forgetting, they have relied on supervised training. To successfully learn in a label-agnostic incremental setting, a model must distinguish between learned and novel classes to properly include samples for training. We introduce a novelty detection method that leverages network confusion caused by training incoming data as a new class. We found that incorporating a class-imbalance during this detection method substantially enhances performance. The effectiveness of our approach is demonstrated across a set of image classification benchmarks: MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB. 1 INTRODUCTION The development of continually learning systems remains to be a major obstacle in the field of artificial intelligence. The primary challenge is to mitigate catastrophic forgetting: learning new tasks while maintaining the ability to perform old ones. This domain of research is often referred to as Continual Learning, Lifelong Learning, Sequential Learning, or Incremental Learning: each with subtleties in the learning environment and training process, but most with the use of supervision (De Lange et al. (2020)). Recently, Stojanov et al. (2019) introduced a novel unsupervised class-incremental learning problem motivated by the desire to simulate how children’s play behaviors support their ability to learn object models (IOLfCV). Here, sequential tasks take the form of exposures. Each exposure is comprised of a set of images that pertains to a single class that is hidden from the learner. Exposure boundaries, the transition from one exposure to the next, are known. The model trained in this setting, is analogous to a young child that has been placed in a playpen with a set of new toys. The child steadily gathers information over time by picking up, examining, and putting down new/old objects continuously. Similar to how the child does not have any guidance to the toys they will examine, the agent does not have access to the exposure’s identity during training. To learn in the unsupervised class-incremental setting, an agent must conduct two procedures successfully. Given a new learning exposure, the key step is to perform novelty detection: to identify whether an exposure corresponds to a class that has been learned. If the agent determines that an exposure is familiar, the second step is to identify its label such that the exposure can be leveraged to update the model. Both procedures must be performed reliably. Otherwise, the novelty detection mistakes will result in label noise that distorts the learned model, increasing the likelihood of subsequent mistakes. Deep neural networks are known to make overconfident decisions for anomalous data distributions that were not seen during training (Hendrycks & Gimpel (2016)). To address this problem, research related to out-of-distribution (OOD) detection have utilized supervised methods (Liang et al. (2017); Alemi et al. (2018)) and unsupervised methods (Choi & Jang (2018); Hendrycks et al. (2018); Serrà et al. (2019)). Works related to open set recognition have also addressed the OOD problem by applying distance-based thresholds computed from known class scores (Scheirer et al. (2012; 2014)). The work by Stojanov et al. (2019) applies a similar method to the unsupervised incremental setting by computing class features produced from a set of supervised samples. 
In contrast, we propose a model, Incremental Learning by Accuracy Performance (iLAP), that determines class novelty and identity by considering performance changes of previously learned tasks when an incoming set of exposure images are trained under a new label. Instead of using a distance-based metric, our novelty detection threshold relies on the percentage of accuracy that was maintained by performing a model update using the incoming exposure. This poses several practical advantages: First, the threshold value does not rely on supervised samples and is more intuitive (Section 3.3). Second, the performance of our method is independent of the sequence of the incoming exposure classes (Section 5.2). Finally, the model is able to distinguish between similar classes more reliably (Section 5.3). From our experiments, we demonstrate that the confusion resulting from training with label ambiguity results in a more reliable signal for novelty detection in comparison to previous methods. We demonstrate that our technique is more robust and results in substantial performance gains in comparison to various baselines. Furthermore, despite the absence of labels, our model was able to perform similarly to supervised models under several benchmarks. In summary, this work provides three contributions: • We present a novel framework, iLAP, that achieves learning in the unsupervised classincremental environment where the exposure identities are unknown. • We demonstrate that by including a class-imbalance technique, our unsupervised method is able to closely match supervised performance for several image classification benchmarks trained in the incremental setting. • We identify failure cases that are overlooked by traditional OOD methods that leverage distance-based thresholds. 2 RELATED WORKS Introduced by Stojanov et al. (2019), the unsupervised class-incremental setting contains a set of sequential tasks that are single-class exposures; classes pertaining to the exposures may repeat and are unknown. This is not to be mistaken with unsupervised continual learning (UCL) where task boundaries and task identities are unavailable (Lee et al. (2020); Smith & Dovrolis (2019); Rao et al. (2019)). Our work presents an agent that is able to leverage the boundary information from the unsupervised class-incremental environment to achieve performances that are close to models trained under supervision. 2.1 CONTINUAL LEARNING/INCREMENTAL LEARNING Prior works in this field primarily aim to improve a model’s ability to retain information while incorporating new tasks (Goodfellow et al. (2013); Parisi et al. (2019); Rebuffi et al. (2017); LopezPaz & Ranzato (2017); Aljundi et al. (2018); Castro et al. (2018)). Typically, these models reside in learning settings where both task labels and task boundaries are available. Methods include replay techniques, the usage of artifacts and generated samples to refresh a model’s memory (Kamra et al. (2017); Wu et al. (2018); Rolnick et al. (2019); Shin et al. (2017); Wu et al. (2019)), and regularization-based practices, the identification and preservation of weights that are crucial for the performance of specific tasks (Kirkpatrick et al. (2017); Zenke et al. (2017); Yoon et al. (2017)). In contrast to prior works, our method addresses incremental learning in a setting where exposure labels are unavailable. 2.2 UNSUPERVISED CONTINUAL LEARNING Recently, a series of works tackle the UCL problem where task boundaries and task identities are unknown. 
Smith & Dovrolis (2019) performs novelty detection by analyzing an input image through a series of receptive fields to determine if an input patch is an outlier. Meanwhile, CURL proposes a method to learn class-discriminative representations through a set of shared parameters (Rao et al. (2019)). CN-DPM, introduces an expansion-based approach that utilizes a mixture of experts to learn feature representations (Lee et al. (2020)). Although CN-DPM performs in a task-free setting, incoming tasks are multi-class and individual class labels are provided. This supervision is required to train the existing experts and determine when a new one is needed. While boundary information is not required for these works, the performances are far below supervised baselines (77.7% on and MNIST 13.2% Omniglot) (Rao et al. (2019)). 2.3 OUT-OF-DISTRIBUTION DETECTION This ongoing area of research aims to detect outliers in training and testing data. Current approaches can be largely categorized by statistical, distance-based, and deep learning methods (Eskin (2000); Yamanishi et al. (2004); Knorr et al. (2000); Hautamaki et al. (2004); Sabokrou et al. (2018); Kliger & Fleishman (2018)). Recent techniques involve using a threshold to determine class novelty from network confidence values (Hendrycks & Gimpel (2016)). ODIN uses input perturbations to increase softmax scores for neural networks to distinguish in-distribution images from out-of-distribution images (Liang et al. (2017)). DeVries & Taylor (2018) incorporates a confidence branch to obtain out-of-distribution estimations. Our method (iLAP) is the first to incorporate a threshold value that is dependent on class-accuracy changes caused by data poisoning. 3 APPROACH In this section, we provide an overview of our method. We begin by identifying the learning setting, followed by details of our training process. Finally, insights for choosing threshold values are provided. 3.1 SETTING In the unsupervised class-incremental setting, a learner L perceives an input stream of exposures denoted as E1, E2, ..., En. Each exposure contains a set of images, Ei = {ei1, ei2, ..., eini}, e i j ∈ RC×H×W , where C, H , and W are the number of channels, height, and width of the input image respectively. Each exposure belongs to a single class yi ∈ N, which has been sampled from class distribution P (C). For each Ei, L does not have knowledge of the true class label yi. Two exemplars, Ptrain = (P 1train, P 2train, ..., P K̂train) and Pval = (P 1val, P 2val, ..., P K̂val), are maintained at all times, where K̂ denotes the total number of classes L has currently determined. The exemplars are used to store samples from the exposure for replay and accuracy assessment. The sizes of both exemplars,∣∣P itrain∣∣ and ∣∣P ival∣∣, are bounded per class. 3.2 DETECTION TRAINING For each incoming exposure, the model is tasked to determine whether the class associated with the exposure was learned previously. Our solution is to perform a model update by treating the incoming exposure as a new class, we coin this technique detection training. Under the circumstances that the exposure class is repeated, the performance for the previously learned class would suffer drastically after training. The reason for this behavior is because the model has associated two different labels to a similar class distribution. During detection training, L̂, a copy of L is produced. The incoming exposure is assigned with label K̂ + 1. 
Train-validation split is performed on the incoming exposure to obtain Etrain and Eval, and are aggregated with exemplars Ptrain and Pval respectively. The combined samples are used to train L̂ via validation-based early stopping. We denote the vector {∆ŷ}ŷ∈[K̂] to represent the percentage decrease of the class accuracies (computed using Pval) before and after the update. If max({∆ŷ}) exceeds a threshold, θ, the incoming exposure is likely to have been learned by L. In this case, the correct identity pertaining to the exposure is arg maxŷ∈[K̂] ∆ŷ . Otherwise, if θ is unsatisfied, K̂ + 1 is the appropriate label for the new class. 3.3 CLASS-IMBALANCE FOR DETECTION TRAINING Introducing a class-imbalance during detection training creates a more distinct decision boundary by exacerbating the class-accuracy drop for repeated exposures. Consider a theoretical case where an optimal model has learned K̂ classes. The incoming exposure, Ei, contains a distribution that is equal to that of some previously learned class ŷi. If the model were updated with equal samples of Ei labeled K̂ + 1, and P ŷitrain labeled ŷi, the accuracy for class ŷi would become ambiguous during validation (ŷi≈ 50%, K̂ + 1≈ 50%). However, if the model were updated with a greater sample of K̂ + 1 labels, the accuracy drop for class ŷi would be considerably larger because K̂ + 1 would be favored during inference. L̂ has the option to use an imbalanced dataset during detection training where a fraction of the size for each class from Ptrain are used in comparison to the size of Etrain. Let P isampled ⊂ P itrain, the class-imbalance ratio λ is: λ = 1− |P isampled| |Etrain| (1) In Figure 2, we use Fashion-MNIST (Xiao et al. (2017)) and Imagenette (Howard) to obtain a general value for λ and θ for our experiments with other datasets. Performance drops are tracked for the repeated class and non-repeated classes during detection training with various values of λ. The goal is to maximize the performance loss for the repeated class while maintaining the performance for the other classes. In Section 4 a θ value of 0.6 and a λ value of 0.5 are used. λ is determined by the point of the maximum distance between the two accuracy curves; θ, the expected accuracy drop for a repeated exposure, is the mean of the two values at λ Figure 2. 3.4 MODEL AND EXEMPLAR UPDATE After obtaining the predicted label (K̂ + 1 or arg maxŷ∈[K̂] ∆ŷ) from detection training, L is trained with the aggregated dataset obtained from Ptrain and Etrain. Two sets with the most representative samples from Etrain and Etest are created and added to their respective exemplars for future replay. The selection process is determined by ranking the distance between the image features and the class-mean image features. This methodology is consistent with the procedure introduced by Castro et al. (2018). The availability of Pval allows L to assess how well old and new classes are considered. If the accuracy for any class interpreted by L falls below a percentage, the class is altogether discarded. This allows the model to remove extraneous classes that were learned insufficiently or have been forgotten. In Section 4, a percentage threshold of 20% is used for all experiments. 4 EXPERIMENTS The performance of our framework is evaluated using a series of image classification benchmarks: MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB (LeCun (1998); Netzer et al. (2011); Krizhevsky et al. (2009); Stojanov et al. (2019)). First, we compare our novelty detector to related OOD methods. 
Next, we evaluate performance of iLAP to that of other incremental learners: BiC (Wu et al. (2019)) (supervised) and IOLfCV (unsupervised). 4.1 EXPERIMENTAL DETAILS For the following experiments, a ResNet-18 model (He et al. (2016)) pre-trained with ImageNet is used for iLAP and all baselines (additional experiments without pretraining are presented in Appendix A.4). λ = 0.5 and θ = 0.6 are used for iLAP with class-imbalance detection training, while a λ = 0 and θ = 0.4 are used for iLAP without class-imbalance detection training. The parameters are maintained across all benchmarks. For each exposure, the model is trained for 15 epochs with 16 batch size, using an Adam (Kingma & Ba (2014)) optimizer with validation-based early stopping; a learning rate of 2e−4 is used. The feature extraction layers use a ten times lower learning rate at 2e−5. For all models, the exemplar size per class is equal to the exposure size. The exposure validation split ratio is 0.8 (e.g. for exposure size = 200, iLAP: [Etrain] = 160 and [Eval] = 40). The thresholds used for IOLfCV in 4.1 were determined by maximizing the F-score for the classification of in-distribution exposures versus out-of-distribution exposures. To obtain the best performance possible, the entirety of the dataset was used. The values are 0.46, 0.63, 0.57, and 0.62 for MNIST, SVHN, CIFAR-10, and CIFAR-100 respectively. 4.2 OUT-OF-DISTRIBUTION DETECTION RESULTS The OOD detectors are assessed in an incremental setting with size 200 exposures. The detectors are evaluated on their ability to determine whether an exposure is novel by using the common established metrics: FPR95, AUROC, and AUPR (Hendrycks & Gimpel (2016)). Details of the compared works and evaluation methods are described in Appendix A.1. The results are illustrated in Table 1. 4.3 INCREMENTAL LEARNING RESULTS The accuracy of learner L is computed using the ground-truth mapping m : [K̂] → [K] with the equations: S(x, y) = { 1 |m−1(y)| , if L(x) ∈ m −1(y) 0, otherwise Accuracy = Ex,y∼test [S(x, y)] The learner accuracy is the mean of the sample accuracy scores evaluated on the test set, where (x, y) represents a single sample. For each sample with label y, let m−1(y) represent the corresponding labels from the learner. In the case that a learner output does not belong to set m−1(y), an accuracy score of 0 is assigned (class is not detected). Otherwise, an accuracy score of 1|m−1(y)| is designated to penalize the learner if m is non-injective and have attributed multiple labels to a single ground truth class. Performance results are illustrated in Table 2 & 3. Additional visualizations are provided in Appendix A.2, A.3 and A.4. 5 ANALYSIS Traditional OOD methods that rely on distance-based thresholds are restricted by the supervised samples that are available. These values are non-intuitive and vary drastically across datasets (whereas our percentage threshold are ≈ 50% for all datasets). In the incremental learning setting, early mistakes are amplified as more exposures are introduced, a proper threshold initialization dictates a model’s feasibility. However, we argue that even with a good threshold these methods will consistently fail in particular conditions. The purpose of this section is to discuss the results obtained from our experiments. Subsequently, we highlight a few cases that are overlooked by traditional distance-based methods. 
5.1 OUT-OF-DISTRIBUTION DETECTION ANALYSIS iLAP with class-imbalance detection training (CI) was able to outperform related OOD methods for the MNIST, SVHN, CIFAR-10, and CIFAR-100 benchmarks under all metrics Table 1. However, the results for iLAP without CI were not as definitive. CE performed the worst in the incremental setting, possibly because the performance of the confidence branch is reliant on larger training samples. IOLfCV’s method performed on par with related methods. 5.2 UNSUPERVISED INCREMENTATL LEARNING ANALYSIS iLAP with CI was able to beat IOLfCV by 10.0, 41.3, 22.1, 7.0, and 1.0 percentage points for the MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB benchmarks respectively Table 2. iLAP was also able to maintain its performance when exposure sizes are decreased Appendix A.3. We found that the reason for the lower performance beat for CIFAR-100 and CRIB is not directly attributed to the larger number of classes in the dataset. Rather, the problem lies with how the exposure sequence for the incremental learning setting is created and how the distance-based threshold is calculated. The threshold for IOLfCV is computed by maximizing the F-score for the binary classification of novel versus non-novel classes. In our experiments, the entirety of the dataset was used to compute the baseline’s threshold. Although this is impractical, we wanted to illustrate that iLAP is able to beat the baseline even under the most optimal conditions. In Figure 3 we illustrate the behavior of the class feature distances as more classes are incorporated by a network. The most optimal threshold that maximizes the F-score lies in the mid-point between the two graphs when all 100 classes are learned. However, because the threshold is fixed, the novelty detector fails to correctly identify repeated classes early on during training and is more inclined to label repeated classes as unseen, (shaded red area in Figure 3). Consistent with the described setting in Stojanov et al. (2019), each class within a benchmark is repeated an equal number of times in a randomized sequence. For datasets with a large number of classes, there is a higher chance that the repeated exposures are further apart. Therefore, IOLfCV seemingly performs comparatively better on CIFAR-100 than CIFAR-10, but would fail if early repeated exposures were frequent. 5.3 CLASS SIMILARITY iLAP was able to detect all classes for MNIST, SVHN, and CIFAR-10 and on average 96.5 out of 100 classes for CIFAR-100. Meanwhile, IOLfCV struggles to identify unique classes for all evaluated benchmarks. Through closer inspection, we found that the distance-based method is unable to distinguish classes when they are too similar. Consider two classes, k1 and k2, that can be separated by a classifier C in learned feature space F . An incoming exposure, class k3, shares a similar distribution to class k1 in feature space F , but separable in some feature space F ′. The distance-based method is highly probable to fail because it is likely to categorize k1 and k3 as identical classes. However, because our method always trains the incoming exposure as a new class, C is forced to learn the feature space F ′ in which these two classes are separable. Figure 4 illustrates two prior classes, boy and lamp, in some feature space. The incoming exposure, class girl, is unable to be distinguished from class boy by the distance-based method Figure 4 (left). 
However, because detection training always attempts to classify the incoming exposure as a new class, our method is able to identify F′ (Figure 4, right).

6 CONCLUSION

To achieve learning in an unsupervised class-incremental setting, a reliable novelty detector is needed. Current methods utilize a detection threshold that is calibrated using class feature distances. In our work, we illustrate that the use of a static distance-based threshold is not only impractical but also unreliable. Instead, we introduce a technique that leverages confusion error to perform novelty detection by always training the incoming exposure as a new class. Using a series of image classification benchmarks, we illustrate that our method is able to closely rival supervised performance despite the lack of labels.

A APPENDIX

A.1 OOD EXPERIMENTAL DETAILS

A brief description of the evaluation metrics used for Table 1:

• False Positive Rate at 95% (FPR95): This measure reports the False Positive Rate (FPR) when the True Positive Rate (TPR) is equal to 95%. FPR is calculated as FP/(FP+TN), where FP is the number of False Positives and TN is the number of True Negatives. TPR is calculated as TP/(TP+FN), where TP is the number of True Positives and FN is the number of False Negatives.
• Area Under the Receiver Operating Characteristic (AUROC): This metric captures the relationship between the TPR and the FPR. It measures the probability that a novel class will receive a higher detection score than a non-novel class.
• Area Under the Precision-Recall curve (AUPR): This metric is constructed by plotting precision versus recall, treating the novel examples as the positive class. A high area value represents both high recall and high precision.

The detection training method used in iLAP is compared to a set of related works in OOD. The following works correspond to the acronyms used in Table 1:

• MSP (Hendrycks & Gimpel (2016)): The maximum softmax probability is computed from a trained classifier to perform out-of-distribution detection. The mean MSP over all images belonging to an exposure is used to determine novelty.
• CE (DeVries & Taylor (2018)): CE extends a model with a confidence branch to obtain a set of values that reflect the model's ability to produce a correct prediction. When the mean estimation value for a set of input images is low, the sample is likely to be novel.
• ODIN (Liang et al. (2017)): ODIN uses temperature scaling and input perturbations to widen the MSP gap between in-distribution and out-of-distribution samples. The optimal values for temperature and perturbation magnitude are found by minimizing FPR using grid search; values of 2 and 0.0012 are used for the temperature and perturbation magnitude respectively, for both the MNIST and CIFAR-10 datasets.
• Zero-Shot OOD (Sastry & Oore (2019)): Zero-Shot OOD uses pairwise correlations of network features to detect novel samples. The class-conditional feature correlations on the training data are computed across all layers of the network. The mean correlations are then compared with the pairwise mean feature correlations of a test sample to obtain a deviation value.
• IOLfCV (Stojanov et al. (2019)): IOLfCV determines a distance-based threshold computed using average-feature means from a network trained on supervised samples. Two initialized classes are used to compute the threshold by finding the optimal point using precision-recall analysis.

A.2 MAIN EXPERIMENT RESULTS

The following results are produced using a GTX TITAN X GPU.
For each exposure, the model is trained for 15 epochs with a batch size of 16, using an Adam optimizer. The learning rate used is 2e−4. The feature extraction layers use a ten times lower learning rate of 2e−5. The input image size is 224 × 224.

A.3 EXPERIMENTS WITH LOWER EXPOSURE SIZES

In this section, we showcase iLAP's results at lower exposure sizes.

A.4 EXPERIMENTS WITHOUT PRE-TRAINING
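For concreteness, here is a minimal sketch of the FPR95 metric described in Appendix A.1 (an assumed implementation, not the authors' code; the function name `fpr_at_95_tpr` is hypothetical). It sweeps thresholds over the detection scores, finds the point where TPR first reaches 95%, and reports the FPR there, treating novel exposures as the positive class.

```python
import numpy as np

def fpr_at_95_tpr(scores, is_novel):
    """Sketch of FPR95: FPR measured at the threshold where TPR >= 95%.

    scores: detection scores, higher = more likely novel (positive class).
    is_novel: boolean array, True for novel (out-of-distribution) exposures.
    """
    scores = np.asarray(scores, dtype=float)
    is_novel = np.asarray(is_novel, dtype=bool)
    # Sweep candidate thresholds from the highest score downward.
    for t in np.sort(scores)[::-1]:
        pred_pos = scores >= t
        tp = np.sum(pred_pos & is_novel)
        fn = np.sum(~pred_pos & is_novel)
        tpr = tp / max(tp + fn, 1)  # TP / (TP + FN)
        if tpr >= 0.95:
            fp = np.sum(pred_pos & ~is_novel)
            tn = np.sum(~pred_pos & ~is_novel)
            return fp / max(fp + tn, 1)  # FP / (FP + TN)
    return 1.0  # TPR never reaches 95%: report the worst case.
```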
1. What is the focus of the paper regarding unsupervised class-incremental learning?
2. What are the strengths and weaknesses of the proposed novelty detection approach?
3. How does the reviewer assess the paper's comparisons with other works, particularly incremental learning methods and open-set approaches?
4. What are the limitations of the proposed method regarding computational costs and scalability?
5. How does the reviewer evaluate the quality of the writing and organization of the paper?
Review
The paper studies an unsupervised class-incremental learning setting where a single class appears in each exposure; the classes can repeat and remain unknown during episodic training. A set of exemplars is used to evaluate accuracy changes, based on which novelty is determined. The idea is novel, but it is less scalable, and the approach currently lacks key analysis and comparisons with incremental learning methods and open-set approaches.

Pros:
A novelty detection approach that considers the changes in accuracy of the previous tasks as a new task is learned, by assigning a new label to the incoming episode. A threshold value is then used to detect novelty.

Cons:
The basic intuition is that if a previous class is observed again, the performance on the old, similar class will go down significantly. I feel this assumption is weak and can only be relevant in specialized cases; e.g., what if a very similar, confusing class is observed? The currently evaluated datasets (SVHN, MNIST, CIFAR) do not cover such fine-grained cases.
The evaluation in comparison to SOTA incremental learning methods is insufficient. Only a single approach, BiC, is considered for comparison.
The class-imbalance based approach looks like a practical hack and is sensitive to the hyperparameters.
The proposed approach will incur a high computational cost by training the model at each episode to detect novelty. A computational cost comparison is not performed in the experimental section.
The open-set literature solves the same problem of OOD detection; however, no comparison with SOTA methods is shown. I would recommend the authors check a nice survey on this topic: "Recent Advances in Open Set Recognition: A Survey".
The paper is not well-written. Fig. 2 comes before the class-imbalance discussion.
1. What is the main contribution of the paper?
2. What is the purpose of the proposed method?
3. How does the reviewer feel about understanding the details of the paper?
4. What is the concern regarding the comparison to classical OOD methods?
5. What is the confusion regarding the metric given in section 4.3?
6. How does the reviewer assess the clarity of the tasks addressed and the evaluation means in the paper?
Review
This article proposes a method for predicting whether a batch of data belongs to the same class as one of the classes already seen by a classifier or whether it contains data from another class. The idea is then to incorporate this batch into the previous training set, in an unsupervised learning context. It is assumed that each batch contains data from only one class. Experiments are presented to show the value of this method for anomaly detection and incremental learning.

I find it difficult to formulate an opinion on this paper because I don't think I have managed to understand the detail of what is actually done. For example, with regard to the detection of out-of-distribution data, the classic problem is whether a single data point is out of a distribution. Here it is not a single data point but a batch of data that is considered. I don't really see, under these conditions, how to compare to classical OOD methods. As far as incremental classification is concerned, I don't understand the definition of the metric given in Section 4.3, and therefore I'm not sure I understand what the task is really about. The fact that it's unsupervised sets it apart from standard problems. It seems to me that the paper lacks a clear definition of the tasks addressed and of the means to evaluate performance.
ICLR
Title Unsupervised Class-Incremental Learning through Confusion Abstract While many works on Continual Learning have shown promising results for mitigating catastrophic forgetting, they have relied on supervised training. To successfully learn in a label-agnostic incremental setting, a model must distinguish between learned and novel classes to properly include samples for training. We introduce a novelty detection method that leverages network confusion caused by training incoming data as a new class. We found that incorporating a class-imbalance during this detection method substantially enhances performance. The effectiveness of our approach is demonstrated across a set of image classification benchmarks: MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB. 1 INTRODUCTION The development of continually learning systems remains to be a major obstacle in the field of artificial intelligence. The primary challenge is to mitigate catastrophic forgetting: learning new tasks while maintaining the ability to perform old ones. This domain of research is often referred to as Continual Learning, Lifelong Learning, Sequential Learning, or Incremental Learning: each with subtleties in the learning environment and training process, but most with the use of supervision (De Lange et al. (2020)). Recently, Stojanov et al. (2019) introduced a novel unsupervised class-incremental learning problem motivated by the desire to simulate how children’s play behaviors support their ability to learn object models (IOLfCV). Here, sequential tasks take the form of exposures. Each exposure is comprised of a set of images that pertains to a single class that is hidden from the learner. Exposure boundaries, the transition from one exposure to the next, are known. The model trained in this setting, is analogous to a young child that has been placed in a playpen with a set of new toys. The child steadily gathers information over time by picking up, examining, and putting down new/old objects continuously. Similar to how the child does not have any guidance to the toys they will examine, the agent does not have access to the exposure’s identity during training. To learn in the unsupervised class-incremental setting, an agent must conduct two procedures successfully. Given a new learning exposure, the key step is to perform novelty detection: to identify whether an exposure corresponds to a class that has been learned. If the agent determines that an exposure is familiar, the second step is to identify its label such that the exposure can be leveraged to update the model. Both procedures must be performed reliably. Otherwise, the novelty detection mistakes will result in label noise that distorts the learned model, increasing the likelihood of subsequent mistakes. Deep neural networks are known to make overconfident decisions for anomalous data distributions that were not seen during training (Hendrycks & Gimpel (2016)). To address this problem, research related to out-of-distribution (OOD) detection have utilized supervised methods (Liang et al. (2017); Alemi et al. (2018)) and unsupervised methods (Choi & Jang (2018); Hendrycks et al. (2018); Serrà et al. (2019)). Works related to open set recognition have also addressed the OOD problem by applying distance-based thresholds computed from known class scores (Scheirer et al. (2012; 2014)). The work by Stojanov et al. (2019) applies a similar method to the unsupervised incremental setting by computing class features produced from a set of supervised samples. 
In contrast, we propose a model, Incremental Learning by Accuracy Performance (iLAP), that determines class novelty and identity by considering performance changes of previously learned tasks when an incoming set of exposure images are trained under a new label. Instead of using a distance-based metric, our novelty detection threshold relies on the percentage of accuracy that was maintained by performing a model update using the incoming exposure. This poses several practical advantages: First, the threshold value does not rely on supervised samples and is more intuitive (Section 3.3). Second, the performance of our method is independent of the sequence of the incoming exposure classes (Section 5.2). Finally, the model is able to distinguish between similar classes more reliably (Section 5.3). From our experiments, we demonstrate that the confusion resulting from training with label ambiguity results in a more reliable signal for novelty detection in comparison to previous methods. We demonstrate that our technique is more robust and results in substantial performance gains in comparison to various baselines. Furthermore, despite the absence of labels, our model was able to perform similarly to supervised models under several benchmarks. In summary, this work provides three contributions: • We present a novel framework, iLAP, that achieves learning in the unsupervised classincremental environment where the exposure identities are unknown. • We demonstrate that by including a class-imbalance technique, our unsupervised method is able to closely match supervised performance for several image classification benchmarks trained in the incremental setting. • We identify failure cases that are overlooked by traditional OOD methods that leverage distance-based thresholds. 2 RELATED WORKS Introduced by Stojanov et al. (2019), the unsupervised class-incremental setting contains a set of sequential tasks that are single-class exposures; classes pertaining to the exposures may repeat and are unknown. This is not to be mistaken with unsupervised continual learning (UCL) where task boundaries and task identities are unavailable (Lee et al. (2020); Smith & Dovrolis (2019); Rao et al. (2019)). Our work presents an agent that is able to leverage the boundary information from the unsupervised class-incremental environment to achieve performances that are close to models trained under supervision. 2.1 CONTINUAL LEARNING/INCREMENTAL LEARNING Prior works in this field primarily aim to improve a model’s ability to retain information while incorporating new tasks (Goodfellow et al. (2013); Parisi et al. (2019); Rebuffi et al. (2017); LopezPaz & Ranzato (2017); Aljundi et al. (2018); Castro et al. (2018)). Typically, these models reside in learning settings where both task labels and task boundaries are available. Methods include replay techniques, the usage of artifacts and generated samples to refresh a model’s memory (Kamra et al. (2017); Wu et al. (2018); Rolnick et al. (2019); Shin et al. (2017); Wu et al. (2019)), and regularization-based practices, the identification and preservation of weights that are crucial for the performance of specific tasks (Kirkpatrick et al. (2017); Zenke et al. (2017); Yoon et al. (2017)). In contrast to prior works, our method addresses incremental learning in a setting where exposure labels are unavailable. 2.2 UNSUPERVISED CONTINUAL LEARNING Recently, a series of works tackle the UCL problem where task boundaries and task identities are unknown. 
Smith & Dovrolis (2019) performs novelty detection by analyzing an input image through a series of receptive fields to determine if an input patch is an outlier. Meanwhile, CURL proposes a method to learn class-discriminative representations through a set of shared parameters (Rao et al. (2019)). CN-DPM, introduces an expansion-based approach that utilizes a mixture of experts to learn feature representations (Lee et al. (2020)). Although CN-DPM performs in a task-free setting, incoming tasks are multi-class and individual class labels are provided. This supervision is required to train the existing experts and determine when a new one is needed. While boundary information is not required for these works, the performances are far below supervised baselines (77.7% on and MNIST 13.2% Omniglot) (Rao et al. (2019)). 2.3 OUT-OF-DISTRIBUTION DETECTION This ongoing area of research aims to detect outliers in training and testing data. Current approaches can be largely categorized by statistical, distance-based, and deep learning methods (Eskin (2000); Yamanishi et al. (2004); Knorr et al. (2000); Hautamaki et al. (2004); Sabokrou et al. (2018); Kliger & Fleishman (2018)). Recent techniques involve using a threshold to determine class novelty from network confidence values (Hendrycks & Gimpel (2016)). ODIN uses input perturbations to increase softmax scores for neural networks to distinguish in-distribution images from out-of-distribution images (Liang et al. (2017)). DeVries & Taylor (2018) incorporates a confidence branch to obtain out-of-distribution estimations. Our method (iLAP) is the first to incorporate a threshold value that is dependent on class-accuracy changes caused by data poisoning. 3 APPROACH In this section, we provide an overview of our method. We begin by identifying the learning setting, followed by details of our training process. Finally, insights for choosing threshold values are provided. 3.1 SETTING In the unsupervised class-incremental setting, a learner L perceives an input stream of exposures denoted as E1, E2, ..., En. Each exposure contains a set of images, Ei = {ei1, ei2, ..., eini}, e i j ∈ RC×H×W , where C, H , and W are the number of channels, height, and width of the input image respectively. Each exposure belongs to a single class yi ∈ N, which has been sampled from class distribution P (C). For each Ei, L does not have knowledge of the true class label yi. Two exemplars, Ptrain = (P 1train, P 2train, ..., P K̂train) and Pval = (P 1val, P 2val, ..., P K̂val), are maintained at all times, where K̂ denotes the total number of classes L has currently determined. The exemplars are used to store samples from the exposure for replay and accuracy assessment. The sizes of both exemplars,∣∣P itrain∣∣ and ∣∣P ival∣∣, are bounded per class. 3.2 DETECTION TRAINING For each incoming exposure, the model is tasked to determine whether the class associated with the exposure was learned previously. Our solution is to perform a model update by treating the incoming exposure as a new class, we coin this technique detection training. Under the circumstances that the exposure class is repeated, the performance for the previously learned class would suffer drastically after training. The reason for this behavior is because the model has associated two different labels to a similar class distribution. During detection training, L̂, a copy of L is produced. The incoming exposure is assigned with label K̂ + 1. 
Train-validation split is performed on the incoming exposure to obtain Etrain and Eval, and are aggregated with exemplars Ptrain and Pval respectively. The combined samples are used to train L̂ via validation-based early stopping. We denote the vector {∆ŷ}ŷ∈[K̂] to represent the percentage decrease of the class accuracies (computed using Pval) before and after the update. If max({∆ŷ}) exceeds a threshold, θ, the incoming exposure is likely to have been learned by L. In this case, the correct identity pertaining to the exposure is arg maxŷ∈[K̂] ∆ŷ . Otherwise, if θ is unsatisfied, K̂ + 1 is the appropriate label for the new class. 3.3 CLASS-IMBALANCE FOR DETECTION TRAINING Introducing a class-imbalance during detection training creates a more distinct decision boundary by exacerbating the class-accuracy drop for repeated exposures. Consider a theoretical case where an optimal model has learned K̂ classes. The incoming exposure, Ei, contains a distribution that is equal to that of some previously learned class ŷi. If the model were updated with equal samples of Ei labeled K̂ + 1, and P ŷitrain labeled ŷi, the accuracy for class ŷi would become ambiguous during validation (ŷi≈ 50%, K̂ + 1≈ 50%). However, if the model were updated with a greater sample of K̂ + 1 labels, the accuracy drop for class ŷi would be considerably larger because K̂ + 1 would be favored during inference. L̂ has the option to use an imbalanced dataset during detection training where a fraction of the size for each class from Ptrain are used in comparison to the size of Etrain. Let P isampled ⊂ P itrain, the class-imbalance ratio λ is: λ = 1− |P isampled| |Etrain| (1) In Figure 2, we use Fashion-MNIST (Xiao et al. (2017)) and Imagenette (Howard) to obtain a general value for λ and θ for our experiments with other datasets. Performance drops are tracked for the repeated class and non-repeated classes during detection training with various values of λ. The goal is to maximize the performance loss for the repeated class while maintaining the performance for the other classes. In Section 4 a θ value of 0.6 and a λ value of 0.5 are used. λ is determined by the point of the maximum distance between the two accuracy curves; θ, the expected accuracy drop for a repeated exposure, is the mean of the two values at λ Figure 2. 3.4 MODEL AND EXEMPLAR UPDATE After obtaining the predicted label (K̂ + 1 or arg maxŷ∈[K̂] ∆ŷ) from detection training, L is trained with the aggregated dataset obtained from Ptrain and Etrain. Two sets with the most representative samples from Etrain and Etest are created and added to their respective exemplars for future replay. The selection process is determined by ranking the distance between the image features and the class-mean image features. This methodology is consistent with the procedure introduced by Castro et al. (2018). The availability of Pval allows L to assess how well old and new classes are considered. If the accuracy for any class interpreted by L falls below a percentage, the class is altogether discarded. This allows the model to remove extraneous classes that were learned insufficiently or have been forgotten. In Section 4, a percentage threshold of 20% is used for all experiments. 4 EXPERIMENTS The performance of our framework is evaluated using a series of image classification benchmarks: MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB (LeCun (1998); Netzer et al. (2011); Krizhevsky et al. (2009); Stojanov et al. (2019)). First, we compare our novelty detector to related OOD methods. 
Next, we evaluate performance of iLAP to that of other incremental learners: BiC (Wu et al. (2019)) (supervised) and IOLfCV (unsupervised). 4.1 EXPERIMENTAL DETAILS For the following experiments, a ResNet-18 model (He et al. (2016)) pre-trained with ImageNet is used for iLAP and all baselines (additional experiments without pretraining are presented in Appendix A.4). λ = 0.5 and θ = 0.6 are used for iLAP with class-imbalance detection training, while a λ = 0 and θ = 0.4 are used for iLAP without class-imbalance detection training. The parameters are maintained across all benchmarks. For each exposure, the model is trained for 15 epochs with 16 batch size, using an Adam (Kingma & Ba (2014)) optimizer with validation-based early stopping; a learning rate of 2e−4 is used. The feature extraction layers use a ten times lower learning rate at 2e−5. For all models, the exemplar size per class is equal to the exposure size. The exposure validation split ratio is 0.8 (e.g. for exposure size = 200, iLAP: [Etrain] = 160 and [Eval] = 40). The thresholds used for IOLfCV in 4.1 were determined by maximizing the F-score for the classification of in-distribution exposures versus out-of-distribution exposures. To obtain the best performance possible, the entirety of the dataset was used. The values are 0.46, 0.63, 0.57, and 0.62 for MNIST, SVHN, CIFAR-10, and CIFAR-100 respectively. 4.2 OUT-OF-DISTRIBUTION DETECTION RESULTS The OOD detectors are assessed in an incremental setting with size 200 exposures. The detectors are evaluated on their ability to determine whether an exposure is novel by using the common established metrics: FPR95, AUROC, and AUPR (Hendrycks & Gimpel (2016)). Details of the compared works and evaluation methods are described in Appendix A.1. The results are illustrated in Table 1. 4.3 INCREMENTAL LEARNING RESULTS The accuracy of learner L is computed using the ground-truth mapping m : [K̂] → [K] with the equations: S(x, y) = { 1 |m−1(y)| , if L(x) ∈ m −1(y) 0, otherwise Accuracy = Ex,y∼test [S(x, y)] The learner accuracy is the mean of the sample accuracy scores evaluated on the test set, where (x, y) represents a single sample. For each sample with label y, let m−1(y) represent the corresponding labels from the learner. In the case that a learner output does not belong to set m−1(y), an accuracy score of 0 is assigned (class is not detected). Otherwise, an accuracy score of 1|m−1(y)| is designated to penalize the learner if m is non-injective and have attributed multiple labels to a single ground truth class. Performance results are illustrated in Table 2 & 3. Additional visualizations are provided in Appendix A.2, A.3 and A.4. 5 ANALYSIS Traditional OOD methods that rely on distance-based thresholds are restricted by the supervised samples that are available. These values are non-intuitive and vary drastically across datasets (whereas our percentage threshold are ≈ 50% for all datasets). In the incremental learning setting, early mistakes are amplified as more exposures are introduced, a proper threshold initialization dictates a model’s feasibility. However, we argue that even with a good threshold these methods will consistently fail in particular conditions. The purpose of this section is to discuss the results obtained from our experiments. Subsequently, we highlight a few cases that are overlooked by traditional distance-based methods. 
5.1 OUT-OF-DISTRIBUTION DETECTION ANALYSIS iLAP with class-imbalance detection training (CI) was able to outperform related OOD methods for the MNIST, SVHN, CIFAR-10, and CIFAR-100 benchmarks under all metrics Table 1. However, the results for iLAP without CI were not as definitive. CE performed the worst in the incremental setting, possibly because the performance of the confidence branch is reliant on larger training samples. IOLfCV’s method performed on par with related methods. 5.2 UNSUPERVISED INCREMENTATL LEARNING ANALYSIS iLAP with CI was able to beat IOLfCV by 10.0, 41.3, 22.1, 7.0, and 1.0 percentage points for the MNIST, SVHN, CIFAR-10, CIFAR-100, and CRIB benchmarks respectively Table 2. iLAP was also able to maintain its performance when exposure sizes are decreased Appendix A.3. We found that the reason for the lower performance beat for CIFAR-100 and CRIB is not directly attributed to the larger number of classes in the dataset. Rather, the problem lies with how the exposure sequence for the incremental learning setting is created and how the distance-based threshold is calculated. The threshold for IOLfCV is computed by maximizing the F-score for the binary classification of novel versus non-novel classes. In our experiments, the entirety of the dataset was used to compute the baseline’s threshold. Although this is impractical, we wanted to illustrate that iLAP is able to beat the baseline even under the most optimal conditions. In Figure 3 we illustrate the behavior of the class feature distances as more classes are incorporated by a network. The most optimal threshold that maximizes the F-score lies in the mid-point between the two graphs when all 100 classes are learned. However, because the threshold is fixed, the novelty detector fails to correctly identify repeated classes early on during training and is more inclined to label repeated classes as unseen, (shaded red area in Figure 3). Consistent with the described setting in Stojanov et al. (2019), each class within a benchmark is repeated an equal number of times in a randomized sequence. For datasets with a large number of classes, there is a higher chance that the repeated exposures are further apart. Therefore, IOLfCV seemingly performs comparatively better on CIFAR-100 than CIFAR-10, but would fail if early repeated exposures were frequent. 5.3 CLASS SIMILARITY iLAP was able to detect all classes for MNIST, SVHN, and CIFAR-10 and on average 96.5 out of 100 classes for CIFAR-100. Meanwhile, IOLfCV struggles to identify unique classes for all evaluated benchmarks. Through closer inspection, we found that the distance-based method is unable to distinguish classes when they are too similar. Consider two classes, k1 and k2, that can be separated by a classifier C in learned feature space F . An incoming exposure, class k3, shares a similar distribution to class k1 in feature space F , but separable in some feature space F ′. The distance-based method is highly probable to fail because it is likely to categorize k1 and k3 as identical classes. However, because our method always trains the incoming exposure as a new class, C is forced to learn the feature space F ′ in which these two classes are separable. Figure 4 illustrates two prior classes, boy and lamp, in some feature space. The incoming exposure, class girl, is unable to be distinguished from class boy by the distance-based method Figure 4 (left). 
However, because detection training always attempts to classify the incoming exposure as a new class, our method is able to identify F ′ Figure 4 (right). 6 CONCLUSION To achieve learning in an unsupervised class-incremental setting, a reliable novelty detector is needed. Current methods utilize a detection threshold that is calibrated using class feature distances. In our work, we illustrate that the use of a static distance-based threshold is not only impractical but also unreliable. Instead, we introduce a technique that leverages confusion error to perform novelty detection by always training the incoming exposure as a new class. Using a series of image classification benchmarks, we illustrate that our method is able to closely rival supervised performance despite the lack of labels. A APPENDIX A.1 OOD EXPERIMENTAL DETAILS A brief description of the evaluation metrics used for Table 1. • False Positive Rate at 95% (FPR95): This measure determines the False Positive Rate (FPR) when True Positive Rate (TPR) is equal to 95%. FPR is calculated as FPFP+TN where FP is the number of False Positives and TN is the number of True Negatives. TPR is calculated as TPTP+FN where TP is the number of True Positives and FN is the number of False Negatives. • Area Under the Receiver Operating Characteristic (AUROC): This metric illustrates the relationship between the TPR and the FPR. This measure determines the probability that a novel class will have a higher detection score compared to a non-novel class. • Area Under the Precision-Recall (AUPR): This metric is constructed by plotting precision versus recall. The AUPR curve treats the novel examples as the positive class. A high area value represents high recall and high precision. The training detection method used in iLAP is compared to a set of related works in OOD. The following works reflect the acronyms used in Table 1 • MSP (Hendrycks & Gimpel (2016)): MSP is computed from a trained classifier to perform out-of-distribution detection. The mean MSP for all images belonging to an exposure is used to determine novelty. • CE (DeVries & Taylor (2018)): CE requires extending a model with a confidence branch to obtain a set of values. These values reflect the model’s ability to produce a correct prediction. When the mean estimation value for a set of input images is low, the sample is likely to be novel. • ODIN (Liang et al. (2017)): ODIN uses temperature scaling and input perturbations to widen the MSP difference between in-distribution and out-of-distribution samples. The optimal value for temperature and perturbation magnitude are found by minimizing FPR using grid search. 2 and 0.0012 are used for temperature and perturbation magnitude values respectively, for both the MNIST and CIFAR-10 dataset. • Zero-Shot OOD (Sastry & Oore (2019)): Zero-Shot OOD uses pairwise correlations of network features to detect novel samples. The class-conditional feature correlations, on the training data, are computed across all layers of the network. The mean correlations are then compared with the pairwise mean feature correlations from a test sample to obtain a deviation value. • IOLfCV (Stojanov et al. (2019)): IOLfCV determines a distance-based threshold computed using average-feature means from a network trained from supervised samples. Two initialized classes are used to compute the threshold by finding the optimal point using precision-recall analysis. A.2 MAIN EXPERIMENT RESULTS The following are produced with the use of a GTX TITAN X gpu. 
The training detection method used in iLAP is compared to a set of related works in OOD. The following works correspond to the acronyms used in Table 1: • MSP (Hendrycks & Gimpel (2016)): MSP is computed from a trained classifier to perform out-of-distribution detection. The mean MSP over all images belonging to an exposure is used to determine novelty. • CE (DeVries & Taylor (2018)): CE requires extending a model with a confidence branch to obtain a set of values. These values reflect the model’s ability to produce a correct prediction. When the mean estimation value for a set of input images is low, the sample is likely to be novel. • ODIN (Liang et al. (2017)): ODIN uses temperature scaling and input perturbations to widen the MSP difference between in-distribution and out-of-distribution samples. The optimal values for temperature and perturbation magnitude are found by minimizing the FPR using grid search. Values of 2 and 0.0012 are used for the temperature and perturbation magnitude, respectively, for both the MNIST and CIFAR-10 datasets. • Zero-Shot OOD (Sastry & Oore (2019)): Zero-Shot OOD uses pairwise correlations of network features to detect novel samples. The class-conditional feature correlations on the training data are computed across all layers of the network. The mean correlations are then compared with the pairwise mean feature correlations of a test sample to obtain a deviation value. • IOLfCV (Stojanov et al. (2019)): IOLfCV determines a distance-based threshold computed using average-feature means from a network trained on supervised samples. Two initialized classes are used to compute the threshold by finding the optimal point using precision-recall analysis. A.2 MAIN EXPERIMENT RESULTS The following results are produced using a GTX TITAN X GPU. For each exposure, the model is trained for 15 epochs with a batch size of 16, using the Adam optimizer. The learning rate is 2e−4; the feature extraction layers use a learning rate ten times lower, 2e−5. The input size is 224. A.3 EXPERIMENTS WITH LOWER EXPOSURE SIZES In this section, we showcase iLAP’s results at lower exposure sizes. A.4 EXPERIMENTS WITHOUT PRE-TRAINING
1. What is the main contribution of the paper regarding unsupervised class-incremental learning? 2. What are the weaknesses of the proposed novelty detection module? 3. How does the reviewer assess the effectiveness of the proposed method compared to other competing methods, such as CURL and iCarl? 4. How does the reviewer evaluate the relevance of the proposed approach to active learning? 5. Do you have any suggestions for improving the clarity and coherence of the writing in the reviewed paper?
Review
Review The authors propose a novelty detection module to help unsupervised class-incremental learning. The novelty detection relies on the percentage of accuracy drop during a model update when treating incoming data as a new class. If the model maintains high accuracy, then the module treats the incoming data as familiar, thereby choosing one of the existing classes as the correct label. The paper investigates the effectiveness of the proposed method on MNIST, SVHN, CIFAR-10, and CIFAR-100. The main weakness of the submission might be that the proposed novelty detection is not well motivated. The bottleneck of the proposed pipeline comes down to whether the accuracy drop on the selected subset is a good indicator for out-of-distribution (OOD) detection. The submission provides neither theoretical insights nor direct references showing that this is actually the case. In fact, in the experimental section (Sec 4.1), the authors have to use several different accuracy threshold settings (0.46, 0.63, 0.57, 0.62) for different datasets, demonstrating that accuracy drop might not be a reliable indicator. Besides, the direct competing method CURL (Rao et al.) is cited but not compared against. iCaRL [R1] should be quite related as well. The authors also use quite a different backbone network (ResNet-18) than other competing methods. Therefore, it is hard to justify whether the proposed approach is more effective than other baselines. The method described here is also quite similar to the field of active learning. It would be great to discuss the relationship between the proposed novelty detection and the active learning literature. Sec 1 first sentence “continually learning systems remains to be a major obstacle in the field of artificial intelligence” is quite a strong statement. I believe there are other major obstacles in AI and they should be discussed as well. Sec 1 paragraph 3, “an agent must conduct two procedures successfully”. The authors do not clearly define what an “agent” is in this context, which makes it hard for readers to follow the manuscript. Sec 3.4, “After obtaining the correct label”. Actually, the label is technically not “correct” but only assumed to be correct for class-incremental learning. [R1] Rebuffi et al. iCaRL: Incremental classifier and representation learning. In CVPR 2017.
ICLR
Title Cascade Style Transfer Abstract Recent studies have made tremendous progress in style transfer for specific domains, e.g., artistic, semantic and photo-realistic. However, existing approaches have limited flexibility in extending to other domains, as different style representations are often specific to particular domains. This also limits the stylistic quality. To address these limitations, we propose Cascade Style Transfer, a simple yet effective framework that can improve the quality and flexibility of style transfer by combining multiple existing approaches directly. Our cascade framework contains two architectures, i.e., Serial Style Transfer (SST) and Parallel Style Transfer (PST). The SST takes the stylized output of one method as the input content of the others. This could help improve the stylistic quality. The PST uses a shared backbone and a loss module to optimize the loss functions of different methods in parallel. This could help improve the quality and flexibility, and guide us to find domain-independent approaches. Our experiments are conducted on three major style transfer domains: artistic, semantic and photo-realistic. In all these domains, our methods have shown superiority over the state-of-the-art methods. 1 INTRODUCTION Given the content and style images, the goal of style transfer is to synthesize an image that preserves some notion of the content but carries characteristics of the style. Recently, the seminal work of Gatys et al. (2015) first captured the style of artistic images and transferred it to other images using Convolutional Neural Networks (CNNs). Since then, various Neural Style Transfer (NST) Jing et al. (2017) methods have been advanced and have obtained visually pleasing results. Despite the recent rapid progress, these existing works are often limited to one or a few specific domains (in this paper, we mainly focus on three domains: artistic, semantic and photo-realistic). For instance, Li et al. (2017b); Huang & Belongie (2017); Gatys et al. (2016) can transfer artistic styles well, but they perform poorly on the style transfer of photographs and corresponding semantics. Luan et al. (2017); Li et al. (2018) specialize in photo-realistic style transfer, and Li & Wand (2016); Champandard (2016) mainly target semantic style transfer. Fortunately, there are some multi-domain approaches which can perform well on multiple domains, e.g., Liao et al. (2017) performs well on semantic and photo-realistic style transfer, Gu et al. (2018) is suitable for artistic and semantic style transfer, and Li et al. (2019) can adapt to artistic and photo-realistic style transfer. Nevertheless, they still have some limitations, and the quality could be further improved (see Fig. 1 (a)). That is to say, it is still inconvenient for users, who have to choose the appropriate methods for specific domains. In this sense, finding a common approach which could perform well in all style transfer domains is extremely hard but significant. Just as a coin has two sides, every existing NST method has both advantages and shortcomings. Fig. 1 (b) shows some typical examples: we can observe that using Gram matrices Gatys et al. (2016) to transfer artistic styles performs well on global color, but fails to capture enough local patterns (e.g., circles and droplets). The patch-based method of Li & Wand (2016) can alleviate this problem, but may cause insufficient color. Is there a way to combine the advantages of both and overcome their shortcomings?
Obviously, designing a new algorithm from scratch is difficult, so why not use simpler means, such as combining existing methods directly through some general architectures? Based on the above analyses, we propose Cascade Style Transfer (CST) mainly for two targets, i.e., higher quality and higher flexibility, and design two architectures, i.e., the Serial Style Transfer (SST) and the Parallel Style Transfer (PST), for these targets. In this work, we first revisit and demonstrate the impact of different initialization strategies on style transfer and, inspired by this, design our SST for higher quality domain-specific style transfer. Moreover, we build upon this and further propose our PST for more flexible style transfer, which could guide us to create domain-independent approaches. As far as we know, this is the first paper to propose domain-independent style transfer (note that this kind of approach is flexible for arbitrary images in arbitrary domains, while existing so-called arbitrary style transfer methods are only flexible for arbitrary images in specific domains), and also the first attempt to combine multiple existing approaches directly to improve the quality and flexibility of style transfer. The main contributions of our work are: • We revisit the initialization of style transfer, and demonstrate that initialization can play an important role in improving the quality of style transfer. • We propose a serial architecture for cascade style transfer; it is simple yet effective and could help improve the quality of domain-specific style transfer. • We first propose domain-independent style transfer, and design a parallel architecture to help improve the quality and flexibility of style transfer. 2 RELATED WORK Our cascade style transfer can be related to most style transfer methods. In this paper, we mainly focus on the NST methods in three major domains: artistic, semantic and photo-realistic. Artistic style transfer. This domain is dedicated to transferring global artistic styles (e.g., abstract, painterly or sketch). The most representative work is Gatys et al. (2015). This method could produce amazing results but suffers from a slow iterative optimization procedure. To address this, Johnson et al. (2016) and Ulyanov et al. (2016; 2017) trained feed-forward generative networks for fast artistic style transfer. But one limitation is that each model is trained to transfer exactly one fixed style. Some methods Dumoulin et al. (2017); Zhang & Dana (2018); Li et al. (2017a); Chen et al. (2017) further incorporated multiple styles into one single model, but they are still limited to a fixed number of pre-trained styles. Recently, several methods Li et al. (2017b); Huang & Belongie (2017); Li et al. (2019) were proposed to allow artistic style transfer for arbitrary images. Semantic style transfer. Transferring the styles between the corresponding semantic regions of the style and content images is referred to as semantic style transfer. The most representative work is Li & Wand (2016). They combined Markov Random Fields (MRFs) and CNNs to match the most similar local neural patches of the style and content images. Later, Champandard (2016) incorporated segmentation masks for stricter semantic constraints. Recently, Liao et al. (2017) proposed Deep Image Analogy for accurate semantic-level patch matching. Gu et al. (2018) used Deep Feature Reshuffle to consider both global and local information.
Mechrez et al. (2018) proposed an alternative contextual loss for segmentation-free semantic style transfer. Moreover, some feed-forward methods Chen & Schmidt (2016); Lu et al. (2017); Sheng et al. (2018); Park & Lee (2019); Yao et al. (2019) were also proposed for fast and real-time semantic style transfer. Photo-realistic style transfer. Photo-realistic style transfer seeks to transfer the style of a reference style photo onto other pictures. The greatest characteristic is that both the global structures and detailed contours in the content images should be preserved during the process. Traditional methods, both global Reinhard et al. (2001); Pitie et al. (2005) and local Laffont et al. (2014); Shih et al. (2014), are slow in practice and limited to specific scenarios (e.g., outdoor scenes or headshot portraits). Recently, Luan et al. (2017) incorporated a new loss term into the optimization objective of Gatys et al. (2015) to improve the photorealism of stylization outputs. Li et al. (2018) introduced a closed-form solution consisting of a stylization and a smoothing step for faster speed. More recently, Li et al. (2019) have also shown an effective approach for fast photo-realistic style transfer. Despite the fact that current NST methods have shown good performance in specific domains, there are few studies on how to combine them directly to improve the quality and flexibility. In our work, we select Gatys et al. (2016); Li et al. (2017b); Huang & Belongie (2017); Li et al. (2019; 2018); Luan et al. (2017); Sheng et al. (2018); Liao et al. (2017); Gu et al. (2018); Li & Wand (2016); Champandard (2016) for our studies mainly because of their representativeness in specific domains. 3 REVISIT INITIALIZATION IN STYLE TRANSFER Initialization is the first yet important step in almost all style transfer algorithms. Although some papers Gatys et al. (2016); Nikulin & Novak (2016) have discussed the impact of initialization on style transfer, they only use white noise, the content image or the style image, and the evaluation criteria are only qualitative. Here, returning to the most original NST algorithms, we revisit and compare more initialization strategies from both qualitative and quantitative aspects. Our experiments are based on the two most original NST methods Gatys et al. (2016); Li & Wand (2016). These methods always initialize with white noise or the content image and iteratively optimize pixels to match the content features of the content image and the style features of the style image. Besides white noise, the content and style images, we try and compare four other initialization strategies: salt and pepper noise, Poisson noise, the stylized results of other methods (SROOM), and moreover, replacing the content images with SROOM and then initializing with them (RC-SROOM). (A small sketch of these strategies is given after the qualitative comparisons below.) Qualitative Comparisons: Fig. 2 demonstrates the qualitative comparisons. As we can see, initializing with the content image produces results with insufficient color (column a). Results generated from the style image introduce some undesired color (column b). Using salt and pepper noise produces over-bright results, and using Poisson noise produces darker results (columns d and e). By contrast, white noise initialization yields satisfying results (column c), but the results of SROOM perform better in overall effect (column f). Remarkably, using RC-SROOM (column g) can dramatically improve the effect and absorb the merits (as shown and discussed in Fig. 1 (b)) of both methods. More results can be found in the appendix.
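To make the compared strategies concrete, below is a minimal sketch of how the seven initializations can be constructed. The exact noise parameters are not fully specified in the text, so the values here are illustrative assumptions, and we assume the style image has been resized to the content resolution.

```python
import numpy as np

def make_initialization(strategy, content, style, sroom=None, rng=None):
    """Build the starting image x0 for iterative NST. Images are float arrays
    in [0, 1] with shape (H, W, 3); `sroom` is the stylized result of another
    method. Noise parameters below are assumptions, not the paper's settings."""
    rng = rng or np.random.default_rng(0)
    if strategy == "white_noise":
        return rng.uniform(0.0, 1.0, size=content.shape)
    if strategy == "salt_pepper":               # content corrupted by salt & pepper
        x0 = content.copy()
        mask = rng.random(content.shape[:2])
        x0[mask < 0.05] = 0.0                   # pepper
        x0[mask > 0.95] = 1.0                   # salt
        return x0
    if strategy == "poisson":                   # Poisson shot noise on the content
        return np.clip(rng.poisson(content * 255.0) / 255.0, 0.0, 1.0)
    if strategy == "content":
        return content.copy()
    if strategy == "style":
        return style.copy()
    if strategy in ("sroom", "rc_sroom"):       # for RC-SROOM, the content target of
        return sroom.copy()                     # the optimization is also replaced
    raise ValueError(f"unknown strategy: {strategy}")
```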
Quantitative Comparisons: Fig. 3 shows the quantitative comparisons. We evaluate these initialization strategies on the method of Gatys et al. (2016); the optimization is conducted with the Adam method and stopped at 1000 iterations. We can see that initializing with images (i.e., content, style, SROOM and RC-SROOM) makes the loss fall faster, while initializing with noise (i.e., white noise, salt and pepper noise, and Poisson noise) decreases the loss more steadily. It is worth noting that using RC-SROOM achieves much lower total, content and style loss than all other strategies. Conclusion: In this section, we have demonstrated that initialization plays an important role in improving the quality of style transfer. Compared with other initialization strategies, RC-SROOM has outstanding performance. More importantly, this could help produce higher quality style transfer results that absorb the merits of multiple methods. Inspired by this, we propose cascade style transfer, which will be presented in later sections and in turn verifies this conclusion. 4 CASCADE STYLE TRANSFER In this paper, we define cascade style transfer as the combination of different NST methods. It contains two architectures: serial style transfer and parallel style transfer. 4.1 SERIAL STYLE TRANSFER (SST) As shown in Fig. 4 (a), serial style transfer serially connects multiple style transfer methods. Let $\vec{c}$, $\vec{s}$ and $\vec{x}_i$ be the content image, the style image and the stylized result of method $i$. The style transfer process of method $i$ is denoted as $f_i$. Specifically, for the first method, we use $f_1(\vec{c}, \vec{s}, d)$ to denote transferring the style of $\vec{s}$ to $\vec{c}$ by method 1 with the default initialization settings. For the others, we use $f_i(\vec{x}_{i-1}, \vec{s})$ to denote initializing with $\vec{x}_{i-1}$ and then transferring the style of $\vec{s}$ to $\vec{x}_{i-1}$ by method $i$. Our serial style transfer can be formulated as
$$\vec{x}_i = \begin{cases} f_1(\vec{c}, \vec{s}, d) & \text{if } i = 1 \\ f_i(\vec{x}_{i-1}, \vec{s}) & \text{otherwise.} \end{cases} \quad (1)$$
4.2 PARALLEL STYLE TRANSFER (PST) As far as we know, current NST methods are mainly conducted in two different ways. One is based on VGG Simonyan & Zisserman (2014), iteratively optimizing the pixels of the input images. The other is training a feed-forward network to directly generate the stylized results. Here, to demonstrate our PST more intuitively, we design a simple parallel architecture based on the former way. As shown in Fig. 4 (b), our PST contains two important parts. One is the shared backbone (e.g., VGG-19), which is mainly used for feature extraction and error back-propagation. The other is the loss module, which combines the loss functions of different methods. Let $L_i$ denote the loss function of method $i$, and let “$\oplus$” denote a linear combination operation between different loss functions. We assign a hyperparameter $\omega_i$ to weight each loss function $L_i$. The total loss is defined as follows:
$$L_{total}(i) = \begin{cases} \omega_1 L_1 & \text{if } i = 1 \\ L_{total}(i-1) \oplus \omega_i L_i & \text{otherwise.} \end{cases} \quad (2)$$
We compute its gradients with respect to the pixel values and use them to iteratively update the input images. In this way, all methods can be optimized in parallel under the total loss. To verify the effectiveness of our PST, we design a specific parallel scheme, ParallelNet, based on four popular domain-specific style transfer methods Champandard (2016); Li & Wand (2016); Gatys et al. (2016); Luan et al. (2017). The principles for selecting the appropriate approaches will be discussed in later sections.
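Before detailing the individual loss terms, here is a minimal sketch of Eqs. (1) and (2). The method and loss callables are placeholders standing in for per-method implementations, and the linear combination "⊕" is instantiated as plain addition, as in Eq. (10) below.

```python
def serial_style_transfer(methods, c, s):
    """Eq. (1): chain methods serially. `methods` is a list of callables;
    the first runs with its default initialization, and each later method is
    initialized with (and re-targets) the previous stylized output."""
    x = methods[0](c, s)                 # f_1(c, s, d)
    for f in methods[1:]:
        x = f(x, s)                      # f_i(x_{i-1}, s)
    return x

def parallel_total_loss(loss_fns, weights, x, c, s):
    """Eq. (2) with '+' as the combination operator: accumulate the weighted
    loss terms of all combined methods; in PST they share one VGG backbone,
    and the gradient of this total loss updates the pixels of x."""
    return sum(w * L(x, c, s) for w, L in zip(weights, loss_fns))
```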
The detailed combination procedure is as follows: • For the method of Gatys et al. (2016), we capture the vectorized features $F_\ell[\vec{x}]$ and $F_\ell[\vec{s}]$ of layers $\ell \in \{\mathrm{conv}k\_1\}_{k=1}^{5}$ (i.e., conv1_1 through conv5_1) for the generated image $\vec{x}$ and the style image $\vec{s}$, and compute the Gram matrices:
$$G_\ell[\cdot] = F_\ell[\cdot]\, F_\ell[\cdot]^{T}, \quad (3)$$
where the Gram matrix is defined as the inner product between the vectorized features. In each layer, there are $N_\ell$ filters, each with a vectorized feature of size $M_\ell$. We define our style loss $L_1$ as follows:
$$L_1 = \frac{1}{(2 N_\ell M_\ell)^2}\, \| G_\ell[\vec{x}] - G_\ell[\vec{s}] \|^2. \quad (4)$$
• For the methods of Champandard (2016) and Li & Wand (2016), we reuse the features $F_\ell[\vec{x}]$ and $F_\ell[\vec{s}]$ of layers $\ell \in \{\mathrm{conv}k\_1\}_{k=3}^{4}$, and then concatenate them with the segmentation masks $\vec{c}_m$ and $\vec{s}_m$ of the content image $\vec{c}$ and the style image $\vec{s}$ at the same resolutions, respectively:
$$F_{\ell,m}[\vec{x}] = F_\ell[\vec{x}] \,\Vert\, \lambda \cdot \eth[\vec{c}_m, \ell], \qquad F_{\ell,m}[\vec{s}] = F_\ell[\vec{s}] \,\Vert\, \lambda \cdot \eth[\vec{s}_m, \ell], \quad (5)$$
where $\eth[\cdot, \ell]$ denotes resizing the masks to the same resolution as the output of layer $\ell$, “$\Vert$” denotes channel concatenation, and the hyperparameter $\lambda$ weights the semantic awareness. Then we extract a set of 3×3 neural patches for both $F_{\ell,m}[\vec{x}]$ and $F_{\ell,m}[\vec{s}]$, denoted by $\{\Phi_i(\vec{x})\}_{i \in n_x}$ and $\{\Phi_j(\vec{s})\}_{j \in n_s}$, where $n_x$ and $n_s$ are the numbers of extracted patches. For each patch $\Phi_i(\vec{x})$, we determine a closest-matching style patch $\Phi_{CM(i)}(\vec{s})$ based on the following measure:
$$CM(i) := \arg\max_{j=1,\dots,n_s} \frac{\Phi_i(\vec{x}) \cdot \Phi_j(\vec{s})}{|\Phi_i(\vec{x})| \cdot |\Phi_j(\vec{s})|}. \quad (6)$$
Finally, our style loss $L_2$ is defined as follows:
$$L_2 = \sum_{i=1}^{n_x} \| \Phi_i(\vec{x}) - \Phi_{CM(i)}(\vec{s}) \|^2. \quad (7)$$
• For the method of Luan et al. (2017), we directly use their photorealism regularization term $L_3$:
$$L_3 = \sum_{c=1}^{3} V_c[\vec{x}]^{T} M_I V_c[\vec{x}], \quad (8)$$
where $M_I$ is the Matting Laplacian matrix Levin et al. (2008), which expresses a locally affine combination of the input RGB channels and depends only on the input image $I$, and $V_c[\vec{x}]$ denotes the vectorized version of the output image $\vec{x}$ in channel $c$. • At last, these methods share the same content loss $L_c$ to preserve the structure of the content image:
$$L_c = \| F[\vec{x}] - F[\vec{c}] \|^2, \quad (9)$$
where $F[\vec{x}]$ and $F[\vec{c}]$ denote the features extracted from layer conv4_2. Total loss: In our loss module, the total loss is a simple linear combination of the above losses:
$$L_{total} = \alpha L_c + \omega_1 L_1 + \omega_2 L_2 + \omega_3 L_3 + \mu L_{TV}, \quad (10)$$
where $L_{TV}$ refers to the total variation regularization loss Aly & Dubois (2005).
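For illustration, the following is a minimal PyTorch sketch of the Gram-matrix loss (Eqs. (3)-(4)) and the patch-matching loss (Eqs. (6)-(7)). The photorealism term L3, the mask concatenation of Eq. (5), and the layer-wise bookkeeping are omitted, and the tensor layouts are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gram_style_loss(feat_x, feat_s):
    """Eqs. (3)-(4) for one layer. feat_*: (N_l, M_l) vectorized features
    (N_l filters, each with a vectorized feature of size M_l)."""
    n, m = feat_x.shape
    g_x = feat_x @ feat_x.T                               # Eq. (3)
    g_s = feat_s @ feat_s.T
    return ((g_x - g_s) ** 2).sum() / (2.0 * n * m) ** 2  # Eq. (4)

def patch_style_loss(feat_x, feat_s):
    """Eqs. (6)-(7): match every 3x3 patch of x to its closest style patch
    under normalized cross-correlation, then penalize the squared distance.
    feat_*: (1, C, H, W) feature maps (masks already concatenated per Eq. (5))."""
    px = F.unfold(feat_x, kernel_size=3).squeeze(0).T     # (n_x, 9C)
    ps = F.unfold(feat_s, kernel_size=3).squeeze(0).T     # (n_s, 9C)
    sim = F.normalize(px, dim=1) @ F.normalize(ps, dim=1).T
    cm = sim.argmax(dim=1)                                # Eq. (6)
    return ((px - ps[cm]) ** 2).sum()                     # Eq. (7)
```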
5 EXPERIMENTS For SST, we select some of the most representative methods in specific domains, and run the published implementation with default stylization settings for each method. For PST (specifically, our ParallelNet), the hyperparameters α, ω1, ω2 and µ are fixed at 10, 0.1, 10 and 1, respectively. λ is set to 10 for cases with segmentation masks and 0 otherwise. ω3 is set to 10^4 for photo-realistic style transfer, and 0 otherwise. The optimization is conducted by L-BFGS Zhu et al. (1997) and stopped at 500 iterations. The initialization is the content image. 5.1 ABLATION STUDIES How to select the appropriate methods? To find some empirical principles, we conduct a user study (Table 1) on some representative NST methods in terms of content fidelity, stylization global color and local texture patterns, since these three aspects are our main considerations in selecting the methods. To alleviate the burden of subjects, we show them 150 synthesized images of each method, and ask them to select one overall level for each aspect. We collect a total of 990 votes from 30 subjects; each method has 30 votes in each aspect. The levels with the most votes are chosen. For SST, which methods could be combined, and which order will return the optimal solutions? Taking artistic style transfer as an example, Fig. 5 shows comparisons of different serial schemes. The first column shows a reasonable serial scheme, since method (b) helps enrich the global color and method (c) improves the local styles. The second column shows an unreasonable one, as method (d) introduces some content distortions, and this directly affects the subsequent outputs. The third and fourth columns show the effects of node order. We can find in the third column that methods (b) and (a) progressively enrich the global color of (c), but do not change the local patterns. In the fourth column, unfortunately, changing the order of (d) cannot avoid the content distortions; but since artistic style transfer does not require high content fidelity, the result is still acceptable. The last column shows some invalid nodes: methods (b) and (e) have only a subtle effect on the result of (d). Now, looking at these in conjunction with Table 1, we can conclude the following: • For the aspects of global color and local patterns, the same level can promote each other, and the higher level can improve the performance of the lower level, while the lower level has little effect on the higher level. The results are determined by the order of methods with different levels. • For the aspect of content fidelity, the higher level is only conducive to content preservation, while the distortions produced by the lower level are irreversible and sequence-independent. • The selection is specific to particular domains, e.g., for artistic or semantic style transfer, the global color and local patterns take priority over content fidelity, while for photo-realistic style transfer, content fidelity is the primary concern. These conclusions are also applicable to semantic and photo-realistic style transfer. More results can be found in the appendix. We believe that these would help users design useful serial schemes to obtain desired results, and inspire future works in neural style transfer. For PST, how many and which methods could be combined? As we introduced in Section 1, our PST is proposed to improve the quality and flexibility for different domains. Generally, any approach that satisfies the following two conditions can be incorporated into our PST: • Sharing the same backbone (e.g., VGG-19) and a similar process flow (e.g., backward optimization). • Having the capacity to solve the problem of a particular style transfer domain. How to get the optimal hyperparameters for PST? Our PST introduces some hyperparameters; we can adapt the model to different domains by adjusting them, but the fine-tuning may be intractable. Fortunately, we find that for most methods, the optimal hyperparameters before and after combination are almost the same, e.g., the optimal values of λ and ω3 of our ParallelNet are the same as those of the original methods. For the others, we just need to make a few adjustments according to the original methods. The optimal hyperparameters generalize to most cases, but for some special ones, fine-tuning may produce better results. Ablation studies about the different loss terms and their hyperparameters of our ParallelNet can be found in the appendix. 5.2 EVALUATION AND COMPARISONS Qualitative Comparisons: Fig. 6 shows some qualitative comparisons on artistic, semantic and photo-realistic style transfer. In the top row, we select Sheng et al. (2018), Li et al. (2019) and Li & Wand (2016) as three sequential nodes of our SST(a).
Compared with these methods, which only transfer the global color or local textures, our SST(a) and PST consider both aspects simultaneously. In the middle row, we select Champandard (2016), Gu et al. (2018) and Liao et al. (2017) as three sequential nodes of our SST(b). We observe that these methods often produce either insufficiently stylized results or abnormal artifacts. By contrast, our SST(b) and PST produce much more satisfying results. In the bottom row, since the content fidelity level of Li et al. (2018) in Table 1 is only M (we can also observe some undesired shadows in column 3) and this is detrimental to our SST in such a high-fidelity task, we only select Li et al. (2019) and Luan et al. (2017) as sequential nodes of our SST(c). As we can see, the result of the method of Li et al. (2019) may lose some striking styles, e.g., the light at the bottom of the ship. The method of Luan et al. (2017) may introduce some unacceptable artifacts, e.g., the abnormal hull. These problems do not occur in the results of our SST(c) and PST. Quantitative Comparisons: We conduct several user studies on different style transfer domains. The results can be found in the appendix. In all domains, our SSTs and PST show their superiority. We also compare the flexibility of different methods in different style transfer domains in Table 2. Since our SSTs target higher quality for specific domains, they do not contribute more flexibility. By contrast, our PST is the only one that can flexibly adapt to all the mentioned domains. Speed and Memory Analysis: Obviously, the time and memory requirements of our SSTs are simply the sums of those of their nodes. For PST, since the combined methods all share the same backbone, and the intermediate features can be stored and reused, the additional time and memory costs are slight. The speed of our ParallelNet is comparable to that of the method of Gatys et al. (2016). 6 CONCLUSION AND FUTURE DIRECTIONS In this paper, we propose cascade style transfer to combine multiple approaches in serial or in parallel without modifying any algorithm. Experiments have verified that our methods can improve the stylistic quality and flexibility over previous state-of-the-art methods in the artistic, semantic and photo-realistic style transfer domains. In the future, more effective and efficient schemes could be designed, and the combination of SST and PST is also an interesting direction worthy of further study. Moreover, our methods can be regarded as data-flow graphs of neural image editing operators from a high-level perspective; this is another neat idea for interactive computer graphics systems and will probably have considerable impact on future work and architectures in this domain. A APPENDIX A.1 MORE RESULTS ABOUT INITIALIZATION A.1.1 GATYS ET AL. (2016) Fig. 7 shows more results of different initialization strategies on the method of Gatys et al. (2016). As mentioned in the paper, the first column shows the content image (top) and the style image (bottom). The other columns show the initialization images (top) and the corresponding style transfer results (bottom) of the method of Gatys et al. (2016). In column (f), we initialize with the default stylized results of the method of Li & Wand (2016) (SROOM). In column (g), we replace the content image with SROOM and then initialize with it (RC-SROOM). A.1.2 LI & WAND (2016) Fig. 8 shows more results of different initialization strategies on the method of Li & Wand (2016). The first column shows the content image (top) and the style image (bottom).
The other columns show the initialization images (top) and the corresponding style transfer results (bottom) of the method of Li & Wand (2016). In column (f), we initialize with the default stylized results of the method of Gatys et al. (2016) (SROOM). In column (g), we replace the content image with SROOM and then initialize with it (RC-SROOM). A.2 MORE SERIAL SCHEMES FOR SST A.2.1 ARTISTIC STYLE TRANSFER Fig. 9 shows more comparisons of different serial schemes in the artistic style transfer domain. As mentioned in the paper, each column represents a serial scheme, and each row shows the intermediate output. We select (a) Sheng et al. (2018), (b) Li et al. (2019), (c) Li & Wand (2016), (d) Li et al. (2017b), (e) Huang & Belongie (2017) as the nodes of the different serial schemes. A.2.2 SEMANTIC STYLE TRANSFER Fig. 10 shows some comparisons of different serial schemes in the semantic style transfer domain. Each column represents a serial scheme, and each row shows the intermediate output. We select (a) Champandard (2016), (b) Gu et al. (2018), (c) Liao et al. (2017), (d) Li et al. (2017b), (e) Sheng et al. (2018), (f) Li et al. (2019) as the nodes of the different serial schemes. A.2.3 PHOTO-REALISTIC STYLE TRANSFER Fig. 11 shows some comparisons of different serial schemes in the photo-realistic style transfer domain. Each column represents a serial scheme, and each row shows the intermediate output. We select (a) Li et al. (2019), (b) Luan et al. (2017), (c) Li et al. (2018), (d) Liao et al. (2017), (e) Huang & Belongie (2017) as the nodes of the different serial schemes. A.3 ABLATION STUDIES OF PARALLELNET We study the effects of the different loss terms of our proposed ParallelNet, including the style losses L1 (Eq. (4)) and L2 (Eqs. (5) and (7)) and the photorealism regularization loss L3 (Eq. (8)). Generally, the weight α of Lc is fixed to 10, and the weight µ of LTV is fixed to 1. Fig. 12 shows the results of artistic style transfer obtained by varying the weight ω1 of loss L1 and ω2 of loss L2 while fixing the other weights. As the last column shows, the original method of Gatys et al. (2016) transfers the global color and rough textures of the style image, but does not transfer the intricate local patterns. The original method of Li & Wand (2016) transfers many more local style patterns, but, as shown, this method may generate insufficiently stylized results when a huge difference exists between the content and style images. By combining the characteristics of these two methods, our ParallelNet can transfer both the global color and the local textures of the style images. As the top row shows, increasing the weight ω1 of loss L1 transfers more global color and rough textures. As the bottom row shows, increasing the weight ω2 of loss L2 retains more local style patterns. However, owing to the trade-off between the content loss and the style loss, distortions of the content structure are inevitable if the values of ω1 and ω2 are too high. In our work, for artistic style transfer, ω1 and ω2 are set to 0.1 and 10 by default, respectively. Fig. 13 shows the results of semantic style transfer obtained by varying the weights ω2 and λ of loss L2. The top row shows the transfer of a painted style (a) onto a photo (b); this is easy and does not require the constraints of segmentation masks, so we fix the semantic awareness weight λ at 0. On the other hand, according to the aforementioned practice, we set the values of ω1 and ω3 to 0.1 and 0 by default, respectively.
As the top row shows, increasing ω2 refines more detailed information of the corresponding semantics, but this may change the original structure of the content image. To avoid this, the value of ω2 should not be too high, so in our work, we set it to 10 by default. Building on this, the bottom row shows the transfer of a photo style (b) onto a painting (a); this is harder, so we use the constraints of segmentation masks to get better results. As the bottom row shows, increasing λ achieves more accurate semantic matching. To obtain the best results, we set λ to 10 by default. Fig. 14 shows the results of photo-realistic style transfer obtained by varying the weight ω3 of the photorealism regularization loss L3. Unlike Luan et al. (2017), which uses a two-stage optimization process based on the outputs of Gatys et al. (2016), we directly solve this problem by optimizing Eq. (10). This is much simpler, and could produce comparable results due to the synergistic effects of the different loss terms. As we can see, too small a value of ω3 cannot prevent distortions; thus the results in columns 2 and 3 have a non-photorealistic look. Conversely, too large a value of ω3 suppresses the style to be transferred and leads to color infidelity (see columns 5 and 6). Similar to Luan et al. (2017), we choose ω3 = 10^4 by default. A.4 QUALITATIVE COMPARISONS WITH OTHER METHODS Here we demonstrate the results of our methods and twelve other representative single- or multi-domain methods Sheng et al. (2018); Li et al. (2019); Li & Wand (2016); Champandard (2016); Gu et al. (2018); Liao et al. (2017); Luan et al. (2017); Li et al. (2018); Yao et al. (2019); Gatys et al. (2016); Huang & Belongie (2017); Li et al. (2017b) on artistic, semantic and photo-realistic style transfer, respectively. Each figure shows the representative results obtained by one method. The left, center and right columns show examples of artistic, semantic and photo-realistic style transfer, respectively. For the inputs in each group, the upper one is the content image and the lower one is the style image. Relevant quality and flexibility analyses can be found under the caption of each figure. Compared to these methods, our SST schemes perform better on stylistic quality in specific domains, and our PST scheme performs better on stylistic quality and flexibility in all domains. As shown in Fig. 15, this scheme can perform well on artistic and semantic style transfer (without segmentation masks). But since its nodes do not support segmentation masks, it cannot handle cases where huge differences exist between the content and style images (e.g., the case at the top of the center column). This scheme is not suitable for photo-realistic style transfer, as it may produce many content distortions and abnormal artifacts (see the right column). As shown in Fig. 16, this scheme can perform well on artistic and semantic style transfer (with and without segmentation masks). However, in artistic style transfer, there are some deficiencies in content preservation because it prefers to express more style features (see the two cases at the bottom of the left column). This also limits its capability in photo-realistic style transfer (see the right column). As shown in Fig. 17, this scheme is only suitable for photo-realistic style transfer. As we can see, it maintains the photorealism of the content photographs and at the same time transfers the global color of the style images. But this also makes it difficult to produce artworks with non-realistic styles.
Since its nodes can incorporate segmentation masks, this scheme is able to use them. As shown in Fig. 18, this scheme is flexible enough to apply to all these style transfer domains. As we can see, in the left column, both the global color and local patterns of the artistic images can be transferred by this scheme. In the center and right columns, it can also achieve semantic and photo-realistic style transfer with and without segmentation masks. Moreover, users can further improve the stylistic quality by fine-tuning our provided hyperparameters for each input. As shown in Fig. 19, this method can be used in artistic and semantic style transfer (without segmentation masks), but the stylistic quality is not so good. As we can see, for artistic style transfer, it only transfers the global color of the style images, but lacks local patterns. For semantic style transfer, it may produce many hazy blocks, which affect the overall clarity. These problems (including content distortions and abnormal artifacts) also exist in photo-realistic style transfer. As shown in Fig. 20, this method can be used in artistic and photo-realistic style transfer. But since it prefers to maintain the structures of the content images, the produced results are often insufficiently stylized. This, however, helps it perform better in photo-realistic style transfer. On the other hand, since the spirit of this method is based on global statistics, it cannot solve the tasks of semantic style transfer. This method is able to use segmentation masks. As shown in Fig. 21, this method can be used in semantic style transfer (without segmentation masks). As we can see, for artistic style transfer, it can transfer adequate local patterns of the style images, but the global effects of the stylized results are unsatisfying (mainly because of insufficient global color, see the left column). On the other hand, the introduced content distortions also limit its capability for photo-realistic style transfer (see the right column). As shown in Fig. 22, this method can be used in artistic and semantic style transfer (with and without segmentation masks). For artistic style transfer (see the left column), most results are similar to those of Li & Wand (2016); the difference is that this method can perform better on either the global color or the local patterns, but it is still unsatisfying. For semantic style transfer, it may produce blurred results (e.g., the two cases at the top of the center column). For photo-realistic style transfer, it cannot adapt to this domain because of the introduced content distortions. As shown in Fig. 23, this method can be used in artistic and semantic style transfer (without segmentation masks). The stylized results produced by it obtain sufficient global color and local patterns, but there are also many abnormal artifacts which directly affect the final effect. This occurs in almost every case in these three domains. As shown in Fig. 24, this method can be used in semantic and photo-realistic style transfer (without segmentation masks). It is more suitable for the style transfer of image pairs which have high semantic-level correspondences. Therefore, for cases of artistic style transfer that have no semantic correspondence, this method yields poor stylized results (see the left column).
On the other hand, since this method does not support segmentation masks, mismatching is prone to occur in challenging tasks (e.g., the case at the top of the center column and the cases in the right column). As shown in Fig. 25, similar to our SST(c) scheme (see Fig. 17), this method is only suitable for photo-realistic style transfer. Compared to our SST(c) scheme, it may produce some abnormal artifacts, e.g., the hull and eyes in the two cases at the bottom of the right column, respectively. This method is able to use segmentation masks. As shown in Fig. 26, similar to our SST(c) scheme (see Fig. 17) and the method of Luan et al. (2017) (see Fig. 25), this method is only suitable for photo-realistic style transfer. In the right column, compared to our SST(c) scheme, it may produce too many undesired effects, e.g., the skies in cases 1 and 3, the shadows in case 2 and the eyes in case 4 (numbered from top to bottom). This method is able to use segmentation masks. As shown in Fig. 27, this method can be used in artistic and semantic style transfer (without segmentation masks). Because it incorporates a self-attention mechanism, this method can highlight more salient areas (e.g., characters’ eyes) of the images. Elsewhere, the performance is similar to the method of Sheng et al. (2018) (see Fig. 19). As shown in Fig. 28, this method is only suitable for artistic style transfer. As we can see, it can transfer the global color and rough textures of the artistic style images to the content images (see the left column). But since the spirit of its algorithm is based on global statistics, this method cannot solve semantic style transfer (see the center column). Of course, without the improvement introduced by Luan et al. (2017), it cannot solve the tasks of photo-realistic style transfer (see the right column). As shown in Fig. 29, similar to the method of Gatys et al. (2016) (see Fig. 28), this method is only suitable for artistic style transfer. Compared to the method of Gatys et al. (2016), some results generated by it are not sufficiently stylized (e.g., the two cases at the bottom of the left column). As shown in Fig. 30, similar to the methods of Gatys et al. (2016) (see Fig. 28) and Huang & Belongie (2017) (see Fig. 29), this method is only suitable for artistic style transfer. Compared to the methods of Gatys et al. (2016) and Huang & Belongie (2017), some results generated by it are excessively stylized, thus introducing many undesired effects (e.g., cases 2 and 3 in the left column). A.5 QUANTITATIVE COMPARISONS WITH OTHER METHODS A.5.1 USER STUDY ON ARTISTIC STYLE TRANSFER We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Sheng et al. (2018); Li et al. (2019); Li & Wand (2016); Yao et al. (2019); Gatys et al. (2016); Li et al. (2017b) on artistic style transfer. We use 10 content images and 10 style images to synthesize 100 images in total for each method, and randomly assign 50 content-style combinations to each subject. We show the stylized images of the 10 compared methods (including ours) side-by-side in a random order and ask the subjects to select the most visually pleasant one. We collect 1500 votes from 30 users and show the percentage of votes for each method in Fig. 31. Overall, our proposed PST and SST(a) are favored among all evaluated methods.
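For reference, a minimal sketch of the vote aggregation behind the percentages in Figs. 31 and 32 is given below; the vote encoding is an assumption for illustration.

```python
from collections import Counter

def vote_percentages(votes):
    """Aggregate forced-choice votes (one preferred method per side-by-side
    comparison) into per-method percentages, as reported in Figs. 31 and 32."""
    counts = Counter(votes)
    return {m: 100.0 * n / len(votes) for m, n in counts.items()}

# e.g. votes = ["PST", "SST(a)", "Gatys", ...]  # 1500 entries for the artistic study
```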
A.5.2 USER STUDY ON SEMANTIC STYLE TRANSFER We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Sheng et al. (2018); Li & Wand (2016); Champandard (2016); Gu et al. (2018); Liao et al. (2017); Yao et al. (2019) on semantic style transfer. For each method, we use 30 image groups to synthesize 30 images (all images are produced without using segmentation masks) in total. We show stylized images of 10 compared methods (including ours) side-by-side in a random order and ask the subjects to select the most visually pleasant one. We collect 900 votes from 30 users and show the percentage of votes for each method in Fig. 32 (a). Overall, our proposed PST, SST(b) and SST(a) are favored among all evaluated methods. A.5.3 USER STUDY ON PHOTO-REALISTIC STYLE TRANSFER We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Li et al. (2019); Liao et al. (2017); Luan et al. (2017); Li et al. (2018) on photo-realistic style transfer. For each method, we use 30 image groups to synthesize 30 images in total. We show stylized images of 8 compared methods (including ours) side-by-side in a random order and ask the subjects to select the most visually pleasant one. We collect 900 votes from 30 users and show the percentage of votes for each method in Fig. 32 (b). Overall, our proposed SST(c) and PST are favored among all evaluated methods.
1. What is the focus of the paper regarding artistic style transfer, and what are the proposed method's strengths and weaknesses? 2. What are the limitations of the experiments and comparisons made in the paper? 3. Do you have any concerns regarding the architecture choice and hyperparameters exploration? 4. How does the reviewer assess the quality and reliability of the quantitative comparison and user study results? 5. Are there any suggestions for improving the rigor and robustness of the human studies and statistical analysis?
Review
Review Summary: In this study, the authors propose a new method for performing artistic style transfer for arbitrary images and styles. The new method employs a cascade/serial architecture for performing the style transfer. The authors test their method using human preference studies. In summary, I found the architecture choice to be minimally explored. More importantly, a vast majority of the results to demonstrate the relative merits of this method were qualitative. The minimal quantitative results were unconvincing and left many unanswered questions about how well one could trust these results. Major Comments: 1. No experiments to explore the architecture hyperparameters. A natural question might be how the quality of the method varies systematically as the number of methods N grows. Presumably, if N=1, this would recover previous methods. 2. Authors are missing an important reference and point of comparison for arbitrary style transfer. Exploring the structure of a real-time, arbitrary neural artistic stylization network Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens https://arxiv.org/abs/1705.06830 http://goo.gle/2oiDKaT 3. Minimal quantitative analysis. A vast majority of the results (30 of 32 figures) are qualitative comparisons and the paper is sorely lacking an emphasis on quantitative comparisons. This is a large and notable problem in this paper, and a quantitative comparison *should* constitute the primary thrust and central result of such a paper to convincingly demonstrate to a reader that the proposed method is indeed superior to other techniques. I wish the authors dedicated more emphasis in this paper to a detailed quantitative comparison for these methods. As a starter, the analysis presented as the final two appendix figures (31 and 32) should be front and center in the result section of the paper. 4. User study for quantitative comparison is incomplete and unconvincing. Table 1 and Appendix Figures 31 and 32 represent the primary result of this paper, as these comprise the user studies that quantify how much better this method is than previous methods. These studies, however, are fairly unconvincing as lots of details are omitted, and I am concerned about the rigor of the human studies, including but not limited to: 4a. How long did each human study each image? What controls were added to the study to ensure that all images were equally studied by humans? For instance, were any golden tests employed to ensure user engagement throughout the study? 4b. What was the repeatability of each measurement of preference? If a single human was presented the same image twice, how consistent were their ratings? For that matter, how consistent were the ratings across humans? I presume that some humans preferred some styles over others, but how systematic was this? 4c. What types of user testing scenarios were explored to ensure minimal bias in the results? Were multiple-choice, paired-choice, or forced-choice designs employed? What about minimal or maximal time limit enforcement? 4d. How can I have confidence that the authors did not cherry pick images and styles that favored their method? For that matter, I would expect that some methods work better on some styles or images. I would expect to see analysis that breaks down which styles/images work better on different slices of the data. 4e. The statistical significance of Figure 31 and Figure 32 is not provided. What would an error bar look like with resampling?
ICLR
Title Cascade Style Transfer Abstract Recent studies have made tremendous progress in style transfer for specific domains, e.g., artistic, semantic and photo-realistic. However, existing approaches have limited flexibility in extending to other domains, as different style representations are often specific to particular domains. This also limits the stylistic quality. To address these limitations, we propose Cascade Style Transfer, a simple yet effective framework that can improve the quality and flexibility of style transfer by combining multiple existing approaches directly. Our cascade framework contains two architectures, i.e., Serial Style Transfer (SST) and Parallel Style Transfer (PST). The SST takes the stylized output of one method as the input content of the others. This could help improve the stylistic quality. The PST uses a shared backbone and a loss module to optimize the loss functions of different methods in parallel. This could help improve the quality and flexibility, and guide us to find domain-independent approaches. Our experiments are conducted on three major style transfer domains: artistic, semantic and photo-realistic. In all these domains, our methods have shown superiority over the state-of-the-art methods. 1 INTRODUCTION Given the content and style images, the goal of style transfer is to synthesize an image that preserves some notion of the content but carries characteristics of the style. Recently, the seminal work of Gatys et al. (2015) firstly captured the style of artistic images and transferred it to other images using Convolutional Neural Networks (CNNs). Since then, various Neural Style Transfer (NST) Jing et al. (2017) methods have been advanced and obtained visually pleasing results. Despite the recent rapid progress, these existing works often limited to one or few specific domains (in this paper, we mainly focus on three domains: artistic, semantic and photo-realistic). For instance, Li et al. (2017b); Huang & Belongie (2017); Gatys et al. (2016) can transfer the artistic styles well, but they perform poorly on the style transfer of photographs and corresponding semantics. Luan et al. (2017); Li et al. (2018) specialize in photo-realistic style transfer, and Li & Wand (2016); Champandard (2016) mainly target semantic style transfer. Fortunately, there are some multi-domain approaches which can perform well on multiple domains, e.g., Liao et al. (2017) can perform well on semantic and photo-realistic style transfer, Gu et al. (2018) is suitable for artistic and semantic style transfer, and Li et al. (2019) can adapt to artistic and photo-realistic style transfer. Nevertheless, they still have some limitations, and the quality could be further improved (see Fig. 1 (a)). That is to say, nowadays, it is still inconvenient to users that have to choose the appropriate methods for specific domains. In this sense, finding a common approach which could perform well in all style transfer domains is extremely hard but significant. As a coin has two sides, every existing NST method has both advantages and shortcomings. Fig. 1 (b) shows some typical examples, we can observe that using Gram matrices Gatys et al. (2016) to transfer the artistic styles performs well on global color, but fails to capture enough local patterns (e.g., circles and droplets). Patch-based method Li & Wand (2016) can alleviate this problem, but may cause insufficient color. Is there a way to combine the advantages of both and overcome their shortcomings? 
Obviously, redesigning a new algorithm is difficult, why not use some simpler ways, such as combining existing methods directly through some general architectures? Based on the above analyses, we propose Cascade Style Transfer (CST) mainly for two targets, i.e., higher quality and higher flexibility, and design two architectures, i.e., the Serial Style Transfer (SST) and the Parallel Style Transfer (PST) for these targets. In this work, we first revisit and demonstrate the impact of different initialization strategies on style transfer, and inspired by this, design our SST for higher quality domain-specific style transfer. Moreover, we develop upon this and further propose our PST for more flexible style transfer, this could guide us to create domainindependent approaches. As far as we know, this is the first paper to propose domain-independent style transfer (note that this kind of approach is flexible for arbitrary images in arbitrary domains, while existing so-called arbitrary style transfer methods are only flexible for arbitrary images in specific domains), and also the first attempt to combine multiple existing approaches directly to improve the quality and flexibility of style transfer. The main contributions of our work are: •We revisit the initialization of style transfer, and demonstrate that initialization can play an important role in improving the quality of style transfer. • We propose a serial architecture for cascade style transfer, it is simple yet effective, which could help improve the quality of domain-specific style transfer. • We first propose domain-independent style transfer, and design a parallel architecture to help improve the quality and flexibility of style transfer. 2 RELATED WORK Our cascade style transfer can be related to the most style transfer methods. In this paper, we mainly focus on the NST methods in three major domains: artistic, semantic and photo-realistic. Artistic style transfer. This domain is dedicated to transferring the global artistic styles (e.g., abstract, painterly or sketch). The most representative work is Gatys et al. (2015). This method could produce amazing results but suffers from a slow iterative optimization procedure. To address it, Johnson et al. (2016) and Ulyanov et al. (2016; 2017) trained feed-forward generative networks for fast artistic style transfer. But one limit is that each model is trained to transfer exactly one fixed style. Some methods Dumoulin et al. (2017); Zhang & Dana (2018); Li et al. (2017a); Chen et al. (2017) further incorporated multiple styles into one single model, but they are still limited to a fixed number of pre-trained styles. Recently, several methods Li et al. (2017b); Huang & Belongie (2017); Li et al. (2019) were proposed to allow artistic style transfer for arbitrary images. Semantic style transfer. Transferring the styles between the corresponding semantic regions of the style and content images is referred to as semantic style transfer. The most representative work is Li & Wand (2016). They combined Markov Random Fields (MRFs) and CNNs to match the most similar local neural patches of the style and content images. Later, Champandard (2016) incorporated the segmentation masks for stricter semantic constraints. Recently, Liao et al. (2017) proposed Deep Image Analogy for accurate semantic-level patch match. Gu et al. (2018) used Deep Feature Reshuffle to consider both global and local information. Mechrez et al. 
(2018) proposed an alternative contextual loss for segmentation-free semantic style transfer. Moreover, some feedforward methods Chen & Schmidt (2016); Lu et al. (2017); Sheng et al. (2018); Park & Lee (2019); Yao et al. (2019) were also proposed for fast and real-time semantic style transfer. Photo-realistic style transfer. Photo-realistic style transfer seeks to transfer the style of a reference style photo onto other pictures. The greatest characteristic is that both the global structures and detailed contours in the content images should be preserved during the process. Traditional methods based on Global Reinhard et al. (2001); Pitie et al. (2005) and Local Laffont et al. (2014); Shih et al. (2014) are slow in practice and limited in specific scenarios (e.g., outdoor scenes or headshot portraits). Recently, Luan et al. (2017) incorporated a new loss term to the optimization objective of Gatys et al. (2015) to improve the photorealism of stylization outputs. Li et al. (2018) introduced a closed-form solution consisting of a stylization and a smoothing step for faster speed. More recently, Li et al. (2019) have also shown an effective approach for fast photo-realistic style transfer. Despite the fact that current NST methods have shown good performance in specific domains, there are few studies on how to combine them directly to improve the quality and flexibility. In our work, we select Gatys et al. (2016); Li et al. (2017b); Huang & Belongie (2017); Li et al. (2019; 2018); Luan et al. (2017); Sheng et al. (2018); Liao et al. (2017); Gu et al. (2018); Li & Wand (2016); Champandard (2016) for our studies mainly because of their representativeness in specific domains. 3 REVISIT INITIALIZATION IN STYLE TRANSFER Initialization is the first yet important step in almost all style transfer algorithms. Although some papers Gatys et al. (2016); Nikulin & Novak (2016) have discussed the impact of the initialization on style transfer, they only use white noise, the content image or the style image, and the evaluation criteria are only based on the qualitative aspect. Here, back to the most original NST algorithms, we revisit and compare more initialization strategies from both qualitative and quantitative aspects. Our experiments are based on two most original NST methods Gatys et al. (2016); Li & Wand (2016). These methods always initialize with white noise or the content image and iteratively optimize pixels to match the content features of the content image and style features of the style image. Besides white noise, the content and style images, we try and compare four other initialization strategies: salt and pepper noise, poisson noise, the stylized results of other methods (SROOM), and moreover, replacing the content images with SROOM and then initializing with them (RC-SROOM). Qualitative Comparisons: Fig. 2 demonstrates the qualitative comparisons. As we can see, initializing with the content image produces results with insufficient color (column a). Results generated from the style image introduce some undesired color (column b). Using salt and pepper noise produces over-bright results, and using poisson noise produces darker results (column d and e). By contrast, white noise initialization yields satisfying results (column c), but the results of SROOM performs better on overall effect (column f). Remarkably, using RC-SROOM (column g) can dramatically improve the effect and absorb the merits (as shown and discussed in Fig. 1 (b)) of both methods. More results can be found in appendix. 
Quantitative Comparisons: Fig. 3 shows the quantitative comparisons. We evaluate these initialization strategies on the method of Gatys et al. (2016); the optimization is conducted with Adam and stopped at 1000 iterations. We can see that initializing with images (i.e., content, style, SROOM and RC-SROOM) makes the loss fall faster, while initializing with noise (i.e., white noise, salt-and-pepper noise and Poisson noise) decreases the loss more steadily. It is worth noting that using RC-SROOM achieves much lower total, content and style loss than all other strategies.
Conclusion: In this section, we have demonstrated that initialization plays an important role in improving the quality of style transfer. Compared with other initialization strategies, RC-SROOM has outstanding performance. More importantly, it can help produce higher-quality style transfer results that absorb the merits of multiple methods. Inspired by this, we propose cascade style transfer, which is presented in later sections and in turn verifies this conclusion.
4 CASCADE STYLE TRANSFER
In this paper, we define cascade style transfer as the combination of different NST methods. It contains two architectures: serial style transfer and parallel style transfer.
4.1 SERIAL STYLE TRANSFER (SST)
As shown in Fig. 4 (a), serial style transfer serially connects multiple style transfer methods. Let $\vec{c}$, $\vec{s}$ and $\vec{x}_i$ be the content image, the style image and the stylized result of method $i$. The style transfer process of method $i$ is denoted as $f_i$. Specifically, for the first method, we use $f_1(\vec{c}, \vec{s}, d)$ to denote transferring the style of $\vec{s}$ to $\vec{c}$ by method 1 with the default initialization settings. For the others, we use $f_i(\vec{x}_{i-1}, \vec{s})$ to denote initializing with $\vec{x}_{i-1}$ and then transferring the style of $\vec{s}$ to $\vec{x}_{i-1}$ by method $i$. Our serial style transfer can be formulated as

$$\vec{x}_i = \begin{cases} f_1(\vec{c}, \vec{s}, d) & \text{if } i = 1 \\ f_i(\vec{x}_{i-1}, \vec{s}) & \text{otherwise.} \end{cases} \tag{1}$$

4.2 PARALLEL STYLE TRANSFER (PST)
As far as we know, current NST methods are mainly conducted in two different ways. One is based on VGG Simonyan & Zisserman (2014), iteratively optimizing the pixels of the input images. The other is training a feed-forward network to directly generate the stylized results. Here, to demonstrate our PST more intuitively, we design a simple parallel architecture based on the former. As shown in Fig. 4 (b), our PST contains two important parts. One is the shared backbone (e.g., VGG-19), which is mainly used for feature extraction and error back-propagation. The other is the loss module, which is used to combine the loss functions of different methods. Let $\mathcal{L}_i$ denote the loss function of method $i$ and "$\oplus$" denote a linear combination operation between different loss functions. We give a hyperparameter $\omega_i$ to weight every loss function $\mathcal{L}_i$. The total loss is defined as follows:

$$\mathcal{L}_{total}(i) = \begin{cases} \omega_1 \mathcal{L}_1 & \text{if } i = 1 \\ \mathcal{L}_{total}(i-1) \oplus \omega_i \mathcal{L}_i & \text{otherwise.} \end{cases} \tag{2}$$

We compute its gradients with respect to the pixel values and use them to iteratively update the input images. In this way, all methods can be optimized in parallel with the total loss.
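To make Eqs. (1) and (2) concrete, here is a minimal Python sketch, assuming each cascaded method is wrapped as a callable; the function names and interfaces are our own illustration, not part of any released implementation.

```python
def serial_style_transfer(methods, content, style):
    """Eq. (1): each method restyles the previous output.

    `methods` is a list of callables wrapping existing NST implementations;
    the first uses its own default initialization (f_1(c, s, d)), and every
    later method is initialized with the previous result x_{i-1}.
    """
    x = methods[0](content, style)
    for f in methods[1:]:
        x = f(x, style)
    return x

def parallel_total_loss(losses, weights, x, content, style):
    """Eq. (2): a linear combination of the loss functions of the combined
    methods, all evaluated on features from one shared backbone."""
    return sum(w * L(x, content, style) for w, L in zip(weights, losses))
```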
To verify the effectiveness of our PST, we design a specific parallel scheme, ParallelNet, based on four popular domain-specific style transfer methods Champandard (2016); Li & Wand (2016); Gatys et al. (2016); Luan et al. (2017). The principles for selecting the appropriate approaches will be discussed in later sections. The detailed combination procedure is as follows:
• For the method of Gatys et al. (2016), we capture the vectorized features $F_\ell[\vec{x}]$ and $F_\ell[\vec{s}]$ of layers $\ell \in \{\mathrm{conv}k\_1\}_{k=1}^{5}$ for the generated image $\vec{x}$ and the style image $\vec{s}$, and compute the Gram matrices:

$$G_\ell[\cdot] = F_\ell[\cdot] \, F_\ell[\cdot]^{T}, \tag{3}$$

where the Gram matrix is defined as the inner product between the vectorized features. In each layer, there are $N_\ell$ filters, each with a vectorized feature of size $M_\ell$. We define our style loss $\mathcal{L}_1$ as follows:

$$\mathcal{L}_1 = \frac{1}{(2 N_\ell M_\ell)^2} \left\| G_\ell[\vec{x}] - G_\ell[\vec{s}] \right\|^2. \tag{4}$$

• For the methods of Champandard (2016) and Li & Wand (2016), we reuse the features $F_\ell[\vec{x}]$ and $F_\ell[\vec{s}]$ of layers $\ell \in \{\mathrm{conv}k\_1\}_{k=3}^{4}$, and then concatenate them with the segmentation masks $\vec{c}_m$ and $\vec{s}_m$ of the content image $\vec{c}$ and the style image $\vec{s}$ at the same resolutions, respectively:

$$F_{\ell,m}[\vec{x}] = F_\ell[\vec{x}] \,\Vert\, \lambda \cdot \eth[\vec{c}_m, \ell], \qquad F_{\ell,m}[\vec{s}] = F_\ell[\vec{s}] \,\Vert\, \lambda \cdot \eth[\vec{s}_m, \ell], \tag{5}$$

where $\eth[\cdot, \ell]$ denotes resizing the masks to the same resolution as the output of layer $\ell$, "$\Vert$" denotes channel concatenation, and the hyperparameter $\lambda$ weights the semantic awareness. Then we extract a set of 3×3 neural patches for both $F_{\ell,m}[\vec{x}]$ and $F_{\ell,m}[\vec{s}]$, denoted by $\{\Phi_i(\vec{x})\}_{i \in n_x}$ and $\{\Phi_j(\vec{s})\}_{j \in n_s}$, where $n_x$ and $n_s$ are the numbers of extracted patches. For each patch $\Phi_i(\vec{x})$, we determine a closest-matching style patch $\Phi_{CM(i)}(\vec{s})$ based on the following measure:

$$CM(i) := \arg\max_{j=1,\dots,n_s} \frac{\Phi_i(\vec{x}) \cdot \Phi_j(\vec{s})}{|\Phi_i(\vec{x})| \cdot |\Phi_j(\vec{s})|}. \tag{6}$$

Finally, our style loss $\mathcal{L}_2$ is defined as follows:

$$\mathcal{L}_2 = \sum_{i=1}^{n_x} \left\| \Phi_i(\vec{x}) - \Phi_{CM(i)}(\vec{s}) \right\|^2. \tag{7}$$

• For the method of Luan et al. (2017), we directly use their photorealism regularization term $\mathcal{L}_3$:

$$\mathcal{L}_3 = \sum_{c=1}^{3} V_c[\vec{x}]^{T} \mathcal{M}_I V_c[\vec{x}], \tag{8}$$

where $\mathcal{M}_I$ is the Matting Laplacian matrix Levin et al. (2008), which is used to express a locally affine combination of the input RGB channels and only depends on the input image $I$. $V_c[\vec{x}]$ denotes the vectorized version of the output image $\vec{x}$ in channel $c$.
• At last, these methods use the same content loss $\mathcal{L}_c$ to preserve the structure of the content image:

$$\mathcal{L}_c = \left\| F[\vec{x}] - F[\vec{c}] \right\|^2, \tag{9}$$

where $F[\vec{x}]$ and $F[\vec{c}]$ denote the features extracted from layer conv4_2.
Total loss: In our loss module, the total loss is a simple linear combination of the above losses:

$$\mathcal{L}_{total} = \alpha \mathcal{L}_c + \omega_1 \mathcal{L}_1 + \omega_2 \mathcal{L}_2 + \omega_3 \mathcal{L}_3 + \mu \mathcal{L}_{TV}, \tag{10}$$

where $\mathcal{L}_{TV}$ refers to the total variation regularization loss Aly & Dubois (2005).
5 EXPERIMENTS
For SST, we select some of the most representative methods in specific domains and run the published implementation with the default stylization settings for each method. For PST (we specifically refer to our ParallelNet), the hyperparameters $\alpha$, $\omega_1$, $\omega_2$ and $\mu$ are fixed at 10, 0.1, 10 and 1, respectively. $\lambda$ is set to 10 for cases with segmentation masks and 0 for others. $\omega_3$ is set to $10^4$ for photo-realistic style transfer and 0 for others. The optimization is conducted with L-BFGS Zhu et al. (1997) and stopped at 500 iterations. The initialization is the content image.
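To ground Eqs. (3)–(10), here is a minimal PyTorch sketch of the main loss terms and the optimization loop described above. The feature extractor `extract`, the `cfg` dictionary and all function names are our own illustration; the matting-Laplacian term $\mathcal{L}_3$ (Eq. 8) is omitted for brevity, and we use an $\ell_1$ total variation penalty as one common choice for $\mathcal{L}_{TV}$.

```python
import torch
import torch.nn.functional as F

def gram_style_loss(fx, fs):
    """Eqs. (3)-(4) for one layer. fx, fs: (C, H, W) feature maps."""
    n = fx.shape[0]
    vx, vs = fx.flatten(1), fs.flatten(1)        # N_l x M_l vectorized features
    m = vx.shape[1]
    gx, gs = vx @ vx.t(), vs @ vs.t()            # Gram matrices, Eq. (3)
    return ((gx - gs) ** 2).sum() / (2 * n * m) ** 2

def patch_style_loss(fx, fs, k=3):
    """Eqs. (6)-(7): match each generated patch to its closest style patch by
    normalized cross-correlation. The lambda-weighted masks of Eq. (5) would
    be concatenated along the channel axis beforehand."""
    px = F.unfold(fx.unsqueeze(0), k)[0].t()     # (n_x, C*k*k) patches
    ps = F.unfold(fs.unsqueeze(0), k)[0].t()     # (n_s, C*k*k) patches
    sim = F.normalize(px, dim=1) @ F.normalize(ps, dim=1).t()   # Eq. (6)
    return ((px - ps[sim.argmax(dim=1)]) ** 2).sum()            # Eq. (7)

def total_loss(x, content, style, cfg, extract):
    """Eq. (10). `extract` maps an image to {layer: features} from a fixed
    VGG-19 backbone; the term L3 of Eq. (8) is omitted here."""
    fx, fc, fs = extract(x), extract(content), extract(style)
    L_c = ((fx["conv4_2"] - fc["conv4_2"]) ** 2).sum()          # Eq. (9)
    L_1 = sum(gram_style_loss(fx[l], fs[l]) for l in cfg["gram_layers"])
    L_2 = sum(patch_style_loss(fx[l], fs[l]) for l in cfg["patch_layers"])
    L_tv = ((x[:, 1:, :] - x[:, :-1, :]).abs().sum()
            + (x[:, :, 1:] - x[:, :, :-1]).abs().sum())
    return (cfg["alpha"] * L_c + cfg["w1"] * L_1
            + cfg["w2"] * L_2 + cfg["mu"] * L_tv)

# Optimization as in Section 5: L-BFGS over the pixels, content-image init.
def stylize(content, style, cfg, extract, iters=500):
    x = content.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([x], max_iter=iters)
    def closure():
        opt.zero_grad()
        loss = total_loss(x, content, style, cfg, extract)
        loss.backward()
        return loss
    opt.step(closure)
    return x.detach()
```

Note that `F.unfold` extracts all overlapping patches at once, so the argmax of Eq. (6) becomes a single matrix multiplication rather than an explicit loop over patches.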
5.1 ABLATION STUDIES
How to select the appropriate methods? To find some empirical principles, we conduct a user study (Table 1) on some representative NST methods in terms of content fidelity, stylization global color and local texture patterns, since these three aspects are our main considerations in selecting the methods. To alleviate the burden on subjects, we show them 150 synthesized images per method and ask them to select one overall level for each aspect. We collect 990 votes in total from 30 subjects; each method has 30 votes in each aspect. The levels with the most votes are chosen.
For SST, which methods can be combined, and which order returns the optimal solutions? Taking artistic style transfer as an example, Fig. 5 shows the comparisons of different serial schemes. The first column shows a reasonable serial scheme, since method (b) helps enrich the global color, and method (c) improves the local styles. The second column shows an unreasonable one, as method (d) introduces some content distortions, and this directly affects the subsequent outputs. The third and fourth columns show the effects of node order. We can find in the third column that methods (b) and (a) progressively enrich the global color of (c), but do not change the local patterns. In the fourth column, unfortunately, changing the order of (d) cannot avoid the content distortions, but since artistic style transfer does not require high content fidelity, the result is still acceptable. The last column shows some invalid nodes: methods (b) and (e) have only a subtle effect on the result of (d). Looking at these in conjunction with Table 1, we can conclude the following:
• For the aspects of global color and local patterns, methods at the same level can promote each other, and a higher level can improve the performance of a lower level, while a lower level has little effect on a higher level. The results are determined by the order of methods with different levels.
• For the aspect of content fidelity, a higher level is only conducive to content preservation, while the distortions produced by a lower level are irreversible and sequence-independent.
• The selection is specific to particular domains, e.g., for artistic or semantic style transfer, global color and local patterns take priority over content fidelity, while for photo-realistic style transfer, content fidelity is the primary concern.
These conclusions are also applicable to semantic and photo-realistic style transfer. More results can be found in the appendix. We believe these would help users design useful serial schemes to obtain desired results, and inspire future works in neural style transfer.
For PST, how many and which methods can be combined? As we introduced in Section 1, our PST is proposed to improve the quality and flexibility for different domains. Generally, any approach that satisfies the following two conditions can be incorporated into our PST:
• Sharing the same backbone (e.g., VGG-19) and a similar process flow (e.g., backward optimization).
• Having the capacity to solve the problem of a different style transfer domain.
How to get the optimal hyperparameters for PST? Our PST introduces some hyperparameters; we can adapt the model to different domains by adjusting them, but the fine-tuning may be intractable. Fortunately, we find that for most methods, the optimal hyperparameters before and after combination are almost the same, e.g., the optimal values of $\lambda$ and $\omega_3$ of our ParallelNet are the same as those of the original methods. For the others, we just need to make a few adjustments according to the original methods. The optimal hyperparameters generalize to most cases, but for some special ones, fine-tuning may produce better results. Ablation studies on the different loss terms and their hyperparameters of our ParallelNet can be found in the appendix.
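For reference, the per-domain defaults reported in Section 5 can be gathered into one configuration. The dictionary below is our own summary, not an interface shipped with the paper; the semantic entry assumes segmentation masks are available ($\lambda$ drops back to 0 otherwise).

```python
# Default ParallelNet weights collected from Section 5 (our own summary).
PARALLELNET_DEFAULTS = {
    "artistic":       dict(alpha=10, w1=0.1, w2=10, w3=0.0, mu=1, lam=0),
    "semantic":       dict(alpha=10, w1=0.1, w2=10, w3=0.0, mu=1, lam=10),
    "photorealistic": dict(alpha=10, w1=0.1, w2=10, w3=1e4, mu=1, lam=10),
}
```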
5.2 EVALUATION AND COMPARISONS
Qualitative Comparisons: Fig. 6 shows some qualitative comparisons on artistic, semantic and photo-realistic style transfer. In the top row, we select Sheng et al. (2018), Li et al. (2019) and Li & Wand (2016) as the three sequential nodes of our SST(a). Compared with these methods, which only transfer the global color or local textures, our SST(a) and PST can consider both aspects simultaneously. In the middle row, we select Champandard (2016), Gu et al. (2018) and Liao et al. (2017) as the three sequential nodes of our SST(b). We observe that these methods often produce either insufficiently stylized results or abnormal artifacts. By contrast, our SST(b) and PST produce much more satisfying results. In the bottom row, since the content fidelity level of Li et al. (2018) in Table 1 is only M (we can also observe some undesired shadows in column 3) and this is detrimental to our SST in such a high-fidelity task, we only select Li et al. (2019) and Luan et al. (2017) as the sequential nodes of our SST(c). As we can see, the result of the method of Li et al. (2019) may lose some striking styles, e.g., the light at the bottom of the ship. The method of Luan et al. (2017) may introduce some unacceptable artifacts, e.g., the abnormal hull. These problems do not occur in the results of our SST(c) and PST.
Quantitative Comparisons: We conduct several user studies on different style transfer domains. The results can be found in the appendix. In all domains, our SSTs and PST show their superiority. We also compare the flexibility of different methods in different style transfer domains in Table 2. Since our SSTs target higher quality for specific domains, they do not contribute to more flexibility. By contrast, our PST is the only one that can flexibly adapt to all the mentioned domains.
Speed and Memory Analysis: Obviously, the time and memory requirements of our SSTs are simply the sums of those of their nodes. For PST, since the combined methods all share the same backbone, and the intermediate features can be stored and reused, the increases in time and memory are slight. The speed of our ParallelNet is comparable to that of the method of Gatys et al. (2016).
6 CONCLUSION AND FUTURE DIRECTIONS
In this paper, we propose cascade style transfer to combine multiple approaches in serial or in parallel without modifying any algorithm. Experiments have verified that our methods can improve the stylistic quality and flexibility over previous state-of-the-art methods in the artistic, semantic and photo-realistic style transfer domains. In the future, more effective and efficient schemes could be designed, and the combination of SST and PST is also an interesting direction worthy of further study. Moreover, from a high-level perspective, our methods can be regarded as data-flow graphs of neural image editing operators; this is another neat idea for interactive computer graphics systems and will probably have considerable impact on future work and architectures in this domain.
A APPENDIX
A.1 MORE RESULTS ABOUT INITIALIZATION
A.1.1 GATYS ET AL. (2016)
Fig. 7 shows more results of different initialization strategies on the method of Gatys et al. (2016). As mentioned in the paper, the first column shows the content image (top) and the style image (bottom). The other columns show the initialization images (top) and the corresponding style transfer results (bottom) of the method of Gatys et al. (2016). In column (f), we initialize with the default stylized results of the method of Li & Wand (2016) (SROOM). In column (g), we replace the content image with SROOM and then initialize with it (RC-SROOM).
A.1.2 LI & WAND (2016)
Fig. 8 shows more results of different initialization strategies on the method of Li & Wand (2016). The first column shows the content image (top) and the style image (bottom).
The other columns show the initialization images (top) and the corresponding style transfer results (bottom) of the method of Li & Wand (2016). In column (f), we initialize with the default stylized results of the method of Gatys et al. (2016) (SROOM). In column (g), we replace the content image with SROOM and then initialize with it (RC-SROOM).
A.2 MORE SERIAL SCHEMES FOR SST
A.2.1 ARTISTIC STYLE TRANSFER
Fig. 9 shows more comparisons of different serial schemes in the artistic style transfer domain. As mentioned in the paper, each column represents a serial scheme, and each row shows the intermediate output. We select (a) Sheng et al. (2018), (b) Li et al. (2019), (c) Li & Wand (2016), (d) Li et al. (2017b), (e) Huang & Belongie (2017) as the nodes of the different serial schemes.
A.2.2 SEMANTIC STYLE TRANSFER
Fig. 10 shows some comparisons of different serial schemes in the semantic style transfer domain. Each column represents a serial scheme, and each row shows the intermediate output. We select (a) Champandard (2016), (b) Gu et al. (2018), (c) Liao et al. (2017), (d) Li et al. (2017b), (e) Sheng et al. (2018), (f) Li et al. (2019) as the nodes of the different serial schemes.
A.2.3 PHOTO-REALISTIC STYLE TRANSFER
Fig. 11 shows some comparisons of different serial schemes in the photo-realistic style transfer domain. Each column represents a serial scheme, and each row shows the intermediate output. We select (a) Li et al. (2019), (b) Luan et al. (2017), (c) Li et al. (2018), (d) Liao et al. (2017), (e) Huang & Belongie (2017) as the nodes of the different serial schemes.
A.3 ABLATION STUDIES OF PARALLELNET
We study the effects of the different loss terms of our proposed ParallelNet, including the style losses $\mathcal{L}_1$ (in Eq. (4)) and $\mathcal{L}_2$ (in Eq. (5) and Eq. (7)) and the photorealism regularization loss $\mathcal{L}_3$ (in Eq. (8)). Generally, the weight $\alpha$ of $\mathcal{L}_c$ is fixed to 10, and the weight $\mu$ of $\mathcal{L}_{TV}$ is fixed to 1.
Fig. 12 shows the results of artistic style transfer obtained by varying the weight $\omega_1$ of loss $\mathcal{L}_1$ and the weight $\omega_2$ of loss $\mathcal{L}_2$ while fixing the other weights. As the last column shows, the original method of Gatys et al. (2016) transfers the global color and rough textures of the style image, but does not transfer the local intricate patterns. The original method of Li & Wand (2016) transfers many more local style patterns, but, as shown, it may generate insufficiently stylized results when a huge difference exists between the content and style images. By combining the characteristics of these two methods, our ParallelNet can transfer both the global color and the local textures of the style images. As the top row shows, increasing the weight $\omega_1$ of loss $\mathcal{L}_1$ transfers more global color and rough textures. And as the bottom row shows, increasing the weight $\omega_2$ of loss $\mathcal{L}_2$ retains more local style patterns. However, because of the trade-off between the content loss and the style loss, distortions of the content structure are inevitable if the values of $\omega_1$ and $\omega_2$ are too high. In our work, for artistic style transfer, $\omega_1$ and $\omega_2$ are set to 0.1 and 10 by default, respectively.
Fig. 13 shows the results of semantic style transfer obtained by varying the weights $\omega_2$ and $\lambda$ of loss $\mathcal{L}_2$. The top row shows the transfer of a painted style (a) onto a photo (b); this is easy and does not require the constraints of segmentation masks, so we fix the semantic awareness weight $\lambda$ at 0. On the other hand, according to the aforementioned practice, we set the values of $\omega_1$ and $\omega_3$ to 0.1 and 0 by default, respectively.
As the top row shows, increasing $\omega_2$ refines more detailed information of the corresponding semantics, but this may change the original structure of the content image. To avoid this, the value of $\omega_2$ should not be too high, so in our work we set it to 10 by default. Building on this, the bottom row shows the transfer of a photo style (b) onto a painting (a); this is harder, so we use the constraints of segmentation masks to get better results. As the bottom row shows, increasing $\lambda$ achieves more accurate semantic matching. To obtain the best results, we set $\lambda$ to 10 by default.
Fig. 14 shows the results of photo-realistic style transfer obtained by varying the weight $\omega_3$ of the photorealism regularization loss $\mathcal{L}_3$. Differing from Luan et al. (2017), who use a two-stage optimization process based on the outputs of Gatys et al. (2016), we directly solve this problem by optimizing Eq. (10). This is much simpler, and can produce comparable results due to the synergistic effects of the different loss terms. As we can see, too small a value of $\omega_3$ cannot prevent distortions, so the results in columns 2 and 3 have a non-photorealistic look. On the contrary, too large a value of $\omega_3$ suppresses the style to be transferred and leads to color infidelity (see columns 5 and 6). Similar to Luan et al. (2017), we choose $\omega_3 = 10^4$ by default.
A.4 QUALITATIVE COMPARISONS WITH OTHER METHODS
Here we demonstrate the results of our methods and of twelve other representative single- or multi-domain methods Sheng et al. (2018); Li et al. (2019); Li & Wand (2016); Champandard (2016); Gu et al. (2018); Liao et al. (2017); Luan et al. (2017); Li et al. (2018); Yao et al. (2019); Gatys et al. (2016); Huang & Belongie (2017); Li et al. (2017b) on artistic, semantic and photo-realistic style transfer, respectively. Each figure shows the representative results obtained by one method. The left, center and right columns show examples of artistic, semantic and photo-realistic style transfer, respectively. For the inputs in each group, the upper one is the content image and the lower one is the style image. Relevant quality and flexibility analyses can be found under the caption of each figure. Compared to these methods, our SST schemes perform better on stylistic quality in specific domains, and our PST scheme performs better on stylistic quality and flexibility in all domains.
As shown in Fig. 15, this scheme can perform well on artistic and semantic style transfer (without segmentation masks). But since its nodes do not support segmentation masks, it cannot handle the cases where huge differences exist between the content and style images (e.g., the case at the top of the center column). This scheme is not suitable for photo-realistic style transfer, as it may produce many content distortions and abnormal artifacts (see the right column).
As shown in Fig. 16, this scheme can perform well on artistic and semantic style transfer (with and without segmentation masks). However, in artistic style transfer, there are some deficiencies in content preservation because it prefers to express more style features (see the two cases at the bottom of the left column). This also limits its capability in photo-realistic style transfer (see the right column).
As shown in Fig. 17, this scheme is only suitable for photo-realistic style transfer. As we can see, it maintains the photorealism of the content photographs and at the same time transfers the global color of the style images. But this also makes it difficult to produce artworks with non-realistic styles.
Since its nodes can incorporate segmentation masks, this scheme is capable of using segmentation masks.
As shown in Fig. 18, this scheme is flexible enough to apply to all these style transfer domains. As we can see, in the left column, both the global color and the local patterns of the artistic images can be transferred by this scheme. In the center and right columns, it can also achieve semantic and photo-realistic style transfer with and without segmentation masks. Moreover, users can further improve the stylistic quality by fine-tuning our provided hyperparameters for each input.
As shown in Fig. 19, this method can be used in artistic and semantic style transfer (without segmentation masks), but the stylistic quality is not very good. As we can see, for artistic style transfer, it only transfers the global color of the style images, but lacks local patterns. For semantic style transfer, it may produce many hazy blocks, which affect the overall clarity. These problems (including content distortions and abnormal artifacts) also exist in photo-realistic style transfer.
As shown in Fig. 20, this method can be used in artistic and photo-realistic style transfer. But since it prefers to maintain the structures of the content images, the produced results are often insufficiently stylized. This helps it to perform better in photo-realistic style transfer. On the other hand, since the spirit of this method is based on global statistics, it cannot solve the tasks of semantic style transfer. This method is capable of using segmentation masks.
As shown in Fig. 21, this method can be used in semantic style transfer (without segmentation masks). As we can see, for artistic style transfer, it can transfer adequate local patterns of the style images, but the global effects of the stylized results are unsatisfying (mainly because of the insufficient global color; see the left column). On the other hand, the introduced content distortions also limit its capability for photo-realistic style transfer (see the right column).
As shown in Fig. 22, this method can be used in artistic and semantic style transfer (with and without segmentation masks). For artistic style transfer (see the left column), most results are similar to those of Li & Wand (2016); the difference is that this method can perform better on the global color or the local patterns, but it is still unsatisfying. For semantic style transfer, it may produce blurred results (e.g., the two cases at the top of the center column). For photo-realistic style transfer, it cannot adapt to this domain because of the introduced content distortions.
As shown in Fig. 23, this method can be used in artistic and semantic style transfer (without segmentation masks). The stylized results it produces can obtain sufficient global color and local patterns, but there are also many abnormal artifacts which directly affect the final effect. This occurs in almost every case in these three domains.
As shown in Fig. 24, this method can be used in semantic and photo-realistic style transfer (without segmentation masks). It is more suitable for style transfer between image pairs which have high semantic-level correspondences. Therefore, for the cases of artistic style transfer which are not relevant in semantics, this method yields poor stylized results (see the left column).
On the other hand, since this method does not support segmentation masks, mismatching is prone to occur in challenging tasks (e.g., the case at the top of the center column and the cases in the right column).
As shown in Fig. 25, similar to our SST(c) scheme (see Fig. 17), this method is only suitable for photo-realistic style transfer. Compared to our SST(c) scheme, it may produce some abnormal artifacts, e.g., the hull and the eyes in the two cases at the bottom of the right column, respectively. This method is capable of using segmentation masks.
As shown in Fig. 26, similar to our SST(c) scheme (see Fig. 17) and the method of Luan et al. (2017) (see Fig. 25), this method is only suitable for photo-realistic style transfer. See the right column: compared to our SST(c) scheme, it may produce too many undesired effects, e.g., the skies in cases 1 and 3, the shadows in case 2 and the eyes in case 4 (numbers are arranged from top to bottom). This method is capable of using segmentation masks.
As shown in Fig. 27, this method can be used in artistic and semantic style transfer (without segmentation masks). Because it incorporates a self-attention mechanism, this method can highlight more salient areas (e.g., characters' eyes) of the images. But in other places, the performance is similar to that of the method of Sheng et al. (2018) (see Fig. 19).
As shown in Fig. 28, this method is only suitable for artistic style transfer. As we can see, it can transfer the global color and rough textures of the artistic style images to the content images (see the left column). But since the spirit of its algorithm is based on global statistics, this method cannot solve semantic style transfer (see the center column). Of course, without the improvement introduced by Luan et al. (2017), it cannot solve the tasks of photo-realistic style transfer (see the right column).
As shown in Fig. 29, similar to the method of Gatys et al. (2016) (see Fig. 28), this method is only suitable for artistic style transfer. Compared to the method of Gatys et al. (2016), some results generated by it are not sufficiently stylized (e.g., the two cases at the bottom of the left column).
As shown in Fig. 30, similar to the methods of Gatys et al. (2016) (see Fig. 28) and Huang & Belongie (2017) (see Fig. 29), this method is only suitable for artistic style transfer. Compared to the methods of Gatys et al. (2016) and Huang & Belongie (2017), some results generated by it are excessively stylized, thus introducing many undesired effects (e.g., cases 2 and 3 in the left column).
A.5 QUANTITATIVE COMPARISONS WITH OTHER METHODS
A.5.1 USER STUDY ON ARTISTIC STYLE TRANSFER
We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Sheng et al. (2018); Li et al. (2019); Li & Wand (2016); Yao et al. (2019); Gatys et al. (2016); Li et al. (2017b) on artistic style transfer. We use 10 content images and 10 style images to synthesize 100 images in total for each method, and randomly present 50 content and style combinations to each subject. We show the stylized images of the 10 compared methods (including ours) side by side in a random order and ask the subjects to select the most visually pleasant one. We collect 1500 votes from 30 users and show the percentage of votes for each method in Fig. 31. Overall, our proposed PST and SST(a) are favored among all evaluated methods.
A.5.2 USER STUDY ON SEMANTIC STYLE TRANSFER
We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Sheng et al. (2018); Li & Wand (2016); Champandard (2016); Gu et al. (2018); Liao et al. (2017); Yao et al. (2019) on semantic style transfer. For each method, we use 30 image groups to synthesize 30 images in total (all images are produced without using segmentation masks). We show the stylized images of the 10 compared methods (including ours) side by side in a random order and ask the subjects to select the most visually pleasant one. We collect 900 votes from 30 users and show the percentage of votes for each method in Fig. 32 (a). Overall, our proposed PST, SST(b) and SST(a) are favored among all evaluated methods.
A.5.3 USER STUDY ON PHOTO-REALISTIC STYLE TRANSFER
We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Li et al. (2019); Liao et al. (2017); Luan et al. (2017); Li et al. (2018) on photo-realistic style transfer. For each method, we use 30 image groups to synthesize 30 images in total. We show the stylized images of the 8 compared methods (including ours) side by side in a random order and ask the subjects to select the most visually pleasant one. We collect 900 votes from 30 users and show the percentage of votes for each method in Fig. 32 (b). Overall, our proposed SST(c) and PST are favored among all evaluated methods.
1. What is the focus of the paper regarding style transfer methods?
2. What are the strengths and weaknesses of the proposed frameworks, particularly in terms of novelty and methodology?
3. How does the reviewer assess the literature review and comparisons with other works in the field?
4. What are the concerns regarding the experimental setup and parameter determination?
5. Are there any suggestions for improving the paper or its contributions?
Review
Review
The authors propose to mix together multiple styles via two frameworks: 1) serial style transfer (SST), which combines style transfer methods in series; and 2) parallel style transfer (PST), which combines style transfer methods in parallel.
The paper is clearly presented. It is interesting to see work on mixing up different styles, since this has not been extensively studied so far. Though not much studied, this topic is not new [ref 1], [ref 2]. The authors didn't provide a thorough literature review on mixing multiple styles in the related work or anywhere else in the submission.
In terms of methodology, the novelty is quite limited. The proposed SST and PST are simple frameworks for mixing different styles, which, by the way, are fully based on existing style transfer methods. To some extent, PST is similar to [ref 2], and the difference is minor: PST linearly combines the losses for different styles to construct the final loss, while [ref 2] linearly combines the features for different styles.
Regarding the experiments, it is not clear how the predefined parameters (e.g., \alpha, w_1, w_2, etc.) are determined. They were just empirically set and mentioned in the experiment section.
It is appreciated to see more results in the appendix, as well as the user study. However, due to the lack of novelty, I think this submission may not be qualified for acceptance at this moment.
Minor: I think the authors should give their proposed framework another name instead of "cascade," which has a similar meaning to "series."
[ref 1] Google's arty filters one-up Prisma by mixing various styles. https://www.engadget.com/2016/10/27/google-style-transfer-tech/
[ref 2] Pegios, et al. Style Decomposition for Improved Neural Style Transfer. ArXiv, 2018.
Obviously, redesigning a new algorithm is difficult, why not use some simpler ways, such as combining existing methods directly through some general architectures? Based on the above analyses, we propose Cascade Style Transfer (CST) mainly for two targets, i.e., higher quality and higher flexibility, and design two architectures, i.e., the Serial Style Transfer (SST) and the Parallel Style Transfer (PST) for these targets. In this work, we first revisit and demonstrate the impact of different initialization strategies on style transfer, and inspired by this, design our SST for higher quality domain-specific style transfer. Moreover, we develop upon this and further propose our PST for more flexible style transfer, this could guide us to create domainindependent approaches. As far as we know, this is the first paper to propose domain-independent style transfer (note that this kind of approach is flexible for arbitrary images in arbitrary domains, while existing so-called arbitrary style transfer methods are only flexible for arbitrary images in specific domains), and also the first attempt to combine multiple existing approaches directly to improve the quality and flexibility of style transfer. The main contributions of our work are: •We revisit the initialization of style transfer, and demonstrate that initialization can play an important role in improving the quality of style transfer. • We propose a serial architecture for cascade style transfer, it is simple yet effective, which could help improve the quality of domain-specific style transfer. • We first propose domain-independent style transfer, and design a parallel architecture to help improve the quality and flexibility of style transfer. 2 RELATED WORK Our cascade style transfer can be related to the most style transfer methods. In this paper, we mainly focus on the NST methods in three major domains: artistic, semantic and photo-realistic. Artistic style transfer. This domain is dedicated to transferring the global artistic styles (e.g., abstract, painterly or sketch). The most representative work is Gatys et al. (2015). This method could produce amazing results but suffers from a slow iterative optimization procedure. To address it, Johnson et al. (2016) and Ulyanov et al. (2016; 2017) trained feed-forward generative networks for fast artistic style transfer. But one limit is that each model is trained to transfer exactly one fixed style. Some methods Dumoulin et al. (2017); Zhang & Dana (2018); Li et al. (2017a); Chen et al. (2017) further incorporated multiple styles into one single model, but they are still limited to a fixed number of pre-trained styles. Recently, several methods Li et al. (2017b); Huang & Belongie (2017); Li et al. (2019) were proposed to allow artistic style transfer for arbitrary images. Semantic style transfer. Transferring the styles between the corresponding semantic regions of the style and content images is referred to as semantic style transfer. The most representative work is Li & Wand (2016). They combined Markov Random Fields (MRFs) and CNNs to match the most similar local neural patches of the style and content images. Later, Champandard (2016) incorporated the segmentation masks for stricter semantic constraints. Recently, Liao et al. (2017) proposed Deep Image Analogy for accurate semantic-level patch match. Gu et al. (2018) used Deep Feature Reshuffle to consider both global and local information. Mechrez et al. 
(2018) proposed an alternative contextual loss for segmentation-free semantic style transfer. Moreover, some feedforward methods Chen & Schmidt (2016); Lu et al. (2017); Sheng et al. (2018); Park & Lee (2019); Yao et al. (2019) were also proposed for fast and real-time semantic style transfer. Photo-realistic style transfer. Photo-realistic style transfer seeks to transfer the style of a reference style photo onto other pictures. The greatest characteristic is that both the global structures and detailed contours in the content images should be preserved during the process. Traditional methods based on Global Reinhard et al. (2001); Pitie et al. (2005) and Local Laffont et al. (2014); Shih et al. (2014) are slow in practice and limited in specific scenarios (e.g., outdoor scenes or headshot portraits). Recently, Luan et al. (2017) incorporated a new loss term to the optimization objective of Gatys et al. (2015) to improve the photorealism of stylization outputs. Li et al. (2018) introduced a closed-form solution consisting of a stylization and a smoothing step for faster speed. More recently, Li et al. (2019) have also shown an effective approach for fast photo-realistic style transfer. Despite the fact that current NST methods have shown good performance in specific domains, there are few studies on how to combine them directly to improve the quality and flexibility. In our work, we select Gatys et al. (2016); Li et al. (2017b); Huang & Belongie (2017); Li et al. (2019; 2018); Luan et al. (2017); Sheng et al. (2018); Liao et al. (2017); Gu et al. (2018); Li & Wand (2016); Champandard (2016) for our studies mainly because of their representativeness in specific domains. 3 REVISIT INITIALIZATION IN STYLE TRANSFER Initialization is the first yet important step in almost all style transfer algorithms. Although some papers Gatys et al. (2016); Nikulin & Novak (2016) have discussed the impact of the initialization on style transfer, they only use white noise, the content image or the style image, and the evaluation criteria are only based on the qualitative aspect. Here, back to the most original NST algorithms, we revisit and compare more initialization strategies from both qualitative and quantitative aspects. Our experiments are based on two most original NST methods Gatys et al. (2016); Li & Wand (2016). These methods always initialize with white noise or the content image and iteratively optimize pixels to match the content features of the content image and style features of the style image. Besides white noise, the content and style images, we try and compare four other initialization strategies: salt and pepper noise, poisson noise, the stylized results of other methods (SROOM), and moreover, replacing the content images with SROOM and then initializing with them (RC-SROOM). Qualitative Comparisons: Fig. 2 demonstrates the qualitative comparisons. As we can see, initializing with the content image produces results with insufficient color (column a). Results generated from the style image introduce some undesired color (column b). Using salt and pepper noise produces over-bright results, and using poisson noise produces darker results (column d and e). By contrast, white noise initialization yields satisfying results (column c), but the results of SROOM performs better on overall effect (column f). Remarkably, using RC-SROOM (column g) can dramatically improve the effect and absorb the merits (as shown and discussed in Fig. 1 (b)) of both methods. More results can be found in appendix. 
Quantitative Comparisons: Fig. 3 shows the quantitative comparisons. We evaluate these initialization strategies on method Gatys et al. (2016), the optimization is conducted by Adam method, and stopped at 1000 iterations. We can see that initializing with images (i.e., content, style, SROOM and RC-SROOM) makes loss fall faster, while initializing with noise (i.e., white noise, salt and pepper noise, and poisson noise) decreases loss more steadily. It is worth noting that using RC-SROOM achieves much lower total, content and style loss than all other strategies. Conclusion: In this section, we have demonstrated that initialization plays an important role in improving the quality of style transfer. Compared with other initialization strategies, RC-SROOM has outstanding performance. More importantly, this could help produce higher quality style transfer results that absorb the merits of multiple methods. Inspired by this, we propose cascade style transfer, which will be presented in latter sections, and in turn verify this conclusion. 4 CASCADE STYLE TRANSFER In this paper, we define cascade style transfer as combinations of different NST methods. It contains two architectures: the serial style transfer and the parallel style transfer. 4.1 SERIAL STYLE TRANSFER (SST) As shown in Fig. 4 (a), serial style transfer serially connects multiple style transfer methods. Let ~c, ~s and ~xi be the content image, the style image and the stylized result of method i. The style transfer process of method i is denoted as fi. Specifically, for the first method, we use f1(~c,~s, d) to denote transferring the style of ~s to ~c by method 1 with the default initialization settings. For others, we use fi(~xi−1, ~s) to denote initializing with ~xi−1 and then transferring the style of ~s to ~xi−1 by method i. Our serial style transfer can be formulated as ~xi = { f1(~c,~s, d) if i = 1 fi(~xi−1, ~s) otherwise. (1) 4.2 PARALLEL STYLE TRANSFER (PST) As far as we know, current NST methods are mainly conducted in two different ways. One is based on VGG Simonyan & Zisserman (2014), iteratively optimizing the pixels of input images. The other is training a feed-forward network to directly generate the stylized results. Here, to demonstrate our PST more intuitively, we design a simple parallel architecture based on the former way. As shown in Fig. 4 (b), our PST contains two important parts. One is the shared backbone (e.g., VGG-19), it is mainly used for feature extraction and error back-propagation. The other is the loss module, it is used to combine loss functions of different methods. Let Li denote the loss function of method i, “⊕” denote a linear combination operation between different loss functions. We give hyperparameter ωi to weight every loss function Li. The total loss is defined as follows: Ltotal(i) = { ω1L1 if i = 1 Ltotal(i− 1)⊕ ωiLi otherwise. (2) We compute its gradients with respect to the pixel values and use them to iteratively update the input images. In this way, all methods can be optimized in parallel with the total loss. To verify the effectiveness of our PST, we design a specific parallel scheme ParallelNet based on four popular domain-specific style transfer methods Champandard (2016); Li & Wand (2016); Gatys et al. (2016); Luan et al. (2017). The principles for selecting the appropriate approaches will be discussed in later sections. The detailed combination procedure is as follows: • For method Gatys et al. 
(2016), we capture the vectorized features F`[~x] and F`[~s] of layers ` ∈ {convk 1}5k=1 for the generated image ~x and the style image ~s, and compute the Gram matrices: G`[·] = F`[·]F`[·]T , (3) here the Gram matrix is defined as the inner product between the vectorized features. In each layer, there areN` filters each with a vectorized feature of sizeM`. We define our style loss L1 as follows: L1 = 1 (2N`M`)2 ||G`[~x]−G`[~s]||2. (4) • For method Champandard (2016) and Li & Wand (2016), we reuse the features F~[~x] and F~[~s] of layers ~ ∈ {convk 1}4k=3, and then concatenate them with the segmentation masks ~cm and ~sm of the content image ~c and the style image ~s at the same resolutions, respectively: F~,m[~x] = F~[~x] λ · ð[~cm, ~], F~,m[~s] = F~[~s] λ · ð[~sm, ~], (5) where ð[·, ~] denotes resizing the masks to the same resolution as the output of layer ~. “ ” denotes the channel concatenation, the hyperparameter λ is given to weight the semantic awareness. Then we extract a set of 3×3 neural patches for both F~,m[~x] and F~,m[~s], denoted by {Φi(~x)}i∈nx and {Φj(~s)}j∈ns , where nx and ns are the number of extracted patches. For each patch Φi(~x), we determine a closest-matching style patch ΦCM(i)(~s) based on the following measure: CM(i) := arg max j=1,...,ns Φi(~x) · Φj(~s) |Φi(~x)| · |Φj(~s)| . (6) Finally, our style loss L2 is defined as follows: L2 = nx∑ i=1 ||Φi(~x)− ΦCM(i)(~s)||2. (7) • For method Luan et al. (2017), we directly use their photorealism regularization term L3: L3 = 3∑ c=1 Vc[~x] TMIVc[~x], (8) whereMI is the Matting Laplacian Matrix Levin et al. (2008), which is used to express a locally affine combination of the input RGB channels and only depends on the input image I . Vc[~x] denotes the vectorized version of the output image ~x in channel c. • At last, these methods use the same content loss Lc to preserve the structure of the content image. Lc = ||F [~x]−F [~c]||2, (9) where F [~x] and F [~c] denote the features extracted from layer conv4 2. Total loss: In our loss module, the total loss is the simple linear combination of the above loss: Ltotal = αLc + ω1L1 + ω2L2 + ω3L3 + µLTV , (10) where LTV refers to the total variation regularization loss Aly & Dubois (2005). 5 EXPERIMENTS For SST, we select some of the most representative methods in specific domains, and run the published implementation with default stylization settings for each method. For PST (we specifically refer to our ParallelNet), the hyperparameters α, ω1, ω2 and µ are fixed at 10, 0.1, 10 and 1, respectively. λ is set to 10 for cases with segmentation masks and 0 for others. ω3 is set to 104 for photo-realistic style transfer, and 0 for others. The optimization is conducted by LBFGS Zhu et al. (1997), and stopped at 500 iterations. The initialization is the content image. 5.1 ABLATION STUDIES How to select the appropriate methods? To find some empirical principles, we conduct a user study (Table 1) on some representative NST methods in terms of content fidelity, stylization global color and local texture patterns, since these three aspects are our main considerations in selecting the methods. To alleviate the burden of subjects, we show them 150 synthesized images of each method, and ask them to select one overall level for each aspect. We collect totally 990 votes from 30 subjects, each method has 30 votes in each aspect. The levels with the most votes are chosen. For SST, which methods could be combined and which order will return the optimal solutions? 
Take artistic style transfer as the example, Fig. 5 shows the comparisons of different serial schemes. The first column shows a reasonable serial scheme, since the method (b) help enrich the global color, and method (c) improves the local styles. The second column shows an unreasonable one, as the method (d) introduces some content distortions, and this directly affects the subsequent outputs. The third and fourth column show the effects of node order. We can find in the third column that method (b) and (a) progressively enrich the global color of (c), but do not change the local patterns. See the fourth column, unfortunately, changing the order of (d) cannot avoid the content distortions, but since the artistic style transfer does not require high content fidelity, the result is still acceptable. The last column shows some invalid nodes, method (b) and (e) have subtle effect on the result of (d). Now, look at these in conjunction with Table 1, we can conclude the following: • For the aspects of global color and local patterns, the same level can promote each other, and the higher level can improve the performance of the lower level, while the lower level has little effect on the higher level. The results are determined by the order of methods with different levels. • For the aspect of content fidelity, the higher level is only conducive to content preservation, while the distortions produced by the lower level are irreversible and sequence-independent. • The selection is specific for particular domains, e.g., for artistic or semantic style transfer, the global color and local patterns are prior to content fidelity, while for photo-realistic style transfer, content fidelity is the primary concern. These conclusions are also applicable to semantic and photo-realistic style transfer. More results can be found in appendix. We believe that these would help users design useful serial schemes to obtain desired results, and inspire future works in neural style transfer. For PST, how many and which methods could be combined? As we introduced in Section 1, our PST is proposed to improve the quality and flexibility for different domains. Generally, any approach that satisfies the following two conditions can be incorporated into our PST: • Sharing the same backbone (e.g., VGG-19) and similar process flow (e.g., backward optimization). • Having the capacity to solve the problem of different style transfer domain. How to get the optimal hyperparameters for PST? Our PST introduces some hyperparameters, we can adapt the model to different domains by adjusting them, but the fine-tuning may be intractable. Fortunately, we find that for most methods, the optimal hyperparameters before and after combinations are almost the same, e.g., the optimal values of λ and ω3 of our ParallelNet are the same as those of the original methods. And for others, we just need to make a few adjustments according to the original methods. The optimal hyperparameters are generalized for most cases, but for some special ones, fine-tuning may produce better results. Ablation studies about different loss terms and their hyperparameters of our ParallelNet can be found in appendix. 5.2 EVALUATION AND COMPARISONS Qualitative Comparisons: Fig. 6 shows some qualitative comparisons on artistic, semantic and photo-realistic style transfer. In the top row, we select Sheng et al. (2018), Li et al. (2019) and Li & Wand (2016) as three sequential nodes of our SST(a). 
Compared with them that only transfer the global color or local textures, our SST(a) and PST can consider both aspects simultaneously. In the middle row, we select Champandard (2016), Gu et al. (2018) and Liao et al. (2017) as three sequential nodes of our SST(b). We observe that these methods often produce either insufficiently stylized results or abnormal artifacts. By contrast, our SST(b) and PST produce much more satisfying results. In the bottom row, since the content fidelity level of Li et al. (2018) in Table 1 is only M (we can also observe some undesired shadows in column 3) and this is detrimental to our SST in such high fidelity task, we only select Li et al. (2019) and Luan et al. (2017) as sequential nodes of our SST(c). As we can see, the result of method Li et al. (2019) may lose some striking styles, e.g., the light at the bottom of the ship. The method Luan et al. (2017) may introduce some unacceptable artifacts, e.g., the abnormal hull. These problems do not occur in the results of our SST(c) and PST. Quantitative Comparisons: We conduct several user studies on different style transfer domains. The results can be found in appendix. In all domains, our SSTs and PST show their superiority. And we also compare the flexibility of different methods in different style transfer domains in Table 2. Since our SSTs target the higher quality for specific domains, they do not contribute to more flexibility. By contrast, our PST is the only one that can flexibly adapt to all the mentioned domains. Speed and Memory Analysis: Obviously, the time and memory requirements of our SSTs are simply the sums of those of its nodes. For PST, since the combined methods all share the same backbone, and the intermediate features can be stored and reused, the increments of time and memory are slight. The speed of our ParallelNet is comparable to that of the method Gatys et al. (2016). 6 CONCLUSION AND FUTURE DIRECTIONS In this paper, we propose cascade style transfer to combine multiple approaches in serial or in parallel without modifying any algorithm. Experiments have verified that our methods can improve the stylistic quality and flexibility over previous state-of-the-art methods in artistic, semantic and photo-realistic style transfer domains. In the future, more effective and efficient schemes could be designed, and the combination of SST and PST is also an interesting direction worthy of further studies. Moreover, our methods can be regarded as data-flow graphs of neural image editing operators from a high-level perspective, it is another neat idea for interactive computer graphics system and will probably have considerable impact on future work and architectures in this domain. A APPENDIX A.1 MORE RESULTS ABOUT INITIALIZATION A.1.1 GATYS ET AL. (2016) Fig. 7 shows more results of different initialization strategies on method Gatys et al. (2016). As mentioned in the paper, the first column shows the content image (top) and the style image (bottom). The other columns show the initialization images (top) and the corresponding style transfer results (bottom) of method Gatys et al. (2016). In column (f), we initialize with the default stylized results of method Li & Wand (2016) (SROOM). In column (g), we replace the content image with SROOM and then initialize with it (RC-SROOM). A.1.2 LI & WAND (2016) Fig. 8 shows more results of different initialization strategies on method Li & Wand (2016). The first column shows the content image (top) and the style image (bottom). 
The other columns show the initialization images (top) and the corresponding style transfer results (bottom) of method Li & Wand (2016). In column (f), we initialize with the default stylized results of method Gatys et al. (2016) (SROOM). In column (g), we replace the content image with SROOM and then initialize with it (RC-SROOM). A.2 MORE SERIAL SCHEMES FOR SST A.2.1 ARTISTIC STYLE TRANSFER Fig. 9 shows more comparisons of different serial schemes in artistic style transfer domain. As mentioned in the paper, each column represents a serial scheme, and each row shows the intermediate output. We select (a) Sheng et al. (2018), (b) Li et al. (2019), (c) Li & Wand (2016), (d) Li et al. (2017b), (e) Huang & Belongie (2017) for the nodes of different serial schemes. A.2.2 SEMANTIC STYLE TRANSFER Fig. 10 shows some comparisons of different serial schemes in semantic style transfer domain. Each column represents a serial scheme, and each row shows the intermediate output. We select (a) Champandard (2016), (b) Gu et al. (2018), (c) Liao et al. (2017), (d) Li et al. (2017b), (e) Sheng et al. (2018), (f) Li et al. (2019) for the nodes of different serial schemes. A.2.3 PHOTO-REALISTIC STYLE TRANSFER Fig. 11 shows some comparisons of different serial schemes in photo-realistic style transfer domain. Each column represents a serial scheme, and each row shows the intermediate output. We select (a) Li et al. (2019), (b) Luan et al. (2017), (c) Li et al. (2018), (d) Liao et al. (2017), (e) Huang & Belongie (2017) for the nodes of different serial schemes. A.3 ABLATION STUDIES OF PARALLELNET We study the effects of different loss terms of our proposed ParallelNet, including style loss L1 (in Eq.(4)), L2 (in Eq.(5) and Eq.(7)) and photorealism regularization loss L3 (in Eq.(8)). Generally, the weight α of Lc is fixed to 10, and the weight µ of LTV is fixed to 1. Fig. 12 shows the results of artistic style transfer by varying weight ω1 of loss L1 and ω2 of loss L2 while fixing the other weights. As the last column shows, the original method of Gatys et al. (2016) transfers the global color and rough textures of the style image, but does not transfer the local intricate patterns. The original method of Li & Wand (2016) transfers much more local style patterns, but as it shows, this method may generate insufficiently stylized result when huge difference exists between the content and style image. By combining the characteristics of these two methods, our ParallelNet can transfer both global color and local textures of the style images. As the top row shows, increasing the weight ω1 of loss L1 transfers more global color and rough textures. And as the bottom row shows, increasing the weight ω2 of loss L2 retains more local style patterns. However, since the trade-off between the content loss and the style loss, distortions of the content structure are inevitable if the values of ω1 and ω2 are too high. In our work, for artistic style transfer, ω1 and ω2 are set to 0.1 and 10 by default, respectively. Fig. 13 shows the results of semantic style transfer by varying weight ω2 and λ of loss L2. The top row shows transfer of painted style (a) onto a photo (b), this is easy and does not require the constraints of segmentation masks, so we fix the semantic awareness weight λ at 0. On the other hand, according to the aforementioned practice, we set the value of ω1 and ω3 to 0.1 and 0 by default, respectively. 
As the top row shows, increasing ω2 refines more detailed information of the corresponding semantics, but this may change the original structure of the content image. To avoid this, the value of ω2 should not be too high, so in our work we set it to 10 by default. Based on these settings, the bottom row shows the transfer of a photo style (b) onto a painting (a); this is hard, so we use the constraints of segmentation masks to get better results. As the bottom row shows, increasing λ achieves more accurate semantic matching. To obtain the best results, we set λ to 10 by default. Fig. 14 shows the results of photo-realistic style transfer obtained by varying the weight ω3 of the photorealism regularization loss L3. Differing from Luan et al. (2017), which uses a two-stage optimization process based on the outputs of Gatys et al. (2016), we directly solve this problem by optimizing Eq. (10). This is much simpler, and could produce comparable results due to the synergistic effects of the different loss terms. As we can see, too small a value of ω3 cannot prevent distortions, so the results in columns 2 and 3 have a non-photorealistic look. On the contrary, too large a value of ω3 suppresses the style to be transferred and leads to color infidelity (see columns 5 and 6). Similar to Luan et al. (2017), we choose ω3 = 10^4 by default. A.4 QUALITATIVE COMPARISONS WITH OTHER METHODS Here we demonstrate the results of our methods and twelve other representative single- or multi-domain methods Sheng et al. (2018); Li et al. (2019); Li & Wand (2016); Champandard (2016); Gu et al. (2018); Liao et al. (2017); Luan et al. (2017); Li et al. (2018); Yao et al. (2019); Gatys et al. (2016); Huang & Belongie (2017); Li et al. (2017b) on artistic, semantic and photo-realistic style transfer, respectively. Each figure shows the representative results obtained by one method. The left, center and right columns show examples of artistic, semantic and photo-realistic style transfer, respectively. For the inputs in each group, the upper one is the content image and the lower one is the style image. Relevant quality and flexibility analyses can be found under the caption of each figure. Compared to these methods, our SST schemes perform better on stylistic quality in specific domains, and our PST scheme performs better on stylistic quality and flexibility in all domains. As shown in Fig. 15, this scheme can perform well on artistic and semantic style transfer (without segmentation masks). But since its nodes do not support segmentation masks, it cannot handle cases where huge differences exist between the content and style images (e.g., the case at the top of the center column). This scheme is not suitable for photo-realistic style transfer, as it may produce a lot of content distortions and abnormal artifacts (see the right column). As shown in Fig. 16, this scheme can perform well on artistic and semantic style transfer (with and without segmentation masks). However, in artistic style transfer, there are some deficiencies in content preservation because it prefers to express more style features (see the two cases at the bottom of the left column). This also limits its capability in photo-realistic style transfer (see the right column). As shown in Fig. 17, this scheme is only suitable for photo-realistic style transfer. As we can see, it maintains the photorealism of the content photographs and at the same time transfers the global color of the style images. But this also makes it difficult to produce artworks with non-realistic styles.
Since its nodes can incorporate segmentation masks, this scheme is capable of using segmentation masks. As shown in Fig. 18, this scheme is flexible enough to apply to all these style transfer domains. As we can see, in the left column, both the global color and the local patterns of the artistic images can be transferred by this scheme. In the center and right columns, it can also achieve semantic and photo-realistic style transfer with and without segmentation masks. Moreover, users can further improve the stylistic quality by fine-tuning our provided hyperparameters for each input. As shown in Fig. 19, this method can be used in artistic and semantic style transfer (without segmentation masks), but the stylistic quality is not so good. As we can see, for artistic style transfer, it only transfers the global color of the style images, but lacks local patterns. For semantic style transfer, it may produce a lot of hazy blocks, which affect the overall clarity. These problems (including content distortions and abnormal artifacts) also exist in photo-realistic style transfer. As shown in Fig. 20, this method can be used in artistic and photo-realistic style transfer. But since it prefers to maintain the structures of the content images, the produced results are often insufficiently stylized. This, however, helps it perform better in photo-realistic style transfer. On the other hand, since the spirit of this method is based on global statistics, it cannot solve the tasks of semantic style transfer. This method is capable of using segmentation masks. As shown in Fig. 21, this method can be used in semantic style transfer (without segmentation masks). As we can see, for artistic style transfer, it could transfer adequate local patterns of the style images, but the global effects of the stylized results are unsatisfying (mainly because of the insufficient global color; see the left column). On the other hand, the introduced content distortions also limit its capability for photo-realistic style transfer (see the right column). As shown in Fig. 22, this method can be used in artistic and semantic style transfer (with and without segmentation masks). For artistic style transfer (see the left column), most results are similar to those of Li & Wand (2016); the difference is that this method can perform better on the global color or the local patterns, but it is still unsatisfying. For semantic style transfer, it may produce blurred results (e.g., the two cases at the top of the center column). For photo-realistic style transfer, it cannot adapt to this domain because of the introduced content distortions. As shown in Fig. 23, this method can be used in artistic and semantic style transfer (without segmentation masks). The stylized results it produces obtain sufficient global color and local patterns, but there are also a lot of abnormal artifacts which directly affect the final effect. This occurs in almost every case in these three domains. As shown in Fig. 24, this method can be used in semantic and photo-realistic style transfer (without segmentation masks). It is more suitable for style transfer between image pairs which have high semantic-level correspondences. Therefore, for the artistic style transfer cases which are not semantically related, this method yields poor stylized results (see the left column).
On the other hand, since this method does not support segmentation masks, mismatching is prone to occur in challenging tasks (e.g., the case at the top of the center column and the cases in the right column). As shown in Fig. 25, similar to our SST(c) scheme (see Fig. 17), this method is only suitable for photo-realistic style transfer. Compared to our SST(c) scheme, it may produce some abnormal artifacts, e.g., the hull and the eyes in the two cases at the bottom of the right column, respectively. This method is capable of using segmentation masks. As shown in Fig. 26, similar to our SST(c) scheme (see Fig. 17) and the method of Luan et al. (2017) (see Fig. 25), this method is only suitable for photo-realistic style transfer. As the right column shows, compared to our SST(c) scheme, it may produce too many undesired effects, e.g., the skies in cases 1 and 3, the shadows in case 2, and the eyes in case 4 (cases are numbered from top to bottom). This method is capable of using segmentation masks. As shown in Fig. 27, this method can be used in artistic and semantic style transfer (without segmentation masks). Because it incorporates a self-attention mechanism, this method can highlight the more salient areas (e.g., characters' eyes) of the images. In other regions, however, its performance is similar to the method of Sheng et al. (2018) (see Fig. 19). As shown in Fig. 28, this method is only suitable for artistic style transfer. As we can see, it can transfer the global color and rough textures of the artistic style images to the content images (see the left column). But since the spirit of its algorithm is based on global statistics, this method cannot solve semantic style transfer (see the center column). Of course, without the improvement introduced by Luan et al. (2017), it cannot solve the tasks of photo-realistic style transfer either (see the right column). As shown in Fig. 29, similar to the method of Gatys et al. (2016) (see Fig. 28), this method is only suitable for artistic style transfer. Compared to the method of Gatys et al. (2016), some results generated by it are not sufficiently stylized (e.g., the two cases at the bottom of the left column). As shown in Fig. 30, similar to the methods of Gatys et al. (2016) (see Fig. 28) and Huang & Belongie (2017) (see Fig. 29), this method is only suitable for artistic style transfer. Compared to the methods of Gatys et al. (2016) and Huang & Belongie (2017), some results generated by it are excessively stylized, thus introducing a lot of undesired effects (e.g., cases 2 and 3 in the left column). A.5 QUANTITATIVE COMPARISONS WITH OTHER METHODS A.5.1 USER STUDY ON ARTISTIC STYLE TRANSFER We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Sheng et al. (2018); Li et al. (2019); Li & Wand (2016); Yao et al. (2019); Gatys et al. (2016); Li et al. (2017b) on artistic style transfer. We use 10 content images and 10 style images to synthesize 100 images in total for each method, and randomly select 50 content-and-style combinations for each subject. We show the stylized images of the 10 compared methods (including ours) side-by-side in a random order and ask the subjects to select the most visually pleasant one. We collect 1500 votes from 30 users and show the percentage of votes for each method in Fig. 31. Overall, our proposed PST and SST(a) are favored among all evaluated methods.
A.5.2 USER STUDY ON SEMANTIC STYLE TRANSFER We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Sheng et al. (2018); Li & Wand (2016); Champandard (2016); Gu et al. (2018); Liao et al. (2017); Yao et al. (2019) on semantic style transfer. For each method, we use 30 image groups to synthesize 30 images in total (all images are produced without using segmentation masks). We show the stylized images of the 10 compared methods (including ours) side-by-side in a random order and ask the subjects to select the most visually pleasant one. We collect 900 votes from 30 users and show the percentage of votes for each method in Fig. 32 (a). Overall, our proposed PST, SST(b) and SST(a) are favored among all evaluated methods. A.5.3 USER STUDY ON PHOTO-REALISTIC STYLE TRANSFER We conduct a user study to evaluate the proposed schemes against the state-of-the-art style transfer methods Li et al. (2019); Liao et al. (2017); Luan et al. (2017); Li et al. (2018) on photo-realistic style transfer. For each method, we use 30 image groups to synthesize 30 images in total. We show the stylized images of the 8 compared methods (including ours) side-by-side in a random order and ask the subjects to select the most visually pleasant one. We collect 900 votes from 30 users and show the percentage of votes for each method in Fig. 32 (b). Overall, our proposed SST(c) and PST are favored among all evaluated methods.
1. What is the focus of the paper regarding style transfer methods? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its simplicity and flexibility? 3. Do you have any concerns about the novelty and engineering effort of the proposed method? 4. How does the reviewer assess the significance of the paper's contributions to the field of style transfer? 5. What are some unsolved issues in style transfer that the reviewer believes the authors should focus on?
Review
Review This paper proposes two ways to aggregate existing style transfer methods and shows improvements in quality and flexibility. However, the proposed method does not solve the limitations of any previous method. Instead, it is as simple as an easy combination: the proposed SST is a sequential combination, and the proposed PST is no different from running each single method separately. To me this is more like an engineering effort rather than a research work. (1) For SST, it just connects N existing methods, using the output of method 1 as the input of method 2. The quality of results might be improved but there is little novelty. I agree combining methods can be a contribution only when there are principled designs and in-depth analysis. (2) For PST, I do not see how it differs from running each single method separately. Putting all previous methods together cannot be called being more flexible. As said in the paper, for photorealistic transfer, the proposed PST sets the loss weights of all methods except Luan et al. to 0. Is it then the same as running the single method of Luan et al.? In general, I do not encourage such a way of exploring research. Authors should focus more on the unsolved issues in style transfer, e.g., how to do geometric style transfer (shape).
ICLR
Title Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors Abstract As data becomes increasingly vital, a company would be very cautious about releasing data, because competitors could use it to train high-performance models, thereby posing a tremendous threat to the company's commercial competence. To prevent good models from being trained on the data, we can add imperceptible perturbations to it. Since such perturbations aim at hurting the entire training process, they should reflect the vulnerability of DNN training, rather than that of a single model. Based on this new idea, we seek perturbed examples that are always unrecognized (never correctly classified) in training. In this paper, we uncover them by model checkpoints' gradients, forming the proposed self-ensemble protection (SEP), which is very effective because (1) learning on examples ignored during normal training tends to yield DNNs ignoring normal examples; (2) checkpoints' cross-model gradients are close to orthogonal, meaning that they are as diverse as DNNs with different architectures. That is, our ensemble attains its remarkable performance at only the computational cost of training one model. By extensive experiments with 9 baselines on 3 datasets and 5 architectures, SEP is verified to be a new state-of-the-art, e.g., our small ℓ∞ = 2/255 perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% by the best-known method. Code is available at https://github.com/Sizhe-Chen/SEP. 1 INTRODUCTION Large-scale datasets have become increasingly important in training high-performance deep neural networks (DNNs). Thus, it is common practice to collect data online (Mahajan et al., 2018; Sun et al., 2017), an almost unlimited data source. This poses a great threat to the commercial competence of data owners such as social media companies, since competitors could also train good DNNs from their data. Therefore, great efforts have been devoted to protecting data from unauthorized use in model training. The most typical way is to add imperceptible perturbations to the data, so that DNNs trained on it have poor generalization (Huang et al., 2020a; Fowl et al., 2021b). Existing data protection methods use a single DNN to generate incorrect but DNN-sensitive features (Huang et al., 2020a; Fu et al., 2021; Fowl et al., 2021b) for training data by, e.g., adversarial attacks (Goodfellow et al., 2015). However, the data protectors cannot know which DNN and which training strategies the unauthorized users will adopt. Thus, the protective examples should aim at hurting DNN training, a whole dynamic process, instead of a static DNN. Therefore, it is interesting to study the vulnerability of DNN training. Recall that the vulnerability of a DNN is revealed by adversarial examples, which are similar to clean ones but unrecognized by the model (Madry et al., 2018). Similarly, we depict the vulnerability of training by the perturbed training samples that are never predicted correctly during training. Learning on examples ignored during normal training tends to yield DNNs ignoring normal examples. Such examples can be easily uncovered by the gradients from the ensemble of model training checkpoints. However, ensemble methods have never been explored in data protection to the best of our knowledge, so it is natural to wonder: can we use these intermediate checkpoint models for data protection in a self-ensemble manner? ∗Correspondence to Xiaolin Huang (xiaolinhuang@sjtu.edu.cn).
An effective ensemble demands high diversity of sub-models, which is generally quantified by their gradient similarity (Pang et al., 2019; Yang et al., 2021), i.e., the gradients on the same image from different sub-models should be orthogonal. Surprisingly, we found that checkpoints' gradients are as orthogonal as those of DNNs with different architectures in a conventional ensemble. In this regard, we argue that intermediate checkpoints are diverse enough to form the proposed self-ensemble protection (SEP), challenging existing beliefs on their similarity (Li et al., 2022). With SEP, effective ensemble protection is achieved at the computational cost of training only one DNN. Since the scale of data worth protecting is mostly very large, SEP avoids the tremendous cost of training multiple models. Therefore, our study enables a practical ensemble for large-scale data, which may help improve generalization, increase attack transferability, and shed light on DNN training dynamics. Multiple checkpoints offer us a pool of good features for an input. Thus, we can additionally take advantage of diverse features, besides diverse gradients, at no cost. Inspired by the neural collapse theory (Papyan et al., 2020), which demonstrates that the mean feature of samples in a class is a highly representative depiction of this class, we introduce a novel feature alignment (FA) loss that induces a sample's last-layer feature to collapse onto the mean of incorrect-class features. With features from multiple checkpoints, FA robustly injects incorrect features so that DNNs are deeply confounded. Equipping SEP with FA, our method achieves astonishing performance by revealing the vulnerability of DNN training: (1) our examples are mostly misclassified in any training process, in contrast to those of a recent method (Sandoval-Segura et al., 2022), and (2) clean samples are always much closer to each other than to protected samples, indicating that the latter belong to another distribution that is not noticed by normal training. By setting ℓ∞ = 2/255, a very small bound, SEP perturbations on the CIFAR-10 training set reduce the testing accuracy of a ResNet18 from 94.56% to 14.68%, while the best-known results only reach 41.35% with the same amount of overall computation to craft the perturbations. The superiority of our method is also observable in the studies on CIFAR-100 and an ImageNet subset with 5 architectures. We also study perturbations under different norms, and find that mixing ℓ∞ and ℓ0 perturbations (Wu et al., 2023) is the only effective way to resist ℓ∞ adversarial training, which recovers the accuracy for all other types of perturbations. Our contributions are summarized below. • We propose that protective perturbations should reveal the vulnerability of the DNN training process, which we depict by the examples never classified correctly in training. • We uncover such examples by the self-ensemble of model checkpoints, which are found to be surprisingly diverse as data protectors. • Our method is very effective even using only the computation of training one DNN. Equipped with a novel feature alignment loss, our ℓ∞ = 8/255 perturbations lead DNNs to have < 5.7% / 3.2% / 0.6% accuracy on CIFAR-10 / CIFAR-100 / the ImageNet subset. 2 RELATED WORK Small perturbations are known to be able to fool DNNs into incorrect predictions (Szegedy et al., 2014).
Such test-time adversarial perturbations are crafted effectively by adversarially updating samples with model gradients (Carlini & Wagner, 2017), and the produced adversarial examples (AEs) transfer to hurt other DNNs as well (Chen et al., 2022). Similarly, training-time adversarial perturbations, i.e., poisoning examples, are also obtainable by adversarially modifying training samples using DNN gradients (Koh & Liang, 2017; Fowl et al., 2021b). All DNNs trained on poisoning examples generalize poorly on clean examples, making poisoning methods helpful in protecting data from unauthorized use in training. Besides adversarial noise, it has been demonstrated that error-minimization (Huang et al., 2020a), gradient alignment (Fowl et al., 2021a) and influence functions (Fang et al., 2020) are also useful in protecting data. However, current methods only use one DNN, because the scale of data worth protecting makes training multiple models prohibitively expensive. Ensemble is validated as a panacea for boosting adversarial attacks (Liu et al., 2017; Dong et al., 2018). By aggregating the probabilities (Liu et al., 2017), logits or losses (Dong et al., 2018) of multiple models, ensemble attacks significantly increase the black-box attack success rate. Ensemble attacks can be further enhanced by reducing the gradient variance of sub-models (Xiong et al., 2022), and such an optimization strategy is also adopted in our method. Besides, ensemble has also been shown effective as a defense method by promoting diversity across sub-models (Pang et al., 2019; Yang et al., 2020; 2021) or producing diverse AEs in adversarial training (Tramèr et al., 2018; Wang & Wang, 2021). Despite the good performance of ensemble in attacks and defenses, it has not been introduced to protect datasets due to its inefficiency. In this regard, we adopt the self-ensemble strategy, which only requires the computation of training one DNN. Its current applications are focused on semi-supervised learning (Zhao et al., 2019; Liu et al., 2022). Two similar but different tasks besides poisoning-based data protection are adversarial training and backdoor attacks. Adversarial training (Madry et al., 2018; Zhang et al., 2019; Stutz et al., 2020) continuously generates AEs with the current checkpoint's gradients to improve the model's robustness towards worst-case perturbations. In contrast, data protection produces fixed poisoning examples so that unauthorized training yields low clean accuracy. Backdoor attacks (Geiping et al., 2020; Huang et al., 2020b) perturb a small proportion of the training set to make the DNNs mispredict certain samples while remaining well-functional on other clean samples. Data protectors, by contrast, perturb the whole training set to degrade the model's performance on all clean samples. 3 THE PROPOSED METHOD 3.1 PRELIMINARIES We first introduce the threat model and problem formulation of the data protection task. The data owner company wishes to release data for users while preventing an unauthorized appropriator from collecting the data to train DNNs. Thus, the data protector adds imperceptible perturbations to samples, so that humans can view the data without any obstacle, while the appropriator cannot train DNNs to an acceptable testing accuracy. Since the protector has access to the whole training set, it can craft perturbations for each sample for effective protection (Shen et al., 2019; Feng et al., 2019; Huang et al., 2020a; Fowl et al., 2021b).
Mathematically, the problem can be formulated as
$$\max_{\delta \in \Pi_{\varepsilon}} \sum_{(x,y)\in D} L_{CE}\left(f_a(x, \theta^{*}), y\right), \quad \text{s.t.}\;\; \theta^{*} \in \arg\min_{\theta} \sum_{(x,y)\in T} L_{CE}\left(f_a(x+\delta, \theta), y\right), \tag{1}$$
where the perturbations δ, bounded by ε, are added to the training set T so that an appropriator model f_a(·, θ*) trained on it has low accuracy on the test set D, i.e., a high cross-entropy loss L_CE(·, ·). The δ can be effectively calculated by targeted attacks (Fowl et al., 2021b), which use a well-trained protecting DNN f_p to produce targeted adversarial examples (AEs) that carry the non-robust features of the incorrect class g(y):
$$x_{t+1} = \Pi_{\epsilon}\left(x_t - \alpha \cdot \operatorname{sign}\left(G_{CE}(f_p, x_t)\right)\right), \quad G_{CE}(f_p, x) = \nabla_{x} L_{CE}\left(f_p(x), g(y)\right), \tag{2}$$
where Π_ϵ clips the sample into the ε ℓ∞-norm ball after each update with step size α, and g(·) stands for a permutation on the label space. Our method also adopts the optimization in (2). 3.2 DEPICTING THE VULNERABILITY OF DNN TRAINING Current methods (Shen et al., 2019; Feng et al., 2019; Huang et al., 2020a; Fowl et al., 2021b) craft protective perturbations with a single DNN f_p and expect them to generalize to poison different, unknown architectures. However, the data protectors cannot know which DNN and which training strategies the unauthorized users will adopt. Thus, the protective examples should aim at hurting DNN training, a whole dynamic process, instead of a static DNN. Therefore, it is interesting to study the vulnerability of DNN training. Recall that the vulnerability of a DNN is represented by AEs (Madry et al., 2018), because they are only slightly different from clean testing samples but are totally unrecognizable by a static model. Similarly, the vulnerability of DNN training can be depicted by examples that are slightly different from clean training samples but are always unrecognized in the training process, i.e., perturbed data never correctly predicted by the training model. If we view the training process as the generation of checkpoint models, the problem becomes finding the examples that are adversarial to the checkpoints, which can be easily solved by the ensemble attack (Dong et al., 2018). Let us investigate whether the training checkpoints, which are similar (Li et al., 2022) in architecture and parameters, can be diverse sub-models for an effective self-ensemble. To measure the diversity, we adopt the common gradient similarity metric (Pang et al., 2019; Yang et al., 2021). In Fig. 1 (upper right), we plot the average absolute cosine similarity between gradients of different checkpoints; the low value indicates that the gradients on images are close to orthogonal across checkpoints, just as in an ensemble of different architectures (bottom right). This means, surprisingly, that intermediate checkpoints are diverse enough to form the proposed self-ensemble protection (SEP) as
$$G_{SEP}(f_p, x) = \sum_{k=0}^{n-1} G_{CE}\left(f_p^{k}, x\right), \tag{3}$$
where f_p^k is the k-th equidistant intermediate checkpoint and G_SEP gives the gradients for the update in (2). As illustrated in Fig. 1 (left), SEP (vertical box) requires the computation of training only one DNN, whereas the conventional ensemble (horizontal box) needs time- and resource-consuming training processes to obtain a large number of ensemble models. This efficiency is especially important for data protection, because only a large amount of data, if stolen, could be used to train competitive DNNs. In this regard, the scale of data requiring particular protection would be large, and saving the computation of training extra models makes a significant difference.
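For concreteness, the targeted update of Eq. (2) combined with the self-ensemble gradient of Eq. (3) can be sketched in a few lines. This is a minimal illustration assuming PyTorch; the names `checkpoints`, `target_labels`, and the hyperparameter values are hypothetical placeholders, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def sep_perturb(x, target_labels, checkpoints, eps=2/255, alpha=0.5/255, steps=30):
    """Targeted PGD (Eq. 2) using the summed checkpoint gradients of Eq. (3)."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Eq. (3): accumulate cross-entropy gradients over the n checkpoints.
        loss = sum(F.cross_entropy(f(x_adv), target_labels) for f in checkpoints)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Eq. (2): signed descent step toward the target class g(y) ...
            x_adv = x_adv - alpha * grad.sign()
            # ... then projection onto the l_inf ball and the valid pixel range.
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```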
SEP is motivated differently from the conventional ensemble. Ensemble attacks aim to produce architecture-invariant adversarial examples, and such transferable examples reveal the common vulnerability of different architectures (Chen et al., 2022). SEP, in contrast, targets the vulnerability of DNN training. By enforcing consistent misclassification, SEP produces examples ignored during normal training, and learning on them would thus yield DNNs that ignore normal examples. 3.3 PROTECTING DATA BY SELF-ENSEMBLE Multiple checkpoints offer us a pool of features for an input. Those representations, though distinctive, all contribute to accurate classification. Thus, we can additionally take advantage of diverse features, besides diverse gradients, at no cost. Motivated by this, we resort to the neural collapse theory (Papyan et al., 2020), because it unravels the characteristics of DNN features. Neural collapse has four manifestations for a deep classifier: (1) the in-class variability of last-layer activations collapses to class means; (2) class means converge to a simplex equiangular tight frame; (3) linear classifiers approach class means; (4) the classifier converges to choosing the nearest class mean. These demonstrate that the last-layer features of well-trained DNNs center closely on class means. In this regard, the mean feature of in-class samples is a highly representative depiction of the class. Based on this, we develop the feature alignment (FA) loss to jointly use different but good representations of a class from multiple checkpoints. Specifically, for every checkpoint, we encourage the last-layer feature of a sample to approximate the mean feature of the target-class samples. In this way, FA promotes neural collapse toward incorrect centers so that a sample carries exactly the high-dimensional feature of the target-class samples. Therefore, non-robust features of the target class can be robustly injected into the data so that DNNs are deeply confounded. Mathematically, FA in SEP can be expressed as
$$G_{SEP\text{-}FA}(h_p, x) = \sum_{k=0}^{n-1} G_{FA}\left(h_p^{k}, x\right) = \sum_{k=0}^{n-1} \nabla_{x} \left\| h_p^{k}(x) - h_c^{k}(g(y)) \right\|, \quad h_c^{k}(y) = \frac{\sum_{x \in T_y} h_p^{k}(x)}{|T_y|}, \tag{4}$$
where h_p^k stands for the feature extractor (all layers except the last linear layer) of f_p^k, h_c^k(g(y)) is the mean (center) feature of the target class g(y) computed by h_p^k, and ∥·∥ denotes the MSE loss. Our overall algorithm is summarized in Alg. 1, where we use a stochastic variance reduction (VR) gradient method (Johnson & Zhang, 2013) to avoid bad local minima in the optimization. Our method first calculates the FA gradients g_k with each training checkpoint (line 4). Then, before updating the sample with the accumulated gradients (line 11), we reduce the variance of the ensemble gradients (from g_ens to g_upd) in a predictive way by M inner virtual optimizations on x̂_m, which has been verified to boost ensemble adversarial attacks (Xiong et al., 2022).
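As an illustration of the G_FA term that Alg. 1 (below) calls per checkpoint, a hedged sketch of Eq. (4) might look as follows. PyTorch is assumed, and `feature_extractors` (the h_p^k) and `class_means` (the precomputed h_c^k) are assumed names rather than the paper's code.

```python
import torch

@torch.no_grad()
def class_mean_features(h, loader, num_classes):
    """h_c^k(y) in Eq. (4): mean last-layer feature over the class-y training samples."""
    sums, counts = None, torch.zeros(num_classes)
    for x, y in loader:
        feats = h(x)
        if sums is None:
            sums = torch.zeros(num_classes, feats.shape[1])
        sums.index_add_(0, y, feats)               # accumulate per-class feature sums
        counts += torch.bincount(y, minlength=num_classes).float()
    return sums / counts.unsqueeze(1)              # divide by |T_y|

def fa_gradient(x, target_y, feature_extractors, class_means):
    """G_FA summed over checkpoints: gradient of the MSE to the target-class means."""
    x = x.clone().requires_grad_(True)
    loss = sum(((h(x) - class_means[k][target_y]) ** 2).mean()
               for k, h in enumerate(feature_extractors))
    return torch.autograd.grad(loss, x)[0]
```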
Algorithm 1 Self-Ensemble Protection with Feature Alignment and Variance Reduction
Input: dataset T = {(x, y)}, ℓ∞ bound ε, step size α, number of protection iterations T, number of training epochs N, number of checkpoints in the self-ensemble n, number of inner updates M
Output: protected dataset x′
1: Train a DNN for N epochs and save n equidistant checkpoints
2: x_0 = x
3: for t = 0 → T − 1 do
4:   for k = 0 → n − 1 do: g_k = G_FA(h_p^k, x_t)   # get the gradients from each checkpoint as in (4)
5:   g_ens = (1/n) · Σ_{k=0}^{n−1} g_k;  g_upd = 0;  x̂_0 = x_t   # initialize variables for the inner optimization
6:   for m = 0 → M − 1 do
7:     Pick a random index k   # stochastic variance reduction (Johnson & Zhang, 2013)
8:     g_upd = g_upd + G_FA(h_p^k, x̂_m) − (g_k − g_ens)   # accumulate variance-reduced gradients
9:     x̂_{m+1} = Π_ϵ(x̂_m − α · sign(g_upd))   # virtual update on x̂_m
10:  end for
11:  x_{t+1} = Π_ϵ(x_t − α · sign(g_upd))   # update samples with variance-reduced gradients
12: end for
13: return x′ = x_{T−1}
In a word, the main part of our method is to use checkpoints to craft targeted AEs for the training set (lines 4–5 in Alg. 1) in a self-ensemble protection (SEP) manner, and SEP is boosted by the FA loss (line 4) and VR optimization (lines 6–11). In this way, our overall method only requires 1 × N training epochs and T × (n + M) backward passes to update the samples. 4 EXPERIMENTS 4.1 SETUP We evaluate SEP along with 7 data protection baselines, including adding random noise; TensorClog, which aims to cause gradient vanishing (Shen et al., 2019); Gradient Alignment to target-class gradients (Fowl et al., 2021a); DeepConfuse, which protects via an autoencoder (Feng et al., 2019); Unlearnable Examples (ULEs), which use error-minimization noise (Huang et al., 2020a); Robust ULEs (RULEs), which use adversarial training (Fu et al., 2021); Adversarial Poison (AdvPoison), which resorts to targeted attacks (Fowl et al., 2021b); and AutoRegressive (AR) Poison (Sandoval-Segura et al., 2022), which uses a Markov chain. Hyperparameters of the baselines are shown in Appendix D. We use the results reproduced in (Fowl et al., 2021b) in Tables 1, 5, and 6. For our method, we optimize class-y samples to have the mean feature of the target incorrect class g(y), where g(y) = (y + 5)%10 for the CIFAR-10 (Krizhevsky et al., 2009) and protected ImageNet (Krizhevsky et al., 2017) classes, and g(y) = (y + 50)%100 for CIFAR-100. For the ImageNet subset, we train f_a and f_p on the first 100 classes of ImageNet-1K, but only protect samples in 10 significantly different classes: African chameleon, black grouse, electric ray, hammerhead, hen, house finch, king snake, ostrich, tailed frog, and wolf spider. This establishes the class-wise data protection setting, and the reported accuracy is computed on the testing samples of these 10 classes. We train a ResNet18 for N = 120 epochs as f_p, following (Huang et al., 2020a; Fowl et al., 2021b). 15 equidistant intermediate checkpoints (epochs 8, 16, ..., 120) are adopted, with M = 15 and T = 30 if not otherwise stated. Experiments are conducted on an NVIDIA Tesla A100 GPU but can be run on GPUs with 4GB+ memory because we store checkpoints on disk. The data protection methods are assessed by training models with 5 architectures: ResNet18 (He et al., 2016), SENet18 (Hu et al., 2018), VGG16 (Simonyan & Zisserman, 2015), DenseNet121 (Huang et al., 2017), and GoogLeNet (Szegedy et al., 2015), implemented from Pytorch vision (Paszke et al., 2019).
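Line 1 of Alg. 1, with the schedule just described (15 checkpoints at epochs 8, 16, ..., 120), could be realized as in the following sketch; `train_one_epoch` and the argument names are illustrative assumptions, not the released code.

```python
import copy

def train_with_checkpoints(model, train_one_epoch, n_epochs=120, n_ckpts=15):
    """Train f_p once and keep n equidistant checkpoints (Alg. 1, line 1)."""
    ckpts, every = [], n_epochs // n_ckpts      # every = 8 for 120 epochs, 15 ckpts
    for epoch in range(1, n_epochs + 1):
        train_one_epoch(model)
        if epoch % every == 0:                  # epochs 8, 16, ..., 120
            ckpts.append(copy.deepcopy(model).eval())
    return ckpts                                # the sub-models f_p^k used in Eqs. (3)-(4)
```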
We train the appropriator DNNs f_a for 120 epochs with an SGD optimizer, using an initial learning rate of 0.1 that is divided by 10 at epochs 75 and 90. The momentum in training is 0.9 and the weight decay is 5e-4. In this setting, DNNs trained on clean data achieve high accuracy, i.e., 95% / 75% / 78% on CIFAR-10 / CIFAR-100 / the ImageNet subset. Training configurations are the same for f_a and f_p. In the ablation study, we denote the pure self-ensemble (3) as SEP, SEP with feature alignment (4) as SEP-FA, and SEP-FA with variance reduction as SEP-FA-VR (Alg. 1). In other experiments, "ours" stands for our final method, i.e., SEP-FA-VR. We put the confusion matrix of f_a in Appendix C to provide a class-wise analysis. 4.2 UNCOVERING THE VULNERABILITY OF DNN TRAINING We first investigate whether our examples successfully reveal the vulnerability of DNN training. If so, we should be able to hurt different training processes regardless of the model architecture. In this regard, we test the "cross-training transferability" of our protective data. We report the results in Fig. 2 (a), where one can see that SEP samples generated by ResNet18 training tend to be mispredicted when training DenseNet121 and VGG16. In contrast, AR samples behave like clean training data and are well recognized. This demonstrates that our method depicts the vulnerability of DNN training well compared to the recent baseline (Sandoval-Segura et al., 2022). We also perform a class-wise study to illustrate how DNNs treat clean and protective examples. We first train a CIFAR-10 DNN with clean samples (classes 0-4) and protective ones (classes 5-9, carrying features of classes 0-4). The DNN performs well on classes 0-4 but misclassifies all clean samples of classes 5-9 into classes 0-4; see Fig. 2 (b). This indicates that, in the DNN's view, clean samples (from different classes) are much closer to each other than to protective ones (which look very similar). More extremely, if we inject features of class 0 into the examples of classes 1-9 and use them together with clean class-0 samples for training, the DNN classifies all testing samples into class 0; see Fig. 2 (c). 4.3 PROTECTIVE PERFORMANCE By depicting the vulnerability of DNN training, our method achieves remarkable protective performance on 3 datasets and 5 architectures against various baselines. In Table 1, our method surpasses existing state-of-the-art baselines by a large margin, leading DNNs to < 5.7% / 3.2% / 0.6% accuracy on CIFAR-10 / CIFAR-100 / the ImageNet subset. A comparison with weaker baselines is shown in Table 5. The strong performance enables us to set an extremely small bound of ε = 2/255, even for high-resolution ImageNet samples. The perturbations are invisible even under meticulous inspection; see Appendix A. Nevertheless, the appropriator can reach no more than 30% accuracy in most cases; see Table 2. Adversarial training (AT) has been validated as the most effective strategy to recover the accuracy of training with protective perturbations. It does not hinder the practicability of data protection methods because AT significantly decreases the accuracy and requires several-fold training computation. However, it is interesting to study the effect of different types of perturbations in different AT settings. Here we study AR Poison, which is claimed to resist AT. We set the perturbation bound as ℓ2 = 1 (step size α = 0.2) (Sandoval-Segura et al., 2022) and ℓ0 = 1 (Wu et al., 2023), where the latter means perturbing one pixel without other restrictions.
We keep the AT bound the same as the perturbation bound and find that, in this case, both ℓ∞ and ℓ2 AT can recover the accuracy under ℓ∞, ℓ2, and ℓ∞ + ℓ2 perturbations. The only type of perturbation able to resist ℓ∞ AT is the mixture of ℓ∞ and ℓ0 perturbations. Besides, our method is significantly better than AR Poison under normal training. 4.4 ABLATION STUDY We study the performance improvements from SEP, FA, and VR separately, along with the best baseline AdvPoison (Fowl et al., 2021b); see Table 1. We first keep the overall computation the same for all experiments, and then vary the number of sub-models n to see its effect on our method. In Table 4, we keep our methods' computation comparable to AdvPoison, which trains the protecting DNN for 40 epochs and crafts perturbations with 250 steps 8 times (we modify it to 4). In SEP and SEP-FA, we train for N = 120 epochs and use n = 30 checkpoints to update samples T = 30 times, aligning the computation with AdvPoison. In SEP-FA-VR with M = 15 inner updates, we set the number of checkpoints to n = 15 so that the overall computation is the same; this is also the default setting for all experiments, as in Sec. 4.1. We use ResNet18 as f_p on the CIFAR-10 dataset here. In the conventional ensemble, 30 DNNs with 5 architectures are trained with 6 seeds. As shown in Table 4, SEP is able to halve the accuracy of AdvPoison within the same computation budget, indicating that knowledge from multiple models is much more important than additional update steps. Compared with the conventional ensemble, which requires 30× the training computation, SEP performs only slightly worse. Moreover, equipped with FA, which consumes no additional computation, the efficient SEP-FA is as effective as the conventional ensemble. With VR, SEP-FA-VR is stably better and reduces the accuracy from 45.10% to 17.47% on average. We illustrate the training process of the Table 4 experiments in Fig. 3. Compared to training on clean data (purple line), data protection methods accelerate the model's convergence on the training data, but the DNN's testing accuracy suddenly drops at the initial stage of training. After the learning rate decay at epoch 75, the protection performance of the different methods can be clearly observed. SEP accounts for the majority of the performance improvements, and FA and VR further decrease the accuracy, making our method finally outperform the conventional, inefficient ensemble. We also show the validation accuracy (on unlearned, i.e., held-out, protective examples) of different perturbations in Fig. 4, where it is obvious that early stopping cannot be a good defense because the validation accuracy mostly stays close to the training accuracy. However, a huge and unusual gap between them may signal the existence of protective examples. We also vary the number of intermediate checkpoints n used in the self-ensemble to perform an ablation study under different computation budgets. We set n = 3, 5, 10, 30, 120 without changing the other hyper-parameters in the self-ensemble, and plot the results in Fig. 5. Similar results can be seen, i.e., FA and VR stably contribute to the performance. We also find that although n increases exponentially, the resulting performance gain diminishes as n grows large. Most prominently, raising n from 30 to 120 is not bound to yield better results, meaning that the performance saturates around n = 30, and it would be unnecessary to use all checkpoints.
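The checkpoint-diversity metric underlying these ablations (the average absolute cosine similarity between input gradients, Sec. 3.2 and Fig. 1) can be checked with a short script. This is a hedged sketch assuming PyTorch; the function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

def grad_abs_cosine(model_a, model_b, x, y):
    """Mean |cos| between the per-sample input gradients of two checkpoints."""
    def input_grad(model):
        x_ = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_), y)
        return torch.autograd.grad(loss, x_)[0].flatten(1)   # [B, C*H*W]
    g_a, g_b = input_grad(model_a), input_grad(model_b)
    return F.cosine_similarity(g_a, g_b, dim=1).abs().mean().item()
```

A value near zero, as reported in Fig. 1, indicates near-orthogonal gradients and hence high diversity across checkpoints.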
5 DISCUSSION AND CONCLUSION In this paper, we propose that data protection should target the vulnerability of DNN training, which we successfully depict as the examples never classified correctly in training. Such examples can be easily computed with model checkpoints, which are found to have surprisingly diverse gradients. By self-ensemble, effective ensemble performance is achieved at the computational cost of training one model, and we can also take advantage of the diverse features from the checkpoints to further boost performance with the novel feature alignment loss. Our method exceeds current baselines significantly, reducing the appropriator model's accuracy from 45.10% (the best-known result) to 17.47% on average. Our method could also serve as a potential benchmark for evaluating the DNN learning process, e.g., how to prevent DNNs from learning non-robust features (shortcuts) instead of semantic ones. It would also be interesting to study the poisoning task in self-supervised learning and stable diffusion. Since our method is implemented as a targeted ensemble attack, it is also applicable to non-classification tasks, where adversarial attacks have also been developed and neural collapse also exists for pre-trained feature extractors. ACKNOWLEDGEMENT This work is partly supported by the National Natural Science Foundation of China (61977046), the Shanghai Science and Technology Program (22511105600), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the National Science Foundation (CCF-1937500). The authors are grateful to Prof. Sijia Liu for his valuable discussions.
1. What is the main contribution of the paper regarding data protection? 2. What are the strengths of the proposed approach, particularly in utilizing intermediate checkpoints? 3. What are the weaknesses of the paper, especially regarding experimentation and explanation of the feature alignment section? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes to utilize intermediate checkpoints from a single training process to protect data from unauthorized use. It also proposes a novel feature alignment (FA) technique that improves the accuracy of its proposed self-ensemble protection (SEP) method. FA uses an existing theory called neural collapse to align the last-layer feature of a sample, for each checkpoint, to the feature mean of the target-class samples. It shows a minor improvement in performance, but the method is simple, which will make it useful. Strengths And Weaknesses Strengths: The objective is straightforward, and the problem of data protection is exciting and vital. Improving performance with intermediate checkpoints from a single training run is an excellent idea to save time and resources. Weaknesses: The feature alignment section could be explained a little better. In particular, the neural collapse theory should be explained in more detail, since it is the primary tool used to develop FA. The experiments are limited. More results on different datasets comparing different models should have been shown. Only one model and dataset are tested while the proposed method is compared with the previous methods in Table 1. It is unclear what model and dataset are used to produce Table 3. Clarity, Quality, Novelty And Reproducibility Sec. 3 is not well explained; it could be rewritten for better understanding. The quality is acceptable. They have done minimal experiments to support their claims. It needs more experiments. Their self-ensemble idea is interesting, but it is mainly an extension of previous work.
ICLR
Title Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors Abstract As data becomes increasingly vital, a company would be very cautious about releasing data, because the competitors could use it to train high-performance models, thereby posing a tremendous threat to the company’s commercial competence. To prevent training good models on the data, we could add imperceptible perturbations to it. Since such perturbations aim at hurting the entire training process, they should reflect the vulnerability of DNN training, rather than that of a single model. Based on this new idea, we seek perturbed examples that are always unrecognized (never correctly classified) in training. In this paper, we uncover them by model checkpoints’ gradients, forming the proposed self-ensemble protection (SEP), which is very effective because (1) learning on examples ignored during normal training tends to yield DNNs ignoring normal examples; (2) checkpoints’ cross-model gradients are close to orthogonal, meaning that they are as diverse as DNNs with different architectures. That is, our amazing performance of ensemble only requires the computation of training one model. By extensive experiments with 9 baselines on 3 datasets and 5 architectures, SEP is verified to be a new state-of-the-art, e.g., our small l∞ = 2/255 perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% by the best-known method. Code is available at https://github.com/Sizhe-Chen/SEP. 1 INTRODUCTION Large-scale datasets have become increasingly important in training high-performance deep neural networks (DNNs). Thus, it is a common practice to collect data online (Mahajan et al., 2018; Sun et al., 2017), an almost unlimited data source. This poses a great threat to the commercial competence of data owners such as social media companies since the competitors could also train good DNNs from their data. Therefore, great efforts have been devoted to protecting data from unauthorized use in model training. The most typical way is to add imperceptible perturbations to the data, so that DNNs trained on it have poor generalization (Huang et al., 2020a; Fowl et al., 2021b). Existing data protection methods use a single DNN to generate incorrect but DNN-sensitive features (Huang et al., 2020a; Fu et al., 2021; Fowl et al., 2021b) for training data by, e.g., adversarial attacks (Goodfellow et al., 2015). However, the data protectors cannot know what DNN and what training strategies the unauthorized users will adopt. Thus, the protective examples should aim at hurting the DNN training, a whole dynamic process, instead of a static DNN. Therefore, it would be interesting to study the vulnerability of DNN training. Recall that the vulnerability of a DNN is revealed by the adversarial examples which are similar to clean ones but unrecognized by the model (Madry et al., 2018). Similarly, we depict the vulnerability of training by the perturbed training samples that are never predicted correctly during training. Learning on examples ignored during normal training tends to yield DNNs ignoring normal examples. Such examples could be easily uncovered by the gradients from the ensemble of model training checkpoints. However, ensemble methods have never been explored in data protection to the best of our knowledge, so it is natural to wonder Can we use these intermediate checkpoint models for data protection in a self-ensemble manner? ∗Correspondence to Xiaolin Huang (xiaolinhuang@sjtu.edu.cn). 
An effective ensemble demands high diversity of sub-models, which is generally quantified by their gradient similarity (Pang et al., 2019; Yang et al., 2021), i.e., the gradients on the same image from different sub-models should be orthogonal. Surprisingly, we found that checkpoints’ gradients are as orthogonal as DNNs with different architectures in the conventional ensemble. In this regard, we argue that intermediate checkpoints are very diverse to form the proposed self-ensemble protection (SEP), challenging existing beliefs on their similarity (Li et al., 2022). By SEP, effective ensemble protection is achieved by the computation of training only one DNN. Since the scale of data worth protecting is mostly very large, SEP avoids tremendous costs by training multiple models. Therefore, our study enables a practical ensemble for large-scale data, which may help improve the generalization, increase the attack transferability, and study DNN training dynamics. Multiple checkpoints offer us a pool of good features for an input. Thus, we could additionally take the advantage of diverse features besides diverse gradients at no cost. Inspired by neural collapse theory (Papyan et al., 2020), which demonstrates that the mean feature of samples in a class is a highly representative depiction of this class, we bring about a novel feature alignment loss that induces a sample’s last-layer feature collapse into the mean of incorrect-class features. With features from multiple checkpoints, FA robustly injects incorrect features so that DNNs are deeply confounded. Equipping SEP with FA, our method achieves astonishing performance by revealing the vulnerability of DNN training: (1) our examples are mostly mis-classified in any training processes compared to a recent method (Sandoval-Segura et al., 2022), and (2) clean samples are always much closer to each other than to protected samples, indicating that the latter belong to another distribution that could not be noticed by normal training. By setting ℓ∞ = 2/255, a very small bound, SEP perturbations on the CIFAR-10 training set reduce the testing accuracy of a ResNet18 from 94.56% to 14.68%, while the best-known results could only reach 41.35% with the same amount of overall calculation to craft the perturbations. The superiority of our method is also observable in the study on CIFAR-100 and ImageNet subset on 5 architectures. We also study perturbations under different norms, and found that mixing ℓ∞ and ℓ0 perturbations (Wu et al., 2023) is the only effective way to resist ℓ∞ adversarial training, which could recover the accuracy for all other types of perturbations. Our contributions could be summarized below. • We propose that protective perturbations should reveal the vulnerability of the DNN training process, which we depict by the examples never classified correctly in training. • We uncover such examples by the self-ensemble of model checkpoints, which are found to be surprisingly diverse as data protectors. • Our method is very effective even using the computation of training one DNN. Equipped with a novel feature alignment loss, our ℓ∞ = 8/255 perturbations lead DNNs to have < 5.7% / 3.2% / 0.6% accuracy on CIFAR-10 / CIFAR-100 / ImageNet subset. 2 RELATED WORK Small perturbations are known to be able to fool DNNs into incorrect predictions (Szegedy et al., 2014). 
Such test-time adversarial perturbations are crafted effectively by adversarially updating samples with model gradients (Carlini & Wagner, 2017), and the produced adversarial examples (AEs) transfer to hurt other DNNs as well (Chen et al., 2022). Similarly, training-time adversarial perturbations, i.e., poisoning examples, are also obtainable by adversarially modify training samples using DNN gradients (Koh & Liang, 2017; Fowl et al., 2021b). All DNNs trained on poisoning examples generalize poorly on clean examples, making poisoning methods helpful in protecting data from unauthorized use of training. Besides adversarial noise, it has been demonstrated that error-minimization (Huang et al., 2020a), gradient alignment (Fowl et al., 2021a) and influence functions (Fang et al., 2020) are also useful in protecting data. However, current methods only use one DNN because the scale of data worth protection is very large for training multiple models. Ensemble is validated as a panacea for boosting adversarial attacks (Liu et al., 2017; Dong et al., 2018). By aggregating the probabilities (Liu et al., 2017), logits or losses (Dong et al., 2018) of multiple models, ensemble attacks significantly increase the black-box attack success rate. Ensemble attacks could be further enhanced by reducing the gradient variance of sub-models (Xiong et al., 2022), and such an optimization way is also adopted in our method. Besides, ensemble has also been shown effective as a defense method by inducing low diversity across sub-models (Pang et al., 2019; Yang et al., 2020; 2021) or producing diverse AEs in adversarial training (Tramèr et al., 2018; Wang & Wang, 2021). Despite the good performance of ensemble in attacks and defenses, it has not been introduced to protect datasets due to its inefficiency. In this regard, we adopt the self-ensemble strategy, which only requires the computation of training one DNN. Its current applications are focused on semi-supervised learning (Zhao et al., 2019; Liu et al., 2022). Two similar but different tasks besides poisoning-based data protection are adversarial training and backdoor attacks. Adversarial training (Madry et al., 2018; Zhang et al., 2019; Stutz et al., 2020) continuously generates AEs with current checkpoint gradients to improve the model’s robustness towards worst-case perturbations. In contrast, data protection produces fixed poisoning examples so that unauthorized training yields low clean accuracy. Backdoor attacks (Geiping et al., 2020; Huang et al., 2020b) perturb a small proportion of the training set to make the DNNs mispredict certain samples, but remain well-functional on other clean samples. While data protectors perturb the whole training set to degrade the model’s performance on all clean samples. 3 THE PROPOSED METHOD 3.1 PRELIMINARIES We first introduce the threat model and problem formulation of the data protection task. The data owner company wishes to release data for users while preventing an unauthorized appropriator to collect data for training DNNs. Thus, the data protector would add imperceptible perturbations to samples, so that humans have no obstacle to seeing the data, while the appropriator cannot train DNNs to achieve an acceptable testing accuracy. Since the protector has access to the whole training set, it can craft perturbations for each sample for effective protection (Shen et al., 2019; Feng et al., 2019; Huang et al., 2020a; Fowl et al., 2021b). 
Mathematically, the problem can be formulated as max δ∈Πε ∑ (x,y)∈D LCE(fa(x, θ∗), y), s.t. θ∗ ∈ argmin θ ∑ (x,y)∈T LCE (fa (x+ δ, θ) , y) , (1) where the perturbations δ bounded by ε are added to the training set T so that an appropriator model trained on it fa(·, θ∗) have a low accuracy on the test set D, i.e., a high cross-entropy loss LCE(·, ·). The δ could be effectively calculated by targeted attacks (Fowl et al., 2021b), which use a well-trained protecting DNN fp to produce targeted adversarial examples (AEs) that have the non-robust features in the incorrect class g(y) as xt+1 = Πϵ (xt − α · sign (GCE(fp,xt))) , GCE(fp,x) = ∇xLCE (fp (x) , g(y)) , (2) where Πϵ clips the sample into the ε ℓ∞-norm bound after each update with a step size α. g(·) stands for a permutation on the label space. Here our method also adopts the optimization in (2). 3.2 DEPICTING THE VULNERABILITY OF DNN TRAINING Current methods (Shen et al., 2019; Feng et al., 2019; Huang et al., 2020a; Fowl et al., 2021b) craft protective perturbations that are supposed to generalize to poison different unknown architectures by a single DNN fp. However, the data protectors cannot know what DNN and what training strategies the unauthorized users will adopt. Thus, the protective examples should aim at hurting the DNN training, a whole dynamic process, instead of a static DNN. Therefore, it would be interesting to study the vulnerability of DNN training. Recall that the vulnerability of a DNN is represented by AEs (Madry et al., 2018), because they are slightly different from clean testing samples but are totally unrecognizable by a static model. Similarly, the vulnerability of DNN training could be depicted by examples that are slightly different from clean training samples but are always unrecognized in the training process, i.e., the perturbed data never correctly predicted by the training model. If we view the training process as the generation of checkpoint models, the problem becomes finding the examples that are adversarial to checkpoints, which could be easily solved by the ensemble attack (Dong et al., 2018). Let us investigate whether the training checkpoints, which are similar (Li et al., 2022) in architecture and parameters, could be diverse sub-models for effective self-ensemble. To measure the diversity, we adopt the common gradient similarity metric (Pang et al., 2019; Yang et al., 2021). In Fig. 1 (upper right), we plot the average of absolute cosine value for gradients in different checkpoints, and the low value indicates that gradients on images are close to orthogonal for different checkpoints like in the ensemble of different architectures (bottom right). This means, surprisingly, intermediate checkpoints are very diverse and sufficient to form the proposed self-ensemble protection (SEP) as GSEP(fp,x) = n−1∑ k=0 GCE(f k p ,x), (3) where fkp is the k th equidistant intermediate checkpoint and GSEP is the gradients for update in (2). As illustrated in 1 (left), SEP (vertical box) requires the computation of training only one DNN compared to the conventional ensemble (horizontal box) that needs time- and resource-consuming training processes to obtain a large number of ensemble models. This efficiency is especially important for data protection. Because only a large amount of data, if stolen, could be used to train competitive DNNs. In this regard, the scale of data requiring particular protection would be large, and saving the calculation of training extra models makes a significant difference. 
SEP is motivated differently from the conventional ensemble. Ensemble attacks aim to produce architecture-invariant adversarial examples, and such transferable examples reveal the common vulnerability of different architectures (Chen et al., 2022). SEP, in contrast, targets the vulnerability of DNN training: by enforcing consistent misclassification, SEP produces examples ignored during normal training, and learning on them thus yields DNNs that ignore normal examples.

3.3 PROTECTING DATA BY SELF-ENSEMBLE

Multiple checkpoints offer us a pool of features for an input. Those representations, though distinctive, all contribute to accurate classification. Thus, we can additionally take advantage of diverse features, besides diverse gradients, at no cost. Motivated by this, we resort to the neural collapse theory (Papyan et al., 2020) because it unravels the characteristics of DNN features. Neural collapse has four manifestations for a deep classifier: (1) the in-class variability of last-layer activations collapses to class means; (2) class means converge to a simplex equiangular tight frame; (3) linear classifiers approach class means; (4) the classifier converges to choosing the nearest class mean. Together, they demonstrate that the last-layer features of well-trained DNNs center closely on class means. In this regard, the mean feature of in-class samples is a highly representative depiction of that class.

Based on this, we develop the feature alignment (FA) loss to jointly use the different but good representations of a class from multiple checkpoints. Specifically, for every checkpoint, we encourage the last-layer feature of a sample to approximate the mean feature of target-class samples. In this way, FA promotes neural collapse onto incorrect centers so that a sample carries exactly the high-dimensional feature of the target-class samples. Therefore, non-robust features of that target class can be robustly injected into the data so that DNNs are deeply confounded. Mathematically, FA in SEP can be expressed as

$$G_{\mathrm{SEP\text{-}FA}}(h_p, x) = \sum_{k=0}^{n-1} G_{\mathrm{FA}}(h_p^k, x) = \sum_{k=0}^{n-1} \nabla_x \left\| h_p^k(x) - h_c^k(g(y)) \right\|, \quad h_c^k(y) = \frac{\sum_{x \in \mathcal{T}_y} h_p^k(x)}{|\mathcal{T}_y|}, \tag{4}$$

where $h_p^k$ stands for the feature extractor (everything except the last linear layer) of $f_p^k$, $h_c^k(g(y))$ is the mean (center) feature of the target class $g(y)$ calculated by $h_p^k$, and $\|\cdot\|$ denotes the MSE loss.

Our overall algorithm is summarized in Alg. 1, where we use a stochastic variance reduction (VR) gradient method (Johnson & Zhang, 2013) to avoid bad local minima in optimization. Our method first calculates the FA gradients $g_k$ with each training checkpoint (line 4). Then, before updating the sample with the accumulated gradients (line 11), we reduce the variance of the ensemble gradients (from $g_{\mathrm{ens}}$ to $g_{\mathrm{upd}}$) in a predictive way by $M$ inner virtual optimizations on $\hat{x}_m$, which has been verified to boost ensemble adversarial attacks (Xiong et al., 2022).
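The FA loss of (4) can be sketched as follows. This is a minimal illustration under our own assumptions: each checkpoint exposes a feature extractor $h_p^k$ as a callable module, and the per-class mean features $h_c^k$ are precomputed once over the training set.

```python
import torch
import torch.nn.functional as F

def class_mean_features(h_k, loader, num_classes, feat_dim):
    """Estimate h_c^k(y) in Eq. (4): the mean last-layer feature per class."""
    sums = torch.zeros(num_classes, feat_dim)
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for x, y in loader:
            sums.index_add_(0, y, h_k(x))
            counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
    return sums / counts.clamp(min=1).unsqueeze(1)

def fa_grad(extractors, class_means, x, target):
    """G_FA summed over checkpoints (Eq. (4)): MSE between a sample's feature
    and the target-class mean feature, differentiated w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    loss = sum(F.mse_loss(h_k(x), means_k[target])
               for h_k, means_k in zip(extractors, class_means))
    return torch.autograd.grad(loss, x)[0]
```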
Algorithm 1 Self-Ensemble Protection with Feature Alignment and Variance Reduction
Input: dataset T = {(x, y)}, ℓ∞ bound ε, step size α, number of protection iterations T, number of training epochs N, number of checkpoints in the self-ensemble n, number of inner updates M
Output: protected dataset x′
1: Train a DNN for N epochs and save n equidistant checkpoints
2: x_0 = x
3: for t = 0 → T − 1 do
4:   for k = 0 → n − 1 do: g_k = G_FA(h_p^k, x_t)  # gradient from each checkpoint, as in (4)
5:   g_ens = (1/n) Σ_{k=0}^{n−1} g_k;  g_upd = 0;  x̂_0 = x_t  # initialize the inner optimization
6:   for m = 0 → M − 1 do
7:     Pick a random index k  # stochastic variance reduction (Johnson & Zhang, 2013)
8:     g_upd = g_upd + G_FA(h_p^k, x̂_m) − (g_k − g_ens)  # accumulate variance-reduced gradients
9:     x̂_{m+1} = Π_ε(x̂_m − α · sign(g_upd))  # virtual update on x̂_m
10:  end for
11:  x_{t+1} = Π_ε(x_t − α · sign(g_upd))  # update the sample with the variance-reduced gradient
12: end for
13: return x′ = x_T

In summary, the main part of our method uses checkpoints to craft targeted AEs for the training set (lines 4–5 in Alg. 1) in a self-ensemble protection (SEP) manner, and SEP is further boosted by the FA loss (line 4) and VR optimization (lines 6–11). Overall, our method only requires 1 × N training epochs and T × (n + M) backward passes to update each sample.

4 EXPERIMENTS

4.1 SETUP

We evaluate SEP against 8 data protection baselines: adding random noise, TensorClog, which aims to cause gradient vanishing (Shen et al., 2019), Gradient Alignment to target-class gradients (Fowl et al., 2021a), DeepConfuse, which protects via an autoencoder (Feng et al., 2019), Unlearnable Examples (ULEs) using error-minimizing noise (Huang et al., 2020a), Robust ULEs (RULEs) using adversarial training (Fu et al., 2021), Adversarial Poison (AdvPoison) resorting to targeted attacks (Fowl et al., 2021b), and AutoRegressive (AR) Poison (Sandoval-Segura et al., 2022) using Markov-chain noise. Hyperparameters of the baselines are given in Appendix D. We use the results reproduced in (Fowl et al., 2021b) in Tables 1, 5, and 6.

For our method, we optimize class-y samples to have the mean feature of the target incorrect class $g(y)$, where $g(y) = (y + 5) \bmod 10$ for CIFAR-10 (Krizhevsky et al., 2009) and the protected ImageNet (Krizhevsky et al., 2017) classes, and $g(y) = (y + 50) \bmod 100$ for CIFAR-100. For the ImageNet subset, we train $f_a$ and $f_p$ on the first 100 classes of ImageNet-1K, but only protect samples in 10 significantly different classes: African chameleon, black grouse, electric ray, hammerhead, hen, house finch, king snake, ostrich, tailed frog, and wolf spider. This establishes the class-wise data protection setting, and the reported accuracy is calculated on the testing samples of these 10 classes. We train a ResNet18 for N = 120 epochs as $f_p$, following (Huang et al., 2020a; Fowl et al., 2021b). Unless otherwise stated, n = 15 equidistant intermediate checkpoints (epochs 8, 16, ..., 120) are adopted with M = 15 and T = 30. Experiments are conducted on an NVIDIA Tesla A100 GPU but can be run on GPUs with 4 GB+ memory, because we store checkpoints on disk. The data protection methods are assessed by training models of 5 architectures: ResNet18 (He et al., 2016), SENet18 (Hu et al., 2018), VGG16 (Simonyan & Zisserman, 2015), DenseNet121 (Huang et al., 2017), and GoogLeNet (Szegedy et al., 2015), implemented with the PyTorch vision package (Paszke et al., 2019).
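Returning to Algorithm 1, the variance-reduced inner loop (lines 5–11) can be sketched compactly in PyTorch, reusing fa_grad and project_linf from the sketches above. This is our reading of the algorithm, not the released code; extractors and class_means hold one entry per checkpoint.

```python
import torch

def sep_fa_vr_step(extractors, class_means, x_t, x_clean, target, eps, alpha, M):
    """One outer iteration of Algorithm 1: stochastic variance reduction over
    the self-ensemble of FA gradients, then a projected sample update."""
    n = len(extractors)
    g_k = [fa_grad([extractors[k]], [class_means[k]], x_t, target) for k in range(n)]
    g_ens = sum(g_k) / n                                   # line 5
    g_upd = torch.zeros_like(x_t)
    x_hat = x_t.clone()
    for _ in range(M):                                     # lines 6-10
        k = torch.randint(n, (1,)).item()                  # random checkpoint index
        g_now = fa_grad([extractors[k]], [class_means[k]], x_hat, target)
        g_upd = g_upd + g_now - (g_k[k] - g_ens)           # variance reduction
        x_hat = project_linf(x_hat - alpha * g_upd.sign(), x_clean, eps)  # virtual update
    return project_linf(x_t - alpha * g_upd.sign(), x_clean, eps)         # line 11
```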
We train appropriator DNNs $f_a$ for 120 epochs with an SGD optimizer with an initial learning rate of 0.1, which is divided by 10 at epochs 75 and 90. The training momentum is 0.9 and the weight decay is 5e-4. In this setting, DNNs trained on clean data attain high accuracy, i.e., 95% / 75% / 78% on CIFAR-10 / CIFAR-100 / the ImageNet subset. Training configurations are the same for $f_a$ and $f_p$. In the ablation study, we denote the pure self-ensemble (3) as SEP, SEP with feature alignment (4) as SEP-FA, and SEP-FA with variance reduction as SEP-FA-VR (Alg. 1). In other experiments, "ours" stands for our final method, i.e., SEP-FA-VR. We put the confusion matrix of $f_a$ in Appendix C for class-wise analysis.

4.2 UNCOVERING THE VULNERABILITY OF DNN TRAINING

We first investigate whether our examples successfully reveal the vulnerability of DNN training. If so, we should be able to hurt different training processes regardless of the model architecture. We therefore test the "cross-training transferability" of our protective data. The results are reported in Fig. 2 (a): SEP samples generated by ResNet18 training tend to be mispredicted when training DenseNet121 and VGG16, whereas AR samples behave like clean training data and are well recognized. This demonstrates that our method depicts the vulnerability of DNN training well compared to the recent baseline (Sandoval-Segura et al., 2022).

We also perform a class-wise study to illustrate how DNNs treat clean and protective examples. We first train a CIFAR-10 DNN with clean samples (classes 0–4) and protective ones (classes 5–9, carrying features of classes 0–4). The DNN performs well on classes 0–4 but misclassifies all clean samples in classes 5–9 as classes 0–4; see Fig. 2 (b). This indicates that, in the DNN's view, clean samples (from different classes) are much closer to each other than to protective ones (which look visually similar to clean ones). More extremely, if we inject features of class 0 into examples of classes 1–9 and train with them together with clean class-0 samples, the DNN classifies all testing samples as class 0; see Fig. 2 (c).

4.3 PROTECTIVE PERFORMANCE

By depicting the vulnerability of DNN training, our method achieves strong protective performance on 3 datasets and 5 architectures against various baselines. In Table 1, our method surpasses existing state-of-the-art baselines by a large margin, leading DNNs to < 5.7% / 3.2% / 0.6% accuracy on CIFAR-10 / CIFAR-100 / the ImageNet subset. A comparison with weaker baselines is shown in Table 5. This strong performance enables us to use an extremely small bound of ε = 2/255, even for high-resolution ImageNet samples. The perturbations are invisible even under meticulous inspection (see Appendix A), yet the appropriator reaches no more than 30% accuracy in most cases (see Table 2).

Adversarial training (AT) has been validated as the most effective strategy to recover accuracy when training on protective perturbations. This does not undermine the practicability of data protection methods, because AT itself significantly decreases clean accuracy and requires several-fold training computation. Nevertheless, it is interesting to study how different types of perturbations behave under different AT settings. Here we compare with AR Poison, which is claimed to resist AT. We set the perturbation bound to ℓ2 = 1 (step size α = 0.2) (Sandoval-Segura et al., 2022) and ℓ0 = 1 (Wu et al., 2023), where the latter means perturbing one pixel without other restrictions.
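The training recipe above is standard and can be written down in a few lines; a sketch follows, with torchvision's resnet18 standing in for the CIFAR-style ResNet18 used in the paper and the data-loading loop elided.

```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[75, 90], gamma=0.1)
for epoch in range(120):
    # ... one pass over the (clean or protected) training set ...
    scheduler.step()
```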
We keep the AT bound the same as the perturbation bound and find that, in this case, both ℓ∞ and ℓ2 AT can recover the accuracy under ℓ∞, ℓ2, and ℓ∞ + ℓ2 perturbations. The only type of perturbation able to resist ℓ∞ AT is the mixture of ℓ∞ and ℓ0 perturbations. Besides, our method is significantly better than AR Poison under normal training.

4.4 ABLATION STUDY

We study the performance improvements from SEP, FA, and VR separately, together with the best baseline AdvPoison (Fowl et al., 2021b); see Table 4. We first keep the overall computation the same for all experiments, and then vary the number of sub-models n to see its effect on our method. In Table 4, we keep our methods at a computation comparable to AdvPoison, which trains the protecting DNN for 40 epochs and crafts perturbations with 250 steps, 8 times (we modify it to 4). In SEP and SEP-FA, we train for N = 120 epochs and use n = 30 checkpoints to update samples T = 30 times, aligning the computation with AdvPoison. In SEP-FA-VR with M = 15 inner updates, we set the number of checkpoints to n = 15 so that the overall computation stays the same; this is also the default setting for all experiments, as in Sec. 4.1. We use ResNet18 as $f_p$ on CIFAR-10 here. For the conventional ensemble, 30 DNNs of 5 architectures are trained with 6 seeds.

As shown in Table 4, SEP halves the accuracy attained by AdvPoison within the same computation budget, indicating that the knowledge of multiple models matters much more than additional update steps. Compared to the conventional ensemble, which requires 30× the training computation, SEP performs only slightly worse. Moreover, equipped with FA, which consumes no additional calculation, the efficient SEP-FA is as effective as the conventional ensemble. With VR, SEP-FA-VR is stably better and reduces the accuracy from 45.10% to 17.47% on average.

We illustrate the training process of the Table 4 experiments in Fig. 3. Compared to training on clean data (purple line), data protection methods accelerate the model's convergence on the training data, but the DNN's testing accuracy suddenly drops at the initial stage of training. After the learning-rate decay at epoch 75, the protection performance of the different methods can be clearly observed. SEP accounts for the majority of the performance improvement, and FA and VR further decrease the accuracy, making our method finally outperform the conventional, inefficient ensemble. We also show the validation accuracy (on held-out protective examples) of different perturbations in Fig. 4, where it is evident that early stopping cannot be a good defense, because the validation accuracy mostly stays close to the training accuracy. However, a huge and unusual gap between them may signal the existence of protective examples.

We also vary the number of intermediate checkpoints n used in the self-ensemble to perform an ablation under different computation budgets. We set n = 3, 5, 10, 30, 120 without changing other hyperparameters and plot the results in Fig. 5. Similar trends appear, i.e., FA and VR stably contribute to the performance. We also find that although n increases exponentially, the resulting performance gain diminishes for large n. Most prominently, raising n from 30 to 120 does not necessarily yield better results, meaning that the performance saturates around n = 30 and it is unnecessary to use all checkpoints.
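As a quick back-of-the-envelope check of the matched budgets described above (per-sample backward passes used to craft perturbations; the training epochs of $f_p$ are compared separately), under our reading of the reported settings:

```python
# Backward passes per sample to craft perturbations (Table 4 budget check).
advpoison = 250 * 4         # 250 attack steps, repeated 4 times -> 1000
sep       = 30 * 30         # T = 30 outer steps x n = 30 checkpoints -> 900
sep_fa_vr = 30 * (15 + 15)  # T x (n + M) with n = M = 15 -> 900
print(advpoison, sep, sep_fa_vr)
```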
5 DISCUSSION AND CONCLUSION

In this paper, we propose that data protection should target the vulnerability of DNN training, which we depict as the examples never classified correctly during training. Such examples can be easily computed from model checkpoints, which are found to have surprisingly diverse gradients. Through self-ensemble, effective ensemble performance is achieved with the computation of training one model, and we also exploit the diverse features of the checkpoints to further boost performance via the novel feature alignment loss. Our method exceeds current baselines significantly, reducing the appropriator model's accuracy from 45.10% (best-known result) to 17.47% on average.

Our method could also serve as a benchmark for evaluating a DNN's learning process, e.g., how to prevent DNNs from learning non-robust features (shortcuts) instead of semantic ones. It would also be interesting to study the poisoning task in self-supervised learning and Stable Diffusion. Since our method is implemented as a targeted ensemble attack, it is also applicable to non-classification tasks, for which adversarial attacks have been developed and neural collapse has been observed in pre-trained feature extractors.

ACKNOWLEDGEMENT

This work is partly supported by the National Natural Science Foundation of China (61977046), Shanghai Science and Technology Program (22511105600), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and National Science Foundation (CCF-1937500). The authors are grateful to Prof. Sijia Liu for his valuable discussions.
1. What is the focus of the paper regarding image perturbation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its contribution to existing works?
3. Do you have any concerns or suggestions regarding the method's effectiveness against adversarial training or its limitations in terms of L2 perturbation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper is about perturbing images such that it is not possible to use them for training a model that has good performance. The authors propose to build upon existing works that use adversarial perturbations to corrupt the training set. As contributions, this work proposes to ensemble the perturbation gradients coming from different model snapshots of the same training deep network. The perturbation gradients come from a feature alignment loss, such that perturbed samples, when passed to the DNN, result in feature maps that are close to the mean feature of other classes.

Strengths And Weaknesses
Strengths:
- The paper is clear and very well written.
- Lots of experiments to support the paper's claims, with interesting ablation studies.

Weaknesses and suggestions:
- Adversarial training easily circumvents the data corruption. The proposed method suffers from the same drawback as other existing methods.
- L2 perturbations (not only L-inf) could complete the study. Maybe mixed perturbations could be helpful against adversarial training.
- What about an exponential moving average as a comparison to the ensemble of snapshots? It would be a way to avoid storing all these snapshots.
- Typos in the abstract and elsewhere for the perturbation bounds: l_inf = 8 is written instead of l_inf = 8/255, for example.

Clarity, Quality, Novelty And Reproducibility
The novelty lies in using a feature alignment loss and an ensemble of snapshots instead of a single DNN to attack the images. The paper is well written and clear.
ICLR
Title
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors

Abstract
As data becomes increasingly vital, a company would be very cautious about releasing data, because competitors could use it to train high-performance models, thereby posing a tremendous threat to the company's commercial competence. To prevent good models from being trained on the data, we can add imperceptible perturbations to it. Since such perturbations aim at hurting the entire training process, they should reflect the vulnerability of DNN training rather than that of a single model. Based on this new idea, we seek perturbed examples that are always unrecognized (never correctly classified) during training. In this paper, we uncover them with the gradients of model checkpoints, forming the proposed self-ensemble protection (SEP), which is very effective because (1) learning on examples ignored during normal training tends to yield DNNs that ignore normal examples, and (2) checkpoints' cross-model gradients are close to orthogonal, meaning that the checkpoints are as diverse as DNNs of different architectures. That is, our strong ensemble performance only requires the computation of training one model. In extensive experiments with 9 baselines on 3 datasets and 5 architectures, SEP is verified to be a new state of the art; e.g., our small $\ell_\infty = 2/255$ perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% by the best-known method. Code is available at https://github.com/Sizhe-Chen/SEP.

1 INTRODUCTION

Large-scale datasets have become increasingly important in training high-performance deep neural networks (DNNs). Thus, it is common practice to collect data online (Mahajan et al., 2018; Sun et al., 2017), an almost unlimited data source. This poses a great threat to the commercial competence of data owners such as social media companies, since competitors could also train good DNNs from their data. Therefore, great efforts have been devoted to protecting data from unauthorized use in model training. The most typical way is to add imperceptible perturbations to the data, so that DNNs trained on it have poor generalization (Huang et al., 2020a; Fowl et al., 2021b). Existing data protection methods use a single DNN to generate incorrect but DNN-sensitive features (Huang et al., 2020a; Fu et al., 2021; Fowl et al., 2021b) for training data by, e.g., adversarial attacks (Goodfellow et al., 2015). However, the data protectors cannot know what DNN and what training strategies the unauthorized users will adopt. Thus, the protective examples should aim at hurting DNN training, a whole dynamic process, instead of a static DNN. It is therefore interesting to study the vulnerability of DNN training.

Recall that the vulnerability of a DNN is revealed by adversarial examples, which are similar to clean ones but unrecognized by the model (Madry et al., 2018). Similarly, we depict the vulnerability of training by the perturbed training samples that are never predicted correctly during training: learning on examples ignored during normal training tends to yield DNNs that ignore normal examples. Such examples can be easily uncovered by the gradients from an ensemble of model training checkpoints. However, ensemble methods have never been explored in data protection to the best of our knowledge, so it is natural to wonder: can we use these intermediate checkpoint models for data protection in a self-ensemble manner?

* Correspondence to Xiaolin Huang (xiaolinhuang@sjtu.edu.cn).
An effective ensemble demands a high diversity of sub-models, which is generally quantified by their gradient similarity (Pang et al., 2019; Yang et al., 2021), i.e., the gradients on the same image from different sub-models should be orthogonal. Surprisingly, we find that checkpoints' gradients are as orthogonal as those of DNNs with different architectures in the conventional ensemble. In this regard, we argue that intermediate checkpoints are diverse enough to form the proposed self-ensemble protection (SEP), challenging existing beliefs about their similarity (Li et al., 2022). With SEP, effective ensemble protection is achieved at the computation of training only one DNN. Since the scale of data worth protecting is mostly very large, SEP avoids the tremendous cost of training multiple models. Therefore, our study enables a practical ensemble for large-scale data, which may help improve generalization, increase attack transferability, and study DNN training dynamics.

Multiple checkpoints offer us a pool of good features for an input. Thus, we can additionally take advantage of diverse features, besides diverse gradients, at no cost. Inspired by the neural collapse theory (Papyan et al., 2020), which demonstrates that the mean feature of samples in a class is a highly representative depiction of that class, we introduce a novel feature alignment (FA) loss that induces a sample's last-layer feature to collapse onto the mean of incorrect-class features. With features from multiple checkpoints, FA robustly injects incorrect features so that DNNs are deeply confounded.

Equipping SEP with FA, our method achieves strong performance by revealing the vulnerability of DNN training: (1) our examples are mostly misclassified in any training process, in contrast to a recent method (Sandoval-Segura et al., 2022), and (2) clean samples are always much closer to each other than to protected samples, indicating that the latter belong to another distribution that is not noticed by normal training. With a very small bound of $\ell_\infty = 2/255$, SEP perturbations on the CIFAR-10 training set reduce the testing accuracy of a ResNet18 from 94.56% to 14.68%, while the best-known result only reaches 41.35% with the same amount of overall computation to craft the perturbations. The superiority of our method is also observed on CIFAR-100 and the ImageNet subset with 5 architectures. We also study perturbations under different norms and find that mixing $\ell_\infty$ and $\ell_0$ perturbations (Wu et al., 2023) is the only effective way to resist $\ell_\infty$ adversarial training, which can otherwise recover the accuracy for all other types of perturbations.

Our contributions can be summarized as follows.
• We propose that protective perturbations should reveal the vulnerability of the DNN training process, which we depict by the examples never classified correctly during training.
• We uncover such examples via the self-ensemble of model checkpoints, which are found to be surprisingly diverse as data protectors.
• Our method is very effective even using the computation of training one DNN. Equipped with a novel feature alignment loss, our $\ell_\infty = 8/255$ perturbations lead DNNs to < 5.7% / 3.2% / 0.6% accuracy on CIFAR-10 / CIFAR-100 / the ImageNet subset.

2 RELATED WORK

Small perturbations are known to be able to fool DNNs into incorrect predictions (Szegedy et al., 2014).
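The gradient-orthogonality claim can be checked with a few lines. Below is a sketch of the diversity metric, assuming two loaded checkpoints of the same architecture; a lower average absolute cosine similarity means more diverse sub-models.

```python
import torch
import torch.nn.functional as F

def grad_cosine(model_a, model_b, x, y):
    """Average absolute cosine similarity between the two models'
    per-image cross-entropy gradients (the diversity metric in Fig. 1)."""
    def per_image_grads(model):
        x_req = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_req), y)
        return torch.autograd.grad(loss, x_req)[0].flatten(start_dim=1)
    g_a, g_b = per_image_grads(model_a), per_image_grads(model_b)
    return F.cosine_similarity(g_a, g_b, dim=1).abs().mean().item()

# Toy usage with two random "checkpoints" of the same toy architecture.
make = lambda: torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
print(grad_cosine(make(), make(), torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))))
```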
1. What is the focus and contribution of the paper regarding data protection?
2. What are the strengths of the proposed ensemble method, particularly in its empirical success?
3. What are the weaknesses of the paper, especially regarding the comparison with other baselines and defenses?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a simple new ensemble method for protecting data from someone training on it. The work shows the empirical success of the method on multiple datasets, compared to strong baselines like adversarial poisons.

Strengths And Weaknesses
The main strength of the paper is its empirical success compared to strong baselines and defenses.
- In Figure 3, the peak test accuracy of the proposed method is just as high as for adversarial poisons. Is there a way to reduce this peak? Otherwise, early stopping might be a highly effective defense.
- It might be worth considering the AR poisons of "Autoregressive Perturbations for Data Poisoning", as they are a recent, stronger baseline than adversarial poisons.
- How did you tune the attack hyperparameters (e.g., step size) of adversarial poisons, since you are using a smaller perturbation radius than they used in the experiments in their paper?
- In general, the writing is a bit messy and hard to follow, but the overall structure and content are good, and I was still able to understand the writing nonetheless.

Clarity, Quality, Novelty And Reproducibility
The paper is clear enough, although the writing is hard to follow at times. I did not notice some of the hyperparameters that are needed to reproduce the experiments, but maybe they were already there and I just did not see them. Crucially, the hyperparameters for the competitor are important, since this work does not use the same constraint space as the adversarial poisons paper.
ICLR
Title
Dual Contradistinctive Generative Autoencoder

Abstract
We present a new generative autoencoder model with dual contradistinctive losses to improve generative autoencoders that perform simultaneous inference (reconstruction) and synthesis (generation). Our model, named dual contradistinctive generative autoencoder (DC-VAE), integrates an instance-level discriminative loss (maintaining the instance-level fidelity of the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity of the reconstruction/synthesis), both being contradistinctive. There also exists a mathematical connection between instance-based classification and the instance-level conditional distribution. DC-VAE achieves competitive results in three tasks: image synthesis, image reconstruction, and representation learning. DC-VAE is applicable to various tasks in computer vision and machine learning.

[Figure 1: DC-VAE reconstruction (top) and sampling (bottom) on LSUN Bedroom (Yu et al., 2015) at resolution 128 × 128 (left) and CelebA-HQ (Karras et al., 2018) at resolution 512 × 512 (right).]

1 INTRODUCTION

Tremendous progress has been made in deep learning through the development of various learning frameworks (Krizhevsky et al., 2012; He et al., 2016; Goodfellow et al., 2014; Vaswani et al., 2017). The autoencoder (AE) (LeCun, 1987; Hinton & Zemel, 1994) aims to compactly represent and faithfully reproduce the original input signal by concatenating an encoder and a decoder in an end-to-end learning framework. The goal of an AE is to make the encoded representation semantically efficient and sufficient for its decoder to reproduce the input signal. The autoencoder's generative companion, the variational autoencoder (VAE) (Kingma & Welling, 2014), additionally learns a variational model over the latent variables to capture the underlying sample distribution. For the encoder and decoder models separately, tremendous progress has been made in image classification with deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016) (an encoder) and in image generation with generative adversarial networks (GANs) (Goodfellow et al., 2014) (a decoder).

The key objective for a generative autoencoder is to maintain two types of fidelity: (1) an instance-level fidelity that makes the reconstruction/synthesis faithful to the individual input data sample, and (2) a set-level fidelity that makes the reconstruction/synthesis of the decoder faithful to the entire input data set. The VAE/GAN algorithm (Larsen et al., 2016) combines a reconstruction loss with an adversarial loss. However, the results of VAE/GAN are sub-optimal, as shown in Table 1. The pixel-wise reconstruction loss in the standard VAE (Kingma & Welling, 2014) typically results in blurry images with degenerated semantics. A possible solution to this conflict lies in two steps: (1) turning the measure in pixel space into one in an induced feature space that is more semantically meaningful, and (2) changing the per-pixel L2 distance into a learned instance-level distance function over the entire image (akin to generative adversarial networks, which learn set-level distance functions). Taking these two steps allows us to design an instance-level classification loss that is aligned with the adversarial loss in the GAN model enforcing set-level fidelity.
Motivated by the above observations, we develop a new generative autoencoder model with dual contradistinctive losses by adopting a discriminative loss performing instance-level classification (enforcing the instance-level fidelity), which is rooted in metric learning (Kulis et al., 2012) and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; van den Oord et al., 2018). Combined with the adversarial loss for the set-level fidelity, both terms are formulated in an induced feature space and perform contradistinction: (1) the instance-level contrastive loss treats each input instance (image) as its own class, and (2) the set-level adversarial loss treats the entire input set as the positive class. We name our method dual contradistinctive generative autoencoder (DC-VAE) and make the following contributions.
• We develop a new algorithm, dual contradistinctive generative autoencoder (DC-VAE), by combining instance-level and set-level classification losses in the VAE framework, and systematically show the significance of these two loss terms in DC-VAE.
• The effectiveness of DC-VAE is illustrated in three tasks: image synthesis, image reconstruction, and representation learning.

2 RELATED WORK

Related work can be roughly divided into three categories: (1) generative autoencoders, (2) deep generative models, and (3) contrastive learning. The variational autoencoder (VAE) (Kingma & Welling, 2014) points to an exciting direction for generative models by developing an Evidence Lower BOund (ELBO) objective (Higgins et al., 2017; Ding et al., 2020). However, VAE reconstructions/syntheses are known to be blurry. To improve image quality, a sequence of VAE-based models have been developed (Larsen et al., 2016; Dumoulin et al., 2017; Huang et al., 2018; Brock et al., 2018; Zhang et al., 2019). VAE/GAN (Larsen et al., 2016) adopts an adversarial loss to improve image quality, but its output for both reconstruction and synthesis (new samples) remains unsatisfactory. IntroVAE (Huang et al., 2018) adds a loop from the output back to the input and attains image quality on par with some modern GANs in some aspects, but its full capability for both reconstruction and synthesis is unclear. PGA (Zhang et al., 2019) adds a constraint to the latent variables.

Pioneering works (Tu, 2007; Gutmann & Hyvärinen, 2012) alleviate the difficulty of learning densities by approximating likelihoods via classification (real (positive) samples vs. fake (pseudo-negative or adversarial) samples). The generative adversarial network (GAN) (Goodfellow et al., 2014) builds on neural networks and amortized sampling (a decoder network that maps noise to an image). Subsequent developments after GAN (Radford et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017; Karras et al., 2018; Gong et al., 2019; Dumoulin et al., 2017; Donahue et al., 2017) have led to a great leap forward in building decoder-based generative models. It has been widely observed that the adversarial loss in GANs contributes significantly to the improved quality of image synthesis. Energy-based generative models (Salakhutdinov & Hinton, 2009; Xie et al., 2016; Jin et al., 2017; Lee et al., 2018), which aim to model the data density directly, are making steady progress toward a single model that is simultaneously generative and discriminative.
From another angle, contrastive learning (Hadsell et al., 2006; Wu et al., 2018; He et al., 2020; Chen et al., 2020) has lately shown a particular advantage in the unsupervised training of CNN features. It overcomes the absence of class labels in unsupervised learning by turning each image instance into its own class, so the softmax function of standard discriminative classification training can be applied. Contrastive learning can be connected to metric learning (Bromley et al., 1993; Chopra et al., 2005; Chechik et al., 2010). In this paper, we aim to improve the VAE (Kingma & Welling, 2014) by introducing a contrastive loss (van den Oord et al., 2018) to address instance-level fidelity between the input and the reconstruction in an induced feature space. Unlike in self-supervised representation learning methods (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020), where self-supervision requires generating a transformed input (via data augmentation operations), the reconstruction naturally fits into the contrastive term: it encourages the match between the reconstruction and the input image instance while pushing the reconstruction away from the rest of the images in the entire training set. Thus, the instance-level and set-level contradistinctive terms cooperate to encourage high fidelity of the reconstruction and synthesis. In Figure 3, we systematically show the significance of including or excluding the instance-level and set-level contradistinctive terms. In addition, we explore multi-scale contrastive learning via two schemes in Section 4.1: (1) deep supervision for contrastive learning in different convolution layers, and (2) patch-based contrastive learning for fine-grained data fidelity. In the experiments, we show competitive results for the proposed dual contradistinctive generative autoencoder (DC-VAE) on a number of benchmarks across three tasks: image synthesis, image reconstruction, and representation learning.

3 PRELIMINARIES: VAE AND VAE/GAN

Variational autoencoder (VAE). Assume a given training set $S = \{x_i\}_{i=1}^n$ where each $x_i \in \mathbb{R}^m$. We suppose that each $x_i$ is sampled from a generative process $p(x|z)$. In the literature, the vector $z$ refers to the latent variables. In practice, the latent variables $z$ and the generative process $p(x|z)$ are unknown. The objective of a variational autoencoder (VAE) (Kingma & Welling, 2014) is to simultaneously train an inference network $q_\phi(z|x)$ and a generator network $p_\theta(x|z)$. In VAE (Kingma & Welling, 2014), the inference network is a neural network that outputs the parameters of a Gaussian distribution $q_\phi(z|x) = \mathcal{N}(\mu_\phi(x), \Sigma_\phi(x))$. The generator is a deterministic neural network $f_\theta(z)$ parameterized by $\theta$, and the generative density $p_\theta(x|z)$ is assumed to be Gaussian: $p_\theta(x|z) = \mathcal{N}(f_\theta(z), \sigma^2 I)$. These models can be trained by minimizing the negative of the evidence lower bound (ELBO) in Eq. (1) below:

$$\mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x) = -\mathbb{E}_{z \sim q_\phi(z|x)}[\log p_\theta(x|z)] + \mathrm{KL}[q_\phi(z|x) \,\|\, p(z)], \tag{1}$$

where $p(z)$ is the prior, assumed to be $\mathcal{N}(0, I)$. The first term $-\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$ reduces (up to a constant) to the standard pixel-wise reconstruction loss $\mathbb{E}_{q_\phi(z|x)}[\|x - f_\theta(z)\|_2^2]$ due to the Gaussian assumption. The second term is a regularizer, which prevents the conditional $q_\phi(z|x)$ from deviating from the Gaussian prior $\mathcal{N}(0, I)$. The inference and generator networks are jointly optimized over training samples by

$$\min_{\theta, \phi} \; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)} \, \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x), \tag{2}$$

where $p_{\mathrm{data}}$ is the distribution induced by the training set $S$.
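For reference, a minimal sketch of the negative ELBO of Eq. (1) under the stated Gaussian assumptions is given below; the encoder (returning the mean and log-variance) and decoder are placeholder modules, and the reparameterization trick is used for the expectation.

```python
import torch
import torch.nn.functional as F

def elbo_loss(encoder, decoder, x):
    """Negative ELBO of Eq. (1): Gaussian pixel-wise reconstruction term
    plus KL(q_phi(z|x) || N(0, I)), with reparameterized sampling."""
    mu, logvar = encoder(x)                            # parameters of q_phi(z|x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    rec = F.mse_loss(decoder(z), x, reduction="sum")   # -log p_theta(x|z) + const
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Toy usage: a linear encoder/decoder on flattened inputs, latent dim 20.
enc, dec = torch.nn.Linear(784, 40), torch.nn.Linear(20, 784)
loss = elbo_loss(lambda t: enc(t).chunk(2, dim=1), dec, torch.rand(8, 784))
```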
VAE has an elegant formulation. However, it relies on a pixel-wise reconstruction loss, which is known to be a poor proxy for perceptual realism (Johnson et al., 2016; Isola et al., 2017), often resulting in blurry images. From another viewpoint, it can be thought of as using a kernel density estimator (with an isotropic Gaussian kernel) in the pixel space. Although this allows efficient training and inference, such a non-parametric approach is overly simplistic for modeling the semantics and perception of natural images.

VAE/GAN Generative adversarial networks (GANs) (Goodfellow et al., 2014) and their variants (Radford et al., 2016), on the other hand, have been shown to produce highly realistic images. The success is largely attributed to learning a fidelity function (often referred to as a discriminator) that measures how realistic the generated images are. This can be achieved by learning to contrast (classify) the set of training images with the set of generated images (Tu, 2007; Gutmann & Hyvärinen, 2012; Goodfellow et al., 2014). VAE/GAN (Larsen et al., 2016) augments the ELBO objective (Eq. (2)) with the GAN objective. Specifically, the objective of VAE/GAN consists of two terms, namely the modified ELBO (Eq. (3)) and the GAN objective. To keep the later notation consistent, we now define the set of given training images as S = {x_i}_{i=1}^n, in which a total of n unlabeled training images are present. For each input image x_i, the modified ELBO computes the reconstruction loss in the feature space of the discriminator instead of the pixel space:

$$\mathcal{L}_{\mathrm{ELBO}}(\theta,\phi,D;x_i) = \mathbb{E}_{z\sim q_\phi(z|x_i)}\left[\|F_D(x_i) - F_D(f_\theta(z))\|_2^2\right] + \mathrm{KL}\left[q_\phi(z|x_i)\,\|\,p(z)\right] \tag{3}$$

where F_D(·) denotes the feature embedding from the discriminator D. This feature reconstruction loss (also referred to as a perceptual loss) is similar to the one used in style transfer (Johnson et al., 2016). The modified GAN objective considers both reconstructed images (latent code from q_φ(z|x)) and sampled images (latent code from the prior p(z)) as its fake samples:

$$\mathcal{L}_{\mathrm{GAN}}(\theta,\phi,D;x_i) = \log D(x_i) + \mathbb{E}_{z\sim q_\phi(z|x_i)}\log(1-D(f_\theta(z))) + \mathbb{E}_{z\sim p(z)}\log(1-D(f_\theta(z))). \tag{4}$$

The VAE/GAN objective becomes:

$$\min_{\theta,\phi}\max_{D}\ \sum_{i=1}^{n}\left[\mathcal{L}_{\mathrm{ELBO}}(\theta,\phi,D;x_i) + \mathcal{L}_{\mathrm{GAN}}(\theta,\phi,D;x_i)\right]. \tag{5}$$

4 DUAL CONTRADISTINCTIVE GENERATIVE AUTOENCODER (DC-VAE)

Here we want to address a question: Is the degeneration of the images synthesized by a VAE inevitable once the decoder is joined with an encoder? Can the problem be remedied by using a more informative loss? Although VAE/GAN improves the image quality of VAE by integrating a set-level contrastive loss (the GAN objective of Eq. (4)), it still does not accurately model instance-level fidelity. Inspired by the literature on instance-level classification (Malisiewicz et al., 2011), approximating likelihood by classification (Tu, 2007), and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; He et al., 2020), we propose to model instance-level fidelity by a contrastive loss (commonly referred to as the InfoNCE loss) (van den Oord et al., 2018). In DC-VAE, we minimize the following objective and loosely call each term a loss:

$$\mathcal{L}_{\mathrm{instance}}(\theta,\phi,D;i,\{x_j\}_{j=1}^{n}) \triangleq -\mathbb{E}_{z\sim q_\phi(z|x_i)}\left[\log\frac{e^{h(x_i,f_\theta(z))}}{\sum_{j=1}^{n} e^{h(x_j,f_\theta(z))}}\right], \tag{6}$$

where i is the index of a training sample (instance), {x_j}_{j=1}^n is the union of positive samples and negative samples, and h(x, y) is the critic function that measures the compatibility between x and y. Following the popular choice from (He et al., 2020), h(x, y) is the cosine similarity between the embeddings of x and y:

$$h(x,y) = \frac{F_D(x)^{\top}F_D(y)}{\|F_D(x)\|_2\,\|F_D(y)\|_2}.$$
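As a sketch of Eq. (6) with the cosine critic above, the following PyTorch-style snippet computes the instance loss for one reconstruction against a batch of candidates. The `embed` argument stands in for the discriminator features F_D, and using the rest of the batch as the negative set is an illustrative simplification of the paper's 8096-sample negative set.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(x_batch, recon, i, embed):
    """InfoNCE loss of Eq. (6): x_batch holds the candidates {x_j},
    recon is the reconstruction f_theta(z) of x_i, and i indexes
    the positive sample. embed maps images to flat F_D features."""
    # Cosine-similarity critic h via L2-normalized embeddings.
    feats = F.normalize(embed(x_batch), dim=1)          # (n, d)
    q = F.normalize(embed(recon.unsqueeze(0)), dim=1)   # (1, d)
    logits = feats @ q.squeeze(0)                       # (n,): h(x_j, f_theta(z))
    # -log softmax_i(logits) is exactly cross-entropy with target i.
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([i]))
```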
Note that, unlike in contrastive self-supervised learning methods (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020), where two views (independent augmentations) of an instance constitute a positive pair, in DC-VAE an input instance x_i and its reconstruction f_θ(z) comprise a positive pair. Likewise, the reconstruction f_θ(z) and any instance that is not x_i can be a negative pair. To bridge the gap between the instance-level contrastive loss (Eq. (6)) and the log-likelihood in the ELBO term (Eq. (1)), we show the following observation.

Theorem 1 (From (Ma & Collins, 2018; Poole et al., 2019)) The following objective is minimized, i.e., the optimal critic h is achieved, when h(f_θ(z), x) = log p(x|z) + c(x), where c(x) is any function that does not depend on z:

$$I_{\mathrm{NCE}} \triangleq \mathbb{E}_{x_1,\cdots,x_K}\,\mathbb{E}_{i}\left[\mathcal{L}_{\mathrm{instance}}(\theta,\phi,D;i,\{x_j\}_{j=1}^{n})\right]. \tag{7}$$

From Theorem 1, we see that the contrastive loss of Eq. (6) implicitly estimates the log-likelihood log p_θ(x|z) required by the evidence lower bound (ELBO). Hence, we modify the ELBO objective of Eq. (1) as follows and name it the implicit ELBO (IELBO):

$$\mathcal{L}_{\mathrm{IELBO}}(\theta,\phi,D;x_i) = \mathcal{L}_{\mathrm{instance}}(\theta,\phi,D;i,\{x_j\}_{j=1}^{n}) + \mathrm{KL}\left[q_\phi(z|x_i)\,\|\,p(z)\right]. \tag{8}$$

Finally, the combined objective for the proposed DC-VAE algorithm becomes:

$$\min_{\theta,\phi}\max_{D}\ \sum_{i=1}^{n}\left[\mathcal{L}_{\mathrm{IELBO}}(\theta,\phi,D;x_i) + \mathcal{L}_{\mathrm{GAN}}(\theta,\phi,D;x_i)\right]. \tag{9}$$

The definition of L_GAN follows Eq. (4). Note that we also consider the term in Eq. (4) contradistinctive, since it discriminatively classifies the input ("real") image set against the reconstructed/generated ("fake") image set. Below we highlight the significance of the two contradistinctive terms. Figure 2 shows the model architecture.

• Instance-level fidelity. The first term in Eq. (8) is an instance-level fidelity term encouraging the reconstruction to be as close as possible to the input image while being different from all the rest of the images. A key advantage of the contrastive loss in Eq. (8) over the standard reconstruction loss in Eq. (3) is its relaxed, background-instance-aware formulation. The reconstruction loss in Eq. (3) demands a perfect match between the reconstruction and the input, whereas the contrastive loss in Eq. (8) only asks the reconstruction to be the most similar one among the training samples. This way, the contrastive loss is more cooperative with, and in less conflict with, the GAN loss than the reconstruction loss is. Introducing the contrastive loss results in a significant improvement over VAE and VAE/GAN, in which only a match between the reconstruction and the input instance is enforced.

• Set-level fidelity. The second term in Eq. (9) is a set-level fidelity term encouraging the entire set of synthesized images to be indistinguishable from the input image set. Having this term (Eq. (4)) is still important, since the instance contrastive loss alone (Eq. (6)) would also lead to a degenerate situation: the input image and its reconstruction can be projected to the same point in the new feature space, but without a guarantee that the reconstruction itself lies on the valid "real" image manifold. Figure 3 and Table 1 compare results with and without the individual terms in Eq. (9). We observe the evident effectiveness of the proposed DC-VAE, which combines both the instance-level fidelity term (Eq. (6)) and the set-level fidelity term (Eq.
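As a rough sketch of one generator-side step for Eqs. (8)–(9), reusing `instance_contrastive_loss` from the earlier sketch: the alternating update scheme, the non-saturating substitute for the generator's log(1 − D(·)) terms in Eq. (4), and a sigmoid-output discriminator are all our assumptions; the discriminator's maximization step is omitted.

```python
import torch

def dc_vae_generator_loss(x_batch, i, encoder, decoder, disc, embed):
    # z ~ q_phi(z|x_i) via the reparameterization trick.
    mu, logvar = encoder(x_batch[i:i + 1])
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    recon = decoder(z)
    # Instance-level fidelity: InfoNCE of Eq. (6).
    l_instance = instance_contrastive_loss(x_batch, recon.squeeze(0), i, embed)
    # KL regularizer completes the implicit ELBO of Eq. (8).
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum()
    # Set-level fidelity: non-saturating stand-in for the generator's part
    # of Eq. (4), applied to the reconstruction and a prior sample.
    z_prior = torch.randn_like(z)
    l_gan = -(torch.log(disc(recon) + 1e-8).mean()
              + torch.log(disc(decoder(z_prior)) + 1e-8).mean())
    return l_instance + kl + l_gan
```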
(4)), compared with VAE (using a pixel-wise reconstruction loss without the GAN objective), VAE/GAN (using a feature reconstruction loss and the GAN objective), and VAE-Contrastive (using the contrastive loss but without the GAN objective). In the experiments, we show that both terms are required to achieve faithful reconstruction (captured by the InfoNCE loss) with perceptual realism (captured by the GAN loss).

4.1 MULTI-SCALE CONTRASTIVE LEARNING

Inspired by (Lee et al., 2015), we utilize information from feature maps at different scales. In addition to contrasting on the last layer of D in Equation 9, we add a contrastive objective on f_l(z), where f_l is some function on top of an intermediate layer l of D. We do this in two different ways (see the sketch after the ablation discussion below). 1. Deep supervision: we use a 1×1 convolution to reduce the dimension channel-wise, and use a linear layer to obtain f_l. 2. Local patch: we use a random location across channels at layer l (size: 1×1×d, where d is the channel depth). The intuition for the second is that, in a convolutional neural network, one location in a feature map corresponds to a receptive area (patch) in the original image. Thus, by contrasting locations across channels in the same feature maps, we encourage the original image and the reconstruction to have locally similar content, while encouraging the reconstruction to have locally dissimilar content from other images. We use deep supervision for initial training, and add the local patch scheme after a certain number of iterations.

5 EXPERIMENTS

5.1 IMPLEMENTATION

Datasets To validate our method, we train our model on several different datasets: CIFAR-10 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), CelebA (Liu et al., 2015), CelebA-HQ (Karras et al., 2018), and LSUN bedroom (Yu et al., 2015). See the appendix for more detailed descriptions.

Network architecture For 32×32 resolution, we design the encoder and decoder subnetworks of our model in a similar way to the discriminator and generator found through neural architecture search in AutoGAN (Gong et al., 2019). For the higher resolution experiments (128×128 and 512×512 resolution), we use Progressive GAN (Karras et al., 2018) as the backbone. A network architecture diagram is available in the appendix.

Training details The number of negative samples for contrastive learning is 8096 for all datasets. The latent dimension for the VAE decoder is 128 for CIFAR-10 and STL-10, and 512 for CelebA, CelebA-HQ, and LSUN Bedroom. The learning rate is 0.0002 with Adam parameters (β1, β2) = (0.0, 0.9) and a batch size of 128 for CIFAR-10 and STL-10. For the CelebA, CelebA-HQ, and LSUN Bedroom datasets, we use the optimizer parameters given in (Karras et al., 2018). The contrastive embedding dimension is 16 for each of the experiments.

5.2 ABLATION STUDY

To demonstrate the necessity of the GAN loss (Eq. 4) and the contrastive loss (Eq. 8), we conduct four experiments with the same backbone. These experiments are: VAE (no GAN, no Contrastive), VAE/GAN (with GAN, no Contrastive), VAE-Contrastive (no GAN, with Contrastive), and ours (with GAN, with Contrastive). Here, GAN denotes Eq. 4, and Contrastive denotes Eq. 8.

Qualitative analysis From Figure 3, we see that without GAN and Contrastive, images are blurry. Without GAN, the contrastive head can classify images, but the reconstructions do not lie on the image manifold. Without Contrastive, reconstructed images are on the image manifold because of the discriminator, but they differ from the input images.
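Returning to the two multi-scale schemes of Section 4.1, they could be implemented along the following lines. This is a sketch under stated assumptions: the reduced channel count (8) and the lazy linear head are our illustrative choices; the paper specifies only the 1×1 convolution, a linear layer, a 16-dimensional contrastive embedding, and random 1×1×d patch locations.

```python
import torch
import torch.nn as nn

class DeepSupervisionHead(nn.Module):
    """Scheme 1: 1x1 conv reduces channels, a linear layer yields f_l."""
    def __init__(self, in_ch, emb_dim=16):        # 16 = contrastive embedding dim
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, 8, kernel_size=1)  # 8 is illustrative
        self.proj = nn.LazyLinear(emb_dim)        # lazy: spatial size depends on l

    def forward(self, feat_l):                    # feat_l: (n, in_ch, h, w)
        return self.proj(self.reduce(feat_l).flatten(1))  # (n, emb_dim)

def local_patch_embedding(feat_l, loc=None):
    """Scheme 2: one spatial location across channels (a 1x1xd vector).
    Contrasting the same location in the input's and the reconstruction's
    feature maps encourages locally similar content between the two."""
    n, d, h, w = feat_l.shape
    if loc is None:  # share one random location across the batch
        loc = (torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item())
    return feat_l[:, :, loc[0], loc[1]]           # (n, d)
```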
These experiments show that it is necessary to combine both instance-level and set-level fidelity, and in a contradistinctive manner.

Quantitative analysis In Table 1 we observe the same trend. VAE generates blurry images; thus its FID/IS (Inception Score) is not ideal. VAE-Contrastive does not generate images on the natural manifold; thus its FID/IS is poor. VAE/GAN combines set-level and instance-level information; however, the L2 objective is not ideal, so its FID/IS is sub-optimal. For both the reconstruction and sampling tasks, DC-VAE generates high-fidelity images and has favorable FID and Inception scores. This illustrates the advantage of having a contradistinctive objective at both the set level and the instance level. To measure the faithfulness of the reconstructed images, we compute the pixel-wise L2 distance and the perceptual distance (Johnson et al., 2016). For the pixel distance, VAE has the lowest value because it directly optimizes this distance during training; our pixel-wise distance is better than that of VAE/GAN and VAE-Contrastive. For the perceptual distance, our method outperforms the other three, which confirms that using contrastive learning helps reconstruct images semantically.

5.3 COMPARISON TO EXISTING GENERATIVE MODELS

Table 2 gives a comparison of quantitative measurements on the CIFAR-10 and STL-10 datasets. In general, there is a large difference in FID and IS between the GAN family and the VAE family of models. Our model achieves state-of-the-art results in the VAE family and is comparable to state-of-the-art GAN models on CIFAR-10. Similarly, Table 4 shows that DC-VAE is able to generate images that are comparable to GAN-based methods even on higher resolution datasets.

5.4 LATENT SPACE REPRESENTATION: IMAGE AND STYLE INTERPOLATION

We further validate the effectiveness of DC-VAE for representation learning. One benefit of having an AE/VAE framework, compared with just a decoder as in GAN (Goodfellow et al., 2014), is the ability to directly obtain the latent representation from the input images. The encoder and decoder modules in VAE allow us to readily perform image/style interpolation by mixing the latent variables of different images and reconstructing/synthesizing new ones. We demonstrate qualitative results on image interpolation (Fig. 5, Appendix Fig. 9), style interpolation (Appendix Fig. 10), and image editing (Fig. 6). We directly use the trained DC-VAE model without disentanglement learning (Karras et al., 2019). Additional latent space analysis and the method used for interpolation and editing are provided in the Appendix. We also quantitatively compare latent space disentanglement through the perceptual path length (PPL) (Karras et al., 2019) (Table 6). We observe that DC-VAE learns a more disentangled latent space representation than the backbone Progressive GAN (Karras et al., 2018) and StyleALAE (Pidhorskyi et al., 2020), which uses a much more capable StyleGAN (Karras et al., 2019) backbone.

Table 3: Quality of image generation (FID) comparison on LSUN Bedrooms. †128×128 resolution. ¶256×256 resolution. ↓ means lower is better.
Method | FID↓
Progressive GAN† (Karras et al., 2018) | 8.3
SNGAN† (Miyato et al., 2018) (from (Chen et al., 2019)) | 16.0
SSGAN† (Chen et al., 2019) | 13.3
StyleALAE¶ (Pidhorskyi et al., 2020), Reconstruction | 15.92
StyleALAE¶ (Pidhorskyi et al., 2020), Sampling | 17.13
DC-VAE† (ours), Reconstruction | 10.57
DC-VAE† (ours), Sampling | 14.3
5.5 LATENT SPACE REPRESENTATION: CLASSIFICATION

To show that our model learns a good representation, we measure performance on the downstream MNIST classification task (Ding et al., 2020). The VAE models were trained on the MNIST dataset (LeCun, 1998). We feed input images into our VAE encoder to get the latent representation. We then train a linear classifier on the latent representation to classify the classes of the input images. Results in Table 5 show that our model gives the lowest classification error in most cases. This experiment demonstrates that our model not only gains the ability to perform faithful synthesis and reconstruction, but also gains better representation ability on the VAE side.

6 CONCLUSION

In this paper, we have proposed the dual contradistinctive generative autoencoder (DC-VAE), a new framework that integrates an instance-level discriminative loss (InfoNCE) with a set-level adversarial loss (GAN) in a single variational autoencoder framework. Our experiments show competitive results for a single model in several tasks, including image synthesis, image reconstruction, representation learning for image interpolation, and representation learning for classification. DC-VAE points to an encouraging direction that attains high-quality synthesis (decoding) and inference (encoding).

A APPENDIX

A.1 Additional reconstruction results

A.2 Analysing the latent space

In this section we analyse the smoothness of the latent space learnt by DC-VAE. In Figure 9 we qualitatively show high resolution (512×512) CelebA-HQ (Karras et al., 2018) images generated by evenly spaced linear blending between two latent vectors. In Fig. 6 we show that DC-VAE is able to perform meaningful attribute editing on images while retaining the original identity. To perform image editing, we first need to compute the direction vector in the latent space that corresponds to a desired attribute (e.g. has glasses, has blonde hair, is a woman, has facial hair). We compute these attribute direction vectors by selecting 20 images that have the attribute and 20 images that do not have the attribute, obtaining the corresponding two sets of 20 latent vectors, and calculating the difference of their means. The results in Fig. 6 show that these direction vectors can be added to a latent vector to add a diverse combination of desired image attributes while retaining the original identity of the individual. Additionally, we corroborate the above qualitative results quantitatively by inspecting the perceptual path length (PPL) (Karras et al., 2019) of our learned DC-VAE decoder (Tab. 6), which measures the disentanglement of the latent space. We note that although Progressive GAN (our base model) has a better FID score, DC-VAE has a lower PPL score, which indicates that the latent space it learns is more disentangled.

A.3 Effect of negative samples

In this section we analyse the effect of varying the number of negative samples used for contrastive learning. Figure 11 shows the reconstruction error on the CIFAR-10 (Krizhevsky et al., 2009) test set as the number of negative samples is varied. We observe that a higher number of negative samples results in better reconstruction. We choose 8096 for all of our experiments because of memory constraints.

A.4 Datasets used

CIFAR-10 comprises 50,000 training images and 10,000 test images with a spatial resolution of 32×32. STL-10 is a similar dataset that contains 5,000 training images and 100,000 unlabeled images at 96×96 resolution.
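The attribute-editing procedure in A.2 amounts to a difference of latent means; a minimal sketch, assuming `encode` returns the posterior mean used as the latent code and `decode` is the generator (the `strength` scale is our addition, not specified in the paper):

```python
import torch

def attribute_direction(encode, imgs_with_attr, imgs_without_attr):
    # 20 images with the attribute and 20 without, per Appendix A.2.
    z_pos = torch.stack([encode(x) for x in imgs_with_attr])
    z_neg = torch.stack([encode(x) for x in imgs_without_attr])
    # The attribute direction is the difference of the two latent means.
    return z_pos.mean(dim=0) - z_neg.mean(dim=0)

def edit(decode, z, direction, strength=1.0):
    # Adding the direction to a latent code adds the attribute while
    # (empirically) retaining the identity of the individual.
    return decode(z + strength * direction)
```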
We follow the procedure in AutoGAN (Gong et al., 2019) and resize the STL-10 images to 32×32. The CelebA dataset has 162,770 training images and 19,962 testing images, CelebA-HQ contains 29,000 training images with 1,000 test images of size 1024×1024, and LSUN Bedroom has approximately 3M images. We resize all images in these three datasets progressively from 4×4 to 512×512 for the progressive training.

A.5 Network architecture diagrams

In Figure 15 we show the detailed network architecture of DC-VAE for input resolutions of 32×32. Note that the comparison results shown in Figure 3 and Table 1 of the main paper, for VAE, VAE/GAN, VAE w/o GAN, and our proposed DC-VAE, are all based on the same network architecture (shown in Figure 15 here) for a fair comparison. The network architectures shown in Figure 15 are adapted closely from the networks discovered by (Gong et al., 2019) through neural architecture search. The DC-VAE developed in our paper is not tied to any particular CNN architecture; we choose the AutoGAN architecture (Gong et al., 2019) to start with a strong baseline. The decoder in Figure 15 matches the generator in (Gong et al., 2019). The encoder is built by modifying the output shape of the final linear layer in the discriminator of AutoGAN (Gong et al., 2019) to match the latent dimension, and by adding spectral normalization. The discriminator is used both for classifying real/fake images and for contrastive learning. For each layer we choose, we first apply a 1×1 convolution and a linear layer, and then use this feature as an input to the contrastive module. For experiments at 32×32, we pick two different positions: the output of the second residual conv block (lower level) and the output of the first linear layer (higher level). For experiments on higher resolution datasets, we use a Progressive GAN (Karras et al., 2018) generator and discriminator as our backbone and apply similar modifications as described above.

A.6 Training infrastructure

A.7 Further details about the representation learning experiments

As seen in Table 4 in the main paper, we show the representation capability of DC-VAE following the procedure outlined in (Ding et al., 2020). We train our model on the MNIST dataset (LeCun, 1998) and measure transferability through a classification task on the latent embedding vector. Specifically, we first pretrain the DC-VAE model on the training split of the MNIST dataset. Following that, we freeze the DC-VAE model and train a linear classifier that takes the latent embedding vector as input and predicts the class label of the original image.
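The protocol in A.7 is a standard linear probe; a minimal sketch, assuming the frozen encoder's posterior mean serves as the embedding (the latent dimension for MNIST and the optimizer settings here are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_linear_probe(encoder, train_loader, latent_dim, num_classes=10, epochs=10):
    probe = nn.Linear(latent_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    encoder.eval()  # the pretrained DC-VAE encoder stays frozen
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                mu, _ = encoder(x)          # latent embedding vector
            loss = F.cross_entropy(probe(mu), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```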
1. What is the main contribution of the paper, and how does it differ from other generative models? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to generate high-quality images? 3. Are there any concerns or suggestions regarding the experimental design or the choice of baselines for comparison? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review

This paper introduces a discriminator into the VAE framework. The original reconstruction loss is replaced by an adversarial loss and a contrastive loss computed on features extracted by the discriminator. The authors highlight the model's reconstruction and generation ability compared with both VAEs and GANs. Experimental results show that the proposed DC-VAE outperforms other autoencoder-based generation methods and obtains results comparable to adversarial generative methods.

Pros:
- Experimental results seem good.
- This paper is well-written and easy to read.
- The source code is provided to make the results reproducible.

Cons:
- Although a discriminator is introduced into DC-VAE, generation results are worse than GANs on most datasets. Considering that an encoder is also used in DC-VAE but not in GANs, these results seem less competitive.
- The contrastive loss is interesting; however, no experiment shows results obtained with the contrastive loss only. An ablation study is welcome.
- In the experimental parts, some interesting generative models are missing, such as relativistic GAN [1], PUGAN [2], and NVAE [3].

[1] "The relativistic discriminator: a key element missing from standard GAN." International Conference on Learning Representations, 2018.
[2] "On Positive-Unlabeled Classification in GAN." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
[3] "NVAE: A deep hierarchical variational autoencoder." arXiv preprint arXiv:2007.03898, 2020.
ICLR
Title Dual Contradistinctive Generative Autoencoder

Abstract We present a new generative autoencoder model with dual contradistinctive losses that performs simultaneous inference (reconstruction) and synthesis (generation). We name our model the dual contradistinctive generative autoencoder (DC-VAE); it integrates an instance-level discriminative loss (maintaining the instance-level fidelity for the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity for the reconstruction/synthesis), both being contradistinctive. There also exists a mathematical connection between instance-based classification and the instance-level conditional distribution. DC-VAE achieves competitive results in three tasks, including image synthesis, image reconstruction, and representation learning, and is applicable to various tasks in computer vision and machine learning.

(a) DC-VAE (ours) reconstruction results. Left: 128×128. Right: 512×512. (b) DC-VAE (ours) sampling results. Left: 128×128. Right: 512×512.
Figure 1: DC-VAE reconstruction (top) and sampling (bottom) on LSUN Bedroom (Yu et al., 2015) at resolution 128×128 (left) and CelebA-HQ (Karras et al., 2018) at resolution 512×512 (right).

1 INTRODUCTION

Tremendous progress has been made in deep learning through the development of various learning frameworks (Krizhevsky et al., 2012; He et al., 2016; Goodfellow et al., 2014; Vaswani et al., 2017). The autoencoder (AE) (LeCun, 1987; Hinton & Zemel, 1994) aims to compactly represent and faithfully reproduce the original input signal by concatenating an encoder and a decoder in an end-to-end learning framework. The goal of the AE is to make the encoded representation semantically efficient and sufficient for its decoder to reproduce the input signal. The autoencoder's generative companion, the variational autoencoder (VAE) (Kingma & Welling, 2014), additionally learns a variational model over the latent variables to capture the underlying sample distribution. For the encoder and decoder models separately, tremendous progress has been made in image classification with deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016) (an encoder) and in image generation with the generative adversarial network (GAN) (Goodfellow et al., 2014) (a decoder). The key objective for a generative autoencoder is to maintain two types of fidelity: (1) an instance-level fidelity that makes the reconstruction/synthesis faithful to the individual input data sample, and (2) a set-level fidelity that makes the reconstruction/synthesis of the decoder faithful to the entire input data set. The VAE/GAN algorithm (Larsen et al., 2016) combines a reconstruction loss with an adversarial loss. However, the result of VAE/GAN is sub-optimal, as shown in Table 1. The pixel-wise reconstruction loss in the standard VAE (Kingma & Welling, 2014) typically results in blurry images with degenerated semantics. A possible solution to the above conflict lies in two aspects: (1) replacing the measure in the pixel space with one in an induced feature space that is more semantically meaningful; (2) changing the per-pixel L2 distance into a learned instance-level distance function for the entire image (akin to generative adversarial networks, which learn set-level distance functions). Taking these two steps allows us to design an instance-level classification loss that is aligned with the adversarial loss in the GAN model enforcing set-level fidelity.
Motivated by the above observations, we develop a new generative autoencoder model with dual contradistinctive losses by adopting a discriminative loss performing instance-level classification (enforcing the instance-level fidelity), which is rooted in metric learning (Kulis et al., 2012) and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; van den Oord et al., 2018). Combined with the adversarial losses for the set-level fidelity, both terms are formulated in the induced feature space performing contradistinction: (1) the instance-level contrastive loss considers each input instance (image) itself as a class, and (2) the set-level adversarial loss treats the entire input set as a positive class. We name our method the dual contradistinctive generative autoencoder (DC-VAE) and make the following contributions.

• We develop a new algorithm, the dual contradistinctive generative autoencoder (DC-VAE), by combining instance-level and set-level classification losses in the VAE framework, and systematically show the significance of these two loss terms in DC-VAE.
• The effectiveness of DC-VAE is illustrated in three tasks altogether, including image synthesis, image reconstruction, and representation learning.

2 RELATED WORK

Related work can be roughly divided into three categories: (1) generative autoencoders, (2) deep generative models, and (3) contrastive learning. The variational autoencoder (VAE) (Kingma & Welling, 2014) points to an exciting direction for generative models by developing an Evidence Lower BOund (ELBO) objective (Higgins et al., 2017; Ding et al., 2020). However, VAE reconstruction/synthesis is known to be blurry. To improve the image quality, a sequence of VAE-based models has been developed (Larsen et al., 2016; Dumoulin et al., 2017; Huang et al., 2018; Brock et al., 2018; Zhang et al., 2019). VAE/GAN (Larsen et al., 2016) adopts an adversarial loss to improve image quality, but its output for both reconstruction and synthesis (new samples) is still unsatisfactory. IntroVAE (Huang et al., 2018) adds a loop from the output back to the input and is able to attain image quality that is on par with some modern GANs in some aspects; however, a full illustration of both its reconstruction and its synthesis is lacking. PGA (Zhang et al., 2019) adds a constraint to the latent variables. The pioneering works of (Tu, 2007; Gutmann & Hyvärinen, 2012) alleviate the difficulty of learning densities by approximating likelihoods via classification (real (positive) samples vs. fake (pseudo-negative or adversarial) samples). The generative adversarial network (GAN) (Goodfellow et al., 2014) builds on neural networks and amortized sampling (a decoder network that maps noise into an image). The subsequent development after GAN (Radford et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017; Karras et al., 2018; Gong et al., 2019; Dumoulin et al., 2017; Donahue et al., 2017) has led to a great leap forward in building decoder-based generative models. It has been widely observed that the adversarial loss in GANs contributes significantly to the improved quality of image synthesis. Energy-based generative models (Salakhutdinov & Hinton, 2009; Xie et al., 2016; Jin et al., 2017; Lee et al., 2018), which aim to directly model the data density, are making steady progress toward a single model that is simultaneously generative and discriminative.
1. What is the focus of the paper, and what are its strengths and weaknesses? 2. What is the contribution of the novel approach to VAE-based generative models? 3. How effective is the dual "contradistinctive" loss in improving reconstruction and synthesis quality? 4. What are some potential improvements to the proposed method, such as changing the critic function or analyzing the impact of contrastive loss on GAN training stability? 5. Are there any limitations to the representation learning experiments conducted only on MNIST? 6. What are the implications of combining contrastive loss and GAN-based losses on training? 7. How does the choice of critic function impact the performance of the model? 8. Can the same level of performance be expected for representation learning on datasets other than MNIST?
Review
Review

The paper presents a novel approach to VAE-based generative models, and claims to address the three tasks of reconstruction, synthesis, and learning semantic representations. The primary contribution is a novel loss that combines a contrastive loss at the instance level, which assists more faithful reconstructions, with a GAN loss to introduce set-level fidelity, which assists the synthesis quality. Additionally, a multi-scale contrastive learning loss is also used by imposing losses across different layers of the network, as well as at different patches (receptive field sizes). This dual "contradistinctive" loss is well-motivated and appears to work well for datasets like CIFAR-10, STL-10, CelebA, CelebA-HQ, and LSUN bedroom. The quantitative analysis is done using Inception Score (IS), Frechet Inception Distance (FID), pixel distance, and perceptual distance. The paper shows good results on these datasets and settings.

Strengths
- Quality of reconstructions and synthesis is very good, and better than the state of the art (IntroVAE).
- The quantitative analysis is consistent with the qualitative superiority of the proposed method.
- The use of a contrastive loss together with a GAN-based loss is an interesting idea, and it seems to work well based on the qualitative and quantitative results.

Weaknesses
- Why is h(x,y) in Eq. (6) chosen as the cosine similarity? Would changing it to an L_1/L_2 distance help reconstruction more? The cosine similarity is probably appropriate in (He et al., 2020) because of their downstream task of classification, not reconstruction. What is the impact of changing this critic function on reconstruction?
- How is the number of negative samples determined for contrastive learning? Why is it 8096, and how do the results change if this number is reduced or increased? An analysis experiment around this could help understand the impact of the contrastive loss.
- Does the stability of GAN training get impacted by the contrastive learning loss, particularly if parameters like the number of negative samples are changed?
- The paper does not provide details about the training infrastructure used for training this model. It could be helpful to provide these details along with training time, etc.
- Representation learning experiments are only done on MNIST. Conducting them on CIFAR-10 and/or STL-10 would add more value to the paper. MNIST not having been used elsewhere in the paper seems odd and gives the (possibly incorrect) impression that representation learning performance is not good for other datasets.

Other (minor) issues
- Please define i (the index of the anchor point?) and x_j (samples that are not x_i?) in Eq. (6). Please define D_l in the critic function definition that follows.
- The sentence before Eq. (6) also needs attention, as it refers to a minimization problem, which is not indicated in Eq. (6).
- It seems the second-to-last paragraph before Sec. 4.1 has the Eq. numbers mixed up. It may be worth verifying.

The results obtained are impressive; however, not all claims are satisfied thoroughly, e.g., the representation learning experiments are weaker as they are only shown on the MNIST dataset. The analysis experiments are limited and do not entirely provide a view of the hyperparameter choices. These are the primary reasons for me to recommend marginally above acceptance threshold. I request the authors to comment on the following: It will be helpful to understand the implications of the combination of contrastive loss and GAN-based losses on the training.
It will also be helpful to comment on the choice of the critic function. Should we expect similar results of representation learning on datasets other than MNIST as well?
ICLR
Title Dual Contradistinctive Generative Autoencoder Abstract We present a new generative autoencoder model with dual contradistinctive losses to improve generative autoencoder that performs simultaneous inference (reconstruction) and synthesis (generation). We name our model dual contradistinctive generative autoencoder (DC-VAE) that integrates an instance-level discriminative loss (maintaining the instance-level fidelity for the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity for the reconstruction/synthesis), both being contradistinctive. There also exists a mathematical connection between the instance-based classification and instance-level conditional distribution. DC-VAE achieves competitive results in three tasks, including image synthesis, image reconstruction, and representation learning. DC-VAE is applicable to various tasks in computer vision and machine learning. (a) DC-VAE (ours) Reconstruction results. Left: 128× 128. Right: 512× 512. (b) DC-VAE (ours) Sampling results. Left: 128× 128. Right: 512× 512. Figure 1: DC-VAE Reconstruction (top) and Sampling (bottom) on LSUN Bedroom Yu et al. (2015) at resolution 128× 128 (left) and CelebA-HQ (Karras et al., 2018) at resolution 512× 512 (right). 1 INTRODUCTION Tremendous progress has been made in deep learning for the development of various learning frameworks (Krizhevsky et al., 2012; He et al., 2016; Goodfellow et al., 2014; Vaswani et al., 2017). Autoencoder (AE) (LeCun, 1987; Hinton & Zemel, 1994) aims to compactly represent and faithfully reproduce the original input signal by concatenating an encoder and a decoder in an end-to-end learning framework. The goal of AE is to make the encoded representation semantically efficient and sufficient to reproduce the input signal by its decoder. Autoencoder’s generative companion, variational autoencoder (VAE) (Kingma & Welling, 2014), additionally learns a variational model for the latent variables to capture the underlying sample distribution. For the encoder and decoder models separately, tremendous progress has been made in image classification with deep convolutional neural network (CNN) (Krizhevsky et al., 2012; He et al., 2016) (an encoder) and in image generation with generative adversarial network (GAN) (Goodfellow et al., 2014) (a decoder). The key objective for a generative autoencoder is to maintain two types of fidelities: (1) an instancelevel fidelity to make the reconstruction/synthesis faithful to the individual input data sample, and (2) a set-level fidelity to make the reconstruction/synthesis of the decoder faithful to the entire input data set. The VAE/GAN algorithm (Larsen et al., 2016) combines a reconstruction loss with an adversarial loss. However, the result of VAE/GAN is sub-optimal, as shown in Table 1. The pixel-wise reconstruction loss in the standard VAE (Kingma & Welling, 2014) typically results in blurry images with degenerated semantics. A possible solution to resolving the above conflict lies in two aspects: (1) turning the measure in the pixel space into induced feature space that is more semantically meaningful; (2) changing the L2 distance (per-pixel) into a learned instance-level distance function for the entire image (akin to generative adversarial networks which learn set-level distance functions). Taking these two steps allows us to design an instance-level classification loss that is aligned with the adversarial loss in the GAN model enforcing set-level fidelity. 
Motivated by the above observations, we develop a new generative autoencoder model with dual contradistinctive losses by adopting a discriminative loss performing instance-level classification (enforcing the instance-level fidelity), which is rooted in metric learning (Kulis et al., 2012) and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; van den Oord et al., 2018). Combined with the adversarial loss for the set-level fidelity, both terms are formulated in the induced feature space and perform contradistinction: (1) the instance-level contrastive loss considers each input instance (image) itself as a class, and (2) the set-level adversarial loss treats the entire input set as a positive class. We name our method the dual contradistinctive generative autoencoder (DC-VAE) and make the following contributions.

• We develop a new algorithm, the dual contradistinctive generative autoencoder (DC-VAE), by combining instance-level and set-level classification losses in the VAE framework, and systematically show the significance of these two loss terms in DC-VAE.
• The effectiveness of DC-VAE is illustrated in three tasks altogether: image synthesis, image reconstruction, and representation learning.

2 RELATED WORK Related work can be roughly divided into three categories: (1) generative autoencoders, (2) deep generative models, and (3) contrastive learning. The variational autoencoder (VAE) (Kingma & Welling, 2014) points to an exciting direction for generative models by developing an Evidence Lower BOund (ELBO) objective (Higgins et al., 2017; Ding et al., 2020). However, VAE reconstructions/syntheses are known to be blurry. To improve the image quality, a sequence of VAE-based models has been developed (Larsen et al., 2016; Dumoulin et al., 2017; Huang et al., 2018; Brock et al., 2018; Zhang et al., 2019). VAE/GAN (Larsen et al., 2016) adopts an adversarial loss to improve image quality, but its output for both reconstruction and synthesis (new samples) is still unsatisfactory. IntroVAE (Huang et al., 2018) adds a loop from the output back to the input and is able to attain image quality that is on par with some modern GANs in some aspects; however, a full demonstration of its quality for both reconstruction and synthesis is lacking. PGA (Zhang et al., 2019) adds a constraint to the latent variables. Pioneering works (Tu, 2007; Gutmann & Hyvärinen, 2012) alleviate the difficulty of learning densities by approximating likelihoods via classification (real (positive) samples vs. fake (pseudo-negative or adversarial) samples). The generative adversarial network (GAN) (Goodfellow et al., 2014) builds on neural networks and amortized sampling (a decoder network that maps noise into an image). The subsequent development after GAN (Radford et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017; Karras et al., 2018; Gong et al., 2019; Dumoulin et al., 2017; Donahue et al., 2017) has led to a great leap forward in building decoder-based generative models. It has been widely observed that the adversarial loss in GANs contributes significantly to the improved quality of image synthesis. Energy-based generative models (Salakhutdinov & Hinton, 2009; Xie et al., 2016; Jin et al., 2017; Lee et al., 2018), which aim to directly model the data density, are making steady progress toward a single model that is simultaneously generative and discriminative.
From another angle, contrastive learning (Hadsell et al., 2006; Wu et al., 2018; He et al., 2020; Chen et al., 2020) has lately shown a particular advantage in unsupervised training of CNN features. It overcomes the limitation of unsupervised learning, where class labels are missing, by turning each image instance into its own class; the softmax function from standard discriminative classification training can then be applied. Contrastive learning can be connected to metric learning (Bromley et al., 1993; Chopra et al., 2005; Chechik et al., 2010). In this paper, we aim to improve the VAE (Kingma & Welling, 2014) by introducing a contrastive loss (van den Oord et al., 2018) to address instance-level fidelity between the input and the reconstruction in the induced feature space. Unlike in self-supervised representation learning methods (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020), where self-supervision requires generating a transformed input (via data augmentation operations), the reconstruction naturally fits into the contrastive term, which encourages the matching between the reconstruction and the input image instance while pushing the reconstruction away from the rest of the images in the entire training set. Thus, the instance-level and set-level contradistinctive terms collaborate with each other to encourage high fidelity of the reconstruction and synthesis. In Figure 3, we systematically show the effect of including or excluding the instance-level and set-level contradistinctive terms. In addition, we explore multi-scale contrastive learning via two schemes in Section 4.1: 1) deep supervision for contrastive learning in different convolution layers, and 2) patch-based contrastive learning for fine-grained data fidelity. In the experiments, we show competitive results for the proposed dual contradistinctive generative autoencoder (DC-VAE) on a number of benchmarks for three tasks: image synthesis, image reconstruction, and representation learning.

3 PRELIMINARIES: VAE AND VAE/GAN Variational autoencoder (VAE) Assume a given training set $S = \{x_i\}_{i=1}^{n}$ where each $x_i \in \mathbb{R}^m$. We suppose that each $x_i$ is sampled from a generative process $p(x|z)$. In the literature, the vector $z$ refers to the latent variables. In practice, the latent variables $z$ and the generative process $p(x|z)$ are unknown. The objective of a variational autoencoder (VAE) (Kingma & Welling, 2014) is to simultaneously train an inference network $q_\phi(z|x)$ and a generator network $p_\theta(x|z)$. In VAE (Kingma & Welling, 2014), the inference network is a neural network that outputs the parameters of a Gaussian distribution, $q_\phi(z|x) = \mathcal{N}(\mu_\phi(x), \Sigma_\phi(x))$. The generator is a deterministic neural network $f_\theta(z)$ parameterized by $\theta$. The generative density $p_\theta(x|z)$ is assumed to be Gaussian: $p_\theta(x|z) = \mathcal{N}(f_\theta(z), \sigma^2 I)$. These models can be trained by minimizing the negative of the evidence lower bound (ELBO) in Eq. (1) below: $\mathcal{L}_{\mathrm{ELBO}}(\theta,\phi;x) = -\mathbb{E}_{z\sim q_\phi(z|x)}[\log p_\theta(x|z)] + \mathrm{KL}[q_\phi(z|x)\,\|\,p(z)]$ (1), where $p(z)$ is the prior, assumed to be $\mathcal{N}(0, I)$. The first term, $-\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$, reduces to the standard pixel-wise reconstruction loss $\mathbb{E}_{q_\phi(z|x)}[\|x - f_\theta(z)\|_2^2]$ (up to a constant) due to the Gaussian assumption. The second term is the regularization term, which prevents the conditional $q_\phi(z|x)$ from deviating from the Gaussian prior $\mathcal{N}(0, I)$. The inference network and generator network are jointly optimized over training samples by $\min_{\theta,\phi} \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\, \mathcal{L}_{\mathrm{ELBO}}(\theta,\phi;x)$ (2), where $p_{\mathrm{data}}$ is the distribution induced by the training set $S$.
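To make the preliminaries concrete, here is a minimal PyTorch sketch of the negative ELBO in Eq. (1) under the stated Gaussian assumptions. The `encoder` (returning the mean and log-variance of q_phi(z|x)), the `decoder` (computing f_theta(z)), and the noise level `sigma` are hypothetical placeholders, not the paper's exact architecture.

```python
# A minimal sketch of the negative ELBO in Eq. (1), assuming a Gaussian
# decoder p_theta(x|z) = N(f_theta(z), sigma^2 I) and a Gaussian encoder.
import torch
import torch.nn.functional as F

def neg_elbo(x, encoder, decoder, sigma=1.0):
    mu, logvar = encoder(x)                      # parameters of q_phi(z|x)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)         # reparameterization trick
    x_rec = decoder(z)                           # f_theta(z)
    # The Gaussian log-likelihood reduces to a scaled pixel-wise L2 loss.
    rec = F.mse_loss(x_rec, x, reduction="sum") / (2 * sigma**2)
    # Closed-form KL between N(mu, diag(std^2)) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl) / x.size(0)                # average over the batch
```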
VAE has an elegant formulation. However, it relies on a pixel-wise reconstruction loss, which is known to be a poor proxy for perceptual realism (Johnson et al., 2016; Isola et al., 2017), often resulting in blurry images. From another viewpoint, it can be thought of as using a kernel density estimator (with an isotropic Gaussian kernel) in the pixel space. Although this allows efficient training and inference, such a non-parametric approach is overly simplistic for modeling the semantics and perception of natural images.

VAE/GAN Generative adversarial networks (GANs) (Goodfellow et al., 2014) and their variants (Radford et al., 2016), on the other hand, have been shown to produce highly realistic images. The success is largely attributed to learning a fidelity function (often referred to as a discriminator) that measures how realistic the generated images are. This can be achieved by learning to contrast (classify) the set of training images with the set of generated images (Tu, 2007; Gutmann & Hyvärinen, 2012; Goodfellow et al., 2014). VAE/GAN (Larsen et al., 2016) augments the ELBO objective (Eq. (2)) with the GAN objective. Specifically, the objective of VAE/GAN consists of two terms, namely the modified ELBO (Eq. (3)) and the GAN objective. To keep later notation consistent, we define the set of given training images as $S = \{x_i\}_{i=1}^{n}$, containing a total of $n$ unlabeled training images. For each input image $x_i$, the modified ELBO computes the reconstruction loss in the feature space of the discriminator instead of the pixel space: $\mathcal{L}_{\mathrm{ELBO}}(\theta,\phi,D;x_i) = \mathbb{E}_{z\sim q_\phi(z|x_i)}[\|F_D(x_i) - F_D(f_\theta(z))\|_2^2] + \mathrm{KL}[q_\phi(z|x_i)\,\|\,p(z)]$ (3), where $F_D(\cdot)$ denotes the feature embedding from the discriminator $D$. This feature reconstruction loss (also referred to as a perceptual loss) is similar to that used in style transfer (Johnson et al., 2016). The modified GAN objective considers both reconstructed images (latent code from $q_\phi(z|x)$) and sampled images (latent code from the prior $p(z)$) as its fake samples: $\mathcal{L}_{\mathrm{GAN}}(\theta,\phi,D;x_i) = \log D(x_i) + \mathbb{E}_{z\sim q_\phi(z|x_i)}\log(1 - D(f_\theta(z))) + \mathbb{E}_{z\sim p(z)}\log(1 - D(f_\theta(z)))$ (4). The VAE/GAN objective becomes $\min_{\theta,\phi}\max_{D} \sum_{i=1}^{n} [\mathcal{L}_{\mathrm{ELBO}}(\theta,\phi,D;x_i) + \mathcal{L}_{\mathrm{GAN}}(\theta,\phi,D;x_i)]$ (5).

4 DUAL CONTRADISTINCTIVE GENERATIVE AUTOENCODER (DC-VAE) Here we want to address two questions: Is the degeneration of the synthesized images by VAE always the case once the decoder is joined with an encoder? Can the problem be remedied by using a more informative loss? Although it improves the image quality of VAE by integrating a set-level contrastive loss (the GAN objective of Eq. (4)), VAE/GAN still does not accurately model instance-level fidelity. Inspired by the literature on instance-level classification (Malisiewicz et al., 2011), approximating likelihoods by classification (Tu, 2007), and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; He et al., 2020), we propose to model instance-level fidelity by a contrastive loss (commonly referred to as the InfoNCE loss) (van den Oord et al., 2018). In DC-VAE, we perform the following minimization and loosely call each term a loss: $\mathcal{L}_{\mathrm{instance}}(\theta,\phi,D;i,\{x_j\}_{j=1}^{n}) \triangleq -\mathbb{E}_{z\sim q_\phi(z|x_i)}\left[\log \frac{e^{h(x_i, f_\theta(z))}}{\sum_{j=1}^{n} e^{h(x_j, f_\theta(z))}}\right]$ (6), where $i$ is the index of a training sample (instance), $\{x_j\}_{j=1}^{n}$ is the union of positive and negative samples, and $h(x, y)$ is the critic function that measures the compatibility between $x$ and $y$. Following the popular choice of (He et al., 2020), $h(x, y)$ is the cosine similarity between the embeddings of $x$ and $y$: $h(x,y) = \frac{F_D(x)^\top F_D(y)}{\|F_D(x)\|_2 \|F_D(y)\|_2}$.
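The following is a minimal sketch of the VAE/GAN terms in Eqs. (3)-(4). The interface is assumed: a discriminator object `D` that returns a real/fake probability in (0, 1) when called (e.g. via a sigmoid output) and exposes its feature embedding F_D as a hypothetical `D.features` method; the KL term of Eq. (3) is omitted since it is identical to the VAE case above.

```python
# A minimal sketch of the VAE/GAN losses in Eqs. (3)-(4); interfaces are
# illustrative assumptions, not the library or paper implementation.
import torch

def vaegan_losses(x, z_q, z_p, decoder, D):
    # z_q: latents from q_phi(z|x); z_p: latents sampled from the prior p(z).
    x_rec, x_smp = decoder(z_q), decoder(z_p)
    # Eq. (3): reconstruction in the discriminator's feature space.
    feat_rec = (D.features(x) - D.features(x_rec)).pow(2).sum(dim=1).mean()
    eps = 1e-6  # numerical stability inside the logs
    # Eq. (4): both reconstructions and prior samples count as "fake".
    l_gan = (torch.log(D(x) + eps)
             + torch.log(1 - D(x_rec) + eps)
             + torch.log(1 - D(x_smp) + eps)).mean()
    return feat_rec, l_gan
```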
Note that, unlike in contrastive self-supervised learning methods (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020), where two views (independent augmentations) of an instance constitute a positive pair, in DC-VAE an input instance $x_i$ and its reconstruction $f_\theta(z)$ comprise a positive pair. Likewise, the reconstruction $f_\theta(z)$ and any instance that is not $x_i$ can form a negative pair. To bridge the gap between the instance-level contrastive loss (Eq. (6)) and the log-likelihood in the ELBO term (Eq. (1)), we make the following observation.

Theorem 1 (From (Ma & Collins, 2018; Poole et al., 2019)) The objective $I_{\mathrm{NCE}} \triangleq \mathbb{E}_{x_1,\dots,x_K}\,\mathbb{E}_i[\mathcal{L}_{\mathrm{instance}}(\theta,\phi,D;i,\{x_j\}_{j=1}^{n})]$ (7) is minimized, i.e., the optimal critic $h$ is achieved, when $h(f_\theta(z), x) = \log p(x|z) + c(x)$, where $c(x)$ is any function that does not depend on $z$.

From Theorem 1, we see that the contrastive loss of Eq. (6) implicitly estimates the log-likelihood $\log p_\theta(x|z)$ required for the evidence lower bound (ELBO). Hence, we modify the ELBO objective of Eq. (1) as follows and name it the implicit ELBO (IELBO): $\mathcal{L}_{\mathrm{IELBO}}(\theta,\phi,D;x_i) = \mathcal{L}_{\mathrm{instance}}(\theta,\phi,D;i,\{x_j\}_{j=1}^{n}) + \mathrm{KL}[q_\phi(z|x_i)\,\|\,p(z)]$ (8). Finally, the combined objective for the proposed DC-VAE algorithm becomes $\min_{\theta,\phi}\max_{D} \sum_{i=1}^{n} [\mathcal{L}_{\mathrm{IELBO}}(\theta,\phi,D;x_i) + \mathcal{L}_{\mathrm{GAN}}(\theta,\phi,D;x_i)]$ (9). The definition of $\mathcal{L}_{\mathrm{GAN}}$ follows Eq. (4). Note that we also consider the term in Eq. (4) contradistinctive, since it performs a discriminative classification between the input ("real") image set and the reconstructed/generated ("fake") image set. Below we highlight the significance of the two contradistinctive terms. Figure 2 shows the model architecture.

• Instance-level fidelity. The first term in Eq. (8) is an instance-level fidelity term encouraging the reconstruction to be as close as possible to the input image while being different from all the rest of the images. A key advantage of the contrastive loss in Eq. (8) over the standard reconstruction loss in Eq. (3) is its relaxed, background-instance-aware formulation. The reconstruction loss in Eq. (3) demands a perfect match between the reconstruction and the input, whereas the contrastive loss in Eq. (8) only requires the reconstruction to be the most similar one among the training samples. This way, the contrastive loss becomes more cooperative, with less conflict with the GAN loss, than the reconstruction loss. The introduction of the contrastive loss results in a significant improvement over VAE and VAE/GAN, in which only a match between the reconstruction and the input instance is enforced.
• Set-level fidelity. The second term in Eq. (9) is a set-level fidelity term encouraging the entire set of synthesized images to be indistinguishable from the input image set. Having this term (Eq. (4)) is still important, since the instance contrastive loss alone (Eq. (6)) would lead to a degenerate situation: the input image and its reconstruction can be projected to the same point in the new feature space without any guarantee that the reconstruction itself lies on the valid "real" image manifold.

Figure 3 and Table 1 compare models with and without the individual terms in Eq. (9). We observe the evident effectiveness of the proposed DC-VAE combining both the instance-level fidelity term (Eq. (6)) and the set-level fidelity term (Eq.
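A minimal sketch of the instance-level loss in Eq. (6) with the cosine critic h is given below. The embedding head `emb` stands in for F_D, the batch serves as the set of negatives, and the temperature `tau` is a common practical addition not present in Eq. (6); all three are assumptions made for illustration.

```python
# A minimal sketch of the InfoNCE instance loss of Eq. (6) with a cosine
# critic; `emb` is a hypothetical embedding head standing in for F_D.
import torch
import torch.nn.functional as F

def instance_loss(x, x_rec, emb, tau=0.1):
    # Normalized embeddings make the dot product a cosine similarity.
    f_x = F.normalize(emb(x), dim=1)      # (n, d) input embeddings
    f_r = F.normalize(emb(x_rec), dim=1)  # (n, d) reconstruction embeddings
    logits = f_r @ f_x.t() / tau          # entry (i, j) is h(x_j, f_theta(z_i))
    # Row i should match column i: the input x_i is the positive for its
    # own reconstruction; every other image in the set acts as a negative.
    targets = torch.arange(x.size(0), device=x.device)
    return F.cross_entropy(logits, targets)
```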
(4)), compared with VAE (using a pixel-wise reconstruction loss without the GAN objective), VAE/GAN (using a feature reconstruction loss and the GAN objective), and VAE-Contrastive (using a contrastive loss but without the GAN objective). In the experiments, we show that both terms are required to achieve faithful reconstruction (captured by the InfoNCE loss) with perceptual realism (captured by the GAN loss).

4.1 MULTI-SCALE CONTRASTIVE LEARNING Inspired by (Lee et al., 2015), we utilize information from feature maps at different scales. In addition to contrasting on the last layer of $D$ in Eq. (9), we add a contrastive objective on $f_l(z)$, where $f_l$ is some function on top of an intermediate layer $l$ of $D$. We do this in two different ways. 1. Deep supervision: We use a 1×1 convolution to reduce the dimension channel-wise, and use a linear layer to obtain $f_l$. 2. Local patch: We use a random spatial location across channels at layer $l$ (size: 1×1×d, where d is the channel depth), as sketched below. The intuition for the second scheme is that, in a convolutional neural network, one location in a feature map corresponds to a receptive area (patch) in the original image. Thus, by contrasting the channel vectors at the same location in the two feature maps, we encourage the original image and its reconstruction to have locally similar content, while encouraging the reconstruction to be locally dissimilar from other images. We use deep supervision for initial training, and add the local-patch scheme after a certain number of iterations.

5 EXPERIMENTS 5.1 IMPLEMENTATION Datasets To validate our method, we train it on several different datasets: CIFAR-10 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), CelebA (Liu et al., 2015), CelebA-HQ (Karras et al., 2018), and LSUN Bedroom (Yu et al., 2015). See the appendix for more detailed descriptions. Network architecture For 32×32 resolution, we design the encoder and decoder subnetworks of our model in a similar way to the discriminator and generator found through neural architecture search in AutoGAN (Gong et al., 2019). For the higher-resolution experiments (128×128 and 512×512), we use Progressive GAN (Karras et al., 2018) as the backbone. A network architecture diagram is available in the appendix. Training details The number of negative samples for contrastive learning is 8096 for all datasets. The latent dimension of the VAE decoder is 128 for CIFAR-10 and STL-10, and 512 for CelebA, CelebA-HQ, and LSUN Bedroom. The learning rate is 0.0002 with Adam parameters (β1, β2) = (0.0, 0.9) and a batch size of 128 for CIFAR-10 and STL-10. For the CelebA, CelebA-HQ, and LSUN Bedroom datasets, we use the optimizer parameters given in (Karras et al., 2018). The contrastive embedding dimension is 16 in all experiments.

5.2 ABLATION STUDY To demonstrate the necessity of the GAN loss (Eq. 4) and the contrastive loss (Eq. 8), we conduct four experiments with the same backbone: VAE (no GAN, no Contrastive), VAE/GAN (with GAN, no Contrastive), VAE-Contrastive (no GAN, with Contrastive), and ours (with GAN, with Contrastive). Here, GAN denotes Eq. 4, and Contrastive denotes Eq. 8. Qualitative analysis From Figure 3, we see that without GAN and Contrastive, images are blurry; without GAN, the contrastive head can discriminate instances, but the reconstructions do not lie on the image manifold; without Contrastive, the reconstructed images lie on the image manifold because of the discriminator, but they differ from the input images.
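Below is a minimal sketch of the local-patch scheme referenced in Section 4.1: at an intermediate feature map of D, the channel vector at one randomly chosen spatial location is extracted for both the input and the reconstruction and then fed to the same InfoNCE loss as above. The tensor shapes and the single shared location are illustrative assumptions.

```python
# A minimal sketch of the "local patch" contrastive scheme: contrast the
# 1x1xd channel vector at one random location of an intermediate map.
import torch
import torch.nn.functional as F

def local_patch_vectors(feat_x, feat_r):
    # feat_x, feat_r: (n, d, h, w) intermediate maps for input/reconstruction.
    n, d, h, w = feat_x.shape
    i = torch.randint(h, (1,)).item()
    j = torch.randint(w, (1,)).item()
    # The same location is used for both maps, so each d-dim vector
    # corresponds to the same receptive patch in the original image.
    v_x = F.normalize(feat_x[:, :, i, j], dim=1)  # (n, d)
    v_r = F.normalize(feat_r[:, :, i, j], dim=1)  # (n, d)
    return v_x, v_r  # fed to the same InfoNCE loss as in the sketch above
```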
These experiments show that it is necessary to combine both instance-level and set-level fidelity, and in a contradistinctive manner. Quantitative analysis In Table 1 we observe the same trend. VAE generates blurry images; thus its FID/IS (Inception Score) is not ideal. VAE-Contrastive does not generate images on the natural image manifold; thus its FID/IS is poor. VAE/GAN combines set-level and instance-level information, but the L2 objective is not ideal; thus its FID/IS is sub-optimal. For both the reconstruction and sampling tasks, DC-VAE generates high-fidelity images and has favorable FID and Inception scores. This illustrates the advantage of having a contradistinctive objective at both the set level and the instance level. To measure the faithfulness of the reconstructed image, we compute the pixel-wise L2 distance and the perceptual distance (Johnson et al., 2016). For the pixel distance, VAE has the lowest value because it directly optimizes this distance during training; our pixel-wise distance is better than those of VAE/GAN and VAE-Contrastive. For the perceptual distance, our method outperforms the other three, which confirms that using contrastive learning helps reconstruct images semantically.

5.3 COMPARISON TO EXISTING GENERATIVE MODELS Table 2 gives a quantitative comparison on the CIFAR-10 and STL-10 datasets. In general, there is a large difference in FID and IS between the GAN family and the VAE family of models. Our model achieves state-of-the-art results within the VAE family, and is comparable to state-of-the-art GAN models on CIFAR-10. Similarly, Table 4 shows that DC-VAE is able to generate images that are comparable to GAN-based methods even on higher-resolution datasets.

5.4 LATENT SPACE REPRESENTATION: IMAGE AND STYLE INTERPOLATION We further validate the effectiveness of DC-VAE for representation learning. One benefit of having an AE/VAE framework, compared with just a decoder as in a GAN (Goodfellow et al., 2014), is the ability to directly obtain the latent representation of input images. The encoder and decoder modules in the VAE allow us to readily perform image/style interpolation by mixing the latent variables of different images and reconstructing/synthesizing new ones. We demonstrate qualitative results on image interpolation (Fig. 5, Appendix Fig. 9), style interpolation (Appendix Fig. 10), and image editing (Fig. 6). We directly use the trained DC-VAE model without disentanglement learning (Karras et al., 2019). Additional latent space analysis and the method used for interpolation and editing are provided in the Appendix. We also quantitatively compare latent space disentanglement through the perceptual path length (PPL) (Karras et al., 2019) (Table 6). We observe that DC-VAE learns a more disentangled latent space representation than the backbone Progressive GAN (Karras et al., 2018) and StyleALAE (Pidhorskyi et al., 2020), which uses a much more capable StyleGAN (Karras et al., 2019) backbone.

Table 3: Quality of image generation (FID) comparison on LSUN Bedrooms. †128×128 resolution. ¶256×256 resolution. ↓ means lower is better.
Method | FID↓
Progressive GAN‡ (Karras et al., 2018) | 8.3
SNGAN† (Miyato et al., 2018) (from (Chen et al., 2019)) | 16.0
SSGAN† (Chen et al., 2019) | 13.3
StyleALAE¶ (Pidhorskyi et al., 2020) Reconstruction | 15.92
StyleALAE¶ (Pidhorskyi et al., 2020) Sampling | 17.13
DC-VAE† (ours) Reconstruction | 10.57
DC-VAE† (ours) Sampling | 14.3
5.5 LATENT SPACE REPRESENTATION: CLASSIFICATION To show that our model learns a good representation, we measure performance on a downstream MNIST classification task (Ding et al., 2020). The VAE models were trained on the MNIST dataset (LeCun, 1998). We feed input images into our VAE encoder to obtain the latent representation, and then train a linear classifier on the latent representation to predict the classes of the input images. The results in Table 5 show that our model gives the lowest classification error in most cases. This experiment demonstrates that our model not only gains the ability to do faithful synthesis and reconstruction, but also gains better representation ability on the VAE side.

6 CONCLUSION In this paper, we have proposed the dual contradistinctive generative autoencoder (DC-VAE), a new framework that integrates an instance-level discriminative loss (InfoNCE) with a set-level adversarial loss (GAN) in a single variational autoencoder framework. Our experiments show competitive results for a single model on several tasks, including image synthesis, image reconstruction, representation learning for image interpolation, and representation learning for classification. DC-VAE points to an encouraging direction that attains high-quality synthesis (decoding) and inference (encoding).

A APPENDIX A.1 Additional reconstruction results A.2 Analysing the latent space In this section we analyse the smoothness of the latent space learnt by DC-VAE. In Figure 9 we qualitatively show high-resolution (512×512) CelebA-HQ (Karras et al., 2018) images generated by evenly spaced linear blending between two latent vectors. In Fig. 6 we show that DC-VAE is able to perform meaningful attribute editing on images while retaining the original identity. To perform image editing, we first need to compute the direction vector in the latent space that corresponds to a desired attribute (e.g. has glasses, has blonde hair, is a woman, has facial hair). We compute these attribute direction vectors by selecting 20 images that have the attribute and 20 images that do not, obtaining the corresponding two sets of 20 latent vectors, and calculating the difference of their means. The results in Fig. 6 show that these direction vectors can be added to a latent vector to add a diverse combination of desired image attributes while retaining the original identity of the individual. Additionally, we corroborate the above qualitative results quantitatively by inspecting the Perceptual Path Length (PPL) (Karras et al., 2019) of our learned DC-VAE decoder (Tab. 6) to measure the disentanglement of the latent space. We note that although Progressive GAN (our base model) has a better FID score, DC-VAE has a lower PPL score, which indicates that the latent space it learns is more disentangled. A.3 Effect of negative samples In this section we analyse the effect of varying the number of negative samples used for contrastive learning. Figure 11 shows the reconstruction error on the CIFAR-10 (Krizhevsky et al., 2009) test set as the number of negative samples is varied. We observe that a higher number of negative samples results in better reconstruction. We choose 8096 for all of our experiments because of memory constraints. A.4 Datasets used CIFAR-10 comprises 50,000 training images and 10,000 test images at a spatial resolution of 32×32. STL-10 is a similar dataset that contains 5,000 training images and 100,000 unlabeled images at 96×96 resolution.
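A minimal sketch of the attribute-editing procedure in A.2 follows: the direction vector is the difference between the mean latent of 20 images with the attribute and 20 without, and it is added to a target latent before decoding. The edit `strength` parameter is a hypothetical knob not specified in the text.

```python
# A minimal sketch of latent attribute editing via mean-difference
# directions; shapes and the `strength` knob are illustrative assumptions.
import torch

def attribute_direction(z_with, z_without):
    # z_with, z_without: (20, latent_dim) latents encoded from the two sets.
    return z_with.mean(dim=0) - z_without.mean(dim=0)

def edit(z, direction, strength=1.0):
    # The shifted latent is then passed through the decoder to obtain
    # the edited image.
    return z + strength * direction
```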
We follow the procedure in AutoGAN (Gong et al., 2019) and resize the STL-10 images to 32×32. The CelebA dataset has 162,770 training images and 19,962 test images, CelebA-HQ contains 29,000 training images and 1,000 test images of size 1024×1024, and LSUN Bedroom has approximately 3M images. For the progressive training, we resize all images in these three datasets progressively from 4×4 to 512×512.

A.5 Network architecture diagrams In Figure 15 we show the detailed network architecture of DC-VAE for an input resolution of 32×32. Note that the comparison results shown in Figure 3 and Table 1 in the main paper, for VAE, VAE/GAN, VAE-Contrastive, and our proposed DC-VAE, are all based on the same network architecture (shown in Figure 15 here) for a fair comparison. The network architectures shown in Figure 15 are adapted closely from the networks discovered by (Gong et al., 2019) through neural architecture search. The DC-VAE developed in our paper is not tied to any particular CNN architecture; we choose the AutoGAN architecture (Gong et al., 2019) to start from a strong baseline. The decoder in Figure 15 matches the generator in (Gong et al., 2019). The encoder is built by modifying the output shape of the final linear layer in the discriminator of AutoGAN (Gong et al., 2019) to match the latent dimension and adding spectral normalization. The discriminator is used both for classifying real/fake images and for contrastive learning. For each layer we choose, we first apply a 1×1 convolution and a linear layer, and then use this feature as an input to the contrastive module. For experiments at 32×32, we pick two different positions: the output of the second residual conv block (lower level) and the output of the first linear layer (higher level). For experiments on higher-resolution datasets, we use a Progressive GAN (Karras et al., 2018) generator and discriminator as our backbone and apply similar modifications as described above.

A.6 Training infrastructure A.7 Further details about the representation learning experiments As seen in Table 5 in the main paper, we show the representation capability of DC-VAE following the procedure outlined in (Ding et al., 2020). We train our model on the MNIST dataset (LeCun, 1998) and measure the transferability through a classification task on the latent embedding vector. Specifically, we first pretrain the DC-VAE model on the training split of the MNIST dataset. Following that, we freeze the DC-VAE model and train a linear classifier that takes the latent embedding vector as input and predicts the class label of the original image.
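As a concrete reading of A.7, here is a minimal sketch of the linear-probe evaluation: the pretrained encoder is frozen and a single linear layer is trained on the latent embedding to predict MNIST labels. The encoder interface (returning the latent mean and log-variance), the data `loader`, and all hyperparameters are assumptions for illustration.

```python
# A minimal sketch of the linear-probe protocol: freeze the encoder and
# train a linear classifier on the latent means; all settings are assumed.
import torch
import torch.nn.functional as F

def train_linear_probe(encoder, loader, latent_dim=128, n_classes=10,
                       epochs=10, lr=1e-3, device="cpu"):
    probe = torch.nn.Linear(latent_dim, n_classes).to(device)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    encoder.eval()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():              # the encoder stays frozen
                mu, _ = encoder(x.to(device))  # use the latent mean as input
            loss = F.cross_entropy(probe(mu), y.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```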
1. What is the main contribution of the paper, and how does it differ from other approaches in the field?
2. How effective are the proposed methods in achieving sharp output images, representation learning, and sampling?
3. What are the limitations of the experimental results presented in the paper?
4. How do the authors address the issue of cherry-picking in their experiments?
5. What is the significance of comparing the model's performance at different resolutions?
6. How does the model's performance compare to other relevant prior work, such as Dumoulin et al. (2018) or Donahue et al. (2018)?
7. Can the authors provide additional information about the FID computation for reconstructions?
8. Can the authors clarify their use of ProgressiveGAN for the 128x128 architecture?
9. Would it be possible to show latent interpolations or Perceptual Path Length measurements to evaluate the smoothness of the latent space?
Review
Review: DUAL CONTRADISTINCTIVE GENERATIVE AUTOENCODER

Summary: The paper presents a new generative model called DC-VAE, which leverages instance-level and set-level contrastive losses together to simultaneously achieve sharp output images in reconstruction and sampling, as well as good representation learning. The results appear to be similar to or better than other comparable approaches like VAE-GAN (Larsen et al., 2016), which in itself improves upon VAE.

##########################################################################

Reasons for score: While I appreciate the idea and the results look promising, a closer examination of the results raises several questions. These questions must be addressed in order for me to raise the score.

##########################################################################

Pros: The model is interesting, the idea is well justified, and to my knowledge this specific combination of ideas is novel, though the building blocks are largely well known. The presentation is clear. The model is general-purpose and hence potentially has a wide range of applications. There are indications of good performance.

##########################################################################

Cons: The experiments do not seem to form a coherent whole. Some results are only given in a qualitative (visual) way, which leaves room for heavy cherry-picking (the faithfulness of LSUN reconstructions is only indicated by 3 images in Fig. 1, as are the reconstructions of CIFAR in Fig. 4, and the IntroVAE comparison). The faithfulness of reconstructions should be quantitatively measured if the authors want it to carry weight. Some claims seem to imply that results at very different resolutions are comparable. One cannot make this assumption. The IntroVAE comparison is problematic. First, the IntroVAE paper itself does not even claim that their reconstructions retain the identity of the face, but only the general topology, and thus they often lose the identity. So it is a problematic comparison point to begin with, since the authors do not discuss the identity question at all. Second, the authors compare 128x128 results of their model to 1024x1024 results of IntroVAE, and instead of granting that the capacity to learn a high resolution may cause more loss of details, they make the contrary claim that their model is "already better" at the low resolution. I do not agree with this interpretation. You cannot compare models that have this large a difference in resolution and draw that kind of conclusion. Smoothness of the latent space is not evaluated in any way (e.g. by interpolations). Minor: There could be many more comparison points, and it is not clear why these exact ones were chosen. E.g. [1-6] seem to be relevant prior work, and some of them are potential baselines. Either (Donahue et al., 2018) or (Dumoulin et al., 2018) could also be better contrasted to or used as a baseline.

##########################################################################

Questions during the rebuttal period: How is the FID of reconstructions computed exactly? Is the reconstructed set of (how many?) samples compared against the set of original samples? What do you mean by using ProgressiveGAN for the 128x128 architecture in 5.1? Do you mean you just use their 128x128 structure in a static manner, without the progressive growing? Can you also show latent interpolations or Perceptual Path Length (Karras et al., 2019) measurements?

[1] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow. Adversarial autoencoders.
In International Conference on Learning Representations (ICLR), 2016. [2] L. Mescheder, S. Nowozin, and A. Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. In International Conference on Machine Learning (ICML), pages 2391–2400, 2017. [3] A. Heljakka, A. Solin, and J. Kannala. Pioneer networks: Progressively growing generative autoencoder. In Asian Conference on Computer Vision (ACCV), pages 22–38, 2018. [4] D. Ulyanov, A. Vedaldi, and V. Lempitsky. It takes (only) two: Adversarial generator-encoder networks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 1250–1257, 2018.
ICLR
Title Dual Contradistinctive Generative Autoencoder Abstract We present a new generative autoencoder model with dual contradistinctive losses to improve generative autoencoder that performs simultaneous inference (reconstruction) and synthesis (generation). We name our model dual contradistinctive generative autoencoder (DC-VAE) that integrates an instance-level discriminative loss (maintaining the instance-level fidelity for the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity for the reconstruction/synthesis), both being contradistinctive. There also exists a mathematical connection between the instance-based classification and instance-level conditional distribution. DC-VAE achieves competitive results in three tasks, including image synthesis, image reconstruction, and representation learning. DC-VAE is applicable to various tasks in computer vision and machine learning. (a) DC-VAE (ours) Reconstruction results. Left: 128× 128. Right: 512× 512. (b) DC-VAE (ours) Sampling results. Left: 128× 128. Right: 512× 512. Figure 1: DC-VAE Reconstruction (top) and Sampling (bottom) on LSUN Bedroom Yu et al. (2015) at resolution 128× 128 (left) and CelebA-HQ (Karras et al., 2018) at resolution 512× 512 (right). 1 INTRODUCTION Tremendous progress has been made in deep learning for the development of various learning frameworks (Krizhevsky et al., 2012; He et al., 2016; Goodfellow et al., 2014; Vaswani et al., 2017). Autoencoder (AE) (LeCun, 1987; Hinton & Zemel, 1994) aims to compactly represent and faithfully reproduce the original input signal by concatenating an encoder and a decoder in an end-to-end learning framework. The goal of AE is to make the encoded representation semantically efficient and sufficient to reproduce the input signal by its decoder. Autoencoder’s generative companion, variational autoencoder (VAE) (Kingma & Welling, 2014), additionally learns a variational model for the latent variables to capture the underlying sample distribution. For the encoder and decoder models separately, tremendous progress has been made in image classification with deep convolutional neural network (CNN) (Krizhevsky et al., 2012; He et al., 2016) (an encoder) and in image generation with generative adversarial network (GAN) (Goodfellow et al., 2014) (a decoder). The key objective for a generative autoencoder is to maintain two types of fidelities: (1) an instancelevel fidelity to make the reconstruction/synthesis faithful to the individual input data sample, and (2) a set-level fidelity to make the reconstruction/synthesis of the decoder faithful to the entire input data set. The VAE/GAN algorithm (Larsen et al., 2016) combines a reconstruction loss with an adversarial loss. However, the result of VAE/GAN is sub-optimal, as shown in Table 1. The pixel-wise reconstruction loss in the standard VAE (Kingma & Welling, 2014) typically results in blurry images with degenerated semantics. A possible solution to resolving the above conflict lies in two aspects: (1) turning the measure in the pixel space into induced feature space that is more semantically meaningful; (2) changing the L2 distance (per-pixel) into a learned instance-level distance function for the entire image (akin to generative adversarial networks which learn set-level distance functions). Taking these two steps allows us to design an instance-level classification loss that is aligned with the adversarial loss in the GAN model enforcing set-level fidelity. 
Motivated by the above observations, we develop a new generative autoencoder model with dual contradistinctive losses by adopting a discriminative loss performing instance-level classification (enforcing the instance-level fidelity), which is rooted in metric learning (Kulis et al., 2012) and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; van den Oord et al., 2018). Combined with the adversarial losses for the set-level fidelity, both terms are formulated in the induced feature space performing contradistinction: (1) the instance-level contrastive loss considers each input instance (image) itself as a class, and (2) the set-level adversarial loss treats the entire input set as a positive class. We name our method dual contradistinctive generative autoencoder (DC-VAE) and make the following contributions. • We develop a new algorithm, dual contradistinctive generative autoencoder (DC-VAE), by combining instance-level and set-level classification losses in the VAE framework, and systematically show the significance of these two loss terms in DC-VAE. • The effectiveness of DC-VAE is illustrated in three tasks altogether, including image synthesis, image reconstruction, and representation learning. 2 RELATED WORK Related work can be roughly divided into three categories: (1) generative autoencoder, (2) deep generative model, and (3) contrastive learning. Variational autoencoder (VAE) (Kingma & Welling, 2014) points to an exciting direction of generative models by developing an Evidence Lower BOund (ELBO) objective (Higgins et al., 2017; Ding et al., 2020). However, the VAE reconstruction/synthesis is known to be blurry. To improve the image quality, a sequence of VAE based models have been developed (Larsen et al., 2016; Dumoulin et al., 2017; Huang et al., 2018; Brock et al., 2018; Zhang et al., 2019). VAE/GAN (Larsen et al., 2016) adopts an adversarial loss to improve the quality of the image, but its output for both reconstruction and synthesis (new samples) is still unsatisfactory. IntroVAE Huang et al. (2018) adds a loop from the output back to the input and is able to attain image quality that is on par with some modern GANs in some aspects. However, its full illustration for both reconstruction and synthesis is unclear. PGA (Zhang et al., 2019) adds a constraint to the latent variables. Pioneering works of (Tu, 2007; Gutmann & Hyvärinen, 2012) alleviate the difficulty of learning densities by approximating likelihoods via classification (real (positive) samples vs. fake (pseudonegative or adversarial) samples). Generative adversarial network (GAN) (Goodfellow et al., 2014) builds on neural networks and amortized sampling (a decoder network that maps a noise into an image). The subsequent development after GAN (Radford et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017; Karras et al., 2018; Gong et al., 2019; Dumoulin et al., 2017; Donahue et al., 2017) has led to a great leap forward in building decoder-based generative models. It has been widely observed that the adversarial loss in GANs contributes significantly to the improved quality of image synthesis. Energy-based generative models (Salakhutdinov & Hinton, 2009; Xie et al., 2016; Jin et al., 2017; Lee et al., 2018) — which aim to directly model data density — are making a steady improvement for a simultaneously generative and discriminative single model. 
From another angle, contrastive learning (Hadsell et al., 2006; Wu et al., 2018; He et al., 2020; Chen et al., 2020) has lately shown its particular advantage in unsupervised training of CNN features. It overcomes the limitation in unsupervised learning where class label is missing by turning each image instance into one class. Thus, the softmax function in the standard discriminative classification training can be applied. Contrastive learning can be connected to metric learning (Bromley et al., 1993; Chopra et al., 2005; Chechik et al., 2010). In this paper, we aim to improve VAE (Kingma & Welling, 2014) by introducing a contrastive loss (van den Oord et al., 2018) to address instance-level fidelity between the input and the reconstruction in the induced feature space. Unlike in self-supervised representation learning methods (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020), where self-supervision requires generating a transformed input (via data augmentation operations), the reconstruction naturally fits into the contrastive term that encourages the matching between the reconstruction and the input image instance, while pushing the reconstruction away from the rest of the images in the entire training set. Thus, the instance-level and set-level contradistinctive terms collaborate with each to encourage the high fidelity of the reconstruction and synthesis. In Figure 3, we systematically show the significance of with and without the instance-level and the set-level contradistinctive terms. In addition, we explore multi-scale contrastive learning via two schemes in Section 4.1: 1) deep supervision for contrastive learning in different convolution layers, and 2) patch-based contrastive learning for fine-grained data fidelity. In the experiments, we show competitive results for the proposed dual contradistinctive generative autoencoder (DC-VAE) in a number of benchmarks for three tasks, including image synthesis, image reconstruction, and representation learning. 3 PRELIMINARIES: VAE AND VAE/GAN Variational autoencoder (VAE) Assume a given training set S = {xi}ni=1 where each xi ∈ Rm. We suppose that each xi is sampled from a generative process p(x|z). In the literature, vector z refers to latent variables. In practice, latent variables z and the generative process p(x|z) are unknown. The objectives of a variational autoencoder (VAE) (Kingma & Welling, 2014) is to simultaneously train an inference network qφ(z|x) and a generator network pθ(x|z). In VAE (Kingma & Welling, 2014), the inference network is a neural network that outputs parameters for Gaussian distribution qφ(z|x) = N (µφ(x),Σφ(x)). The generator is a deterministic neural network fθ(z) parameterized by θ. Generative density pθ(x|z) is assumed to be subject to a Gaussian distribution: pθ(x|z) = N (fθ(z), σ2I). These models can be trained by minimizing the negative of evidence lower bound (ELBO) in Eq. (1) below. LELBO(θ,φ;x) = −Ez∼qφ(z|x)[log(pθ(x|z))] +KL[qφ(z|x)||p(z)] (1) where p(z) is the prior, which is assumed to be N (0, I). The first term −Eqφ(z|x)[log(pθ(x|z))] reduces to standard pixel-wise reconstruction loss Eqφ(z|x)[||x− fθ(z)||22] (up to a constant) due to the Gaussian assumption. The second term is the regularization term, which prevents the conditional qφ(z|x) from deviating from the Gaussian prior N (0, I). The inference network and generator network are jointly optimized over training samples by: min θ,φ E x∼pdata(x) LELBO(θ,φ;x). (2) where pdata is the distribution induced by the training set S. 
VAE has an elegant formulation. However, it relies on a pixel-wise reconstruction loss, which is known not ideal to be reflective of perceptual realism (Johnson et al., 2016; Isola et al., 2017), often resulting in blurry images. From another viewpoint, it can be thought of as using a kernel density estimator (with an isotropic Gaussian kernel) in the pixel space. Although allowing efficient training and inference, such a non-parametric approach is overly simplistic for modeling the semantics and perception of natural images. VAE/GAN Generative adversarial networks (GANs) (Goodfellow et al., 2014) and its variants (Radford et al., 2016), on the other hand, are shown to be producing highly realistic images. The success was largely attributed to learning a fidelity function (often referred to as a discriminator) that measures how realistic the generated images are. This can be achieved by learning to contrast (classify) the set of training images with the set of generated images (Tu, 2007; Gutmann & Hyvärinen, 2012; Goodfellow et al., 2014). VAE/GAN (Larsen et al., 2016) augments the ELBO objective (Eq. (2)) with the GAN objective. Specifically, the objective of VAE/GAN consists of two terms, namely the modified ELBO (Eq. (3)) and the GAN objective. To make the notations later consistent, we now define the set of given training images as S = {xi}ni=1 in which a total number of n unlabeled training images are present. For each input image xi, the modified ELBO computes the reconstruction loss in the feature space of the discriminator instead of the pixel space: LELBO(θ,φ, D;xi) = Ez∼qφ(z|xi)[||FD(xi)− FD(fθ(z))|| 2 2] +KL[qφ(z|xi)||p(z)] (3) where FD(·) denotes the feature embedding from the discriminator D. Feature reconstruction loss (also referred to as perceptual loss), similar to that in style transfer (Johnson et al., 2016). The modified GAN objective considers both reconstructed images (latent code from qφ(z|x)) and sampled images (latent code from the prior p(z)) as its fake samples: LGAN(θ,φ, D;xi) = logD(xi)+Ez∼qφ(z|xi) log(1−D(fθ(z))+Ez∼p(z) log(1−D(fθ(z)). (4) The VAE/GAN objective becomes: min θ,φ max D n∑ i=1 [LELBO(θ,φ, D;xi) + LGAN(θ,φ, D;xi)] . (5) 4 DUAL CONTRADISTINCTIVE GENERATIVE AUTOENCODER (DC-VAE) Here we want to address a question: Is the degeneration of the synthesized images by VAE always the case once the decoder is joined with an encoder? Can the problem be remedied by using a more informative loss? Although improving the image qualities of VAE by integrating a set-level contrastive loss (GAN objective of Eq. (4)), VAE/GAN still does not accurately model instance-level fidelity. Inspired by the literature on instance-level classification (Malisiewicz et al., 2011), approximating likelihood by classification (Tu, 2007), and contrastive learning (Hadsell et al., 2006; Wu et al., 2018; He et al., 2020), we propose to model instance-level fidelity by contrastive loss (commonly referred to as InfoNCE loss) (van den Oord et al., 2018). In DC-VAE, we perform the following minimization and loosely call each term a loss. Linstance(θ,φ, D; i, {xj}nj=1) , −Ez∼qφ(z|xi) [ log eh(xi,fθ(z))∑n j=1 e h(xj ,fθ(z)) ] , (6) where i is an index for a training sample (instance), {xj}nj=1 is the union of positive samples and negative samples, h(x,y) is the critic function that measures compatibility between x and y. Following the popular choice from (He et al., 2020), h(x,y) is the cosine similarity between the embeddings of x and y: h(x,y) = FD(x) >FD(y) ||FD(x)||2||FD(y)||2 . 
Note that unlike in contrastive self-supervised learning methods (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020) where two views (independent augmentations) of an instance constitutes a positive pair, an input instance xi and its reconstruction fθ(z) comprises a positive pair in DC-VAE. Likewise, the reconstruction fθ(z) and any instance that is not xi can be a negative pair. To bridge the gap between the instance-level contrastive loss (Eq. (6)) and log-likelihood in ELBO term (Eq. (1)), we show the following observation. Theorem 1 (From (Ma & Collins, 2018; Poole et al., 2019)) The following objective is minimized, i.e., the optimal critic h is achieved, when h(fθ(z),x) = log p(x|z) + c(x) where c(x) is any function that does not depend on z. INCE , Ex1,···xKEi[Linstance(θ,φ, D; i, {xj}nj=1)]. (7) From Theorem 1, we see that the contrastive loss of Eq. (6) implicitly estimates the log-likelihood log pθ(x|z) required for the evidence lower bound (ELBO). Hence, we modify the ELBO objective of Eq. (1) as follows and name it as implicit ELBO (IELBO): LIELBO(θ,φ, D;xi) = Linstance(θ,φ, D; i, {xj}nj=1) +KL[qφ(z|xi)||p(z)]. (8) Finally, the combined objective for the proposed DC-VAE algorithm becomes: min θ,φ max D n∑ i=1 [LIELBO(θ,φ, D;xi) + LGAN(θ,φ, D;xi)] . (9) The definition of LGAN follows Eq. (4). Note here we also consider the term in Eq. (4) as contrasdistinctive since it tries to minimize the difference/discriminative classification between the input (“real”) image set and the reconstructed/generated (“fake”) image set. Below we highlight the significance of the two contradistinctive terms. Figure 2 shows the model architecture. • Instance-level fidelity. The first item in Eq. (8) is an instance-level fidelity term encouraging the reconstruction to be as close as possible to the input image while being different from all the rest of the images. A key advantage of the contrastive loss in Eq. (8) over the standard reconstruction loss in Eq. (3) is its relaxed and background instances aware formulation. In general, the reconstruction in Eq. (3) wants a perfect match between the reconstruction and the input, whereas the contrastive loss in Eq. (8) requests for being the most similar one among the training samples. This way, the contrastive loss becomes more cooperative with less conflict to the GAN loss, compared with the reconstruction loss. The introduction of the contrastive loss results in a significant improvement over VAE and VAE/GAN in which only matching the reconstruction, and the input instance is enforced. • Set-level fidelity. The second item in Eq. (9) is a set-level fidelity term encouraging the entire set of synthesized images to be non distinguishable from the input image set. Having this term (Eq. (4)) is still important since the instance contrastive loss alone (Eq. (9)) will also lead to a degenerated situation: the input image and its reconstruction can be projected to the same point in the new feature space, but without a guarantee that the reconstruction itself lies on the valid “real” image manifold. As shown in Figure 3 and Table 1 for the comparison with and without the individual terms in Eq. (9). We observe evident effectiveness of the proposed DC-VAE combining both the instance-level fidelity term (Eq. (6)) and the set-level fidelity term (Eq. 
(4)), compared with VAE (using pixel-wise reconstruction loss without the GAN objective), VAE-GAN (using feature reconstruction loss and the GAN objective), and VAE contrastive (using contrastive loss but without the GAN objective). In the experiments, we show that both terms required to achieve faithful reconstruction (captured by InfoNCE loss) with perceptual realism (captured by the GAN loss). 4.1 MULTI SCALE CONTRASTIVE LEARNING Inspired by (Lee et al., 2015), we utilize information from feature maps at different scales. In addition to contrasting on the last layer of D in Equation 9, we add contrastive objective on fl(z) where fl is some function on top of an intermediate layer l of D. We do it in two different ways. 1. Deep supervision: We use 1×1 convolution to reduce the dimension channel-wise, and use a linear layer to obtain fl. 2. Local patch: We use a random location across channel at layer l (size: 1×1×d, where d is the channel depth). The intuition for the second is that in a convolutional neural network, one location at a feature map corresponds to a receptive area (patch) in the original image. Thus, by contrasting locations across channels in the same feature maps, we are encouraging the original image and the reconstruction to image have locally similar content, while encouraging them to have locally dissimilar content in other images. We use deep supervision for initial training, and add local patch after certain iterations. 5 EXPERIMENTS 5.1 IMPLEMENTATION Datasets To validate our method, we train our method on several different datasets — CIFAR-10 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), CelebA (Liu et al., 2015), CelebA-HQ (Karras et al., 2018), and LSUN bedroom (Yu et al., 2015). See the appendix for more detailed descriptions. Network architecture For 32× 32 resolution, we design the encoder and decoder subnetworks of our model in a similar way to the discriminator and generator found through neural architecture search in AutoGAN (Gong et al., 2019). For the higher resolution experiments (128 × 128 and 512 × 512 resolution), we use Progressive GAN (Karras et al., 2018) as the backbone. Network architecture diagram is available in the appendix. Training details The number of negative samples for contrastive learning is 8096 for all datasets. The latent dimension for the VAE decoder is 128 for CIFAR-10, STL-10, and 512 for CelebA, CelebA-HQ and LSUN Bedroom. Learning rate is 0.0002 with Adam parameters of (β1, β2) = (0.0, 0.9) and a batch size of 128 for CIFAR-10 and STL-10. For CelebA, CelebA-HQ, LSUN Bedroom datasets, we use the optimizer parameters given in (Karras et al., 2018). The contrastive embedding dimension used is 16 for each of the experiments. 5.2 ABLATION STUDY To demonstrate the necessity of the GAN loss (Eq. 4) and contrastive loss (Eq. 8), we conduct four experiments with the same backbone. These experiments are: VAE (No GAN, no Contrastive), VAE/GAN (with GAN, no Contrastive), VAE-Contrastive (No GAN, with Contrastive, and ours (With GAN, with Contrastive). Here, GAN denotes Eq. 4, and Contrastive denotes Eq. 8. Qualitative analysis From Figure 3, we see that without GAN and contrastive, images are blurry; Without GAN, the contrastive head can classify images, but not on the image manifold; Without Contrastive, reconstruction images are on the image manifold because of the discriminator, but they are different from input images. 
These experiments show that it is necessary to combine both instance-level and set-level fidelity, and in a contradistinctive manner. Quantitative analysis In Table 1 we observe the same trend. VAE generates blurry images; thus the FID/IS (Inception Score) is not ideal. VAE-Contrastive does not generate images on the natural manifold; thus FID/IS is poor. VAE/GAN combines set-level and instance-level information. However the L2 objective is not ideal; thus the FID/IS is sub-optimal. For both reconstruction and sampling tasks, DC-VAE generates high fidelity images and has a favorable FID and Inception score. This illustrates the advantange of having a contradistinctive objective on both set level and instance level. To measure the faithfulness of the reconstructed image we compute the pixelwise L2 distance and the perceptual distance (Johnson et al. (2016)). For the pixel distance, VAE has the lowest value because it directly optimizes this distance during training; our pixel-wise distance is better than VAE/GAN and VAE-Contrastive. For perceptual distance, our method outperforms other three, which confirms that using contrastive learning helps reconstruct images semantically. 5.3 COMPARISON TO EXISTING GENERATIVE MODELS Table 2 gives a comparison of quantitative measurement for CIFAR-10 and STL-10 dataset. In general, there is a large difference in terms of FID and IS between GAN family and VAE family of models. Our model has state-of-the-art results in VAE family, and is comparable to state-of-the-art GAN models on CIFAR-10. Similarly Table 4 shows that DC-VAE is able to generate images that are comparable to GAN based methods even on higher resolution datasets. 5.4 LATENT SPACE REPRESENTATION: IMAGE AND STYLE INTERPOLATION We further validate the effectiveness of DC-VAE for representation learning. One benefit of having an AE/VAE framework compared with just a decoder as in GAN Goodfellow et al. (2014) is to be able to directly obtain the latent representation from the input images. The encoder and decoder modules in VAE allows us to readily perform image/style interpolation by mixing the latent variables of different images and reconstruct/synthesize new ones. We demonstrate qualitative results on image interpolation (Fig. 5, Appendix Fig. 9), style interpolation (Appendix Fig. 10) and image editing (Fig. 6). We directly use the trained DC-VAE model without disentanglement learning Karras et al. (2019). Additional latent space analysis and the method used for interpolation and editing is Table 3: Quality of image generation (FID) comparison on LSUN Bedrooms. †128×128 resolution. ¶256×256 resolution. ↓ means lower is better. Method FID↓ Progressive GAN‡ (Karras et al., 2018) 8.3 SNGAN† (Miyato et al., 2018) (from (Chen et al., 2019)) 16.0 SSGAN†(Chen et al., 2019) 13.3 StyleALAE¶ (Pidhorskyi et al., 2020) Reconstruction 15.92 StyleALAE¶ (Pidhorskyi et al., 2020) Sampling 17.13 DC-VAE† (ours) Reconstruction 10.57 DC-VAE† (ours) Sampling 14.3 provided in the Appendix. We also quantitatively compare the latent space disentanglement through the perceptual path length (PPL) (Karras et al., 2019) (Table 6). We observe that DC-VAE learns a more disentangled latent space representation than the backbone Progressive GAN (Karras et al., 2018) and StyleALAE (Pidhorskyi et al., 2020) that uses a much more capable StyleGAN (Karras et al., 2019) backbone. 
5.5 LATENT SPACE REPRESENTATION: CLSSIFICATION To show that our model learns a good representation, we measure the performance on the downstream MNIST classification task (Ding et al., 2020). The VAE models were trained on MNIST dataset (LeCun, 1998). We feed input images into our VAE encoder and get the latent representation. Then we train a linear classifier on the latent representation to classify the classes of the input images. Results in Table 5 show that our model gives the lowest classification error in most cases. This experiment demonstrates that our model not only gains the ability to do faithful synthesis and reconstruction, but also gains better representation ability on the VAE side. 6 CONCLUSION In this paper, we have proposed dual contradistinctive generative autoencoder (DC-VAE), a new framework that integrates an instance-level discriminative loss (InfoNCE) with a set-level adversarial loss (GAN) into a single variational autoencoder framework. Our experiments show competitive results for a single model in several tasks, including image synthesis, image reconstruction, representation learning for image interpolation, and representation learning for classification. DC-VAE points to a encouraging direction that attains high-quality synthesis (decoding) and inference (encoding). A APPENDIX A.1 Additional reconstruction results A.2 Analysing the latent space In this section we analyse the smoothness of the latent space learnt by DC-VAE. In Figure 9 we qualitatively show the high resolution (512× 512) CelebA-HQ Karras et al. (2018) images generated by an evenly spaced linear blending between two latent vectors. In Fig. 6 we show that DC-VAE is able to perform meaningful attribute editing on images while retaining the original identity. To perform image editing, we first need to compute the direction vector in the latent space that correspond to a desired attribute (e.g. has glasses, has blonde hair, is a woman, has facial hair). We compute these attribute direction vectors by selecting 20 images that have the attribute and 20 images that do not have the attribute, obtaining the corresponding pairs of 20 latent vectors, and calculating the difference of the mean. The results in Fig. 6 show that these direction vectors can be added to a latent vector to add a diverse combination of desired image attributes while retaining the original identity of the individual. Additionally we corroborate the above qualitative results quantitatively by inspecting the Perceptual Path Length (PPL) Karras et al. (2019) of our learn DC-VAE Decoder (Tab. 6) to measures the disentanglement of the latent space. We note that although ProgressiveGAN (ours base model) has a better FID score, DC-VAE has a lower PPL score which indicated that the latent space learnt is more disentangled. A.3 Effect of negative samples In this section we analyse the effect of varying the number of negative samples used for contrastive learning. The figure 11 shows the reconstruction error on the CIFAR-10 Krizhevsky et al. (2009) test set as the negative samples is varied. We observe that a higher number of negative samples results in better reconstruction. We choose 8096 for all of our experiments because of memory constraints. A.4 Datasets used CIFAR-10 comprises 50,000 training images and 10,000 test images with a spatial resolution of 32 × 32. STL-10 is a similar dataset that contains 5,000 training images and 100,000 unlabeled images at 96× 96 resolution. 
A.4 Datasets used

CIFAR-10 comprises 50,000 training images and 10,000 test images with a spatial resolution of 32 × 32. STL-10 is a similar dataset that contains 5,000 training images and 100,000 unlabeled images at 96 × 96 resolution. We follow the procedure in AutoGAN (Gong et al., 2019) and resize the STL-10 images to 32 × 32. The CelebA dataset has 162,770 training images and 19,962 testing images, CelebA-HQ contains 29,000 training images with 1,000 test images of size 1024 × 1024, and LSUN Bedroom has approximately 3M images. We resize all images in these three datasets progressively from 4 × 4 to 512 × 512 for the progressive training.

A.5 Network architecture diagrams

In Figure 15 we show the detailed network architecture of DC-VAE for input resolutions of 32 × 32. Note that the comparison results shown in Figure 3 and Table 1 in the main paper, for VAE, VAE/GAN, VAE w/o GAN, and our proposed DC-VAE, are all based on the same network architecture (shown in Figure 15 here) for a fair comparison. The network architectures shown in Figure 15 are adapted closely from the networks discovered by Gong et al. (2019) through neural architecture search. The DC-VAE developed in our paper is not tied to any particular CNN architecture; we choose the AutoGAN architecture (Gong et al., 2019) to start with a strong baseline. The decoder in Figure 15 matches the generator in Gong et al. (2019). The encoder is built by modifying the output shape of the final linear layer in the discriminator of AutoGAN (Gong et al., 2019) to match the latent dimension and adding spectral normalization. The discriminator is used both for classifying real/fake images and for contrastive learning. For each layer we choose, we first apply a 1×1 convolution and a linear layer, and then use this feature as the input to the contrastive module. For the experiments at 32 × 32, we pick two different positions: the output of the second residual conv block (lower level) and the output of the first linear layer (higher level). For the experiments on higher-resolution datasets we use a Progressive GAN (Karras et al., 2018) generator and discriminator as our backbone and apply similar modifications as described above.

A.6 Training infrastructure

A.7 Further details about the representation learning experiments

As seen in Table 4 in the main paper, we show the representation capability of DC-VAE following the procedure outlined in Ding et al. (2020). We train our model on the MNIST dataset (LeCun, 1998) and measure the transferability through a classification task on the latent embedding vector. Specifically, we first pretrain the DC-VAE model on the training split of the MNIST dataset. Following that, we freeze the DC-VAE model and train a linear classifier that takes the latent embedding vector as input and predicts the class label of the original image.
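A minimal sketch of this linear-probe protocol, assuming a pretrained and frozen encoder; the variable names are placeholders rather than the authors' code.

import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, latent_dim, num_classes=10, epochs=10):
    # Freeze the pretrained DC-VAE encoder and train only a linear classifier
    # on the latent embeddings, as in the MNIST transfer experiment.
    for p in encoder.parameters():
        p.requires_grad_(False)
    clf = nn.Linear(latent_dim, num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                z = encoder(x)
            loss = loss_fn(clf(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf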
1. What is the main contribution of the paper, and how does it improve upon previous works? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to synthesize and reconstruct images? 3. How does the reviewer assess the clarity and coherence of the paper's writing and argumentation? 4. Are there any inconsistencies or unclear points in the paper's notation or explanations? 5. Do the experimental results effectively support the paper's claims, and are they sufficient to demonstrate the method's effectiveness? 6. Does the paper adequately address potential limitations or counterarguments regarding its proposed approach?
Review
Review This paper proposed to replace the ELBO loss in VAE-GAN with an instance-level contrastive loss to improve the performance of image synthesis and reconstruction. Experiments on multiple image benchmarks demonstrate the effectiveness of DC-VAE on image synthesis and reconstruction, and latent representation learning. Strengths: The improvements of DC-VAE on the benchmarks over VAE-GAN and other baselines are impressive. Weakness: The motivation for replacing the ELBO loss with the contrastive loss is not clearly introduced. Both losses work for instance-level fidelity. Why is the contrastive loss better than the ELBO loss? The paper is not clearly written, and some parts are confusing. For example, what is Theorem 1 telling us? I cannot understand it. Moreover, how is the contrastive loss (6) optimized in practice? Other comments and questions: Some notations are inconsistent. For example, in Eq (3), F_l() is denoted as the l-th layer feature embedding of D. But in Eq (6), D_l() is used. These inconsistent notations make the paper more difficult to understand. In the first sentence of the introduction, I cannot understand the three types of representation learning divided by the authors. Why are CNNs categorized as encoders and Transformers as autoencoders? In the last two sentences of the second paragraph, the authors claim "once an encoder and a decoder are joined together, the reconstruction/synthesis of the decoder often becomes degenerated." I don't believe that it is the joint learning of the encoder and decoder that leads to a degenerate decoder. And what is the relation of this sentence to the following one, "Note that Transformers, consisting of both encoder and decoder, are trained for temporal data with sequence-to-sequence prediction"? I cannot follow what the two sentences want to say. Overall, the contrastive loss proposed in this paper is interesting, and the experimental results are impressive. However, the paper is not ready to be published and needs to be revised fundamentally.
ICLR
Title Linearised Implicit Variational Inference

Abstract Bayesian neural networks (BNNs) are touted for robustness under data drift and resilience to overfitting and catastrophic forgetting, whilst also producing actionable uncertainty estimates. In variational inference, these elegant properties are contingent on the expressivity of the variational approximation. Posteriors over parameters of large models are usually multimodal and highly correlated and hence cannot be well-approximated by simple, prescribed densities. We posit that implicit variational distributions specified using differentiable generators are more flexible, and propose a novel bound for training BNNs using such approximations (amortized neural samplers). The proposed bound uses an approximation of the variational distribution's entropy obtained by locally linearising the generator. Unlike existing works, our method does not require a discriminator network and moves away from an unfavourable adversarial objective. Our formulation resembles normalizing flows but does not necessitate invertibility of the generator. Moreover, we use a differentiable numerical lower bound on the Jacobians of the generator, mitigating computational concerns. We report log-likelihoods on UCI datasets competitive with deep ensembles and test our method on out-of-distribution benchmarks.

1 INTRODUCTION

Deep neural networks are considered state of the art in numerous tasks in computer vision, speech and natural language processing. Scaling up neural architectures has led to outstanding performance on a myriad of generative and discriminative tasks, although some fundamental flaws remain. Neural networks are usually trained by maximising likelihood, resulting in a single best estimate of the parameters, which renders these models highly overconfident in their predictions, prone to adversarial attacks and unusable in risk-averse domains. Furthermore, their usage remains restricted in sequential learning applications due to catastrophic forgetting (McCloskey & Cohen, 1989) and in data-scarce regimes due to overfitting. When deployed in the wild, deep networks do not output a comprehensive measure of their uncertainty, prompting expert intervention. The Bayesian paradigm provides solutions to a number of these issues. In summary, Bayesian neural networks specify a prior distribution over parameters p(θ), and the neural network relates the parameters to the data D through a likelihood p(D|θ). The goal is to infer a conditional density over the parameters, called the posterior p(θ|D), given by Bayes' rule,

p(θ|D) = p(D|θ)p(θ) / p(D) = p(D|θ)p(θ) / ∫ p(D,θ) dθ. (1)

This conditional density provides a range of suitable parameters with a probability over them given by the dataset. After training, predictions from an ensemble of parameters (models) can then be combined, weighted by their posterior probability, forming a Bayesian model average (BMA). The variance of these aggregated predictions informs the user about the model's confidence in a particular prediction. Finding the normalization constant in eq. (1) is analytically intractable for large models, and hence there is a clear focus on approximate inference techniques. Various approaches have been proposed, including Markov chain Monte Carlo (MCMC, Neal, 1995), variational inference (VI, Saul et al., 1996; Peterson, 1987) and the Laplace approximation (Mackay, 1991).
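As a schematic illustration of the Bayesian model average described above, the sketch below averages predictions over posterior samples; predict and posterior_samples are placeholders for a predictive function and a sampler produced by any of the approximate inference schemes that follow.

import torch

def bayesian_model_average(predict, posterior_samples, x):
    # predict(theta, x) returns predictive probabilities for one parameter
    # sample; averaging over posterior samples gives the BMA prediction, and
    # the spread across samples serves as a confidence signal.
    probs = torch.stack([predict(theta, x) for theta in posterior_samples])
    return probs.mean(dim=0), probs.var(dim=0)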
Variational inference is a strategy that converts the inference problem into an optimisation over a family of distributions (the variational family), denoted hereafter by Q, indexed by variational parameters denoted by γ. We optimise γ using a lower bound on the marginal log-likelihood of the data log p(D), called the evidence lower bound (ELBO). Usually, we are computationally limited to choosing simple distribution families like an isotropic Gaussian distribution (Tanaka, 1998; Blundell et al., 2015). The true posterior is much more complex and is approximated poorly by such approximations (Foong et al., 2019; 2020). This issue is exacerbated in large models that contain many symmetries and correlations. Notably, there have been attempts to extend VI to more structured and expressive distributions (Saul & Jordan, 1995; Bishop et al., 1997; Louizos & Welling, 2016), yet capturing correlations between parameters with a flexible variational approximation remains the Achilles heel of this class of models.

We propose an approach based on implicit generative modelling, where the distribution over the variables of interest is implicit and can only be sampled. This is in contrast to usual VI methods that use prescribed distributions with explicit parametrisation as the approximating density over latent variables (Diggle & Gratton, 1984; Mohamed & Lakshminarayanan, 2016). Although this idea takes inspiration from GAN generators that try to recover the true data distribution, we do not require a discriminator network for training the generator and as a result do not suffer from the complications introduced by an adversarial objective. As emphasised by Tran et al. (2017), this is a more natural way of capturing the generative process instead of forcing it to conform to an assumed latent structure which could be misspecified. Similar to other works in implicit VI (Shi et al., 2018), we posit using general (non-invertible) stochastic transformations that can produce highly flexible implicit densities to model posteriors of neural networks. We believe that these approximations can better capture the intricacies of the posterior landscape. Additionally, when trying to model complicated densities in high dimensions, it is sensible to learn a sampler instead of the parameters of an expressive intractable approximation, especially if these approximations do not admit one-line samplers (Devroye, 1996). For example, EBMs can be very flexible but are not easy to sample from (Song & Kingma, 2021). If we were to use a fully correlated Gaussian to model the posterior of a neural network, we would need to optimise a number of parameters quadratic in the number of weights of the network, O(dim(θ)^2), to arrive at the optimal covariance matrix. In this work, we test our hypothesis of using an underparameterised generator to capture the important correlations in orders of magnitude fewer parameters than that. At the same time, we hint at the possibility that a constrained generator will probably avoid modelling redundancies present in BNN posteriors, like permutationally symmetric modes. Succinctly, our contributions are presented as follows:

• We derive a novel lower bound for variational inference in Bayesian neural networks using an implicit variational approximation, avoiding unstable minmax objectives.

• We augment this lower bound by reducing its compute requirement, as we substitute a differentiable numerical lower bound for the entropy term comprising Jacobians of neural networks.
• We comprehensively empirically evaluate the capacity of this implicit variational approximation and the quality of the posteriors inferred using different out-of-distribution benchmarks.

2 VARIATIONAL INFERENCE FOR BAYESIAN NEURAL NETWORKS

Consider the supervised learning setting, where we have a training set D = {(x_i, y_i)}_{i=1}^n, where X = {x_i}_{i=1}^n are the covariates (inputs) and y = {y_i}_{i=1}^n are the labels (outputs). We consider a Bayesian regression or classification model given by

p(D, θ) = p(θ)p(D|θ) = p(θ) ∏_{i=1}^n p(y_i|x_i, θ), (2)

where the likelihood is parameterised by θ ∈ Θ ≡ R^m. The objective function L in VI, called the ELBO, is a lower bound on the log marginal likelihood of the data, log p(D), and the discrepancy between the two is equal to the KL divergence between the approximate and true posterior, given by

D_KL[q_γ(θ)||p(θ|D)] = log p(D) − E_{θ∼q_γ(θ)}[log p(D, θ) / q_γ(θ)], (3)

where the second term on the right-hand side is the ELBO L(γ), and q_γ is the variational approximation of the posterior with parameters γ ∈ Γ. Since the KL divergence is non-negative, L is a lower bound on the evidence. This objective function can be written in terms of a likelihood term and a regularisation term as

L(γ) = E_{θ∼q_γ(θ)}[log p(D|θ)] − D_KL[q_γ(θ)||p(θ)] ≤ log p(D), (4)

where the first (likelihood) term promotes the variational approximation to model the data well and the second (regularisation) term keeps the posterior close to the prior. Since the log-evidence log p(D) does not depend on γ, minimising the KL divergence is equivalent to maximising the ELBO, i.e.,

argmin_γ D_KL[q_γ(θ)||p(θ|D)] ≡ argmax_γ L(γ). (5)

2.1 IMPLICIT VARIATIONAL INFERENCE

In implicit VI (IVI), the variational distribution is only implicitly defined through its generative process over the parameters θ,

z ∼ q(z), θ = g_γ(z), (6)

where q is a fixed base distribution and g_γ : R^d → R^m is a non-linear mapping, typically not a diffeomorphism. For IVI, the likelihood term from eq. (4) and its gradients can be estimated using Monte Carlo and the reparameterization trick. However, the regularisation term is more difficult, as it involves the entropy of q_γ:

D_KL[q_γ(θ)||p(θ)] = E_{θ∼q_γ(θ)}[log q_γ(θ) / p(θ)] = −H[q_γ] − E_{θ∼q_γ(θ)}[log p(θ)]. (7)

Generally, the entropy of the generative process in eq. (6) is not available in explicit form, as the density of the process is not tractable. A prevalent technique to estimate the regularisation term uses density ratio estimators based on a GAN-like discriminator (Sugiyama et al., 2012; Huszár, 2017), and Geng et al. (2021) have given a tractable and differentiable lower bound on this entropy. Furthermore, when the dimension d of the base distribution is smaller than m, the KL divergence is not well defined: in the KL divergence we integrate over the whole space Θ, but q_γ does not have full support over this space and exists on a manifold embedded in Θ. In the GAN literature this problem is called mode dropping and is caused by the inability of the generator to recover all modes of the true data distribution (Che et al., 2020; Xu et al., 2018). To alleviate this, we draw inspiration from works in the GAN literature (Che et al., 2020) and add m-dimensional noise to the output of the generator, redefining the variational approximation in the following section.
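As an illustration of the tractable part of the objective, the sketch below gives a reparameterised Monte Carlo estimate of the likelihood term in eq. (4) under the implicit model of eq. (6); generator and log_lik are placeholders for the hypernetwork and the data log-likelihood. The regularisation term is the difficult part and is handled in the following sections.

import torch

def likelihood_term(generator, log_lik, d, n_samples=5):
    # Sample base noise z ~ N(0, I_d), push it through the generator to get
    # theta = g_gamma(z), and average the data log-likelihood over samples.
    # Gradients w.r.t. the generator's parameters flow through theta.
    z = torch.randn(n_samples, d)
    thetas = generator(z)                 # shape (n_samples, m)
    return torch.stack([log_lik(t) for t in thetas]).mean()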
3 A DEEP LATENT VARIABLE MODEL AND ITS ENTROPY

As the variational distribution, we propose to use a Gaussian deep latent variable model (DLVM) of a real variable θ ∈ R^m with a real latent variable z ∈ R^d, with density

q(θ) = ∫ q(θ|z)q(z) dz = E_{z∼q(z)}[q(θ|z)]. (8)

We assume a Gaussian base density and a Gaussian output density, that is,

q(z) = N(z|0, I_d), (9)
q(θ|z) = N(θ|g_γ(z), σ^2 I_m), (10)

where g_γ : R^d → R^m is the decoder/generator and σ^2 ∈ R_+ is the fixed homoscedastic variance of the output density. In general, we do not have a closed form for q(θ) due to the integral in eq. (8) and the non-linear g_γ, but we note that the KL divergence in eq. (7) is well defined for this variational distribution. Below we propose a novel approximation of the differential entropy of this model. This model can equivalently be viewed as a variational autoencoder (VAE, Kingma & Welling, 2014; Rezende et al., 2014) with a Gaussian prior, a Gaussian output density with constant homoscedastic variance, and no encoder, or as the implicit distribution from eq. (6) with added Gaussian noise. The latter is clearly seen from the generative process of θ, which is

θ′ = g_γ(z), z ∼ N(0, I_d), (11)
θ = θ′ + η, η ∼ N(0, σ^2 I_m). (12)

3.1 DIFFERENTIAL ENTROPY

We want to calculate the differential entropy of the Gaussian DLVM, given by

H[q(θ)] = −E_{θ∼q(θ)}[log q(θ)]. (13)

We cannot in general compute this analytically, since we do not have a closed form for q(θ). Since we can sample from q(θ), we can approximate the expectation in eq. (13) using Monte Carlo sampling from q(z). However, since we do not have a closed form for q(θ), we still need an approximation of log q(θ). We could approximate q(θ) itself using Monte Carlo sampling from q(z), but this approximation has high variance. Usually, the variance is reduced by learning an encoder and doing importance sampling. Here we derive an approximation without using an encoder.

3.1.1 LINEARISATION OF THE GENERATOR

First we consider a local linearisation of the generator. Assuming that the Jacobian of g exists, the first-order Taylor polynomial of g at z_0 is given by

T^1_{z_0}(z) = g(z_0) + Dg(z_0)(z − z_0), (14)

where Dg(z_0) is the Jacobian of g evaluated at z_0. This assumes that the Jacobian exists, i.e., the generator has at least one derivative. We can approximate g(z) by T^1_{z_0}(z) when z is close to z_0. We apply this approximation to q(θ) from eq. (8), which gives us

q(θ) = E_{z∼q(z)}[q(θ|z)] = E_{z∼q(z)}[N(θ|g(z), σ^2 I_m)] (15)
≈ E_{z∼q(z)}[N(θ|g(z_0) + Dg(z_0)(z − z_0), σ^2 I_m)] = N(θ|µ(z_0), C(z_0)) =: q̃_{z_0}(θ), (16)

where

µ(z_0) = g(z_0) − Dg(z_0) z_0, (17)
C(z_0) = Dg(z_0)Dg(z_0)^⊤ + σ^2 I_m. (18)

The result in eq. (16) can be obtained analytically by integrating over the latent variable, see e.g. Tipping & Bishop (1999).

3.2 APPROXIMATION OF THE DIFFERENTIAL ENTROPY

We use the Gaussian approximation of q(θ) to approximate the entropy of the DLVM, that is,

H[q(θ)] = −E_{z∼q(z)} E_{θ∼q(θ|z)}[log q(θ)] ≈ −E_{z∼q(z)} E_{θ∼q(θ|z)}[log q̃_{z_0=z}(θ)] =: H̃[q(θ)]. (19)

Importantly, we do the linearisation of q(θ) around the latent value z that is used to sample each θ in the expectation. We have that

log q̃_{z_0=z}(θ) = −(m/2) log 2π − (1/2) log det C(z_0) − h(θ, z_0), where h(θ, z_0) := (1/2)(θ − µ(z_0))^⊤ C(z_0)^{−1}(θ − µ(z_0)), (20)

which means that our approximation of the entropy is

H̃[q(θ)] = (m/2) log 2π + (1/2) E_{z∼q(z)}[log det C(z)] + E_{z∼q(z)} E_{θ∼q(θ|z)}[h(θ, z)]. (21)
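Before continuing the derivation, the following is a minimal, illustrative sketch of the linearisation step of section 3.1.1: it computes µ(z_0) and C(z_0) from eqs. (17)-(18) with an automatic-differentiation Jacobian, together with the per-sample log-determinant term that reappears in the entropy approximation of eq. (27) below and its singular-value lower bound from eq. (31). The generator g here is a stand-in, not one of the architectures used in the experiments.

import torch
from torch.autograd.functional import jacobian

def linearised_terms(g, z0, sigma2=1e-4):
    # Jacobian Dg(z0) of the generator, shape (m, d).
    J = jacobian(g, z0)
    m = J.shape[0]
    # Moments of the Gaussian approximation (eqs. 17-18).
    mu = g(z0) - J @ z0
    C = J @ J.T + sigma2 * torch.eye(m)
    # Per-sample entropy contributions: 0.5 * log det C(z0) (cf. eqs. 21 and 27)
    # and the differentiable lower bound m * log s_1(z0) from eq. (31), where
    # s_1 is the smallest singular value of the Jacobian.
    logdet_term = 0.5 * torch.logdet(C)
    s = torch.linalg.svdvals(J)           # singular values in descending order
    svd_bound = m * torch.log(s[-1])
    return mu, C, logdet_term, svd_bound

# Toy usage with an illustrative generator g: R^2 -> R^3.
W = torch.randn(3, 2)
g = lambda z: torch.tanh(W @ z)
mu, C, logdet_term, svd_bound = linearised_terms(g, torch.randn(2))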
As shown in appendix A.1, the last term can be written as

E_{z∼q(z)} E_{θ∼q(θ|z)}[h(θ, z)] = (1/2) E_{z∼q(z)}[tr((Dg(z)^2 + σ^2 I_m)^{−1}(σ^2 I_m + (Dg(z) z)^2))], (22)

where we use the notation M^2 = MM^⊤ for a matrix M. Now, if we let σ^2 tend to zero, we find that

lim_{σ^2→0} E_{z∼q(z)} E_{θ∼q(θ|z)}[h(θ, z)] = (1/2) E_{z∼q(z)}[tr((Dg(z)^2)^{−1}(Dg(z) z)^2)] (23)
= (1/2) E_{z∼q(z)}[z^⊤ z] = (1/2) tr(I_d) = d/2. (24)

Similarly, we can also take the limit of the determinant term from eq. (21), that is,

lim_{σ^2→0} (1/2) E_{z∼q(z)}[log det C(z)] = (1/2) E_{z∼q(z)}[log det(Dg(z)Dg(z)^⊤)]. (25)

Combining eqs. (21), (24) and (25) gives us the final approximation. For small values of the output variance σ^2, we can approximate the differential entropy of the DLVM as

H[q(θ)] ≈ lim_{σ^2→0} H̃[q(θ)] = d/2 + (m/2) log 2π + (1/2) E_{z∼q(z)}[log det(Dg(z)Dg(z)^⊤)]. (26)

We can get a slightly more accurate approximation by only applying the limit from eq. (23), and not the limit from eq. (25), which gives us

H[q(θ)] ≈ d/2 + (m/2) log 2π + (1/2) E_{z∼q(z)}[log det(Dg(z)Dg(z)^⊤ + σ^2 I_m)]. (27)

4 LINEARISED IMPLICIT VARIATIONAL INFERENCE (LIVI)

We propose a novel bound for IVI. As the variational distribution, we use the DLVM of eq. (8), which is equivalent to adding noise to the implicit distribution of eq. (6). Using the entropy approximation from eq. (26), we propose the approximate ELBO

L̃(γ) = E_{θ∼q_γ(θ)}[log p(D|θ)] + E_{θ∼q_γ(θ)}[log p(θ)] + lim_{σ^2→0} H̃[q(θ)] (28)
= E_{θ∼q_γ(θ)}[log p(D|θ) + log p(θ)] + (1/2) E_{z∼q(z)}[log det(Dg(z)Dg(z)^⊤)] + c, (29)

where c = d/2 + (m/2) log 2π. We can reparameterise the above with the base variables z, η to get

L̃(γ) = E_{z∼q(z), η∼q(η)}[log p(D|g(z) + η) + log p(g(z) + η) + (1/2) log det(Dg(z)Dg(z)^⊤)] + c. (30)

To avoid the calculation of the log-determinant term, we can follow Geng et al. (2021, eq. 10) and lower-bound it as

(1/2) log det(Dg(z)Dg(z)^⊤) = (1/2) Σ_{i=1}^m log s_i^2(z) ≥ m log s_1(z), (31)

where s_m(z) ≥ . . . ≥ s_1(z) are the singular values of the Jacobian Dg(z). This gives us a lower bound on L̃(γ) given by

˜̃L(γ) = E_{θ∼q_γ(θ)}[log p(D|θ) + log p(θ)] + E_{z∼q(z)}[m log s_1(z)] + c ≤ L̃(γ). (32)

Again, by reparameterisation with z, η, we get

˜̃L(γ) = E_{z∼q(z), η∼q(η)}[log p(D|g(z) + η) + log p(g(z) + η) + m log s_1(z)] + c. (33)

We denote by L̃(γ) the LIVI bound with accurate Jacobians and by ˜̃L(γ) the LIVI bound with a differentiable lower bound on the determinant. Depending on the amount of compute available, the two bounds provide a trade-off between the accuracy of uncertainties and the resources consumed. In both cases, the entropy maximisation promotes the generator to generate diverse weight samples, which is in accordance with the principle behind Bayesian model averaging and supported by the performance of deep ensembles (Lakshminarayanan et al., 2017). We present connections with existing works in the literature in the following section, highlighting similarities and divergences.

5 RELATED WORK

The usage of a secondary network to generate the parameters of a primary network first appeared in the form of hypernetworks (Ha et al., 2017). Our approach is probabilistic and is hence closer to Bayesian hypernetworks (Krueger et al., 2017). Compared to our approach, these models require invertibility of the generator and thereby avoid the complexities of estimating the entropy term. This corresponds to using a normalizing flow as a variational approximation.
Training a normalizing flow over large parameter spaces is computationally costly due to large Jacobian matrices, typically requiring particular focus on the design of the variational approximation to curb the dimensionality of the flow. In particular, Louizos & Welling (2017) use an expressive flow on multiplicative factors of the weights in each layer and not on all weights jointly. Our bound uses a very similar change-of-volume formulation, log det(Dg(z)Dg(z)^⊤), for obtaining the log probability of samples under the variational density, but does not necessitate invertibility, making it more general. Subsequently, Shi et al. (2018); Tran et al. (2017); Pawlowski et al. (2017) have successfully demonstrated implicit variational inference in BNNs using hypernetworks. Shi et al. (2018); Tran et al. (2017) do not focus on the entropy term, but rather try to estimate the ratio of the variational approximation to the prior (the regularisation term) in a procedure called density ratio estimation (also referred to as the prior-contrastive formulation by Huszár, 2017). Tran et al. (2017) opt for training a discriminator network to maximally distinguish two distributions given only i.i.d. samples from each. This approach, though general, adds to the computational requirements and becomes more challenging in high dimensions (Sugiyama et al., 2012). To mitigate the overhead of training the discriminator for each update of the ELBO, many works limit the discriminator training to a single or a few iterations. Furthermore, this approach entails an adversarial objective, which is infamously unstable (Mescheder et al., 2017). Pawlowski et al. (2017) treat all the weights as independent and find that a single discriminator network is inaccurate at estimating log ratios when compared to the analytical form of Bayes by backprop (Blundell et al., 2015), and opt to use a kernel method that matches the analytical form more closely. Shi et al. (2018) propose a novel way of estimating the ratio of the two densities using kernel regression in the space of parameters, which obviates the need for a minmax objective. An obvious difficulty with kernel ridge regression in practice is its inaccuracy at estimating high-dimensional density ratios, which is similar to using discriminators. This is especially the case given a limited number of samples from both densities as well as the RBF kernel. While the RBF kernel still takes the same high-dimensional inputs and does not involve learning massive sets of parameters, its accuracy at larger scales is still doubtful. This work also proposes the matrix multiplication neural network (MMNN), a novel generator architecture for generating large sets of parameters. Pradier et al. (2018) are also motivated by the possibility of compressing the posterior in a lower-dimensional space and use an inference network with a generator. Their model differs from ours as they also consider the parameters of the generator/decoder to be stochastic. Moreover, they require empirical weight samples to train, which doubles the training steps. D-SIVI (Molchanov et al., 2019) and SIVI (Yin & Zhou, 2018) use Monte Carlo (MC) averaging to approximate the entropy term. Both works use the implicit formulation to only model the mixing coefficients and not all the weights of the network. Our entropy term for the model in eq. (8) also has a similar form and can be MC approximated.
In the spirit of some recent works (Izmailov et al., 2020; Daxberger et al., 2021b;a) that alternatively choose a lower-dimensional representation to preclude costly, high-dimensional inference, our work can be seen as allowing the approximate posterior, in the form of the generator, to choose which dimensions and parts of the posterior are crucial and model them accordingly.

6 EXPERIMENTS

6.1 TOY DATA

In fig. 1, we compare inference with our method against the gold standard for posterior inference on a simple toy dataset. After training, we also plot a KDE plot of the samples the generator outputs in appendix A.6. We infer from this plot that the generator is capable of representing non-trivial distributions, as we can spot heavy tails and multiple modes.

6.2 UCI DATASETS

We perform experiments on UCI regression datasets with the setups of Lakshminarayanan et al. (2017) and Shi et al. (2018), using a BNN with a one-hidden-layer MLP with 50 units on all of these datasets. We report the RMSE and log-likelihood on held-out data for our method. We use generator architectures that are either equally or less powerful than those of Shi et al. (2018) and do not assume independence across layers, i.e., we use one MLP to generate all the weights of the BNN. All of the generator architectures are one-hidden-layer MLPs with a slightly varying number of units depending on the dataset. At this scale it is feasible to estimate uncertainties using accurate Jacobians. We require far fewer samples (5-10) per iteration compared to the 100 used by KIVI to achieve very competitive results; we suspect they use a high number of samples to curb the variance of the kernel estimator. Our results are summarised in table 1. We train our method with a homoscedastic assumption, i.e., the variance in the dataset is assumed to be constant, and we train an observation noise parameter using type II maximum likelihood.

6.3 MNIST DATASET

Next we test our method on the MNIST dataset. While using the MMNN as the generator, we were able to achieve errors on the test set on par with KIVI for MLPs with 400 and 800 hidden units. With the total number of parameters generated exceeding 400K even for 400 hidden units, we chose to train the model only with the differentiable lower bound due to prohibitively high memory usage. For OOD testing we compare our method to last-layer Laplace, deep ensembles and a simple MAP estimate. We intentionally choose these methods to compare against, as a mean-field approximation usually does not achieve good accuracy on in-distribution data and has been shown, repeatedly, to suffer from many optimisation difficulties. On the other hand, while it is possible to run HMC samplers at this scale, it is not preferable; very few works in the literature report results using full-batch HMC (Izmailov et al., 2021). Deep ensembles predict using neural networks that have converged to different minima, hence encompassing information from diverse modes of the posterior, and as such remain one of the best methods in terms of uncertainty estimation. As benchmarks we choose two OOD benchmarks presented in Daxberger et al. (2021a). First we test the OOD AUROC and confidence of a LeNet5 BNN trained on MNIST by using FMNIST, KMNIST and EMNIST. We used an MMNN architecture for generating over 40K parameters and trained using the differentiable numerical approximation with 3-6 samples depending on the architecture. We expand on a few generator architectures here and leave the rest for appendix A.2.
The BNN trained with the implicit variational approximation, using a generator with a 1225-dimensional noise input and 2 matrix multiplication layers of 350 units each, achieves an accuracy of 99.071% ± 0.02 and a calibration error of 0.084 ± 0.011, with an NLL of −0.021 ± 0.001 on the test set. The same model reports an averaged OOD AUROC of 97.15 ± 0.17 with an averaged confidence of 68.53 ± 0.24. According to the results provided in Daxberger et al. (2021a, Table 1), our model does not outperform in terms of confidence values; yet we notice that its performance degrades very smoothly as it encounters OOD data, as opposed to models like deep ensembles and Laplace, both of which fail relatively immediately and drastically in terms of confidence values. Our model does perform quite well on the averaged AUROC as well as on test-set calibration error and log-likelihood. To probe out-of-distribution performance further, we compare our method on the rotated-MNIST benchmark from Daxberger et al. (2021a), in which we plot the negative log-likelihoods and empirical calibration errors for MNIST images at different rotations. We plot results in fig. 2 for three different architectures; our best (LIVI 3) remains the same architecture as above. Here too, we nearly match the performance of deep ensembles on these two metrics. Of the other two architectures, LIVI 1 has a 1764-dimensional noise input and one matrix multiplication hidden layer with 350 units, while LIVI 2 has a 900-dimensional noise input with 2 hidden matrix multiplication layers of 320 units each.

6.4 COMPARISON BETWEEN IMPLICIT VARIATIONAL APPROXIMATIONS

Variational inference for BNNs relies heavily on the expressivity of the family of approximations chosen to model the posterior. In our case the architecture of the generator represents the flexibility and overall modelling capability of the implicit variational density. We trained different architectures and noticed that generator architectures with more hidden layers perform better on in-distribution metrics like accuracy and log-likelihood. Additional hidden layers afford the generators the capacity to warp the input Gaussian noise into a suitable posterior distribution. On the other hand, the dimensionality of the input noise becomes crucial for uncertainty quantification and OOD performance. We believe this is because the noise inputs are all the degrees of freedom available to the generator to model the parameters of the BNN; as such, the entropy of the resulting posterior is directly dependent on this factor. This number cannot be increased without repercussions, though, because the base distribution and the number of samples affect the signal-to-noise ratio of our objective function, eq. (29), and a very large z results in large gradient variance, hindering convergence and requiring more samples during training or a higher number of iterations to converge. In these experiments we also noticed that downscaling only the prior log probability has a very positive effect on the results. This is due to the fact that the prior term regularises the generator, forcing it to find minima close to itself, a standard normal distribution. The scale of this prior log probability term is significant in the ELBO, and gradients of this term are detrimental to the overall optimisation process. Unlike cold posteriors (Wenzel et al., 2020), we keep the gradients of the entropy term as is and only reduce regularisation by downscaling the prior.
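A schematic of the prior downweighting discussed above, assuming the terms of the approximate ELBO in eq. (29) have already been estimated; the 0.1 factor mirrors the scale used in appendix A.3, and it is illustrative rather than the authors' exact training code. Only the log-prior term is scaled, in contrast to cold posteriors, which temper the whole objective.

def downweighted_objective(log_lik, log_prior, entropy_term, prior_scale=0.1):
    # Scale only the log-prior term of the approximate ELBO; the likelihood
    # and entropy gradients are left untouched.
    return log_lik + prior_scale * log_prior + entropy_term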
As the last benchmark, we opt to ascertain the quality of our model's posterior and the implied predictive uncertainties by plotting the empirical CDF of predictive entropies across OOD images (Lakshminarayanan et al., 2017; Louizos & Welling, 2017) in fig. 3. Given a model trained on MNIST, predictions on data points from a different distribution should be high-entropy, like a uniform distribution. For this plot we first obtain the entropies of the output softmax distributions for all the models across data points and use an empirical CDF to represent how many of these predictive entropies are close to that of a uniform distribution, which has an entropy of 2.3. Ideally, we are looking for lines closer to the bottom-right corner, i.e., the number of low-entropy or highly confident predictions should be small. We compare our model to MAP, deep ensembles and last-layer Laplace and find that our model trained on MNIST is quite competitive in the quality of uncertainty estimates for this test on the FMNIST dataset. For this plot we use the best generator architecture with a LeNet5 BNN, which is called LIVI 3 in the tests above.

7 CONCLUSION

In this paper we present a novel method for implicit variational inference for Bayesian neural networks that circumvents the need for a discriminator network to estimate intractable density ratios. We find that modelling the posterior with a highly flexible approximation indeed does have benefits. Our method, among the wide range of variational approximations, gets closer to the performance of deep ensembles, a non-probabilistic method, on in-distribution and out-of-distribution performance, unlike conventional probabilistic methods. One possible limitation of such hypernetworks can be generating massive parameter vectors for large neural networks. Works like Pawlowski et al. (2017); Shi et al. (2018) use different generator architectures to generate the weights for each hidden layer and in turn lose the information from modelling correlations across layers. Similarly, this approach can be extended to use multiple smaller generators, at the sacrifice of modelling correlations across layers.

A APPENDIX

A.1 DETAILS ON APPROXIMATION OF THE DIFFERENTIAL ENTROPY

In this section we derive eq. (22). To simplify the derivation, we will use the notation v^2 = vv^⊤ for vectors and M^2 = MM^⊤ for matrices. Starting from the left-hand side of eq. (22), we have that

E_{z∼q(z)} E_{θ∼q(θ|z)}[h(θ, z)] = E_{z∼q(z)} E_{θ∼q(θ|z)}[(1/2)(θ − µ(z))^⊤ C(z)^{−1}(θ − µ(z))] (34)
= (1/2) E_{z∼q(z)} E_{θ∼q(θ|z)}[tr((θ − µ(z))^⊤ C(z)^{−1}(θ − µ(z)))] (35)
= (1/2) E_{z∼q(z)}[tr(C(z)^{−1} E_{θ∼q(θ|z)}[(θ − µ(z))(θ − µ(z))^⊤])]. (36)

The inner expectation simplifies to

E_{θ∼q(θ|z)}[(θ − µ(z))^2] = E_{θ∼q(θ|z)}[(θ − g(z) + Dg(z) z)^2] (37)
= E_{θ∼q(θ|z)}[(θ − g(z))^2 + (Dg(z) z)^2 + (θ − g(z))(Dg(z) z)^⊤ + (Dg(z) z)(θ − g(z))^⊤] (38)
= σ^2 I_m + (Dg(z) z)^2, (39)

where we use that E_{θ∼N(θ|g(z), σ^2 I_m)}[θ − g(z)] = 0 and E_{θ∼N(θ|g(z), σ^2 I_m)}[(θ − g(z))^2] = σ^2 I_m. If we plug the result of eq. (39) into eq. (36), we obtain

E_{z∼q(z)} E_{θ∼q(θ|z)}[h(θ, z)] = (1/2) E_{z∼q(z)}[tr((Dg(z)^2 + σ^2 I_m)^{−1}(σ^2 I_m + (Dg(z) z)^2))]. (40)

Note that eq. (40) could also be derived from eq. (34) using Petersen & Pedersen (2012, eq. 380) and some reordering of the terms. Equations (37) to (39) also follow from Petersen & Pedersen (2012, eq. 325).

A.2 EXPERIMENT DETAILS

We use the MMNN architecture as presented in Shi et al. (2018) for generating the weights of the MLP BNN that was trained on MNIST, as well as of the LeNet BNN that was used for all the OOD benchmarks.
For the MLP experiment comparing with KIVI, we used one MM network that generated all the parameters of the network. The following architectures were tried for the LeNet5 generators:

• Noise input 25×25, 2 MM hidden layers with 250 units, output layer size 350×127.
• Noise input 30×30, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 35×35, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 35×35, 3 MM hidden layers with 350 units, output layer same as above.
• Noise input 38×38, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 42×42, 2 MM hidden layers with 325 units, output layer same as above.

All the above architectures were trained without dataset augmentation and with a maximum of 6 samples per minibatch. The last architecture required a higher number of samples due to gradient noise, which is proportional to the dimensionality of the input noise. This phenomenon has been widely observed when training high-dimensional variational approximations (Osawa et al., 2019; Mohamed et al., 2020). As all the architectures are trained for 100 epochs with the same learning rate, increasing gradient noise can significantly deter convergence when the input noise dimensions are increased.

A.3 PRIOR DOWNWEIGHTING

We choose to downscale the log prior probability in all the benchmark experiments. This term appears in the ELBO objective function and serves an important purpose: when the prior is chosen by domain experts, it ensures that the inferred approximate posterior is not too different from the intelligently chosen prior, and hence the log prior probability of samples coming from the variational approximation should be high when the ELBO is being maximised. However, the choice of an appropriate prior is an active area of research in Bayesian deep learning (Fortuin, 2022), and this prior term and its regularisation effect are known to limit the variational approximation. D_KL[q(θ)||p(θ)] should be minimised as a result of optimising the ELBO using gradient ascent, and when the prior is naively chosen to be a standard normal, it forces most of the weights of the posterior to be zero-centred. This forces the model to look for minima that are very close to 0 and has a detrimental effect on in-distribution performance. We use the plotting tool used by Shi et al. (2018) to demonstrate this effect. The line plot below has all the weights of a neural network used to solve a toy regression task on the x-axis and their respective magnitudes on the y-axis. We chose to sort the weights in order of their magnitude, as the positions of weights are not very informative in neural networks due to permutation invariance. In the first plot, most of the weights are zero-centred and are not very active; the second plot shows what happens when we downweight the prior by just 0.1.

A.4 DETAILS OF FIG. 1

In fig. 1 we compare both of the objective functions presented in this work for training with implicit variational approximations to different methods for uncertainty quantification for neural networks. All models were trained for 10K iterations and had to learn the observation noise present in the toy sinusoidal dataset. We deliberately removed a part of the data to see if the models tested were able to find in-between uncertainties. All methods were given the same-sized 2-hidden-layer MLP with 7 and 10 units, respectively.
We trained 5 networks with different seeds for deep ensembles and average their predictions to make the plot. The variance of the predictions was then used for the confidence bands in blue. We also train the model with an observation noise parameter. For MFVI, we used KL downweighting to get it to convergence and increased the weight at the end of training. For HMC we draw 5000 samples using the library Pyro. We also tried to make multiplicative normalizing flows converge for this dataset, but even with 20K parameters, training for 15K iterations with a very long learning-rate schedule did not help. We even tried KL downweighting to reduce the effect of the prior in the initial iterations, but that did not work either.

A.5 COMPUTATION GRAPH

Here we provide some details about how the combination of the joint generator-BNN model works. The Bayesian neural network classes for all types of architectures (feed-forward, convolutional, etc.) require a generator in their init function. As such, the generator network resides inside the BNN and reparametrises it with a simple sample_parameters function. The most important part of this kind of implementation was the layers themselves. PyTorch provides different kinds of mutable layer implementations in nn.Module, but these layers do not expose their state, i.e., their parameters, in a manner that allows changes on the fly during training. We reimplemented the layers to allow such resampling to occur with the generator. In the init function of the BNN, we generate one set of parameters with the generator and package it in a dict that holds the weight sample as well as an index recording the number of weights used by the previous layers. This counter index is updated by each layer in its init and sample_parameters functions. As such, only the parameters of the generator are trainable; the parameters of the BNN are switchable and relay gradients to the generator via the likelihood or the entropy term (a minimal sketch is given at the end of the appendix).

A.6 KDE PLOT

Figure 6 shows a KDE plot of weights randomly chosen from samples obtained from a trained generator.
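Finally, a minimal sketch of the resampling mechanism described in appendix A.5: a functional layer that slices its weights out of the flat parameter vector theta produced by the generator, with a running offset tracking how many weights previous layers consumed. The class and variable names are illustrative, not the authors' implementation.

import torch
import torch.nn.functional as F

class SampledLinear:
    # Instead of owning trainable nn.Parameters, this layer reads its weights
    # from the generator's output theta; gradients flow through theta back to
    # the generator, so only the generator is trained.
    def __init__(self, in_features, out_features):
        self.in_f, self.out_f = in_features, out_features

    def forward(self, x, theta, offset):
        n_w = self.in_f * self.out_f
        W = theta[offset:offset + n_w].view(self.out_f, self.in_f)
        b = theta[offset + n_w:offset + n_w + self.out_f]
        # Return the activations and the updated offset for the next layer.
        return F.linear(x, W, b), offset + n_w + self.out_f

# Illustrative use: theta would come from the generator; here it is random.
layer1, layer2 = SampledLinear(784, 50), SampledLinear(50, 10)
theta = torch.randn(784 * 50 + 50 + 50 * 10 + 10)
h, offset = layer1.forward(torch.randn(1, 784), theta, 0)
out, _ = layer2.forward(torch.relu(h), theta, offset)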
1. What is the focus and contribution of the paper regarding Bayesian neural networks? 2. What are the strengths and weaknesses of the proposed approach, particularly in its difference from previous works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or suggestions regarding the experiments and comparisons with other methods? 5. Can the authors provide more details or references regarding the relationship between their approach and previous works, such as Bayesian hypernetworks and normalizing flows?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper Exact posteriors over parameters in BNNs are usually more complex than the prescribed-density approximations that are usually imposed in VI. The usage of implicit variational distributions allows for an increase in the flexibility of these methods. In this paper, the authors propose a novel bound for training BNNs in such schemes. This is achieved through the combination of a Gaussian deep latent variable model and a Laplace approximation of the generator of the implicit distribution samples, which simplifies the estimation of the KL regularization term in the original ELBO objective function. Strengths And Weaknesses Strengths: The constructed model is reasonable and simplifies terms of the objective function that are usually very hard to deal with. The final objective function obtained seems an interesting step for implicit-distribution-based methods. The paper is clearly written for the most part and can be followed easily. Weaknesses: The proposed system does not seem too different from previous proposals. As the authors mention, this is highly related to Bayesian hypernetworks and normalizing flows, and could somewhat be seen as a particular combination of both concepts. Since the motivation behind the contribution is related to providing better uncertainty estimates for BNNs, I think the authors should provide a stronger experimental phase in which this is shown more extensively (e.g. adding comparisons against HMC on toy datasets and comparing with other methods that have shown high performance in this regard, such as [2]). At several points of the article where previous literature on the topic is covered, I cannot help but notice that some important contributions are missing. For instance, [1] should be clearly mentioned here since it is highly related to the topic, and this applies both to the initial setup on page 2 as well as to the Related Work section. Moreover, both [1] and [2] could (and maybe should) be considered as benchmarks to compare against. Moreover, there has been extensive ongoing research on the implicit approach applied to the function-space formulation of BNNs which is never mentioned. These methods have shown improved performance and several relevant properties that the regular weight-space formulation fails to reproduce. Some relevant examples here are [3,4,5,6], among others. In particular, these methods extend the formulation of Eq. (6) to implicit stochastic processes. Scalability studies and a detailed comparison with other methods are not included anywhere. I strongly suggest the authors provide some insights here. References: [1] Mescheder, Lars, Sebastian Nowozin, and Andreas Geiger. "Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks." International Conference on Machine Learning. PMLR, 2017. [2] Santana, S. R., & Hernández-Lobato, D. (2022). Adversarial α-divergence minimization for Bayesian approximate inference. Neurocomputing, 471, 260-274. [3] Ma, C., Li, Y., and Hernández-Lobato, J. M. (2019). "Variational implicit processes". In: International Conference on Machine Learning, pp. 4222-4233. [4] Sun, S., Zhang, G., Shi, J., and Grosse, R. (2019). "Functional variational Bayesian neural networks". In: International Conference on Learning Representations. [5] Ma, C., & Hernández-Lobato, J. M. (2021). Functional variational inference based on stochastic process generators. Advances in Neural Information Processing Systems, 34, 21795-21807.
[6] Rodríguez-Santana, S., Zaldivar, B., & Hernandez-Lobato, D. (2022, June). Function-space Inference with Sparse Implicit Processes. In International Conference on Machine Learning (pp. 18723-18740). PMLR. Clarity, Quality, Novelty And Reproducibility Clarity The paper is mostly self-contained, with clear explanations of the different concepts needed to understand the contribution. The derivation of the simplified version of the entropy term is detailed and done step by step, which helps in understanding the procedure. The definition of the prior as an implicit distribution is implied, but never explicitly shown; making it explicit would be clearer. Please provide an explicit expression in Sections 2 or 3. Some typos: • "constant constant" (paragraph above Eq. 11) • The last two lines of the paragraph above Eq. 11 do not make much sense • Check the first sentence of section 3.1 • Eq. (13) and the following text seem to be referring to different things. Please check this discussion carefully. • Is the covariance term in the first step of Eq. (16) right? Quality I think it can be an interesting contribution to the community interested in implicit-distribution-based inference. I wish the experimental support was a bit sturdier, since the whole idea here is to provide better uncertainty estimates than other methods. Novelty The method itself does not seem very novel, rather a combination of previous ideas but for the derivation of the new objective function. Reproducibility The authors do not mention whether they will provide the code for the method or not, and therefore it remains to be seen if it is easily reproducible.
ICLR
Title Linearised Implicit Variational Inference Abstract Bayesian neural networks (BNNs) are touted for robustness under data drift, resilience to overfitting and catastrophic forgetting whilst also producing actionable uncertainty estimates. In variational inference, these elegant properties are contingent on the expressivity of the variational approximation. Posteriors over parameters of large models are usually multimodal and highly correlated and hence cannot be well-approximated by simple, prescribed densities. We posit implicit variational distributions specified using differentiable generators are more flexible and propose a novel bound for training BNNs using such approximations (amortized neural samplers). The proposed bound uses an approximation of the variational distribution’s entropy by locally linearising the generator. Unlike existing works, our method does not require a discriminator network and moves away from an unfavourable adversarial objective. Our formulation resembles normalizing flows but does not necessitate invertibility of the generator. Moreover, we use a differentiable numerical lower bound on the Jacobians of the generator, mitigating computational concerns. We report log-likelihoods on UCI datasets competitive with deep ensembles and test our method on out-of-distribution benchmarks. 1 INTRODUCTION Deep neural networks are considered state of the art in numerous tasks in computer vision, speech and natural language processing. Scaling up neural architectures has led to outstanding performance on a myriad of generative and discriminative tasks, albeit some fundamental flaws remain. Neural networks are usually trained by maximising likelihood resulting in a single best estimate of parameters which renders these models highly overconfident of their predictions, prone to adversarial attacks and unusable in risk-averse domains. Furthermore, their usage remains restricted in sequential learning applications due to catastrophic forgetting (McCloskey & Cohen, 1989) and data-scarce regimes due to overfitting. When deployed in the wild, deep networks do not output a comprehensive measure of their uncertainty, prompting expert intervention. The Bayesian paradigm provides solutions to a number of these issues. In summary, Bayesian neural networks specify a prior distribution over parameters p(θ), and the neural network relates the parameters to the data D through a likelihood p(D|θ). The goal is to infer a conditional density over the parameters, called the posterior p(θ|D), given by the Bayes’ rule, p(θ|D) = p(D|θ)p(θ) p(D) = p(D|θ)p(θ)∫ p(D, θ) dθ . (1) This conditional density provides a range of suitable parameters with a probability over them given by the dataset. After training, predictions from an ensemble of parameters (models) can then be combined, weighted by their posterior probability forming a Bayesian model average (BMA). The variance of these aggregated predictions informs the user/human about the model’s confidence in a particular prediction. Finding the normalization constant in eq. (1) is analytically intractable for large models, and hence there is a clear focus on approximate inference techniques. Various approaches have been proposed, including Markov chain Monte Carlo (MCMC, Neal, 1995), variational inference (VI, Saul et al., 1996; Peterson, 1987) and the Laplace approximation (Mackay, 1991). 
Variational inference is a strategy that converts the inference problem into an optimisation over a family of distributions (variational family), denoted hereafter by Q, indexed by variational parameters denoted by γ. We optimise γ using a lower bound on the marginal log-likelihood of the data log p(D) called the evidence lower bound (ELBO). Usually, we are computationally limited to choosing simple distribution families like an isotropic Gaussian distribution (Tanaka, 1998; Blundell et al., 2015). The true posterior is much more complex and is approximated poorly using such approximations (Foong et al., 2019; 2020). This issue is exacerbated in large models that contain many symmetries and correlations. Notably, there have been attempts to extend VI to more structured and expressive distributions (Saul & Jordan, 1995; Bishop et al., 1997; Louizos & Welling, 2016) yet, capturing correlations between parameters with a flexible variational approximation remains the Achilles heel of these class of models. We propose an approach based on implicit generative modelling where the distribution over variables of interest is implicit and can only be sampled. This is in contrast to usual VI methods that use prescribed distributions with explicit parametrisation as the approximating density over latent variables (Diggle & Gratton, 1984; Mohamed & Lakshminarayanan, 2016). Although, this idea takes inspiration from GAN generators that try to recover the true data distribution, we do not require a discriminator network for training the generator and as a result do not suffer from the complicacies introduced by an adversarial objective. As emphasised by Tran et al. (2017), is a more natural way of capturing the generative process instead of forcing it to conform to an assumed latent structure which could be misspecified. Similar to other works in implicit VI (Shi et al., 2018), we posit using general (non-invertible) stochastic transformations that can produce highly flexible implicit densities to model posteriors of neural networks. We believe that these approximations can better capture the intricacies of posterior landscape. Additionally, when trying to model complicated densities in high-dimensions, it is sensible to learn a sampler instead of parameters of an expressive intractable approximation, especially if these approximations do not admit one-line samplers (Devroye, 1996). For example, EBMs can be very flexible but are not easy to sample from (Song & Kingma, 2021). If we were to use a fully correlated Gaussian to model the posterior of a neural network, we would need to optimize parameters quadratic in the number of weights of the network, O(dim(θ)2) to arrive at the optimum covariance matrix. In this work, we test our hypothesis of using an underparameterised generator to capture the important correlations in orders of magnitude less parameters than that. At the same time, we hint at the possibility that a constrained generator will probably avoid modelling redundancies present in BNN posteriors like permutationally symmetric modes. Succinctly, our contributions are presented as follows: • We derive a novel lower bound for variational inference in Bayesian Neural Networks using implicit variational approximation avoiding unstable minmax objectives. • We augment this lower bound by reducing its compute requirement, as we substitute a differentiable numerical lower bound for the entropy term comprising of Jacobians of neural networks. 
• We comprehensively empirically evaluate the capacity of this implicit variational approximation and the quality of the posteriors inferred using different out of distribution benchmarks. 2 VARIATIONAL INFERENCE FOR BAYESIAN NEURAL NETWORKS Consider the supervised learning setting, where we have training set D = {(xi,yi)}ni=1, where X = {xi}ni=1 are the covariates (inputs) and y = {yi}ni=1 are the labels (outputs). We consider a Bayesian regression or classification model given by p(D,θ) = p(θ)p(D|θ) = p(θ) n∏ i=1 p(yi|xi,θ), (2) where the likelihood is parameterised by θ ∈ Θ ≡ Rm. The objective function L in VI, called the ELBO is a lower bound on the log marginal likelihood of the data - log p(D), and the discrepancy between the two is equal to the KL divergence between the approximate and true posterior given by DKL[qγ(θ)||p(θ|D)] = log p(D)− Eθ∼qγ(γ) [ log p(D,θ) qγ(θ) ] ︸ ︷︷ ︸ L(γ) , (3) where qγ is the variational approximation of the posterior with parameters γ ∈ Γ. Since the KL divergence is non-negative, L is a lower bound on the evidence. This objective function can be written in terms of a likelihood term and a regularisation term as L(γ) = [ Eθ∼qγ(θ) [ log p(D|θ) ]︸ ︷︷ ︸ likelihood term −DKL[qγ(θ)||p(θ)]︸ ︷︷ ︸ regularisation term ] ≤ log p(D), (4) where the likelihood term promotes the variational approximation to model the data well and the regularisation term keeps the posterior close to the prior. Since the log-evidence, log p(D), does not depend on γ, minimising the the KL divergence is equivalent to maximising the ELBO, i.e., argmin γ DKL[qγ(θ)||p(θ|D)] ≡ argmax γ L(γ). (5) 2.1 IMPLICIT VARIATIONAL INFERENCE In implicit VI (IVI), the variational distribution is only implicitly defined through its generative process over the parameters θ z ∼ q(z), θ = gγ(z), (6) where the q is a fixed base distribution and gγ : Rd → Rm is a non-linear mapping and typically not a diffeomorphism. For IVI, the likelihood term from eq. (4) and its gradients can be estimated using Monte Carlo and the reparameterization trick. However, the regularisation term is more difficult as it involves the entropy of qγ , DKL[qγ(θ)||p(θ)] = Eθ∼qγ(θ) [ log qγ(θ) p(θ) ] = −Hq(qγ)− Eθ∼qγ(θ) [ log p(θ) ] . (7) Generally, the entropy of the generative process in eq. (6) is not available in an explicit form as the density of the process is not tractable. A prevalant technique to estimate the regularisation term uses density ratio estimators based on a GAN-like discriminator(Sugiyama et al., 2012; Huszár, 2017), and Geng et al. (2021) have given a tractable and differentiable lower bound on this entropy . Furthermore, when the dimensions of the base distribution d is smaller than m, the KL divergence is not well defined. In the KL divergance, we integrate over the whole space Θ but qγ does not have full support over this space and exists on a manifold embedded in the Θ space. In the GAN literature this problem is called mode dropping and is caused by the inability of the generator to recover all modes of the true data distribution (Che et al., 2020; Xu et al., 2018). To alleviate this, we draw inspiration from works in the GAN literature (Che et al., 2020) and add m dimensional noise to the output of the generator and redefine the variational approximation in the following section. 
3 A DEEP LATENT VARIABLE MODEL AND ITS ENTROPY

As the variational distribution, we propose to use a Gaussian deep latent variable model (DLVM) of a real variable $\theta \in \mathbb{R}^m$ with a real latent variable $z \in \mathbb{R}^d$, with density
$$q(\theta) = \int q(\theta|z)\, q(z)\, dz = \mathbb{E}_{z \sim q(z)}[q(\theta|z)]. \qquad (8)$$
We assume a Gaussian base density and a Gaussian output density, that is,
$$q(z) = \mathcal{N}(z \,|\, 0, I_d), \qquad (9)$$
$$q(\theta|z) = \mathcal{N}(\theta \,|\, g_\gamma(z), \sigma^2 I_m), \qquad (10)$$
where $g_\gamma : \mathbb{R}^d \to \mathbb{R}^m$ is the decoder/generator and $\sigma^2 \in \mathbb{R}_+$ is the fixed homoscedastic variance of the output density. In general, we do not have a closed form for $q(\theta)$ due to the integral in eq. (8) and the non-linear $g_\gamma$, but we note that the KL divergence in eq. (7) is well defined for this variational distribution. Below we propose a novel approximation of the differential entropy of this model. This model can equivalently be viewed as a variational autoencoder (VAE, Kingma & Welling, 2014; Rezende et al., 2014) with a Gaussian prior, a Gaussian output density with constant homoscedastic variance, and no encoder, or as the implicit distribution from eq. (6) with added Gaussian noise. The latter is clearly seen from the generative process of $\theta$, which is
$$\theta' = g_\gamma(z), \quad z \sim \mathcal{N}(0, I_d), \qquad (11)$$
$$\theta = \theta' + \eta, \quad \eta \sim \mathcal{N}(0, \sigma^2 I_m). \qquad (12)$$

3.1 DIFFERENTIAL ENTROPY

We want to calculate the differential entropy of the Gaussian DLVM, given by
$$H[q(\theta)] = -\mathbb{E}_{\theta \sim q(\theta)}[\log q(\theta)]. \qquad (13)$$
We can in general not compute this analytically, since we do not have a closed form of $q(\theta)$. Since we can sample from $q(\theta)$, we can approximate the outer expectation in eq. (13) using Monte Carlo sampling from $q(z)$. However, since we do not have a closed form of $q(\theta)$, we still need an approximation of $\log q(\theta)$. We could approximate $q(\theta)$ itself using Monte Carlo sampling from $q(z)$, but this approximation has high variance. Usually, the variance is reduced by learning an encoder and doing importance sampling. Here we derive an approximation without using an encoder.

3.1.1 LINEARISATION OF THE GENERATOR

First we consider a local linearisation of the generator. Assuming that the Jacobian of $g$ exists, i.e. the generator is at least once differentiable, the first-order Taylor polynomial of $g$ at $z_0$ is given by
$$T^1_{z_0}(z) = g(z_0) + \mathrm{D}g(z_0)\,(z - z_0), \qquad (14)$$
where $\mathrm{D}g(z_0)$ is the Jacobian of $g$ evaluated at $z_0$. We can approximate $g(z)$ by $T^1_{z_0}(z)$ when $z$ is close to $z_0$. We apply this approximation to $q(\theta)$ from eq. (8), which gives us
$$q(\theta) = \mathbb{E}_{z \sim q(z)}[q(\theta|z)] = \mathbb{E}_{z \sim q(z)}[\mathcal{N}(\theta \,|\, g(z), \sigma^2 I_m)] \qquad (15)$$
$$\approx \mathbb{E}_{z \sim q(z)}[\mathcal{N}(\theta \,|\, g(z_0) + \mathrm{D}g(z_0)\,(z - z_0), \sigma^2 I_m)] = \mathcal{N}(\theta \,|\, \mu(z_0), C(z_0)) =: \tilde{q}_{z_0}(\theta), \qquad (16)$$
where
$$\mu(z_0) = g(z_0) - \mathrm{D}g(z_0)\, z_0, \qquad (17)$$
$$C(z_0) = \mathrm{D}g(z_0)\,\mathrm{D}g(z_0)^\top + \sigma^2 I_m. \qquad (18)$$
The result in eq. (16) can be obtained analytically by integrating over the latent variable, see e.g. Tipping & Bishop (1999).

3.2 APPROXIMATION OF THE DIFFERENTIAL ENTROPY

We use the Gaussian approximation of $q(\theta)$ to approximate the entropy of the DLVM, that is,
$$H[q(\theta)] = -\mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[\log q(\theta)] \approx -\mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[\log \tilde{q}_{z_0=z}(\theta)] =: \tilde{H}[q(\theta)]. \qquad (19)$$
Importantly, we do the linearisation of $q(\theta)$ around the latent value $z$ that is used to sample each $\theta$ in the expectation. We have that
$$\log \tilde{q}_{z_0=z}(\theta) = -\frac{m}{2} \log 2\pi - \frac{1}{2} \log \det C(z_0) - \underbrace{\frac{1}{2} (\theta - \mu(z_0))^\top C(z_0)^{-1} (\theta - \mu(z_0))}_{=:\, h(\theta, z_0)}, \qquad (20)$$
which means that our approximation of the entropy is
$$\tilde{H}[q(\theta)] = \frac{m}{2} \log 2\pi + \frac{1}{2} \mathbb{E}_{z \sim q(z)}[\log \det C(z)] + \mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[h(\theta, z)]. \qquad (21)$$
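A minimal sketch of the linearised Gaussian approximation of eqs. (16)-(18) in PyTorch follows; the function names are illustrative, and the Jacobian is computed by automatic differentiation rather than by hand.

```python
import torch
from torch.autograd.functional import jacobian

def linearised_gaussian(g, z0, sigma2):
    """Sketch of eqs. (16)-(18): the Gaussian q~_{z0} obtained by
    linearising the generator g at z0. Returns mu(z0) and C(z0)."""
    J = jacobian(g, z0)                      # Dg(z0), shape (m, d)
    mu = g(z0) - J @ z0                      # eq. (17)
    m = mu.shape[0]
    C = J @ J.T + sigma2 * torch.eye(m)      # eq. (18)
    return mu, C

def log_q_tilde(theta, g, z0, sigma2):
    """Log-density of theta under q~_{z0}, eq. (20)."""
    mu, C = linearised_gaussian(g, z0, sigma2)
    dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=C)
    return dist.log_prob(theta)
```

For the entropy approximation in eq. (19), the point $z_0$ is set to the very $z$ that generated each sampled $\theta$, so the linearisation is local to each draw.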
As shown in appendix A.1, the last term of eq. (21) can be written as
$$\mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[h(\theta, z)] = \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\mathrm{tr}\left(\left(\mathrm{D}g(z)^2 + \sigma^2 I_m\right)^{-1} \left(\sigma^2 I_m + (\mathrm{D}g(z)\, z)^2\right)\right)\right], \qquad (22)$$
where we use the notation $M^2 = MM^\top$ for a matrix $M$ and $v^2 = vv^\top$ for a vector $v$. Now, if we let $\sigma^2$ tend to zero, we find that
$$\lim_{\sigma^2 \to 0} \mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[h(\theta, z)] = \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\mathrm{tr}\left(\left(\mathrm{D}g(z)^2\right)^{-1} (\mathrm{D}g(z)\, z)^2\right)\right] \qquad (23)$$
$$= \frac{1}{2} \mathbb{E}_{z \sim q(z)}[z^\top z] = \frac{1}{2} \mathrm{tr}(I_d) = \frac{d}{2}. \qquad (24)$$
Similarly, we can also take the limit of the determinant term from eq. (21), that is,
$$\lim_{\sigma^2 \to 0} \frac{1}{2} \mathbb{E}_{z \sim q(z)}[\log \det C(z)] = \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\log \det\left(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top\right)\right]. \qquad (25)$$
Combining eqs. (21), (24) and (25) gives us the final approximation. For small values of the output variance $\sigma^2$, we can approximate the differential entropy of a DLVM as
$$H[q(\theta)] \approx \lim_{\sigma^2 \to 0} \tilde{H}[q(\theta)] = \frac{d}{2} + \frac{m}{2} \log 2\pi + \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\log \det\left(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top\right)\right]. \qquad (26)$$
We can get a slightly more accurate approximation by only applying the limit from eq. (23), and not the limit from eq. (25), which gives us
$$H[q(\theta)] \approx \frac{d}{2} + \frac{m}{2} \log 2\pi + \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\log \det\left(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top + \sigma^2 I_m\right)\right]. \qquad (27)$$

4 LINEARISED IMPLICIT VARIATIONAL INFERENCE (LIVI)

We propose a novel bound for IVI. As the variational distribution, we use the DLVM of eq. (8), which is equivalent to adding noise to the implicit distribution of eq. (6). Using the entropy approximation from eq. (26), we propose the approximate ELBO
$$\tilde{\mathcal{L}}(\gamma) = \mathbb{E}_{\theta \sim q_\gamma(\theta)}[\log p(\mathcal{D}|\theta)] + \mathbb{E}_{\theta \sim q_\gamma(\theta)}[\log p(\theta)] + \lim_{\sigma^2 \to 0} \tilde{H}[q(\theta)] \qquad (28)$$
$$= \mathbb{E}_{\theta \sim q_\gamma(\theta)}[\log p(\mathcal{D}|\theta) + \log p(\theta)] + \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\log \det\left(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top\right)\right] + c, \qquad (29)$$
where $c = \frac{d}{2} + \frac{m}{2} \log 2\pi$. We can reparameterise the above with the base variables $z, \eta$ to get
$$\tilde{\mathcal{L}}(\gamma) = \mathbb{E}_{z \sim q(z),\, \eta \sim q(\eta)}\left[\log p(\mathcal{D}|g(z) + \eta) + \log p(g(z) + \eta) + \frac{1}{2} \log \det\left(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top\right)\right] + c. \qquad (30)$$
To avoid the calculation of the log-determinant term, we can follow Geng et al. (2021, eq. 10) and lower-bound it as
$$\frac{1}{2} \log \det(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top) = \frac{1}{2} \sum_{i=1}^m \log s_i^2(z) \ge m \log s_1(z), \qquad (31)$$
where $s_m(z) \ge \ldots \ge s_1(z)$ are the singular values of the Jacobian $\mathrm{D}g(z)$. This gives us a lower bound on $\tilde{\mathcal{L}}(\gamma)$ given by
$$\tilde{\tilde{\mathcal{L}}}(\gamma) = \mathbb{E}_{\theta \sim q_\gamma(\theta)}[\log p(\mathcal{D}|\theta) + \log p(\theta)] + \mathbb{E}_{z \sim q(z)}[m \log s_1(z)] + c \le \tilde{\mathcal{L}}(\gamma). \qquad (32)$$
Again, by reparameterisation with $z, \eta$ we get
$$\tilde{\tilde{\mathcal{L}}}(\gamma) = \mathbb{E}_{z \sim q(z),\, \eta \sim q(\eta)}\left[\log p(\mathcal{D}|g(z) + \eta) + \log p(g(z) + \eta) + m \log s_1(z)\right] + c. \qquad (33)$$
We denote by $\tilde{\mathcal{L}}(\gamma)$ the LIVI bound with the accurate Jacobian and by $\tilde{\tilde{\mathcal{L}}}(\gamma)$ the LIVI bound with a differentiable lower bound on the determinant. Depending on the amount of compute available, the two bounds provide a trade-off between the accuracy of the uncertainties and the resources consumed. In both cases, the entropy maximisation promotes the generator to generate diverse weight samples, which is in accordance with the principle behind Bayesian model averaging and supported by the performance of deep ensembles (Lakshminarayanan et al., 2017). We present connections with existing works in the literature in the following section, highlighting similarities and differences.
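Before turning to related work, the following sketch shows how the cheaper bound $\tilde{\tilde{\mathcal{L}}}(\gamma)$ of eq. (33) can be Monte Carlo estimated in PyTorch. It assumes a caller-supplied `log_joint(theta)` returning $\log p(\mathcal{D}|\theta) + \log p(\theta)$, and it is a simplified per-sample loop rather than the paper's actual training code.

```python
import math
import torch
from torch.autograd.functional import jacobian

def livi_bound_cheap(g, log_joint, d, m, sigma, n_samples=5):
    """Monte Carlo estimate of the reparameterised LIVI bound, eq. (33),
    with the smallest-singular-value bound of eq. (31) in place of the
    full log-determinant. `log_joint` and all names are illustrative."""
    total = 0.0
    for _ in range(n_samples):
        z = torch.randn(d)
        eta = sigma * torch.randn(m)
        theta = g(z) + eta                        # eqs. (11)-(12)
        J = jacobian(g, z, create_graph=True)     # Dg(z), shape (m, d)
        s_min = torch.linalg.svdvals(J)[-1]       # smallest singular value s_1(z)
        total = total + log_joint(theta) + m * torch.log(s_min)
    c = d / 2 + m / 2 * math.log(2 * math.pi)
    return total / n_samples + c
```

Maximising the returned value with respect to the generator's parameters (e.g. `loss = -livi_bound_cheap(...)`) trains the implicit posterior; `create_graph=True` lets gradients flow through the Jacobian term.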
5 RELATED WORK

The usage of a secondary network to generate the parameters of a primary network first appeared in the form of hypernetworks (Ha et al., 2017). Our approach is probabilistic and is hence closer to Bayesian hypernetworks (Krueger et al., 2017). Compared to our approach, these models require invertibility of the generator and thereby avoid the complexities of estimating the entropy term; this corresponds to using a normalizing flow as the variational approximation. Training a normalizing flow over large parameter spaces is computationally costly due to large Jacobian matrices, typically requiring particular focus on the design of the variational approximation to curb the dimensionality of the flow. In particular, Louizos & Welling (2017) use an expressive flow on multiplicative factors of the weights in each layer and not on all weights jointly. Our bound uses a very similar change-of-volume formulation, $\log \det(\mathrm{D}g(z)\,\mathrm{D}g(z)^\top)$, for obtaining the log probability of samples under the variational density, but does not necessitate invertibility, making it more general. Subsequently, Shi et al. (2018); Tran et al. (2017); Pawlowski et al. (2017) have successfully demonstrated implicit variational inference in BNNs using hypernetworks. Shi et al. (2018); Tran et al. (2017) do not focus on the entropy term, but rather try to estimate the ratio of the variational approximation to the prior (the regularisation term) in a procedure called density ratio estimation (also referred to as the prior-contrastive formulation by Huszár, 2017). Tran et al. (2017) opt for training a discriminator network to maximally distinguish two distributions given only i.i.d. samples from each. This approach, though general, adds to the computational requirements and becomes more challenging in high dimensions (Sugiyama et al., 2012). To mitigate the overhead of training the discriminator for each update of the ELBO, many works limit the discriminator training to a single or a few iterations. Furthermore, this approach entails an adversarial objective, which is infamously unstable (Mescheder et al., 2017). Pawlowski et al. (2017) treat all the weights as independent and find that a single discriminator network is inaccurate at estimating log ratios when compared to the analytical form of Bayes by backprop (Blundell et al., 2015), and opt to use a kernel method that matches the analytical form more closely. Shi et al. (2018) propose a novel way of estimating the ratio of the two densities using kernel regression in the space of parameters, which obviates the need for a min-max objective. An obvious difficulty with kernel ridge regression in practice is its inaccuracy in estimating high-dimensional density ratios, similar to using discriminators. This is especially the case given a limited number of samples from both densities as well as the use of the RBF kernel: while the RBF kernel takes the same high-dimensional inputs and does not involve learning massive sets of parameters, its accuracy at larger scales is still doubtful. That work also proposes the matrix multiplication neural network (MMNN), a novel generator architecture for generating large sets of parameters. Pradier et al. (2018) are also motivated by the possibility of compressing the posterior into a lower-dimensional space and use an inference network with a generator. Their model differs from ours as they also consider the parameters of the generator/decoder to be stochastic. Moreover, they require empirical weight samples for training, which doubles the training steps. D-SIVI (Molchanov et al., 2019) and SIVI (Yin & Zhou, 2018) use Monte Carlo (MC) averaging to approximate the entropy term. Both works use the implicit formulation to only model the mixing coefficients and not all the weights of the network. Our entropy term, based on the marginal in eq. (8), also has a similar form and can be MC approximated.
In the spirit of some recent works (Izmailov et al., 2020; Daxberger et al., 2021b;a) that alternatively choose a lower-dimensional representation to preclude costly, high-dimensional inference, our work can be seen as allowing the approximate posterior, in the form of the generator, to choose which dimensions and parts of the posterior are crucial and model them accordingly.

6 EXPERIMENTS

6.1 TOY DATA

In fig. 1, we compare inference with our method against the gold standard for posterior inference on a simple toy dataset. After training, we also plot a KDE plot of the samples the generator outputs in appendix A.6. We infer from this plot that the generator is capable of representing non-trivial distributions, as we can spot heavy tails and multiple modes.

6.2 UCI DATASETS

We perform experiments on UCI regression datasets with the setups of Lakshminarayanan et al. (2017) and Shi et al. (2018), using a BNN that is a one-hidden-layer MLP with 50 units on all of these datasets. We report the RMSE and log-likelihood on held-out data for our method. We use generator architectures that are at most as powerful as those of Shi et al. (2018) and do not assume independence across layers, i.e. we use one MLP to generate all the weights of the BNN. All of the generator architectures are one-hidden-layer MLPs with a slightly varying number of units depending on the dataset. At this scale it is feasible to estimate uncertainties using accurate Jacobians. We require far fewer samples (5-10) per iteration compared to the 100 used by KIVI to achieve very competitive results; we suspect they use a high number of samples to curb the variance of the kernel estimator. Our results are summarised in table 1. We train our method with a homoscedastic assumption, i.e. the variance in the dataset is assumed to be constant, and we train an observation noise parameter using type II maximum likelihood.

6.3 MNIST DATASET

Next we test our method on the MNIST dataset. Using the MMNN as the generator, we were able to achieve errors on the test set on par with KIVI for MLPs with 400 and 800 hidden units. With the total number of generated parameters exceeding 400K even for 400 hidden units, we chose to train the model only with the differentiable lower bound, due to otherwise prohibitively high memory usage. For OOD testing we compare our method to last-layer Laplace, deep ensembles and a simple MAP estimate. We intentionally choose these methods to compare against, as a mean-field approximation usually does not achieve good accuracy on in-distribution data and has been shown, repeatedly, to suffer from many optimisation difficulties. While it is possible to run HMC samplers at this scale, it is not preferable; very few works in the literature report results using full-batch HMC (Izmailov et al., 2021). Deep ensembles predict using neural networks that have converged onto different minima, hence encompassing information from diverse modes of the posterior, and as such remain one of the best methods in terms of uncertainty estimation. As benchmarks we choose two OOD benchmarks presented in Daxberger et al. (2021a). First we test the OOD AUROC and confidence of a LeNet5 BNN trained on MNIST, using FMNIST, KMNIST and EMNIST. We used an MMNN architecture for generating over 40K parameters and trained using the differentiable numerical approximation with 3-6 samples depending on the architecture. We expand on a few generator architectures here and leave the rest for appendix A.2.
The BNN trained with the implicit variational approximation, using a generator with a 1225-dimensional noise input and 2 matrix multiplication layers of 350 units each, achieves an accuracy of 99.071% ± 0.02 and a calibration error of 0.084 ± 0.011, with NLL −0.021 ± 0.001 on the test set. The same model reports an averaged OOD AUROC of 97.15 ± 0.17 with an averaged confidence of 68.53 ± 0.24. According to the results provided in Daxberger et al. (2021a, Table 1), our model does not outperform in terms of confidence values; yet, we notice that its performance degrades very smoothly as it encounters OOD data, as opposed to models like deep ensembles and Laplace, both of which fail relatively immediately and drastically in terms of confidence values. Our model does perform quite well on the averaged AUROC as well as on test-set calibration error and log-likelihood. To probe out-of-distribution performance further, we compare our method on the rotated MNIST benchmark from Daxberger et al. (2021a). In this benchmark we plot the negative log-likelihoods and empirical calibration errors for differently rotated MNIST images. We plot results in fig. 2 for three different architectures, and our best (LIVI 3) remains the same architecture as above. Here too, we nearly match the performance of deep ensembles on these two metrics. Of the other two architectures, LIVI 1 has a 1764-dimensional noise input and one matrix multiplication hidden layer with 350 units, while LIVI 2 has a 900-dimensional noise input with 2 hidden matrix multiplication layers of 320 units each.

6.4 COMPARISON BETWEEN IMPLICIT VARIATIONAL APPROXIMATIONS

Variational inference for BNNs relies heavily on the expressivity of the family of approximations chosen to model the posterior. In our case the architecture of the generator determines the flexibility and overall modelling capability of the implicit variational density. We trained different architectures and noticed that generator architectures with more hidden layers perform better on in-distribution metrics like accuracy and log-likelihood. Additional hidden layers afford the generators the capacity to warp the input Gaussian noise into a suitable posterior distribution. On the other hand, the dimensionality of the input noise becomes crucial for uncertainty quantification and OOD performance. We believe this is because the noise inputs are all the degrees of freedom available to the generator to model the parameters of the BNN; as such, the entropy of the resulting posterior is directly dependent on this factor. This number cannot be increased without repercussions, however: the base distribution and the number of samples affect the signal-to-noise ratio of our objective function eq. (29), and a very large $z$ results in large gradient variance, hindering convergence and requiring more samples during training or a higher number of iterations to converge. In these experiments we also noticed that downscaling only the prior log probability has a very positive effect on the results. This is due to the fact that the prior term regularises the generator, forcing it to find minima close to the prior, a standard normal distribution. The scale of this prior log-probability term is significant in the ELBO, and gradients of this term are detrimental to the overall optimisation process. Unlike cold posteriors (Wenzel et al., 2020), we keep the gradients of the entropy term as is and only reduce regularisation by downscaling the prior.
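The prior downscaling just described amounts to a one-line change in the objective. A minimal sketch of one training step follows; `entropy_estimate` stands for either LIVI entropy term, and the scale `lam` (0.1 in appendix A.3), along with the function names, are illustrative assumptions.

```python
import torch

def training_step(g, log_lik, log_prior, entropy_estimate, opt, d, m, sigma, lam=0.1):
    """Sketch of one optimisation step with prior downweighting: only
    log p(theta) is scaled by lam, and the entropy gradients are kept
    at full weight, unlike cold posteriors which temper the whole
    objective. All names here are illustrative."""
    z = torch.randn(d)
    theta = g(z) + sigma * torch.randn(m)            # eqs. (11)-(12)
    loss = -(log_lik(theta) + lam * log_prior(theta) + entropy_estimate(z))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```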
As the last benchmark, we opt to ascertain the quality of our model's posterior and the implied predictive uncertainties by plotting the empirical CDF of predictive entropies across OOD images (Lakshminarayanan et al., 2017; Louizos & Welling, 2017) in fig. 3. Given a model trained on MNIST, the predictions on a data point from a different distribution should have high entropy, ideally approaching a uniform distribution. For this plot we first obtain the entropies of the output softmax distributions for all the models across data points, and use an empirical CDF to represent how many of these predictive entropies are close to that of a uniform distribution, which has an entropy of log 10 ≈ 2.3 for ten classes. Ideally, we are looking for lines closer to the bottom-right corner, i.e. the number of low-entropy, highly confident predictions should be small. We compare our model to MAP, deep ensembles and last-layer Laplace, and find that our model trained on MNIST is quite competitive in the quality of uncertainty estimates for this test over the FMNIST dataset. For this plot we use the best generator architecture with a LeNet5 BNN, called LIVI 3 in the tests above.

7 CONCLUSION

In this paper we present a novel method for implicit variational inference for Bayesian neural networks that circumvents the need for a discriminator network to estimate intractable density ratios. We find that modelling the posterior with a highly flexible approximation indeed does have benefits. Our method, among the wide range of variational approximations, gets closer to the performance of deep ensembles, a non-probabilistic method, on in-distribution and out-of-distribution performance. Unlike conventional probabilistic methods, we do not restrict the variational approximation to a prescribed parametric family. One possible limitation of such hypernetworks is generating massive parameter vectors for large neural networks. Works like Pawlowski et al. (2017); Shi et al. (2018) use different generator architectures to generate the weights for each hidden layer, and in turn lose the information from modelling correlations across layers. Similarly, our approach can be extended to use multiple smaller generators, at the sacrifice of modelling correlations across layers.

A APPENDIX

A.1 DETAILS ON APPROXIMATION OF THE DIFFERENTIAL ENTROPY

In this section we derive eq. (22). To simplify the derivation, we will use the notation $v^2 = vv^\top$ for vectors and $M^2 = MM^\top$ for matrices. Starting from the left-hand side of eq. (22), we have that
$$\mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[h(\theta, z)] = \mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}\left[\frac{1}{2} (\theta - \mu(z))^\top C(z)^{-1} (\theta - \mu(z))\right] \qquad (34)$$
$$= \frac{1}{2} \mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[\mathrm{tr}((\theta - \mu(z))^\top C(z)^{-1} (\theta - \mu(z)))] \qquad (35)$$
$$= \frac{1}{2} \mathbb{E}_{z \sim q(z)}[\mathrm{tr}(C(z)^{-1}\, \mathbb{E}_{\theta \sim q(\theta|z)}[(\theta - \mu(z))(\theta - \mu(z))^\top])]. \qquad (36)$$
The inner expectation simplifies to
$$\mathbb{E}_{\theta \sim q(\theta|z)}[(\theta - \mu(z))^2] = \mathbb{E}_{\theta \sim q(\theta|z)}[(\theta - g(z) + \mathrm{D}g(z)\, z)^2] \qquad (37)$$
$$= \mathbb{E}_{\theta \sim q(\theta|z)}\left[(\theta - g(z))^2 + (\mathrm{D}g(z)\, z)^2 + 2 (\theta - g(z))(\mathrm{D}g(z)\, z)^\top\right] \qquad (38)$$
$$= \sigma^2 I_m + (\mathrm{D}g(z)\, z)^2, \qquad (39)$$
where we used that $\mathbb{E}_{\theta \sim \mathcal{N}(\theta|g(z), \sigma^2 I_m)}[\theta - g(z)] = 0$ and $\mathbb{E}_{\theta \sim \mathcal{N}(\theta|g(z), \sigma^2 I_m)}[(\theta - g(z))^2] = \sigma^2 I_m$. If we plug the result of eq. (39) into eq. (36), we obtain
$$\mathbb{E}_{z \sim q(z)}\mathbb{E}_{\theta \sim q(\theta|z)}[h(\theta, z)] = \frac{1}{2} \mathbb{E}_{z \sim q(z)}\left[\mathrm{tr}\left(\left(\mathrm{D}g(z)^2 + \sigma^2 I_m\right)^{-1} \left(\sigma^2 I_m + (\mathrm{D}g(z)\, z)^2\right)\right)\right]. \qquad (40)$$
Note that eq. (40) could also be derived from eq. (34) using Petersen & Pedersen (2012, eq. 380) and some reordering of the terms. Equations (37) to (39) also follow from Petersen & Pedersen (2012, eq. 325).
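As a quick sanity check of the entropy approximation, not part of the paper's experiments, consider a linear full-rank generator $g(z) = Wz + b$ with $d = m$. Here the linearisation is exact and, as $\sigma^2 \to 0$, the marginal is $\mathcal{N}(b, WW^\top)$ with a known closed-form entropy, so eq. (26) can be verified numerically:

```python
import math
import torch

# Numerical check of eq. (26) in a case where it is exact: a linear
# generator g(z) = W z + b with d = m and sigma^2 -> 0, for which
# q(theta) = N(b, W W^T) has a known differential entropy.
torch.manual_seed(0)
d = m = 5
W = torch.randn(m, d)

# Closed-form entropy of N(b, W W^T): m/2 log(2 pi e) + 1/2 logdet(W W^T).
exact = m / 2 * math.log(2 * math.pi * math.e) + 0.5 * torch.logdet(W @ W.T)

# Eq. (26): d/2 + m/2 log 2 pi + 1/2 E_z[logdet(Dg(z) Dg(z)^T)].
# For a linear generator, Dg(z) = W for every z, so no sampling is needed.
approx = d / 2 + m / 2 * math.log(2 * math.pi) + 0.5 * torch.logdet(W @ W.T)

print(float(exact), float(approx))   # the two values agree when d = m
```

The agreement relies on $\frac{m}{2}\log(2\pi e) = \frac{m}{2} + \frac{m}{2}\log 2\pi$; for a non-linear generator or $d < m$, eq. (26) is only an approximation, as discussed in the main text.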
A.2 EXPERIMENT DETAILS

We use the MMNN architecture as presented in Shi et al. (2018) for generating the weights of the MLP BNN that was trained on MNIST, as well as of the LeNet BNN that was used for all the OOD benchmarks. For the MLP experiment comparing with KIVI, we used one MM network that generated all the parameters of the network. The following architectures were tried for the LeNet5 generators:
• Noise input 25x25, 2 MM hidden layers with 250 units, output layer size 350x127.
• Noise input 30x30, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 35x35, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 35x35, 3 MM hidden layers with 350 units, output layer same as above.
• Noise input 38x38, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 42x42, 2 MM hidden layers with 325 units, output layer same as above.
All the above architectures were trained without dataset augmentation and with a maximum of 6 samples per minibatch. The last architecture required a higher number of samples due to gradient noise, which is proportional to the dimensionality of the input noise. This phenomenon has been widely observed in training high-dimensional variational approximations (Osawa et al., 2019; Mohamed et al., 2020). As all the architectures are trained for 100 epochs with the same learning rate, the increased gradient noise can significantly deter convergence when the input noise dimensions are increased.

A.3 PRIOR DOWNWEIGHTING

We choose to downscale the log prior probability in all the benchmark experiments. This term appears in the ELBO objective function and serves an important purpose: when the prior is chosen by domain experts, it ensures that the inferred approximate posterior is not too different from the intelligently chosen prior, and hence the log prior probability of samples coming from the variational approximation should be high when the ELBO is maximised. However, the choice of an appropriate prior is an active area of research in Bayesian deep learning (Fortuin, 2022), and this prior term and its regularisation effect are known to limit the variational approximation. $D_{\mathrm{KL}}[q(\theta)\|p(\theta)]$ is minimised as a result of optimising the ELBO by gradient ascent, and when the prior is naively chosen to be a standard normal, it forces most of the weights of the posterior to be zero-centred. This forces the model to look for minima that are very close to 0 and has a detrimental effect on in-distribution performance. We use the plotting tool of Shi et al. (2018) to demonstrate this effect. The line plot below has all of the weights of a neural network used to solve a toy regression task on the x-axis and their respective magnitudes on the y-axis. We chose to sort the weights in order of their magnitude, as the positions of weights are not very informative in neural networks due to permutation invariance. In the first plot, most of the weights are zero-centred and not very active; the second plot shows what happens when we downweight the prior by just 0.1.
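A minimal sketch of how such a sorted-magnitude diagnostic plot can be produced follows; the function name and the choice to average magnitudes over posterior samples are illustrative assumptions about the plotting tool.

```python
import matplotlib.pyplot as plt
import torch

def plot_sorted_weight_magnitudes(weight_samples, label):
    """Sketch of the diagnostic line plot described above: weights sorted
    by magnitude on the x-axis, magnitudes on the y-axis. `weight_samples`
    is assumed to be an (n_samples, m) tensor of posterior draws."""
    mags = weight_samples.abs().mean(dim=0)     # average |w| over samples
    sorted_mags, _ = torch.sort(mags)           # ordering by position is
    plt.plot(sorted_mags.numpy(), label=label)  # uninformative (permutation
    plt.xlabel("weights (sorted by magnitude)") # invariance), so sort instead
    plt.ylabel("magnitude")
    plt.legend()
```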
A.4 DETAILS OF FIG. 1

In fig. 1 we compare both objective functions presented in this work for training with implicit variational approximations against different methods for uncertainty quantification for neural networks. All models were trained for 10K iterations and had to learn the observation noise present in the toy sinusoidal dataset. We deliberately removed a part of the data to see if the models tested were able to find in-between uncertainties. All methods were given the same-sized two-hidden-layer MLP with 7 and 10 units, respectively. We trained 5 networks with different seeds for deep ensembles and averaged their predictions to make the plot; the variance of the predictions was then used for the confidence bands in blue. We also train the model with an observation noise parameter. For MFVI, we used KL downweighting to get it to converge and increased the weight at the end of training. For HMC, we drew 5000 samples using the library Pyro. We also tried to make multiplicative normalizing flows converge for this dataset, but even with 20K parameters and training for 15K iterations with a very small learning rate, it did not help. We even tried KL downweighting to reduce the effect of the prior in the initial iterations, but that did not work either.

A.5 COMPUTATION GRAPH

Here we provide some details about how the combination of the joint generator-BNN model works. The Bayesian neural network classes for all types of architectures (feed-forward, convolutional, etc.) require a generator in the init function. As such, the generator networks reside inside the BNN and reparametrise it with a simple sample_parameters function. The most important part of this kind of implementation was the layers themselves. PyTorch provides different kinds of mutable layer implementations in nn.Module, but these layers do not expose their state, i.e. their parameters, in a manner that allows changes on the fly during training. We reimplemented the layers to allow such resampling to occur with the generator; a sketch of one such layer is given at the end of this appendix. In the init function of the BNN, we generate one set of parameters with the generator and package it in a dict that holds the weight sample as well as an index tracking the number of weights used by previous layers. This counter index is updated by each layer in its init and sample_parameters functions. As such, only the parameters of the generator are trainable; the parameters of the BNN are switchable and relay gradients to the generator via the likelihood or the entropy term.

A.6 KDE PLOT

Figure 6 shows a KDE plot of weights randomly chosen from samples obtained from a trained generator.
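The following is a minimal sketch of a reimplemented layer in the style of appendix A.5: a linear layer whose weights are not nn.Parameters but slices of the flat parameter vector produced by the generator, so that gradients flow back to the generator. The class name and the exact slicing scheme are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class GeneratedLinear(nn.Module):
    """Sketch of a layer whose weights come from a generator sample.
    `theta` is the flat weight vector produced by the generator and
    `offset` is the running counter index described in appendix A.5."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.n_params = in_features * out_features + out_features

    def forward(self, x, theta, offset):
        # Slice this layer's weights out of theta starting at `offset`,
        # then return the new offset so the next layer continues from it.
        w_end = offset + self.in_features * self.out_features
        W = theta[offset:w_end].view(self.out_features, self.in_features)
        b = theta[w_end:w_end + self.out_features]
        return torch.nn.functional.linear(x, W, b), w_end + self.out_features
```

Because `theta` is a differentiable function of the generator's parameters, the likelihood gradients reach the generator without any nn.Parameter living inside the BNN itself.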
1. What is the focus of the paper regarding variational inference and implicit variational distributions? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of theoretical analysis and approximations? 3. Do you have any concerns about the reliability and scalability of the method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions for improving the paper, such as discussing quasi-KL measures or including more experiments on realistic datasets and models?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper posits a new variational inference method based on implicit variational distributions. It develops a bound for estimating the entropy of the implicit distribution involved in the ELBO based on local linearization. The authors then use a differentiable numerical lower bound on the Jacobians of the generator to mitigate computational concerns. Experiments are conducted on UCI regression and MNIST classification. Strengths And Weaknesses Strength The paper develops a new differentiable bound on the entropy of implicit distributions. This may be useful for the probabilistic inference community. The literature review is relatively thorough. Weaknesses Major issues Regarding the ill-defined KL when d < m, the authors should discuss the quasi-KL measure [1] to make the paper more convincing. My major concern about this paper is that there are too many approximations, such that I am not convinced of the fidelity and reliability of the yielded bound. One approximation is the local linearization around the sample z. The approximation error here can be arbitrarily large? The second approximation lies in eq 23. You really use a $\sigma^2$ that approaches 0? But if doing so, you fall back to the ill-defined KL... The third approximation is eq 31, where Jacobians are replaced with their singular values. The approximation error here can be bounded, but currently the overall approximation error is unmeasurable. Thus, I question the reliability of this method and believe more theoretical analysis regarding the tightness of your bound on the entropy is required. The main technical novelty lies in the local linearization of the generator, which in my opinion is limited. As said, more discussion or analyses on the local linearization are needed. The biggest limitation of this method is its poor scalability, which is two-fold. (1) The generator cannot trivially generate millions of parameters for modern NNs, as it cannot have that wide an output layer. (2) The singular values of the Jacobian are expensive to estimate; even the Jacobians themselves cannot be easily estimated for modern NNs. As a result, the method cannot be applied to realistic datasets and models. Results on at least CIFAR-10 are appreciated. Why is the closely related KIVI not included in the MNIST experiments? Minor issues The writing is not good enough and there are typos. An example is the first paragraph of sec 3.1. By inspecting figure 1, I don't think LIVI is as good as HMC, DE, and even MNF. Though the authors highlight that LIVI captures in-between uncertainty, it seems that it is not good enough and at least worse than even MNF. Can the authors provide a quantitative estimation of the quality of the predictive distributions of these methods using something like the divergence from the ground-truth predictive distribution (provided by HMC in my opinion)? By the way, why isn't the closely related KIVI included here? [1] Variational Bayesian dropout: pitfalls and fixes Clarity, Quality, Novelty And Reproducibility Given the above reviews, the clarity and reproducibility are good but the novelty and quality are poor.
ICLR
Title Linearised Implicit Variational Inference Abstract Bayesian neural networks (BNNs) are touted for robustness under data drift, resilience to overfitting and catastrophic forgetting whilst also producing actionable uncertainty estimates. In variational inference, these elegant properties are contingent on the expressivity of the variational approximation. Posteriors over parameters of large models are usually multimodal and highly correlated and hence cannot be well-approximated by simple, prescribed densities. We posit implicit variational distributions specified using differentiable generators are more flexible and propose a novel bound for training BNNs using such approximations (amortized neural samplers). The proposed bound uses an approximation of the variational distribution’s entropy by locally linearising the generator. Unlike existing works, our method does not require a discriminator network and moves away from an unfavourable adversarial objective. Our formulation resembles normalizing flows but does not necessitate invertibility of the generator. Moreover, we use a differentiable numerical lower bound on the Jacobians of the generator, mitigating computational concerns. We report log-likelihoods on UCI datasets competitive with deep ensembles and test our method on out-of-distribution benchmarks. 1 INTRODUCTION Deep neural networks are considered state of the art in numerous tasks in computer vision, speech and natural language processing. Scaling up neural architectures has led to outstanding performance on a myriad of generative and discriminative tasks, albeit some fundamental flaws remain. Neural networks are usually trained by maximising likelihood resulting in a single best estimate of parameters which renders these models highly overconfident of their predictions, prone to adversarial attacks and unusable in risk-averse domains. Furthermore, their usage remains restricted in sequential learning applications due to catastrophic forgetting (McCloskey & Cohen, 1989) and data-scarce regimes due to overfitting. When deployed in the wild, deep networks do not output a comprehensive measure of their uncertainty, prompting expert intervention. The Bayesian paradigm provides solutions to a number of these issues. In summary, Bayesian neural networks specify a prior distribution over parameters p(θ), and the neural network relates the parameters to the data D through a likelihood p(D|θ). The goal is to infer a conditional density over the parameters, called the posterior p(θ|D), given by the Bayes’ rule, p(θ|D) = p(D|θ)p(θ) p(D) = p(D|θ)p(θ)∫ p(D, θ) dθ . (1) This conditional density provides a range of suitable parameters with a probability over them given by the dataset. After training, predictions from an ensemble of parameters (models) can then be combined, weighted by their posterior probability forming a Bayesian model average (BMA). The variance of these aggregated predictions informs the user/human about the model’s confidence in a particular prediction. Finding the normalization constant in eq. (1) is analytically intractable for large models, and hence there is a clear focus on approximate inference techniques. Various approaches have been proposed, including Markov chain Monte Carlo (MCMC, Neal, 1995), variational inference (VI, Saul et al., 1996; Peterson, 1987) and the Laplace approximation (Mackay, 1991). 
Variational inference is a strategy that converts the inference problem into an optimisation over a family of distributions (variational family), denoted hereafter by Q, indexed by variational parameters denoted by γ. We optimise γ using a lower bound on the marginal log-likelihood of the data log p(D) called the evidence lower bound (ELBO). Usually, we are computationally limited to choosing simple distribution families like an isotropic Gaussian distribution (Tanaka, 1998; Blundell et al., 2015). The true posterior is much more complex and is approximated poorly using such approximations (Foong et al., 2019; 2020). This issue is exacerbated in large models that contain many symmetries and correlations. Notably, there have been attempts to extend VI to more structured and expressive distributions (Saul & Jordan, 1995; Bishop et al., 1997; Louizos & Welling, 2016) yet, capturing correlations between parameters with a flexible variational approximation remains the Achilles heel of these class of models. We propose an approach based on implicit generative modelling where the distribution over variables of interest is implicit and can only be sampled. This is in contrast to usual VI methods that use prescribed distributions with explicit parametrisation as the approximating density over latent variables (Diggle & Gratton, 1984; Mohamed & Lakshminarayanan, 2016). Although, this idea takes inspiration from GAN generators that try to recover the true data distribution, we do not require a discriminator network for training the generator and as a result do not suffer from the complicacies introduced by an adversarial objective. As emphasised by Tran et al. (2017), is a more natural way of capturing the generative process instead of forcing it to conform to an assumed latent structure which could be misspecified. Similar to other works in implicit VI (Shi et al., 2018), we posit using general (non-invertible) stochastic transformations that can produce highly flexible implicit densities to model posteriors of neural networks. We believe that these approximations can better capture the intricacies of posterior landscape. Additionally, when trying to model complicated densities in high-dimensions, it is sensible to learn a sampler instead of parameters of an expressive intractable approximation, especially if these approximations do not admit one-line samplers (Devroye, 1996). For example, EBMs can be very flexible but are not easy to sample from (Song & Kingma, 2021). If we were to use a fully correlated Gaussian to model the posterior of a neural network, we would need to optimize parameters quadratic in the number of weights of the network, O(dim(θ)2) to arrive at the optimum covariance matrix. In this work, we test our hypothesis of using an underparameterised generator to capture the important correlations in orders of magnitude less parameters than that. At the same time, we hint at the possibility that a constrained generator will probably avoid modelling redundancies present in BNN posteriors like permutationally symmetric modes. Succinctly, our contributions are presented as follows: • We derive a novel lower bound for variational inference in Bayesian Neural Networks using implicit variational approximation avoiding unstable minmax objectives. • We augment this lower bound by reducing its compute requirement, as we substitute a differentiable numerical lower bound for the entropy term comprising of Jacobians of neural networks. 
• We comprehensively empirically evaluate the capacity of this implicit variational approximation and the quality of the posteriors inferred using different out of distribution benchmarks. 2 VARIATIONAL INFERENCE FOR BAYESIAN NEURAL NETWORKS Consider the supervised learning setting, where we have training set D = {(xi,yi)}ni=1, where X = {xi}ni=1 are the covariates (inputs) and y = {yi}ni=1 are the labels (outputs). We consider a Bayesian regression or classification model given by p(D,θ) = p(θ)p(D|θ) = p(θ) n∏ i=1 p(yi|xi,θ), (2) where the likelihood is parameterised by θ ∈ Θ ≡ Rm. The objective function L in VI, called the ELBO is a lower bound on the log marginal likelihood of the data - log p(D), and the discrepancy between the two is equal to the KL divergence between the approximate and true posterior given by DKL[qγ(θ)||p(θ|D)] = log p(D)− Eθ∼qγ(γ) [ log p(D,θ) qγ(θ) ] ︸ ︷︷ ︸ L(γ) , (3) where qγ is the variational approximation of the posterior with parameters γ ∈ Γ. Since the KL divergence is non-negative, L is a lower bound on the evidence. This objective function can be written in terms of a likelihood term and a regularisation term as L(γ) = [ Eθ∼qγ(θ) [ log p(D|θ) ]︸ ︷︷ ︸ likelihood term −DKL[qγ(θ)||p(θ)]︸ ︷︷ ︸ regularisation term ] ≤ log p(D), (4) where the likelihood term promotes the variational approximation to model the data well and the regularisation term keeps the posterior close to the prior. Since the log-evidence, log p(D), does not depend on γ, minimising the the KL divergence is equivalent to maximising the ELBO, i.e., argmin γ DKL[qγ(θ)||p(θ|D)] ≡ argmax γ L(γ). (5) 2.1 IMPLICIT VARIATIONAL INFERENCE In implicit VI (IVI), the variational distribution is only implicitly defined through its generative process over the parameters θ z ∼ q(z), θ = gγ(z), (6) where the q is a fixed base distribution and gγ : Rd → Rm is a non-linear mapping and typically not a diffeomorphism. For IVI, the likelihood term from eq. (4) and its gradients can be estimated using Monte Carlo and the reparameterization trick. However, the regularisation term is more difficult as it involves the entropy of qγ , DKL[qγ(θ)||p(θ)] = Eθ∼qγ(θ) [ log qγ(θ) p(θ) ] = −Hq(qγ)− Eθ∼qγ(θ) [ log p(θ) ] . (7) Generally, the entropy of the generative process in eq. (6) is not available in an explicit form as the density of the process is not tractable. A prevalant technique to estimate the regularisation term uses density ratio estimators based on a GAN-like discriminator(Sugiyama et al., 2012; Huszár, 2017), and Geng et al. (2021) have given a tractable and differentiable lower bound on this entropy . Furthermore, when the dimensions of the base distribution d is smaller than m, the KL divergence is not well defined. In the KL divergance, we integrate over the whole space Θ but qγ does not have full support over this space and exists on a manifold embedded in the Θ space. In the GAN literature this problem is called mode dropping and is caused by the inability of the generator to recover all modes of the true data distribution (Che et al., 2020; Xu et al., 2018). To alleviate this, we draw inspiration from works in the GAN literature (Che et al., 2020) and add m dimensional noise to the output of the generator and redefine the variational approximation in the following section. 
3 A DEEP LATENT VARIABLE MODEL AND ITS ENTROPY As the variational distribution, we propose to use a Gaussian deep latent variable model (DLVM) of a real variable θ ∈ Rm and with a real latent variable z ∈ Rd with density q(θ) = ∫ q(θ|z)q(z) dz = Ez∼q(z)[q(θ|z)]. (8) We assume a Gaussian base density and a Gaussian output density, that is q(z) = N (z|0, Id) (9) q(θ|z) = N (θ|gγ(z), σ2Im), (10) where g : Rd → Rm is the decoder/generator and σ2 ∈ R+ is the fixed homoscedastic variance of the output density. In general, we do not have a closed form for q(θ) due to the the integral in eq. (8) and the non-linear gγ , but we note that KL divergence in eq. (7) is well defined for this variational distribution. Below we propose a novel approximation of the differential entropy of this model. This model can equivalently be viewed as a variational autoencoder (VAE, Kingma & Welling, 2014; Rezende et al., 2014) with a Gaussian prior and a Gaussian output density with constant constant homoscedastic variance and no encoder, or as a implicit distribution from eq. (6) with added Gaussian noise. The latter is clearly seen from the generative process of by describe the generative process for The generative process of θ, which is θ′ = gγ(z), z ∼ N (0, Id) (11) θ = θ′ + η, η ∼ N (0, σ2Im). (12) 3.1 DIFFERENTIAL ENTROPY We want to calculate the different entropy of the Gaussian DLVM given by H[q(θ)] = −Eθ∼q(θ)[log q(θ)]. (13) We can in general not compute this analytically since we do not have a closed form of p(θ). Since we can sample from p(θ), we can approximate the expectation in eq. (13) using Monte Carlo sampling from p(z). However, since we do not have a closed form of p(θ), we still need an approximation of log q(θ). We could approximate p(θ) using Monte Carlo sampling from p(z), but this approximation has high variance. Usually, the variance is reduced by learning and encoder and doing importance sampling. Here we derive an approximation without using an encoder. 3.1.1 LINEARISATION OF THE GENERATOR First we consider a local linearisation of the generator. Assuming that the Jacobian of g exists, the first order Taylor polynomial of g at z0 is given by T 1z0(z) = g(z0) + Dg(z0) (z − z0), (14) where Dg(z0) is the Jacobian of g evaluated in z0. This assumes that the Jacobian exists, i.e. the generator has at least one derivative. We can approximate g(z) by T 1z0(z) when z is close to z0. We apply this approximation to q(θ) from eq. (8), which gives us q(θ) = Ez∼q(z)[q(θ|z)] = Ez∼q(z)[N (θ|g(z), σ2Im)] (15) ≈ Ez∼q(z)[N (θ|g(z0) + Dg(z0) (z − z0), σ2Im)] = N (θ|µ(z0), C(z0)) =: q̃z0(θ), (16) where µ(z0) = g(z0)−Dg(z0) z0 (17) C(z0) = Dg(z0)Dg(z0) ⊺ + σ2Im. (18) The result in eq. (16) can be obtained analytically by integrating over the latent variable, see e.g. Tipping & Bishop (1999). 3.2 APPROXIMATION OF THE DIFFERENTIAL ENTROPY We use the Gaussian approximation of q(θ) to approximate the entropy of the DLVM, that is H[q(θ)] = −Ez∼q(z)Eθ∼q(θ|z)[log q(θ)] ≈ −Ez∼q(z)Eθ∼q(θ|z)[log q̃z0=z(θ)] =: H̃[p(θ)]. (19) Importantly, we do the linearisation of q(θ) around the latent value z that is used to sample each θ in the expectation. We have that log q̃z0=z(θ) = − p 2 log 2π − 1 2 log detC(z0)− 1 2 (θ − µ(z0))⊺C(z0)−1(θ − µ(z0))︸ ︷︷ ︸ =:h(θ,z0) , (20) which means that our approximation of the entropy is H̃[q(θ)] = m 2 log 2π + 1 2 Ez∼q(z)[log detC(z)] + Ez∼q(z)Eθ∼q(θ|z)[h(θ, z)]. 
(21) As shown in appendix A.1, the last term can be written as Ez∼q(z)Eθ∼q(θ|z)[h(θ, z)] = 1 2 Ez∼q(z) [ tr (( Dg(z)2 + σ2Im )−1 ( σ2Im + (Dg(z) z) 2 ))] , (22) where we used the notation M2 = MM⊺ for a matrix M . Now, if we let σ2 tend to zero, we find that lim σ2→0 Ez∼q(z)Eθ∼q(θ|z)[g(θ, z)] = 1 2 Ez∼q(z) [ tr (( Dg(z)2 )−1 (Dg(z) z)2 )] (23) = 1 2 Ez∼q(z) [z⊺z] = 1 2 tr(Id) = d 2 . (24) Similar, we can also take the limit of the determinant term from eq. (21), that is lim σ2→0 1 2 Ez∼q(z)[log detC(z)] = 1 2 Ez∼q(z) [ log det ( Dg(z)Dg(z)⊺ )] (25) Combining eqs. (21), (24) and (25), gives us the final approximation. For small values of the output variance σ2, we can approximate the differential entropy of a DLVM as H[p(θ)] ≈ lim σ2→0 H̃[p(θ)] = d 2 + m 2 log 2π + 1 2 Ez∼q(z) [ log det ( Dg(z)Dg(z)⊺ )] . (26) We can get a slightly more accurate approximation, by only applying the limit from eq. (23), and not the limit from eq. (25), which gives us H[p(θ)] ≈ d 2 + m 2 log 2π + 1 2 Ez∼q(z) [ log det ( Dg(z)Dg(z)⊺ + σ2Im )] . (27) 4 LINEARISED IMPLICIT VARIATIONAL INFERENCE (LIVI) We propose a novel bound for IVI. As the variational distribution, we use the DLVM of eq. (8), which is equivalent to adding noise to the implicit distribuion of eq. (6). Using the entropy approximation from eq. (26), we propose the approximate ELBO, L̃(γ) = Eθ∼qγ(θ) [ log p(D|θ) ] + Eθ∼qγ(θ) [ log p(θ) ] + lim σ2→0 H̃[p(θ)] (28) = Eθ∼qγ(θ) [ log p(D|θ) + log p(θ) ] + 1 2 Ez∼q(z) [ log det ( Dg(z)Dg(z)⊺ )] + c, (29) where c = d2 + m 2 log 2π. We can reparameterise the above with the base variables z,η to get L̃(γ) = Ez∼q(z),η∼q(η) [ log p(D|g(z) + η) + log p(g(z) + η) + 1 2 log det ( Dg(z)Dg(z)⊺ )] + c. (30) To avoid the calculation of the log-determinant term, we can follow Geng et al. (2021, eq. 10) and lower-bound it as 1 2 log det(Dg(z)Dg(z)⊺) = 1 2 m∑ i=1 log s2i (z) ≥ m log s1(z), (31) where sm(z) ≥ . . . ≥ s1(z) are the singular values of the Jacobian Dg(z). This gives us a lower bound on L̃(γ) given by ˜̃L(γ) =Eθ∼qγ(θ) [ log p(D|θ) + log p(θ) ] + Ez∼q(z) [ m log s1(z) ] + c ≤ L̃(γ) (32) Again, by reparameterisation with z,η we get ˜̃L(γ) = Ez∼q(z),η∼q(η) [ log p(D|g(z) + η) + log p(g(z) + η) +m log s1(z) ] + c. (33) We denote L̃(γ) the LIVI bound with accurate Jacobian and ˜̃L(γ) the LIVI bound with a differentiable lower bounded on the determinant. Depending on the amount of compute available, the two bounds provide a trade-off between the accuracy of uncertainties and the resources consumed. In both cases, the entropy maximisation promotes the generator to generate diverse weight samples which is in accordance with the principle behind Bayesian model averaging and supported by the performance of deep ensembles (Lakshminarayanan et al., 2017). We present connections with existing works in the literature in the following section highlighting similarities and divergences. 5 RELATED WORK The usage of a secondary network to generate parameters of a primary network first appeared in the form of hypernetworks (Ha et al., 2017). Our approach is probabilistic and is hence closer to Bayesian hypernetworks (Krueger et al., 2017). Compared to our approach, these models require invertibility of the generator and thereby avoid the complexities of estimating the entropy term. This corresponds to using a normalizing flow as a variational approximation. 
Training a normalizing flow over large parameter spaces is computionally costly due to large Jacobian matrices, typically requiring particular focus on the design of the variational approximation to curb dimensionality of the flow. In particular, Louizos & Welling (2017) use an expressive flow on multiplicative factors of weights in each layer and not on all weights jointly. Our bound uses a very similar change in volume formulation, log det(Dg(z)Dg(z)), for obtaining the log probability of samples under the variational density, but does not necessitate invertibility making it more general. Subsequently, Shi et al. (2018); Tran et al. (2017); Pawlowski et al. (2017) have successfully demonstrated implicit variational inference in BNNs using hypernetworks. Shi et al. (2018); Tran et al. (2017) do not focus on the entropy term, but rather try to estimate the ratio of the variational approximation to the prior (regularisation-term) in a procedure called density ratio estimation (also referred to as the prior-contrastive formulation by Huszár, 2017). Tran et al. (2017) opt for training a discriminator network to maximally distinguish two distributions given only i.i.d. samples from each. This approach, though general, adds to the computational requirements and becomes more challenging in high dimensions (Sugiyama et al., 2012). To mitigate the overhead of training the discriminator for each update of the ELBO, many works limit the discriminator training to a single or few iterations. Furthermore, this approach entails an adversarial objective that are infamously unstable (Mescheder et al., 2017). Pawlowski et al. (2017) treat all the weights as independent and find that a single discriminator network is inaccurate at estimating log ratios when compared to the analytical form of Bayes by backprop (Blundell et al., 2015), and opt to use a kernel method that matches the analytical form more closely. Shi et al. (2018) propose a novel way of estimating the ratio of the two densities using kernel regression in the space of parameters which obviates the need for a minmax objective. An obvious difficulty with kernel ridge regression in practice is its inaccuracy to estimate high-dimensional density ratios which is similar to using discriminators. This is especially the case given a limited number of samples from both the densities as well as the RBF kernel. While the RBF kernel still takes the same high-dimensional inputs and does not involve learning massive sets of parameters, its accuracy at larger scales is still doubtful. This work also proposes matrix multiplication neural network (MMNN) a novel generator architecture for generating large set of parameters. Pradier et al. (2018) are also motivated by the possibility of compressing the posterior in a lower dimensional space and use an inference network with a generator. Their model differs from ours as they also consider the parameters of the generator/decoder to be stochastic. Moreover they require empirical weight samples to train which doubles the training steps. D-SIVI (Molchanov et al., 2019) and SIVI (Yin & Zhou, 2018) use Monte Carlo (MC) averaging to approximate the entropy term. Both works use the implicit formulation to only model the mixing coefficients and not all the weights of the network. Our entropy term 8 also has a similar form and can be MC approximated. 
In the spirit of some recent works (Izmailov et al., 2020; Daxberger et al., 2021b;a) that alternatively choose a lower dimensional representation to preclude costly, high-dimensional inference, our work can be seen as allowing the approximate posterior in the form of the generator to choose which dimensions and parts of posterior are crucial and model them accordingly. 6 EXPERIMENTS 6.1 TOY DATA In fig. 1, we compare inference with our method against the gold standard for posterior inference on a simple toy dataset. After training, we also plot a KDE-plot of the samples the generator outputs in appendix A.6. We infer from this plot that the generator is capable of representing non-trivial distributions as we can spot heavy tails and multiple modes. 6.2 UCI DATASETS We perform experiments on UCI regression datasets with the setups by Lakshminarayanan et al. (2017) and Shi et al. (2018), using a BNN with one hidden layer MLP with 50 units on all of these datasets. We report the RMSE and log-likelihood on held out data for our method. We use generator architectures that are either equally or less powerful than Shi et al. (2018) and do not assume independence across layers, i.e. using one MLP to generate all the weights of the BNN. All of the generator architectures are one hidden layered MLP with a slightly varying number of units depending on the dataset. At this scale it is feasible to estimate uncertainties using accurate Jacobians. We require far fewer number of samples (5-10) per iteration compared to 100 used by KIVI to achieve very competitive results. We suspect they use high number of samples to curb the variance of the kernel estimator. Our results are summarised in table 1. We train our method with a homoscedastic assumption i.e. the variance in the dataset is assumed to be constant and we train an observation noise parameter using type II maximum likelihood. 6.3 MNIST DATASET Next we test our method on the MNIST dataset. While using the MMNN as the generator, we were able to achieve errors on the test set on par with KIVI for MLP with 400 and 800 hidden units. With the total number of parameters generated exceeding 400K even for 400 hidden units we chose to train the model only with the differentiable lower bound due to prohibitively high memory usage. For OOD testing we compare our method to last-layer laplace, deep ensembles and a simple MAP estimate. We intentionally choose these methods to compare against as a mean-field approximation usually does not achieve good accuracy on in-distribution data and has been shown, repeatedly, to suffer from many optimisation difficulties. On the other hand it is possible to run HMC samplers at this scale, it is not preferable. Very few works in the literature report results using full batch HMC (Izmailov et al., 2021). Deep ensembles predict using neural networks that have converged onto different minima hence encompassing information from diverse modes of the posterior, and as such remains one of the best in terms of uncertainty estimation. As for benchmarks we choose two OOD benchmarks presented in Daxberger et al. (2021a). First we test the OOD AUROC and confidence of a LeNet5 BNN trained on MNIST by using FMNIST, KMNIST and EMNIST. We used a MMNN architecture for generating over 40K parameters and trained using the differentiable numerical approximation with 3-6 samples depending on the architecture. We expand on few generator architectures here and leave the rest for appendix appendix A.2. 
The BNN trained with the implicit variational approximation, a generator with a 1225 dimension noise input and 2 matrix multiplication layers of 350 units each achieves accuracy of 99.071%±0.02, and calibration error of 0.084 ± 0.011 with nll −0.021 ± 0.001 on the test set. The same model reports an averaged OOD AUCROC of 97.15 ± 0.17 with an averaged confidence 68.53 ± 0.24. According to results provided in Daxberger et al. (2021a, Table 1), our model does not outperform in terms of confidence values yet, we notice that the performance degrades very smoothly as it encounters OOD data as opposed to models like Deep Ensembles and Laplace both of which fail relatively immediately and drastically in terms of confidence values. Our model does perform quite well on the averaged AUROC as well as on test set calibration error and log-likelihood. To probe out of distribution performance further, we compare our method on the rotated MINST benchmark from Daxberger et al. (2021a). In this benchmark we plot the negative log-likelihoods and empirical calibration errors for different rotated MNIST images. In this benchmark task we plot results in fig. 2 for three different architectures and our best (LIVI 3) remains the same architecture as above. Here too, we nearly match the performance of deep ensembles on these two metrics. The other two architectures, LIVI 1 has 1764 dimensional noise input and one matrix multiplication hidden layer with 350 units while LIVI 2 has 900 dimensional noise input with 2 hidden matrix multiplication layers of 320 units each. 6.4 COMPARISON BETWEEN IMPLICIT VARIATIONAL APPROXIMATIONS Variational inference for BNNs relies heavily on the expressivity of the family of approximations chosen to model the posterior. In our case the architecture of the generator represents the flexibility and overall modeling capability of the implicit variational density. We trained different architectures and noticed that generator architectures with more hidden layers perform better on in-distribution metrics like accuracy and log-likelihood. Additional hidden layers afford the generators the capacity to warp the input Gaussian noise into a suitable posterior distribution. On the other hand, the dimensionality of input noise becomes crucial for uncertainty quantification and OOD performance. We believe this is because the number of noise inputs are all the degrees of freedom available to the generator to model the parameters of the BNN. As such, the entropy of resulting posterior is directly dependent on the this factor. Although this number cannot be increased without repercussions because the base distribution and the number of samples affect the signal to noise ratio of our objective function eq. (29) and a very large z results in large gradient variance hindering covergence, requiring more samples during training or higher number of iterations to converge. In these experiments we also noticed that down scaling only the prior log probability has a very positive effect on the results. This is due to the fact that the prior term regularises the generator, forcing it to find minimas close to itself, a standard normal distribution. The scale of this prior log probability term is significant in the ELBO and gradients of this term are detrimental to the overall optimisation process. Unlike cold-posteriors(Wenzel et al., 2020), we keep the gradients of the entropy term as is and only reduce regularisation by downscaling the prior. 
As the last benchmark, we ascertain the quality of our model's posterior and the implied predictive uncertainties by plotting the empirical CDF of predictive entropies across OOD images (Lakshminarayanan et al., 2017; Louizos & Welling, 2017) in fig. 3. Given a model trained on MNIST, predictions on data points from a different distribution should have high entropy, like a uniform distribution. For this plot we first obtain the entropies of the output softmax distributions for all models across data points, and use an empirical CDF to represent how many of these predictive entropies are close to that of a uniform distribution, which has an entropy of 2.3. Ideally, we are looking for curves closer to the bottom-right corner, i.e., the number of low-entropy, highly confident predictions should be small. We compare our model to MAP, deep ensembles and last-layer Laplace, and find that our model trained on MNIST is quite competitive in the quality of uncertainty estimates for this test over the FMNIST dataset. For this plot we use the best generator architecture with a LeNet5 BNN, called LIVI 3 in the tests above.

7 CONCLUSION

In this paper we present a novel method for implicit variational inference for Bayesian neural networks that circumvents the need for a discriminator network to estimate intractable density ratios. We find that modelling the posterior with a highly flexible approximation indeed has benefits. Among the wide range of variational approximations, our method gets closer to the performance of deep ensembles, a non-probabilistic method, on both in-distribution and out-of-distribution performance, where conventional probabilistic methods typically do not. One possible limitation of such hypernetworks is generating massive parameter vectors for large neural networks. Works like Pawlowski et al. (2017); Shi et al. (2018) use different generator architectures to generate the weights of each hidden layer and in turn lose the information from modelling correlations across layers. Similarly, our approach can be extended to use multiple smaller generators, at the sacrifice of modelling correlations across layers.

A APPENDIX

A.1 DETAILS ON APPROXIMATION OF THE DIFFERENTIAL ENTROPY

In this section we derive eq. (22). To simplify the derivation, we will use the notation $v^2 = v^\top v$ for vectors and $M^2 = M^\top M$ for matrices. Starting from the left-hand side of eq. (22), we have that

$\mathbb{E}_{z\sim q(z)}\mathbb{E}_{\theta\sim q(\theta|z)}[g(\theta, z)] = \mathbb{E}_{z\sim q(z)}\mathbb{E}_{\theta\sim q(\theta|z)}\left[\tfrac{1}{2}(\theta - \mu(z))^\top C(z)^{-1}(\theta - \mu(z))\right]$ (34)

$= \tfrac{1}{2}\mathbb{E}_{z\sim q(z)}\mathbb{E}_{\theta\sim q(\theta|z)}\left[\mathrm{tr}\left((\theta - \mu(z))^\top C(z)^{-1}(\theta - \mu(z))\right)\right]$ (35)

$= \tfrac{1}{2}\mathbb{E}_{z\sim q(z)}\left[\mathrm{tr}\left(C(z)^{-1}\,\mathbb{E}_{\theta\sim q(\theta|z)}\left[(\theta - \mu(z))(\theta - \mu(z))^\top\right]\right)\right]$ (36)

The inner expectation simplifies to

$\mathbb{E}_{\theta\sim q(\theta|z)}\left[(\theta - \mu(z))^2\right] = \mathbb{E}_{\theta\sim q(\theta|z)}\left[(\theta - g(z) + D_g(z)\,z)^2\right]$ (37)

$= \mathbb{E}_{\theta\sim q(\theta|z)}\left[(\theta - g(z))^2 + (D_g(z)\,z)^2 + 2(\theta - g(z))^\top D_g(z)\,z\right]$ (38)

$= \sigma^2 I_m + (D_g(z)\,z)^2,$ (39)

where we used that $\mathbb{E}_{\theta\sim\mathcal{N}(\theta|g(z),\sigma^2 I_m)}[\theta - g(z)] = 0$ and $\mathbb{E}_{\theta\sim\mathcal{N}(\theta|g(z),\sigma^2 I_m)}[(\theta - g(z))^2] = \sigma^2 I_m$. If we plug the result of eq. (39) into eq. (36), we obtain

$\mathbb{E}_{z\sim q(z)}\mathbb{E}_{\theta\sim q(\theta|z)}[g(\theta, z)] = \tfrac{1}{2}\mathbb{E}_{z\sim q(z)}\left[\mathrm{tr}\left(\left(D_g(z)^2 + \sigma^2 I_m\right)^{-1}\left(\sigma^2 I_m + (D_g(z)\,z)^2\right)\right)\right].$ (40)

Note that eq. (40) could also be derived from eq. (34) using Petersen & Pedersen (2012, eq. 380) and some reordering of the terms. Equations (37) to (39) also follow from Petersen & Pedersen (2012, eq. 325).
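To make the derivation concrete, the following is a minimal numerical sketch of one Monte Carlo term of eq. (40), assuming a toy generator; the names g, W, sigma2 and all sizes are illustrative and not from the paper. To make the traces typecheck, $D_g(z)^2$ is read here as $D_g(z)D_g(z)^\top$ and $(D_g(z)z)^2$ as the outer product of $D_g(z)z$ with itself.

```python
import torch
from torch.autograd.functional import jacobian

torch.manual_seed(0)
m, d = 5, 3                      # toy sizes: BNN parameter dim m, noise dim d
W = torch.randn(m, d)

def g(z):
    # illustrative generator theta = g(z); any smooth map works here
    return torch.tanh(W @ z)

sigma2 = 1e-2                    # variance of the conditional q(theta | z)

def inner_term(z):
    # 0.5 * tr( (Dg Dg^T + sigma^2 I)^{-1} (sigma^2 I + (Dg z)(Dg z)^T) )
    Dg = jacobian(g, z)                      # (m, d) Jacobian of g at z
    A = Dg @ Dg.T + sigma2 * torch.eye(m)
    v = Dg @ z                               # Dg(z) z, an m-vector
    B = sigma2 * torch.eye(m) + torch.outer(v, v)
    return 0.5 * torch.trace(torch.linalg.solve(A, B))

print(inner_term(torch.randn(d)))            # one Monte Carlo term of eq. (40)
```

Averaging inner_term over many draws of z gives the outer expectation in eq. (40).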
A.2 EXPERIMENT DETAILS

We use the MMNN architecture as presented in Shi et al. (2018) for generating the weights of the MLP BNN that was trained on MNIST, as well as the LeNet BNN that was used for all the OOD benchmarks. For the MLP experiment comparing with KIVI, we used one MM network that generated all the parameters of the network. The following architectures were tried for the LeNet5 generators:

• Noise input 25x25, 2 MM hidden layers with 250 units, output layer of size 350x127.
• Noise input 30x30, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 35x35, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 35x35, 3 MM hidden layers with 350 units, output layer same as above.
• Noise input 38x38, 2 MM hidden layers with 350 units, output layer same as above.
• Noise input 42x42, 2 MM hidden layers with 325 units, output layer same as above.

All of the above architectures were trained without dataset augmentation and with a maximum of 6 samples per minibatch. The last architecture required a higher number of samples due to gradient noise, which is proportional to the dimensionality of the input noise. This phenomenon has been widely observed in training high-dimensional variational approximations (Osawa et al., 2019; Mohamed et al., 2020). As all the architectures are trained for 100 epochs with the same learning rate, increasing gradient noise can significantly deter convergence when the input noise dimensions are increased.

A.3 PRIOR DOWNWEIGHTING

We choose to down-scale the log prior probability in all the benchmark experiments. This term appears in the ELBO objective function and serves an important purpose. When the prior is chosen by domain experts, this term ensures that the inferred approximate posterior is not too different from the intelligently chosen prior; hence the log prior probability of samples coming from the variational approximation should be high when the ELBO is being maximised. However, the choice of an appropriate prior is an active area of research in Bayesian deep learning (Fortuin, 2022), and the regularisation effect of this prior term is known to limit the variational approximation. $D_{\mathrm{KL}}[q(\theta)\,\|\,p(\theta)]$ is minimised as a result of optimising the ELBO by gradient ascent, and when the prior is naively chosen to be a standard normal, it forces most of the weights of the posterior to be zero-centered. This forces the model to look for minima that are very close to 0 and has a detrimental effect on in-distribution performance. We use the plotting tool of Shi et al. (2018) to demonstrate this effect. The line plot below has all of the weights of a neural network used to solve a toy regression task on the x-axis and their respective magnitudes on the y-axis. We chose to sort the weights in order of their magnitude, as the positions of weights are not very informative in neural networks due to permutation invariance. In the first plot, most of the weights are zero-centred and not very active; the second plot shows what happens when we down-weight the prior by just 0.1.
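In code, the down-weighting amounts to a single extra factor on the prior term only; here is a minimal sketch under our notation (the function and argument names are illustrative, not from the paper):

```python
import torch

def downweighted_elbo(log_lik, theta, entropy, prior_scale=0.1):
    # log N(theta | 0, I) of the sampled BNN weights under the standard
    # normal prior, summed over parameters
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(theta).sum()
    # only the prior regulariser is scaled down; unlike cold posteriors,
    # the entropy term (and the likelihood) are kept as is
    return log_lik + prior_scale * log_prior + entropy

# training step (sketch): loss = -downweighted_elbo(ll, theta, H)
```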
A.4 DETAILS OF FIG. 1

In fig. 1 we compare both objective functions presented in this work for training implicit variational approximations against different methods for uncertainty quantification in neural networks. All models were trained for 10K iterations and had to learn the observation noise present in the toy sinusoidal dataset. We deliberately removed a part of the data to see if the tested models were able to find in-between uncertainties. All methods were given the same two-hidden-layer MLP with 7 and 10 units respectively. We trained 5 networks with different seeds for deep ensembles and averaged their predictions to make the plot. The variance of the predictions was then used for the confidence bands in blue. We also trained the model with an observation noise parameter. For MFVI, we used KL down-weighting to get it to converge and increased the weight at the end of training. For HMC we drew 5000 samples using the library Pyro. We also tried to make multiplicative normalizing flows converge for this dataset, but even with 20K parameters, training for 15K iterations with a very low learning rate did not help. We even tried KL down-weighting to reduce the effect of the prior in the initial iterations, but that did not work either.

A.5 COMPUTATION GRAPH

Here we provide some details about how the joint generator–BNN model works. The Bayesian neural network classes for all types of architectures (feed-forward, convolutional, etc.) require a generator in the init function. As such, the generator network resides inside the BNN and reparametrises it with a simple sample_parameters function. The most important part of this kind of implementation was the layers themselves. PyTorch provides different kinds of mutable layer implementations in nn.Module, but these layers do not expose their state, i.e., their parameters, in a manner that allows changes on the fly during training. We reimplemented the layers to allow such resampling to occur with the generator. In the init function of the BNN, we generate one set of parameters with the generator and package it in a dict that holds the weight sample as well as an index recording the number of weights used by previous layers. This counter index is updated by each layer in its init and sample_parameters functions. As such, only the parameters of the generator are trainable; the parameters of the BNN are switchable and relay gradients to the generator via the likelihood or the entropy term.

A.6 KDE PLOT

Figure 6 shows a KDE plot of weights randomly chosen from samples obtained from a trained generator.
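To make the computation-graph description in appendix A.5 concrete, here is a minimal sketch of the pattern: a generator living inside the BNN, layers that slice their weights out of a generated flat vector using an offset counter, and a sample_parameters call that reparametrises the whole network. Class names and sizes are illustrative assumptions; only sample_parameters and the offset bookkeeping come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps noise z to one flat vector holding all BNN parameters."""
    def __init__(self, z_dim, n_params, hidden=64):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_params))

    def forward(self, z):
        return self.net(z)

class HyperLinear(nn.Module):
    """Linear layer whose weights are sliced from the generated vector,
    so gradients flow to the generator rather than to layer parameters."""
    def __init__(self, in_f, out_f, offset):
        super().__init__()
        self.in_f, self.out_f, self.offset = in_f, out_f, offset
        self.n = in_f * out_f + out_f       # weights + biases consumed

    def forward(self, x, theta):
        o, nw = self.offset, self.in_f * self.out_f
        w = theta[o:o + nw].view(self.out_f, self.in_f)
        b = theta[o + nw:o + self.n]
        return F.linear(x, w, b)

class BNN(nn.Module):
    def __init__(self, generator):
        super().__init__()
        self.generator = generator          # the only trainable parameters
        self.l1 = HyperLinear(1, 16, 0)
        self.l2 = HyperLinear(16, 1, self.l1.n)

    def sample_parameters(self):
        z = torch.randn(self.generator.z_dim)
        return self.generator(z)

    def forward(self, x):
        theta = self.sample_parameters()    # fresh weight sample per call
        return self.l2(torch.relu(self.l1(x, theta)), theta)

# usage: total generated parameters = (1*16 + 16) + (16*1 + 1) = 49
bnn = BNN(Generator(z_dim=8, n_params=49))
out = bnn(torch.randn(4, 1))
```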
1. What is the focus and contribution of the paper on variational inference?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its technical exposition and experimental evaluation?
3. Do you have any concerns about the approximations made in the paper, such as down-weighting the prior term in the ELBO?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as including HMC results or providing more information about the variants used in the experiments?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a variational inference method based on sampling model parameters from a generator neural network. It calculates the entropy of the resulting implicit variational distribution by linearizing the network, resulting in a Gaussian entropy. Further, the paper proposes to bound the costly log-determinant calculation of the variance by the maximum eigenvalue (via the maximum singular value of the Jacobian). It reports performance on UCI regression and MNIST uncertainty estimation benchmarks that is competitive with deep ensembles, a kernel-based implicit VI method and a last-layer Laplace approximation.

Strengths And Weaknesses
Strengths:
• Generating parameters by a neural network is an appealing direction for variational inference, as it allows for potentially complex, multi-modal variational distributions. This paper proposes a coherent and principled approach for this.
• The technical exposition is extremely clear.

Weaknesses:
• The experimental evaluation is extremely small scale, not going beyond MNIST, which is not exactly informative. There are unfortunately no ablation studies or relevant qualitative experiments. Given that the paper makes multiple approximations, I really would have wanted to see some more in-depth analysis of these choices rather than fairly tangential experiments on the architecture of the generator (I know this matters, but it does not seem exactly relevant for the main text).
• There is a lengthy discussion around down-weighting the prior term in the ELBO. I know it is common practice to temper the KL divergence in the variational objective and have used such tricks myself; however, I find this somewhat contradicts the motivation of the paper to approximate the entropy term in the objective. Why bother approximating the entropy if we're not using the ELBO in a principled way anyway?

Minor comments:
• I don't understand why the variance on the parameters is taken to the limit of 0 rather than being treated as a variational parameter.
• I'm not entirely sure whether this equivalence holds, but taking the outer product of the Jacobians that the paper uses as equal to the Fisher, I wonder whether, instead of using an iterative approach for calculating the singular value, the structure of the Fisher could be exploited, e.g., as in (Ritter et al., A scalable Laplace approximation for neural networks, ICLR 2018), for efficiently calculating eigenvalues/log determinants.
• I would suggest adding HMC results to the UCI benchmark for reference.
• Fig 3 should include reference values for the in-distribution test data; OOD entropies are meaningless in isolation.

Clarity, Quality, Novelty And Reproducibility
Clarity: The methodological section is clear. The experimental section is a bit underwhelming in this regard, e.g., there are different variations of the proposed method that are not defined at all ('acc-jac' and 'diff-lb' in Fig 1) or barely motivated (LIVI 1-3). I would suggest the authors state explicitly what questions they are trying to address with these variants.
Quality: While the methodological derivations are correct and interesting, the experimental evaluation is severely lacking.
Novelty: The approach is novel.
ICLR
Title
k-Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy

Abstract
In clustering algorithms, the choice of initial centers is crucial for the quality of the learned clusters. We propose a new initialization scheme for the k-median problem in the general metric space (e.g., discrete space induced by graphs), based on the construction of a metric embedding tree structure of the data. From the tree, we propose a novel and efficient search algorithm for good initial centers that can subsequently be used for the local search algorithm. The so-called HST initialization method can produce initial centers achieving lower errors than those from another popular initialization method, k-median++, with comparable efficiency. Our HST initialization can also be easily extended to the setting of differential privacy (DP) to generate private initial centers. We show that the error of applying DP local search followed by our private HST initialization improves previous results on the approximation error, and approaches the lower bound within a small factor. Experiments demonstrate the effectiveness of our proposed methods.

1 INTRODUCTION

Clustering is an important problem in unsupervised learning that has been widely studied in statistics, data mining, network analysis, etc. (Punj and Stewart, 1983; Dhillon and Modha, 2001; Banerjee et al., 2005; Berkhin, 2006; Abbasi and Younis, 2007). The goal of clustering is to partition a set of data points into clusters such that items in the same cluster are expected to be similar, while items in different clusters should be different. This is concretely measured by the sum of distances (or squared distances) from each point to its nearest cluster center. One conventional notion to evaluate a clustering algorithm is: with high probability,

$cost(C, D) \le \gamma\, OPT_k(D) + \xi,$

where $C$ is the set of centers output by the algorithm and $cost(C, D)$ is a cost function defined for $C$ on dataset $D$. $OPT_k(D)$ is the cost of the optimal (oracle) clustering solution on $D$. When everything is clear from context, we will use OPT for short. Here, $\gamma$ is called the multiplicative error and $\xi$ is called the additive error. Alternatively, we may also use the notion of expected cost. Two popularly studied clustering problems are 1) the k-median problem, and 2) the k-means problem. The origin of k-median dates back to the 1970s (e.g., Kaufman et al. (1977)), where one tries to find the best location of facilities that minimizes the cost measured by the distance between clients and facilities. Formally, given a set of points D and a distance measure, the goal is to find k center points minimizing the sum of absolute distances of each sample point to its nearest center. In k-means, the objective is to minimize the sum of squared distances instead. In particular, k-median is usually the one used for clustering on graph/network data. In general, there are two popular frameworks for clustering. One heuristic is Lloyd's algorithm (Lloyd, 1982), which is built upon an iterative distortion minimization approach. In most cases, this method can only be applied to numerical data, typically in the (continuous) Euclidean space. Clustering in general metric spaces (discrete spaces) is also important and useful when dealing with, for example, graph data, where Lloyd's method is no longer applicable. A more broadly applicable approach, the local search method (Kanungo et al., 2002; Arya et al., 2004), has also been widely studied.
It iteratively finds the optimal swap between the center set and non-center data points to keep lowering the cost. Local search can achieve a constant approximation ratio ($\gamma = O(1)$) to the optimal solution for k-median (Arya et al., 2004).

Initialization of cluster centers. It is well known that the performance of clustering can be highly sensitive to initialization. If clustering starts with good initial centers (i.e., with small approximation error), the algorithm may use fewer iterations to find a better solution. The k-median++ algorithm (Arthur and Vassilvitskii, 2007) iteratively selects k data points as initial centers, favoring distant points in a probabilistic way. Intuitively, the initial centers tend to be well spread over the data points (i.e., over different clusters). The produced initial centers are proved to have $O(\log k)$ multiplicative error. Follow-up works on k-means++ further improved its efficiency and scalability, e.g., Bahmani et al. (2012); Bachem et al. (2016); Lattanzi and Sohler (2019). In this work, we propose a new initialization framework, called HST initialization, based on metric embedding techniques. Our method is built upon a novel search algorithm on metric embedding trees, with approximation error and running time comparable to k-median++. Moreover, importantly, our initialization scheme can be conveniently combined with the notion of differential privacy (DP).

Clustering with Differential Privacy. The concept of differential privacy (Dwork, 2006; McSherry and Talwar, 2007) has been popular for rigorously defining and resolving the problem of keeping useful information for model learning while protecting privacy for each individual. The private k-means problem has been widely studied, e.g., Feldman et al. (2009); Nock et al. (2016); Feldman et al. (2017), mostly in the continuous Euclidean space. Balcan et al. (2017) considered identifying a good candidate set of centers (in a private manner) before applying private local search, which yields $O(\log^3 n)$ multiplicative error and $O((k^2 + d)\log^5 n)$ additive error. Later on, the Euclidean k-means errors were further improved to $\gamma = O(1)$ and $\xi = O(k^{1.01}\cdot d^{0.51} + k^{1.5})$ by Stemmer and Kaplan (2018), with more advanced candidate set selection. Huang and Liu (2018) gave an optimal algorithm in terms of minimizing the Wasserstein distance under some data separability condition. For private k-median clustering, Feldman et al. (2009) considered the problem in high-dimensional Euclidean space. However, it is rather difficult to extend their analysis to more general metrics in discrete spaces (e.g., on graphs). The strategy of Balcan et al. (2017) to form a candidate center set could likewise be adopted for k-median, which leads to $O(\log^{3/2} n)$ multiplicative error and $O((k^2 + d)\log^3 n)$ additive error in high-dimensional Euclidean space. In discrete space, Gupta et al. (2010) proposed a private method for the classical local search heuristic, which applies to both k-median and k-means. To ensure privacy at each swapping step, the authors applied the exponential mechanism of McSherry and Talwar (2007). Their method produces an $\epsilon$-differentially private solution with cost $6OPT + O(\triangle k^2 \log^2 n/\epsilon)$, where $\triangle$ is the diameter of the point set. In this work, we will show that our HST initialization can improve DP local search for k-median (Gupta et al., 2010) in terms of both approximation error and efficiency.
The main contributions of this work include:

• We introduce the Hierarchically Well-Separated Tree (HST) to the k-median clustering problem for initialization. We design an efficient sampling strategy to select the initial center set from the tree, with an approximation factor $O(\log\min\{k, \triangle\})$ in the non-private setting, which is $O(\log\min\{k, d\})$ when $\triangle = O(d)$ (e.g., bounded data). This improves the $O(\log k)$ error of k-means/median++ in, e.g., lower-dimensional Euclidean space.

• We propose a differentially private version of HST initialization under the setting of Gupta et al. (2010) in discrete metric space. The so-called DP-HST algorithm finds initial centers with $O(\log n)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2 \log^2 n)$ additive error. Moreover, running DP local search starting from this initialization gives $O(1)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2 (\log\log n)\log n)$ additive error, which improves previous results towards the well-known lower bound $O(\epsilon^{-1}\triangle k \log(n/k))$ on the additive error of DP k-median (Gupta et al., 2010), within a small $O(k\log\log n)$ factor. This is the first clustering initialization method with a differential privacy guarantee and improved error rate in general metric space.

• We conduct experiments on simulated and real-world datasets to demonstrate the effectiveness of our methods. In both the non-private and private settings, our proposed HST-based approach achieves smaller cost at initialization than k-median++, which may also lead to improvements in the final clustering quality.

2 BACKGROUND AND SETUP

Definition 2.1 (Differential Privacy (DP) (Dwork, 2006)). If for any two adjacent datasets $D$ and $D'$ with symmetric difference of size one, and for any $O \subset \mathrm{Range}(\mathcal{A})$, an algorithm $\mathcal{A}$ satisfies

$\Pr[\mathcal{A}(D) \in O] \le e^{\epsilon}\,\Pr[\mathcal{A}(D') \in O],$

then algorithm $\mathcal{A}$ is said to be $\epsilon$-differentially private.

Intuitively, DP requires that after removing any observation, the output on $D'$ should not be too different from that on the original dataset $D$. A smaller $\epsilon$ indicates stronger privacy, which, however, usually sacrifices utility. Thus, one central topic in DP is to balance the utility-privacy trade-off. To achieve DP, one approach is to add noise to the algorithm output. The Laplace mechanism adds $\mathrm{Lap}(\eta(f)/\epsilon)$ noise to the output, which is known to achieve $\epsilon$-DP. The exponential mechanism is also a tool underlying many DP algorithms. Let $O$ be the set of feasible outputs. The utility function $q: \mathcal{D}\times O \to \mathbb{R}$ is what we aim to maximize. The exponential mechanism outputs an element $o \in O$ with probability $P[\mathcal{A}(D) = o] \propto \exp\left(\frac{\epsilon\, q(D, o)}{2\eta(q)}\right)$, where $D$ is the input dataset and $\eta(f) = \sup_{|D - D'| = 1}|f(D) - f(D')|$ is the sensitivity of $f$. Both mechanisms will be used in our paper.

2.1 k-MEDIAN CLUSTERING

Following Arya et al. (2004); Gupta et al. (2010), the problem of k-median clustering (DP and non-DP) studied in our paper is stated as follows.

Definition 2.2 (k-median). Given a universe point set $U$ and a metric $\rho: U\times U \to \mathbb{R}$, the goal of k-median is to pick $F \subseteq U$ with $|F| = k$ to minimize

k-median: $cost_k(F, U) = \sum_{v\in U}\min_{f\in F}\rho(v, f).$ (1)

Let $D \subseteq U$ be a set of demand points. The goal of DP k-median is to minimize

DP k-median: $cost_k(F, D) = \sum_{v\in D}\min_{f\in F}\rho(v, f).$ (2)

At the same time, the output $F$ is required to be $\epsilon$-differentially private with respect to $D$. We may drop "$F$" and use "$cost_k(U)$" or "$cost_k(D)$" if there is no risk of ambiguity.
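For concreteness, here is a minimal sketch of the two DP primitives described above; the function names are ours, and the formulas follow the definitions in Section 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, eps):
    # release value + Lap(eta(f)/eps); eps-DP for a query f with the
    # given L1 sensitivity eta(f)
    return value + rng.laplace(scale=sensitivity / eps)

def exponential_mechanism(outputs, utilities, sensitivity, eps):
    # sample o with probability proportional to exp(eps * q(D,o) / (2 eta(q)))
    scores = eps * np.asarray(utilities, dtype=float) / (2 * sensitivity)
    scores -= scores.max()                 # for numerical stability
    p = np.exp(scores)
    p /= p.sum()
    return outputs[rng.choice(len(outputs), p=p)]
```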
To better understand the motivation of DP clustering, we provide a real-world example as follows.

Example 2.3. Consider $U$ to be the universe of all users in a social network (e.g., Twitter). Each user (account) is public, but also has some private information that can only be seen by the data holder. Let $D$ be users grouped by some feature that might be set as private. Suppose a third party plans to collaborate with the most influential users in $D$ for, e.g., commercial purposes, thus requesting the cluster centers of $D$. In this case, we need a strategy to safely release the centers while protecting the individuals in $D$ from being identified (since the membership of $D$ is private).

The local search procedure for k-median proposed by Arya et al. (2004) is summarized in Algorithm 1. First we randomly pick $k$ points in $U$ as the initial centers. In each iteration, we search over all $x \in F$ and $y \in U$, and do the swap $F \leftarrow F - \{x\} + \{y\}$ such that $F - \{x\} + \{y\}$ improves the cost of $F$ the most (if by more than a factor of $(1 - \alpha/k)$, where $\alpha > 0$ is a hyper-parameter). We repeat the procedure until no such swap exists. Arya et al. (2004) showed that the output centers $F$ achieve a 5-approximation to the optimal solution, i.e., $cost(F) \le 5\,OPT$.

Algorithm 1: Local search for k-median clustering (Arya et al., 2004)
Input: Data points $U$, parameter $k$, constant $\alpha$
Initialization: Randomly select $k$ points from $U$ as the initial center set $F$
while $\exists\, x \in F, y \in U$ s.t. $cost(F - \{x\} + \{y\}) \le (1 - \alpha/k)\,cost(F)$ do
  Select $(x, y) \in F \times (U \setminus F)$ with $\arg\min_{x,y}\{cost(F - \{x\} + \{y\})\}$
  Swap operation: $F \leftarrow F - \{x\} + \{y\}$
Output: Center set $F$

2.2 k-MEDIAN++ INITIALIZATION

Although local search is able to find a solution with constant error, it takes $O(n^2)$ time per iteration (Resende and Werneck, 2007) over an expected $O(k\log n)$ steps (in total $O(kn^2\log n)$) when started from a random center set, which would be slow for large datasets. Indeed, we do not need such a complicated algorithm to reduce the cost at the beginning, i.e., when the cost is large. To accelerate the process, efficient initialization methods find a "roughly" good center set as the starting point for local search. In this paper, we compare our new initialization scheme mainly with a popular (and perhaps the most well-known) initialization method, k-median++ (Arthur and Vassilvitskii, 2007) (see Algorithm 6 in the Appendix). Arthur and Vassilvitskii (2007) showed that the output centers $C$ of k-median++ achieve $O(\log k)$ approximation error with time complexity $O(nk)$. Starting from this initialization, we only need to run $O(k\log\log k)$ steps of the computationally heavy local search to reach a constant-error solution. Thus, initialization may greatly improve the clustering efficiency.
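A minimal sketch of Algorithm 1 operating on a precomputed distance matrix; the index-based representation is our choice (points are identified with the row indices of dist):

```python
import numpy as np

def kmedian_cost(centers, dist):
    # sum over all points of the distance to the nearest chosen center;
    # dist[i, j] = rho(point i, point j)
    return dist[:, centers].min(axis=1).sum()

def local_search(dist, k, alpha=1e-3, seed=0):
    n = dist.shape[0]
    rng = np.random.default_rng(seed)
    F = list(rng.choice(n, size=k, replace=False))
    while True:
        cur, best = kmedian_cost(F, dist), None
        for i in range(k):                 # center x = F[i] to swap out
            for y in range(n):             # candidate y to swap in
                if y in F:
                    continue
                cand = F[:i] + [y] + F[i + 1:]
                c = kmedian_cost(cand, dist)
                if c <= (1 - alpha / k) * cur and (best is None or c < best[0]):
                    best = (c, cand)
        if best is None:                   # no improving swap remains
            return F
        F = best[1]
```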
3 INITIALIZATION VIA HIERARCHICALLY WELL-SEPARATED TREE (HST)

In this section, we propose our novel initialization scheme for k-median clustering and provide our analysis in the non-private case, solving (1). The idea is based on metric embedding theory. We start with an introduction to the main tool used in our approach.

3.1 HIERARCHICALLY WELL-SEPARATED TREE (HST)

In this paper, for an L-level tree, we count levels in descending order down the tree. We use $h_v$ to denote the level of a node $v$, and $n_i$ the number of nodes at level $i$. The Hierarchically Well-Separated Tree (HST) is based on padded decompositions of a general metric space in a hierarchical manner (Fakcharoenphol et al., 2004). Let $(U, \rho)$ be a metric space with $|U| = n$; we refer to this metric space throughout without further clarification. A $\beta$-padded decomposition of $U$ is a probabilistic distribution over partitions of $U$ such that the diameter of each cluster $U_i \in U$ is at most $\beta$, i.e., $\rho(u, v) \le \beta$, $\forall u, v \in U_i$, $i = 1, \ldots, k$. The formal definition of an HST is given below.

Definition 3.1. Assume $\min_{u,v\in U}\rho(u, v) = 1$ and denote $\triangle = \max_{u,v\in U}\rho(u, v)$. An $\alpha$-Hierarchically Well-Separated Tree ($\alpha$-HST) with depth $L$ is an edge-weighted rooted tree $T$, such that an edge between any pair of nodes at level $i-1$ and level $i$ has length at most $\triangle/\alpha^{L-i}$.

In this paper, we consider $\alpha = 2$-HSTs for simplicity, as $\alpha$ only affects the constants in our theoretical analysis. Figure 1 is an example of an $L = 3$-level 2-HST (right panel), along with its underlying padded decompositions (left panel). A 2-HST can be built as follows: we first find a padded decomposition $P_L = \{P_{L,1}, \ldots, P_{L,n_L}\}$ of $U$ with parameter $\beta = \triangle/2$. The center of each partition $P_{L,j}$ serves as a root node at level $L$. Then, we re-do a padded decomposition for each partition $P_{L,j}$ to find sub-partitions with diameter $\beta = \triangle/4$, and set the corresponding centers as the nodes at level $L-1$, and so on. Each partition at level $i$ is obtained with $\beta = \triangle/2^{L-i}$. This process proceeds until a node has a single point (leaf), or a pre-specified tree depth is reached. More details can be found in Algorithm 7 in Appendix A. Blelloch et al. (2017) proposed an efficient HST construction taking $O(m\log n)$ time, where $n$ and $m$ are the numbers of nodes and edges in a graph, respectively. The first step of our method is to embed the data points into an HST (see Algorithm 2). Next, we describe our new strategy to search for the initial centers on the tree (w.r.t. the tree metric). Before moving on, it is worth mentioning that there are polynomial-time algorithms for computing an exact k-median solution in the tree metric (Tamir (1996); Shah (2003)). However, these dynamic programming algorithms have high complexity (e.g., $O(kn^2)$), making them unsuitable for the purpose of fast initialization. Moreover, it is unknown how to apply them effectively to the private case.

Algorithm 2: NDP-HST initialization
Input: $U$, $\triangle$, $k$
Initialization: $L = \log\triangle$, $C_0 = \emptyset$, $C_1 = \emptyset$
Call Algorithm 7 to build a level-$L$ 2-HST $T$ using $U$
for each node $v$ in $T$ do
  $N_v \leftarrow |U \cap T(v)|$; $score(v) \leftarrow N_v \cdot 2^{h_v}$
while $|C_1| < k$ do
  Add the top $(k - |C_1|)$ nodes with the highest score to $C_1$
  for each $v \in C_1$ do
    $C_1 = C_1 \setminus \{v\}$ if $\exists\, v' \in C_1$ such that $v'$ is a descendant of $v$
$C_0$ = FIND-LEAF($T$, $C_1$)
Output: Initial center set $C_0 \subseteq U$

Algorithm 3: FIND-LEAF($T$, $C_1$)
Input: $T$, $C_1$
Initialization: $C_0 = \emptyset$
for each node $v$ in $C_1$ do
  while $v$ is not a leaf node do
    $v \leftarrow \arg\max_w \{N_w, w \in ch(v)\}$, where $ch(v)$ denotes the children nodes of $v$
  Add $v$ to $C_0$
Output: Initial center set $C_0 \subseteq U$

As will be shown, our new algorithm 1) is very efficient, 2) gives $O(1)$ approximation error in the tree metric, and 3) can be effectively extended to DP easily.

3.2 HST INITIALIZATION ALGORITHM

Let $L = \log\triangle$ and suppose $T$ is a level-$L$ 2-HST in $(U, \rho)$, where we assume $L$ is an integer. For a node $v$ at level $i$, we use $T(v)$ to denote the subtree rooted at $v$. Let $N_v = |T(v)|$ be the number of data points in $T(v)$. The search strategy for the initial centers, NDP-HST initialization ("NDP" stands for "Non-Differentially Private"), is presented in Algorithm 2 and has two phases.

Subtree search. The first step is to identify the subtrees that contain the $k$ centers. To begin with, the $k$ nodes in $T$ with the largest $score(v) = N_v \cdot 2^{h_v}$ are picked as the initial centers $C_1$.
This is intuitive, since to get a good clustering we typically want the ball surrounding each center to include more data points. Next, we do a screening over $C_1$: if there is any ancestor-descendant pair of nodes, we remove the ancestor from $C_1$. If the current size of $C_1$ is smaller than $k$, we repeat the process until $k$ centers are chosen (we do not re-select nodes in $C_1$ or their ancestors). This way, $C_1$ contains the root nodes of $k$ disjoint subtrees.

Leaf search. After we find $C_1$, the set of $k$ subtree roots, the next step is to find the center within each subtree using Algorithm 3 ("FIND-LEAF"). We employ a greedy search strategy, finding the child node with the largest score level by level, until a leaf is found. This approach is intuitive, since the diameter of the partition ball decays exponentially with the level. Therefore, we are in a sense focusing more and more on the region with higher density (i.e., with more data points). The complexity of our search algorithm is given as follows.

Proposition 3.2 (Complexity). Algorithm 2 takes $O(dn\log n)$ time in the Euclidean space.

Remark 3.3. The complexity of HST initialization is in general comparable to the $O(dnk)$ of k-median++. Our algorithm would be faster if $k > \log n$, i.e., when the number of centers is large.

3.3 APPROXIMATION ERROR OF HST INITIALIZATION

Firstly, we show that the initial center set produced by NDP-HST is already a good approximation to the optimal k-median solution. Let $\rho_T(x, y) = d_T(x, y)$ denote the "2-HST metric" between $x$ and $y$ in the 2-HST $T$, where $d_T(x, y)$ is the tree distance between nodes $x$ and $y$ in $T$. By Definition 3.1 and since $\triangle = 2^L$, in the analysis we equivalently assume that the edge weight at the $i$-th level is $2^{i-1}$. The crucial step of our analysis is to examine the approximation error in terms of the 2-HST metric, after which the error can be adapted to general metrics by the following lemma (Bartal, 1996).

Lemma 3.4. In a metric space $(U, \rho)$ with $|U| = n$ and diameter $\triangle$, it holds that $E[\rho_T(x, y)] = O(\min\{\log n, \log\triangle\})\,\rho(x, y)$. In the Euclidean space $\mathbb{R}^d$, $E[\rho_T(x, y)] = O(d)\,\rho(x, y)$.

Recall $C_0$, $C_1$ from Algorithm 2. We define

$cost_k^T(U) = \sum_{y\in U}\min_{x\in C_0}\rho_T(x, y),$ (3)

$cost_k^{T\prime}(U, C_1) = \min_{|F\cap T(v)| = 1,\,\forall v\in C_1}\ \sum_{y\in U}\min_{x\in F}\rho_T(x, y),$ (4)

$OPT_k^T(U) = \min_{F\subset U,\,|F| = k}\ \sum_{y\in U}\min_{x\in F}\rho_T(x, y) \equiv \min_{C_1'}\ cost_k^{T\prime}(U, C_1').$ (5)

For simplicity, we use $cost_k^{T\prime}(U)$ to denote $cost_k^{T\prime}(U, C_1)$. Here, $OPT_k^T$ in (5) is the cost of the globally optimal solution in the 2-HST metric. The last equivalence in (5) holds because the optimal center set can always be located in $k$ disjoint subtrees, as each leaf contains only one point. (3) is the k-median cost, in the 2-HST metric, of the output $C_0$ of Algorithm 2. (4) is the oracle cost after the subtrees are chosen; that is, it represents the optimal cost of picking one center from each subtree in $C_1$. We first bound the approximation errors of the subtree search and the leaf search, respectively.

Lemma 3.5 (Subtree search). $cost_k^{T\prime}(U) \le 5\,OPT_k^T(U)$.

Lemma 3.6 (Leaf search). $cost_k^T(U) \le 2\,cost_k^{T\prime}(U)$.

Combining Lemma 3.5 and Lemma 3.6, we obtain

Theorem 3.7 (2-HST error). Running Algorithm 2, we have $cost_k^T(U) \le 10\,OPT_k^T(U)$.

Thus, HST initialization produces an $O(1)$ approximation to OPT in the 2-HST metric. Define $cost_k(U)$ as in (1) for our HST centers, and the optimal cost w.r.t. $\rho$ as

$OPT_k(U) = \min_{|F| = k}\sum_{y\in U}\min_{x\in F}\rho(x, y).$ (6)

We have the following result, based on Lemma 3.4.

Theorem 3.8. In general metric space, the expected k-median cost of Algorithm 2 is $E[cost_k(U)] = O(\min\{\log n, \log\triangle\})\,OPT_k(U)$.
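A minimal sketch of the two-phase search of Section 3.2 on an already-built tree. The Node container is ours, and the frontier-expansion loop below maintains the same invariant as Algorithm 2 (k disjoint subtree roots, preferring high scores) rather than reproducing its add-then-prune steps verbatim.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    level: int                    # h_v, counted down from the root
    count: int                    # N_v: number of points in T(v)
    children: list = field(default_factory=list)
    point: object = None          # the data point, set at leaves

def score(v):
    return v.count * 2 ** v.level

def subtree_search(root, k):
    # keep a frontier of disjoint subtree roots; expanding the best node
    # replaces it by its children, so the frontier stays disjoint
    frontier = [root]
    while len(frontier) < k:
        v = max(frontier, key=score)
        if not v.children:        # fewer than k leaves available
            break
        frontier.remove(v)
        frontier.extend(v.children)
    return sorted(frontier, key=score, reverse=True)[:k]

def find_leaf(v):
    # Algorithm 3: greedy descent towards the child with the most points
    while v.children:
        v = max(v.children, key=lambda w: w.count)
    return v.point

def hst_init(root, k):
    return [find_leaf(v) for v in subtree_search(root, k)]
```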
Remark 3.9. In the Euclidean space, Makarychev et al. (2019) proved that $O(\log k)$ random projections suffice for k-median to achieve $O(1)$ error. Thus, if $\triangle = O(d)$ (e.g., bounded data), by Lemma 3.4, HST initialization is able to achieve $O(\log(\min\{d, k\}))$ error, which is better than the $O(\log k)$ of k-median++ when $d$ is small.

NDP-HST Local Search. We are interested in the approximation quality of standard local search (Algorithm 1) when initialized by our NDP-HST.

Theorem 3.10. NDP-HST local search achieves $O(1)$ approximation error in an expected $O(k\log\log\min\{n, \triangle\})$ number of iterations for input in general metric space.

Before ending this section, we remark that the initial centers found by NDP-HST can be used analogously for k-means clustering. For general metrics, $E[cost_{km}(U)] = O(\min\{\log n, \log\triangle\})^2\,OPT_{km}(U)$, where $OPT_{km}(U)$ is the optimal k-means cost. See Appendix D for the detailed (and similar) analysis.

4 HST INITIALIZATION WITH DIFFERENTIAL PRIVACY

In this section, we consider initialization with differential privacy (DP). Recall from (2) that $U$ is the universe of data points, and $D \subset U$ is a demand set that needs to be clustered with privacy.

Algorithm 4: DP-HST initialization
Input: $U$, $D$, $\triangle$, $k$, $\epsilon$
Build a level-$L$ 2-HST $T$ based on the input $U$
for each node $v$ in $T$ do
  $N_v \leftarrow |D \cap T(v)|$
  $\hat N_v \leftarrow N_v + \mathrm{Lap}(2^{L - h_v}/\epsilon)$
  $score(v) \leftarrow \hat N_v \cdot 2^{h_v}$
Based on $\hat N_v$, apply the same strategy as Algorithm 2: find $C_1$; $C_0$ = FIND-LEAF($C_1$)
Output: Private initial center set $C_0 \subseteq U$

Algorithm 5: DP-HST local search
Input: $U$, demand points $D \subseteq U$, parameters $k$, $\epsilon$, $T$
Initialization: $F_1$, the private initial centers generated by Algorithm 4 with privacy budget $\epsilon/2$
Set parameter $\epsilon' = \frac{\epsilon}{4\triangle(T+1)}$
for $i = 1$ to $T$ do
  Select $(x, y) \in F_i \times (V \setminus F_i)$ with probability proportional to $\exp(-\epsilon'\cdot cost(F_i - \{x\} + \{y\}))$
  Let $F_{i+1} \leftarrow F_i - \{x\} + \{y\}$
Select $j$ from $\{1, 2, \ldots, T+1\}$ with probability proportional to $\exp(-\epsilon'\cdot cost(F_j))$
Output: $F = F_j$, the private center set

Since $U$ is public, simply running initialization algorithms on $U$ would preserve the privacy of $D$. However, 1) this might be too expensive; and 2) in many cases one would want to incorporate some information about $D$ in the initialization, since $D$ could be a very imbalanced subset of $U$. For example, $D$ may only contain data points from one cluster, out of tens of clusters in $U$. In this case, initialization on $U$ is likely to pick initial centers in multiple clusters, which would not be helpful for clustering on $D$. Next, we show how our proposed HST initialization can be easily combined with differential privacy in a way that at the same time incorporates information about the demand set $D$, leading to improved approximation error (Theorem 4.3). Again, suppose $T$ is an $L = \log\triangle$-level 2-HST of the universe $U$ in a general metric space. Denote $N_v = |T(v) \cap D|$ for a node $v$. Our private HST initialization (DP-HST) is similar to the non-private Algorithm 2. To gain privacy, we perturb $N_v$ by adding i.i.d. Laplace noise: $\hat N_v = N_v + \mathrm{Lap}(2^{L - h_v}/\epsilon)$, where $\mathrm{Lap}(2^{L - h_v}/\epsilon)$ is a Laplace random number with scale $2^{L - h_v}/\epsilon$. We use the perturbed $\hat N_v$ for node sampling instead of the true value $N_v$, as described in Algorithm 4. The DP guarantee of this initialization scheme is straightforward by the composition theorem (Dwork, 2006).

Theorem 4.1. Algorithm 4 is $\epsilon$-differentially private.

Proof. For each level $i$, the subtrees $T(v)$ at level $i$ are disjoint from each other. The privacy budget used at the $i$-th level is $\epsilon/2^{L-i}$, and the total privacy is $\sum_i \epsilon/2^{L-i} < \epsilon$.
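A minimal sketch of the private pieces of Algorithms 4 and 5: Laplace-perturbed scores for the tree nodes (reusing the Node container from the earlier search sketch) and one exponential-mechanism swap step on a distance matrix (reusing kmedian_cost from the local search sketch). The name eps_prime stands for $\epsilon'$; everything else is our notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_score(node, L, eps):
    # Algorithm 4: hat(N_v) = N_v + Lap(2^(L - h_v)/eps), scored as before
    n_hat = node.count + rng.laplace(scale=2.0 ** (L - node.level) / eps)
    return n_hat * 2 ** node.level

def dp_swap_step(F, dist, eps_prime):
    # one iteration of Algorithm 5: sample a swap (x, y) with probability
    # proportional to exp(-eps' * cost(F - {x} + {y}))
    n, k = dist.shape[0], len(F)
    cands, costs = [], []
    for i in range(k):
        for y in range(n):
            if y in F:
                continue
            cand = F[:i] + [y] + F[i + 1:]
            cands.append(cand)
            costs.append(kmedian_cost(cand, dist))
    s = -eps_prime * np.asarray(costs)
    s -= s.max()                           # numerical stability
    p = np.exp(s)
    p /= p.sum()
    return cands[rng.choice(len(cands), p=p)]
```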
We now consider the approximation error. As the structure of the analysis is similar to the non-DP case, we present the main result here and defer the detailed proofs to Appendix C.

Theorem 4.2. Algorithm 4 finds initial centers such that $E[cost_k(D)] = O(\log n)(OPT_k(D) + k\epsilon^{-1}\triangle\log n)$.

DP-HST Local Search. Similarly, we can use private HST initialization to improve the performance of private k-median local search, which is presented in Algorithm 5. After initialization, the DP local search procedure follows Gupta et al. (2010), using the exponential mechanism.

Theorem 4.3. Algorithm 5 achieves $\epsilon$-differential privacy. With probability $(1 - \frac{1}{poly(n)})$, the output centers admit $cost_k(D) \le 6\,OPT_k(D) + O(\epsilon^{-1}k^2\triangle(\log\log n)\log n)$ in $T = O(k\log\log n)$ iterations.

DP local search with random initialization (Gupta et al., 2010) has a multiplicative error of 6 and an additive error of $O(\epsilon^{-1}\triangle k^2\log^2 n)$. Our result improves a $\log n$ term to $\log\log n$ in the additive error. Meanwhile, the number of iterations needed is improved from $T = O(k\log n)$ to $O(k\log\log n)$ (see Appendix B for an empirical justification). Notably, it has been shown in Gupta et al. (2010) that for the k-median problem, the lower bounds on the multiplicative and additive errors of any $\epsilon$-DP algorithm are $O(1)$ and $O(\epsilon^{-1}\triangle k\log(n/k))$, respectively. Our result matches the lower bound on the multiplicative error, and the additive error is only worse than the bound by a factor of $O(k\log\log n)$, which would be small in many cases. To our knowledge, Theorem 4.3 is the first result in the literature to improve the error of DP local search in general metric space.

5 EXPERIMENTS

5.1 DATASETS AND ALGORITHMS

Discrete Euclidean space. Following previous work, we test k-median clustering on the MNIST hand-written digit dataset (LeCun et al., 1998) with 10 natural clusters (digits 0 to 9). We set $U$ as 10000 randomly chosen data points. We choose the demand set $D$ using two strategies: 1) "balanced", where we randomly choose 500 samples from $U$; 2) "imbalanced", where $D$ contains 500 random samples from $U$ drawn only from digits "0" and "8" (two clusters). We note that the imbalanced $D$ is a very practical setting in real-world scenarios, where data are typically not uniformly distributed. On this dataset, we test clustering with both $l_1$ and $l_2$ distance as the underlying metric.

Metric space induced by graphs. Random graphs have been widely used for testing k-median methods (Balcan et al., 2013; Todo et al., 2019). Our construction of graphs follows an approach similar to the synthetic pmedinfo graphs provided by the popular OR-Library (Beasley, 1990). The metric $\rho$ for this experiment is the shortest (weighted) path distance. To generate a graph of size $n$, we first randomly split the nodes into 10 clusters. Within each cluster, each pair of nodes is connected with probability 0.2 and weight drawn from the standard uniform distribution. For each pair of clusters, we randomly connect some nodes from each cluster, with weights following uniform $[0.5, r]$. A larger $r$ makes the graph more separable, i.e., clusters are farther from each other (see Appendix B for example graphs). We present two cases: $r = 1$ and $r = 100$. For this task, $U$ has 3000 nodes, and the private set $D$ (500 nodes) is chosen using the "balanced" and "imbalanced" schemes described above. In the imbalanced case, we choose $D$ randomly from only two clusters.
Algorithms. We compare the following clustering algorithms in both the non-DP and DP settings: (1) NDP-rand: local search with random initialization; (2) NDP-kmedian++: local search with k-median++ initialization (Algorithm 6); (3) NDP-HST: local search with NDP-HST initialization (Algorithm 2), as described in Section 3; (4) DP-rand: the standard DP local search algorithm (Gupta et al., 2010), which is Algorithm 5 with initial centers randomly chosen from $U$; (5) DP-kmedian++: DP local search with k-median++ initialization run on $U$; (6) DP-HST: DP local search with HST initialization (Algorithm 5). For non-DP tasks, we set $L = 6$. For DP clustering, we use $L = 8$. For non-DP methods, we set $\alpha = 10^{-3}$ in Algorithm 1 and the maximum number of iterations to 20. To examine the quality of the initialization as well as the final centers, we report both the cost at initialization and the cost of the final output. For DP methods, we run the algorithms for $T = 20$ steps and report the results with $\epsilon = 1$. We test $k \in \{2, 5, 10, 15, 20\}$. The average cost over $T$ iterations is reported for robustness. All results are averaged over 10 independent repetitions.

5.2 RESULTS

The results on the MNIST dataset are given in Figure 2. The comparisons are similar for both $l_1$ and $l_2$:

• From the left column, the initial centers found by HST have lower cost than those of k-median++ and random initialization, in both the non-DP and DP settings, and for both balanced and imbalanced demand sets $D$. This confirms that the proposed HST initialization is more powerful than k-median++ at finding good initial centers.

• From the right column, we also observe a lower final cost for HST followed by local search in DP clustering. In the non-DP case, the final cost curves overlap, which means that although HST offers better initial centers, local search can always find a good solution eventually.

• The advantage of DP-HST, in terms of both the initial and the final cost, is more significant when $D$ is an imbalanced subset of $U$. As mentioned before, this is because our DP-HST initialization also privately incorporates the information of $D$.

The results on graphs are reported in Figure 3, which gives similar conclusions. In all cases, our proposed HST scheme finds better initial centers with smaller cost than k-median++. Moreover, HST again considerably outperforms k-median++ in the private and imbalanced-$D$ setting, for both $r = 100$ (highly separable) and $r = 1$ (less separable). The advantages of HST over k-median++ are especially significant in the harder tasks with $r = 1$, i.e., when the clusters are nearly mixed up.

6 CONCLUSION

In this paper, we propose a new initialization framework for the k-median problem in general metric space. Our approach, called HST initialization, leverages tools from metric embedding theory. Our novel tree search approach has efficiency and approximation error comparable to the popular k-median++ initialization. Moreover, we propose a differentially private (DP) HST initialization algorithm, which adapts to the private demand point set, leading to better clustering performance. When combined with the subsequent DP local search heuristic, our algorithm improves the additive error of DP local search, approaching the theoretical lower bound within a small factor. Experiments with Euclidean metrics and graph metrics verify the effectiveness of our methods, which improve the cost of both the initial centers and the final k-median output.
A POSTPONED ALGORITHMS

A.1 k-MEDIAN++

In the paper, we compared our HST initialization mainly with another (perhaps the most well-known) initialization algorithm for clustering, k-median++ (Arthur and Vassilvitskii, 2007). For reference, we present the concrete procedure in Algorithm 6. Here, $\rho(u, F)$ is the shortest distance from a data point $u$ to the closest (center) point in the set $F$. Arthur and Vassilvitskii (2007) showed that the output centers $C$ of k-median++ achieve $O(\log k)$ approximation error, in $O(dnk)$ time.

Algorithm 6: k-median++ initialization (Arthur and Vassilvitskii, 2007)
Input: Data points $U$, number of centers $k$
Randomly pick a point $c_1 \in U$ and set $F = \{c_1\}$
for $i = 2$ to $k$ do
  Select $c_i = u \in U$ with probability $\frac{\rho(u, F)}{\sum_{u'\in U}\rho(u', F)}$
  $F = F \cup \{c_i\}$
Output: k-median++ initial center set $F$

A.2 HST CONSTRUCTION

As presented in Algorithm 7, the construction starts by applying a permutation $\pi$ on $U$, such that in the following steps the points are picked in a random sequence. We first find a padded decomposition $P_L = \{P_{L,1}, \ldots, P_{L,n_L}\}$ of $U$ with parameter $\beta = \triangle/2$. The center of each partition $P_{L,j}$ serves as a root node at level $L$. Then, we re-do a padded decomposition for each partition $P_{L,j}$ to find sub-partitions with diameter $\beta = \triangle/4$, and set the corresponding centers as the nodes at level $L-1$, and so on. Each partition at level $i$ is obtained with $\beta = \triangle/2^{L-i}$. This process proceeds until a node has a single point, or a pre-specified tree depth is reached. In Figure 1, we provide an example of an $L = 3$-level 2-HST (left panel), along with its underlying padded decompositions (right panel).

Algorithm 7: Build 2-HST($U$, $L$)
Input: Data points $U$ with diameter $\triangle$, depth $L$
Randomly pick a point in $U$ as the root node of $T$
Let $r = \triangle/2$
Apply a permutation $\pi$ on $U$ // so points will be chosen in a random sequence
for each $v \in U$ do
  Set $C_v = [v]$
  for each $u \in U$ do
    Add $u$ to $C_v$ if $d(v, u) \le r$ and $u \notin \bigcup_{v'\neq v} C_{v'}$
Set the non-empty clusters $C_v$ as the children nodes of $T$
for each non-empty cluster $C_v$ do
  Run 2-HST($C_v$, $L-1$) to extend the tree $T$; stop after $L$ levels or upon reaching a leaf node
Output: 2-HST $T$

B MORE EXPERIMENTS

B.1 EXAMPLES OF GRAPH DATA

In Figure 4, we plot two example graphs (subgraphs of 50 nodes) with $r = 100$ and $r = 1$. When $r = 100$, the graph is highly separable (i.e., clusters are far from each other). When $r = 1$, the clusters are harder to distinguish from each other.

B.2 RUNNING TIME COMPARISON WITH k-MEDIAN++

In Proposition 3.2, we show that our HST initialization algorithm admits $O(dn\log n)$ complexity in the Euclidean space. With a smart implementation of Algorithm 6, where each data point tracks its distance to the current closest candidate center in $F$, k-median++ has $O(dnk)$ running time. Therefore, the running time of our algorithm is in general comparable to that of k-median++. Our method would run faster if $k = \Omega(\log n)$. In Figure 5, we plot the empirical running time of HST initialization against k-median++ on the MNIST dataset with $l_2$ distance (a similar comparison holds for $l_1$). From the left subfigure, we see that k-median++ becomes slower with increasing $k$, and our method is more efficient when $k > 20$. In the right panel, we observe that the running time of both methods increases with larger sample size $n$. Our HST algorithm has a slightly faster rate of increase, which is predicted by the complexity comparison ($n\log n$ vs. $n$). However, this $\log n$ factor difference would not be significant unless the sample size is extremely large. Overall, our numerical results suggest that, in general, the proposed HST initialization has efficiency similar to k-median++ in common practical scenarios.
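For completeness, a minimal sketch of the ball-carving recursion of Algorithm 7, reusing the Node container from the earlier search sketch; dist and the list-based point handling are our simplifications of the pseudocode above.

```python
import random

def build_2hst(points, dist, diam, level, rng=random.Random(0)):
    # carve balls of radius diam/2 around points in random order, then
    # recurse on each ball with the diameter halved (Algorithm 7)
    node = Node(level=level, count=len(points))
    if len(points) == 1 or level == 0:
        node.point = points[0]
        return node
    pts = list(points)
    rng.shuffle(pts)                       # the random permutation pi
    while pts:
        c = pts[0]                         # next unassigned center
        ball = [p for p in pts if dist(c, p) <= diam / 2]
        pts = [p for p in pts if p not in ball]
        node.children.append(build_2hst(ball, dist, diam / 2, level - 1, rng))
    return node
```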
B.3 IMPROVED ITERATION COST OF DP-HST

In Theorem 4.3, we show that under differential privacy constraints, the proposed DP-HST (Algorithm 5) improves both the approximation error and the number of iterations required to find a good solution over classical DP local search (Gupta et al., 2010). In this section, we provide numerical results to justify the theory. First, we need to properly measure the iteration cost of DP local search. This is because, unlike in non-private clustering, the k-median cost after each iteration of DP local search does not decrease monotonically, due to the probabilistic exponential mechanism. To this end, for the cost sequence of length $T = 20$, we compute its moving-average sequence with window size 5. Attaining the minimal value of the moving average indicates that the algorithm has found a "local optimum", i.e., it has reached a "neighborhood" of solutions with small clustering cost. Thus, we use the number of iterations needed to reach such a local optimum as the measure of iteration cost. The results are provided in Figure 6. We see that on all tasks (MNIST with $l_1$ and $l_2$ distance, and graph datasets with $r = 1$ and $r = 100$), DP-HST has significantly smaller iteration cost. In Figure 7, we further report the k-median cost of the best solution found by each DP algorithm within $T$ iterations. We see that DP-HST again provides the smallest cost. This additional set of experiments again validates the claims of Theorem 4.3, that DP-HST is able to find better initial centers in fewer iterations.

C PROOFS

The following composition result for differential privacy will be used in our proofs.

Theorem C.1 (Composition Theorem (Dwork, 2006)). If algorithms $\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_m$ are $\epsilon_1, \epsilon_2, \ldots, \epsilon_m$-differentially private, respectively, then the union $(\mathcal{A}_1(D), \mathcal{A}_2(D), \ldots, \mathcal{A}_m(D))$ is $\sum_{i=1}^m \epsilon_i$-DP.

C.1 PROOF OF LEMMA 3.5

Proof. Consider the intermediate output of Algorithm 2, $C_1 = \{v_1, v_2, \ldots, v_k\}$, which is the set of roots of the minimal subtrees each containing exactly one output center in $C_0$. Suppose one of the optimal "root sets" that minimizes (4) is $C_1^* = \{v_1', v_2', \ldots, v_k'\}$. If $C_1 = C_1^*$, the proof is done; thus, we prove the case $C_1 \neq C_1^*$. Note that $T(v)$, $v \in C_1$, are disjoint subtrees. We have the following reasoning.

• Case 1: for some $i, j$, $v_i$ is a descendant node of $v_j'$. Since the optimal center point $f^*$ is a leaf node by the definition of (4), we know that there must exist one child node of $v_j'$ that expands a subtree containing $f^*$. Therefore, we can always replace $v_j'$ by one of its child nodes; hence, we may assume that $v_i$ is not a descendant of $v_j'$. Note that we have $score(v_j') \le score(v_i)$ if $v_j' \notin C_1^* \cap C_1$: Algorithm 2 sorts all the nodes based on the score, and it would have picked $v_j'$ with higher priority than $v_i$ if $score(v_j') > score(v_i)$ and $v_i$ is not a child node of $v_j'$.

• Case 2: for some $i, j$, $v_j'$ is a descendant of $v_i$. In this case, the optimal center point $f^*$, which is a leaf of $T(v_i)$, must also be a leaf node of $T(v_j')$. We can simply replace $C_1$ with the swap $C_1 \setminus \{v_i\} + \{v_j'\}$, which does not change $cost_k^{T\prime}(U)$. Hence, we may assume that $v_j'$ is not a descendant of $v_i$.

• Case 3: otherwise. By the construction of $C_1$, we know that $score(v_j') \le \min\{score(v_i), i = 1, \ldots, k\}$ when $v_j' \in C_1^* \setminus C_1$. Consider the swap between $C_1$ and $C_1^*$.
By the definition of the tree distance, we have

$OPT_k^T(U) \ge \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}\,2^{h_{v_i}},$

since $\{T(v_i), v_i \in C_1 \setminus C_1^*\}$ does not contain any center of the optimal solution determined by $C_1^*$ (which is also the optimal "root set" for $OPT_k^T(U)$). Thus, we only need to consider Case 3. Let us consider the optimal clustering with center set $C^* = \{c_1^*, c_2^*, \ldots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree whose root is $c_j'$), and let $S_j'$ be the leaves assigned to $c_j^*$. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than their distance to any center in $C_1$. Let $P_j$ denote the union of the paths between the leaves of $S_j$ and their closest centers in $C_1$, and let $v_j''$ be the node in $P_j$ with the highest level satisfying $T(v_j'') \cap C_1 = \emptyset$. The score of $v_j''$ is $2^{h_{v_j''}} N_{v_j''}$. That means the swap of a center $v_j'$ into $C_1$ can reduce $cost_k^{T\prime}(U)$ by at most $4\cdot 2^{h_{v_j''}} N_{v_j''}$ (the tree distance between any leaf in $S_j$ and its closest center in $C_1$ is at most $4\cdot 2^{h_{v_j''}}$). For simplicity, we write $v_j'$ for $v_j''$ in the remainder of this proof. By our reasoning, summing all the swaps over $C_1^* \setminus C_1$ gives

$cost_k^{T\prime}(U) - OPT_k^T(U) \le 4\sum_{v_j'\in C_1^*\setminus C_1} N_{v_j'}\,2^{h_{v_j'}}, \qquad OPT_k^T(U) \ge \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}\,2^{h_{v_i}}.$

Also, based on our discussion of Case 1, it holds that $N_{v_j'}\,2^{h_{v_j'}} - N_{v_i}\,2^{h_{v_i}} \le 0$. Summing these together, we have $cost_k^{T\prime}(U) \le 5\,OPT_k^T(U)$.

C.2 PROOF OF LEMMA 3.6

Proof. Since the subtrees in $C_1$ are disjoint, it suffices to consider one subtree with root $v$. With a slight abuse of notation, let $cost_1^{T\prime}(v, U)$ denote the optimal 1-median cost within the point set $T(v)$ with one center in the 2-HST:

$cost_1^{T\prime}(v, U) = \min_{x\in T(v)}\sum_{y\in T(v)}\rho_T(x, y),$ (7)

which is the optimal cost within the subtree. Suppose $v$ has more than one child, $u, w, \ldots$; otherwise the optimal center is clear. Suppose the optimal solution of $cost_1^{T\prime}(v, U)$ chooses a leaf node in $T(u)$, and our HST initialization algorithm picks a leaf of $T(w)$. If $u = w$, then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$. We have the following two observations:

• Since one needs to pick a leaf of $T(u)$ to minimize $cost_1^{T\prime}(v, U)$, we have $cost_1^{T\prime}(v, U) \ge \sum_{x\in ch(v), x\neq u} N_x \cdot 2^{h_x}$, where $ch(v)$ denotes the children nodes of $v$.

• By our greedy strategy, $cost_1^T(v, U) \le \sum_{x\in ch(v)} N_x \cdot 2^{h_x} \le cost_1^{T\prime}(v, U) + N_u \cdot 2^{h_u}$.

Since $h_u = h_w$, we have $2^{h_u}(N_u - N_w) \le 0$, as our algorithm picks subtree roots with the highest scores. Then we have $cost_1^T(v, U) \le cost_1^{T\prime}(v, U) + N_w \cdot 2^{h_w} \le 2\,cost_1^{T\prime}(v, U)$. Since the subtrees in $C_1$ are disjoint, the union of the centers for $OPT_1^T(v, U)$, $v \in C_1$, forms the optimal center set of size $k$. Note that, for any data point $p \in U \setminus C_1$, the tree distance $\rho_T(p, f)$ is the same for every $f$ that is a leaf node of $T(v)$, $v \in C_1$. That is, the choice of leaf in $T(v)$ as the center does not affect the k-median cost under the 2-HST metric. Therefore, a union over the $k$ subtree costs completes the proof.

C.3 PROOF OF PROPOSITION 3.2

Proof. It is known that the 2-HST can be constructed in $O(dn\log n)$ time (Bartal, 1996). The subtree search in Algorithm 2 involves at most sorting all the nodes in the HST based on the score, which takes $O(n\log n)$. We use a priority queue to store the nodes in $C_1$. When we insert a new node $v$ into the queue, its parent node (if present in the queue) is removed from the queue. The number of nodes is $O(n)$ and each operation (insertion, deletion) in a priority queue keyed by score has $O(\log n)$ complexity.
Lastly, the total time to obtain $C_0$ is $O(n)$, as FIND-LEAF only requires a top-down scan of the $k$ disjoint subtrees of $T$. Summing the parts together proves the claim.

C.4 PROOF OF THEOREM 4.2

Similarly, we prove the error bound in general metrics by first analyzing the error in the 2-HST metric; the result then follows from Lemma 3.4. Let $cost_k^T(D)$, $cost_k^{T\prime}(D)$ and $OPT_k^T(D)$ be defined analogously to (3), (4) and (5), with "$y \in U$" in the summations changed to "$y \in D$", since $D$ is the demand set. That is,

$cost_k^T(D) = \sum_{y\in D}\min_{x\in C_0}\rho_T(x, y),$ (8)

$cost_k^{T\prime}(D, C_1) = \min_{|F\cap T(v)| = 1,\,\forall v\in C_1}\sum_{y\in D}\min_{x\in F}\rho_T(x, y),$ (9)

$OPT_k^T(D) = \min_{F\subset D,\,|F| = k}\sum_{y\in D}\min_{x\in F}\rho_T(x, y) \equiv \min_{C_1'}\ cost_k^{T\prime}(D, C_1').$ (10)

We have the following.

Lemma C.2. $cost_k^T(D) \le 10\,OPT_k^T(D) + 10ck\epsilon^{-1}\triangle\log n$ with probability $1 - 4k/n^c$.

Proof. The result follows by combining the following Lemma C.4 and Lemma C.5, and applying the union bound.

Lemma C.3. For any node $v$ in $T$, with probability $1 - 1/n^c$, $|\hat N_v \cdot 2^{h_v} - N_v \cdot 2^{h_v}| \le c\epsilon^{-1}\triangle\log n$.

Proof. Since $\hat N_v = N_v + \mathrm{Lap}(2^{L - h_v}/\epsilon)$, we have $\Pr[|\hat N_v - N_v| \ge x/\epsilon] = \exp(-x/2^{L - h_v})$. As $L = \log\triangle$, we have $\Pr[|\hat N_v - N_v| \ge x\triangle/(2^{h_v}\epsilon)] \le \exp(-x)$. Hence, for some constant $c > 0$,

$\Pr[|\hat N_v \cdot 2^{h_v} - N_v \cdot 2^{h_v}| \le c\epsilon^{-1}\triangle\log n] \ge 1 - \exp(-c\log n) = 1 - 1/n^c.$

Lemma C.4 (DP Subtree Search). With probability $1 - 2k/n^c$, $cost_k^{T\prime}(D) \le 5\,OPT_k^T(D) + 4ck\epsilon^{-1}\triangle\log n$.

Proof. The proof is similar to that of Lemma 3.5. Consider the intermediate output of Algorithm 2, $C_1 = \{v_1, v_2, \ldots, v_k\}$, which is the set of roots of the minimal disjoint subtrees each containing exactly one output center in $C_0$. Suppose one of the optimal "root sets" that minimizes (4) is $C_1^* = \{v_1', v_2', \ldots, v_k'\}$, and assume $C_1 \neq C_1^*$. By the same argument as in the proof of Lemma 3.5, it suffices to consider $i, j$ such that $v_i \neq v_j'$, where $v_i$ is not a descendant of $v_j'$ and $v_j'$ is not a descendant of $v_i$. By the construction of $C_1$, we know that $score(v_j') \le \min\{score(v_i), i = 1, \ldots, k\}$ when $v_j' \in C_1^* \setminus C_1$. Consider the swap between $C_1$ and $C_1^*$. By the definition of the tree distance, we have

$OPT_k^T(D) \ge \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}\,2^{h_{v_i}},$

since $\{T(v_i), v_i \in C_1 \setminus C_1^*\}$ does not contain any center of the optimal solution determined by $C_1^*$ (which is also the optimal "root set" for $OPT_k^T(D)$). Let us consider the optimal clustering with center set $C^* = \{c_1^*, c_2^*, \ldots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree whose root is $c_j'$), and let $S_j'$ be the leaves assigned to $c_j^*$. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than their distance to any center in $C_1$. Let $P_j$ denote the union of the paths between the leaves of $S_j$ and their closest centers in $C_1$, and let $v_j''$ be the node in $P_j$ with the highest level satisfying $T(v_j'') \cap C_1 = \emptyset$. The score of $v_j''$ is $2^{h_{v_j''}} N_{v_j''}$, which means the swap of a center $v_j'$ into $C_1$ can reduce $cost_k^{T\prime}(D)$ by at most $4\cdot 2^{h_{v_j''}} N_{v_j''}$ (the tree distance between any leaf in $S_j$ and its closest center in $C_1$ is at most $4\cdot 2^{h_{v_j''}}$). For simplicity, we write $v_j'$ for $v_j''$ in the remainder of this proof. Summing all the swaps over $C_1^* \setminus C_1$, we obtain

$cost_k^{T\prime}(D) - OPT_k^T(D) \le 4\sum_{v_j'\in C_1^*\setminus C_1} N_{v_j'}\,2^{h_{v_j'}}, \qquad OPT_k^T(D) \ge \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}\,2^{h_{v_i}}.$

Applying the union bound with Lemma C.3, with probability $1 - 2/n^c$ we have $N_{v_j'}\,2^{h_{v_j'}} - N_{v_i}\,2^{h_{v_i}} \le 2c\epsilon^{-1}\triangle\log n$. Consequently, with probability $1 - 2k/n^c$,

$cost_k^{T\prime}(D) \le 5\,OPT_k^T(D) + 4c|C_1\setminus C_1^*|\epsilon^{-1}\triangle\log n \le 5\,OPT_k^T(D) + 4ck\epsilon^{-1}\triangle\log n.$

Lemma C.5 (DP Leaf Search). With probability $1 - 2k/n^c$, Algorithm 4 produces initial centers with $cost_k^T(D) \le 2\,cost_k^{T\prime}(D) + 2ck\epsilon^{-1}\triangle\log n$.

Proof.
Lemma C.5 (DP Leaf Search). With probability 1 − 2k/n^c, Algorithm 4 produces initial centers with cost_k^T(D) ≤ 2·cost_k^T′(D) + 2ckϵ^{−1}Δ log n.

Proof. The proof strategy follows that of Lemma 3.6. Consider one subtree with root v, and let cost_1^T′(v, D) denote the optimal cost within the point set T(v) with one center under the 2-HST metric:

cost_1^T′(v, D) = min_{x∈T(v)} ∑_{y∈T(v)∩D} ρ_T(x, y).   (11)

Suppose v has children u, w, ..., the optimal solution of cost_1^T′(v, D) chooses a leaf node in T(u), and our HST initialization picks a leaf of T(w). If u = w, then HST chooses the optimal one and the claim holds trivially. Thus, we consider u ≠ w. We have the following two observations:

• Since one needs to pick a leaf of T(u) to minimize cost_1^T′(v, D), we have cost_1^T′(v, D) ≥ ∑_{x∈ch(v), x≠u} N_x·2^{h_x}, where ch(v) denotes the set of children of v.

• By our greedy strategy, cost_1^T(v, D) ≤ ∑_{x∈ch(v)} N_x·2^{h_x} ≤ cost_1^T′(v, D) + N_u·2^{h_u}.

As h_u = h_w, leveraging Lemma C.3, with probability 1 − 2/n^c,

2^{h_u}·(N_u − N_w) ≤ 2^{h_u}·(N̂_u − N̂_w) + 2cϵ^{−1}Δ log n ≤ 2cϵ^{−1}Δ log n,

since our algorithm picks the subtree root with the highest (noisy) score. Then we have cost_1^T(v, D) ≤ cost_1^T′(v, D) + N_w·2^{h_w} + 2cϵ^{−1}Δ log n ≤ 2·cost_1^T′(v, D) + 2cϵ^{−1}Δ log n with high probability. Lastly, applying a union bound over the k disjoint subtrees gives the desired result.

C.5 PROOF OF THEOREM 4.3

Proof. The privacy analysis is straightforward using the composition theorem (Theorem C.1). Since the sensitivity of cost(·) is Δ, each swap iteration uses privacy budget ϵ/(2(T+1)), and we spend another ϵ/(2(T+1)) for picking the output. Hence, the total privacy budget of the local search is ϵ/2. Algorithm 4 takes an ϵ/2 DP budget for the initialization, so the total privacy is ϵ. The analysis of the approximation error follows Gupta et al. (2010), where the initial cost is reduced by our private HST method. We need the following two lemmas.

Lemma C.6 (Gupta et al. (2010)). Assume the solution with the optimal utility is unique. For any output o ∈ O of the 2Δϵ-DP exponential mechanism on dataset D, it holds for all t > 0 that

Pr[q(D, o) ≤ max_{o∈O} q(D, o) − (ln|O| + t)/ϵ] ≤ e^{−t},

where |O| is the size of the output set.

Lemma C.7 (Arya et al. (2004)). For any set F ⊆ D with |F| = k, there exists some swap (x, y) such that the local search method satisfies

cost_k(F, D) − cost_k(F − {x} + {y}, D) ≥ (cost_k(F, D) − 5·OPT(D))/k.

From Lemma C.7, we know that when cost_k(F_i, D) > 6·OPT(D), there exists a swap (x, y) s.t. cost_k(F_i − {x} + {y}, D) ≤ (1 − 1/(6k))·cost_k(F_i, D). At each iteration, there are at most n² possible outputs (i.e., possible swaps), so |O| = n². Using Lemma C.6 with t = 2 log n, for each i,

Pr[cost_k(F_{i+1}, D) ≥ cost_k(F_{i+1}^*, D) + 4 log n/ϵ′] ≤ 1/n²,

where F_{i+1}^* is the cost-minimizing swap from F_i. Hence, as long as cost(F_i, D) > 6·OPT(D) + 24k log n/ϵ′, the cost improves by a factor of at least (1 − 1/(6k)). By Theorem 4.2, we have cost_k(F_1, D) ≤ C(log n)(6·OPT(D) + 6kΔ log n/ϵ) for some constant C > 0. Let T = 6Ck log log n. Then

E[cost(F_T, D)] ≤ (6·OPT(D) + 6kϵ^{−1}Δ log n)·C(log n)·(1 − 1/(6k))^{6Ck log log n} ≤ 6·OPT(D) + 6kϵ^{−1}Δ log n ≤ 6·OPT(D) + 24k log n/ϵ′.

Therefore, with probability at least 1 − T/n², there exists an i ≤ T s.t. cost(F_i, D) ≤ 6·OPT(D) + 24k log n/ϵ′. Then, applying Lemma C.6 once more to the final output-picking step, one picks an F_j within additive error 4 ln n/ϵ′ of min{cost(F_j, D), j = 1, 2, ..., T} with probability 1 − 1/n². Consequently, the expected additive error is 24k log n/ϵ′ + 4 log n/ϵ′ = O(ϵ^{−1}k²Δ(log log n) log n), with probability 1 − 1/poly(n).
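For concreteness, here is a minimal sketch (ours) of the private swap step analyzed above, i.e., one iteration of Algorithm 5's exponential mechanism over all candidate swaps; the cost function is passed in and all names are illustrative:

```python
import numpy as np

# Sample a swap (x, y) with probability proportional to
# exp(-eps_prime * cost(F - {x} + {y})), as in Algorithm 5.
def private_swap(F, U, cost, eps_prime, rng):
    F = set(F)
    cands, s = [], []
    for x in F:
        for y in set(U) - F:
            G = (F - {x}) | {y}
            cands.append(G)
            s.append(-eps_prime * cost(G))
    s = np.array(s)
    p = np.exp(s - s.max())            # numerically stabilized weights
    return cands[rng.choice(len(cands), p=p / p.sum())]
```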
D EXTENDING HST INITIALIZATION TO k-MEANS

Naturally, our HST method can also be applied to the k-means clustering problem. In this section, we extend HST initialization to k-means and provide a brief analysis analogous to the k-median case. We present the analysis in the non-private setting, which can then be easily adapted to the private setting. Define the following costs for k-means:

cost_km^T(U) = ∑_{y∈U} min_{x∈C_0} ρ_T(x, y)²,   (12)

cost_km^T′(U, C_1) = min_{|F∩T(v)|=1, ∀v∈C_1} ∑_{y∈U} min_{x∈F} ρ_T(x, y)²,   (13)

OPT_km^T(U) = min_{F⊂U, |F|=k} ∑_{y∈U} min_{x∈F} ρ_T(x, y)² ≡ min_{C_1′} cost_km^T′(U, C_1′).   (14)

For simplicity, we write cost_km^T′(U) for cost_km^T′(U, C_1) when clear from context. Here, OPT_km^T in (14) is the cost of the globally optimal solution under the 2-HST metric.

Lemma D.1 (Subtree search). cost_km^T′(U) ≤ 17·OPT_km^T(U).

Proof. The analysis is similar to the proof of Lemma 3.5; we highlight the differences and reuse its notation. Consider the clustering with center set C^* = {c_1^*, c_2^*, ..., c_k^*} (each center c_j^* is a leaf of the subtree whose root is c_j′), and let S_j′ be the leaves assigned to c_j^* in the optimal k-means clustering under the tree metric. Let S_j denote the set of leaves in S_j′ whose distance to c_j^* is strictly smaller than their distance to any center in C_1. Let P_j denote the union of the paths from the leaves of S_j to their closest centers in C_1, and let v_j′′ be the node in P_j with the highest level satisfying T(v_j′′) ∩ C_1 = ∅. The score of v_j′′ is N_{v_j′′}·2^{h_{v_j′′}}, so swapping a center v_j′ into C_1 can reduce cost_km^T′(U) by at most (4·2^{h_{v_j′′}})²·N_{v_j′′}. Again we write v_j′ for v_j′′ for simplicity. Summing all the swaps over C_1^*\C_1 gives

cost_km^T′(U) − OPT_km^T(U) ≤ ∑_{v_j′ ∈ C_1^*\C_1} N_{v_j′}·(4·2^{h_{v_j′}})²,   OPT_km^T(U) ≥ ∑_{v_i ∈ C_1\C_1^*} N_{v_i}·(2^{h_{v_i}})².

Also, based on the discussion of Case 1, it holds that N_{v_j′}·2^{h_{v_j′}} − N_{v_i}·2^{h_{v_i}} ≤ 0. Putting these together, we have cost_km^T′(U) ≤ 17·OPT_km^T(U).

Next, we show that the greedy leaf search strategy (Algorithm 3) only incurs an extra multiplicative error of 2.

Lemma D.2 (Leaf search). cost_km^T(U) ≤ 2·cost_km^T′(U).

Proof. Since the subtrees in C_1 are disjoint, it suffices to consider one subtree with root v. With a slight abuse of notation, let cost_1^T′(v, U) denote the optimal k-means cost within the point set T(v) with one center under the 2-HST metric:

cost_1^T′(v, U) = min_{x∈T(v)} ∑_{y∈T(v)} ρ_T(x, y)²,   (15)

which is the optimal cost within the subtree. Suppose v has more than one child u, w, ...; otherwise the optimal center is clear. Suppose the optimal solution of cost_1^T′(v, U) chooses a leaf node in T(u), while our HST initialization picks a leaf of T(w). If u = w, then HST chooses the optimal one and the claim holds trivially. Thus, we consider u ≠ w. We have the following two observations:

• Since one needs to pick a leaf of T(u) to minimize cost_1^T′(v, U), we have cost_1^T′(v, U) ≥ ∑_{x∈ch(v), x≠u} N_x·(2^{h_x})², where ch(v) denotes the set of children of v.

• By our greedy strategy, cost_1^T(v, U) ≤ ∑_{x∈ch(v)} N_x·(2^{h_x})² ≤ cost_1^T′(v, U) + N_u·(2^{h_u})².

Since h_u = h_w and our algorithm picks the subtree root with the highest score, we have 2^{h_u}·(N_u − N_w) ≤ 0. Then cost_1^T(v, U) ≤ cost_1^T′(v, U) + N_w·(2^{h_w})² ≤ 2·cost_1^T′(v, U). Since the subtrees in C_1 are disjoint, the union of the centers of OPT_1^T(v, U), v ∈ C_1, forms the optimal center set of size k. Note that for any data point p ∈ U outside the chosen subtrees, the tree distance ρ_T(p, f) is the same for every leaf f of T(v), v ∈ C_1.
That is, the choice of leaf in T(v) as the center does not affect the k-means cost of such points under the 2-HST metric. Therefore, summing the bound over the k subtree costs completes the proof.

We are now ready to state the error bound for our proposed HST initialization (Algorithm 2), which is a direct combination of Lemma D.1 and Lemma D.2.

Theorem D.3 (HST initialization). cost_km^T(U) ≤ 34·OPT_km^T(U).

Based on Lemma 3.4, we have the following result.

Theorem D.4. In a general metric space, E[cost_km(U)] = O((min{log n, log Δ})²)·OPT_km(U).
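As a small illustration of the squared objective (12), a sketch (ours) that evaluates the k-means cost of a center set under any tree metric; rho_T is assumed to implement the 2-HST distance and is not defined in the paper's code (none is provided):

```python
# Evaluate the k-means objective (12) under a tree metric.
def kmeans_tree_cost(U, centers, rho_T):
    return sum(min(rho_T(y, x) ** 2 for x in centers) for y in U)
```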
1. What is the focus and contribution of the paper on k-means clustering?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its approximation guarantees and comparison to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions about the empirical evidence presented in the paper?
5. Does the paper provide sufficient literary context and discussion of related work in the field?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes to seed Lloyd's iteration for computing k-means clustering, in the context of finite metric spaces, by approximating the underlying metric with a convex combination of hierarchically separated tree metrics. This is claimed to achieve better approximation guarantees (for the seeding step) than current conventions. The same idea is also used to generate differentially private seeding.
Strengths And Weaknesses
The claims are somewhat dubious, because the paper introduces a new parameter (the diameter of the metric space). The bound becomes better if this parameter is small, but it is not clear that previous analyses of k-means++ variants cannot be proven to do better if this parameter is taken into account. Moreover, part of the attractiveness of the older seeding schemes does not lie in their worst-case performance, but rather in their excellent performance on stable instances, under various notions of stability. The paper has some empirical evidence that their method is good. It is beyond my expertise to judge the empirical part. Also, I cannot evaluate the significance of the differential privacy result. The survey of relevant literature is lacking.
Clarity, Quality, Novelty And Reproducibility
The paper is written clearly.
ICLR
Title
k-Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy
Abstract
In clustering algorithms, the choice of initial centers is crucial for the quality of the learned clusters. We propose a new initialization scheme for the k-median problem in general metric spaces (e.g., discrete spaces induced by graphs), based on the construction of a metric embedding tree structure of the data. From the tree, we propose a novel and efficient search algorithm for good initial centers that can subsequently be used by the local search algorithm. The so-called HST initialization method can produce initial centers achieving lower errors than those from another popular initialization method, k-median++, with comparable efficiency. Our HST initialization can also be easily extended to the setting of differential privacy (DP) to generate private initial centers. We show that applying DP local search after our private HST initialization improves previous results on the approximation error, and approaches the lower bound within a small factor. Experiments demonstrate the effectiveness of our proposed methods.
1 INTRODUCTION
Clustering is an important problem in unsupervised learning that has been widely studied in statistics, data mining, network analysis, etc. (Punj and Stewart, 1983; Dhillon and Modha, 2001; Banerjee et al., 2005; Berkhin, 2006; Abbasi and Younis, 2007). The goal of clustering is to partition a set of data points into clusters such that items in the same cluster are expected to be similar, while items in different clusters should be different. This is concretely measured by the sum of distances (or squared distances) between each point and its nearest cluster center. One conventional notion for evaluating a clustering algorithm is: with high probability,
cost(C, D) ≤ γ·OPT_k(D) + ξ,
where C is the set of centers output by the algorithm and cost(C, D) is a cost function defined for C on dataset D. OPT_k(D) is the cost of the optimal (oracle) clustering solution on D; when everything is clear from context, we write OPT for short. Here, γ is called the multiplicative error and ξ the additive error. Alternatively, we may also use the notion of expected cost. Two popularly studied clustering problems are 1) the k-median problem and 2) the k-means problem. The origin of k-median dates back to the 1970s (e.g., Kaufman et al. (1977)), where one tries to find the best locations of facilities that minimize the cost measured by the distance between clients and facilities. Formally, given a set of points D and a distance measure, the goal is to find k center points minimizing the sum of absolute distances of each sample point to its nearest center. In k-means, the objective is to minimize the sum of squared distances instead. In particular, k-median is usually the choice for clustering on graph/network data. In general, there are two popular frameworks for clustering. One heuristic is Lloyd's algorithm (Lloyd, 1982), which is built upon an iterative distortion minimization approach. In most cases, this method can only be applied to numerical data, typically in the (continuous) Euclidean space. Clustering in general (discrete) metric spaces is also important and useful when dealing with, for example, graph data, where Lloyd's method is no longer applicable. A more broadly applicable approach, the local search method (Kanungo et al., 2002; Arya et al., 2004), has also been widely studied.
It iteratively finds the optimal swap between the center set and non-center data points to keep lowering the cost. Local search can achieve a constant approximation ratio (γ = O(1)) to the optimal solution for k-median (Arya et al., 2004).
Initialization of cluster centers. It is well known that the performance of clustering can be highly sensitive to the initialization. If clustering starts with good initial centers (i.e., with small approximation error), the algorithm may use fewer iterations to find a better solution. The k-median++ algorithm (Arthur and Vassilvitskii, 2007) iteratively selects k data points as initial centers, favoring distant points in a probabilistic way; intuitively, the initial centers tend to be well spread over the data points (i.e., over different clusters). The produced initial centers are proven to have O(log k) multiplicative error. Follow-up works on k-means++ further improved its efficiency and scalability, e.g., Bahmani et al. (2012); Bachem et al. (2016); Lattanzi and Sohler (2019). In this work, we propose a new initialization framework, called HST initialization, based on metric embedding techniques. Our method is built upon a novel search algorithm on metric embedding trees, with approximation error and running time comparable to k-median++. Moreover, and importantly, our initialization scheme can be conveniently combined with the notion of differential privacy (DP).
Clustering with Differential Privacy. The concept of differential privacy (Dwork, 2006; McSherry and Talwar, 2007) has become popular as a rigorous way of keeping useful information for model learning while protecting the privacy of each individual. The private k-means problem has been widely studied, e.g., Feldman et al. (2009); Nock et al. (2016); Feldman et al. (2017), mostly in the continuous Euclidean space. Balcan et al. (2017) considered privately identifying a good candidate set of centers before applying private local search, which yields O(log³ n) multiplicative error and O((k² + d) log⁵ n) additive error. Later, the Euclidean k-means errors were further improved to γ = O(1) and ξ = O(k^{1.01}·d^{0.51} + k^{1.5}) by Stemmer and Kaplan (2018), with more advanced candidate set selection. Huang and Liu (2018) gave an optimal algorithm in terms of minimizing the Wasserstein distance under some data separability condition. For private k-median clustering, Feldman et al. (2009) considered the problem in high-dimensional Euclidean space; however, it is rather difficult to extend their analysis to more general discrete metrics (e.g., on graphs). The strategy of Balcan et al. (2017) of forming a candidate center set could likewise be adopted for k-median, which leads to O(log^{3/2} n) multiplicative error and O((k² + d) log³ n) additive error in high-dimensional Euclidean space. In discrete spaces, Gupta et al. (2010) proposed a private method for the classical local search heuristic, which applies to both k-median and k-means. To make each swap step private, the authors applied the exponential mechanism of McSherry and Talwar (2007). Their method produces an ϵ-differentially private solution with cost 6·OPT + O(Δk² log² n/ϵ), where Δ is the diameter of the point set. In this work, we show that our HST initialization can improve DP local search for k-median (Gupta et al., 2010) in terms of both approximation error and efficiency.
The main contributions of this work include:
• We introduce the Hierarchically Well-Separated Tree (HST) to the k-median clustering problem for initialization. We design an efficient sampling strategy to select the initial center set from the tree, with an approximation factor O(log min{k, Δ}) in the non-private setting, which is O(log min{k, d}) when Δ = O(d) (e.g., bounded data). This improves the O(log k) error of k-means/median++ in, e.g., lower-dimensional Euclidean spaces.
• We propose a differentially private version of HST initialization under the setting of Gupta et al. (2010) in discrete metric spaces. The so-called DP-HST algorithm finds initial centers with O(log n) multiplicative error and O(ϵ^{−1}Δk² log² n) additive error. Moreover, running DP local search from this initialization gives O(1) multiplicative error and O(ϵ^{−1}Δk²(log log n) log n) additive error, which improves previous results towards the well-known lower bound O(ϵ^{−1}Δk log(n/k)) on the additive error of DP k-median (Gupta et al., 2010), to within a small O(k log log n) factor. This is the first clustering initialization method with a differential privacy guarantee and improved error rate in general metric spaces.
• We conduct experiments on simulated and real-world datasets to demonstrate the effectiveness of our methods. In both non-private and private settings, our proposed HST-based approach achieves smaller cost at initialization than k-median++, which may also lead to improvements in the final clustering quality.
2 BACKGROUND AND SETUP
Definition 2.1 (Differential Privacy (DP) (Dwork, 2006)). If for any two adjacent datasets D and D′ with symmetric difference of size one, and for any O ⊂ Range(A), an algorithm A satisfies
Pr[A(D) ∈ O] ≤ e^ϵ·Pr[A(D′) ∈ O],
then A is said to be ϵ-differentially private.
Intuitively, DP requires that after removing any single observation, the output on D′ should not be too different from that on the original dataset D. A smaller ϵ indicates stronger privacy, which, however, usually sacrifices utility; a central topic in DP is thus balancing the utility-privacy trade-off. To achieve DP, one approach is to add noise to the algorithm's output. The Laplace mechanism adds Lap(η(f)/ϵ) noise to the output of a function f, which is known to achieve ϵ-DP, where η(f) = sup_{|D−D′|=1} |f(D) − f(D′)| is the sensitivity of f. The exponential mechanism is another tool underlying many DP algorithms: let O be the set of feasible outputs and q : D × O → R the utility function we aim to maximize; the exponential mechanism outputs o ∈ O with probability P[A(D) = o] ∝ exp(ϵ·q(D, o)/(2η(q))). Both mechanisms will be used in this paper.
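For reference, here are minimal sketches (ours; the paper defines both mechanisms only abstractly, so the signatures are illustrative) of the two DP primitives just described:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, eps, rng):
    # Release value + Lap(sensitivity/eps): eps-DP for a query with the
    # given L1 sensitivity.
    return value + rng.laplace(0.0, sensitivity / eps)

def exponential_mechanism(outputs, q, sensitivity, eps, rng):
    # Sample o with probability proportional to exp(eps * q(o) / (2 * sensitivity)).
    scores = np.array([q(o) for o in outputs], dtype=float)
    w = np.exp(eps * (scores - scores.max()) / (2.0 * sensitivity))
    return outputs[rng.choice(len(outputs), p=w / w.sum())]
```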
2.1 k-MEDIAN CLUSTERING
Following Arya et al. (2004); Gupta et al. (2010), the k-median clustering problems (DP and non-DP) studied in this paper are stated as follows.
Definition 2.2 (k-median). Given a universe point set U and a metric ρ : U × U → R, the goal of k-median is to pick F ⊆ U with |F| = k to minimize
k-median: cost_k(F, U) = ∑_{v∈U} min_{f∈F} ρ(v, f).   (1)
Let D ⊆ U be a set of demand points. The goal of DP k-median is to minimize
DP k-median: cost_k(F, D) = ∑_{v∈D} min_{f∈F} ρ(v, f),   (2)
while the output F is required to be ϵ-differentially private with respect to D. We may drop "F" and write "cost_k(U)" or "cost_k(D)" when there is no risk of ambiguity. To motivate DP clustering, we provide a real-world example.
Example 2.3. Consider U to be the universe of all users in a social network (e.g., Twitter). Each user (account) is public, but also has some private information that can only be seen by the data holder. Let D be the users grouped by some feature that might be set as private. Suppose a third party plans to collaborate with the most influential users in D for, e.g., commercial purposes, and thus requests the cluster centers of D. In this case, we need a strategy to safely release the centers while protecting the individuals in D from being identified (since membership in D is private).
The local search procedure for k-median proposed by Arya et al. (2004) is summarized in Algorithm 1. First, we randomly pick k points in U as the initial centers. In each iteration, we search over all x ∈ F and y ∈ U and perform the swap F ← F − {x} + {y} that improves the cost of F the most, provided the improvement exceeds a factor of (1 − α/k), where α > 0 is a hyper-parameter. We repeat this procedure until no such swap exists. Arya et al. (2004) showed that the output center set F achieves a 5-approximation to the optimal solution, i.e., cost(F) ≤ 5·OPT.
Algorithm 1: Local search for k-median clustering (Arya et al., 2004)
Input: Data points U, parameter k, constant α
Initialization: Randomly select k points from U as the initial center set F
while ∃ x ∈ F, y ∈ U s.t. cost(F − {x} + {y}) ≤ (1 − α/k)·cost(F) do
  Select (x, y) ∈ F × (U \ F) minimizing cost(F − {x} + {y})
  Swap operation: F ← F − {x} + {y}
Output: Center set F
2.2 k-MEDIAN++ INITIALIZATION
Although local search is able to find a solution with constant error, each iteration takes O(n²) time (Resende and Werneck, 2007), over an expected O(k log n) steps (O(kn² log n) in total) when started from a random center set, which is slow for large datasets. Indeed, such a heavyweight algorithm is not needed to reduce the cost at the beginning, when the cost is still large. To accelerate the process, efficient initialization methods find a "roughly good" center set as the starting point for local search. In this paper, we compare our new initialization scheme mainly with a popular (and perhaps the most well-known) initialization method, k-median++ (Arthur and Vassilvitskii, 2007) (see Algorithm 6 in the Appendix). Arthur and Vassilvitskii (2007) showed that the output centers C of k-median++ achieve O(log k) approximation error with time complexity O(nk). Starting from this initialization, only O(k log log k) steps of the computationally heavy local search are needed to reach a constant-error solution; initialization can thus greatly improve clustering efficiency.
3 INITIALIZATION VIA HIERARCHICALLY WELL-SEPARATED TREES (HST)
In this section, we propose our novel initialization scheme for k-median clustering and provide our analysis in the non-private case, solving (1). The idea is based on metric embedding theory. We start with an introduction to the main tool used in our approach.
3.1 HIERARCHICALLY WELL-SEPARATED TREES (HST)
In this paper, for an L-level tree, we count levels in descending order down the tree. We use h_v to denote the level of node v, and n_i the number of nodes at level i. The Hierarchically Well-Separated Tree (HST) is based on padded decompositions of a general metric space in a hierarchical manner (Fakcharoenphol et al., 2004). Let (U, ρ) be a metric space with |U| = n; we refer to this metric space throughout without further clarification.
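For concreteness, a direct sketch (ours) of Algorithm 1; rho is any metric on the points and the names are illustrative, since the paper provides pseudocode only:

```python
import random

def kmedian_cost(F, U, rho):
    # Definition 2.2: sum over points of the distance to the closest center.
    return sum(min(rho(v, f) for f in F) for v in U)

def local_search(U, k, rho, alpha=1e-3, max_iter=20, seed=0):
    rng = random.Random(seed)
    F = set(rng.sample(list(U), k))
    for _ in range(max_iter):
        best, best_cost = None, (1 - alpha / k) * kmedian_cost(F, U, rho)
        for x in F:
            for y in set(U) - F:
                G = (F - {x}) | {y}
                c = kmedian_cost(G, U, rho)
                if c <= best_cost:
                    best, best_cost = G, c
        if best is None:       # no swap improves cost by a (1 - alpha/k) factor
            break
        F = best
    return F
```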
A β-padded decomposition of U is a probabilistic distribution over partitions of U such that the diameter of each cluster U_i ∈ U is at most β, i.e., ρ(u, v) ≤ β for all u, v ∈ U_i, i = 1, ..., k. The formal definition of an HST is given below.
Definition 3.1. Assume min_{u,v∈U} ρ(u, v) = 1 and denote Δ = max_{u,v∈U} ρ(u, v). An α-Hierarchically Well-Separated Tree (α-HST) with depth L is an edge-weighted rooted tree T such that an edge between any pair of nodes at level i−1 and level i has length at most Δ/α^{L−i}.
In this paper, we consider α = 2 (a 2-HST) for simplicity, as α only affects the constants in our theoretical analysis. Figure 1 is an example 3-level 2-HST (right panel), along with its underlying padded decompositions (left panel). A 2-HST can be built as follows: we first find a padded decomposition P_L = {P_{L,1}, ..., P_{L,n_L}} of U with parameter β = Δ/2. The center of each partition P_{L,j} serves as a root node at level L. Then, we re-do a padded decomposition within each partition P_{L,j} to find sub-partitions with diameter β = Δ/4, and set the corresponding centers as the nodes at level L−1, and so on. Each partition at level i is obtained with β = Δ/2^{L−i}. This process proceeds until a node contains a single point (a leaf), or a pre-specified tree depth is reached. More details can be found in Algorithm 7 in Appendix A. Blelloch et al. (2017) proposed an efficient HST construction taking O(m log n) time, where n and m are the numbers of nodes and edges in a graph, respectively.
The first step of our method is to embed the data points into an HST (see Algorithm 2). Next, we describe our new strategy for searching for the initial centers on the tree (w.r.t. the tree metric). Before moving on, it is worth mentioning that there exist polynomial-time algorithms for computing an exact k-median solution in a tree metric (Tamir (1996); Shah (2003)). However, these dynamic programming algorithms have high complexity (e.g., O(kn²)), making them unsuitable for fast initialization, and it is unknown how to apply them effectively to the private case. As will be shown, our new algorithm 1) is very efficient, 2) gives O(1) approximation error in the tree metric, and 3) can be effectively and easily extended to DP.
3.2 HST INITIALIZATION ALGORITHM
Let L = log Δ and suppose T is a level-L 2-HST in (U, ρ), where we assume L is an integer. For a node v at level i, we use T(v) to denote the subtree rooted at v. Let N_v = |T(v)| be the number of data points in T(v). The search strategy for the initial centers, NDP-HST initialization ("NDP" stands for "Non-Differentially Private"), is presented in Algorithm 2 and has two phases.
Algorithm 2: NDP-HST initialization
Input: U, Δ, k
Initialization: L = log Δ, C_0 = ∅, C_1 = ∅
Call Algorithm 7 to build a level-L 2-HST T using U
for each node v in T do
  N_v ← |U ∩ T(v)|
  score(v) ← N_v · 2^{h_v}
while |C_1| < k do
  Add the top (k − |C_1|) nodes with the highest scores to C_1
  for each v ∈ C_1 do
    C_1 ← C_1 \ {v} if ∃ v′ ∈ C_1 such that v′ is a descendant of v
C_0 = FIND-LEAF(T, C_1)
Output: Initial center set C_0 ⊆ U
Algorithm 3: FIND-LEAF(T, C_1)
Input: T, C_1
Initialization: C_0 = ∅
for each node v in C_1 do
  while v is not a leaf node do
    v ← arg max_{w ∈ ch(v)} N_w, where ch(v) denotes the children of v
  Add v to C_0
Output: Initial center set C_0 ⊆ U
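A minimal sketch (ours) of the greedy descent in Algorithm 3; the dict-based tree encoding (children lists plus counts N_v) is an illustrative assumption:

```python
# From each chosen subtree root, greedily descend to the child with the
# largest point count until a leaf is reached, as in FIND-LEAF.
def find_leaf(roots, children, count):
    centers = []
    for v in roots:
        while children.get(v):                    # empty/missing list = leaf
            v = max(children[v], key=lambda w: count[w])
        centers.append(v)
    return centers
```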
Subtree search. The first step is to identify the subtrees that contain the k centers. To begin, the k initial centers C_1 are picked from T as the nodes with the largest score(v) = N_v · 2^{h_v}. This is intuitive: to get a good clustering, we typically want the ball surrounding each center to include more data points. Next, we perform a screening over C_1: if there is any ancestor-descendant pair of nodes, we remove the ancestor from C_1. If the current size of C_1 is smaller than k, we repeat the process until k centers are chosen (we do not re-select nodes already in C_1 or their ancestors). In this way, C_1 contains the roots of k disjoint subtrees.
Leaf search. After finding C_1, the set of k subtrees, the next step is to find the center within each subtree using Algorithm 3 ("FIND-LEAF"). We employ a greedy search strategy: find the child node with the largest score, level by level, until a leaf is reached. This approach is intuitive since the diameter of the partition ball decays exponentially with the level; we thus focus more and more on the region with higher density (i.e., with more data points). The complexity of our search algorithm is as follows.
Proposition 3.2 (Complexity). Algorithm 2 takes O(dn log n) time in the Euclidean space.
Remark 3.3. The complexity of HST initialization is in general comparable to the O(dnk) of k-median++. Our algorithm is faster when k > log n, i.e., when the number of centers is large.
3.3 APPROXIMATION ERROR OF HST INITIALIZATION
First, we show that the initial center set produced by NDP-HST is already a good approximation to the optimal k-median solution. Let ρ_T(x, y) = d_T(x, y) denote the "2-HST metric" between x and y in the 2-HST T, where d_T(x, y) is the tree distance between nodes x and y in T. By Definition 3.1 and since Δ = 2^L, in the analysis we equivalently assume that edges at the i-th level have weight 2^{i−1}. The crucial step of our analysis is to examine the approximation error in terms of the 2-HST metric, after which the error can be transferred to general metrics by the following lemma (Bartal, 1996).
Lemma 3.4. In a metric space (U, ρ) with |U| = n and diameter Δ, it holds that E[ρ_T(x, y)] = O(min{log n, log Δ})·ρ(x, y). In the Euclidean space R^d, E[ρ_T(x, y)] = O(d)·ρ(x, y).
Recall C_0 and C_1 from Algorithm 2. We define
cost_k^T(U) = ∑_{y∈U} min_{x∈C_0} ρ_T(x, y),   (3)
cost_k^T′(U, C_1) = min_{|F∩T(v)|=1, ∀v∈C_1} ∑_{y∈U} min_{x∈F} ρ_T(x, y),   (4)
OPT_k^T(U) = min_{F⊂U, |F|=k} ∑_{y∈U} min_{x∈F} ρ_T(x, y) ≡ min_{C_1′} cost_k^T′(U, C_1′).   (5)
For simplicity, we write cost_k^T′(U) for cost_k^T′(U, C_1). Here, OPT_k^T in (5) is the cost of the globally optimal solution under the 2-HST metric; the last equivalence in (5) holds because the optimal center set can always be located in k disjoint subtrees, as each leaf contains only one point. (3) is the k-median cost, under the 2-HST metric, of the output C_0 of Algorithm 2, and (4) is the oracle cost after the subtrees are chosen, i.e., the optimal cost of picking one center from each subtree in C_1. We first bound the approximation errors of the subtree search and the leaf search, respectively.
Lemma 3.5 (Subtree search). cost_k^T′(U) ≤ 5·OPT_k^T(U).
Lemma 3.6 (Leaf search). cost_k^T(U) ≤ 2·cost_k^T′(U).
Combining Lemma 3.5 and Lemma 3.6, we obtain:
Theorem 3.7 (2-HST error). Running Algorithm 2, we have cost_k^T(U) ≤ 10·OPT_k^T(U).
Thus, HST initialization produces an O(1) approximation to OPT under the 2-HST metric. Define cost_k(U) as in (1) for our HST centers, and the optimal cost w.r.t. ρ as
OPT_k(U) = min_{|F|=k} ∑_{y∈U} min_{x∈F} ρ(x, y).   (6)
We have the following result based on Lemma 3.4.
Theorem 3.8. In a general metric space, the expected k-median cost of Algorithm 2 is E[cost_k(U)] = O(min{log n, log Δ})·OPT_k(U).
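To make the 2-HST metric in (3)-(5) concrete, here is a minimal sketch (ours) of ρ_T under the convention above that a level-i edge has weight 2^{i−1}; the parent/level encoding is an illustrative assumption:

```python
# Tree distance between two nodes: sum edge weights along the path through
# the lowest common ancestor. level[v] = h_v (levels descend toward leaves),
# so lifting a node at level h to its parent costs 2^h.
def rho_T(x, y, parent, level):
    d = 0.0
    while x != y:
        if level[x] <= level[y]:   # lift whichever node is deeper
            d += 2.0 ** level[x]
            x = parent[x]
        else:
            d += 2.0 ** level[y]
            y = parent[y]
    return d
```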
Remark 3.9. In the Euclidean space, Makarychev et al. (2019) proved that O(log k) random projections suffice for k-median to achieve O(1) error. Thus, if Δ = O(d) (e.g., bounded data), by Lemma 3.4, HST initialization achieves O(log min{d, k}) error, which is better than the O(log k) of k-median++ when d is small.
NDP-HST Local Search. We are interested in the approximation quality of the standard local search (Algorithm 1) when initialized by our NDP-HST.
Theorem 3.10. NDP-HST local search achieves O(1) approximation error in an expected O(k log log min{n, Δ}) number of iterations for inputs in a general metric space.
Before ending this section, we remark that the initial centers found by NDP-HST can analogously be used for k-means clustering. For general metrics, E[cost_km(U)] = O((min{log n, log Δ})²)·OPT_km(U), where cost_km(U) is the k-means cost of the initial centers. See Appendix D for the detailed (and similar) analysis.
4 HST INITIALIZATION WITH DIFFERENTIAL PRIVACY
In this section, we consider initialization with differential privacy (DP). Recall from (2) that U is the universe of data points and D ⊂ U is a demand set that needs to be clustered with privacy.
Algorithm 4: DP-HST initialization
Input: U, D, Δ, k, ϵ
Build a level-L 2-HST T based on the input U
for each node v in T do
  N_v ← |D ∩ T(v)|
  N̂_v ← N_v + Lap(2^{L−h_v}/ϵ)
  score(v) ← N̂_v · 2^{h_v}
Based on N̂_v, apply the same strategy as Algorithm 2: find C_1; C_0 = FIND-LEAF(T, C_1)
Output: Private initial center set C_0 ⊆ U
Algorithm 5: DP-HST local search
Input: U, demand points D ⊆ U, parameters k, ϵ, T
Initialization: F_1 ← the private initial centers generated by Algorithm 4 with privacy budget ϵ/2
Set ϵ′ = ϵ/(4Δ(T+1))
for i = 1 to T do
  Select (x, y) ∈ F_i × (U \ F_i) with probability proportional to exp(−ϵ′ · cost(F_i − {x} + {y}))
  F_{i+1} ← F_i − {x} + {y}
Select j from {1, 2, ..., T+1} with probability proportional to exp(−ϵ′ · cost(F_j))
Output: F = F_j, the private center set
Since U is public, simply running initialization algorithms on U would preserve the privacy of D. However, 1) this might be too expensive, and 2) in many cases one would want to incorporate information about D in the initialization, since D could be a very imbalanced subset of U. For example, D may contain data points from only one cluster out of tens of clusters in U; in that case, initialization on U is likely to pick initial centers in multiple clusters, which would not be helpful for clustering D. Next, we show how our proposed HST initialization can easily be combined with differential privacy while incorporating information about the demand set D, leading to improved approximation error (Theorem 4.3). Again, suppose T is an (L = log Δ)-level 2-HST of the universe U in a general metric space, and denote N_v = |T(v) ∩ D| for a node v. Our private HST initialization (DP-HST) is similar to the non-private Algorithm 2. To gain privacy, we perturb N_v by adding i.i.d. Laplace noise: N̂_v = N_v + Lap(2^{L−h_v}/ϵ), where Lap(2^{L−h_v}/ϵ) is a Laplace random variable with scale 2^{L−h_v}/ϵ. We use the perturbed N̂_v for node sampling instead of the true value N_v, as described in Algorithm 4. The DP guarantee of this initialization scheme follows directly from the composition theorem (Dwork, 2006).
Theorem 4.1. Algorithm 4 is ϵ-differentially private.
Proof. For each level i, the subtrees rooted at level i are disjoint. The privacy budget used at the i-th level is ϵ/2^{L−i}, so the total privacy is ∑_i ϵ/2^{L−i} < ϵ.
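A minimal sketch (ours) of the noisy-score step of Algorithm 4; the dict-based tree encoding is an illustrative assumption, and the subtree/leaf search of Algorithm 2 is then run on these scores unchanged:

```python
import numpy as np

# Perturb each demand-point subtree count with level-dependent Laplace
# noise, N_hat_v = N_v + Lap(2^{L - h_v}/eps), and score by N_hat_v * 2^{h_v}.
def noisy_scores(nodes, level, count_D, L, eps, rng):
    score = {}
    for v in nodes:
        n_hat = count_D[v] + rng.laplace(0.0, 2.0 ** (L - level[v]) / eps)
        score[v] = n_hat * 2.0 ** level[v]
    return score
```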
We now consider the approximation error. As the structure of the analysis is similar to the non-DP case, we present the main results here and defer the detailed proofs to Appendix C.
Theorem 4.2. Algorithm 4 finds initial centers such that E[cost_k(D)] = O(log n)·(OPT_k(D) + kϵ^{−1}Δ log n).
DP-HST Local Search. Similarly, we can use private HST initialization to improve the performance of private k-median local search, as presented in Algorithm 5. After initialization, the DP local search procedure follows Gupta et al. (2010), using the exponential mechanism.
Theorem 4.3. Algorithm 5 achieves ϵ-differential privacy. With probability 1 − 1/poly(n), the output centers satisfy cost_k(D) ≤ 6·OPT_k(D) + O(ϵ^{−1}k²Δ(log log n) log n) in T = O(k log log n) iterations.
DP local search with random initialization (Gupta et al., 2010) has multiplicative error 6 and additive error O(ϵ^{−1}Δk² log² n). Our result improves the log n term to log log n in the additive error; meanwhile, the number of iterations needed improves from T = O(k log n) to O(k log log n) (see Appendix B for an empirical justification). Notably, Gupta et al. (2010) showed that for the k-median problem, the lower bounds on the multiplicative and additive errors of any ϵ-DP algorithm are O(1) and O(ϵ^{−1}Δk log(n/k)), respectively. Our result matches the lower bound on the multiplicative error, and our additive error exceeds the bound by only a factor of O(k log log n), which is small in many cases. To our knowledge, Theorem 4.3 is the first result in the literature to improve the error of DP local search in general metric spaces.
5 EXPERIMENTS
5.1 DATASETS AND ALGORITHMS
Discrete Euclidean space. Following previous work, we test k-median clustering on the MNIST hand-written digit dataset (LeCun et al., 1998) with 10 natural clusters (digits 0 to 9). We set U to be 10000 randomly chosen data points. We choose the demand set D using two strategies: 1) "balanced", where we randomly choose 500 samples from U; and 2) "imbalanced", where D contains 500 random samples from U drawn only from digits "0" and "8" (two clusters). We note that an imbalanced D is a very practical setting in real-world scenarios, where data are typically not uniformly distributed. On this dataset, we test clustering with both the l1 and l2 distance as the underlying metric.
Metric space induced by a graph. Random graphs have been widely used for testing k-median methods (Balcan et al., 2013; Todo et al., 2019). Our construction follows an approach similar to the synthetic pmedinfo graphs provided by the popular OR-Library (Beasley, 1990). The metric ρ for this experiment is the shortest (weighted) path distance. To generate a graph of size n, we first randomly split the nodes into 10 clusters. Within each cluster, each pair of nodes is connected with probability 0.2, with weight drawn from the standard uniform distribution. For each pair of clusters, we randomly connect some nodes from each cluster, with weights following uniform [0.5, r]. A larger r makes the graph more separable, i.e., clusters are farther from each other (see Appendix B for example graphs). We present two cases: r = 1 and r = 100. For this task, U has 3000 nodes, and the private set D (500 nodes) is chosen using "balanced" and "imbalanced" schemes similar to the above; in the imbalanced case, we choose D randomly from only two clusters.
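A sketch (ours) of the synthetic graph construction just described; the number of inter-cluster edges per cluster pair (inter_edges) is our own illustrative choice, as the text leaves it unspecified, and the default n is kept small for speed:

```python
import numpy as np

# Planted clusters; intra-cluster edges w.p. p_in with Uniform(0,1) weights;
# sparse inter-cluster edges with Uniform(0.5, r) weights.
def make_graph(n=300, n_clusters=10, p_in=0.2, r=100.0, inter_edges=5, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, size=n)
    edges = []                                     # (u, v, weight) triples
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                if rng.random() < p_in:
                    edges.append((idx[a], idx[b], rng.random()))
    for c1 in range(n_clusters):
        for c2 in range(c1 + 1, n_clusters):
            us = rng.choice(np.flatnonzero(labels == c1), size=inter_edges)
            vs = rng.choice(np.flatnonzero(labels == c2), size=inter_edges)
            for u, v in zip(us, vs):
                edges.append((u, v, rng.uniform(0.5, r)))
    return labels, edges
```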
Algorithms. We compare the following clustering algorithms in both the non-DP and DP settings:
(1) NDP-rand: local search with random initialization;
(2) NDP-kmedian++: local search with k-median++ initialization (Algorithm 6);
(3) NDP-HST: local search with NDP-HST initialization (Algorithm 2), as described in Section 3;
(4) DP-rand: the standard DP local search algorithm (Gupta et al., 2010), i.e., Algorithm 5 with initial centers randomly chosen from U;
(5) DP-kmedian++: DP local search with k-median++ initialization run on U;
(6) DP-HST: DP local search with HST initialization (Algorithm 5).
For non-DP tasks, we set L = 6; for DP clustering, we use L = 8. For non-DP methods, we set α = 10^{−3} in Algorithm 1 and cap the number of iterations at 20. To examine the quality of the initialization as well as of the final centers, we report both the cost at initialization and the cost of the final output. For DP methods, we run the algorithms for T = 20 steps and report the results with ϵ = 1. We test k ∈ {2, 5, 10, 15, 20}. For robustness, the average cost over the T iterations is reported. All results are averaged over 10 independent repetitions.
5.2 RESULTS
The results on the MNIST dataset are given in Figure 2. The comparisons are similar for both l1 and l2:
• From the left column, the initial centers found by HST have lower cost than those of k-median++ and random initialization, in both the non-DP and DP settings and for both balanced and imbalanced demand sets D. This confirms that the proposed HST initialization is more powerful than k-median++ at finding good initial centers.
• From the right column, we also observe a lower final cost for HST followed by local search in DP clustering. In the non-DP case, the final cost curves overlap, which means that although HST offers better initial centers, local search can always find a good solution eventually.
• The advantage of DP-HST, in terms of both the initial and the final cost, is more significant when D is an imbalanced subset of U. As mentioned before, this is because our DP-HST initialization also privately incorporates information about D.
The results on graphs, reported in Figure 3, lead to similar conclusions. In all cases, our proposed HST scheme finds better initial centers with smaller cost than k-median++. Moreover, HST again considerably outperforms k-median++ in the private, imbalanced-D setting, for both r = 100 (highly separable) and r = 1 (less separable). The advantages of HST over k-median++ are especially significant in the harder tasks with r = 1, i.e., when the clusters are nearly mixed.
6 CONCLUSION
In this paper, we propose a new initialization framework for the k-median problem in general metric spaces. Our approach, called HST initialization, leverages tools from metric embedding theory. Our novel tree search approach has efficiency and approximation error comparable to the popular k-median++ initialization. Moreover, we propose a differentially private (DP) HST initialization algorithm which adapts to the private demand point set, leading to better clustering performance. When combined with the subsequent DP local search heuristic, our algorithm improves the additive error of DP local search, coming within a small factor of the theoretical lower bound. Experiments with Euclidean metrics and graph metrics verify the effectiveness of our methods, which improve the cost of both the initial centers and the final k-median output.
A POSTPONED ALGORITHMS
A.1 k-MEDIAN++
In the paper, we compared our HST initialization mainly with the other (perhaps most well-known) initialization algorithm for clustering, k-median++ (Arthur and Vassilvitskii, 2007). For reference, we present the concrete procedure in Algorithm 6. Here, ρ(u, F) is the shortest distance from a data point u to the closest (center) point in the set F. Arthur and Vassilvitskii (2007) showed that the output centers of k-median++ achieve O(log k) approximation error, in O(dnk) time.
Algorithm 6: k-median++ initialization (Arthur and Vassilvitskii, 2007)
Input: Data points U, number of centers k
Randomly pick a point c_1 ∈ U and set F = {c_1}
for i = 2 to k do
  Select c_i = u ∈ U with probability ρ(u, F) / ∑_{u′∈U} ρ(u′, F)
  F = F ∪ {c_i}
Output: k-median++ initial center set F
A.2 HST CONSTRUCTION
As presented in Algorithm 7, the construction starts by applying a permutation π to U, so that in the following steps the points are picked in a random sequence. We first find a padded decomposition P_L = {P_{L,1}, ..., P_{L,n_L}} of U with parameter β = Δ/2. The center of each partition P_{L,j} serves as a root node at level L. Then, we re-do a padded decomposition within each partition P_{L,j} to find sub-partitions with diameter β = Δ/4, and set the corresponding centers as the nodes at level L−1, and so on. Each partition at level i is obtained with β = Δ/2^{L−i}. This process proceeds until a node contains a single point, or a pre-specified tree depth is reached. In Figure 1, we provide an example of a 3-level 2-HST (left panel), along with its underlying padded decompositions (right panel).
Algorithm 7: Build 2-HST(U, L)
Input: Data points U with diameter Δ, depth L
Randomly pick a point in U as the root node of T
Let r = Δ/2
Apply a permutation π to U // so points are chosen in a random sequence
for each v ∈ U do
  Set C_v = [v]
  for each u ∈ U do
    Add u to C_v if d(v, u) ≤ r and u ∉ ∪_{v′≠v} C_{v′}
Set the non-empty clusters C_v as the children nodes of T
for each non-empty cluster C_v do
  Run 2-HST(C_v, L−1) to extend the tree T; stop after L levels or upon reaching a leaf node
Output: 2-HST T
B MORE EXPERIMENTS
B.1 EXAMPLES OF GRAPH DATA
In Figure 4, we plot two example graphs (subgraphs of 50 nodes) with r = 100 and r = 1. When r = 100, the graph is highly separable (i.e., clusters are far from each other); when r = 1, the clusters are harder to distinguish from each other.
B.2 RUNNING TIME COMPARISON WITH k-MEDIAN++
In Proposition 3.2, we show that our HST initialization algorithm admits O(dn log n) complexity in the Euclidean space. With a smart implementation of Algorithm 6, in which each data point tracks its distance to the current closest candidate center in F, k-median++ has O(dnk) running time. Therefore, the running time of our algorithm is in general comparable to that of k-median++; our method runs faster if k = Ω(log n). In Figure 5, we plot the empirical running time of HST initialization against k-median++ on the MNIST dataset with the l2 distance (a similar comparison holds for l1). From the left subfigure, we see that k-median++ becomes slower with increasing k, and our method is more efficient when k > 20. In the right panel, we observe that the running time of both methods increases with the sample size n. Our HST algorithm has a slightly faster growth rate, as predicted by the complexity comparison (n log n vs. n); however, this log n factor does not become significant unless the sample size is extremely large.
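A minimal numpy sketch (ours) of Algorithm 6 with the caching just described; names are illustrative:

```python
import numpy as np

# k-median++ seeding: sample each next center with probability proportional
# to the distance (not squared distance) to the current center set.
def kmedianpp(X, k, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [int(rng.integers(n))]
    d = np.linalg.norm(X - X[centers[0]], axis=1)  # dist to nearest center
    for _ in range(k - 1):
        c = int(rng.choice(n, p=d / d.sum()))      # sample prop. to rho(u, F)
        centers.append(c)
        d = np.minimum(d, np.linalg.norm(X - X[c], axis=1))
    return centers
```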
Overall, our numerical results suggest that in common practical scenarios, the proposed HST initialization has efficiency similar to that of k-median++.
B.3 IMPROVED ITERATION COST OF DP-HST
In Theorem 4.3, we show that under differential privacy constraints, the proposed DP-HST (Algorithm 5) improves both the approximation error and the number of iterations required to find a good solution compared with classical DP local search (Gupta et al., 2010). In this section, we provide numerical results to justify the theory. First, we need to properly measure the iteration cost of DP local search: unlike in non-private clustering, the k-median cost after each iteration of DP local search does not decrease monotonically, due to the probabilistic exponential mechanism. To this end, for each cost sequence of length T = 20, we compute its moving-average sequence with window size 5. Attaining the minimal value of the moving average indicates that the algorithm has found a "local optimum", i.e., it has reached a "neighborhood" of solutions with small clustering cost. We therefore use the number of iterations needed to reach this local optimum as the measure of iteration cost. The results are provided in Figure 6: on all tasks (MNIST with l1 and l2 distance, and the graph datasets with r = 1 and r = 100), DP-HST has significantly smaller iteration cost. In Figure 7, we further report the k-median cost of the best solution found within T iterations by each DP algorithm; DP-HST again yields the smallest cost. This additional set of experiments validates the claims of Theorem 4.3, namely that DP-HST finds better initial centers in fewer iterations.
C PROOFS
The following composition result for differential privacy will be used in our proofs.
Theorem C.1 (Composition Theorem (Dwork, 2006)). If algorithms A_1, A_2, ..., A_m are ϵ_1, ϵ_2, ..., ϵ_m differentially private, respectively, then the union (A_1(D), A_2(D), ..., A_m(D)) is ∑_{i=1}^m ϵ_i-DP.
C.1 PROOF OF LEMMA 3.5
Proof. Consider the intermediate output of Algorithm 2, C_1 = {v_1, v_2, ..., v_k}, the set of roots of the minimal subtrees each containing exactly one output center of C_0. Suppose one optimal "root set" that minimizes (4) is C_1^* = {v_1′, v_2′, ..., v_k′}. If C_1 = C_1^*, the proof is done; thus, we consider the case C_1 ≠ C_1^*. Note that the T(v), v ∈ C_1, are disjoint subtrees. We reason as follows.
• Case 1: for some i, j, v_i is a descendant node of v_j′. Since the optimal center point f^* is a leaf node by the definition of (4), there must exist one child node of v_j′ whose subtree contains f^*; therefore, we can always replace v_j′ by one of its children. Hence, we may assume that v_i is not a descendant of v_j′. Note that score(v_j′) ≤ score(v_i) if v_j′ ∉ C_1^* ∩ C_1: Algorithm 2 sorts all the nodes by score, and it would have picked v_j′ in preference to v_i if score(v_j′) > score(v_i) and v_i were not a descendant of v_j′.
• Case 2: for some i, j, v_j′ is a descendant of v_i. In this case, the optimal center point f^*, which is a leaf of T(v_i), must also be a leaf node of T(v_j′). We can simply replace C_1 with the swap C_1 \ {v_i} + {v_j′}, which does not change cost_k^T′(U). Hence, we may assume that v_j′ is not a descendant of v_i.
• Case 3: otherwise. By the construction of C_1, we know that score(v_j′) ≤ min{score(v_i), i = 1, ..., k} when v_j′ ∈ C_1^*\C_1. Consider the swap between C_1 and C_1^*.
By the definition of the tree distance, we have

OPT_k^T(U) ≥ ∑_{v_i ∈ C_1\C_1^*} N_{v_i}·2^{h_{v_i}},

since {T(v_i) : v_i ∈ C_1\C_1^*} does not contain any center of the optimal solution determined by C_1^* (which is also the optimal "root set" for OPT_k^T(U)). Thus, we only need to consider Case 3. Consider the optimal clustering with center set C^* = {c_1^*, c_2^*, ..., c_k^*} (each center c_j^* is a leaf of the subtree whose root is c_j′), and let S_j′ be the leaves assigned to c_j^*. Let S_j denote the set of leaves in S_j′ whose distance to c_j^* is strictly smaller than their distance to any center in C_1. Let P_j denote the union of the paths from the leaves of S_j to their closest centers in C_1, and let v_j′′ be the node in P_j with the highest level satisfying T(v_j′′) ∩ C_1 = ∅. The score of v_j′′ is N_{v_j′′}·2^{h_{v_j′′}}. Hence, swapping a center v_j′ into C_1 can reduce cost_k^T′(U) by at most 4·2^{h_{v_j′′}}·N_{v_j′′}, because the tree distance between any leaf in S_j and its closest center in C_1 is at most 4·2^{h_{v_j′′}}. For simplicity, we write v_j′ for v_j′′ in the rest of this proof. Summing all the swaps over C_1^*\C_1 gives

cost_k^T′(U) − OPT_k^T(U) ≤ 4 ∑_{v_j′ ∈ C_1^*\C_1} N_{v_j′}·2^{h_{v_j′}},   OPT_k^T(U) ≥ ∑_{v_i ∈ C_1\C_1^*} N_{v_i}·2^{h_{v_i}}.

Also, by our discussion of Case 1, it holds that N_{v_j′}·2^{h_{v_j′}} − N_{v_i}·2^{h_{v_i}} ≤ 0. Combining these inequalities, we obtain cost_k^T′(U) ≤ 5·OPT_k^T(U).

C.2 PROOF OF LEMMA 3.6

Proof. Since the subtrees in C_1 are disjoint, it suffices to consider one subtree with root v. With a slight abuse of notation, let cost_1^T′(v, U) denote the optimal cost within the point set T(v) with one center under the 2-HST metric:

cost_1^T′(v, U) = min_{x∈T(v)} ∑_{y∈T(v)} ρ_T(x, y),   (7)

which is the optimal cost within the subtree. Suppose v has more than one child u, w, ...; otherwise the optimal center is clear. Suppose the optimal solution of cost_1^T′(v, U) chooses a leaf node in T(u), while our HST initialization algorithm picks a leaf of T(w). If u = w, then HST chooses the optimal one and the claim holds trivially. Thus, we consider u ≠ w. We have the following two observations:

• Since one needs to pick a leaf of T(u) to minimize cost_1^T′(v, U), we have cost_1^T′(v, U) ≥ ∑_{x∈ch(v), x≠u} N_x·2^{h_x}, where ch(v) denotes the set of children of v.

• By our greedy strategy, cost_1^T(v, U) ≤ ∑_{x∈ch(v)} N_x·2^{h_x} ≤ cost_1^T′(v, U) + N_u·2^{h_u}.

Since h_u = h_w and our algorithm picks the subtree root with the highest score, we have 2^{h_u}·(N_u − N_w) ≤ 0. Then cost_1^T(v, U) ≤ cost_1^T′(v, U) + N_w·2^{h_w} ≤ 2·cost_1^T′(v, U). Since the subtrees in C_1 are disjoint, the union of the centers of OPT_1^T(v, U), v ∈ C_1, forms the optimal center set of size k. Note that for any data point p ∈ U outside the chosen subtrees, the tree distance ρ_T(p, f) is the same for every leaf f of T(v), v ∈ C_1; that is, the choice of leaf in T(v) as the center does not affect the k-median cost of such points under the 2-HST metric. Therefore, summing the bound over the k subtree costs completes the proof.

C.3 PROOF OF PROPOSITION 3.2

Proof. It is known that a 2-HST can be constructed in O(dn log n) time (Bartal, 1996). The subtree search in Algorithm 2 involves at most sorting all the nodes in the HST by score, which takes O(n log n). We use a priority queue to store the nodes in C_1: when we insert a new node v into the queue, its parent node (if present in the queue) is removed. The number of nodes is O(n), and each operation (insertion, deletion) in a score-based priority queue takes O(log n).
Lastly, the total time to obtain C_0 is O(n), as FIND-LEAF only requires a top-down scan of the k disjoint subtrees of T. Summing the parts proves the claim.

C.4 PROOF OF THEOREM 4.2

Similarly to the non-private case, we prove the error bound in a general metric by first analyzing the error under the 2-HST metric. The result then follows from Lemma 3.4. Let cost_k^T(D), cost_k^T′(D) and OPT_k^T(D) be defined analogously to (3), (4) and (5), with "y ∈ U" in the summations replaced by "y ∈ D", since D is the demand set. That is,

cost_k^T(D) = ∑_{y∈D} min_{x∈C_0} ρ_T(x, y),   (8)

cost_k^T′(D, C_1) = min_{|F∩T(v)|=1, ∀v∈C_1} ∑_{y∈D} min_{x∈F} ρ_T(x, y),   (9)

OPT_k^T(D) = min_{F⊂D, |F|=k} ∑_{y∈D} min_{x∈F} ρ_T(x, y) ≡ min_{C_1′} cost_k^T′(D, C_1′).   (10)

We have the following.

Lemma C.2. With probability 1 − 4k/n^c, cost_k^T(D) ≤ 10·OPT_k^T(D) + 10ckϵ^{−1}Δ log n.

Proof. The result follows by combining Lemma C.4 and Lemma C.5 below and applying a union bound.

Lemma C.3. For any node v in T, with probability 1 − 1/n^c, |N̂_v·2^{h_v} − N_v·2^{h_v}| ≤ cϵ^{−1}Δ log n.

Proof. Since N̂_v = N_v + Lap(2^{L−h_v}/ϵ), we have Pr[|N̂_v − N_v| ≥ x/ϵ] = exp(−x/2^{L−h_v}). As L = log Δ, this gives Pr[|N̂_v − N_v| ≥ xΔ/(2^{h_v}ϵ)] ≤ exp(−x). Hence, for some constant c > 0,

Pr[|N̂_v·2^{h_v} − N_v·2^{h_v}| ≤ cϵ^{−1}Δ log n] ≥ 1 − exp(−c log n) = 1 − 1/n^c.

Lemma C.4 (DP Subtree Search). With probability 1 − 2k/n^c, cost_k^T′(D) ≤ 5·OPT_k^T(D) + 4ckϵ^{−1}Δ log n.

Proof. The proof is similar to that of Lemma 3.5. Consider the intermediate output of Algorithm 2, C_1 = {v_1, v_2, ..., v_k}, the set of roots of the minimal disjoint subtrees each containing exactly one output center of C_0. Suppose one optimal "root set" that minimizes (9) is C_1^* = {v_1′, v_2′, ..., v_k′}, and assume C_1 ≠ C_1^*. By the same argument as in the proof of Lemma 3.5, it suffices to consider pairs i, j with v_i ≠ v_j′ such that v_i is not a descendant of v_j′ and v_j′ is not a descendant of v_i. By the construction of C_1, we know that score(v_j′) ≤ min{score(v_i), i = 1, ..., k} when v_j′ ∈ C_1^*\C_1. Consider the swap between C_1 and C_1^*. By the definition of the tree distance, we have

OPT_k^T(D) ≥ ∑_{v_i ∈ C_1\C_1^*} N_{v_i}·2^{h_{v_i}},

since {T(v_i) : v_i ∈ C_1\C_1^*} does not contain any center of the optimal solution determined by C_1^* (which is also the optimal "root set" for OPT_k^T(D)). Consider the optimal clustering with center set C^* = {c_1^*, ..., c_k^*} (each center c_j^* is a leaf of the subtree whose root is c_j′), and let S_j′ be the leaves assigned to c_j^*. Let S_j be the set of leaves in S_j′ whose distance to c_j^* is strictly smaller than their distance to any center in C_1. Let P_j denote the union of the paths from the leaves of S_j to their closest centers in C_1, and let v_j′′ be the node in P_j with the highest level satisfying T(v_j′′) ∩ C_1 = ∅. The score of v_j′′ is N_{v_j′′}·2^{h_{v_j′′}}, so swapping a center v_j′ into C_1 can reduce cost_k^T′(D) by at most 4·2^{h_{v_j′′}}·N_{v_j′′} (the tree distance between any leaf in S_j and its closest center in C_1 is at most 4·2^{h_{v_j′′}}). Again we write v_j′ for v_j′′ for simplicity. Summing all the swaps over C_1^*\C_1, we obtain

cost_k^T′(D) − OPT_k^T(D) ≤ 4 ∑_{v_j′ ∈ C_1^*\C_1} N_{v_j′}·2^{h_{v_j′}},   OPT_k^T(D) ≥ ∑_{v_i ∈ C_1\C_1^*} N_{v_i}·2^{h_{v_i}}.

Applying a union bound with Lemma C.3, with probability 1 − 2/n^c, we have N_{v_j′}·2^{h_{v_j′}} − N_{v_i}·2^{h_{v_i}} ≤ 2cϵ^{−1}Δ log n. Consequently, with probability 1 − 2k/n^c,

cost_k^T′(D) ≤ 5·OPT_k^T(D) + 4c|C_1^*\C_1|·ϵ^{−1}Δ log n ≤ 5·OPT_k^T(D) + 4ckϵ^{−1}Δ log n.

Lemma C.5 (DP Leaf Search). With probability 1 − 2k/n^c, Algorithm 4 produces initial centers with cost_k^T(D) ≤ 2·cost_k^T′(D) + 2ckϵ^{−1}Δ log n.

Proof.
The proof strategy follows that of Lemma 3.6. Consider one subtree with root v, and let cost_1^T′(v, D) denote the optimal cost within the point set T(v) with one center under the 2-HST metric:

cost_1^T′(v, D) = min_{x∈T(v)} ∑_{y∈T(v)∩D} ρ_T(x, y).   (11)

Suppose v has children u, w, ..., the optimal solution of cost_1^T′(v, D) chooses a leaf node in T(u), and our HST initialization picks a leaf of T(w). If u = w, then HST chooses the optimal one and the claim holds trivially. Thus, we consider u ≠ w. We have the following two observations:

• Since one needs to pick a leaf of T(u) to minimize cost_1^T′(v, D), we have cost_1^T′(v, D) ≥ ∑_{x∈ch(v), x≠u} N_x·2^{h_x}, where ch(v) denotes the set of children of v.

• By our greedy strategy, cost_1^T(v, D) ≤ ∑_{x∈ch(v)} N_x·2^{h_x} ≤ cost_1^T′(v, D) + N_u·2^{h_u}.

As h_u = h_w, leveraging Lemma C.3, with probability 1 − 2/n^c,

2^{h_u}·(N_u − N_w) ≤ 2^{h_u}·(N̂_u − N̂_w) + 2cϵ^{−1}Δ log n ≤ 2cϵ^{−1}Δ log n,

since our algorithm picks the subtree root with the highest (noisy) score. Then we have cost_1^T(v, D) ≤ cost_1^T′(v, D) + N_w·2^{h_w} + 2cϵ^{−1}Δ log n ≤ 2·cost_1^T′(v, D) + 2cϵ^{−1}Δ log n with high probability. Lastly, applying a union bound over the k disjoint subtrees gives the desired result.

C.5 PROOF OF THEOREM 4.3

Proof. The privacy analysis is straightforward using the composition theorem (Theorem C.1). Since the sensitivity of cost(·) is Δ, each swap iteration uses privacy budget ϵ/(2(T+1)), and we spend another ϵ/(2(T+1)) for picking the output. Hence, the total privacy budget of the local search is ϵ/2. Algorithm 4 takes an ϵ/2 DP budget for the initialization, so the total privacy is ϵ. The analysis of the approximation error follows Gupta et al. (2010), where the initial cost is reduced by our private HST method. We need the following two lemmas.

Lemma C.6 (Gupta et al. (2010)). Assume the solution with the optimal utility is unique. For any output o ∈ O of the 2Δϵ-DP exponential mechanism on dataset D, it holds for all t > 0 that

Pr[q(D, o) ≤ max_{o∈O} q(D, o) − (ln|O| + t)/ϵ] ≤ e^{−t},

where |O| is the size of the output set.

Lemma C.7 (Arya et al. (2004)). For any set F ⊆ D with |F| = k, there exists some swap (x, y) such that the local search method satisfies

cost_k(F, D) − cost_k(F − {x} + {y}, D) ≥ (cost_k(F, D) − 5·OPT(D))/k.

From Lemma C.7, we know that when cost_k(F_i, D) > 6·OPT(D), there exists a swap (x, y) s.t. cost_k(F_i − {x} + {y}, D) ≤ (1 − 1/(6k))·cost_k(F_i, D). At each iteration, there are at most n² possible outputs (i.e., possible swaps), so |O| = n². Using Lemma C.6 with t = 2 log n, for each i,

Pr[cost_k(F_{i+1}, D) ≥ cost_k(F_{i+1}^*, D) + 4 log n/ϵ′] ≤ 1/n²,

where F_{i+1}^* is the cost-minimizing swap from F_i. Hence, as long as cost(F_i, D) > 6·OPT(D) + 24k log n/ϵ′, the cost improves by a factor of at least (1 − 1/(6k)). By Theorem 4.2, we have cost_k(F_1, D) ≤ C(log n)(6·OPT(D) + 6kΔ log n/ϵ) for some constant C > 0. Let T = 6Ck log log n. Then

E[cost(F_T, D)] ≤ (6·OPT(D) + 6kϵ^{−1}Δ log n)·C(log n)·(1 − 1/(6k))^{6Ck log log n} ≤ 6·OPT(D) + 6kϵ^{−1}Δ log n ≤ 6·OPT(D) + 24k log n/ϵ′.

Therefore, with probability at least 1 − T/n², there exists an i ≤ T s.t. cost(F_i, D) ≤ 6·OPT(D) + 24k log n/ϵ′. Then, applying Lemma C.6 once more to the final output-picking step, one picks an F_j within additive error 4 ln n/ϵ′ of min{cost(F_j, D), j = 1, 2, ..., T} with probability 1 − 1/n². Consequently, the expected additive error is 24k log n/ϵ′ + 4 log n/ϵ′ = O(ϵ^{−1}k²Δ(log log n) log n), with probability 1 − 1/poly(n).
D EXTENSION OF HST INITIALIZATION TO k-MEANS

Naturally, our HST method can also be applied to the k-means clustering problem. In this section, we extend the HST initialization to k-means and provide a brief analysis parallel to that for k-median. We present the analysis in the non-private case, which can then be easily adapted to the private case. Define the following costs for k-means:

$$\mathrm{cost}_{km}^T(U) = \sum_{y \in U} \min_{x \in C_0} \rho_T(x, y)^2, \quad (12)$$

$$\mathrm{cost}_{km}^{T\prime}(U, C_1) = \min_{|F \cap T(v)| = 1, \, \forall v \in C_1} \; \sum_{y \in U} \min_{x \in F} \rho_T(x, y)^2, \quad (13)$$

$$\mathrm{OPT}_{km}^T(U) = \min_{F \subset U, \, |F| = k} \; \sum_{y \in U} \min_{x \in F} \rho_T(x, y)^2 \equiv \min_{C_1'} \mathrm{cost}_{km}^{T\prime}(U, C_1'). \quad (14)$$

For simplicity, we write $\mathrm{cost}_{km}^{T\prime}(U)$ for $\mathrm{cost}_{km}^{T\prime}(U, C_1)$ when the context is clear. Here, $\mathrm{OPT}_{km}^T$ in (14) is the cost of the globally optimal solution in the 2-HST metric.

Lemma D.1 (Subtree search). $\mathrm{cost}_{km}^{T\prime}(U) \leq 17\,\mathrm{OPT}_{km}^T(U)$.

Proof. The analysis is similar to the proof of Lemma 3.5, so we mainly highlight the differences, reusing the notation of that proof. Consider the clustering with center set $C^* = \{c_1^*, c_2^*, \ldots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree rooted at $v_j'$), and let $S_j'$ be the leaves assigned to $c_j^*$ in the optimal k-means clustering in the tree metric. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than their distance to any center in $C_1$. Let $P_j$ denote the union of the paths between the leaves of $S_j$ and their closest centers in $C_1$, and let $v_j''$ be the node in $P_j$ with the highest level satisfying $T(v_j'') \cap C_1 = \emptyset$. The score of $v_j''$ is $2^{h_{v_j''}} N_{v_j''}$. This means that swapping a center $v_j'$ into $C_1$ can reduce $\mathrm{cost}_{km}^{T\prime}(U)$ by at most $(4 \cdot 2^{h_{v_j''}})^2 N_{v_j''}$. For simplicity, we write $v_j'$ for $v_j''$ in the rest of this proof. By our reasoning, summing all the swaps over $C_1^* \setminus C_1$ gives
$$\mathrm{cost}_{km}^{T\prime}(U) - \mathrm{OPT}_{km}^T(U) \leq \sum_{v_j' \in C_1^* \setminus C_1} N_{v_j'} \cdot (4 \cdot 2^{h_{v_j'}})^2, \qquad \mathrm{OPT}_{km}^T(U) \geq \sum_{v_i \in C_1 \setminus C_1^*} N_{v_i} (2^{h_{v_i}})^2.$$
Also, based on the case discussion in the proof of Lemma 3.5, it holds that $N_{v_j'} 2^{h_{v_j'}} - N_{v_i} 2^{h_{v_i}} \leq 0$. Summing these together, we have $\mathrm{cost}_{km}^{T\prime}(U) \leq 17\,\mathrm{OPT}_{km}^T(U)$.

Next, we show that the greedy leaf search strategy (Algorithm 3) only incurs an extra multiplicative error of 2.

Lemma D.2 (Leaf search). $\mathrm{cost}_{km}^T(U) \leq 2\,\mathrm{cost}_{km}^{T\prime}(U)$.

Proof. Since the subtrees in $C_1$ are disjoint, it suffices to consider one subtree with root $v$. With a slight abuse of notation, let $\mathrm{cost}_1^{T\prime}(v, U)$ denote the optimal k-means cost within the point set $T(v)$ with one center in the 2-HST:
$$\mathrm{cost}_1^{T\prime}(v, U) = \min_{x \in T(v)} \sum_{y \in T(v)} \rho_T(x, y)^2, \quad (15)$$
which is the optimal cost within the subtree. Suppose $v$ has more than one child $u, w, \ldots$; otherwise the optimal center is clear. Suppose the optimal solution of $\mathrm{cost}_1^{T\prime}(v, U)$ chooses a leaf node in $T(u)$, and our HST initialization algorithm picks a leaf of $T(w)$. If $u = w$, then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$. We have the following two observations:
• Since one needs to pick a leaf of $T(u)$ to minimize $\mathrm{cost}_1^{T\prime}(v, U)$, we have $\mathrm{cost}_1^{T\prime}(v, U) \geq \sum_{x \in ch(v), x \neq u} N_x \cdot (2^{h_x})^2$, where $ch(v)$ denotes the children of $v$.
• By our greedy strategy, $\mathrm{cost}_1^T(v, U) \leq \sum_{x \in ch(v), x \neq w} N_x \cdot (2^{h_x})^2 \leq \mathrm{cost}_1^{T\prime}(v, U) + N_u \cdot (2^{h_u})^2$.
Since $h_u = h_w$, we have $2^{h_u} \cdot (N_u - N_w) \leq 0$, because our algorithm picks the subtree roots with the highest scores. Then we have $\mathrm{cost}_1^T(v, U) \leq \mathrm{cost}_1^{T\prime}(v, U) + N_w \cdot (2^{h_w})^2 \leq 2\,\mathrm{cost}_1^{T\prime}(v, U)$. Since the subtrees in $C_1$ are disjoint, the union of the centers for $\mathrm{OPT}_1^T(v, U), v \in C_1$ forms the optimal center set of size $k$. Note that, for any data point $p \in U \setminus C_1$, the tree distance $\rho_T(p, f)$ is the same for every $f$ that is a leaf node of $T(v), v \in C_1$.
That is, the choice of the leaf in $T(v)$ as the center does not affect the k-means cost under the 2-HST metric. Therefore, combining the costs over the $k$ disjoint subtrees completes the proof.

We are now ready to state the error bound for our proposed HST initialization (Algorithm 2) for k-means, which is a direct combination of Lemma D.1 and Lemma D.2.

Theorem D.3 (HST initialization). $\mathrm{cost}_{km}^T(U) \leq 34\,\mathrm{OPT}_{km}^T(U)$.

We have the following result based on Lemma 3.4.

Theorem D.4. In a general metric space, $\mathbb{E}[\mathrm{cost}_{km}(U)] = O(\min\{\log n, \log\triangle\})^2\,\mathrm{OPT}_{km}(U)$.
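To complement Lemma D.2: only the counts of sibling nodes drive the greedy descent, since siblings share the same level, so comparing $N$ alone is equivalent to comparing the scores $N \cdot 2^h$; the identical leaf search therefore serves both the k-median and k-means bounds. Below is a minimal Python sketch of this descent, where the Node structure and its field names are hypothetical, introduced only for illustration.

from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical HST node: level h, point count N, children (empty at a leaf)."""
    h: int
    N: int
    children: list = field(default_factory=list)

def find_leaf(root):
    """Greedy leaf search (Algorithm 3): at each level, descend into the
    child with the largest count. Since siblings share the same level,
    comparing N is equivalent to comparing the scores N * 2^h."""
    v = root
    while v.children:
        v = max(v.children, key=lambda w: w.N)
    return v

# Tiny usage example on a two-level toy tree: the descent ends in the
# heavier subtree, as the leaf-search lemmas assume.
leaf_a, leaf_b, leaf_c = Node(0, 1), Node(0, 1), Node(0, 1)
root = Node(2, 3, [Node(1, 2, [leaf_a, leaf_b]), Node(1, 1, [leaf_c])])
assert find_leaf(root) in (leaf_a, leaf_b)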
1. What is the focus of the paper regarding seeding techniques for local search algorithms?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its academic interest and experimental results?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any minor issues or typos in the paper that the reviewer has identified?
Summary Of The Paper

The paper provides a seeding technique for the local search algorithm for k-median clustering in general metric spaces. Their algorithm is based on a tree embedding of the data. They also provide a version of their algorithm which can be used for differentially private k-median clustering.

Strengths And Weaknesses

Strengths: The paper is overall nicely written and studies important problems.

Weaknesses: Although their algorithm is interesting academically, I don't find either their approximation bounds or their experimental results to be earth-shattering.

Clarity, Quality, Novelty And Reproducibility

Clarity: Overall pretty clear, but I encountered one irritating issue when reading the paper. The authors refer to k-medians++ as if it were a standard name for the technique they use as the baseline. But AFAIK, this naming is very non-standard and I haven't seen such usage elsewhere in the literature. k-means++ gets its name because it is literally an augmented version of an algorithm referred to as the "k-means" algorithm. But the k-medians algorithm is not the local search algorithm; it refers to an algorithm similar to Lloyd's (k-means) that, instead of computing the mean in the alternating minimization step, computes the median of the group. See https://en.wikipedia.org/wiki/K-medians_clustering. Instead, I would refer to your baseline seeding algorithm as $D^1$-sampling, following Wei (2016): Wei, Dennis. "A constant-factor bi-criteria approximation guarantee for k-means++." Advances in Neural Information Processing Systems 29 (2016). Another issue was that it was a bit hard to compare the plots; they were too crowded. I suggest providing plots for the non-DP algorithms and the DP algorithms separately in the appendix.

Quality: The submission is technically sound. All claims are well supported with proofs and detailed experiments.

Novelty: Novelty might be the Achilles' heel of this paper.

Reproducibility: Proofs are provided. The code is not provided, but the algorithms and experiments are detailed enough that it would not be too hard to reproduce the results. Still, I encourage the authors to provide the code as well (or open-source it).

Typos/minor issues:
- Line just before the inequality in the introduction: "a clustering algorithms".
- In Section 2.1, you claim that Arya et al. (2004) showed that cost(F) ≤ 5OPT for the Algorithm 1 you describe. But that is only true when α = 0, i.e., if the final solution is locally optimal. There should be a term dependent on α, something like cost(F) ≤ 5(1 + α)OPT, but I am not entirely sure about the constant (it might be 5(1 + 2α) or similar). I wouldn't bother too much with this; you can just add a note. If you are enthusiastic enough and want to compute the exact constant, I would recommend Williamson-Shmoys, Chapter 9, over Arya et al.
- Section 5.1: "Discrete Euclidean space. Following previous work .," has a stray ".,".
ICLR
Title k-Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy Abstract In clustering algorithms, the choice of initial centers is crucial for the quality of the learned clusters. We propose a new initialization scheme for the k-median problem in the general metric space (e.g., discrete space induced by graphs), based on the construction of metric embedding tree structure of the data. From the tree, we propose a novel and efficient search algorithm, for good initial centers that can be used subsequently for the local search algorithm. The so-called HST initialization method can produce initial centers achieving lower errors than those from another popular initialization method, k-median++, with comparable efficiency. Our HST initialization can also be easily extended to the setting of differential privacy (DP) to generate private initial centers. We show that the error of applying DP local search followed by our private HST initialization improves previous results on the approximation error, and approaches the lower bound within a small factor. Experiments demonstrate the effectiveness of our proposed methods. 1 INTRODUCTION Clustering is an important problem in unsupervised learning that has been widely studied in statistics, data mining, network analysis, etc. (Punj and Stewart, 1983; Dhillon and Modha, 2001; Banerjee et al., 2005; Berkhin, 2006; Abbasi and Younis, 2007). The goal of clustering is to partition a set of data points into clusters such that items in the same cluster are expected to be similar, while items in different clusters should be different. This is concretely measured by the sum of distances (or squared distances) between each point to its nearest cluster center. One conventional notion to evaluate a clustering algorithms is: with high probability, cost(C,D) ≤ γOPTk(D) + ξ, where C is the centers output by the algorithm and cost(C,D) is a cost function defined for C on dataset D. OPTk(D) is the cost of optimal (oracle) clustering solution on D. When everything is clear from context, we will use OPT for short. Here, γ is called multiplicative error and ξ is called additive error. Alternatively, we may also use the notion of expected cost. Two popularly studied clustering problems are 1) the k-median problem, and 2) the k-means problem. The origin of k-median dates back to the 1970’s (e.g., Kaufman et al. (1977)), where one tries to find the best location of facilities that minimizes the cost measured by the distance between clients and facilities. Formally, given a set of points D and a distance measure, the goal is to find k center points minimizing the sum of absolute distances of each sample point to its nearest center. In k-means, the objective is to minimize the sum of squared distances instead. Particularly, k-median is usually the one used for clustering on graph/network data. In general, there are two popular frameworks for clustering. One heuristic is the Lloyd’s algorithm (Lloyd, 1982), which is built upon an iterative distortion minimization approach. In most cases, this method can only be applied to numerical data, typically in the (continuous) Euclidean space. Clustering in general metric spaces (discrete spaces) is also important and useful when dealing with, for example, the graph data, where Lloyd’s method is no longer applicable. A more broadly applicable approach, the local search method (Kanungo et al., 2002; Arya et al., 2004), has also been widely studied. 
It iteratively finds the optimal swap between the center set and non-center data points to keep lowering the cost. Local search can achieve a constant approximation ratio (γ = O(1)) to the optimal solution for k-median (Arya et al., 2004). Initialization of cluster centers. It is well-known that the performance of clustering can be highly sensitive to initialization. If clustering starts with good initial centers (i.e., with small approximation error), the algorithm may use fewer iterations to find a better solution. The k-median++ algorithm (Arthur and Vassilvitskii, 2007) iteratively selects k data points as initial centers, favoring distant points in a probabilistic way. Intuitively, the initial centers tend to be well spread over the data points (i.e., over different clusters). The produced initial center is proved to have O(log k) multiplicative error. Follow-up works of k-means++ further improved its efficiency and scalability, e.g., Bahmani et al. (2012); Bachem et al. (2016); Lattanzi and Sohler (2019). In this work, we propose a new initialization framework, called HST initialization, based on metric embedding techniques. Our method is built upon a novel search algorithm on metric embedding trees, with comparable approximation error and running time as k-median++. Moreover, importantly, our initialization scheme can be conveniently combined with the notion of differential privacy (DP). Clustering with Differential Privacy. The concept of differential privacy (Dwork, 2006; McSherry and Talwar, 2007) has been popular to rigorously define and resolve the problem of keeping useful information for model learning, while protecting privacy for each individual. Private k-means problem has been widely studied, e.g., Feldman et al. (2009); Nock et al. (2016); Feldman et al. (2017), mostly in the continuous Euclidean space. The paper (Balcan et al., 2017) considered identifying a good candidate set (in a private manner) of centers before applying private local search, which yields O(log3 n) multiplicative error and O((k2+d) log5 n) additive error. Later on, the Euclidean k-means errors are further improved to γ = O(1) and ξ = O(k1.01 · d0.51 + k1.5) by Stemmer and Kaplan (2018), with more advanced candidate set selection. Huang and Liu (2018) gave an optimal algorithm in terms of minimizing Wasserstein distance under some data separability condition. For private k-median clustering, Feldman et al. (2009) considered the problem in high dimensional Euclidean space. However, it is rather difficult to extend their analysis to more general metrics in discrete spaces (e.g., on graphs). The strategy of (Balcan et al., 2017) to form a candidate center set could as well be adopted to k-median, which leads to O(log3/2 n) multiplicative error and O((k2 + d) log3 n) additive error in high dimensional Euclidean space. In discrete space, Gupta et al. (2010) proposed a private method for the classical local search heuristic, which applies to both k-medians and k-means. To cast privacy on each swapping step, the authors applied the exponential mechanism of (McSherry and Talwar, 2007). Their method produced an ϵ-differentially private solution with cost 6OPT +O(△k2 log2 n/ϵ), where△ is the diameter of the point set. In this work, we will show that our HST initialization can improve DP local search for k-median (Gupta et al., 2010) in terms of both approximation error and efficiency. 
The main contributions of this work include : • We introduce the Hierarchically Well-Separated Tree (HST) to the k-median clustering problem for initialization. We design an efficient sampling strategy to select the initial center set from the tree, with an approximation factor O(logmin{k,△}) in the non-private setting, which is O(logmin{k, d}) when△ = O(d) (e.g., bounded data). This improves the O(log k) error of k-means/median++ in e.g., the lower dimensional Euclidean space. • We propose a differentially private version of HST initialization under the setting of Gupta et al. (2010) in discrete metric space. The so-called DP-HST algorithm finds initial centers with O(log n) multiplicative error and O(ϵ−1△k2 log2 n) additive error. Moreover, running DP-local search starting from this initialization gives O(1) multiplicative error and O(ϵ−1△k2(log log n) log n) additive error, which improves previous results towards the well-known lower bound O(ϵ−1△k log(n/k)) on the additive error of DP k-median (Gupta et al., 2010) within a small O(k log log n) factor. This is the first clustering initialization method with differential privacy guarantee and improved error rate in general metric space. • We conduct experiments on simulated and real-world datasets to demonstrate the effectiveness of our methods. In both non-private and private settings, our proposed HST-based approach achieves smaller cost at initialization than k-median++, which may also lead to improvements in the final clustering quality. 2 BACKGROUND AND SETUP Definition 2.1 (Differential Privacy (DP) (Dwork, 2006)). If for any two adjacent data sets D and D′ with symmetric difference of size one, for any O ⊂ Range(A), an algorithmA satisfies Pr[A(D) ∈ O] ≤ eϵPr[A(D′) ∈ O], then algorithmA is said to be ϵ-differentially private. Intuitively, DP requires that after removing any observation, the output of D′ should not be too different from that of the original dataset D. Smaller ϵ indicates stronger privacy, which, however, usually sacrifices utility. Thus, one central topic in DP is to balance the utility-privacy trade-off. To achieve DP, one approach is to add noise to the algorithm output. The Laplace mechanism adds Laplace(η(f)/ϵ) noise to the output, which is known to achieve ϵ-DP. The exponential mechanism is also a tool for many DP algorithms. Let O be the set of feasible outputs. The utility function q : D × O → R is what we aim to maximize. The exponential mechanism outputs an element o ∈ O with probability P [A(D) = o] ∝ exp( ϵq(D,o)2η(q) ), where D is the input dataset and η(f) = sup|D−D′|=1 |f(D)− f(D′)| is the sensitivity of f . Both mechanisms will be used in our paper. 2.1 k-MEDIAN CLUSTERING Following Arya et al. (2004); Gupta et al. (2010), the problem of k-median clustering (DP and non-DP) studied in our paper is stated as below. Definition 2.2 (k-median). Given a universe point set U and a metric ρ : U × U → R, the goal of k-median to pick F ⊆ U with |F | = k to minimize k-median: costk(F,U) = ∑ v∈U min f∈F ρ(v, f). (1) Let D ⊆ U be a set of demand points. The goal of DP k-median is to minimize DP k-median: costk(F,D) = ∑ v∈D min f∈F ρ(v, f). (2) At the same time, the output F is required to be ϵ-differentially private to D. We may drop “F ” and use “costk(U)” or “costk(D)” if there is no risk of ambiguity. To better understand the motivation of the DP clustering, we provide a real-world example as follows. Example 2.3. Consider U to be the universe of all users in a social network (e.g., Twitter). 
Each user (account) is public, but also has some private information that can only be seen by the data holder. Let D be users grouped by some feature that might be set as private. Suppose a third party plans to collaborate with the most influential users in D for e.g., commercial purposes, thus requesting the cluster centers of D. In this case, we need a strategy to safely release the centers, while protecting the individuals in D from being identified (since the membership of D is private). The local search procedure for k-median proposed by Arya et al. (2004) is summarized in Algorithm 1. First we randomly pick k points in U as the initial centers. In each iteration, we search over all x ∈ F and y ∈ U , and do the swap F ← F − {x} + {y} such that F − {x} + {y} improves the cost of F the most (if more than factor (1− α/k) where α > 0 is a hyper-parameter). We repeat the procedure until no such swap exists. Arya et al. (2004) showed that the output centers F achieves 5 approximation error to the optimal solution, i.e., cost(F ) ≤ 5OPT . Algorithm 1: Local search for k-median clustering (Arya et al., 2004) Input: Data points U , parameter k, constant α Initialization: Randomly select k points from U as initial center set F while ∃ x ∈ F, y ∈ U s.t. cost(F − {x}+ {y}) ≤ (1− α/k)cost(F ) do Select (x, y) ∈ Fi × (D \ Fi) with argminx,y{cost(F − {x}+ {y})} Swap operation: F ← F − {x}+ {y} Output: Center set F 2.2 k-MEDIAN++ INITIALIZATION Although local search is able to find a solution with constant error, it takes O(n2) per iteration (Resende and Werneck, 2007) in expected O(k log n) steps (in total O(kn2 log n)) when started from random center set, which would be slow for large datasets. Indeed, we do not need such complicated algorithm to reduce the cost at the beginning, i.e., when the cost is large. To accelerate the process, efficient initialization methods find a “roughly” good center set as the starting point for local search. In the paper, we compare our new initialization scheme mainly with a popular (and perhaps most well-known) initialization method, the k-median++ (Arthur and Vassilvitskii, 2007) (see Algorithm 6 in the Appendix). Arthur and Vassilvitskii (2007) showed that the output centers C by k-median++ achieves O(log k) approximation error with time complexity O(nk). Starting from the initialization, we only need to run O(k log log k) steps of the computationally heavy local search to reach a constant error solution. Thus, initialization may greatly improve the clustering efficiency. 3 INITIALIZATION VIA HIERARCHICALLY WELL-SEPARATED TREE (HST) In this section, we propose our novel initialization scheme for k-median clustering, and provide our analysis in the non-private case solving (1). The idea is based on the metric embedding theory. We will start with an introduction to the main tool used in our approach. 3.1 HIERARCHICALLY WELL-SEPARATED TREE (HST) In this paper, for an L-level tree, we will count levels in descending order down the tree. We use hv to denote the level of v, and ni be the number of nodes at level i. The Hierarchically Well-Separated Tree (HST) is based on the padded decompositions of a general metric space in a hierarchical manner (Fakcharoenphol et al., 2004). Let (U, ρ) be a metric space with |U | = n, and we will refer to this metric space without specific clarification. 
A β–padded decomposition of U is a probabilistic distribution of partitions of U such that the diameter of each cluster Ui ∈ U is at most β, i.e., ρ(u, v) ≤ β, ∀u, v ∈ Ui, i = 1, ..., k. The formal definition of HST is given as below. Definition 3.1. Assume minu,v∈U ρ(u, v) = 1 and denote △ = maxu,v∈U ρ(u, v). An αHierarchically Well-Separated Tree (α-HST) with depth L is an edge-weighted rooted tree T , such that an edge between any pair of two nodes of level i− 1 and level i has length at most△/αL−i. In this paper, we consider α = 2-HST for simplicity, as α only affects the constants in our theoretical analysis. Figure 1 is an example L = 3-level 2-HST (right panel), along with its underlying padded decompositions (left panel). A 2-HST can be built as follows: we first find a padded decomposition PL = {PL,1, ..., PL,nL} of U with parameter β = △/2. The center of each partition in PL,j serves as a root node in level L. Then, we re-do a padded decomposition for each partition PL,j , to find sub-partitions with diameter β = △/4, and set the corresponding centers as the nodes in level L− 1, and so on. Each partition at level i is obtained with β = △/2L−i. This process proceeds until a node has a single point (leaf), or a pre-specified tree depth is reached. More details can be found in Algorithm 7 in the Appendix A. Blelloch et al. (2017) proposed an efficient HST construction in O(m log n) time, where n and m are the number of nodes and edges in a graph, respectively. The first step of our method is to embed the data points into a HST (see Algorithm 2). Next, we will describe our new strategy to search for the initial centers on the tree (w.r.t. the tree metric). Before moving on, it is worth mentioning that, there are polynomial time algorithms for computing an exact k-median solution in the tree metric (Tamir (1996); Shah (2003)). However, the dynamic programming algorithms have high complexity (e.g., O(kn2)), making them unsuitable for the purpose of fast initialization. Moreover, it is unknown how to apply them effectively to the private Algorithm 2: NDP-HST initialization Input: U ,△, k Initialization: L = log△, C0 = ∅, C1 = ∅ Call Algorithm 7 to build a level-L 2-HST T using U for each node v in T do Nv ← |U ∩ T (v)| score(v)← Nv · 2hv while |C1| < k do Add top (k − |C1|) nodes with highest score to C1 for each v ∈ C1 do C1 = C1 \ {v}, if ∃ v′ ∈ C1 such that v′ is a descendant of v C0 = FIND-LEAF(T,C1) Output: Initial center set C0 ⊆ U Algorithm 3: FIND-LEAF (T,C1) Input: T , C1 Initialization: C0 = ∅ for each node v in C1 do while v is not a leaf node do v ← argw max{Nw, w ∈ ch(v)}, where ch(v) denotes the children nodes of v Add v to C0 Output: Initial center set C0 ⊆ U case. As will be shown, our new algorithm 1) is very efficient, 2) gives O(1) approximation error in the tree metric, and 3) can be effectively extended to DP easily. 3.2 HST INITIALIZATION ALGORITHM Let L = log∆ and suppose T is a level-L 2-HST in (U, ρ), where we assume L is an integer. For a node v at level i, we use T (v) to denote the subtree rooted at v. Let Nv = |T (v)| be the number of data points in T (v). The search strategy for the initial centers, NDP-HST initialization (“NDP” stands for “Non-Differentially Private”), is presented in Algorithm 2 with two phases. Subtree search. The first step is to identify the subtrees that contain the k centers. To begin with, k initial centers C1 are picked from T who have the largest score(v) = N(v) · 2hv . 
This is intuitive, since to get a good clustering, we typically want the ball surrounding each center to include more data points. Next, we do a screening over C1: if there is any ancestor-descendant pair of nodes, we remove the ancestor from C1. If the current size of C1 is smaller than k, we repeat the process until k centers are chosen (we do not re-select nodes in C1 and their ancestors). This way, C1 contains k root nodes of k disjoint subtrees. Leaf search. After we find C1 the set of k subtrees, the next step is to find the center in each subtree using Algorithm 3 (“FIND-LEAF”). We employ a greedy search strategy, by finding the child node with largest score level by level, until a leaf is found. This approach is intuitive since the diameter of the partition ball exponentially decays with the level. Therefore, we are in a sense focusing more and more on the region with higher density (i.e., with more data points). The complexity of our search algorithm is given as follows. Proposition 3.2 (Complexity). Algorithm 2 takes O(dn log n) time in the Euclidean space. Remark 3.3. The complexity of HST initialization is in general comparable to O(dnk) of kmedian++. Our algorithm would be faster if k > log n, i.e., the number of centers is large. 3.3 APPROXIMATION ERROR OF HST INITIALIZATION Firstly, we show that the initial center set produced by NDP-HST is already a good approximation to the optimal k-median solution. Let ρT (x, y) = dT (x, y) denote the “2-HST metric” between x and y in the 2-HST T , where dT (x, y) is the tree distance between nodes x and y in T . By Definition 3.1 and since△ = 2L, in the analysis we assume equivalently that the edge weight of the i-th level 2i−1. The crucial step of our analysis is to examine the approximation error in terms of the 2-HST metric, after which the error can be adapted to the general metrics by the following Lemma (Bartal, 1996). Lemma 3.4. In a metric space (U, ρ) with |U | = n and diameter △, it holds that E[ρT (x, y)] = O(min{log n, log△})ρ(x, y). In the Euclidean space Rd, E[ρT (x, y)] = O(d)ρ(x, y). Recall C0, C1 from Algorithm 2. We define costTk (U) = ∑ y∈U min x∈C0 ρT (x, y), (3) costTk ′(U,C1) = min |F∩T (v)|=1, ∀v∈C1 ∑ y∈U min x∈F ρT (x, y), (4) OPTTk (U) = min F⊂U,|F |=k ∑ y∈U min x∈F ρT (x, y) ≡ min C′1 costTk ′(U,C ′1). (5) For simplicity, we will use costTk ′(U) to denote costTk ′(U,C1). Here, OPTTk (5) is the cost of the global optimal solution with 2-HST metric. The last equivalence in (5) holds because the optimal centers set can always located in k disjoint subtrees, as each leaf only contain one point. (3) is the k-median cost with 2-HST metric of the output C0 of Algorithm 2. (4) is the oracle cost after the subtrees are chosen. That is, it represents the optimal cost to pick one center from each subtree in C1. Firstly, we bound the approximation error of subtree search and leaf search, respectively. Lemma 3.5 (Subtree search). costTk ′(U) ≤ 5OPTTk (U). Lemma 3.6 (Leaf search). costTk (U) ≤ 2costTk ′(U). Combining Lemma 3.5 and Lemma 3.6, we obtain Theorem 3.7 (2-HST error). Running Algorithm 2, we have costTk (U) ≤ 10OPTTk (U). Thus, HST-initialization produces an O(1) approximation to OPT in the 2-HST metric. Define costk(U) as (1) for our HST centers, and the optimal cost w.r.t. ρ as OPTk(U) = min |F |=k ∑ y∈U min x∈F ρ(x, y). (6) We have the following result based on Lemma 3.4. Theorem 3.8. In general metric space, the expected k-median cost of Algorithm 2 is E[costk(U)] = O(min{log n, log△})OPTk(U). Remark 3.9. 
In the Euclidean space, Makarychev et al. (2019) proved O(log k) random projections suffice for k-median to achieve O(1) error. Thus, if△ = O(d) (e.g., bounded data), by Lemma 3.4, HST initialization is able to achieve O(log(min{d, k})) error, which is better than O(log k) of k-median++ when d is small. NDP-HST Local Search. We are interested in the approximation quality of standard local search (Algorithm 1), when initialized by our NDP-HST. Theorem 3.10. NDP-HST local search achieves O(1) approximation error in expected O(k log logmin{n,△}) number of iterations for input in general metric space. Before ending this section, we remark that the initial centers found by NDP-HST can be used for k-means clustering analogously. For general metrics, E[costkm(U)] = O(min{log n, log△})2OPTkm(U) where costkm(U) is the optimal k-means cost. See Appendix D for the detailed (and similar) analysis. 4 HST INITIALIZATION WITH DIFFERENTIAL PRIVACY In this section, we consider initialization method with differential privacy (DP). Recall (2) that U is the universe of data points, and D ⊂ U is a demand set that needs to be clustered with privacy. Algorithm 4: DP-HST initialization Input: U,D,△, k, ϵ Build a level-L 2-HST T based on input U for each node v in T do Nv ← |D ∩ T (v)| N̂v ← Nv + Lap(2(L−hv)/ϵ) score(v)← N̂(v) · 2hv Based on N̂v , apply the same strategy as Algorithm 2: find C1; C0 = FIND-LEAF(C1) Output: Private initial center set C0 ⊆ U Algorithm 5: DP-HST local search Input: U , demand points D ⊆ U , parameter k, ϵ, T Initialization: F1 the private initial centers generated by Algorithm 4 with privacy ϵ/2 Set parameter ϵ′ = ϵ4△(T+1) for i = 1 to T do Select (x, y) ∈ Fi× (V \Fi) with prob. proportional to exp(−ϵ′× (cost(Fi−{x}+ {y})) Let Fi+1 ← Fi − {x}+ {y} Select j from {1, 2, ..., T + 1} with probability proportional to exp(−ϵ′ × cost(Fj)) Output: F = Fj the private center set Since U is public, simply running initialization algorithms on U would preserve the privacy of D. However, 1) this might be too expensive; 2) in many cases one would probably want to incorporate some information about D in the initialization, since D could be a very imbalanced subset of U . For example, D may only contain data points from one cluster, out of tens of clusters in U . In this case, initialization on U is likely to pick initial centers in multiple clusters, which would not be helpful for clustering on D. Next, we show how our proposed HST initialization can be easily combined with differential privacy that at the same time contains information about the demand set D, leading to improved approximation error (Theorem 4.3). Again, suppose T is an L = log△-level 2-HST of universe U in a general metric space. Denote Nv = |T (v) ∩D| for a node point v. Our private HST initialization (DP-HST) is similar to the non-private Algorithm 2. To gain privacy, we perturb Nv by adding i.i.d. Laplace noise: N̂v = Nv + Lap(2 (L−hv)/ϵ), where Lap(2(L−hv)/ϵ) is a Laplace random number with rate 2(L−hv)/ϵ. We will use the perturbed N̂v for node sampling instead of the true value Nv, as described in Algorithm 4. The DP guarantee of this initialization scheme is straightforward by the composition theory (Dwork, 2006). Theorem 4.1. Algorithm 4 is ϵ-differentially private. Proof. For each level i, the subtrees T (v, i) are disjoint to each other. The privacy used in i-th level is ϵ/2(L−i), and the total privacy is ∑ i ϵ/2 (L−i) < ϵ. We now consider the approximation error. 
As the structure of the analysis is similar to the non-DP case, we present the main result here and defer the detailed proofs to Appendix C. Theorem 4.2. Algorithm 4 finds initial centers such that E[costk(D)] = O(log n)(OPTk(D) + kϵ −1△ log n). DP-HST Local Search. Similarly, we can use private HST initialization to improve the performance of private k-median local search, which is presented in Algorithm 5. After initialization, the DP local search procedure follows Gupta et al. (2010) using the exponential mechanism. Theorem 4.3. Algorithm 5 achieves ϵ-differential privacy. With probability (1− 1poly(n) ), the output centers admit costk(D) ≤ 6OPTk(D) +O(ϵ−1k2△(log log n) log n) in T = O(k log log n) iterations. The DP local search with random initialization (Gupta et al., 2010) has 6 multiplicative error and O(ϵ−1△k2 log2 n) additive error. Our result improves the log n term to log log n in the additive error. Meanwhile, the number of iterations needed is improved from T = O(k log n) to O(k log log n) (see Appendix B for an empirical justification). Notably, it has been shown in Gupta et al. (2010) that for k-median problem, the lower bounds on the multiplicative and additive error of any ϵ-DP algorithm are O(1) and O(ϵ−1△k log(n/k)), respectively. Our result matches the lower bound on the multiplicative error, and the additive error is only worse than the bound by a factor of O(k log log n) which would be small in many cases. To our knowledge, Theorem 4.3 is the first result in literature to improve the error of DP local search in general metric space. 5 EXPERIMENTS 5.1 DATASETS AND ALGORITHMS Discrete Euclidean space. Following previous work ., we test k-median clustering on the MNIST hand-written digit dataset (LeCun et al., 1998) with 10 natural clusters (digit 0 to 9). We set U as 10000 randomly chosen data points. We choose the demand set D using two strategies: 1) “balance”, where we randomly choose 500 samples from U ; 2) “imbalance”, where D contains 500 random samples from U only from digit “0” and “8” (two clusters). We note that, the imbalanced D is a very practical setting in real-world scenarios, where data are typically not uniformly distributed. On this dataset, we test clustering with both l1 and l2 distance as the underlying metric. Metric space induced by graph. Random graphs have been widely considered in testing k-median methods (Balcan et al., 2013; Todo et al., 2019). The construction of graphs follows a similar approach as the synthetic pmedinfo graphs provided by the popular OR-Library (Beasley, 1990). The metric ρ for this experiment is the shortest (weighted) path distance. To generate a size n graph, we first randomly split the nodes into 10 clusters. Within each cluster, each pair of nodes is connected with probability 0.2 and weight drawn from standard uniform distribution. For each pair of clusters, we randomly connect some nodes from each cluster, with weights following uniform [0.5, r]. A larger r makes the graph more separable, i.e., clusters are farther from each other (see Appendix B for example graphs). We present two cases: r = 1 and r = 100. For this task, U has 3000 nodes, and the private set D (500 nodes) is chosen using similar “balanced” and “imbalanced” scheme as described above. In the imbalanced case, we choose D randomly from only two clusters. Algorithms. 
We compare the following clustering algorithms in both non-DP and DP setting: (1) NDP-rand: Local search with random initialization; (2) NDP-kmedian++: Local search with k-median++ initialization (Algorithm 6); (3) NDP-HST: Local search with NDP-HST initialization (Algorithm 2), as described in Section 3; (4) DP-rand: Standard DP local search algorithm (Gupta et al., 2010), which is Algorithm 5 with initial centers randomly chosen from U ; (5) DP-kmedian++: DP local search with k-median++ initialization run on U ; (6) DP-HST: DP local search with HST-initialization (Algorithm 5). For non-DP tasks, we set L = 6. For DP clustering, we use L = 8. For non-DP methods, we set α = 10−3 in Algorithm 1 and the maximum number of iterations as 20. To examine the quality of initialization as well as the final centers, We report both the cost at initialization and the cost of the final output. For DP methods, we run the algorithms for T = 20 steps and report the results with ϵ = 1. We test k ∈ {2, 5, 10, 15, 20}. The average cost over T iterations is reported for more robustness. All results are averaged over 10 independent repetitions. 5.2 RESULTS The results on MNIST dataset are given in Figure 2. The comparisons are similar for both l1 and l2: • From the left column, the initial centers found by HST has lower cost than k-median++ and random initialization, for both non-DP and DP setting, and for both balanced and imbalanced demand set D. This confirms that the proposed HST initialization is more powerful than k-median++ in finding good initial centers. • From the right column, we also observe lower final cost of HST followed by local search in DP clustering. In the non-DP case, the final cost curves overlap, which means that despite HST offers better initial centers, local search can always find a good solution eventually. • The advantage of DP-HST, in terms of both the initial and the final cost, is more significant when D is an imbalanced subset of U . As mentioned before, this is because our DP-HST initialization approach also privately incorporates the information of D. The results on graphs are reported in Figure 3, which give similar conclusions. In all cases, our proposed HST scheme finds better initial centers with smaller cost than k-median++. Moreover, HST again considerably outperforms k-median++ in the private and imbalanced D setting, for both r = 100 (highly separable) and r = 1 (less separable). The advantages of HST over k-median++ are especially significant in the harder tasks when r = 1, i.e., the clusters are nearly mixed up. 6 CONCLUSION In this paper, we propose a new initialization framework for the k-median problem in general metric space. Our approach is called HST initialization, which leverages tools from metric embedding theory. Our novel tree search approach has comparable efficiency and approximation error to the popular k-median++ initialization. Moreover, we propose differentially private (DP) HST initialization algorithm, which adapts to the private demand point set, leading to better clustering performance. When combined with subsequent DP local search heuristic, our algorithm is able to improve the additive error of DP local search, which is close to the theoretical lower bound within a small factor. Experiments with Euclidean metrics and graph metrics verify the effectiveness of our methods, which improve the cost of both the initial centers and the final k-median output. 
A POSTPONED ALGORITHM A.1 k-MEDIAN++ In the paper, we compared our HST initialization mainly with another (perhaps most well-known) initialization algorithm for clustering, the k-median++ (Arthur and Vassilvitskii, 2007). For reference, we present the concrete procedures in Algorithm 6. Here, the function D(u,C) is the shortest distance from a data point u to the closest (center) point in set C. Arthur and Vassilvitskii (2007) showed that the output centers C by k-median++ achieves O(log k) approximation error, in O(dnk) time. Algorithm 6: k-median++ initialization (Arthur and Vassilvitskii, 2007) Input: Data points U , number of centers k Randomly pick a point c1 ∈ U and set F = {c1} for i = 2 to k do Select ci = u ∈ U with probability ρ(u,F )∑ u′∈U ρ(u ′,F ) F = F ∪ {ci} Output: k-median++ initial center set F A.2 HST CONSTRUCTION As presented in Algorithm 7, the construction starts by applying a permutation π on U , such that in following steps the points are picked in a random sequence. We first find a padded decomposition PL = {PL,1, ..., PL,nL} of U with parameter β = △/2. The center of each partition in PL,j serves as a root node in level L. Then, we re-do a padded decomposition for each partition PL,j , to find sub-partitions with diameter β = △/4, and set the corresponding centers as the nodes in level L− 1, and so on. Each partition at level i is obtained with β = △/2L−i. This process proceeds until a node has a single point, or a pre-specified tree depth is reached. In Figure 1, we provide an example of L = 3-level 2-HST (left panel), along with its underlying padded decompositions (right panel). Algorithm 7: Build 2-HST(U,L) Input: Data points U with diameter△, L Randomly pick a point in U as the root node of T Let r = △/2 Apply a permutation π on U // so points will be chosen in a random sequence for each v ∈ U do Set Cv = [v] for each u ∈ U do Add u ∈ U to Cv if d(v, u) ≤ r and u /∈ ⋃ v′ ̸=v Cv′ Set the non-empty clusters Cv as the children nodes of T for each non-empty cluster Cv do Run 2-HST(Cv, L− 1) to extend the tree T ; stop until L levels or reaching a leaf node Output: 2-HST T B MORE EXPERIMENTS B.1 EXAMPLES OF GRAPH DATA In Figure 4, we plot two example graphs (subgraphs of 50 nodes) with r = 100 and r = 1. When r = 100, the graph is highly separable (i.e., clusters are far from each other). When r = 1, the clusters are harder to be distinguished from each other. B.2 RUNNING TIME COMPARISON WITH k-MEDIAN++ In Proposition 3.2, we show that our HST initialization algorithm admits O(dn log n) complexity when considering the Euclidean space. With a smart implementation of Algorithm 6 where each data point tracks its distance to the current closest candidate center in C, k-median++ has O(dnk) running time. Therefore, the running time of our algorithm is in general comparable to k-median++. Our method would run faster if k = Ω(log n). In Figure 5, we plot the empirical running time of HST initialization against k-median++, on MNIST dataset with l2 distance (similar comparison holds for l1). From the left subfigure, we see that k-median++ becomes slower with increasing k, and our method is more efficient when k > 20. In the right panel, we observe that the running time of both methods increases with larger sample size n. Our HST algorithm has a slightly faster increasing rate, which is predicted by the complexity comparison (n log n v.s. n). However, this difference in log n factor would not be too significant unless the sample size is extremely large. 
Overall, our numerical results suggest that in general, the proposed HST initialization would have similar efficiency as k-median++ in common practical scenarios. B.3 IMPROVED ITERATION COST OF DP-HST In Theorem 4.3, we show that under differential privacy constraints, the proposed DP-HST (Algorithm 5) improves both the approximation error and the number of iterations required to find a good solution of classical DP local search (Gupta et al., 2010). In this section, we provide some numerical results to justify the theory. First, we need to properly measure the iteration cost of DP local search. This is because, unlike the non-private clustering, the k-median cost after each iteration in DP local search is not decreasing monotonically, due to the probabilistic exponential mechanism. To this end, for the cost sequence with length T = 20, we compute its moving average sequence with window size 5. Attaining the minimal value of the moving average indicates that the algorithm has found a “local optimum”, i.e., it has reached a “neighborhood” of solutions with small clustering cost. Thus, we use the number of iterations to reach such local optimum as the measure of iteration cost. The results are provided in Figure 6. We see that on all the tasks (MNIST with l1 and l2 distance, and graph dataset with r = 1 and r = 100), DP-HST has significantly smaller iterations cost. In Figure 7, we further report the k-median cost of the best solution in T iterations found by each DP algorithm. We see that DP-HST again provide the smallest cost. This additional set of experiments again validates the claims of Theorem 4.3, that DP-HST is able to found better initial centers in fewer iterations. C PROOFS The following composition result of differential privacy will be used in our proof. Theorem C.1 (Composition Theorem (Dwork, 2006)). If Algorithms A1,A2, ...,Am are ϵ1, ϵ2, ..., ϵm differentially private respectively, then the union (A1(D),A2(D), ...,Am(D)) is∑m i=1 ϵi-DP. C.1 PROOF OF LEMMA 3.5 Proof. Consider the intermediate output of Algorithm 2, C1 = {v1, v2, ..., vk}, which is the set of roots of the minimal subtrees each containing exactly one output center C0. Suppose one of the optimal “root set” that minimizes (4) is C∗1 = {v′1, v′2, ..., v′k}. If C1 = C∗1 , the proof is done. Thus, we prove the case for C1 ̸= C∗1 . Note that T (v), v ∈ C1 are disjoint subtrees. We have the following reasoning. • Case 1: for some i, j′, vi is a descendant node of v′j . Since the optimal center point f ∗ is a leaf node by the definition of (4), we know that there must exist one child node of v′j that expands a subtree which contains f∗. Therefore, we can always replace v′j by one of its child nodes. Hence, we can assume that vi is not a descendant of v′j . Note that, we have score(v′j) ≤ score(vi) if v′j /∈ C∗1 ∩C1. Algorithm 2 sorts all the nodes based on cost value, and it would have more priority to pick v′j than vi if score(v ′ j) > score(vi) and vi is not a child node of v′j . • Case 2: for some i, j′, v′j is a descendant of vi. In this case, optimal center point f ∗, which is a leaf of T (vi), must also be a leaf node of T (v′j). We can simply replace C1 with the swap C1 \ {vi}+ {v′j} which does not change costTk ′(U). Hence, we can assume that v′j is not a descendant of vi. • Case 3: Otherwise. By the construction of C1, we know that score(v′j) ≤ min{score(vi), i = 1, ..., k} when v′j ∈ C∗1 \ C1. Consider the swap between C1 and C∗1 . 
By the definition of tree distance, we have OPT T k (U) ≥ ∑ vi∈C1\C∗1 Nvi2 hvi , since {T (vi), vi ∈ C1 \ C∗1} does not contain any center of the optimal solution determined by C∗1 (which is also the optimal “root set” for OPT T k (U)). Thus, we only need to consider Case 3. Let us consider the optimal clustering with center set be C∗ = {c∗1, c∗2, ..., c∗k} (each center c∗j is a leaf of subtree whose root be c′j), and S′j be the leaves assigned to c∗j . Let Sj denote the set of leaves in S ′ j whose distance to c ∗ j is strictly smaller than its distance to any centers in C1. Let Pj denote the union of paths between leaves of Sj to its closest center in C1. Let v′′j be the nodes in Pj with highest level satisfying T (v ′′ j ) ∩ C1 = ∅. The score of v′′j is 2 hv′′ j N(v′′j ). That means the swap with a center v ′ j into C1 can only reduce 4 · 2 hv′′ j N(v′′j ) to costTk ′(U) (the tree distance between any leaf in Sj and its closest center in C1 is at most 4 · 2 hv′′ j ). We just use v′j to represent v ′′ j for later part of this proof for simplicity. By our reasoning, summing all the swaps over C∗1 \ C1 gives costTk ′(U)−OPTTk (U) ≤ 4 ∑ v′j∈C∗1 \C1 Nv′j2 hv′ j , OPTTk (U) ≥ ∑ vi∈C1\C∗1 Nvi2 hvi . Also, based on our discussion on Case 1, it holds that Nv′j2 hv′ j −Nvi2hvi ≤ 0. Summing them together, we have costTk ′(U) ≤ 5OPTTk (U). C.2 PROOF OF LEMMA 3.6 Proof. Since the subtrees in C1 are disjoint, it suffices to consider one subtree with root v. With a little abuse of notation, let costT1 ′(v, U) denote the optimal k-median cost within the point set T (v) with one center in 2-HST: costT1 ′(v, U) = min x∈T (v) ∑ y∈T (v) ρT (x, y), (7) which is the optimal cost within the subtree. Suppose v has more than one children u,w, ..., otherwise the optimal center is clear. Suppose the optimal solution of costT1 ′(v, U) chooses a leaf node in T (u), and our HST initialization algorithm picks a leaf of T (w). If u = w, then HST chooses the optimal one where the argument holds trivially. Thus, we consider u ̸= w. We have the following two observations: • Since one needs to pick a leaf of T (u) to minimize costT1 ′(v, U), we have costT1 ′(v, U) ≥∑ x∈ch(v),x ̸=u Nx · 2hx where ch(u) denotes the children nodes of u. • By our greedy strategy, costT1 (v, U) ≤ ∑ x∈ch(u) Nx · 2hx ≤ costT1 ′(v, U) +Nu · 2hu . Since hu = hw, we have 2hu · (Nu −Nw) ≤ 0, since our algorithm picks subtree roots with highest scores. Then we have costT1 (v, U) ≤ costT1 ′(v, U) + Nw · 2hw ≤ 2costT1 ′(v, U). Since the subtrees in C1 are disjoint, the union of centers for OPTT1 (v, U), v ∈ C1 forms the optimal centers with size k. Note that, for any data point p ∈ U \ C1, the tree distance ρT (p, f) for ∀f that is a leaf node of T (v), v ∈ C1 is the same. That is, the choice of leaf in T (v) as the center does not affect the k-median cost under 2-HST metric. Therefore, union bound over k subtree costs completes the proof. C.3 PROOF OF PROPOSITION 3.2 Proof. It is known that the 2-HST can be constructed in O(dn log n) (Bartal, 1996). The subtree search in Algorithm 2 involves at most sorting all the nodes in the HST based on the score, which takes O(nlogn). We use a priority queue to store the nodes in C1. When we insert a new node v into queue, its parent node (if existing in the queue) would be removed from the queue. The number of nodes is O(n) and each operation (insertion, deletion) in a priority queue based on score has O(log n) complexity. 
Lastly, the total time to obtain C0 is O(n), as the FIND-LEAF only requires a top down scan in k disjoint subtrees of T . Summing parts together proves the claim. C.4 PROOF OF THEOREM 4.2 Similarly, we prove the error in general metric by first analyzing the error in 2-HST metric. Then the result follows from Lemma 3.4. Let costTk (D), cost T k ′(D) and OPTTk (D) be defined analogously to (3), (4) and (5), where “y ∈ U” in the summation is changed into “y ∈ D” since D is the demand set. That is, costTk (D) = ∑ y∈D min x∈C0 ρT (x, y), (8) costTk ′(D,C1) = min |F∩T (v)|=1,∀v∈C1 ∑ y∈D min x∈F ρT (x, y), (9) OPTTk (D) = min F⊂D,|F |=k ∑ y∈D min x∈F ρT (x, y) ≡ min C′1 costTk ′(D,C ′1). (10) We have the following. Lemma C.2. costTk (D) ≤ 10OPTTk (D) + 10ckϵ−1△ log n with probability 1− 4k/nc. Proof. The result follows by combining the following Lemma C.4, Lemma C.5, and applying union bound. Lemma C.3. For any node v in T , with probability 1− 1/nc, |N̂v · 2hv −Nv · 2hv | ≤ cϵ−1△ log n. Proof. Since N̂v = Nv + Lap(2(L−hv)/2/ϵ), we have Pr[|N̂v −Nv| ≥ x/ϵ] = exp(−x/2(L−hv)). As L = log△, we have Pr[|N̂v −Nv| ≥ x△/(2hvϵ)] ≤ exp(−x). Hence, for some constant c > 0, Pr[|N̂v · 2hv −Nv · 2hv | ≤ cϵ−1△ log n] ≥ 1− exp(−c log n) = 1− 1/nc. Lemma C.4 (DP Subtree Search). With probability 1 − 2k/nc, costTk ′(D) ≤ 5OPTTk (D) + 4ckϵ−1△ log n. Proof. The proof is similar to that of Lemma 3.5. Consider the intermediate output of Algorithm 2, C1 = {v1, v2, ..., vk}, which is the set of roots of the minimal disjoint subtrees each containing exactly one output center C0. Suppose one of the optimal “root set” that minimizes (4) is C∗1 = {v′1, v′2, ..., v′k}. Assume C1 ̸= C∗1 . By the same argument as the proof of Lemma 3.5, we consider for some i, j such that vi ̸= v′j , where vi is not a descendent of v′j and v′j is either a descendent of vi. By the construction of C1, we know that score(v′j) ≤ min{score(vi), i = 1, ..., k} when v′j ∈ C∗1 \ C1. Consider the swap between C1 and C∗1 . By the definition of tree distance, we have OPTTk (U) ≥ ∑ vi∈C1\C∗1 Nvi2 hvi , since {T (vi), vi ∈ C1 \ C∗1} does not contain any center of the optimal solution determined by C∗1 (which is also the optimal “root set” for OPT T k ). Let us consider the optimal clustering with center set be C∗ = {c∗1, c∗2, ..., c∗k} (each center c∗j is a leaf of subtree whose root be c′j), and S ′ j be the leaves assigned to c ∗ j . Let Sj denote the set of leaves in S ′ j whose distance to c∗j is strictly smaller than its distance to any centers in C1. Let Pj denote the union of paths between leaves of Sj to its closest center in C1. Let v′′j be the nodes in Pj with highest level satisfying T (v′′j ) ∩ C1 = ∅. The score of v′′j is 2 hv′′ j N(v′′j ). That means the swap with a center v ′ j into C1 can only reduce 4 · 2 hv′′ j N(v′′j ) to cost T k ′(U) (the tree distance between any leaf in Sj and its closest center in C1 is at most 4 · 2 hv′′ j ). We just use v′j to represent v ′′ j for later part of this proof for simplicity. Summing all the swaps over C∗1 \ C1, we obtain costTk ′(U)−OPTTk (U) ≤ 4 ∑ v′j∈C∗1 \C1 Nv′j2 hv′ j , OPTTk (U) ≥ ∑ vi∈C1\C∗1 Nvi2 hvi . Applying union bound with Lemma C.3, with probability 1− 2/nc, we have Nv′j2 hv′ j −Nvi2hvi ≤ 2cϵ−1△ log n. Consequently, we have with probability, 1− 2k/nc, costTk ′(D) ≤ 5OPTTk (D) + 4c|C1 \ C∗1 |ϵ−1△ log n ≤ 5OPTTk (D) + 4ckϵ−1△ log n. Lemma C.5 (DP Leaf Search). With probability 1− 2k/nc, Algorithm 4 produces initial centers with costTk (D) ≤ 2costTk ′(D) + 2ckϵ−1△ log n. Proof. 
The proof strategy follows Lemma 3.6. We first consider one subtree with root v. Let costT1 ′(v, U) denote the optimal k-median cost within the point set T (v) with one center in 2-HST: costT1 ′(v,D) = min x∈T (v) ∑ y∈T (v)∩D ρT (x, y). (11) Suppose v has more than one children u,w, ..., and the optimal solution of costT1 ′(v, U) chooses a leaf node in T (u), and our HST initialization algorithm picks a leaf of T (w). If u = w, then HST chooses the optimal one where the argument holds trivially. Thus, we consider u ̸= w. We have the following two observations: • Since one needs to pick a leaf of T (u) to minimize costT1 ′(v, U), we have costT1 ′(v, U) ≥∑ x∈ch(v),x ̸=u Nx · 2hx where ch(u) denotes the children nodes of u. • By our greedy strategy, costT1 (v, U) ≤ ∑ x∈ch(u) Nx · 2hx ≤ costT1 ′(v, U) +Nu · 2hu . As hu = hw, leveraging Lemma C.3, with probability 1− 2/nc, 2hu · (Nu −Nw) ≤ 2hu(N̂u − N̂w) + 2cϵ−1△ log n ≤ 2cϵ−1△ log n. since our algorithm picks subtree roots with highest scores. Then we have costT1 (v,D) ≤ costTk ′(v,D) + Nw · 2hu + 2cϵ−1△ log n ≤ 2costTk ′(v,D) + 2cϵ−1△ log n with high probability. Lastly, applying union bound over the disjoint k subtrees gives the desired result. C.5 PROOF OF THEOREM 4.3 Proof. The privacy analysis is straightforward, by using the composition theorem (Theorem C.1). Since the sensitivity of cost(·) is△, in each swap iteration the privacy budget is ϵ/2(T + 1). Also, we spend another ϵ/2(T + 1) privacy for picking a output. Hence, the total privacy is ϵ/2 for local search. Algorithm 4 takes ϵ/2 DP budget for initialization, so the total privacy is ϵ. The analysis of the approximation error follows from Gupta et al. (2010), where the initial cost is reduced by our private HST method. We need the following two lemmas. Lemma C.6 (Gupta et al. (2010)). Assume the solution to the optimal utility is unique. For any output o ∈ O of 2△ϵ-DP exponential mechanism on dataset D, it holds for ∀t > 0 that Pr[q(D, o) ≤ max o∈O q(D, o)− (ln |O|+ t)/ϵ] ≤ e−t, where |O| is the size of the output set. Lemma C.7 (Arya et al. (2004)). For any set F ⊆ D with |F | = k, there exists some swap (x, y) such that the local search method admits costk(F,D)− costk(F − {x}+ {y}, D) ≥ costk(F,D)− 5OPT (D) k . From Lemma C.7, we know that when costk(Fi, D) > 6OPT (D), there exists a swap (x, y) s.t. costk(Fi − {x}+ {y}, D) ≤ (1− 1 6k )costk(Fi, D). At each iteration, there are at most n2 possible outputs (i.e., possible swaps), i.e., |O| = n2. Using Lemma C.6 with t = 2 log n, for ∀i, Pr[costk(Fi+1, D) ≥ costk(F ∗i+1, D) + 4 log n ϵ′ ] ≥ 1− 1/n2, where costk(F ∗i+1, D) is the minimum cost among iteration 1, 2, ..., t + 1. Hence, we have that as long as cost(Fi, D) > 6OPT (D) + 24k lognϵ′ , the improvement in cost is at least by a factor of (1− 16k ). By Theorem 4.2, we have costk(F1, D) ≤ C(log n)(6OPT (D) + 6k△ log n/ϵ) for some constant C > 0. Let T = 6Ck log log n. We have that E[cost(Fi, D)] ≤ (6OPT (D) + 6kϵ−1△ log n)C(log n)(1− 1/6k)6Ck log logn ≤ 6OPT (D) + 6kϵ−1△ log n ≤ 6OPT (D) + 24k log n ϵ′ . Therefore, with probability at least (1−T/n2), there exists an i ≤ T s.t. cost(Fi, D) ≤ 6OPT (D)+ 24k logn ϵ′ . Then by using the Lemma C.7, one will pick an Fj with additional additive error 4 lnn/ϵ ′ to the min{cost(Fj , D), j = 1, 2, ..., T} with probability 1− 1/n2. Consequently, we know that the expected additive error is 24k△ log n/ϵ′ + 4 log n/ϵ′ = O(ϵ−1k2△(log log n) log n), with probability 1− 1/poly(n). 
D EXTEND HST INITIALIZATION TO k-MEANS Naturally, our HST method can also be applied to k-means clustering problem. In this section, we extend the HST to k-means and provide some brief analysis similar to k-median. We present the analysis in the non-private case, which can then be easily adapted to the private case. Define the following costs for k-means. costTkm(U) = ∑ y∈U min x∈C0 ρT (x, y)2, (12) costTkm ′(U,C1) = min |F∩T (v)|=1,∀v∈C1 ∑ y∈U min x∈F ρT (x, y)2, (13) OPTTkm(U) = min F⊂U,|F |=k ∑ y∈U min x∈F ρT (x, y)2 ≡ min C′1 costTkm ′(U,C ′1). (14) For simplicity, we will use costTkm ′(U) to denote costTkm ′(U,C1) if everything is clear from context. Here, OPTTkm (14) is the cost of the global optimal solution with 2-HST metric. Lemma D.1 (Subtree search). costTkm′(U) ≤ 17OPTTkm(U). Proof. The analysis is similar with the proof of Lemma 3.5. Thus, we mainly highlight the difference. Let us just use some notations the same as in Lemma 3.5 here. Let us consider the clustering with center set be C∗ = {c∗1, c∗2, ..., c∗k} (each center c∗j is a leaf of subtree whose root be c′j), and S′j be the leaves assigned to c∗j in optimal k-means clustering in tree metric. Let Sj denote the set of leaves in S′j whose distance to c ∗ j is strictly smaller than its distance to any centers in C1. Let Pj denote the union of paths between leaves of Sj to its closest center in C1. Let v′′j be the nodes in Pj with highest level satisfying T (v′′j ) ∩ C1 = ∅. The score of v′′j is 2 hv′′ j N(v′′j ). That means the swap with a center v′j into C1 can only reduce (4 · 2 hv′′ j )2N(v′′j ) to cost T km ′(U). We just use v′j to represent v ′′ j for later part of this proof for simplicity. By our reasoning, summing all the swaps over C∗1 \ C1 gives costTkm ′(U)−OPTTkm(U) ≤ ∑ v′j∈C∗1 \C1 Nv′j · (4 · 2 hv′ j )2, OPTTkm(U) ≥ ∑ vi∈C1\C∗1 Nvi(2 hvi )2. Also, based on our discussion on Case 1, it holds that Nv′j2 hv′ j −Nvi2hvi ≤ 0. Summing them together, we have costTkm ′(U) ≤ 17OPTTkm(U). Next, we show that the greedy leaf search strategy (Algorithm 3) only leads to an extra multiplicative error of 2. Lemma D.2 (Leaf search). costTkm(U) ≤ 2costTkm′(U). Proof. Since the subtrees in C1 are disjoint, it suffices to consider one subtree with root v. With a little abuse of notation, let costT1 ′(v, U) denote the optimal k-means cost within the point set T (v) with one center in 2-HST: costT1 ′(v, U) = min x∈T (v) ∑ y∈T (v) ρT (x, y)2, (15) which is the optimal cost within the subtree. Suppose v has more than one children u,w, ..., otherwise the optimal center is clear. Suppose the optimal solution of costT1 ′(v, U) chooses a leaf node in T (u), and our HST initialization algorithm picks a leaf of T (w). If u = w, then HST chooses the optimal one where the argument holds trivially. Thus, we consider u ̸= w. We have the following two observations: • Since one needs to pick a leaf of T (u) to minimize costT1 ′(v, U), we have costT1 ′(v, U) ≥∑ x∈ch(v),x ̸=u Nx · (2hx)2 where ch(u) denotes the children nodes of u. • By our greedy strategy, costT1 (v, U) ≤ ∑ x∈ch(u) Nx ·(2hx)2 ≤ costT1 ′(v, U)+Nu ·(2hu)2. Since hu = hw, we have 2hu · (Nu −Nw) ≤ 0, since our algorithm picks subtree roots with highest scores. Then we have costT1 (v, U) ≤ costT1 ′(v, U) + Nw · (2hw)2 ≤ 2costT1 ′(v, U). Since the subtrees in C1 are disjoint, the union of centers for OPTT1 (v, U), v ∈ C1 forms the optimal centers with size k. Note that, for any data point p ∈ U \ C1, the tree distance ρT (p, f) for ∀f that is a leaf node of T (v), v ∈ C1 is the same. 
That is, the choice of leaf in $T(v)$ as the center does not affect the k-means cost under the 2-HST metric. Therefore, summing over the $k$ disjoint subtree costs completes the proof.
We are now ready to state the error bound for our proposed HST initialization (Algorithm 2), which is a direct combination of Lemma D.1 and Lemma D.2.
Theorem D.3 (HST initialization). $\mathrm{cost}_{km}^{T}(U) \leq 34\,\mathrm{OPT}_{km}^{T}(U)$.
We have the following result based on Lemma 3.4.
Theorem D.4. In a general metric space, $\mathbb{E}[\mathrm{cost}_{km}(U)] = O\big((\min\{\log n, \log\triangle\})^2\big)\,\mathrm{OPT}_{km}(U)$.
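The constant in Theorem D.3 follows directly by chaining the two lemmas (a one-line check, written out here for convenience):
```latex
% Chaining Lemma D.2 and Lemma D.1:
\mathrm{cost}^{T}_{km}(U) \;\le\; 2\,\mathrm{cost}^{T\prime}_{km}(U)
  \;\le\; 2\cdot 17\,\mathrm{OPT}^{T}_{km}(U) \;=\; 34\,\mathrm{OPT}^{T}_{km}(U).
```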
1. What is the main contribution of the paper regarding k-median clustering? 2. What are the strengths and weaknesses of the proposed non-private and private algorithms? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or questions regarding the paper's experiments and comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This work focuses on k-median clustering in metric spaces with privacy. The paper presents a new algorithm based on an HST and compares it with baselines in experiments. 1- The authors present a k-median clustering initialization with an O(log min{k, d}) approximation guarantee. 2- They propose a differentially private algorithm with a constant approximation guarantee and additive error O((1/ϵ)Δk² log² n). Moreover, the authors provide experiments on different datasets. Strengths And Weaknesses Non-private algorithm: Strength: The algorithm is well explained and the selection of centers is novel (to the best of my knowledge). Weaknesses: The approximation ratio of the algorithm is comparable with HST-based algorithms but significantly worse than the best known algorithm. There are almost-linear-time algorithms that find the optimum solution on an HST; the algorithm proposed in this work is significantly weaker (e.g., Parallel and Efficient Hierarchical k-Median Clustering). These works are not mentioned or compared at all in this paper. The running time for metric spaces is not presented clearly, although it is presented for Euclidean space. Private algorithm: Strength: the algorithm improves the additive error by a factor of O(log n / log log n). Weaknesses: the improvement in additive error is marginal given that there is more than a factor-k gap to the lower bound. Experiments: The experiments consider only small instances. The experiments lack fast, state-of-the-art k-median algorithms. The experiments ignore greedy k-median++, which is known to outperform k-median++. HST is slower than k-median++ even for very small datasets, and seems to get worse as they grow. Few datasets are considered. Clarity, Quality, Novelty And Reproducibility Most parts of the paper are well-written, and the algorithm that solves the HST instance is novel. The comparison with previous works is questionable in this paper. Most of the state-of-the-art algorithms for k-median are not presented (for instance, well-known publications based on LP rounding, which give the best approximation guarantee for k-median). Also, more advanced algorithms have been developed for solving an HST instance, which are likewise not mentioned in this paper. The Arthur and Vassilvitskii (2007) paper does not introduce k-median++ either; it only focuses on k-means++.
ICLR
Title k-Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy Abstract In clustering algorithms, the choice of initial centers is crucial for the quality of the learned clusters. We propose a new initialization scheme for the k-median problem in general metric spaces (e.g., discrete spaces induced by graphs), based on the construction of a metric embedding tree structure of the data. From the tree, we propose a novel and efficient search algorithm for good initial centers that can subsequently be used for the local search algorithm. The so-called HST initialization method can produce initial centers achieving lower errors than those from another popular initialization method, k-median++, with comparable efficiency. Our HST initialization can also be easily extended to the setting of differential privacy (DP) to generate private initial centers. We show that the error of applying DP local search following our private HST initialization improves previous results on the approximation error and approaches the lower bound within a small factor. Experiments demonstrate the effectiveness of our proposed methods. 1 INTRODUCTION Clustering is an important problem in unsupervised learning that has been widely studied in statistics, data mining, network analysis, etc. (Punj and Stewart, 1983; Dhillon and Modha, 2001; Banerjee et al., 2005; Berkhin, 2006; Abbasi and Younis, 2007). The goal of clustering is to partition a set of data points into clusters such that items in the same cluster are expected to be similar, while items in different clusters should be different. This is concretely measured by the sum of distances (or squared distances) between each point and its nearest cluster center. One conventional notion for evaluating a clustering algorithm is: with high probability, $\mathrm{cost}(C,D) \leq \gamma\,\mathrm{OPT}_k(D) + \xi$, where $C$ is the set of centers output by the algorithm and $\mathrm{cost}(C,D)$ is a cost function defined for $C$ on dataset $D$. $\mathrm{OPT}_k(D)$ is the cost of the optimal (oracle) clustering solution on $D$. When everything is clear from context, we will use OPT for short. Here, $\gamma$ is called the multiplicative error and $\xi$ is called the additive error. Alternatively, we may also use the notion of expected cost. Two popularly studied clustering problems are 1) the k-median problem and 2) the k-means problem. The origin of k-median dates back to the 1970s (e.g., Kaufman et al. (1977)), where one tries to find the best locations of facilities that minimize the cost measured by the distance between clients and facilities. Formally, given a set of points $D$ and a distance measure, the goal is to find $k$ center points minimizing the sum of absolute distances of each sample point to its nearest center. In k-means, the objective is to minimize the sum of squared distances instead. In particular, k-median is usually the formulation used for clustering on graph/network data. In general, there are two popular frameworks for clustering. One heuristic is Lloyd's algorithm (Lloyd, 1982), which is built upon an iterative distortion minimization approach. In most cases, this method can only be applied to numerical data, typically in (continuous) Euclidean space. Clustering in general (discrete) metric spaces is also important and useful when dealing with, for example, graph data, where Lloyd's method is no longer applicable. A more broadly applicable approach, the local search method (Kanungo et al., 2002; Arya et al., 2004), has also been widely studied.
It iteratively finds the optimal swap between the center set and non-center data points to keep lowering the cost. Local search can achieve a constant approximation ratio ($\gamma = O(1)$) to the optimal solution for k-median (Arya et al., 2004). Initialization of cluster centers. It is well known that the performance of clustering can be highly sensitive to initialization. If clustering starts with good initial centers (i.e., with small approximation error), the algorithm may use fewer iterations to find a better solution. The k-median++ algorithm (Arthur and Vassilvitskii, 2007) iteratively selects $k$ data points as initial centers, favoring distant points in a probabilistic way. Intuitively, the initial centers tend to be well spread over the data points (i.e., over different clusters). The produced initial centers are proved to have $O(\log k)$ multiplicative error. Follow-up works on k-means++ further improved its efficiency and scalability, e.g., Bahmani et al. (2012); Bachem et al. (2016); Lattanzi and Sohler (2019). In this work, we propose a new initialization framework, called HST initialization, based on metric embedding techniques. Our method is built upon a novel search algorithm on metric embedding trees, with approximation error and running time comparable to k-median++. Moreover, importantly, our initialization scheme can be conveniently combined with the notion of differential privacy (DP). Clustering with Differential Privacy. The concept of differential privacy (Dwork, 2006; McSherry and Talwar, 2007) has become popular for rigorously defining and resolving the problem of keeping useful information for model learning while protecting the privacy of each individual. The private k-means problem has been widely studied, e.g., Feldman et al. (2009); Nock et al. (2016); Feldman et al. (2017), mostly in the continuous Euclidean space. Balcan et al. (2017) considered identifying a good candidate set of centers (in a private manner) before applying private local search, which yields $O(\log^3 n)$ multiplicative error and $O((k^2+d)\log^5 n)$ additive error. Later on, the Euclidean k-means errors were further improved to $\gamma = O(1)$ and $\xi = O(k^{1.01}\cdot d^{0.51} + k^{1.5})$ by Stemmer and Kaplan (2018), with more advanced candidate set selection. Huang and Liu (2018) gave an optimal algorithm in terms of minimizing the Wasserstein distance under some data separability condition. For private k-median clustering, Feldman et al. (2009) considered the problem in high-dimensional Euclidean space. However, it is rather difficult to extend their analysis to more general metrics in discrete spaces (e.g., on graphs). The strategy of Balcan et al. (2017) to form a candidate center set could also be adopted for k-median, which leads to $O(\log^{3/2} n)$ multiplicative error and $O((k^2+d)\log^3 n)$ additive error in high-dimensional Euclidean space. In discrete spaces, Gupta et al. (2010) proposed a private method for the classical local search heuristic, which applies to both k-median and k-means. To cast privacy on each swapping step, the authors applied the exponential mechanism of McSherry and Talwar (2007). Their method produces an $\epsilon$-differentially private solution with cost $6\,\mathrm{OPT} + O(\triangle k^2\log^2 n/\epsilon)$, where $\triangle$ is the diameter of the point set. In this work, we will show that our HST initialization can improve DP local search for k-median (Gupta et al., 2010) in terms of both approximation error and efficiency.
The main contributions of this work include:
• We introduce the Hierarchically Well-Separated Tree (HST) to the k-median clustering problem for initialization. We design an efficient sampling strategy to select the initial center set from the tree, with an approximation factor $O(\log\min\{k,\triangle\})$ in the non-private setting, which is $O(\log\min\{k,d\})$ when $\triangle = O(d)$ (e.g., bounded data). This improves the $O(\log k)$ error of k-means/median++ in, e.g., lower-dimensional Euclidean space.
• We propose a differentially private version of HST initialization under the setting of Gupta et al. (2010) in discrete metric spaces. The so-called DP-HST algorithm finds initial centers with $O(\log n)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2\log^2 n)$ additive error. Moreover, running DP local search starting from this initialization gives $O(1)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2(\log\log n)\log n)$ additive error, which improves previous results towards the well-known lower bound $O(\epsilon^{-1}\triangle k\log(n/k))$ on the additive error of DP k-median (Gupta et al., 2010) within a small $O(k\log\log n)$ factor. This is the first clustering initialization method with a differential privacy guarantee and improved error rate in general metric spaces.
• We conduct experiments on simulated and real-world datasets to demonstrate the effectiveness of our methods. In both the non-private and private settings, our proposed HST-based approach achieves smaller cost at initialization than k-median++, which may also lead to improvements in the final clustering quality.
2 BACKGROUND AND SETUP
Definition 2.1 (Differential Privacy (DP) (Dwork, 2006)). If for any two adjacent datasets $D$ and $D'$ with symmetric difference of size one, and for any $O \subset \mathrm{Range}(\mathcal{A})$, an algorithm $\mathcal{A}$ satisfies $\Pr[\mathcal{A}(D)\in O] \leq e^{\epsilon}\Pr[\mathcal{A}(D')\in O]$, then algorithm $\mathcal{A}$ is said to be $\epsilon$-differentially private.
Intuitively, DP requires that after removing any single observation, the output on $D'$ should not be too different from that on the original dataset $D$. Smaller $\epsilon$ indicates stronger privacy, which, however, usually sacrifices utility. Thus, one central topic in DP is to balance the utility-privacy trade-off. To achieve DP, one approach is to add noise to the algorithm output. The Laplace mechanism adds $\mathrm{Laplace}(\eta(f)/\epsilon)$ noise to the output, which is known to achieve $\epsilon$-DP. The exponential mechanism is also a tool underlying many DP algorithms. Let $O$ be the set of feasible outputs. The utility function $q: \mathcal{D}\times O \to \mathbb{R}$ is what we aim to maximize. The exponential mechanism outputs an element $o\in O$ with probability $\Pr[\mathcal{A}(D)=o] \propto \exp\big(\frac{\epsilon\, q(D,o)}{2\eta(q)}\big)$, where $D$ is the input dataset and $\eta(f) = \sup_{|D - D'|=1}|f(D)-f(D')|$ is the sensitivity of $f$. Both mechanisms will be used in our paper.
2.1 k-MEDIAN CLUSTERING
Following Arya et al. (2004); Gupta et al. (2010), the problem of k-median clustering (DP and non-DP) studied in our paper is stated below.
Definition 2.2 (k-median). Given a universe point set $U$ and a metric $\rho: U\times U \to \mathbb{R}$, the goal of k-median is to pick $F\subseteq U$ with $|F|=k$ to minimize
$$\text{k-median:}\quad \mathrm{cost}_k(F,U) = \sum_{v\in U}\min_{f\in F}\rho(v,f). \quad (1)$$
Let $D\subseteq U$ be a set of demand points. The goal of DP k-median is to minimize
$$\text{DP k-median:}\quad \mathrm{cost}_k(F,D) = \sum_{v\in D}\min_{f\in F}\rho(v,f). \quad (2)$$
At the same time, the output $F$ is required to be $\epsilon$-differentially private with respect to $D$. We may drop "$F$" and write "$\mathrm{cost}_k(U)$" or "$\mathrm{cost}_k(D)$" if there is no risk of ambiguity. To better motivate DP clustering, we provide a real-world example as follows. Example 2.3. Consider $U$ to be the universe of all users in a social network (e.g., Twitter).
Each user (account) is public, but also has some private information that can only be seen by the data holder. Let $D$ be the users grouped by some feature that might be set as private. Suppose a third party plans to collaborate with the most influential users in $D$ for, e.g., commercial purposes, and thus requests the cluster centers of $D$. In this case, we need a strategy to safely release the centers while protecting the individuals in $D$ from being identified (since the membership of $D$ is private).
The local search procedure for k-median proposed by Arya et al. (2004) is summarized in Algorithm 1. First, we randomly pick $k$ points in $U$ as the initial centers. In each iteration, we search over all $x\in F$ and $y\in U$, and perform the swap $F \leftarrow F - \{x\} + \{y\}$ that improves the cost of $F$ the most (provided it reduces the cost by at least a factor of $(1-\alpha/k)$, where $\alpha > 0$ is a hyper-parameter). We repeat the procedure until no such swap exists. Arya et al. (2004) showed that the output centers $F$ achieve a 5-approximation to the optimal solution, i.e., $\mathrm{cost}(F) \leq 5\,\mathrm{OPT}$.
Algorithm 1: Local search for k-median clustering (Arya et al., 2004)
Input: Data points $U$, parameter $k$, constant $\alpha$
Initialization: Randomly select $k$ points from $U$ as the initial center set $F$
while $\exists\, x\in F, y\in U$ s.t. $\mathrm{cost}(F-\{x\}+\{y\}) \leq (1-\alpha/k)\,\mathrm{cost}(F)$ do
  Select $(x,y)\in F\times (U\setminus F)$ with $\arg\min_{x,y}\{\mathrm{cost}(F-\{x\}+\{y\})\}$
  Swap operation: $F \leftarrow F - \{x\} + \{y\}$
Output: Center set $F$
2.2 k-MEDIAN++ INITIALIZATION
Although local search is able to find a solution with constant error, it takes $O(n^2)$ time per iteration (Resende and Werneck, 2007) over an expected $O(k\log n)$ steps (in total $O(kn^2\log n)$) when started from a random center set, which would be slow for large datasets. Indeed, we do not need such a complicated algorithm to reduce the cost at the beginning, i.e., when the cost is large. To accelerate the process, efficient initialization methods find a "roughly" good center set as the starting point for local search. In this paper, we compare our new initialization scheme mainly with a popular (and perhaps the most well-known) initialization method, k-median++ (Arthur and Vassilvitskii, 2007) (see Algorithm 6 in the Appendix). Arthur and Vassilvitskii (2007) showed that the output centers $C$ of k-median++ achieve $O(\log k)$ approximation error with time complexity $O(nk)$. Starting from this initialization, we only need to run $O(k\log\log k)$ steps of the computationally heavy local search to reach a constant-error solution. Thus, initialization may greatly improve clustering efficiency.
3 INITIALIZATION VIA HIERARCHICALLY WELL-SEPARATED TREE (HST)
In this section, we propose our novel initialization scheme for k-median clustering and provide our analysis in the non-private case, solving (1). The idea is based on metric embedding theory. We start with an introduction to the main tool used in our approach.
3.1 HIERARCHICALLY WELL-SEPARATED TREE (HST)
In this paper, for an $L$-level tree, we count levels in descending order down the tree. We use $h_v$ to denote the level of node $v$, and $n_i$ to denote the number of nodes at level $i$. The Hierarchically Well-Separated Tree (HST) is based on padded decompositions of a general metric space constructed in a hierarchical manner (Fakcharoenphol et al., 2004). Let $(U,\rho)$ be a metric space with $|U| = n$; we refer to this metric space throughout without further clarification.
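To make Algorithm 1 above concrete, here is a minimal Python sketch of the local search loop (illustrative only; it assumes a caller-supplied distance function `rho` and uses brute-force swap evaluation rather than any optimized implementation):
```python
import itertools

def kmedian_cost(centers, points, rho):
    """Sum over points of the distance to the nearest center (Eq. (1))."""
    return sum(min(rho(p, c) for c in centers) for p in points)

def local_search(points, k, rho, alpha=1e-3, seed_centers=None):
    """Brute-force local search for k-median (sketch of Algorithm 1)."""
    F = set(seed_centers if seed_centers is not None else points[:k])
    cur = kmedian_cost(F, points, rho)
    improved = True
    while improved:
        improved = False
        # Try every swap (x out, y in) and remember the best improving one.
        best, best_swap = cur, None
        for x, y in itertools.product(list(F), [p for p in points if p not in F]):
            cand = kmedian_cost((F - {x}) | {y}, points, rho)
            if cand < best:
                best, best_swap = cand, (x, y)
        # Accept only if the improvement beats the (1 - alpha/k) threshold.
        if best_swap is not None and best <= (1 - alpha / k) * cur:
            x, y = best_swap
            F = (F - {x}) | {y}
            cur, improved = best, True
    return F
```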
A β-padded decomposition of $U$ is a probabilistic distribution over partitions of $U$ such that the diameter of each cluster $U_i\in U$ is at most $\beta$, i.e., $\rho(u,v)\leq\beta$, $\forall u,v\in U_i$, $i = 1,\dots,k$. The formal definition of an HST is given below.
Definition 3.1. Assume $\min_{u,v\in U}\rho(u,v) = 1$ and denote $\triangle = \max_{u,v\in U}\rho(u,v)$. An α-Hierarchically Well-Separated Tree (α-HST) of depth $L$ is an edge-weighted rooted tree $T$ such that an edge between any pair of nodes at level $i-1$ and level $i$ has length at most $\triangle/\alpha^{L-i}$.
In this paper, we consider the 2-HST ($\alpha = 2$) for simplicity, as $\alpha$ only affects the constants in our theoretical analysis. Figure 1 is an example $L = 3$-level 2-HST (right panel), along with its underlying padded decompositions (left panel). A 2-HST can be built as follows: we first find a padded decomposition $P_L = \{P_{L,1},\dots,P_{L,n_L}\}$ of $U$ with parameter $\beta = \triangle/2$. The center of each partition in $P_{L,j}$ serves as a root node at level $L$. Then we re-do a padded decomposition for each partition $P_{L,j}$ to find sub-partitions with diameter $\beta = \triangle/4$, and set the corresponding centers as the nodes at level $L-1$, and so on. Each partition at level $i$ is obtained with $\beta = \triangle/2^{L-i}$. This process proceeds until a node contains a single point (leaf), or a pre-specified tree depth is reached. More details can be found in Algorithm 7 in Appendix A. Blelloch et al. (2017) proposed an efficient HST construction taking $O(m\log n)$ time, where $n$ and $m$ are the numbers of nodes and edges in a graph, respectively.
The first step of our method is to embed the data points into an HST (see Algorithm 2). Next, we describe our new strategy to search for the initial centers on the tree (w.r.t. the tree metric). Before moving on, it is worth mentioning that there are polynomial-time algorithms for computing an exact k-median solution in the tree metric (Tamir (1996); Shah (2003)). However, these dynamic programming algorithms have high complexity (e.g., $O(kn^2)$), making them unsuitable for the purpose of fast initialization. Moreover, it is unknown how to apply them effectively to the private case.
Algorithm 2: NDP-HST initialization
Input: $U$, $\triangle$, $k$
Initialization: $L = \log\triangle$, $C_0 = \emptyset$, $C_1 = \emptyset$
Call Algorithm 7 to build a level-$L$ 2-HST $T$ using $U$
for each node $v$ in $T$ do
  $N_v \leftarrow |U\cap T(v)|$
  $\mathrm{score}(v) \leftarrow N_v\cdot 2^{h_v}$
while $|C_1| < k$ do
  Add the top $(k - |C_1|)$ nodes with the highest scores to $C_1$
  for each $v\in C_1$ do
    $C_1 = C_1\setminus\{v\}$ if $\exists\, v'\in C_1$ such that $v'$ is a descendant of $v$
$C_0 = $ FIND-LEAF$(T, C_1)$
Output: Initial center set $C_0\subseteq U$
Algorithm 3: FIND-LEAF$(T, C_1)$
Input: $T$, $C_1$
Initialization: $C_0 = \emptyset$
for each node $v$ in $C_1$ do
  while $v$ is not a leaf node do
    $v \leftarrow \arg\max_{w\in ch(v)} N_w$, where $ch(v)$ denotes the children nodes of $v$
  Add $v$ to $C_0$
Output: Initial center set $C_0\subseteq U$
As will be shown, our new algorithm 1) is very efficient, 2) gives $O(1)$ approximation error in the tree metric, and 3) can be easily and effectively extended to DP.
3.2 HST INITIALIZATION ALGORITHM
Let $L = \log\triangle$ and suppose $T$ is a level-$L$ 2-HST in $(U,\rho)$, where we assume $L$ is an integer. For a node $v$ at level $i$, we use $T(v)$ to denote the subtree rooted at $v$. Let $N_v = |T(v)|$ be the number of data points in $T(v)$. The search strategy for the initial centers, NDP-HST initialization ("NDP" stands for "Non-Differentially Private"), is presented in Algorithm 2 and proceeds in two phases. Subtree search. The first step is to identify the subtrees that contain the $k$ centers. To begin with, the $k$ nodes of $T$ with the largest scores $\mathrm{score}(v) = N_v\cdot 2^{h_v}$ are picked as the initial set $C_1$.
This is intuitive since, to get a good clustering, we typically want the ball surrounding each center to include more data points. Next, we do a screening over $C_1$: if there is any ancestor-descendant pair of nodes, we remove the ancestor from $C_1$. If the current size of $C_1$ is smaller than $k$, we repeat the process until $k$ centers are chosen (we do not re-select nodes already in $C_1$ or their ancestors). This way, $C_1$ contains the root nodes of $k$ disjoint subtrees.
Leaf search. After we find $C_1$, the set of $k$ subtrees, the next step is to find the center within each subtree using Algorithm 3 ("FIND-LEAF"). We employ a greedy search strategy, following the child node with the largest score level by level until a leaf is found. This approach is intuitive since the diameter of the partition ball decays exponentially with the level. Therefore, we are in a sense focusing more and more on the region with higher density (i.e., with more data points). The complexity of our search algorithm is given as follows.
Proposition 3.2 (Complexity). Algorithm 2 takes $O(dn\log n)$ time in the Euclidean space.
Remark 3.3. The complexity of HST initialization is in general comparable to the $O(dnk)$ of k-median++. Our algorithm would be faster if $k > \log n$, i.e., when the number of centers is large.
3.3 APPROXIMATION ERROR OF HST INITIALIZATION
First, we show that the initial center set produced by NDP-HST is already a good approximation to the optimal k-median solution. Let $\rho_T(x,y) = d_T(x,y)$ denote the "2-HST metric" between $x$ and $y$ in the 2-HST $T$, where $d_T(x,y)$ is the tree distance between nodes $x$ and $y$ in $T$. By Definition 3.1 and since $\triangle = 2^L$, in the analysis we equivalently assume that the edge weight at the $i$-th level is $2^{i-1}$. The crucial step of our analysis is to examine the approximation error in terms of the 2-HST metric, after which the error can be adapted to general metrics by the following lemma (Bartal, 1996).
Lemma 3.4. In a metric space $(U,\rho)$ with $|U| = n$ and diameter $\triangle$, it holds that $\mathbb{E}[\rho_T(x,y)] = O(\min\{\log n,\log\triangle\})\,\rho(x,y)$. In the Euclidean space $\mathbb{R}^d$, $\mathbb{E}[\rho_T(x,y)] = O(d)\,\rho(x,y)$.
Recall $C_0$, $C_1$ from Algorithm 2. We define
$$\mathrm{cost}_k^{T}(U) = \sum_{y\in U}\min_{x\in C_0}\rho_T(x,y), \quad (3)$$
$$\mathrm{cost}_k^{T\prime}(U,C_1) = \min_{|F\cap T(v)|=1,\,\forall v\in C_1}\ \sum_{y\in U}\min_{x\in F}\rho_T(x,y), \quad (4)$$
$$\mathrm{OPT}_k^{T}(U) = \min_{F\subset U,\,|F|=k}\ \sum_{y\in U}\min_{x\in F}\rho_T(x,y) \equiv \min_{C_1'}\ \mathrm{cost}_k^{T\prime}(U,C_1'). \quad (5)$$
For simplicity, we will write $\mathrm{cost}_k^{T\prime}(U)$ for $\mathrm{cost}_k^{T\prime}(U,C_1)$. Here, $\mathrm{OPT}_k^{T}$ (5) is the cost of the global optimal solution under the 2-HST metric. The last equivalence in (5) holds because the optimal center set can always be located in $k$ disjoint subtrees, as each leaf contains only one point. (3) is the k-median cost under the 2-HST metric of the output $C_0$ of Algorithm 2. (4) is the oracle cost after the subtrees are chosen; that is, it represents the optimal cost of picking one center from each subtree in $C_1$. First, we bound the approximation errors of the subtree search and the leaf search, respectively.
Lemma 3.5 (Subtree search). $\mathrm{cost}_k^{T\prime}(U) \leq 5\,\mathrm{OPT}_k^{T}(U)$.
Lemma 3.6 (Leaf search). $\mathrm{cost}_k^{T}(U) \leq 2\,\mathrm{cost}_k^{T\prime}(U)$.
Combining Lemma 3.5 and Lemma 3.6, we obtain
Theorem 3.7 (2-HST error). Running Algorithm 2, we have $\mathrm{cost}_k^{T}(U) \leq 10\,\mathrm{OPT}_k^{T}(U)$.
Thus, HST initialization produces an $O(1)$ approximation to OPT under the 2-HST metric. Define $\mathrm{cost}_k(U)$ as in (1) for our HST centers, and the optimal cost w.r.t. $\rho$ as
$$\mathrm{OPT}_k(U) = \min_{|F|=k}\ \sum_{y\in U}\min_{x\in F}\rho(x,y). \quad (6)$$
We have the following result based on Lemma 3.4.
Theorem 3.8. In a general metric space, the expected k-median cost of Algorithm 2 is $\mathbb{E}[\mathrm{cost}_k(U)] = O(\min\{\log n,\log\triangle\})\,\mathrm{OPT}_k(U)$.
Remark 3.9.
In the Euclidean space, Makarychev et al. (2019) proved that $O(\log k)$ random projections suffice for k-median to achieve $O(1)$ error. Thus, if $\triangle = O(d)$ (e.g., bounded data), by Lemma 3.4, HST initialization is able to achieve $O(\log(\min\{d,k\}))$ error, which is better than the $O(\log k)$ of k-median++ when $d$ is small.
NDP-HST Local Search. We are interested in the approximation quality of standard local search (Algorithm 1) when initialized by our NDP-HST.
Theorem 3.10. NDP-HST local search achieves $O(1)$ approximation error in an expected $O(k\log\log\min\{n,\triangle\})$ number of iterations for inputs in general metric spaces.
Before ending this section, we remark that the initial centers found by NDP-HST can analogously be used for k-means clustering. For general metrics, $\mathbb{E}[\mathrm{cost}_{km}(U)] = O\big((\min\{\log n,\log\triangle\})^2\big)\,\mathrm{OPT}_{km}(U)$, where $\mathrm{cost}_{km}(U)$ is the k-means cost of the HST initial centers. See Appendix D for the detailed (and similar) analysis.
4 HST INITIALIZATION WITH DIFFERENTIAL PRIVACY
In this section, we consider initialization with differential privacy (DP). Recall from (2) that $U$ is the universe of data points and $D\subset U$ is a demand set that needs to be clustered with privacy.
Algorithm 4: DP-HST initialization
Input: $U$, $D$, $\triangle$, $k$, $\epsilon$
Build a level-$L$ 2-HST $T$ based on the input $U$
for each node $v$ in $T$ do
  $N_v \leftarrow |D\cap T(v)|$
  $\hat{N}_v \leftarrow N_v + \mathrm{Lap}(2^{L-h_v}/\epsilon)$
  $\mathrm{score}(v) \leftarrow \hat{N}_v\cdot 2^{h_v}$
Based on $\hat{N}_v$, apply the same strategy as Algorithm 2: find $C_1$; $C_0 = $ FIND-LEAF$(C_1)$
Output: Private initial center set $C_0\subseteq U$
Algorithm 5: DP-HST local search
Input: $U$, demand points $D\subseteq U$, parameters $k$, $\epsilon$, $T$
Initialization: $F_1$, the private initial centers generated by Algorithm 4 with privacy budget $\epsilon/2$
Set parameter $\epsilon' = \frac{\epsilon}{4\triangle(T+1)}$
for $i = 1$ to $T$ do
  Select $(x,y)\in F_i\times (V\setminus F_i)$ with probability proportional to $\exp(-\epsilon'\cdot\mathrm{cost}(F_i-\{x\}+\{y\}))$
  Let $F_{i+1} \leftarrow F_i - \{x\} + \{y\}$
Select $j$ from $\{1,2,\dots,T+1\}$ with probability proportional to $\exp(-\epsilon'\cdot\mathrm{cost}(F_j))$
Output: $F = F_j$, the private center set
Since $U$ is public, simply running initialization algorithms on $U$ would preserve the privacy of $D$. However, 1) this might be too expensive; and 2) in many cases one would want to incorporate some information about $D$ in the initialization, since $D$ could be a very imbalanced subset of $U$. For example, $D$ may contain data points from only one cluster out of tens of clusters in $U$. In this case, initialization on $U$ is likely to pick initial centers in multiple clusters, which would not be helpful for clustering on $D$. Next, we show how our proposed HST initialization can be easily combined with differential privacy in a way that also incorporates information about the demand set $D$, leading to improved approximation error (Theorem 4.3). Again, suppose $T$ is an $L = \log\triangle$-level 2-HST of the universe $U$ in a general metric space. Denote $N_v = |T(v)\cap D|$ for a node $v$. Our private HST initialization (DP-HST) is similar to the non-private Algorithm 2. To gain privacy, we perturb $N_v$ by adding i.i.d. Laplace noise: $\hat{N}_v = N_v + \mathrm{Lap}(2^{L-h_v}/\epsilon)$, where $\mathrm{Lap}(2^{L-h_v}/\epsilon)$ is a Laplace random number with scale $2^{L-h_v}/\epsilon$. We use the perturbed $\hat{N}_v$ for node sampling instead of the true value $N_v$, as described in Algorithm 4. The DP guarantee of this initialization scheme is straightforward from composition (Dwork, 2006).
Theorem 4.1. Algorithm 4 is $\epsilon$-differentially private.
Proof. For each level $i$, the subtrees $T(v,i)$ are disjoint from each other. The privacy budget used at the $i$-th level is $\epsilon/2^{L-i}$, and the total privacy is $\sum_i \epsilon/2^{L-i} < \epsilon$.
We now consider the approximation error.
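Before the error analysis, a small illustrative Python sketch of the noisy scoring step in Algorithm 4 may help (the tree interface with `level(v)` and the per-node demand count `count(v)` are hypothetical helpers, not the paper's code):
```python
import numpy as np

def dp_node_scores(nodes, level, count, L, eps, rng=np.random.default_rng()):
    """Perturb per-node demand counts with level-dependent Laplace noise
    (scale 2^(L - h_v) / eps) and return the noisy scores N_hat * 2^h_v."""
    scores = {}
    for v in nodes:
        h = level(v)                      # level h_v of node v
        n_hat = count(v) + rng.laplace(scale=2.0 ** (L - h) / eps)
        scores[v] = n_hat * 2.0 ** h      # noisy score used for subtree selection
    return scores
```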
As the structure of the analysis is similar to the non-DP case, we present the main result here and defer the detailed proofs to Appendix C.
Theorem 4.2. Algorithm 4 finds initial centers such that $\mathbb{E}[\mathrm{cost}_k(D)] = O(\log n)\big(\mathrm{OPT}_k(D) + k\epsilon^{-1}\triangle\log n\big)$.
DP-HST Local Search. Similarly, we can use private HST initialization to improve the performance of private k-median local search, which is presented in Algorithm 5. After initialization, the DP local search procedure follows Gupta et al. (2010) using the exponential mechanism.
Theorem 4.3. Algorithm 5 achieves $\epsilon$-differential privacy. With probability $1 - \frac{1}{\mathrm{poly}(n)}$, the output centers admit $\mathrm{cost}_k(D) \leq 6\,\mathrm{OPT}_k(D) + O(\epsilon^{-1}k^2\triangle(\log\log n)\log n)$ in $T = O(k\log\log n)$ iterations.
DP local search with random initialization (Gupta et al., 2010) has multiplicative error 6 and additive error $O(\epsilon^{-1}\triangle k^2\log^2 n)$. Our result improves the $\log n$ term in the additive error to $\log\log n$. Meanwhile, the number of iterations needed is improved from $T = O(k\log n)$ to $O(k\log\log n)$ (see Appendix B for an empirical justification). Notably, it has been shown in Gupta et al. (2010) that for the k-median problem, the lower bounds on the multiplicative and additive errors of any $\epsilon$-DP algorithm are $O(1)$ and $O(\epsilon^{-1}\triangle k\log(n/k))$, respectively. Our result matches the lower bound on the multiplicative error, and the additive error is only worse than the bound by a factor of $O(k\log\log n)$, which would be small in many cases. To our knowledge, Theorem 4.3 is the first result in the literature to improve the error of DP local search in general metric spaces.
5 EXPERIMENTS
5.1 DATASETS AND ALGORITHMS
Discrete Euclidean space. Following previous work, we test k-median clustering on the MNIST hand-written digit dataset (LeCun et al., 1998) with 10 natural clusters (digits 0 to 9). We set $U$ as 10000 randomly chosen data points. We choose the demand set $D$ using two strategies: 1) "balance", where we randomly choose 500 samples from $U$; 2) "imbalance", where $D$ contains 500 random samples from $U$ drawn only from digits "0" and "8" (two clusters). We note that the imbalanced $D$ is a very practical setting in real-world scenarios, where data are typically not uniformly distributed. On this dataset, we test clustering with both the $l_1$ and $l_2$ distance as the underlying metric.
Metric space induced by graphs. Random graphs have been widely used for testing k-median methods (Balcan et al., 2013; Todo et al., 2019). The construction of the graphs follows an approach similar to the synthetic pmedinfo graphs provided by the popular OR-Library (Beasley, 1990). The metric $\rho$ for this experiment is the shortest (weighted) path distance. To generate a graph of size $n$, we first randomly split the nodes into 10 clusters. Within each cluster, each pair of nodes is connected with probability 0.2 and weight drawn from the standard uniform distribution. For each pair of clusters, we randomly connect some nodes from each cluster, with weights drawn uniformly from $[0.5, r]$. A larger $r$ makes the graph more separable, i.e., clusters are farther from each other (see Appendix B for example graphs). We present two cases: $r = 1$ and $r = 100$. For this task, $U$ has 3000 nodes, and the private set $D$ (500 nodes) is chosen using the same "balanced" and "imbalanced" schemes described above. In the imbalanced case, we choose $D$ randomly from only two clusters.
Algorithms.
We compare the following clustering algorithms in both the non-DP and DP settings: (1) NDP-rand: local search with random initialization; (2) NDP-kmedian++: local search with k-median++ initialization (Algorithm 6); (3) NDP-HST: local search with NDP-HST initialization (Algorithm 2), as described in Section 3; (4) DP-rand: the standard DP local search algorithm (Gupta et al., 2010), i.e., Algorithm 5 with initial centers randomly chosen from $U$; (5) DP-kmedian++: DP local search with k-median++ initialization run on $U$; (6) DP-HST: DP local search with HST initialization (Algorithm 5). For non-DP tasks, we set $L = 6$. For DP clustering, we use $L = 8$. For non-DP methods, we set $\alpha = 10^{-3}$ in Algorithm 1 and the maximum number of iterations to 20. To examine the quality of both the initialization and the final centers, we report both the cost at initialization and the cost of the final output. For DP methods, we run the algorithms for $T = 20$ steps and report the results with $\epsilon = 1$. We test $k\in\{2, 5, 10, 15, 20\}$. The average cost over $T$ iterations is reported for robustness. All results are averaged over 10 independent repetitions.
5.2 RESULTS
The results on the MNIST dataset are given in Figure 2. The comparisons are similar for both $l_1$ and $l_2$:
• From the left column, the initial centers found by HST have lower cost than those of k-median++ and random initialization, in both the non-DP and DP settings, and for both balanced and imbalanced demand sets $D$. This confirms that the proposed HST initialization is more powerful than k-median++ in finding good initial centers.
• From the right column, we also observe a lower final cost for HST followed by local search in DP clustering. In the non-DP case, the final cost curves overlap, which means that although HST offers better initial centers, local search can always find a good solution eventually.
• The advantage of DP-HST, in terms of both the initial and the final cost, is more significant when $D$ is an imbalanced subset of $U$. As mentioned before, this is because our DP-HST initialization approach also privately incorporates the information of $D$.
The results on graphs are reported in Figure 3 and lead to similar conclusions. In all cases, our proposed HST scheme finds better initial centers with smaller cost than k-median++. Moreover, HST again considerably outperforms k-median++ in the private and imbalanced-$D$ setting, for both $r = 100$ (highly separable) and $r = 1$ (less separable). The advantages of HST over k-median++ are especially significant in the harder tasks with $r = 1$, i.e., when the clusters are nearly mixed up.
6 CONCLUSION
In this paper, we propose a new initialization framework for the k-median problem in general metric spaces. Our approach, called HST initialization, leverages tools from metric embedding theory. Our novel tree search approach has efficiency and approximation error comparable to the popular k-median++ initialization. Moreover, we propose a differentially private (DP) HST initialization algorithm, which adapts to the private demand point set, leading to better clustering performance. When combined with the subsequent DP local search heuristic, our algorithm improves the additive error of DP local search, which is close to the theoretical lower bound within a small factor. Experiments with Euclidean metrics and graph metrics verify the effectiveness of our methods, which improve the cost of both the initial centers and the final k-median output.
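Before the appendix, a compact Python sketch of the two search phases of Algorithms 2 and 3 may be useful (illustrative only; the tree node interface with `children`, `parent`, `level`, and `count` attributes is a hypothetical assumption, not the paper's code):
```python
def subtree_search(root, k):
    """Pick roots of k disjoint subtrees with the highest scores N_v * 2^h_v."""
    # Gather all nodes together with their scores.
    nodes, stack = [], [root]
    while stack:
        v = stack.pop()
        nodes.append(v)
        stack.extend(v.children)
    nodes.sort(key=lambda v: v.count * 2 ** v.level, reverse=True)

    def is_ancestor(a, b):
        # True if a is b itself or an ancestor of b.
        while b is not None:
            if b is a:
                return True
            b = b.parent
        return False

    C1 = []
    for v in nodes:
        # Keep only mutually disjoint subtrees (no ancestor-descendant pairs).
        if not any(is_ancestor(v, u) or is_ancestor(u, v) for u in C1):
            C1.append(v)
        if len(C1) == k:
            break
    return C1

def find_leaf(C1):
    """Greedy leaf search (Algorithm 3): descend to the heaviest child."""
    C0 = []
    for v in C1:
        while v.children:
            v = max(v.children, key=lambda w: w.count)
        C0.append(v)
    return C0
```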
A POSTPONED ALGORITHM
A.1 k-MEDIAN++
In the paper, we compared our HST initialization mainly with another (perhaps the most well-known) initialization algorithm for clustering, k-median++ (Arthur and Vassilvitskii, 2007). For reference, we present the concrete procedure in Algorithm 6. Here, $\rho(u,F)$ is the shortest distance from a data point $u$ to the closest (center) point in the set $F$. Arthur and Vassilvitskii (2007) showed that the output centers of k-median++ achieve $O(\log k)$ approximation error in $O(dnk)$ time.
Algorithm 6: k-median++ initialization (Arthur and Vassilvitskii, 2007)
Input: Data points $U$, number of centers $k$
Randomly pick a point $c_1\in U$ and set $F = \{c_1\}$
for $i = 2$ to $k$ do
  Select $c_i = u\in U$ with probability $\frac{\rho(u,F)}{\sum_{u'\in U}\rho(u',F)}$
  $F = F\cup\{c_i\}$
Output: k-median++ initial center set $F$
A.2 HST CONSTRUCTION
As presented in Algorithm 7, the construction starts by applying a permutation $\pi$ to $U$, so that in the following steps the points are picked in a random sequence. We first find a padded decomposition $P_L = \{P_{L,1},\dots,P_{L,n_L}\}$ of $U$ with parameter $\beta = \triangle/2$. The center of each partition in $P_{L,j}$ serves as a root node at level $L$. Then we re-do a padded decomposition for each partition $P_{L,j}$ to find sub-partitions with diameter $\beta = \triangle/4$, and set the corresponding centers as the nodes at level $L-1$, and so on. Each partition at level $i$ is obtained with $\beta = \triangle/2^{L-i}$. This process proceeds until a node contains a single point, or a pre-specified tree depth is reached. In Figure 1, we provide an example $L = 3$-level 2-HST (left panel), along with its underlying padded decompositions (right panel).
Algorithm 7: Build 2-HST$(U, L)$
Input: Data points $U$ with diameter $\triangle$, $L$
Randomly pick a point in $U$ as the root node of $T$
Let $r = \triangle/2$
Apply a permutation $\pi$ to $U$ // so points will be chosen in a random sequence
for each $v\in U$ do
  Set $C_v = [v]$
  for each $u\in U$ do
    Add $u\in U$ to $C_v$ if $d(v,u)\leq r$ and $u\notin\bigcup_{v'\neq v} C_{v'}$
Set the non-empty clusters $C_v$ as the children nodes of $T$
for each non-empty cluster $C_v$ do
  Run 2-HST$(C_v, L-1)$ to extend the tree $T$; stop after $L$ levels or upon reaching a leaf node
Output: 2-HST $T$
B MORE EXPERIMENTS
B.1 EXAMPLES OF GRAPH DATA
In Figure 4, we plot two example graphs (subgraphs of 50 nodes) with $r = 100$ and $r = 1$. When $r = 100$, the graph is highly separable (i.e., clusters are far from each other). When $r = 1$, the clusters are harder to distinguish from each other.
B.2 RUNNING TIME COMPARISON WITH k-MEDIAN++
In Proposition 3.2, we show that our HST initialization algorithm admits $O(dn\log n)$ complexity in the Euclidean space. With a smart implementation of Algorithm 6, where each data point tracks its distance to the current closest candidate center in $F$, k-median++ has $O(dnk)$ running time. Therefore, the running time of our algorithm is in general comparable to that of k-median++. Our method would run faster if $k = \Omega(\log n)$. In Figure 5, we plot the empirical running time of HST initialization against k-median++ on the MNIST dataset with $l_2$ distance (a similar comparison holds for $l_1$). From the left subfigure, we see that k-median++ becomes slower with increasing $k$, and our method is more efficient when $k > 20$. In the right panel, we observe that the running time of both methods increases with larger sample size $n$. Our HST algorithm has a slightly faster growth rate, as predicted by the complexity comparison ($n\log n$ vs. $n$). However, this difference of a $\log n$ factor would not be significant unless the sample size is extremely large.
Overall, our numerical results suggest that, in general, the proposed HST initialization has efficiency similar to that of k-median++ in common practical scenarios.
B.3 IMPROVED ITERATION COST OF DP-HST
In Theorem 4.3, we show that under differential privacy constraints, the proposed DP-HST (Algorithm 5) improves both the approximation error and the number of iterations required to find a good solution compared with classical DP local search (Gupta et al., 2010). In this section, we provide some numerical results to justify the theory. First, we need to properly measure the iteration cost of DP local search. This is because, unlike in non-private clustering, the k-median cost after each iteration of DP local search does not decrease monotonically, due to the probabilistic exponential mechanism. To this end, for the cost sequence of length $T = 20$, we compute its moving-average sequence with window size 5. Attaining the minimal value of the moving average indicates that the algorithm has found a "local optimum", i.e., it has reached a "neighborhood" of solutions with small clustering cost. Thus, we use the number of iterations needed to reach such a local optimum as the measure of iteration cost. The results are provided in Figure 6. We see that on all the tasks (MNIST with $l_1$ and $l_2$ distance, and the graph dataset with $r = 1$ and $r = 100$), DP-HST has significantly smaller iteration cost. In Figure 7, we further report the k-median cost of the best solution among the $T$ iterations found by each DP algorithm. We see that DP-HST again provides the smallest cost. This additional set of experiments again validates the claims of Theorem 4.3: DP-HST is able to find better initial centers in fewer iterations.
C PROOFS
The following composition result for differential privacy will be used in our proofs.
Theorem C.1 (Composition Theorem (Dwork, 2006)). If algorithms $\mathcal{A}_1,\mathcal{A}_2,\dots,\mathcal{A}_m$ are $\epsilon_1,\epsilon_2,\dots,\epsilon_m$-differentially private, respectively, then the union $(\mathcal{A}_1(D),\mathcal{A}_2(D),\dots,\mathcal{A}_m(D))$ is $\sum_{i=1}^m\epsilon_i$-DP.
C.1 PROOF OF LEMMA 3.5
Proof. Consider the intermediate output of Algorithm 2, $C_1 = \{v_1,v_2,\dots,v_k\}$, which is the set of roots of the minimal subtrees each containing exactly one output center of $C_0$. Suppose one of the optimal "root sets" that minimizes (4) is $C_1^* = \{v_1',v_2',\dots,v_k'\}$. If $C_1 = C_1^*$, the proof is done. Thus, we prove the case $C_1 \neq C_1^*$. Note that $T(v)$, $v\in C_1$, are disjoint subtrees. We have the following reasoning.
• Case 1: for some $i,j$, $v_i$ is a descendant node of $v_j'$. Since the optimal center point $f^*$ is a leaf node by the definition of (4), we know that there must exist one child node of $v_j'$ spanning a subtree that contains $f^*$. Therefore, we can always replace $v_j'$ by one of its child nodes. Hence, we can assume that $v_i$ is not a descendant of $v_j'$. Note that we have $\mathrm{score}(v_j')\leq\mathrm{score}(v_i)$ if $v_j'\notin C_1^*\cap C_1$: Algorithm 2 sorts all the nodes by score value, and it would prioritize picking $v_j'$ over $v_i$ if $\mathrm{score}(v_j') > \mathrm{score}(v_i)$ and $v_i$ is not a child node of $v_j'$.
• Case 2: for some $i,j$, $v_j'$ is a descendant of $v_i$. In this case, the optimal center point $f^*$, which is a leaf of $T(v_i)$, must also be a leaf node of $T(v_j')$. We can simply replace $C_1$ with the swap $C_1\setminus\{v_i\}+\{v_j'\}$, which does not change $\mathrm{cost}_k^{T\prime}(U)$. Hence, we can assume that $v_j'$ is not a descendant of $v_i$.
• Case 3: otherwise. By the construction of $C_1$, we know that $\mathrm{score}(v_j')\leq\min\{\mathrm{score}(v_i),\, i = 1,\dots,k\}$ when $v_j'\in C_1^*\setminus C_1$. Consider the swap between $C_1$ and $C_1^*$.
By the definition of tree distance, we have
$$\mathrm{OPT}_k^{T}(U) \geq \sum_{v_i\in C_1\setminus C_1^*} N_{v_i} 2^{h_{v_i}},$$
since $\{T(v_i),\, v_i\in C_1\setminus C_1^*\}$ does not contain any center of the optimal solution determined by $C_1^*$ (which is also the optimal "root set" for $\mathrm{OPT}_k^{T}(U)$). Thus, we only need to consider Case 3. Consider the optimal clustering with center set $C^* = \{c_1^*,c_2^*,\dots,c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree rooted at $c_j'$), and let $S_j'$ be the leaves assigned to $c_j^*$. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than their distance to any center in $C_1$. Let $P_j$ denote the union of paths between leaves of $S_j$ and their closest centers in $C_1$. Let $v_j''$ be the node in $P_j$ with the highest level satisfying $T(v_j'')\cap C_1 = \emptyset$. The score of $v_j''$ is $2^{h_{v_j''}}N(v_j'')$. That means swapping a center $v_j'$ into $C_1$ can reduce $\mathrm{cost}_k^{T\prime}(U)$ by at most $4\cdot 2^{h_{v_j''}}N(v_j'')$ (the tree distance between any leaf in $S_j$ and its closest center in $C_1$ is at most $4\cdot 2^{h_{v_j''}}$). We write $v_j'$ for $v_j''$ in the remainder of this proof for simplicity. By our reasoning, summing all the swaps over $C_1^*\setminus C_1$ gives
$$\mathrm{cost}_k^{T\prime}(U) - \mathrm{OPT}_k^{T}(U) \leq 4\sum_{v_j'\in C_1^*\setminus C_1} N_{v_j'}2^{h_{v_j'}}, \qquad \mathrm{OPT}_k^{T}(U) \geq \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}2^{h_{v_i}}.$$
Also, based on our discussion of Case 1, it holds that $N_{v_j'}2^{h_{v_j'}} - N_{v_i}2^{h_{v_i}} \leq 0$. Summing these together, we have $\mathrm{cost}_k^{T\prime}(U) \leq 5\,\mathrm{OPT}_k^{T}(U)$.
C.2 PROOF OF LEMMA 3.6
Proof. Since the subtrees in $C_1$ are disjoint, it suffices to consider one subtree with root $v$. With a slight abuse of notation, let $\mathrm{cost}_1^{T\prime}(v,U)$ denote the optimal k-median cost within the point set $T(v)$ with one center in the 2-HST:
$$\mathrm{cost}_1^{T\prime}(v,U) = \min_{x\in T(v)}\ \sum_{y\in T(v)}\rho_T(x,y), \quad (7)$$
which is the optimal cost within the subtree. Suppose $v$ has more than one child $u, w, \dots$ (otherwise the optimal center is clear). Suppose the optimal solution of $\mathrm{cost}_1^{T\prime}(v,U)$ chooses a leaf node in $T(u)$, and our HST initialization algorithm picks a leaf of $T(w)$. If $u = w$, then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$. We have the following two observations:
• Since one needs to pick a leaf of $T(u)$ to minimize $\mathrm{cost}_1^{T\prime}(v,U)$, we have $\mathrm{cost}_1^{T\prime}(v,U) \geq \sum_{x\in ch(v),\,x\neq u} N_x\cdot 2^{h_x}$, where $ch(v)$ denotes the children nodes of $v$.
• By our greedy strategy, $\mathrm{cost}_1^{T}(v,U) \leq \sum_{x\in ch(v)} N_x\cdot 2^{h_x} \leq \mathrm{cost}_1^{T\prime}(v,U) + N_u\cdot 2^{h_u}$.
Since $h_u = h_w$, we have $2^{h_u}(N_u - N_w) \leq 0$, since our algorithm picks subtree roots with the highest scores. Then we have $\mathrm{cost}_1^{T}(v,U) \leq \mathrm{cost}_1^{T\prime}(v,U) + N_w\cdot 2^{h_w} \leq 2\,\mathrm{cost}_1^{T\prime}(v,U)$. Since the subtrees in $C_1$ are disjoint, the union of centers for $\mathrm{OPT}_1^{T}(v,U)$, $v\in C_1$ forms the optimal center set of size $k$. Note that for any data point $p\in U\setminus C_1$, the tree distance $\rho_T(p,f)$ is the same for every $f$ that is a leaf node of $T(v)$, $v\in C_1$. That is, the choice of leaf in $T(v)$ as the center does not affect the k-median cost under the 2-HST metric. Therefore, summing over the $k$ disjoint subtree costs completes the proof.
C.3 PROOF OF PROPOSITION 3.2
Proof. It is known that the 2-HST can be constructed in $O(dn\log n)$ time (Bartal, 1996). The subtree search in Algorithm 2 involves at most sorting all the nodes in the HST by score, which takes $O(n\log n)$. We use a priority queue to store the nodes in $C_1$. When we insert a new node $v$ into the queue, its parent node (if present in the queue) is removed from the queue. The number of nodes is $O(n)$, and each operation (insertion, deletion) in a score-based priority queue has $O(\log n)$ complexity.
Lastly, the total time to obtain $C_0$ is $O(n)$, as FIND-LEAF only requires a top-down scan over the $k$ disjoint subtrees of $T$. Summing the parts together proves the claim.
C.4 PROOF OF THEOREM 4.2
Similarly to before, we prove the error bound in a general metric by first analyzing the error under the 2-HST metric; the result then follows from Lemma 3.4. Let $\mathrm{cost}_k^{T}(D)$, $\mathrm{cost}_k^{T\prime}(D)$, and $\mathrm{OPT}_k^{T}(D)$ be defined analogously to (3), (4), and (5), with "$y\in U$" in the summations replaced by "$y\in D$", since $D$ is the demand set. That is,
$$\mathrm{cost}_k^{T}(D) = \sum_{y\in D}\min_{x\in C_0}\rho_T(x,y), \quad (8)$$
$$\mathrm{cost}_k^{T\prime}(D,C_1) = \min_{|F\cap T(v)|=1,\,\forall v\in C_1}\ \sum_{y\in D}\min_{x\in F}\rho_T(x,y), \quad (9)$$
$$\mathrm{OPT}_k^{T}(D) = \min_{F\subset D,\,|F|=k}\ \sum_{y\in D}\min_{x\in F}\rho_T(x,y) \equiv \min_{C_1'}\ \mathrm{cost}_k^{T\prime}(D,C_1'). \quad (10)$$
We have the following.
Lemma C.2. $\mathrm{cost}_k^{T}(D) \leq 10\,\mathrm{OPT}_k^{T}(D) + 10ck\epsilon^{-1}\triangle\log n$ with probability $1 - 4k/n^c$.
Proof. The result follows by combining Lemma C.4 and Lemma C.5 below, and applying a union bound.
Lemma C.3. For any node $v$ in $T$, with probability $1 - 1/n^c$, $|\hat{N}_v\cdot 2^{h_v} - N_v\cdot 2^{h_v}| \leq c\epsilon^{-1}\triangle\log n$.
Proof. Since $\hat{N}_v = N_v + \mathrm{Lap}(2^{L-h_v}/\epsilon)$, we have $\Pr[|\hat{N}_v - N_v| \geq x/\epsilon] = \exp(-x/2^{L-h_v})$. As $L = \log\triangle$, we have $\Pr[|\hat{N}_v - N_v| \geq x\triangle/(2^{h_v}\epsilon)] \leq \exp(-x)$. Hence, for some constant $c > 0$,
$$\Pr[|\hat{N}_v\cdot 2^{h_v} - N_v\cdot 2^{h_v}| \leq c\epsilon^{-1}\triangle\log n] \geq 1 - \exp(-c\log n) = 1 - 1/n^c.$$
Lemma C.4 (DP Subtree Search). With probability $1 - 2k/n^c$, $\mathrm{cost}_k^{T\prime}(D) \leq 5\,\mathrm{OPT}_k^{T}(D) + 4ck\epsilon^{-1}\triangle\log n$.
Proof. The proof is similar to that of Lemma 3.5. Consider the intermediate output of Algorithm 2, $C_1 = \{v_1,v_2,\dots,v_k\}$, which is the set of roots of the minimal disjoint subtrees each containing exactly one output center of $C_0$. Suppose one of the optimal "root sets" that minimizes (4) is $C_1^* = \{v_1',v_2',\dots,v_k'\}$, and assume $C_1\neq C_1^*$. By the same argument as in the proof of Lemma 3.5, we consider some $i,j$ such that $v_i\neq v_j'$, where neither of $v_i$ and $v_j'$ is a descendant of the other. By the construction of $C_1$, we know that $\mathrm{score}(v_j')\leq\min\{\mathrm{score}(v_i),\, i=1,\dots,k\}$ when $v_j'\in C_1^*\setminus C_1$. Consider the swap between $C_1$ and $C_1^*$. By the definition of tree distance, we have
$$\mathrm{OPT}_k^{T}(D) \geq \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}2^{h_{v_i}},$$
since $\{T(v_i),\, v_i\in C_1\setminus C_1^*\}$ does not contain any center of the optimal solution determined by $C_1^*$ (which is also the optimal "root set" for $\mathrm{OPT}_k^{T}$). Consider the optimal clustering with center set $C^* = \{c_1^*,\dots,c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree rooted at $c_j'$), and let $S_j'$ be the leaves assigned to $c_j^*$. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than their distance to any center in $C_1$. Let $P_j$ denote the union of paths between leaves of $S_j$ and their closest centers in $C_1$. Let $v_j''$ be the node in $P_j$ with the highest level satisfying $T(v_j'')\cap C_1 = \emptyset$. The score of $v_j''$ is $2^{h_{v_j''}}N(v_j'')$. That means swapping a center $v_j'$ into $C_1$ can reduce $\mathrm{cost}_k^{T\prime}(D)$ by at most $4\cdot 2^{h_{v_j''}}N(v_j'')$ (the tree distance between any leaf in $S_j$ and its closest center in $C_1$ is at most $4\cdot 2^{h_{v_j''}}$). We write $v_j'$ for $v_j''$ in the remainder of this proof for simplicity. Summing all the swaps over $C_1^*\setminus C_1$, we obtain
$$\mathrm{cost}_k^{T\prime}(D) - \mathrm{OPT}_k^{T}(D) \leq 4\sum_{v_j'\in C_1^*\setminus C_1} N_{v_j'}2^{h_{v_j'}}, \qquad \mathrm{OPT}_k^{T}(D) \geq \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}2^{h_{v_i}}.$$
Applying a union bound with Lemma C.3, with probability $1 - 2/n^c$, we have $N_{v_j'}2^{h_{v_j'}} - N_{v_i}2^{h_{v_i}} \leq 2c\epsilon^{-1}\triangle\log n$. Consequently, with probability $1 - 2k/n^c$,
$$\mathrm{cost}_k^{T\prime}(D) \leq 5\,\mathrm{OPT}_k^{T}(D) + 4c|C_1^*\setminus C_1|\epsilon^{-1}\triangle\log n \leq 5\,\mathrm{OPT}_k^{T}(D) + 4ck\epsilon^{-1}\triangle\log n.$$
Lemma C.5 (DP Leaf Search). With probability $1 - 2k/n^c$, Algorithm 4 produces initial centers with $\mathrm{cost}_k^{T}(D) \leq 2\,\mathrm{cost}_k^{T\prime}(D) + 2ck\epsilon^{-1}\triangle\log n$.
Proof.
The proof strategy follows Lemma 3.6. We first consider one subtree with root $v$. Let $\mathrm{cost}_1^{T\prime}(v,D)$ denote the optimal k-median cost within the point set $T(v)$ with one center in the 2-HST:
$$\mathrm{cost}_1^{T\prime}(v,D) = \min_{x\in T(v)} \sum_{y\in T(v)\cap D} \rho_T(x,y). \quad (11)$$
Suppose $v$ has more than one child $u, w, \dots$, the optimal solution of $\mathrm{cost}_1^{T\prime}(v,D)$ chooses a leaf node in $T(u)$, and our HST initialization algorithm picks a leaf of $T(w)$. If $u = w$, then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$. We have the following two observations:
• Since one needs to pick a leaf of $T(u)$ to minimize $\mathrm{cost}_1^{T\prime}(v,D)$, we have $\mathrm{cost}_1^{T\prime}(v,D) \geq \sum_{x\in ch(v),\, x\neq u} N_x \cdot 2^{h_x}$, where $ch(v)$ denotes the children nodes of $v$.
• By our greedy strategy, $\mathrm{cost}_1^{T}(v,D) \leq \sum_{x\in ch(v)} N_x \cdot 2^{h_x} \leq \mathrm{cost}_1^{T\prime}(v,D) + N_u \cdot 2^{h_u}$.
As $h_u = h_w$, leveraging Lemma C.3, with probability $1 - 2/n^c$,
$$2^{h_u}(N_u - N_w) \leq 2^{h_u}(\hat{N}_u - \hat{N}_w) + 2c\epsilon^{-1}\triangle \log n \leq 2c\epsilon^{-1}\triangle \log n,$$
since our algorithm picks subtree roots with the highest (noisy) scores. Then we have $\mathrm{cost}_1^{T}(v,D) \leq \mathrm{cost}_1^{T\prime}(v,D) + N_w \cdot 2^{h_u} + 2c\epsilon^{-1}\triangle \log n \leq 2\,\mathrm{cost}_1^{T\prime}(v,D) + 2c\epsilon^{-1}\triangle \log n$ with high probability. Lastly, applying a union bound over the $k$ disjoint subtrees gives the desired result.
C.5 PROOF OF THEOREM 4.3
Proof. The privacy analysis is straightforward, using the composition theorem (Theorem C.1). Since the sensitivity of $\mathrm{cost}(\cdot)$ is $\triangle$, each swap iteration consumes privacy budget $\epsilon/2(T+1)$. Also, we spend another $\epsilon/2(T+1)$ of the budget for picking an output. Hence, the total privacy budget for local search is $\epsilon/2$. Algorithm 4 takes an $\epsilon/2$ DP budget for initialization, so the total privacy is $\epsilon$. The analysis of the approximation error follows Gupta et al. (2010), where the initial cost is reduced by our private HST method. We need the following two lemmas.
Lemma C.6 (Gupta et al. (2010)). Assume the solution to the optimal utility is unique. For any output $o \in O$ of the $2\triangle\epsilon$-DP exponential mechanism on dataset $D$, it holds for all $t > 0$ that
$$\Pr\big[q(D,o) \leq \max_{o'\in O} q(D,o') - (\ln|O| + t)/\epsilon\big] \leq e^{-t},$$
where $|O|$ is the size of the output set.
Lemma C.7 (Arya et al. (2004)). For any set $F \subseteq D$ with $|F| = k$, there exists some swap $(x,y)$ such that the local search method admits
$$\mathrm{cost}_k(F,D) - \mathrm{cost}_k(F - \{x\} + \{y\}, D) \geq \frac{\mathrm{cost}_k(F,D) - 5\,\mathrm{OPT}(D)}{k}.$$
From Lemma C.7, we know that when $\mathrm{cost}_k(F_i,D) > 6\,\mathrm{OPT}(D)$, there exists a swap $(x,y)$ s.t. $\mathrm{cost}_k(F_i - \{x\} + \{y\}, D) \leq (1 - \frac{1}{6k})\,\mathrm{cost}_k(F_i,D)$. At each iteration, there are at most $n^2$ possible outputs (i.e., possible swaps), i.e., $|O| = n^2$. Using Lemma C.6 with $t = 2\log n$, for all $i$,
$$\Pr\Big[\mathrm{cost}_k(F_{i+1},D) \geq \mathrm{cost}_k(F^*_{i+1},D) + \frac{4\log n}{\epsilon'}\Big] \leq 1/n^2,$$
where $\mathrm{cost}_k(F^*_{i+1},D)$ is the minimum cost achievable by any single swap from $F_i$. Hence, as long as $\mathrm{cost}(F_i,D) > 6\,\mathrm{OPT}(D) + \frac{24k\log n}{\epsilon'}$, the cost improves by at least a factor of $(1 - \frac{1}{6k})$. By Theorem 4.2, we have $\mathrm{cost}_k(F_1,D) \leq C(\log n)(6\,\mathrm{OPT}(D) + 6k\triangle\log n/\epsilon)$ for some constant $C > 0$. Let $T = 6Ck\log\log n$. We have
$$\mathbb{E}[\mathrm{cost}(F_{T+1},D)] \leq (6\,\mathrm{OPT}(D) + 6k\epsilon^{-1}\triangle\log n)\,C(\log n)(1 - 1/6k)^{6Ck\log\log n} \leq 6\,\mathrm{OPT}(D) + 6k\epsilon^{-1}\triangle\log n \leq 6\,\mathrm{OPT}(D) + \frac{24k\log n}{\epsilon'}.$$
Therefore, with probability at least $1 - T/n^2$, there exists an $i \leq T$ s.t. $\mathrm{cost}(F_i,D) \leq 6\,\mathrm{OPT}(D) + \frac{24k\log n}{\epsilon'}$. Then, by Lemma C.6 applied to the final output selection, one picks an $F_j$ with additional additive error $4\ln n/\epsilon'$ over $\min\{\mathrm{cost}(F_j,D),\, j = 1,2,\dots,T\}$ with probability $1 - 1/n^2$. Consequently, the expected additive error is $24k\log n/\epsilon' + 4\log n/\epsilon' = O(\epsilon^{-1}k^2\triangle(\log\log n)\log n)$, with probability $1 - 1/\mathrm{poly}(n)$.
D EXTEND HST INITIALIZATION TO k-MEANS
Naturally, our HST method can also be applied to the k-means clustering problem. In this section, we extend the HST to k-means and provide a brief analysis similar to that for k-median. We present the analysis in the non-private case, which can then be easily adapted to the private case. Define the following costs for k-means:
$$\mathrm{cost}_{km}^{T}(U) = \sum_{y\in U}\min_{x\in C_0}\rho_T(x,y)^2, \quad (12)$$
$$\mathrm{cost}_{km}^{T\prime}(U,C_1) = \min_{|F\cap T(v)|=1,\,\forall v\in C_1}\ \sum_{y\in U}\min_{x\in F}\rho_T(x,y)^2, \quad (13)$$
$$\mathrm{OPT}_{km}^{T}(U) = \min_{F\subset U,\,|F|=k}\ \sum_{y\in U}\min_{x\in F}\rho_T(x,y)^2 \equiv \min_{C_1'}\ \mathrm{cost}_{km}^{T\prime}(U,C_1'). \quad (14)$$
For simplicity, we will write $\mathrm{cost}_{km}^{T\prime}(U)$ for $\mathrm{cost}_{km}^{T\prime}(U,C_1)$ when clear from context. Here, $\mathrm{OPT}_{km}^{T}$ (14) is the cost of the global optimal solution under the 2-HST metric.
Lemma D.1 (Subtree search). $\mathrm{cost}_{km}^{T\prime}(U) \leq 17\,\mathrm{OPT}_{km}^{T}(U)$.
Proof. The analysis is similar to the proof of Lemma 3.5, so we mainly highlight the differences, reusing the notation of Lemma 3.5. Consider the clustering with center set $C^* = \{c_1^*, c_2^*, \dots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree rooted at $c_j'$), and let $S_j'$ be the leaves assigned to $c_j^*$ in the optimal k-means clustering under the tree metric. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than their distance to any center in $C_1$. Let $P_j$ denote the union of paths between leaves of $S_j$ and their closest centers in $C_1$. Let $v_j''$ be the node in $P_j$ with the highest level satisfying $T(v_j'')\cap C_1 = \emptyset$. The score of $v_j''$ is $2^{h_{v_j''}}N(v_j'')$. That means swapping a center $v_j'$ into $C_1$ can reduce $\mathrm{cost}_{km}^{T\prime}(U)$ by at most $(4\cdot 2^{h_{v_j''}})^2 N(v_j'')$. We write $v_j'$ for $v_j''$ in the remainder of this proof for simplicity. By our reasoning, summing all the swaps over $C_1^*\setminus C_1$ gives
$$\mathrm{cost}_{km}^{T\prime}(U) - \mathrm{OPT}_{km}^{T}(U) \leq \sum_{v_j'\in C_1^*\setminus C_1} N_{v_j'}\cdot (4\cdot 2^{h_{v_j'}})^2, \qquad \mathrm{OPT}_{km}^{T}(U) \geq \sum_{v_i\in C_1\setminus C_1^*} N_{v_i}(2^{h_{v_i}})^2.$$
Also, based on our discussion of Case 1, it holds that $N_{v_j'}2^{h_{v_j'}} - N_{v_i}2^{h_{v_i}} \leq 0$. Summing these together, we have $\mathrm{cost}_{km}^{T\prime}(U) \leq 17\,\mathrm{OPT}_{km}^{T}(U)$.
Next, we show that the greedy leaf search strategy (Algorithm 3) only incurs an extra multiplicative error of 2.
Lemma D.2 (Leaf search). $\mathrm{cost}_{km}^{T}(U) \leq 2\,\mathrm{cost}_{km}^{T\prime}(U)$.
Proof. Since the subtrees in $C_1$ are disjoint, it suffices to consider one subtree with root $v$. With a slight abuse of notation, let $\mathrm{cost}_1^{T\prime}(v,U)$ denote the optimal k-means cost within the point set $T(v)$ with one center in the 2-HST:
$$\mathrm{cost}_1^{T\prime}(v,U) = \min_{x\in T(v)}\ \sum_{y\in T(v)}\rho_T(x,y)^2, \quad (15)$$
which is the optimal cost within the subtree. Suppose $v$ has more than one child $u, w, \dots$ (otherwise the optimal center is clear). Suppose the optimal solution of $\mathrm{cost}_1^{T\prime}(v,U)$ chooses a leaf node in $T(u)$, and our HST initialization algorithm picks a leaf of $T(w)$. If $u = w$, then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$. We have the following two observations:
• Since one needs to pick a leaf of $T(u)$ to minimize $\mathrm{cost}_1^{T\prime}(v,U)$, we have $\mathrm{cost}_1^{T\prime}(v,U) \geq \sum_{x\in ch(v),\,x\neq u} N_x\cdot (2^{h_x})^2$, where $ch(v)$ denotes the children nodes of $v$.
• By our greedy strategy, $\mathrm{cost}_1^{T}(v,U) \leq \sum_{x\in ch(v)} N_x\cdot (2^{h_x})^2 \leq \mathrm{cost}_1^{T\prime}(v,U) + N_u\cdot (2^{h_u})^2$.
Since $h_u = h_w$, we have $2^{h_u}(N_u - N_w) \leq 0$, since our algorithm picks subtree roots with the highest scores. Then we have $\mathrm{cost}_1^{T}(v,U) \leq \mathrm{cost}_1^{T\prime}(v,U) + N_w\cdot (2^{h_w})^2 \leq 2\,\mathrm{cost}_1^{T\prime}(v,U)$. Since the subtrees in $C_1$ are disjoint, the union of centers for $\mathrm{OPT}_1^{T}(v,U)$, $v\in C_1$ forms the optimal center set of size $k$. Note that for any data point $p\in U\setminus C_1$, the tree distance $\rho_T(p,f)$ is the same for every $f$ that is a leaf node of $T(v)$, $v\in C_1$.
That is, the choice of leaf in $T(v)$ as the center does not affect the k-means cost under the 2-HST metric. Therefore, summing over the $k$ disjoint subtree costs completes the proof.
We are now ready to state the error bound for our proposed HST initialization (Algorithm 2), which is a direct combination of Lemma D.1 and Lemma D.2.
Theorem D.3 (HST initialization). $\mathrm{cost}_{km}^{T}(U) \leq 34\,\mathrm{OPT}_{km}^{T}(U)$.
We have the following result based on Lemma 3.4.
Theorem D.4. In a general metric space, $\mathbb{E}[\mathrm{cost}_{km}(U)] = O\big((\min\{\log n, \log\triangle\})^2\big)\,\mathrm{OPT}_{km}(U)$.
1. What is the focus and contribution of the paper regarding tree embedding for k-median clustering? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its efficiency, distortion, and privacy guarantee? 3. Do you have any concerns or suggestions regarding the clarity and quality of the paper's content? 4. How does the reviewer assess the originality and significance of the paper's contributions? 5. Are there any limitations or potential improvements regarding the paper's experimental results and comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies an initialization method based on tree embedding for k-median clustering, with a differential privacy guarantee. By initialization, it means a lightweight approximation algorithm that finds k initial center points, to be used with iterative approximation algorithms such as local search. Since it is only an initialization, the approximation ratio does not need to be heavily optimized; it is the efficiency that is important. This paper focuses on the general metric space case. The main technical idea is to impose a tree embedding, which has O(log n) distortion with respect to the true distance, and find a set of representative centers directly on the tree. The overall running time is \tilde{O}(nd), which is independent of k. Differential privacy can also be guaranteed, with a slight modification to the non-private version, by adding noise to some intermediate variables. The differentially private version, if combined with the previous work of Gupta et al., obtains a slightly improved additive error bound. Experiments have been conducted to validate the performance of the new initialization method, including against the widely used k-means++. The new method demonstrates better performance overall. Strengths And Weaknesses Strength: While there are many recent works on the topic, I find the general metric case less studied, and this paper fills this gap, which is timely. The fact that the running time is independent of k can be crucial for some applications, and this is something the k-means++ method cannot achieve. The tree embedding method is easy to implement and generally applicable, which is an advantage. Weakness: The claimed improvement in ratio/error seems to be minor (for instance, log(min{\Delta, k}) vs. k-means++'s log k, and a log n -> log log n improvement in the number of iterations of local search, where there is already/still a factor of k^2). Clarity, Quality, Novelty And Reproducibility Clarity: The clarity is fine overall, but I have the following detailed comments. Tree embedding in clustering-like problems has also been considered before, e.g., in "Facility location problems in differential privacy model revisited" by Esencayi et al. Please add a comparison/discussion. Also, I find "Differentially Private Clustering: Tight Approximation Ratios" by Ghazi et al. relevant but not cited. At the end of page one, you mentioned "k-median++". It's fine to call it "k-median++", but it is actually "k-means++", so one needs to clarify at least once that k-median++ is k-means++ adapted to the k-median case. It seems you use \Delta as the diameter of the dataset. However, this makes sense only when you normalize the minimum distance between every pair of distinct points. Unfortunately, I didn't see this mentioned (I might have missed it). In the first bullet of page 2, you mentioned that \Delta = O(d) is the typical case of bounded data; I don't agree, and in my opinion the bounded case should be \Delta = poly(n). The definition of (2) is confusing, since the expression only replaces U with D. A suggestion is to avoid writing (2) again and simply say that (1) with a differential privacy constraint is the DP k-median. In the experiments, you refer to the "left" and "right" columns in Fig 2 and Fig 3. However, I only see one column. Maybe you can use sub-captions? Quality: The major concern is the experiments. The data is mostly from simulated sources, even though some of the simulation is based on real datasets.
The suggestion is to experiment on more real datasets to make the results more convincing. For instance, for the graph data, what about road network data, such as OpenStreetMap? For Euclidean data, MNIST seems to be a small dataset, and experimenting on a dataset of higher dimension and larger size would be helpful. The running time comparison, which is an important aspect for initialization/seeding algorithms, is not provided. Originality: The study of the general metric case is timely, but the techniques are somewhat standard, and I wouldn’t consider the result particularly novel since it is an immediate application of tree embedding, especially provided that similar ideas have been used in differential privacy (see, e.g., “Facility location problems in differential privacy model revisited” by Esencayi et al.).
ICLR
Title Meta-Continual Learning Via Dynamic Programming Abstract Meta continual learning algorithms seek to train a model when faced with similar tasks observed in a sequential manner. Despite promising methodological advancements, there is a lack of theoretical frameworks that enable analysis of learning challenges such as generalization and catastrophic forgetting. To that end, we develop a new theoretical approach for meta continual learning (MCL) where we mathematically model the learning dynamics using dynamic programming, and we establish conditions of optimality for the MCL problem. Moreover, using the theoretical framework, we derive a new dynamic-programming-based MCL method that adopts stochastic-gradient-driven alternating optimization to balance generalization and catastrophic forgetting. We show that, on MCL benchmark data sets, our theoretically grounded method achieves accuracy better than or comparable to that of existing state-of-the-art methods. 1 INTRODUCTION The central theme of meta continual learning (MCL) is to learn on similar tasks revealed sequentially. In this process, two fundamental challenges must be addressed: catastrophic forgetting of the previous tasks and generalization to new tasks (Javed and White, 2019). In order to address these challenges, several approaches (Javed and White, 2019; Beaulieu et al., 2020; Riemer et al., 2018) have been proposed in the literature that build on the second-order, derivative-driven approach introduced in Finn et al. (2017). Despite the promising prior methodological advancements, existing MCL methods suffer from three key issues: (1) there is a lack of a theoretical framework to systematically design and analyze MCL methods; (2) data samples representing the complete task distribution must be known in advance (Finn et al., 2017; Javed and White, 2019; Beaulieu et al., 2020), often an impractical requirement in real-world environments where the tasks are observed sequentially; and (3) the use of fixed representations (Javed and White, 2019; Beaulieu et al., 2020) limits the ability to handle significant changes in the input data distribution, as demonstrated in Caccia et al. (2020). We focus on a supervised learning paradigm within MCL, and our key contributions are (1) a dynamic-programming-based theoretical framework for MCL and (2) a theoretically grounded MCL approach with convergence properties that compare favorably with existing MCL methods. Dynamic-programming-based theoretical framework for MCL: In our approach, the problem is first posed as the minimization of a cost function that is integrated over the lifetime of the model. However, at any time t, the future tasks are not available, and the integral calculation becomes intractable. Therefore, we use Bellman's principle of optimality (Bellman, 2015) to recast the MCL problem as minimizing the sum of the catastrophic forgetting cost on the previous tasks and the generalization cost on the new task. Next, we theoretically analyze the impact of these costs on the MCL problem using tools from the optimal control literature (Lewis et al., 2012). Furthermore, we demonstrate that the MCL approaches proposed in (Finn et al., 2017; Beaulieu et al., 2020; Javed and White, 2019) can be derived from the proposed framework: our theoretical framework thus unifies different MCL approaches, and we discuss its connection to other MCL methods in the literature by showing how they can be derived from it.
Theoretically grounded MCL approach: We derive a theoretically grounded dynamic-programming-based meta continual learning (DPMCL) approach. The generalization cost is computed by training and evaluating the model on the given new task data. The catastrophic forgetting cost is computed by evaluating the model on the task memory (previous tasks) after the model is trained on the new task. We alternately minimize the generalization and catastrophic forgetting costs for a predefined number of iterations to achieve a balance between the two (see Fig. 2 in the appendix for an overview). We analyze the performance of the DPMCL approach experimentally on classification and regression benchmark data sets. 2 PROBLEM FORMULATION We focus on the supervised MCL setting. We let R denote the set of real numbers and use boldface to denote vectors and matrices. We use ‖.‖ to denote the Euclidean norm for vectors and the Frobenius norm for matrices. The lifetime of the model is given by [0,Γ] : Γ ∈ R, where Γ is the maximum lifetime of the model. We let p(T) be the distribution over all the tasks in the interval [0,Γ]. Based on the underlying processes that generate the tasks, the task arrival can be continuous time (CT) or discrete time (DT). For example, consider the system identification problem in a stochastic process modeled by ordinary differential equations (ODEs) or partial differential equations (PDEs), where the tasks, represented by the states of the process x(t), are generated in CT through the ODE. On the other hand, in the typical supervised learning setting, consider an image classification problem where each task comprises a set of images sampled from a discrete process, so the tasks arrive in DT. As in many previous MCL works, we focus on DT MCL. However, we develop our theory for the CT MCL setting first because a CT MCL approach is broadly applicable to many domains. In Section 3.2 we provide a DPMCL approach for the DT MCL setting by discretizing our theory. A task T(t) is a tuple of input-output pairs {X(t), Y(t)} provided in the interval [t, t + ∆t] for all t ∈ [0,Γ), ∆t ∈ R. We denote (x(t), y(t)) ∈ {X(t), Y(t)}. We define a parametric model g(.) with parameters θ̂ such that ŷ(t) = g(x(t); θ̂(t)). Although we will use neural networks, any parametric model can be utilized with our framework. Our MCL problem comprises two challenges: catastrophic forgetting on previous tasks and generalization to new tasks. The catastrophic forgetting cost measures the error of the model on all the previous tasks; the generalization cost measures the error of the model on the new task. The goal in MCL is to minimize both the catastrophic forgetting cost and the generalization cost for every t ∈ [0,Γ). Let us split the interval [0,Γ) as [0, t] ∪ (t,Γ), where the intervals [0, t] and (t,Γ) comprise the previous tasks and the new tasks, respectively (i.e., the collection of all the tasks that can be observed in the respective intervals). To take all the previous tasks into account, we define the instantaneous catastrophic forgetting cost J(t; θ̂(t)) to be the integral of the loss function ℓ(τ) for any t ∈ [0,Γ) as J(t; θ̂(t)) = ∫_{τ=0}^{t} γ(τ) ℓ(τ) dτ, (1) where ℓ(τ) is computed on task T(τ), with γ(τ) being a parameter describing the contribution of this task to the integral. The value of γ(τ) is critical for the integral to be bounded (details are provided in Lemma 1). Given a new task, the goal is to perform well on the new task as well as maintain the performance on the previous tasks.
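As a concrete illustration of the role of γ(τ) in Eq. (1), anticipating Lemma 1 and Corollary 1 of Section 3.1 (this sketch is ours, not part of the paper's formal development): suppose the instantaneous loss is bounded as ε ≤ ℓ(τ) ≤ L with ε > 0. Then

```latex
\begin{align*}
\gamma(\tau) = 1:\quad
  & J(t) = \int_0^t \ell(\tau)\,d\tau \ \ge\ \epsilon\, t \ \longrightarrow\ \infty
    \quad \text{as } t \to \infty, \\
\gamma(\tau) = e^{-\tau}:\quad
  & J(t) = \int_0^t e^{-\tau}\,\ell(\tau)\,d\tau \ \le\ L\,(1 - e^{-t}) \ \le\ L,
\end{align*}
```

so uniform weighting of all tasks makes the forgetting cost diverge, whereas exponential down-weighting of older tasks keeps it bounded for any lifetime.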
To meet the goal of performing well on the new task while maintaining performance on the previous ones, we write V(t; θ̂(t)) as the cumulative cost (combination of the catastrophic forgetting cost and the generalization cost) integrated over [t,Γ). We therefore seek to minimize V(t; θ̂(t)) and obtain the optimal value, V*(t), by solving the problem V*(t) = min_{θ̂(τ)∈Ω: t≤τ≤Γ} ∫_{τ=t}^{Γ} J(τ; θ̂(τ)) dτ, (2) where Ω is the compact set required to initialize the parameters of the model. In a non-convex optimization problem, the compact set describes the boundaries of the solution space. It is also assumed that there is at least one minimum of the optimization space in this set. If the parameters are initialized from within the compact set, the optimization may converge to the local solution within the set. MCL is a sequential decision making problem where the goal is to obtain a sequence of parameters in the interval (t,Γ) as described in Eq. (2). In the MCL context, whenever a new task is observed at τ ∈ (t,Γ), we seek to make a decision (find a parameter set θ̂(τ) ∈ Ω) such that θ̂(τ) is optimal for all the previous tasks and the new task in the interval [0, τ]. Consequently, for each new task, we obtain a new parameter set, and the solution to the MCL problem is provided by a sequence of parameters (decisions). For each of these decisions, a task is counted exactly once, and its contribution to the cost is determined by the choice of γ. The optimization problem in Eq. (2) in its current form is intractable for two reasons. First, note that V*(t) is the optimal cost value over the complete lifetime of the model [0,Γ). Since we have only the data corresponding to all the tasks in the interval [0, t], solving Eq. (2) in its current form is intractable. Second, it is not feasible to maintain a parameter set for each τ ∈ (t,Γ). To circumvent these issues, we take a dynamic programming view of the MCL problem. We introduce a new theoretical framework where we model the learning process as a dynamical system and use Bellman's principle of optimality to simplify the MCL problem. Furthermore, we derive conditions under which the learning process is stable and optimal, using tools from the optimal control literature (Lewis et al., 2012). 3 THEORETICAL META CONTINUAL LEARNING FRAMEWORK We will recast the problem defined in Eq. (2) using ideas from dynamic programming, specifically Bellman's principle of optimality (Lewis et al., 2012). We treat the MCL problem as a dynamical system and describe the system using the following PDE: −∂V*(t)/∂t = min_{θ̂(t)∈Ω} [ J(t; θ̂(t)) + J_N(t; θ̂(t)) + (V*_{θ̂(t)})^T ∆θ̂ ] + (V*_{x(t)})^T ∆x(t), (3) where V*(t) describes the optimal cost (the left-hand side of Eq. (2)) and (.)^T refers to the transpose operator. The notation A_{(.)} denotes the partial derivative of A with respect to (.); for instance, V*_{θ̂(t)} = ∂V*(t; θ̂(t))/∂θ̂(t). The full derivation of Eq. (3) from Eq. (2) is provided in Appendix A.1. Note that since y(t) is a function of x(t), the changes in the optimal cost due to y(t) are captured by (V*_{x(t)})^T ∆x(t). Eq. (3) is also known as the Hamilton-Jacobi-Bellman equation in optimal control (Lewis et al., 2012), with the key difference that there is an extra term to quantify the changes due to the new task. Intuitively, the PDE completely describes the dynamics of learning for the MCL problem in the period [0,Γ]. Specifically, the left-hand side of Eq. (3), ∂V*(t)/∂t, describes the change in the global solution of the MCL problem, V*(t), with respect to time t.
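Before the right-hand side is unpacked term by term, the passage from Eq. (2) to Eq. (3) can be outlined with a one-step Bellman decomposition followed by a first-order Taylor expansion. The sketch below is an informal outline under smoothness assumptions; the full derivation is in Appendix A.1:

```latex
\begin{align*}
V^*(t) &= \min_{\hat{\theta}(\tau)\in\Omega:\, t\le\tau\le\Gamma}
  \left[ \int_{t}^{t+\Delta t} J(\tau;\hat{\theta}(\tau))\,d\tau
  \;+\; V^*(t+\Delta t) \right], \\
V^*(t+\Delta t) &\approx V^*(t)
  \;+\; \frac{\partial V^*(t)}{\partial t}\,\Delta t
  \;+\; \big(V^*_{\hat{\theta}(t)}\big)^{T}\Delta\hat{\theta}
  \;+\; \big(V^*_{x(t)}\big)^{T}\Delta x(t).
\end{align*}
```

Substituting the expansion into the decomposition, cancelling V*(t) on both sides, and collecting the instantaneous costs incurred over (t, t + ∆t] (which is where the new-task cost J_N enters) yields the form of Eq. (3).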
The right-hand side describes the components of the MCL problem that impact this global solution. Note from the right-hand side of Eq. (3) that this impact is quantified by four terms: the cost contribution from all the previous tasks, J(t; θ̂(t)); the cost due to the new task, J_N(t; θ̂(t)); the change in the optimal cost due to the change in the parameters, (V*_{θ̂(t)})^T ∆θ̂; and the change in the optimal cost due to the change in the input (introduction of a new task), (V*_{x(t)})^T ∆x(t). Since the PDE completely describes our problem, the solution to the MCL problem can be obtained by finding the parameter θ̂(t) that minimizes the right-hand side of Eq. (3). Specifically, we seek to solve the following optimization problem: min_{θ̂(t)∈Ω} [ J(t; θ̂(t)) + J_N(t; θ̂(t)) + (V*_{θ̂(t)})^T ∆θ̂ ], s.t. ∂V*(t)/∂t + (V*_{x(t)})^T ∆x(t) = 0. (4) Solving the problem in Eq. (4) is equivalent to minimizing the impact of introducing a new task on the optimal cost. Observe that the intractable problem in Eq. (2) has been posed as a PDE-constrained optimization in Eq. (4). Note that in Eq. (2), the global solution is achieved when a series of parameters is obtained. In the optimization problem in Eq. (4), on the other hand, we obtain a parameter at time t that achieves the global solution under certain assumptions on the parameters and data. Although this solution rests on assumptions on the data, these assumptions can be satisfied in practice. More details about the convergence of this optimization and the assumptions involved are provided in Theorem 1 in the next section. 3.1 ANALYSIS Our formalism has two critical elements: γ(t), which quantifies the contribution of each task to the catastrophic forgetting cost, and the impact of the change in the input data distribution ∆x on learning, specifically while adapting to new tasks. We will analyze them next. Impact of γ(t) on catastrophic forgetting cost: Since the cost in Eq. (1) is an integral of cost contributions from all the tasks, it is critical that this integral converges. In other words, we seek to understand whether, when all the tasks provide nonzero values to the overall loss, the cost is bounded and the optimization problem can be solved. The existence of the convergence point depends directly on the contribution of each task, determined through γ(t). We therefore present Lemma 1 and Corollary 1 (the complete statements and proofs can be found in Appendix A.2). In Lemma 1, we demonstrate that when all tasks in an MCL problem provide a nonzero cost, it is not possible to maintain equivalent performance on all the tasks. Specifically, we show that when γ(τ) = 1 for all τ, that is, when the contribution of each task to the cost is positive and equal, the integral diverges as the number of tasks tends to ∞. This phenomenon has been observed empirically (Lin, 1992). However, by choosing γ(t) appropriately, we can control the catastrophic forgetting by selecting which tasks to forget. One reasonable solution is to give older tasks less priority and new tasks more priority; Corollary 1 shows that this yields a cost function that is both bounded and convergent. An example of such a γ is γ(τ) = e^{−τ}, τ ∈ [0,Γ]. However, any choice of γ(t) that keeps J(t; θ̂(t)) bounded is reasonable. The second most important component of our approach is the impact of the change in the input (∆x) on learning. To substantiate this, we present Theorem 1 (the complete statement and proof can be found in Appendix A.2).
Theorem 1 shows the convergence of a gradient-based solution to the MCL problem under assumptions on the input, the gradient, and the learning rate. Specifically, we show through Lyapunov principles that J(t; θ̂(t)) (the cost on all the previous tasks) decreases as t → ∞ and ultimately achieves a value less than β. In Theorem 1, there are four main assumptions. The first is a consequence of Corollary 1: the contribution of each task to the cost must be chosen such that the cost is bounded and convergent. The second is the assumption of a compact set Ω. This assumption implies that if the weights are initialized from within the compact set Ω, the optimization converges to a local minimum. Third, we assume ‖J_θ̂(t)‖ > 0; the degenerate condition ‖J_θ̂(t)‖ = 0 is well known in the literature as the vanishing gradient problem (Pascanu et al., 2013). Impact of ∆x(t) on learning: The fourth assumption, that is, ‖J_x‖ > 0, is important to the proof and directly explains how ∆x(t) can impact the validity of Theorem 1 (and thus the convergence of the approach). Note that if ‖J_x‖ = 0, the value of the cost J will not change even when the input changes, ∆x(t) > 0. In that case, Theorem 1 will not hold and the learning will stagnate. To give an example, consider the MNIST dataset with a total of 10 classes, where the solution of the MCL problem is to predict efficiently on all 10 classes. Consider now the case when each task is created randomly to include exactly one class and each task is shown to the model sequentially. If the model only experiences classes 1 through 5 and does not experience classes 6 through 10 (by virtue of improper sampling), the information provided to the model is not informative enough to perform well on all 10 classes. Thus, although the model is perfect in predicting classes 1 through 5, so that ‖J_x‖ = 0, the MCL problem has not reached the global solution. Consequently, the learning process has stagnated. In control theory, this condition is known as persistence of excitation (Lewis et al., 1998). On the other hand, a large change in the input data distribution presents issues in the learning process as well. Note that for Theorem 1 to hold, α(t) > 0; therefore, ‖J_x‖‖∆x(t)‖ < 1. Let ‖∆x(t)‖ ≤ b_x, where b_x is the upper bound on the change in the input data distribution. If b_x is large (e.g., going from predicting on images to understanding text), the condition ‖J_x‖‖∆x(t)‖ < 1 will be violated, and our approach will be unstable. We can, however, adapt our model to the change in the input data distribution ∆x(t) exactly if we can explicitly track the change in the input. This type of adaptation can be done easily when the process generating x(t) can be described by an ODE or PDE. In traditional supervised learning settings, however, such a description is not possible. This issue highlights the need for strong representation learning methodologies, where a good representation over all the tasks can minimize the impact of changes in ∆x(t) on the performance of the MCL problem (Javed and White, 2019; Beaulieu et al., 2020). Currently, in the literature, it is common to control the magnitude of ∆x(t) through normalization procedures under the assumption that all tasks are sampled from the same distribution. Therefore, for all practical purposes, we can choose 0 ≤ α(t) ≤ (β‖J_θ̂(t)‖)^{−1}. Connection to MAML, FTML, and their variants: The optimization problem in MAML (Finn et al., 2017), FTML (Finn et al., 2019), and other variants can be obtained from Eq.
(3) by setting the third and the fourth terms to zero, which provides −∂V*(t)/∂t = min_{θ̂(t)∈Ω} [J(t; θ̂(t)) + J_N(t; θ̂(t))]. The MCL problem is split into three phases: meta training, meta testing, and testing. In meta training, we learn to generalize to a new task. In meta testing, we learn from all the previous tasks, which aims at learning over p(T) (similar to minimizing catastrophic forgetting). Finally, in the testing phase, the network predicts on a set of held-out tasks that are representative of the complete task distribution. To learn common features across all the tasks, we must optimize the first term on the right-hand side of the equation above (where a second-order derivative can be utilized). When the goal is to generalize to new tasks, one must optimize the second term on the right-hand side of the equation above. With the choice of different architectures for the neural network, all of the approaches that build on MAML and FTML, such as (Javed and White, 2019) and (Beaulieu et al., 2020), can be directly derived. For instance, to obtain the methodology in (Javed and White, 2019), we may do the following. First, the model architecture is described as a combination of a representation learning network (RLN) and a prediction learning network (PLN) such that θ̂ = [θ̂_1 θ̂_2], where the vector spaces of θ̂_1 and θ̂_2 can be considered to denote the RLN and the PLN, respectively. In the learning process, we first pre-train the RLN as an encoder by optimizing the first term in Eq. (3.1), while a new PLN is randomly initialized for each new task. Next, we may freeze the RLN and update the PLN to minimize the second term in Eq. (3.1) when new tasks are observed sequentially. Similar to Javed and White (2019), θ̂ = [θ̂_1 θ̂_2] in Beaulieu et al. (2020) is described as a combination of a neuro-modulatory network (NLM) and a prediction network (PLN). The methodology in Beaulieu et al. (2020) can be obtained by following a two-step learning procedure. First, the NLM and PLN are pre-trained to minimize the right-hand side of Eq. (3.1) with data from all the available tasks. Second, we train the prediction network on unseen tasks by optimizing the second term on the right-hand side of Eq. (3.1) while fixing the NLM. All these methods do not adopt the PDE formalism; the third and fourth terms in Eq. (3) are not explicitly included in MAML and FTML. On the other hand, the works in (Javed and White, 2019) and (Beaulieu et al., 2020) learn to represent p(T) (the distribution over all the tasks) and require a pre-training phase. To the best of our knowledge, the only work where the third and fourth terms are implicitly addressed is meta experience replay (MER) (Riemer et al., 2018). MER models the interference (forward transfer and backward transfer) due to the introduction of new tasks as a gradient alignment problem. Observe from Eq. (3) that the third term models the change in the global solution (backward transfer) with respect to the change in the weights. Furthermore, the fourth term models the change in the global solution (forward transfer) with respect to the change in the input. MER can be directly derived from Eq. (3) by optimizing the third and fourth terms with samples from the experience replay memory, to minimize interference (both forward and backward). This is very similar to DPMCL, with the key difference that, instead of approximating the optimal cost directly, MER approximates the angle between the gradients. Furthermore, this approximation is performed using Reptile (Nichol et al., 2018).
Reptile is essentially similar to constraining the change in the weights such that the third term in Eq. (3) is zero. Although this regularizes learning in the presence of new tasks so that forgetting is not large, the network can unlearn experiences due to parameter drift, especially when the new tasks are very similar to older tasks (Narendra and Annaswamy, 1987). DPMCL is a new approach derived from the presented theoretical framework. In our DPMCL approach, we provide a clear and methodical procedure, starting from Eq. (3), for deriving the weight update rule, which gives a principled way of addressing the impact of these terms and, by extension, the impact of the key challenges in the MCL setting. 3.2 DYNAMIC PROGRAMMING-BASED META CONTINUAL LEARNING (DPMCL) As a consequence of Theorem 1, the update for the parameters is provided by α(t)V_{θ̂(t)}. Since V(t; θ̂(t)) is not completely known, we have to approximate this gradient. To derive this approximation, we first rewrite Eq. (3) as −∂V*(t; θ̂(t))/∂t = min_{θ̂(t)∈Ω} [H(t; θ̂(t))], which provides the optimization problem θ̂*(t) = argmin_{θ̂(t)∈Ω} [H(t; θ̂(t))], where H(t; θ̂(t)) = J(t; θ̂(t)) + J_N(t; θ̂(t)) + (V*_{θ̂(t)})^T ∆θ̂ + (V*_{x(t)})^T ∆x(t) is the CT Hamiltonian. The solution to this optimization problem (the updates for the parameters in the network) is obtained when the derivative of the Hamiltonian is set to zero. First, we discretize the MCL problem setting. Let k be the discrete sampling instant such that t = k∆t, where ∆t is the sampling interval. Let a task T^k at instant k be sampled from p(T). Let T^k = (X^k, Y^k) be a tuple, where X^k ∈ R^{n×p} denotes the input data and Y^k ∈ R^{n×1} denotes the target labels (output). Let n be the number of samples and p the number of dimensions. Let the parametric model be given as ŷ = g(h(x; θ̂_1); θ̂_2), where the inner map h(.) is treated as a representation learning network and g(.) is the prediction network. Let θ̂ = [θ̂_1 θ̂_2] be the learnable model parameters, with the weight updates given as θ̂(k+1) = θ̂(k) − α(k) ∂/∂θ̂(k) [ J_N(k) + J_P(k) + ( J_PN(k) − J_PN(k; θ̂(k+ζ)) ) ]. (5) Our update rule has three terms (the terms inside the bracket). The first term depends on J_N, which is calculated on the new task; the second term depends on J_P and is calculated on all the previous tasks; the third term comprises J_PN and is evaluated on a combination of the previous tasks and the new task. The first term minimizes the generalization cost. Together, the second and the third terms minimize the catastrophic forgetting cost. The first two terms can be obtained by measuring J_P and J_N directly. To obtain the third term in Eq. (5), we simplify the third and fourth terms in Eq. (3). In Eq. (3), the third and the fourth terms quantify the change in the optimal cost due to the parameter updates and the change in the input, respectively. Since the boundary value for the optimal cost is the current performance of the model on a combination of all the previous tasks and the new task (how well the model performs on all the tasks observed so far), we use the current performance as an approximation of the optimal cost. This fact, combined with a finite difference approximation, results in the third term in Eq. (5). Updating with the third term in Eq. (5) regularizes the impact of the new task on the global solution. In Eq. (5), α(k) ≤ 1/(β‖J_θ̂(k)‖² + ε), where β is a user-defined parameter and ε > 0 is a small value to ensure that the denominator does not go to zero.
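To make the discrete update concrete, the sketch below implements one step of Eq. (5) in PyTorch, with rep_net and pred_net playing the roles of θ̂_1 and θ̂_2. This is an illustrative reconstruction, not the authors' code: the module names, the batch interface, and the use of plain SGD for the ζ inner steps are all assumptions.

```python
import copy
import torch

def dpmcl_update(rep_net, pred_net, loss_fn, x_new, y_new, x_prev, y_prev,
                 zeta=3, beta=1.0, inner_lr=1e-3, eps=1e-8):
    """One discrete DPMCL step following Eq. (5); names are illustrative."""
    params = list(rep_net.parameters()) + list(pred_net.parameters())

    # J_N(k): generalization cost on the new-task batch.
    J_N = loss_fn(pred_net(rep_net(x_new)), y_new)

    # J_P(k): cost on the previous-task memory batch b_P.
    J_P = loss_fn(pred_net(rep_net(x_prev)), y_prev)

    # J_PN(k): cost on the combined batch b_PN = b_P U b_N.
    x_pn = torch.cat([x_prev, x_new])
    y_pn = torch.cat([y_prev, y_new])
    J_PN = loss_fn(pred_net(rep_net(x_pn)), y_pn)

    # J_PN(k; theta(k + zeta)): approximate the optimal cost by running zeta
    # gradient steps on a copy theta_B of the prediction network, with the
    # representation output treated as a fixed input (theta_1 frozen).
    theta_B = copy.deepcopy(pred_net)
    opt_B = torch.optim.SGD(theta_B.parameters(), lr=inner_lr)
    feats = rep_net(x_pn).detach()
    for _ in range(zeta):
        opt_B.zero_grad()
        loss_fn(theta_B(feats), y_pn).backward()
        opt_B.step()
    J_PN_zeta = loss_fn(theta_B(rep_net(x_pn)), y_pn)

    # Bracketed term of Eq. (5) and its gradient w.r.t. theta = [theta_1 theta_2].
    total = J_N + J_P + (J_PN - J_PN_zeta)
    grads = torch.autograd.grad(total, params, allow_unused=True)

    # alpha(k) <= 1 / (beta * ||J_theta(k)||^2 + eps), as stated below Eq. (5).
    grad_sq = sum(g.pow(2).sum() for g in grads if g is not None)
    alpha = 1.0 / (beta * float(grad_sq) + eps)
    with torch.no_grad():
        for p, g in zip(params, grads):
            if g is not None:
                p.sub_(alpha * g)
```

Note that the algorithm described next alternates the J_N step and the forgetting step rather than combining all three terms in a single update; the combined form above follows Eq. (5) literally.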
The derivation of the update rule and the discretization are presented in Appendix A.3. The value β is the theoretical threshold on the optimal cost; we choose β as the reciprocal of the learning rate. Equipped with the gradient updates, we now describe the DPMCL algorithm. We define a new task sample D_N(k) = {X^k, Y^k} and a task memory (samples from all the previous tasks) D_P(k) ⊂ ∪_{τ=0}^{k−1} T^τ. We can approximate the required terms in our update rule, Eq. (5), using samples (batches) from D_P(k) and D_N(k). The overall algorithm consists of two steps: generalization and catastrophic forgetting (see Algorithm 1 in Appendix B.4). DPMCL comprises representation and prediction neural networks parameterized by θ̂_1 and θ̂_2, respectively. Since the vector space of θ̂ is written as a combination of θ̂_1 and θ̂_2, the theory developed for θ̂ extends to the vector space defined by the combination of θ̂_1 and θ̂_2. The solution in this space can be achieved either by coordinate descent or by alternating minimization; we choose alternating minimization. For each batch b_N ∈ D_N(k), DPMCL alternately performs generalization and catastrophic forgetting cost updates ρ times. The generalization cost update consists of computing the cost J_N and using it to update θ̂_1 and θ̂_2; the catastrophic forgetting cost update comprises the following steps. First, we create a batch that combines the new task data with samples from the previous tasks, b_PN = b_P ∪ b_N(k), where b_P ∈ D_P(k). Second, to approximate the term J_PN(k; θ̂(k+ζ)), we copy θ̂_2 (the prediction network) into a temporary network parameterized by θ̂_B. We then perform ζ updates on θ̂_B while keeping θ̂_1 fixed. Third, using θ̂_B(k+ζ), we compute J_PN(k; θ̂_B(k+ζ)) and update θ̂_1, θ̂_2 with J_P(k) + (J_PN(k) − J_PN(k; θ̂_B(k+ζ))). The inner loop with ζ is purely for the purpose of approximating the optimal cost. The rationale behind the repeated updates to approximate J_PN(k; θ̂(k+ζ)) is as follows. At every instant k, J_PN(k) (the cost on all the previous tasks and the new task) is the boundary value for the optimal cost, as the optimal cost can only be less than J_PN(k) (the minimum of the cost can only be less than or equal to the current value). Therefore, if we start from J_PN(k) and execute a Markov chain (repeated gradient updates) for ζ steps, the end point of this Markov chain is the optimal value of J_PN(k) (the cost will reduce with repeated updates using the same batch of data) at instant k, provided ζ is large enough. Furthermore, the difference between the cost at the starting point and the end point of this chain provides a value that should be minimized so that the cost J_PN(k) reaches the optimal value for the MCL problem at instant k. To execute this Markov chain, we perform repeated updates on a copy of θ̂_2 (the prediction network) denoted as θ̂_B. The goal of the Markov chain procedure is purely to approximate the optimal cost, that is, how well the current state of the model can predict on all the tasks that have been observed so far (the previous tasks and the new task). The performance of the model depends on the prediction network and the representation network. Since we seek to maintain a global solution, the pursuit is well served by learning a robust representation over all the tasks. Therefore, we want to observe whether the current representation allows us to reach the global solution over all the tasks.
Therefore, we seek to estimate the difference J_PN(k) − J_PN(k; θ̂_B(k+ζ)) corresponding to the current representation: we treat the output of the representation network as the input and measure the optimal cost by updating only a copy of θ̂_2. This process allows us to measure the optimal cost as a function of the representation network and to train the representation network toward minimizing the optimal cost. We repeat this alternating update process for each data batch in the new task. Once all the data from the new task is exhausted, we move to the next task. 3.3 RELATED WORK Existing MCL methods can be grouped into three categories: (1) dynamic architectures and flexible knowledge representation (Sutton, 1990; Rusu et al., 2016; Yoon et al., 2017); (2) regularization approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018); and (3) memory/experience replay (Lin, 1992; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019). Flexible knowledge representations seek to maintain a state of the whole dataset and require computationally expensive mechanisms (Yoon et al., 2017). Regularization approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018) attempt to minimize the impact of new tasks (changes in the input data distribution) on the parameters of the model, involving a significant trial-and-error process. Memory/experience replay-driven approaches (Lin, 1992; Chaudhry et al., 2019; Lopez-Paz and Ranzato, 2017) can address catastrophic forgetting but do not generalize well to new tasks. In recent literature, the most comprehensive methodology that enables the study of the MCL problem was introduced in Finn et al. (2017), where the authors presented an approach in which an additional term is introduced into the cost function (the gradient of the cost function with respect to the previous tasks). However, this method requires all the data to be known prior to the start of the learning procedure. To obviate this constraint, an online MAML approach was introduced in Finn et al. (2019); Caccia et al. (2020). That approach does not explicitly minimize catastrophic forgetting but focuses on fast online learning. In contrast with Finn et al. (2017), our method can learn sequentially as the new tasks are observed. Although sequential learning was possible in Finn et al. (2019), it was highlighted in Caccia et al. (2020) that there is an inherent trade-off between memory requirements and catastrophic forgetting, which could be addressed by learning a representation over all the tasks as in Javed and White (2019); Beaulieu et al. (2020). Similar to Javed and White (2019); Beaulieu et al. (2020), our method allows a representation to be learned over the distribution of all the tasks p(T). However, both the representation and the prediction networks are learned sequentially in DPMCL, and we do not require a pre-training step. Our approach is the first comprehensive theoretical framework based on dynamic programming that is model agnostic and can be adapted to different MCL settings in both CT and DT. Although theoretical underpinnings were provided in Finn et al. (2019) and Flennerhag et al. (2019), their focus was to provide structure for parameter updates and not to holistically model the overall learning dynamics, as is done in our theoretical framework. The key ideas in this paper have been adapted from dynamic programming and optimal control theory; additional details can be found in Lewis and Vrabie (2009).
4 EXPERIMENTS We use four continual learning data sets: incremental sine wave (regression, 50 tasks; SINE); split-Omniglot (classification, 50 tasks; OMNI); continuous MNIST (classification, 10 tasks; MNIST); and CIFAR10 (classification, 10 tasks; CIFAR10). All these data sets have been used in (Finn et al., 2019; 2017; Javed and White, 2019). We compare DPMCL with Naive (training is always performed on the new task without any explicit catastrophic forgetting minimization); Experience Replay (ER) (Lin, 1992) (training is performed by sampling batches of data from all the tasks, previous and new); follow the meta learner (FTML) (Finn et al., 2019); online meta-continual learning (OML) (Javed and White, 2019); neuro-modulated meta learning (ANML) (Beaulieu et al., 2020); and meta experience replay (MER) (Riemer et al., 2018). To keep consistency with our computing environment and the task structure, we implement a sequential online version (Finn et al., 2019) (where each task is exposed to the model sequentially) of all these algorithms in our environment. For ANML and OML, we run a pre-training phase for each new task. In particular, for each task, we first train the RLN/NLM with data for the new task; next, we train the complete network (RLN/NLM and PLN) with the data from the new task. For any particular data set, we set the same model hyperparameters (number of hidden layers, number of hidden layer units, activation functions, learning rate, etc.) across all implementations. For fair comparisons, for any given task we also fix the total number of gradient updates a method can perform (see Appendix B.1-B.3 for details on the data sets and hyperparameters). For each task, we split the given data into training (60%), validation (20%), and testing (20%) sets. Methods such as ANML, FTML, and OML follow the two-loop training strategy from FTML: the inner loop and the outer loop. The training data for each task is used in the inner loop, whereas the validation data is used in the outer loop. The testing data is used to report accuracy metrics. We measure generalization and catastrophic forgetting through the cumulative error (CME), given by the average error on all the previous tasks, and the new task error (NTE), given by the average error on the new task, respectively. For regression problems, these are computed from the mean squared error; for classification problems, they are given as (1 − Acc/100), where Acc refers to the classification accuracy. For the cost function, we use the mean squared error for regression and categorical cross-entropy for classification. We use a total of 50 runs (repetitions) with different random seeds and report the mean µ and the standard error of the mean (σ_error = σ/√50, where σ is the standard deviation and 50 is the number of repetitions); the error scale and these summary statistics are sketched in the code below. We report σ_error only when it is greater than 10^{−3}; otherwise, we indicate a 0 (see Appendix B.4 and B.5 for implementation details). 4.1 RESULTS We first analyze the CME and NTE as each task is incrementally shown to the model. We record the CME and NTE on the testing data (averaged over the 50 repetitions) at each instant when a task is observed. The results are shown in Fig. 1 (curves are averaged over repetitions and smoothed with a Gaussian filter of standard deviation 2). Unlike the other methods, DPMCL achieves low error with respect to both CME and NTE. ANML and OML perform poorly on CME because of the lack of a learned representation (as we do not have a pre-training phase to learn an encoder).
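For reference, the error scale and summary statistics described above can be computed as follows (a small sketch; the function names are ours, not the paper's):

```python
import numpy as np

def classification_error(acc_percent):
    # Error scale used for CME/NTE on classification tasks: (1 - Acc/100).
    return 1.0 - acc_percent / 100.0

def summarize_runs(errors):
    # Mean and standard error over repetitions: sigma_error = sigma / sqrt(n),
    # reported as 0 when at most 1e-3, matching the paper's convention.
    errors = np.asarray(errors, dtype=float)
    mu = errors.mean()
    se = errors.std() / np.sqrt(len(errors))
    return mu, (se if se > 1e-3 else 0.0)
```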
The performance of DPMCL is better than MER, FTML, and ER on all data sets except CIFAR10, where ER is comparable to DPMCL. The poor performance of Naive is expected because it is trained only on the new task data and thus incurs catastrophic forgetting. DPMCL generalizes well to new tasks, and consequently the performance of DPMCL is better than FTML, ER, and ANML on all the data sets except CIFAR10. As expected, Naive has the lowest NTE. In the absence of a well-learned representation, OML exhibits behavior similar to Naive's and is able to quickly generalize to a new task. MER also behaves similarly to Naive and incurs very low NTE. For CIFAR10, FTML achieves an NTE lower than that of DPMCL. ER struggles to generalize to a new task; however, its performance is better than ANML's, which exhibits the poorest performance due to the absence of a well-learned representation. The CME and NTE values for all the data sets and methods are summarized in Table 1 in Appendix B.6. DPMCL achieves a CME [µ(σ_err)] of 1.12 × 10^{−5} (0) for SINE, 0.175 (0.007) for OMNI, 0.015 (0.001) for MNIST, and 0.475 (0.003) for CIFAR10. We note that the CME values for DPMCL are the best among all the methods on all the data sets, except MER for OMNI and ER for CIFAR10. On CIFAR10, ER demonstrates a 7% improvement in accuracy. For OMNI, MER outperforms DPMCL with a 1% improvement on the CME scale. On the NTE [µ(σ_err)] scale, DPMCL achieves 3.89 × 10^{−5} (0) for SINE, 0.189 (0.091) for OMNI, 0.003 (0) for MNIST, and 0.273 (0.008) for CIFAR10. On the NTE scale, the best-performing method is Naive, followed by OML and MER; this behavior can be observed in the trends in Fig. 1. DPMCL is better than all the other methods on all the data sets except CIFAR10, where FTML is better (observed earlier in Fig. 1) by 10%. However, this 10% improvement for FTML comes at the expense of an 18% drop in performance on the CME scale. Similarly, although ER exhibits a 3% improvement on the CME scale, DPMCL outperforms ER on the NTE scale by 22.7%. The method closest to DPMCL in terms of design and performance is MER. MER outperforms DPMCL on the NTE scale on all the data sets except SINE and MNIST, where the performance is comparable. MER is expected to generalize well to new tasks, as it is designed for this purpose: MER regularizes the change in the weight parameters with the introduction of a new task, as its update rules are derived from Nichol et al. (2018). On the other hand, on the CME scale, DPMCL outperforms MER on all data sets except OMNI, where the performance of MER is better. The reason is the parameter drift (Narendra and Annaswamy, 1987) caused by the weight regularization present in MER. Parameter drift can make the network unlearn previous experiences, especially when the new tasks are very similar to the older tasks (Narendra and Annaswamy, 1987). In OMNI, the new tasks are distinctly different from the older ones, but in the rest of the data sets, the new tasks are similar. Such a parameter drift can be avoided when the weight is updated in proportion to the cost (Narendra and Annaswamy, 1987), which is why DPMCL performs better on the CME scale. Overall, the results show that DPMCL achieves a balance between CME and NTE. The balance can be engineered depending on the choice of ζ and κ (the study is presented in Appendix B.6). We have chosen these parameters to achieve better performance on the CME scale. As a result, we show that DPMCL outperforms all methods on the CME scale without a significant drop on the NTE scale.
All other methods perform well on either the NTE or the CME scale, but not on both. 5 CONCLUSIONS We introduced a dynamic-programming-based theoretical framework for meta continual learning. Within this framework, catastrophic forgetting and generalization, the two central challenges of meta continual learning, can be studied and analyzed methodically. Furthermore, the framework allowed us to provide theoretical justification for intuitive and empirically proven ideas about generalization and catastrophic forgetting. We then introduced DPMCL, which systematically models and compensates for the trade-off between catastrophic forgetting and generalization. We also provided experimental results in a sequential learning setting showing that the framework is practical, with performance comparable to the state of the art in meta continual learning. In the future, we plan to extend this approach to reinforcement and unsupervised learning. Moreover, we plan to study different architectures such as convolutional neural networks and graph neural networks.
1. What is the novel perspective proposed by the paper regarding meta-continual learning? 2. What are the strengths and weaknesses of the proposed method compared to prior works? 3. How does the paper frame the MCL problem using dynamic programming? 4. Are there any questions regarding the connection between minimizing the cumulative cost and computing the partial derivative of the minimizer? 5. Do you have any concerns about the methodology explained on page 5, particularly when updating the representation learning parameter and prediction parameter separately? 6. Can you explain how the algorithm handles user-defined parameters such as \zeta and \beta? 7. How does the proposed method differ from previous works regarding knowing data for all tasks in advance? 8. Why does the learning rate \alpha(t) seem negative in Thm 1? 9. Can you provide more explanations or intuitions behind the technical details in the paper? 10. What are your opinions on the empirical results presented in the paper, especially for new tasks?
Review
Review The paper proposes a meta-continual learning method derived from dynamic programming. The paper frames MCL as a PDE problem in theory and proposes a discrete approximation as the methodology in practice. Viewing MCL from dynamic programming is a novel perspective, and it is nice to see previous methods focusing on different parts of the proposed objective. A unified framework would further help understand meta-learning with sequential data. However, the current paper lacks clarity for a broad range of readers. Here are some suggestions to improve it.

— What may be compressed: The statement and the role of Theorem 1 are not clear. For the statement: the derivative of V is a scalar, so why is it said to be negative semi-definite? What does the “Let … be …” mean? Is it an assumption, fact, or notation?

— What needs to be elaborated: The paper seems to skip an important step to explain the connection between minimizing the cumulative cost (as in Eq. (2)) and computing the partial derivative of the minimizer (as in Eq. (3)). Are they equivalent? The methodology on page 5 needs more explanation. Is there a specific reason to update the representation learning parameter \hat{\theta}_1 and the prediction parameter \hat{\theta}_2 separately? How does it work if \hat{\theta} = [\hat{\theta}_1, \hat{\theta}_2] are updated jointly? The explanation by Markov chain is not clear. Some heuristics in the algorithm, such as the user-defined parameters \zeta and \beta, are not clear on how to set. The connection of DPMCL to previous work is currently explained in a very vague way.

Other comments: A key issue of MCL in previous works mentioned in the introduction is that it requires knowing data for all tasks in advance. For the proposed method, does it also need future tasks to compute the generalization loss J_P? How does it differ from previous work? The learning rate \alpha(t) below Thm 1 seems like it can be negative. Some curves are missing in the panels of Figure 2. And DPMCL is worse than Naive on NTE in certain cases? Why does Lemma 1 assume the loss function l(\tau) to be lower bounded by \epsilon? In general, the direction this paper explores is interesting but the clarity needs to be improved. The intuition and main contributions are submerged in technical details. The paper aims for a theoretically sound framework for MCL, yet it is weakened by many heuristics applied in practice. In addition, the empirical results, especially for new tasks, are not strong. So in general I rate it as “Ok but not good enough”.

=====POST-REBUTTAL COMMENTS======== Since the author did not submit a response, the original decision is retained. (Update: the author did submit comments after this initial response.)

=====POST-REBUTTAL COMMENTS 2======== I have read the author's updates. The readability of the paper has improved in the revised version, but it is not yet clear why DPMCL unifies several previous works and which change contributes to the improvements in practice. Besides, there are multiple undefined references in the appendix. I think this work needs further revision and the experiments need some ablation study before it is ready to be published; therefore I maintain my rating.
ICLR
Title Meta-Continual Learning Via Dynamic Programming Abstract Meta continual learning algorithms seek to train a model when faced with similar tasks observed in a sequential manner. Despite promising methodological advancements, there is a lack of theoretical frameworks that enable analysis of learning challenges such as generalization and catastrophic forgetting. To that end, we develop a new theoretical approach for meta continual learning (MCL) where we mathematically model the learning dynamics using dynamic programming, and we establish conditions of optimality for the MCL problem. Moreover, using the theoretical framework, we derive a new dynamic-programming-based MCL method that adopts stochastic-gradient-driven alternating optimization to balance generalization and catastrophic forgetting. We show that, on MCL benchmark data sets, our theoretically grounded method achieves accuracy better than or comparable to that of existing state-of-the-art methods. 1 INTRODUCTION The central theme of meta continual learning (MCL) is to learn on similar tasks revealed sequentially. In this process, two fundamental challenges must be addressed: catastrophic forgetting of the previous tasks and generalization to new tasks (Javed and White, 2019). In order to address these challenges, several approaches (Javed and White, 2019; Beaulieu et al., 2020; Riemer et al., 2018) have been proposed in the literature that build on the second-order, derivative-driven approach introduced in Finn et al. (2017). Despite the promising prior methodological advancements, existing MCL methods suffer from three key issues: (1) there is a lack of a theoretical framework to systematically design and analyze MCL methods; (2) data samples representing the complete task distribution must be known in advance (Finn et al., 2017; Javed and White, 2019; Beaulieu et al., 2020), often, an impractical requirement in real-world environments as the tasks are observed sequentially; and (3) the use of fixed representations (Javed and White, 2019; Beaulieu et al., 2020) limits the ability to handle significant changes in the input data distribution, as demonstrated in Caccia et al. (2020). We focus on a supervised learning paradigm within MCL, and our key contributions are (1) a dynamicprogramming-based theoretical framework for MCL and (2) a theoretically grounded MCL approach with convergence properties that compare favorably with the existing MCL methods. Dynamic-programming-based theoretical framework for MCL: In our approach, the problem is first posed as the minimization of a cost function that is integrated over the lifetime of the model. Nevertheless, at any time t, the future tasks are not available, and the integral calculation becomes intractable. Therefore, we use the Bellman’s principle of optimality (Bellman, 2015) to recast the MCL problem to minimize the sum of catastrophic forgetting cost on the previous tasks and generalization cost on the new task. Next, we theoretically analyze the impact of these costs on the MCL problem using tools from the optimal control literature (Lewis et al., 2012). Furthermore, we demonstrate that the MCL approaches proposed in (Finn et al., 2017; Beaulieu et al., 2020; Javed and White, 2019) can be derived from the proposed framework. Our theoretical framework unifies different MCL approaches. In this paper, we discuss the connection between the theoretical framework and other MCL methods in the literature. Specifically, we show how these methods can be derived from our framework. 
Theoretically grounded MCL approach: We derive a theoretically grounded dynamic programming-based meta continual learning (DPMCL) approach. The generalization cost is com- puted by training and evaluating the model on given new task data. The catastrophic forgetting cost is computed by evaluating the model on the task memory (previous tasks) after the model is trained on the new task. We alternately minimize the generalization and catastrophic forgetting costs for a predefined number of iterations to achieve a balance between the two. (See Fig. 2 in the appendix for an overview). We analyze the performance of the DPMCL approach experimentally on classification and regression benchmark data sets. 2 PROBLEM FORMULATION We focus on the supervised MCL setting. We let R denote the set of real numbers and use boldface to denote vectors and matrices. We use ‖.‖ to denote the Euclidean norm for vectors and the Frobenius norm for matrices. The lifetime of the model is given by [0,Γ] : Γ ∈ R, where Γ is the maximum lifetime of the model. We let p(T ) be the distribution over all the tasks in the interval [0,Γ]. Based on underlying processes that generate the tasks, the task arrival can be continuous time (CT) or discrete time (DT). For example, consider the system identification problem in a stochastic process modeled by ordinary differential equations (ODEs) or partial differential equations (PDE)–where the tasks, represented by the states of the process x(t), are generated in CT through the ODE. On the other hand, in the typical supervised learning setting, consider an image classification problem where each task comprises a set of images sampled from a discrete process and the tasks arrive in DT. As in many previous MCL works, we focus on DT MCL. However, we develop our theory for the CT MCL setting first because a CT MCL approach is broadly applicable to many domains. In Section 3.3 we provide a DPMCL approach for DT MCL setting by discretizing our theory. A task T (t) is a tuple of input-output pairs {X (t),Y(t)} provided in the interval [t, t+ ∆t]∀t ∈ [0,Γ),∆t ∈ R. We denote (x(t),y(t)) ∈ {X (t),Y(t)}.We define a parametric model g(.) with parameters θ̂ such that ŷ(t) = g(x(t); θ̂(t)). Although we will use neural networks, any parametric model can be utilized with our framework. Our MCL problem is comprised of two challenges, catastrophic forgetting on previous tasks and generalization to new tasks. The catastrophic forgetting cost measures the error of the model on all the previous tasks; the generalization cost measures the error of the model on the new task. The goal in MCL is to minimize both the catastrophic forgetting cost and generalization cost for every t ∈ [0,Γ). Let us split the interval [0,Γ) as [0, t] ∪ (t,Γ), where the intervals [0, t] and (t,Γ) comprise previous tasks and new task respectively (i.e., the collection of all the task that can be observed in the respective intervals). To take all the previous tasks into account, we define the instantaneous catastrophic forgetting cost J(t; θ̂(t)) to be the integral of the loss function `(τ) for any t ∈ [0,Γ) as J(t; θ̂(t)) = ∫ t τ=0 γ(τ)`(τ)dτ, (1) where `(τ) is computed on task T (τ) with γ(τ) being a parameter describing the contribution of this task to the integral. The value of γ(τ) is critical for the integral to be bounded (details are provided in Lemma 1). Given a new task, the goal is to perform well on the new task as well as maintain the performance on the previous tasks. 
To this end, we write V (t; θ̂(t)), as the cumulative cost (combination of catastrophic cost and generalization cost) that is integrated over [t,Γ). We therefore seek to minimize V (t; θ̂(t)) and obtain the optimal value, V ∗(t), by solving the problem V ∗(t) = minθ̂(τ)∈Ω:t≤τ≤Γ ∫ Γ τ=t J(τ ; θ̂(τ))d τ, (2) where Ω is the compact set required to initialize the parameters of the model. In a non-convex optimization problem, the compact set describes the boundaries of the solution space. It is also assumed that there is at least one minima of the optimization space in this set. If the parameters are initialized from within the compact set, the optimization may converge to the local solution within the set. MCL is a sequential decision making problem where the goal is to obtain a sequence of parameters in the interval (t,Γ) as described in Eq. (2). In MCL context, whenever a new tasks is observed at τ ∈ (t,Γ), we seek to make a decision (find a parameter set, θ̂(τ) ∈ Ω) such that θ̂(τ) is optimal for all the previous tasks and the new task in the interval [0, τ ]. Consequentially, for each new task, we obtain a new parameter set and the solution to the MCL problem is provided by a sequence of parameters (decisions). For making each of these decision, a task is counted exactly one and the contribution of cost is determined by the choice of γ. The optimization problem in Eq. (2) in its current form is intractable for two reasons. First, note that V ∗(t) is the optimal cost value over the complete lifetime of the model [0,Γ). Since we have only the data corresponding to all the tasks in the interval [0, t], solving Eq. (2) in its current form is intractable. Second, it is not feasible to maintain a parameter set for each τ ∈ (t,Γ). To circumvent this issue, we take a dynamic programming view of the MCL problem. We introduce a new theoretical framework where we model the learning process as a dynamical system and use Bellman’s principle of optimality to simplify the MCL problem. Furthermore, we derive conditions under which the learning process is stable and optimal, using tools from the optimal control literature (Lewis et al., 2012). 3 THEORETICAL META CONTINUAL LEARNING FRAMEWORK We will recast the problem defined in Eq. (2) using ideas from dynamic programming, specifically Bellman’s principle of optimality (Lewis et al., 2012). We treat the MCL problem as a dynamical system and describe the system using the following PDE: −∂V ∗(t) ∂t = minθ̂(t)∈Ω [ J(t; θ̂(t)) + JN (t; θ̂(t)) + ( V ∗ θ̂(t) )T ∆θ̂ ] + ( V ∗x(t) )T ∆x(t), (3) where V ∗(t) describes the optimal cost (the left-hand side of Eq. (2)) and (.)T refers to the transpose operator. The notation A(.) denotes the partial derivative of A with respect to (.), for instance, V ∗ θ̂(t) = ∂V ∗(t;θ̂(t)) ∂θ̂(t) The full derivation for Eq. (3) from Eq. (2) is provided in Appendix A.1.. Note that since y(t) is a function of x(t), the changes in the optimal cost due to y(t) are captured by( V ∗x(t) )T ∆x(t). Eq. (3) is also known as the Hamilton-Jacobi Bellman equation in optimal control (Lewis et al., 2012) with the key difference that there is an extra term to quantify the changes due to the new task. Intuitively, the PDE completely describes the dynamics of learning for the MCL problem in the period [0,Γ]. Specifically, the left hand side of Eq. (3), ∂V ∗(t) ∂t describes the change in the global solution of the MCL problem, V ∗(t), with respect to time t. 
The right hand side describes what are the different components of the MCL problem that impact this global solution. Note from the right hand side of Eq. (3), this impact is quantified by the four terms: the cost contribution from all the previous tasks J(t; θ̂(t)); the cost due to the new task JN (t; θ̂(t)); the change in the optimal cost due to the change in the parameters ( V ∗ θ̂(t) )T ∆θ̂; and the change in the optimal cost due to change in the input (introduction of new task) ( V ∗x(t) )T ∆x(t). Since, the PDE completely describes our problem, the solution to the MCL problem can be obtained by obtaining the parameter θ̂(t) that minimizes the right-hand side of Eq. (3). Specifically, we seek to solve the following optimization problem minθ̂(t)∈Ω [ J(t; θ̂(t)) + JN (t; θ̂(t)) + ( V ∗ θ̂(t) )T ∆θ̂ ] , s. t. ∂V ∗(t) ∂t + ( V ∗x(t) )T ∆x(t) = 0. (4) Solving the problem in Eq. (4) is equivalent to minimizing the impact of introducing a new task on the optimal cost. Observe that the intractable problem in Eq. (2) has been posed as a PDE constrained optimization Eq. (4). Note that in Eq. (2), the global solution is achieved when a series of parameters are obtained. On the other hand, in the optimization problem in Eq. (4) we obtain a parameter at time t to achieve the global solution under certain assumption on the parameters and data. Although, this solution is under assumptions on the data, these assumptions can be satisfied in practice. More details about the convergence of this optimization and assumptions involved is provided in Theorem. 1 in the next section. 3.1 ANALYSIS Our formalism has two critical elements: γ(t), which quantifies the contribution of each task to catastrophic forgetting cost, and the impact of the change in the input data distribution ∆x on learning, specifically while adapting to new tasks. We will analyze them next. Impact of γ(t) on catastrophic forgetting cost: Since, the cost in Eq. (1) is an integral of cost contributions from all the tasks, it is critical that this integral has a converging point. In other words, we seek to understand, when all the tasks provide non-zero values to the overall loss, is the cost is bounded and can the optimization problem be solved? The existence of the convergence point depends directly on the contribution of each task determined through γ(t). We therefore present Lemma 1 and Corollary 1 ( the complete statement and proofs can be found in Appendix A.2) In Lemma 1, we demonstrate that when all tasks in a MCL problem provide a nonzero cost, it is not possible to maintain equivalent performance on all the tasks. Specifically, we show that when γ(τ) = 1,∀τ that is the contribution of each task to the cost is greater than zero and equal, then the integral diverges when the number of tasks tend to ∞. This phenomenon has been observed empirically (Lin, 1992). However, by choosing γ(t) appropriately, we can control the catastrophic forgetting by selecting which tasks to forget. One reasonable solution is to give older tasks less priority and new tasks more priority, this is shown in Corollary 1 to provide a cost function that is both bounded and convergent. An example of such γ is γ(τ) = e−τ , τ ∈ [0,Γ]. However, any choice of γ(t) that will keep J(t; θ̂(t)) bounded is reasonable. The second most important component of our approach is the impact of change in the input (∆x) on learning. To substantiate this, we present Theorem 1 (the complete statement and proof can be found in Appendix A.2). 
The second most important component of our approach is the impact of the change in the input (Δx) on learning. To substantiate this, we present Theorem 1 (the complete statement and proof can be found in Appendix A.2). Theorem 1 shows the convergence of a gradient-based solution to the MCL problem under assumptions on the input, the gradient, and the learning rate. Specifically, we show through Lyapunov principles that J(t; θ̂(t)) (the cost on all the previous tasks) decreases as t → ∞ and ultimately attains a value less than β. Theorem 1 rests on four main assumptions. The first is a consequence of Corollary 1: the contribution of each task to the cost must be chosen such that the cost is bounded and convergent. The second is the assumption of a compact set Ω; it implies that if the weights are initialized from within Ω, the optimization converges to a local minimum. Third, we require ‖J_{θ̂(t)}‖ > 0, ruling out the condition ‖J_{θ̂(t)}‖ = 0, which is well known in the literature as the vanishing gradient problem (Pascanu et al., 2013).

Impact of Δx(t) on learning: The fourth assumption, ‖J_x‖ > 0, is important to the proof and directly explains how Δx(t) can affect the validity of Theorem 1 (and thus the convergence of the approach). Note that if ‖J_x‖ = 0, the value of the cost J will not change even when the input changes, Δx(t) > 0; Theorem 1 will then not hold, and learning will stagnate. As an example, consider the MNIST dataset with a total of 10 classes, where the solution of the MCL problem is to predict well on all 10 classes. Suppose each task is created randomly to include exactly one class and the tasks are shown to the model sequentially. If the model experiences only classes 1 through 5 and never experiences classes 6 through 10 (by virtue of improper sampling), the information provided to the model is not informative enough to perform well on all 10 classes. Thus, although the model is perfect in predicting classes 1 through 5, so that ‖J_x‖ = 0, the MCL problem has not reached the global solution; the learning process has stagnated. In control theory, this condition is known as persistence of excitation (Lewis et al., 1998). On the other hand, a large change in the input data distribution presents issues for the learning process as well. Note that for Theorem 1 to hold, α(t) > 0, and therefore ‖J_x‖‖Δx(t)‖ < 1. Let ‖Δx(t)‖ ≤ b_x, where b_x is the upper bound on the change in the input data distribution. If b_x is large (e.g., going from predicting on images to understanding text), the condition ‖J_x‖‖Δx(t)‖ < 1 will be violated, and our approach will be unstable. We can, however, adapt our model to the change in the input data distribution Δx(t) exactly if we can explicitly track the change in the input. This type of adaptation is straightforward when the process generating x(t) can be described by an ODE or a PDE; in traditional supervised learning settings, however, such a description is not available. This issue highlights the need for strong representation learning methodologies, where a good representation over all the tasks can minimize the impact of changes in Δx(t) on the performance of the MCL problem (Javed and White, 2019; Beaulieu et al., 2020). Currently, it is common in the literature to control the magnitude of Δx(t) through normalization procedures under the assumption that all tasks are sampled from the same distribution. Therefore, for all practical purposes, we can choose 0 ≤ α(t) ≤ (β ‖J_{θ̂(t)}‖)^{-1}.
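The following minimal sketch encodes these two practical checks. The function names and the use of the Euclidean norm on flattened gradients are our illustrative assumptions; the exact constants are those of Theorem 1 in Appendix A.2.

```python
import numpy as np

def safe_learning_rate(grad_theta, beta, eps=1e-8):
    # Step size satisfying 0 < alpha(t) <= (beta * ||J_theta||)^(-1);
    # eps guards against division by zero when the gradient is tiny.
    return 1.0 / (beta * np.linalg.norm(grad_theta) + eps)

def stable_input_change(grad_x, delta_x):
    # Persistence of excitation (||J_x|| > 0) together with the stability
    # condition ||J_x|| * ||delta_x|| < 1 from the discussion above.
    jx = np.linalg.norm(grad_x)
    return jx > 0.0 and jx * np.linalg.norm(delta_x) < 1.0
```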
Connection to MAML, FTML, and their variants: The optimization problems in MAML (Finn et al., 2017), FTML (Finn et al., 2019), and other variants can be obtained from Eq. (3) by setting the third and the fourth terms to zero, which gives

-∂V*(t)/∂t = min_{θ̂(t) ∈ Ω} [ J(t; θ̂(t)) + J_N(t; θ̂(t)) ].

In these methods the MCL problem is split into three phases: meta-training, meta-testing, and testing. In meta-training, we learn to generalize to new tasks. In meta-testing, we learn from all the previous tasks, which aims at learning over p(T) (similar to minimizing catastrophic forgetting). Finally, in the testing phase, the network predicts on a set of held-out tasks that are representative of the complete task distribution. To learn common features across all the tasks, we must optimize the first term on the right-hand side of the equation above (where a second-order derivative can be utilized). When the goal is to generalize to new tasks, one must optimize the second term on the right-hand side. With suitable choices of neural network architecture, all of the approaches that build on MAML and FTML, such as (Javed and White, 2019) and (Beaulieu et al., 2020), can be derived directly. For instance, to obtain the methodology in Javed and White (2019), we may proceed as follows. First, the model architecture is described as a combination of a representation learning network (RLN) and a prediction learning network (PLN) such that θ̂ = [θ̂1, θ̂2], where θ̂1 and θ̂2 denote the parameters of the RLN and the PLN, respectively. In the learning process, we first pre-train the RLN as an encoder by optimizing the first term in the equation above, while a new PLN is randomly initialized for each new task. Next, we freeze the RLN and update the PLN to minimize the second term in the equation above as new tasks are observed sequentially. Similarly to Javed and White (2019), in Beaulieu et al. (2020) θ̂ = [θ̂1, θ̂2] is described as a combination of a neuro-modulatory network (NLM) and a prediction network (PLN). The methodology in Beaulieu et al. (2020) can be obtained by a two-step learning procedure: first, the NLM and PLN are pre-trained to minimize the right-hand side of the equation above with data from all the available tasks; second, the prediction network is trained on unseen tasks by optimizing the second term on the right-hand side while the NLM is kept fixed. None of these methods adopts the PDE formalism; the third and fourth terms in Eq. (3) are not explicitly included in MAML and FTML. On the other hand, the works in (Javed and White, 2019) and (Beaulieu et al., 2020) learn to represent p(T) (the distribution over all the tasks) and require a pre-training phase. To the best of our knowledge, the only work in which the third and fourth terms are implicitly addressed is meta experience replay (MER) (Riemer et al., 2018). MER models the interference (forward transfer and backward transfer) due to the introduction of new tasks as a gradient alignment problem. Observe from Eq. (3) that the third term models the change in the global solution (backward transfer) with respect to the change in the weights, and the fourth term models the change in the global solution (forward transfer) with respect to the change in the input. MER can thus be derived from Eq. (3) by optimizing the third and fourth terms with samples from the experience replay memory, to minimize interference (both forward and backward). This is very similar to DPMCL, with the key difference that instead of approximating the optimal cost directly, MER approximates the angle between the gradients, and this approximation is performed using Reptile (Nichol et al., 2018). Reptile is essentially equivalent to constraining the change in the weights such that the third term in Eq. (3) is zero. Although this regularizes learning in the presence of new tasks so that forgetting is not large, the network can unlearn experiences due to parameter drift, especially when the new tasks are very similar to older tasks (Narendra and Annaswamy, 1987).
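To make the reduction concrete, here is a minimal sketch (our names; the gradient callables stand in for backpropagation) of the update implied when the third and fourth terms of Eq. (3) are set to zero, which is all that MAML/FTML-style methods optimize:

```python
def maml_like_step(theta, grad_J, grad_JN, alpha):
    # Only the previous-task cost J and the new-task cost J_N are optimized;
    # no correction is applied for parameter drift (term 3) or for the change
    # in the input distribution (term 4) of Eq. (3).
    return theta - alpha * (grad_J(theta) + grad_JN(theta))
```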
DPMCL is a new approach derived from the presented theoretical framework. In DPMCL, we provide a clear and methodical procedure, starting from Eq. (3), for deriving the weight update rule; this gives us a much more systematic way of addressing the impact of these terms and, by extension, the impact of the key challenges in the MCL setting.

3.2 DYNAMIC PROGRAMMING-BASED META CONTINUAL LEARNING (DPMCL)

As a consequence of Theorem 1, the update for the parameters is given by α(t) V_{θ̂(t)}. Since V(t; θ̂(t)) is not completely known, we must approximate this gradient. To derive the approximation, we first rewrite Eq. (3) as -∂V*(t; θ̂(t))/∂t = min_{θ̂(t) ∈ Ω} [ H(t; θ̂(t)) ], which yields the optimization problem θ̂*(t) = argmin_{θ̂(t) ∈ Ω} [ H(t; θ̂(t)) ], where

H(t; θ̂(t)) = J(t; θ̂(t)) + J_N(t; θ̂(t)) + (V*_{θ̂(t)})^T Δθ̂ + (V*_{x(t)})^T Δx(t)

is the CT Hamiltonian. The solution of this optimization problem (the updates for the parameters of the network) is obtained by setting the derivative of the Hamiltonian to zero. First, we discretize the MCL problem setting. Let k be the discrete sampling instant such that t = kΔt, where Δt is the sampling interval. Let a task T_k at instant k be sampled from p(T). Let T_k = (X_k, Y_k) be a tuple, where X_k ∈ R^{n×p} denotes the input data and Y_k ∈ R^{n×1} denotes the target labels (outputs), with n the number of samples and p the number of dimensions. Let the parametric model be given as ŷ = g(h(x; θ̂1); θ̂2), where the inner map h(·) is treated as a representation learning network and g(·) is the prediction network. Let θ̂ = [θ̂1, θ̂2] be the learnable model parameters, with weight updates given by

θ̂(k+1) = θ̂(k) - α(k) ∂/∂θ̂(k) [ J_N(k) + J_P(k) + ( J_PN(k) - J_PN(k; θ̂(k+ζ)) ) ].    (5)

Our update rule has three terms (the terms inside the bracket). The first term depends on J_N, which is computed on the new task; the second term depends on J_P and is computed on all the previous tasks; the third term involves J_PN and is evaluated on a combination of the previous tasks and the new task. The first term minimizes the generalization cost; together, the second and third terms minimize the catastrophic forgetting cost. The first two terms can be obtained by measuring J_P and J_N directly. To obtain the third term in Eq. (5), we simplify the third and fourth terms in Eq. (3), which quantify the change in the optimal cost due to the parameter updates and due to the change in the input, respectively. Since the boundary value for the optimal cost is the current performance of the model on a combination of all the previous tasks and the new task (how well the model performs on all the tasks so far), we use the current performance as an approximation of the optimal cost. This fact, combined with a finite-difference approximation, yields the third term in Eq. (5). Updating with the third term in Eq. (5) regularizes the impact of the new task on the global solution. In Eq. (5), α(k) ≤ 1/(β‖J_{θ̂(k)}‖² + ε), where β is a user-defined parameter and ε > 0 is a small value that ensures the denominator does not go to zero.
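A hedged sketch of a single application of Eq. (5) follows. The gradient callables are stand-ins for backpropagation through J_N, J_P, and J_PN; approximating the gradient of the difference term by the difference of gradients at the current and looked-ahead parameters, and using the norm of the combined gradient as a proxy for ‖J_{θ̂(k)}‖, are our simplifications.

```python
import numpy as np

def dpmcl_update(theta, grad_JN, grad_JP, grad_JPN, theta_lookahead,
                 beta=1.0, eps=1e-8):
    # Three-term gradient of Eq. (5): generalization (J_N), forgetting (J_P),
    # and the finite-difference term J_PN(k) - J_PN(k; theta(k + zeta)).
    g = (grad_JN(theta) + grad_JP(theta)
         + (grad_JPN(theta) - grad_JPN(theta_lookahead)))
    # Step-size bound alpha(k) <= 1 / (beta * ||J_theta||^2 + eps).
    alpha = 1.0 / (beta * np.linalg.norm(g) ** 2 + eps)
    return theta - alpha * g
```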
The derivation of the update rule and the discretization are presented in Appendix A.3. The value β is the theoretical threshold on the optimal cost, and we choose β as the reciprocal of the learning rate. Equipped with the gradient updates, we now describe the DPMCL algorithm. We define a new-task sample D_N(k) = {X_k, Y_k} and a task memory (samples from all the previous tasks) D_P(k) ⊂ ∪_{τ=0}^{k-1} T_τ. We can approximate the required terms in our update rule, Eq. (5), using samples (batches) from D_P(k) and D_N(k). The overall algorithm consists of two steps: generalization and catastrophic forgetting (see Algorithm 1 in Appendix B.4). DPMCL comprises representation and prediction neural networks parameterized by θ̂1 and θ̂2, respectively. Since the vector space of θ̂ is written as a combination of θ̂1 and θ̂2, the theory developed for θ̂ extends to the vector space defined by this combination. The solution in this space can be obtained either by coordinate descent or by alternating minimization; we choose alternating minimization. For each batch b_N ∈ D_N(k), DPMCL alternately performs generalization and catastrophic-forgetting cost updates ρ times. The generalization cost update consists of computing the cost J_N and using it to update θ̂1 and θ̂2. The catastrophic forgetting cost update comprises the following steps. First, we create a batch that combines the new-task data with samples from the previous tasks, b_PN = b_P ∪ b_N(k), where b_P ∈ D_P(k). Second, to approximate the term J_PN(k; θ̂(k+ζ)), we copy θ̂2 (the prediction network) into a temporary network parameterized by θ̂B and perform ζ updates on θ̂B while keeping θ̂1 fixed. Third, using θ̂B(k+ζ), we compute J_PN(k; θ̂B(k+ζ)) and update θ̂1 and θ̂2 with J_P(k) + (J_PN(k) - J_PN(k; θ̂B(k+ζ))). The inner loop over ζ serves purely to approximate the optimal cost.
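The following self-contained toy run sketches these steps on a linear model with a squared loss. Everything here, from the model to the step sizes, is an illustrative assumption; Algorithm 1 in Appendix B.4 is the authoritative procedure, and in particular the baseline J_PN(k; θ̂B(k+ζ)) is treated as a detached constant in this sketch, so only its value (not its gradient) appears.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4))                        # representation params (theta_1)
w2 = rng.normal(size=4)                             # prediction params (theta_2)
bN = (rng.normal(size=(8, 4)), rng.normal(size=8))  # batch from the new task D_N(k)
bP = (rng.normal(size=(8, 4)), rng.normal(size=8))  # batch from the memory D_P(k)
lr, zeta, rho = 0.05, 5, 2

def cost_and_grad(X, y, w):
    h = X @ W1                                      # representation output h(x)
    r = h @ w - y
    return (r @ r) / len(y), 2.0 * h.T @ r / len(y)

for _ in range(rho):                                # alternate the two updates rho times
    # Generalization update: minimize J_N on the new-task batch.
    _, gN = cost_and_grad(*bN, w2)
    w2 = w2 - lr * gN
    # Catastrophic-forgetting update on the combined batch b_PN = b_P U b_N.
    Xpn = np.vstack([bP[0], bN[0]])
    ypn = np.concatenate([bP[1], bN[1]])
    wB = w2.copy()                                  # temporary copy theta_B of theta_2
    for _ in range(zeta):                           # zeta lookahead updates, W1 frozen
        _, gB = cost_and_grad(Xpn, ypn, wB)
        wB = wB - lr * gB
    J_pn, g_pn = cost_and_grad(Xpn, ypn, w2)
    J_opt, _ = cost_and_grad(Xpn, ypn, wB)          # approximate optimal cost
    J_p, g_p = cost_and_grad(*bP, w2)
    forgetting_loss = J_p + (J_pn - J_opt)          # value being driven down
    w2 = w2 - lr * (g_p + g_pn)                     # gradient step with J_opt detached
    print(round(float(forgetting_loss), 4))
```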
The rationale behind the repeated updates used to approximate J_PN(k; θ̂(k+ζ)) is as follows. At every instant k, J_PN(k) (the cost on all the previous tasks and the new task) is the boundary value for the optimal cost, since the optimal cost can only be less than or equal to the current value. Therefore, if we start from J_PN(k) and execute a Markov chain (repeated gradient updates) for ζ steps, the end point of this chain approximates the optimal value of J_PN(k) at instant k (the cost decreases with repeated updates on the same batch of data), provided ζ is large enough. Furthermore, the difference between the cost at the start and at the end of this chain provides a value that should be minimized so that J_PN(k) reaches the optimal value for the MCL problem at instant k. To execute this Markov chain, we perform repeated updates on a copy of θ̂2 (the prediction network), denoted θ̂B. The goal of the Markov chain procedure is purely to approximate the optimal cost, that is, how well the current state of the model can predict on all the tasks observed so far (the previous tasks and the new task). The performance of the model depends on both the prediction network and the representation network. Since we seek to maintain a global solution, the pursuit is well served by learning a robust representation over all the tasks. We therefore ask: can the current representation allow us to reach the global solution over all the tasks? To answer this, we estimate the difference J_PN(k) - J_PN(k; θ̂B(k+ζ)) corresponding to the current representation: we treat the output of the representation network as input and measure the optimal cost by updating only a copy of θ̂2. This process allows us to measure the optimal cost as a function of the representation network and to train the representation network toward minimizing the optimal cost. We repeat this alternating update process for each data batch from the new task. Once all the data from the new task is exhausted, we move to the next task.

3.3 RELATED WORK

Existing MCL methods can be grouped into three families: (1) dynamic architectures and flexible knowledge representations (Sutton, 1990; Rusu et al., 2016; Yoon et al., 2017); (2) regularization approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018); and (3) memory/experience replay (Lin, 1992; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019). Flexible knowledge representations seek to maintain a state of the whole dataset and require computationally expensive mechanisms (Yoon et al., 2017). Regularization approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018) attempt to minimize the impact of new tasks (changes in the input data distribution) on the parameters of the model, which involves a significant trial-and-error process. Memory/experience-replay-driven approaches (Lin, 1992; Chaudhry et al., 2019; Lopez-Paz and Ranzato, 2017) can address catastrophic forgetting but do not generalize well to new tasks. In the recent literature, the most comprehensive methodology enabling the study of the MCL problem was introduced in Finn et al. (2017), where an additional term is introduced into the cost function (the gradient of the cost function with respect to the previous tasks). However, this method requires all the data to be known prior to the start of the learning procedure. To obviate this constraint, an online MAML approach was introduced in Caccia et al. (2020). That approach does not explicitly minimize catastrophic forgetting but focuses on fast online learning. In contrast with Finn et al. (2017), our method can learn sequentially as new tasks are observed. Although sequential learning was possible in Finn et al. (2019), it was highlighted in Caccia et al. (2020) that there is an inherent trade-off between memory requirements and catastrophic forgetting, which can be addressed by learning a representation over all the tasks, as in Javed and White (2019); Beaulieu et al. (2020). Similarly to Javed and White (2019); Beaulieu et al. (2020), our method allows a representation to be learned over the distribution of all the tasks p(T); however, in DPMCL both the representation and the prediction model are learned sequentially, and we do not require a pre-training step. Our approach is the first comprehensive theoretical framework based on dynamic programming that is model agnostic and can be adapted to different MCL settings in both CT and DT. Although theoretical underpinnings were provided in Finn et al. (2019) and Flennerhag et al. (2019), their focus was to provide structure for the parameter updates, not to holistically model the overall learning dynamics as is done in our framework. The key ideas in this paper have been adapted from dynamic programming and optimal control theory; additional details can be found in Lewis and Vrabie (2009).
[Figure caption fragment: results averaged over repetitions; a Gaussian smoothing filter with standard deviation of 2 is applied on each trajectory.]

4 EXPERIMENTS

We use four continual learning data sets: incremental sine wave (regression, 50 tasks; SINE); split-Omniglot (classification, 50 tasks; OMNI); continuous MNIST (classification, 10 tasks; MNIST); and CIFAR10 (classification, 10 tasks; CIFAR10). All these data sets have been used in (Finn et al., 2019; 2017; Javed and White, 2019). We compare DPMCL with Naive (training is always performed on the new task without any explicit catastrophic forgetting minimization); Experience Replay (ER) (Lin, 1992) (training is performed by sampling batches of data from all the tasks, previous and new); follow the meta learner (FTML) (Finn et al., 2019); online meta-continual learning (OML) (Javed and White, 2019); neuro-modulated meta learning (ANML) (Beaulieu et al., 2020); and meta experience replay (MER) (Riemer et al., 2018). To keep consistency with our computing environment and task structure, we implement a sequential online version (Finn et al., 2019) of all these algorithms (where each task is exposed to the model sequentially) in our environment. For ANML and OML, we run a pre-training phase for each new task: for each task, we first train the RLN/NLM with data from the new task, and then train the complete network (RLN/NLM and PLN) with the data from the new task. For any particular data set, we use the same model hyperparameters (number of hidden layers, number of hidden-layer units, activation functions, learning rate, etc.) across all implementations. For fair comparison, for any given task we also fix the total number of gradient updates a method can perform (see Appendices B.1-B.3 for details on the data sets and hyperparameters). For each task, we split the given data into training (60%), validation (20%), and testing (20%) sets. Methods such as ANML, FTML, and OML follow the two-loop training strategy from FTML: the training data for each task is used in the inner loop, whereas the validation data is used in the outer loop. The testing data is used to report accuracy metrics. We measure catastrophic forgetting through the cumulative error (CME), given by the average error on all the previous tasks and the new task, and generalization through the new task error (NTE), given by the average error on the new task. For regression problems these are computed from the mean squared error; for classification problems they are given as (1 - Acc/100), where Acc is the classification accuracy in percent. As cost functions, we use the mean squared error for regression and categorical cross-entropy for classification. We use a total of 50 runs (repetitions) with different random seeds and report the mean μ and the standard error of the mean (σ_error = σ/√50, where σ is the standard deviation and 50 is the number of repetitions). We report σ_error only when it is greater than 10^{-3}; otherwise, we indicate 0 (see Appendices B.4 and B.5 for implementation details).
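For concreteness, a small sketch of how these metrics can be computed (the function names are ours, not from the authors' code):

```python
import numpy as np

def classification_error(acc_percent):
    # Error as defined in the text: (1 - Acc/100) for accuracy in percent.
    return 1.0 - acc_percent / 100.0

def cme(task_errors):
    # Cumulative error: average over all previous tasks and the new task.
    return float(np.mean(task_errors))

def nte(task_errors):
    # New task error: error on the most recently observed task.
    return float(task_errors[-1])

def sem(run_values):
    # Standard error of the mean over repetitions: sigma / sqrt(n_runs).
    run_values = np.asarray(run_values, dtype=float)
    return float(run_values.std() / np.sqrt(len(run_values)))
```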
4.1 RESULTS

We first analyze the CME and NTE as each task is incrementally shown to the model. We record the CME and NTE on the testing data (averaged over 50 repetitions) at each instant a task is observed. The results are shown in Fig. 1. Unlike the other methods, DPMCL achieves low error on both CME and NTE. ANML and OML perform poorly on CME because of the lack of a learned representation (we do not have a pre-training phase to learn an encoder). The performance of DPMCL is better than that of MER, FTML, and ER on all data sets except CIFAR10, where ER is comparable to DPMCL. The poor performance of Naive is expected because it is trained only on the new-task data and thus incurs catastrophic forgetting. DPMCL generalizes well to new tasks; consequently, its performance is better than that of FTML, ER, and ANML on all the data sets except CIFAR10. As expected, Naive has the lowest NTE. In the absence of a well-learned representation, OML exhibits behavior similar to Naive's and is able to generalize quickly to new tasks. MER also behaves similarly to Naive and incurs very low NTE. For CIFAR10, FTML achieves an NTE lower than that of DPMCL. ER struggles to generalize to new tasks; its performance is nevertheless better than ANML's, which exhibits the poorest performance due to the absence of a well-learned representation. The CME and NTE values for all the data sets and methods are summarized in Table 1 in Appendix B.6. DPMCL achieves a CME [μ(σ_err)] of 1.12 × 10^{-5} (0) for SINE, 0.175 (0.007) for OMNI, 0.015 (0.001) for MNIST, and 0.475 (0.003) for CIFAR10. The CME of DPMCL is the best among all the methods and all the data sets except MER on OMNI and ER on CIFAR10: on CIFAR10, ER demonstrates a 7% improvement in accuracy, and on OMNI, MER outperforms DPMCL by 1% on the CME scale. On the NTE scale [μ(σ_err)], DPMCL achieves 3.89 × 10^{-5} (0) for SINE, 0.189 (0.091) for OMNI, 0.003 (0) for MNIST, and 0.273 (0.008) for CIFAR10. On the NTE scale, the best-performing method is Naive, followed by OML and MER; this behavior can be observed in the trends in Fig. 1. DPMCL is better than all the other methods on all data sets except CIFAR10, where FTML is better (as observed earlier in Fig. 1) by 10%. However, this 10% improvement for FTML comes at the expense of an 18% drop in performance on the CME scale. Similarly, although ER exhibits a 3% improvement on the CME scale, DPMCL outperforms ER on the NTE scale by 22.7%. The method closest to DPMCL in terms of design and performance is MER. MER outperforms DPMCL on the NTE scale on all data sets except SINE and MNIST, where the performance is comparable. MER is expected to generalize very well to new tasks, as it is designed for this purpose: it regularizes the change in the weight parameters when a new task is introduced, since its update rules are derived from Nichol et al. (2018). On the CME scale, on the other hand, DPMCL outperforms MER on all data sets except OMNI. The reason is the parameter drift (Narendra and Annaswamy, 1987) caused by the weight regularization in MER: parameter drift can make the network unlearn previous experiences, especially when the new tasks are very similar to the older tasks (Narendra and Annaswamy, 1987). In OMNI the new tasks are distinctly different from the older ones, but in the remaining data sets the new tasks are similar. Such parameter drift can be avoided when the weights are updated proportionally to the cost (Narendra and Annaswamy, 1987), which is why DPMCL performs better on the CME scale. Overall, the results show that DPMCL achieves a balance between CME and NTE. This balance can be engineered through the choice of ζ and κ (a study is presented in Appendix B.6). We chose these parameters to achieve better performance on the CME scale; as a result, DPMCL outperforms all methods on the CME scale without a significant drop on the NTE scale.
All the other methods perform well on either the NTE or the CME scale, but not on both.

5 CONCLUSIONS

We introduced a dynamic-programming-based theoretical framework for meta continual learning. Within this framework, catastrophic forgetting and generalization, the two central challenges of meta continual learning, can be studied and analyzed methodically. The framework also allowed us to provide theoretical justification for intuitive and empirically proven ideas about generalization and catastrophic forgetting. We then introduced DPMCL, which systematically models and compensates for the trade-off between catastrophic forgetting and generalization. We also provided experimental results in a sequential learning setting showing that the framework is practical, with performance comparable to the state of the art in meta continual learning. In the future, we plan to extend this approach to reinforcement and unsupervised learning, and to study different architectures such as convolutional neural networks and graph neural networks.
1. What is the main contribution of the paper regarding meta-continual learning?
2. What are the strengths of the proposed approach, particularly in its theoretical grounding?
3. What are the weaknesses of the paper, especially regarding the experiments and writing?
4. How does the reviewer assess the significance of the terms (3) and (4) in the PDE framework?
5. What is the relationship between the proposed method and Meta-Experience Replay (MER)?
6. What are the limitations of the experimental setting and results?
7. How could the paper be improved in terms of clarity, polishing, and minor concerns?
Review
Review

Summary

The authors propose a theoretical framework for meta-continual learning (MCL) in both continuous time (CT) and discrete time (DT). Specifically, they treat the MCL problem of minimizing a catastrophic forgetting cost and a generalization (to new tasks) cost as a dynamical system. This system is described by a PDE involving four terms: the impact of the catastrophic forgetting cost (1), the impact of the generalization cost (2), the impact of the change in parameters on the total cost (3), and the impact of the change in the input distribution on the total cost (4). The authors then argue that methods like MAML, OML, and MCL focus on minimizing (1) and (2) via meta-training and meta-testing, respectively. Next, the authors derive a theoretically grounded MCL approach based on the introduced framework, namely Dynamic Programming-Based Meta Continual Learning (DPMCL). Concretely, their approach uses gradient-based meta-learning to optimize terms (1) and (2) in the outer loop and an approximation of terms (3) and (4) in the inner loop. The empirical section shows that DPMCL can offer some gains over previous methods like OML, CML, and ANML as well as ER.

Position

Continual learning (CL) definitely needs more theory and theoretically grounded methods. This is why I believe this paper is great. The theory and the methodology seem sound to me (although I haven't had the time to proof-read it all the way). As far as I know, the terms (3) and (4) have never been studied within CL, so this paper has the potential to change how we think about CL. My only concerns are about the experiments and the writing, explained next.

Concerns

Section 3 could be improved. I greatly appreciate the effort that the authors made to introduce more theory to CL. However, some CL readers might not be well-versed in PDEs. I think the authors could provide more intuitions and examples such that these readers can also enjoy the paper. Specifically, I think that terms (3) and (4), which are central to the paper, should be explained in more detail. Understanding how these terms evolve from eq. 3 to eq. 4 wasn't at all clear to me initially. I understand, however, that the authors need to work within the 8-page limit.

Meta-Experience Replay (MER) [1] is an important and missing reference. DPMCL is closer to MER than to all the aforementioned methods. I'm willing to increase my score if the authors can explain how the two methods relate, as well as add MER to the experimental section.

I was also disappointed that code wasn't released, although the authors have provided lots of pseudo-code in the Appendix.

It is also unclear if the experiments are in the task-aware or task-agnostic setting, i.e., are the methods allowed test-time task identifiers?

In Figure 2, some methods are not reported, or at least lie outside of the cropping. It seems strange to me, however, that they do not appear at task 0, i.e., when methods haven't incurred any forgetting yet. Specifically, why is Naive not in the MNIST and CIFAR10 figures on CME?

Finally, throughout the paper, the authors always refer to CL as MCL. Not all CL is MCL. E.g., why is the theoretical framework about MCL and not CL? It seems to me that CL is the problem and that meta-learning can be a solution for simultaneously addressing terms (1)-(4).

Superficially, the paper could also be polished. A non-exhaustive list is provided next.
Minor concerns

- Figure 1 is too involved to be at the beginning of the paper.
- Section 2: MCL is not "widely studied".
- Section 2: what does this mean: "where Ω is the compact set that implies that the parameters are initialized appropriately"?
- Theorem 1: typo here: ||J_θ̂(t)||, > 0.
- The authors should explain how MCL and ANML are used w/o pretraining, and not in the Appendix, because this is critical.
- The text shouldn't explicitly repeat the results of Table 1 (last paragraph of page 7).
- Figure 3: don't center the text.
- κ and k are confusing; choose another symbol.
- There are other typos; please use a grammar-checking software.
- "well-learned representation"? Maybe use better wording.

POST REBUTTAL

I appreciate the efforts and clarifications that were made. I think that the addition of MER solidifies the paper. R4 brought up an important issue (concern 3). I think the authors should look into it. I'm keeping my score constant, as it still reflects my opinion of the paper.

[1] Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. In ICLR, 2019.
To this end, we write V (t; θ̂(t)), as the cumulative cost (combination of catastrophic cost and generalization cost) that is integrated over [t,Γ). We therefore seek to minimize V (t; θ̂(t)) and obtain the optimal value, V ∗(t), by solving the problem V ∗(t) = minθ̂(τ)∈Ω:t≤τ≤Γ ∫ Γ τ=t J(τ ; θ̂(τ))d τ, (2) where Ω is the compact set required to initialize the parameters of the model. In a non-convex optimization problem, the compact set describes the boundaries of the solution space. It is also assumed that there is at least one minima of the optimization space in this set. If the parameters are initialized from within the compact set, the optimization may converge to the local solution within the set. MCL is a sequential decision making problem where the goal is to obtain a sequence of parameters in the interval (t,Γ) as described in Eq. (2). In MCL context, whenever a new tasks is observed at τ ∈ (t,Γ), we seek to make a decision (find a parameter set, θ̂(τ) ∈ Ω) such that θ̂(τ) is optimal for all the previous tasks and the new task in the interval [0, τ ]. Consequentially, for each new task, we obtain a new parameter set and the solution to the MCL problem is provided by a sequence of parameters (decisions). For making each of these decision, a task is counted exactly one and the contribution of cost is determined by the choice of γ. The optimization problem in Eq. (2) in its current form is intractable for two reasons. First, note that V ∗(t) is the optimal cost value over the complete lifetime of the model [0,Γ). Since we have only the data corresponding to all the tasks in the interval [0, t], solving Eq. (2) in its current form is intractable. Second, it is not feasible to maintain a parameter set for each τ ∈ (t,Γ). To circumvent this issue, we take a dynamic programming view of the MCL problem. We introduce a new theoretical framework where we model the learning process as a dynamical system and use Bellman’s principle of optimality to simplify the MCL problem. Furthermore, we derive conditions under which the learning process is stable and optimal, using tools from the optimal control literature (Lewis et al., 2012). 3 THEORETICAL META CONTINUAL LEARNING FRAMEWORK We will recast the problem defined in Eq. (2) using ideas from dynamic programming, specifically Bellman’s principle of optimality (Lewis et al., 2012). We treat the MCL problem as a dynamical system and describe the system using the following PDE: −∂V ∗(t) ∂t = minθ̂(t)∈Ω [ J(t; θ̂(t)) + JN (t; θ̂(t)) + ( V ∗ θ̂(t) )T ∆θ̂ ] + ( V ∗x(t) )T ∆x(t), (3) where V ∗(t) describes the optimal cost (the left-hand side of Eq. (2)) and (.)T refers to the transpose operator. The notation A(.) denotes the partial derivative of A with respect to (.), for instance, V ∗ θ̂(t) = ∂V ∗(t;θ̂(t)) ∂θ̂(t) The full derivation for Eq. (3) from Eq. (2) is provided in Appendix A.1.. Note that since y(t) is a function of x(t), the changes in the optimal cost due to y(t) are captured by( V ∗x(t) )T ∆x(t). Eq. (3) is also known as the Hamilton-Jacobi Bellman equation in optimal control (Lewis et al., 2012) with the key difference that there is an extra term to quantify the changes due to the new task. Intuitively, the PDE completely describes the dynamics of learning for the MCL problem in the period [0,Γ]. Specifically, the left hand side of Eq. (3), ∂V ∗(t) ∂t describes the change in the global solution of the MCL problem, V ∗(t), with respect to time t. 
The right hand side describes what are the different components of the MCL problem that impact this global solution. Note from the right hand side of Eq. (3), this impact is quantified by the four terms: the cost contribution from all the previous tasks J(t; θ̂(t)); the cost due to the new task JN (t; θ̂(t)); the change in the optimal cost due to the change in the parameters ( V ∗ θ̂(t) )T ∆θ̂; and the change in the optimal cost due to change in the input (introduction of new task) ( V ∗x(t) )T ∆x(t). Since, the PDE completely describes our problem, the solution to the MCL problem can be obtained by obtaining the parameter θ̂(t) that minimizes the right-hand side of Eq. (3). Specifically, we seek to solve the following optimization problem minθ̂(t)∈Ω [ J(t; θ̂(t)) + JN (t; θ̂(t)) + ( V ∗ θ̂(t) )T ∆θ̂ ] , s. t. ∂V ∗(t) ∂t + ( V ∗x(t) )T ∆x(t) = 0. (4) Solving the problem in Eq. (4) is equivalent to minimizing the impact of introducing a new task on the optimal cost. Observe that the intractable problem in Eq. (2) has been posed as a PDE constrained optimization Eq. (4). Note that in Eq. (2), the global solution is achieved when a series of parameters are obtained. On the other hand, in the optimization problem in Eq. (4) we obtain a parameter at time t to achieve the global solution under certain assumption on the parameters and data. Although, this solution is under assumptions on the data, these assumptions can be satisfied in practice. More details about the convergence of this optimization and assumptions involved is provided in Theorem. 1 in the next section. 3.1 ANALYSIS Our formalism has two critical elements: γ(t), which quantifies the contribution of each task to catastrophic forgetting cost, and the impact of the change in the input data distribution ∆x on learning, specifically while adapting to new tasks. We will analyze them next. Impact of γ(t) on catastrophic forgetting cost: Since, the cost in Eq. (1) is an integral of cost contributions from all the tasks, it is critical that this integral has a converging point. In other words, we seek to understand, when all the tasks provide non-zero values to the overall loss, is the cost is bounded and can the optimization problem be solved? The existence of the convergence point depends directly on the contribution of each task determined through γ(t). We therefore present Lemma 1 and Corollary 1 ( the complete statement and proofs can be found in Appendix A.2) In Lemma 1, we demonstrate that when all tasks in a MCL problem provide a nonzero cost, it is not possible to maintain equivalent performance on all the tasks. Specifically, we show that when γ(τ) = 1,∀τ that is the contribution of each task to the cost is greater than zero and equal, then the integral diverges when the number of tasks tend to ∞. This phenomenon has been observed empirically (Lin, 1992). However, by choosing γ(t) appropriately, we can control the catastrophic forgetting by selecting which tasks to forget. One reasonable solution is to give older tasks less priority and new tasks more priority, this is shown in Corollary 1 to provide a cost function that is both bounded and convergent. An example of such γ is γ(τ) = e−τ , τ ∈ [0,Γ]. However, any choice of γ(t) that will keep J(t; θ̂(t)) bounded is reasonable. The second most important component of our approach is the impact of change in the input (∆x) on learning. To substantiate this, we present Theorem 1 (the complete statement and proof can be found in Appendix A.2). 
Theorem 1 shows the convergence of a gradient-based solution to the MCL learning under assumptions on the input, the gradient and the learning rate. Specifically, we show through Lyapunov principles that J(t; θ̂(t)) (the cost on all the previous tasks) decreases as t→∞ and ultimately achieves a value less than β. In Theorem 1, there are four main assumptions. First is a consequence of Corollary 1 where the contributions of each task to the cost must be chosen such that the cost is bounded and convergent. Second is the assumption of a compact set Ω. This assumption implies that if a weight value initialized from within the compact set Ω, there will be a convergence to local minima. Third, we consider the condition ||Jθ̂(t)|| = 0 which is well known in the literature as the vanishing gradient problem (Pascanu et al., 2013). Impact of ∆x(t) on learning: The fourth assumption, that is ‖Jx‖ > 0 is important to the proof and directly explains how ∆x(t) can impact the validity of Theorem 1 (thus the convergence of the approach). Note that if ‖Jx‖ = 0, the value of the cost J will not change due to change in the input ∆x(t) > 0. Due to this, Theorem 1 will not hold and the learning will stagnate. To give an example, consider the MNIST dataset with a total of 10 classes and the solution of the MCL problem is to predict efficiently on all the 10 classes. Consider now the case when each task is created randomly to include exactly one class and each task is being shown to the model sequentially. If the model only experiences classes 1 through 5 and does not experience classes 5 through 10 (by virtue of improper sampling), the information provided to the model is not informative enough to perform well on all the 10 classes. Thus, although the model is perfect in predicting classes 1 through 5 such that ‖Jx‖ = 0, the MCL problem has not reached the global solution. Consequently, the learning process has stagnated. In control theory, this condition is known as persistence of excitation (Lewis et al., 1998). On the other hand, a large change in the input data distribution presents issues in the learning process as well. Note that for Theorem 1 to hold, α(t) > 0, therefore, ‖Jx‖‖∆x(t)‖ < 1. Let ‖∆x(t)‖ ≤ bx, where bx is the upper bound on the change in the input data distribution. If bx is large (going from predicting on images to understanding texts), the condition ‖Jx‖‖∆x(t)‖ < 1 will be violated, and our approach will be unstable. We can, however, adapt our model to the change in the input datadistribution ∆x(t) exactly if we can explicitly track the change in the input. This type of adaptation can be done easily when the process generating x(t) can be described by using an ODE or PDE. In traditional supervised learning settings, however, such a description is not possible. The issue highlights the need for strong representation learning methodologies where a good representation over all the tasks can minimize the impact of changes in ∆x(t) on the performance of the MCL problem (Javed and White, 2019; Beaulieu et al., 2020). Currently, in the literature, it is common to control the magnitude of ∆x(t) through normalization procedures under the assumption that all tasks are sampled from the same distribution. Therefore, for all practical purposes, we can choose 0 ≤ α(t) ≤ (β ||Jθ̂(t)||) −1. Connection to MAML, FTML, and their variants: The optimization problem in MAML (Finn et al., 2017), FTML (Finn et al., 2019) and other variants can be obtained from Eq. 
(3) by setting the third and the fourth terms to zero, which provides −∂V ∗(t) ∂t = minθ̂(t)∈Ω[J(t; θ̂(t)) + JN (t; θ̂(t)]. The MCL problem is split into three phases, meta training, meta testing and testing. In the meta training, we learn to generalize to new task. In meta testing, we learn from all the previous tasks, which aims at learning over p(T ) (which is similar to minimize catastrophic forgetting). Finally, in the testing phase, the network predicts on a set of held out tasks that are representative of the complete task distribution. To learn common features across all the tasks, we must optimize for the first term on the right-hand side in the equation above (where a second order derivative can be utilized). When the goal is to generalize to new tasks, one must optimize the second term on the right hand side of the equation above. With the choice of different architectures for the neural network, all of the approaches that build on MAML and FTML such as (Javed and White, 2019) and (Beaulieu et al., 2020) can be directly derived. For instance, to obtain the methodology in (Javed and White, 2019), we may do the following. First, the model architecture is described as a combination of representation learning network (RLN) and prediction learning network (PLN) such that θ̂ = [θ̂1θ̂2] and the vector space for θ̂1 and θ̂2 can be considered to denote PLN and RLN respectively. In the learning process, we will first pre-train a RLN as an encoder by optimizing the first term in Eq. (3.1) while a new PLN is randomly initialized for each new task. Next, we may freeze the RLN and update PLN to minimize the second term in Eq. (3.1) when new tasks are observed sequentially. Similar to Javed and White (2019), θ̂ = [θ̂1θ̂2] in Beaulieu et al. (2020) described as a combination of neuro-modulatory network (NLM) and the prediction network (PLN). The methodology in Beaulieu et al. (2020) can be obtained by following a two step learning procedure. First, NLM and PLN are pre-trained to minimize the right hand side of (3.1) with data from all the available tasks. Second, we train the prediction network on unseen tasks by optimizing the second term in the right hand side of (3.1) and fix the NLM. All these methods do not adopt the PDE formalism; the third and fourth terms in Eq. (3) are not explicitly included in MAML and FTML. On the other hand, the works in (Javed and White, 2019) and (Beaulieu et al., 2020) learn to represent p(T ), (the distribution over all the tasks) and require a pre-training phase. To the best of our knowledge, the only work where the third and fourth term are implicitly addressed is meta experience replay (MER) (Riemer et al., 2018). MER models the interference (forward transfer and backward transfer) due to the introduction of new tasks as a gradient alignment problem. Observe, from Eq. (3), the third term models the change in the global solution (backward transfer) with respect to change in the weights. Furthermore, the fourth term models the change in the global solution (forward transfer) with respect to change in the input. MER can be directly derived from Eq. (3) by optimizing the third and fourth term with samples from the experience replay memory, to minimize interference (both forward and backward). This is very similar to DPMCL with the key difference that, instead of approximating the optimal cost directly, MER approximates the angle between the gradients.Furthermore, this approximation is performed using Reptile (Nichol et al., 2018). 
Reptile is essentially similar to constraining the change in the weight such that the third term in Eq. (3) is zero. Although this regularizes the learning in the presence of new tasks such that forgetting is not large, the network can unlearn experiences due to parameter drift especially when the new tasks are very similar to order tasks (Narendra and Annaswamy, 1987). DPMCL is a new approach derived from the presented theoretical framework. In our DPMCL approach, we provide a clear and methodical procedure from Eq. (3) for deriving the weight update rule, providing us with a much more methodical process of addressing the impact of these terms and, by extension, the impact of key challenges in the MCL setting. 3.2 DYNAMIC PROGRAMMING-BASED META CONTINUAL LEARNING (DPMCL) As a consequence of Theorem 1, the update for the parameters is provided by α(t)Vθ̂(t)). Since V (t; θ̂(t)) is not completely known, we have to approximate this gradient. To derive this approx- imation, we first rewrite Eq. (3) as −∂V ∗(t;θ̂(t)) ∂t = minθ̂(t)∈Ω [ H(t; θ̂(t)) ] , which provides the optimization problem as θ̂ ∗ (t) = arg minθ̂(t)∈Ω [ H(t; θ̂(t))) ] , where H(t; θ̂(t))) = J(t; θ̂(t)) + JN (t; θ̂(t))+ ( V ∗ θ̂(t) )T ∆θ̂+ ( V ∗x(t) )T ∆x(t) is the CT Hamiltonian. The solution for this optimization problem (the updates for the parameters in the network) is provided when the derivative of the Hamiltonian is set to zero. First, we discretize the MCL problem setting. Let k be the discrete sampling instant such that t = k∆t, where ∆t is the sampling interval. Let a task T k at instant k be sampled from p(T ). Let T k = (X k,Yk) be a tuple, where X k ∈ Rn×p denotes the input data and Yk ∈ Rn×1 denotes the target labels (output). Let n be the number of samples and p be the number of dimensions. Let the parametric model be given as ŷ = g(h(x; θ̂1); θ̂2), where the inner map h(.) is treated as a representation learning network and g(.) is the prediction network. Let θ̂ = [θ̂1 θ̂2] be the learnable model parameters and the weight updates be given as θ̂(k + 1) = θ̂(k)− α(k) ∂ ∂θ̂(k) [ JN (k) + JP (k) + ( JPN (k)− JPN (k; θ̂(k + ζ)) )] (5) Our update rule has three terms (the terms inside the bracket). The first term depends on JN , which is calculated on the new task; the second term depends on JP and is calculated on all the previous tasks; the third term comprises JPN and is evaluated on a combination of the previous tasks and the new task. The first term minimizes the generalization cost. Together, the second and the third terms minimize the catastrophic forgetting cost. The first two terms can be obtained by measuring JP and JN directly. To obtain the third term in Eq. (5), we simplify the third and fourth term in Eq. (3). In Eq. 3, the third and the fourth term quantifies the change in the optimal cost due to the parameter updates and the change in the input respectively. Since, the boundary value for the optimal cost is the current performance of the model on a combination of all the previous tasks and the new task (how well the model performs on all the tasks till now), we use the current performance as an approximation of the optimal cost. This fact combined with the finite difference approximation results in the third term in Eq. (5). Updating with the third term in Eq. (5) regularizes the impact of the new task on the global solution. In Eq. (5) α(k) ≤ 1β‖J ˆθ(k))‖2+ , where β is a user-defined parameter and > 0 is a small value to ensure that the denominator does not go to zero. 
The derivation of the update rule and the discretization are presented in Appendix A.3. The value β is the theoretical threshold on the optimal cost and the way we choose beta as the reciprocal of the learning rate. Equipped with the gradient updates, we now describe the DPMCL algorithm. We define a new task sample, DN (k) = {Xk,Yk}, and a task memory (samples from all the previous tasks) DP (k) ⊂ ∪k−1τ=0T τ . We can approximate the required terms in our update rule Eq. 5 using samples (batches) from DP (k) and DN (k). The overall algorithm consists of two steps: generalization and catastrophic forgetting (see Algorithm 1 in Appendix B.4). DPMCL comprises representation and prediction neural networks parameterized by θ̂1 and θ̂2, respectively. Since, the vector space of θ̂ is written as a combination of θ̂1 and θ̂2. The theory developed on θ̂ extends to the vector space defined by the combination of θ̂1 and θ̂2. The solution in this space can be done achieved either by coordinate descent or by alternative minimization. We choose to update in an alternative minimization manner. For each batch bN ∈ DN (k), DPMCL alternatively performs generalization and catastrophic forgetting cost updates ρ times. The generalization cost update consists of computing the cost JN and using that to update θ̂1 and θ̂2; the catastrophic forgetting cost update comprises the following steps. First we create a batch that combines the new task data with samples from the previous tasks bPN = bP ∪ bN (k), where bP ∈ DP (k). Second, to approximate the term ( JPN (k; θ̂(k + ζ))),, we copy θ̂2 (prediction network) into a temporary network parameterized by θ̂B . We then perform ζ updates on θ̂B while keeping θ̂1 fixed. Third, using θ̂B(k + ζ), we compute JPN (k; θ̂B(k + ζ)) and update θ̂1, θ̂2 with JP (k) + (JPN (k) − JPN (k; θ̂B(k + ζ))). The inner loop with ζ is purely for the purpose of approximating the optimal cost. The rationale behind repeated updates to approximate JPN (k; θ̂(k + ζ)), is as follows. At every instant k, JPN (k) (the cost on all the previous tasks and the new task) is the boundary value for the optimal cost as the optimal cost can only be less than JPN (k) (minimum of the cost can only be less than or equal to the current value). Therefore, if we start from JPN (k) and execute a Markov chain (repeated gradient updates) for ζ steps, the end point of this Markov chain is the optimal value of JPN (k) (the cost will reduce with repeated updates using the same batch of data) at instant k, provided ζ is large enough. Furthermore, the difference between the cost at the starting point and the end point of this chain provides us with a value that should be minimized such that the cost JPN (k) will reach the optimal value for the MCL problem at instant k. To execute this Markov chain, we perform repeated updates on a copy of θ̂2 (prediction network) denoted as θ̂B . The goal of the markov chain procedure is purely to approximate the optimal cost, that is, how well, the current state of the model can predict on all the tasks that have been observed till now (previous tasks and the new task). The performance of the model depends on the prediction network and the representation network. Since, we seek to maintain a global solution, the pursuit is well served by learning a robust representation over all the tasks. Therefore, we want to observe, can the current representation allow us to reach the global solution over all the tasks? 
Therefore, we estimate the difference $J_{PN}(k) - J_{PN}(k; \hat{\theta}_B(k+\zeta))$ corresponding to the current representation: we treat the output of the representation network as input and measure the optimal cost by updating only a copy of $\hat{\theta}_2$. This process allows us to measure the optimal cost as a function of the representation network and to train the representation network toward minimizing the optimal cost. We repeat this alternating update process for each data batch in the new task. Once all the data from the new task is exhausted, we move to the next task. 3.3 RELATED WORK Existing MCL methods can be grouped into three categories: (1) dynamic architectures and flexible knowledge representation (Sutton, 1990; Rusu et al., 2016; Yoon et al., 2017); (2) regularization approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018); and (3) memory/experience replay (Lin, 1992; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019). Flexible knowledge representations seek to maintain a state of the whole dataset and require computationally expensive mechanisms (Yoon et al., 2017). Regularization approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018) attempt to minimize the impact of new tasks (changes in the input data distribution) on the parameters of the model, which involves a significant trial-and-error process. Memory/experience-replay-driven approaches (Lin, 1992; Chaudhry et al., 2019; Lopez-Paz and Ranzato, 2017) can address catastrophic forgetting but do not generalize well to new tasks. In recent literature, the most comprehensive methodology enabling the study of the MCL problem was introduced in Finn et al. (2017), where the authors introduced an additional term into the cost function (the gradient of the cost function with respect to the previous tasks). However, this method requires all the data to be known prior to the start of the learning procedure. To obviate this constraint, an online MAML approach was introduced in Caccia et al. (2020). That approach does not explicitly minimize catastrophic forgetting but focuses on fast online learning. In contrast with Finn et al. (2017), our method can learn sequentially as new tasks are observed. Although sequential learning was possible in Finn et al. (2019), it was highlighted in Caccia et al. (2020) that there is an inherent trade-off between memory requirements and catastrophic forgetting, which can be addressed by learning a representation over all the tasks as in Javed and White (2019); Beaulieu et al. (2020). Similar to Javed and White (2019); Beaulieu et al. (2020), our method allows a representation to be learned over the distribution of all the tasks $p(T)$. However, both the representation and the training model are learned sequentially in DPMCL, and we do not require a pre-training step. Our approach is the first comprehensive theoretical framework based on dynamic programming that is model agnostic and can be adapted to different MCL settings in both CT and DT. Although theoretical underpinnings were provided in Finn et al. (2019) and Flennerhag et al. (2019), the focus there was to provide structure for parameter updates, not to holistically model the overall learning dynamics as is done in our theoretical framework. The key ideas in this paper have been adapted from dynamic programming and optimal control theory; additional details can be found in Lewis and Vrabie (2009).
4 EXPERIMENTS We use four continual learning data sets: incremental sine wave (regression on 50 tasks, SINE); split-Omniglot (classification on 50 tasks, OMNI); continuous MNIST (classification on 10 tasks, MNIST); and CIFAR10 (classification on 10 tasks, CIFAR10). All these data sets have been used in (Finn et al., 2019; 2017; Javed and White, 2019). We compare DPMCL with Naive (training is always performed on the new task without any explicit catastrophic forgetting minimization); Experience Replay (ER) (Lin, 1992) (training is performed by sampling batches of data from all the tasks, previous and new); follow the meta learner (FTML) (Finn et al., 2019); online meta-continual learning (OML) (Javed and White, 2019); neuro-modulated meta learning (ANML) (Beaulieu et al., 2020); and meta experience replay (MER) (Riemer et al., 2018). To keep consistency with our computing environment and the task structure, we implement a sequential online version (Finn et al., 2019) of all these algorithms in our environment, where each task is exposed to the model sequentially. For ANML and OML, we run a pre-training phase for each new task: for each task, we first train the RLN/NLM with the data for the new task; next, we train the complete network (RLN/NLM and PLN) with the data from the new task. For any particular data set, we use the same model hyper-parameters (number of hidden layers, number of hidden layer units, activation functions, learning rate, etc.) across all implementations. For fair comparison, for any given task we also fix the total number of gradient updates a method can perform (see Appendix B.1-B.3 for details on the data sets and hyper-parameters). For each task, we split the given data into training (60%), validation (20%), and testing (20%) sets. Methods such as ANML, FTML, and OML follow the two-loop training strategy from FTML: an inner loop and an outer loop. The training data for each task is used for the inner loop, whereas the validation data is used in the outer loop. The testing data is used to report accuracy metrics. We measure catastrophic forgetting and generalization through the cumulative error (CME), given by the average error on all the previous tasks, and the new task error (NTE), given by the average error on the new task, respectively. For regression problems, these are computed from the mean squared error; for classification problems, they are given as $(1 - \frac{\mathrm{Acc}}{100})$, where Acc refers to the classification accuracy. For the cost function, we use the mean squared error for regression and categorical cross-entropy for classification. We use a total of 50 runs (repetitions) with different random seeds and report the mean $\mu$ and the standard error of the mean ($\sigma_{error} = \sigma / \sqrt{50}$, where $\sigma$ is the standard deviation and 50 is the number of repetitions). We report $\sigma_{error}$ only when it is greater than $10^{-3}$; otherwise, we indicate a 0 (see Appendix B.4-B.5 for implementation details). 4.1 RESULTS We first analyze the CME and NTE as each task is incrementally shown to the model. We record the CME and NTE on the testing data (averaged over 50 repetitions) at each instant when a task is observed. The results are shown in Fig. 1, where each trajectory is smoothed with a Gaussian filter with standard deviation 2. Unlike the other methods, DPMCL achieves low error with respect to both CME and NTE. ANML and OML perform poorly on CME because of the lack of a learned representation (as we do not have a pre-training phase to learn an encoder).
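For concreteness, the sketch below shows one way the reported statistics could be computed; the accuracy values are placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical per-repetition accuracies (%) for one method on one data set;
# in the paper these come from 50 runs with different random seeds.
acc = np.random.uniform(80, 95, size=50)

errors = 1.0 - acc / 100.0                       # classification error, 1 - Acc/100
mu = errors.mean()
sigma_err = errors.std() / np.sqrt(len(errors))  # standard error of the mean
reported = sigma_err if sigma_err > 1e-3 else 0.0  # report 0 below 1e-3
print(f"CME: {mu:.3f} ({reported:.3f})")
```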
The performance of DPMCL is better than that of MER, FTML, and ER on all data sets except CIFAR10, where ER is comparable to DPMCL. The poor performance of Naive is expected because it is trained only on the new task data and thus incurs catastrophic forgetting. DPMCL generalizes well to new tasks, and consequently its performance is better than that of FTML, ER, and ANML on all the data sets except CIFAR10. As expected, Naive has the lowest NTE. In the absence of a well-learned representation, OML exhibits behavior similar to Naive's and is able to quickly generalize to new tasks. MER also behaves similarly to Naive and incurs very low NTE. For CIFAR10, FTML achieves an NTE lower than that of DPMCL. ER struggles to generalize to new tasks; however, its performance is better than ANML's, which exhibits the poorest performance due to the absence of a well-learned representation. The CME and NTE values for all the data sets and methods are summarized in Table 1 in Appendix B.6. DPMCL achieves a CME [$\mu(\sigma_{err})$] of $1.12 \times 10^{-5}(0)$ for SINE, $0.175(0.007)$ for OMNI, $0.015(0.001)$ for MNIST, and $0.475(0.003)$ for CIFAR10. The CME of DPMCL is the best among all methods and data sets except for MER on OMNI and ER on CIFAR10. On CIFAR10, ER demonstrates a 7% improvement in accuracy; on OMNI, MER outperforms DPMCL with a 1% improvement on the CME scale. On the NTE [$\mu(\sigma_{err})$] scale, DPMCL achieves $3.89 \times 10^{-5}(0)$ for SINE, $0.189(0.091)$ for OMNI, $0.003(0)$ for MNIST, and $0.273(0.008)$ for CIFAR10. On the NTE scale, the best-performing method is Naive, followed by OML and MER; this behavior can be observed in the trends in Fig. 1. DPMCL is better than all the other methods on all the data sets except CIFAR10, where FTML is better (as observed earlier in Fig. 1) by 10%. However, this 10% improvement for FTML comes at the expense of an 18% drop in performance on the CME scale. Similarly, although ER exhibits a 3% improvement on the CME scale, DPMCL outperforms ER on the NTE scale by 22.7%. The method closest to DPMCL in terms of design and performance is MER. MER outperforms DPMCL on the NTE scale on all the data sets except SINE and MNIST, where the performance is comparable. MER is expected to generalize well to new tasks, as it is designed for this purpose: it regularizes the change in the weight parameters when a new task is introduced, with update rules derived from Nichol et al. (2018). On the other hand, on the CME scale, DPMCL outperforms MER on all data sets except OMNI, where MER is better. The reason is the parameter drift (Narendra and Annaswamy, 1987) caused by the weight regularization present in MER. Parameter drift can make the network unlearn previous experiences, especially when the new tasks are very similar to the older tasks (Narendra and Annaswamy, 1987). In OMNI, the new tasks are distinctly different from the older ones, but in the rest of the data sets the new tasks are similar. Such parameter drift can be avoided when the weights are updated proportionally to the cost (Narendra and Annaswamy, 1987), which is why DPMCL performs better on the CME scale. Overall, the results show that DPMCL achieves a balance between CME and NTE. This balance can be engineered through the choice of $\zeta$ and $\kappa$ (the study is presented in Appendix B.6). We have chosen these parameters to achieve better performance on the CME scale. As a result, we show that DPMCL outperforms all methods on the CME scale without a significant drop on the NTE scale.
All other methods perform well on either the NTE or the CME scale, but not on both. 5 CONCLUSIONS We introduced a dynamic-programming-based theoretical framework for meta continual learning. Within this framework, catastrophic forgetting and generalization, the two central challenges of meta continual learning, can be studied and analyzed methodically. Furthermore, the framework allowed us to provide theoretical justification for intuitive and empirically proven ideas about generalization and catastrophic forgetting. We then introduced DPMCL, which systematically models and compensates for the trade-off between catastrophic forgetting and generalization. We also provided experimental results in a sequential learning setting showing that the framework is practical, with performance comparable to the state of the art in meta continual learning. In the future, we plan to extend this approach to reinforcement and unsupervised learning. Moreover, we plan to study different architectures such as convolutional neural networks and graph neural networks.
1. What is the main contribution of the paper regarding a theoretical framework for meta/continual learning?
2. What are the strengths and weaknesses of the proposed formulation of the online learning process?
3. Do you have any concerns or questions regarding the derivation and the problems raised in the review?
4. How does the reviewer assess the significance of the paper's content, particularly its novel approach to meta-learning and continual learning?
5. Are there any typos or minor issues in the paper that need to be addressed?
Review
This paper would be a nice theoretical framework for meta/continual learning in the online setting if it were further polished. My evaluation may be limited, as I only followed through the derivation and located the problems that I could see, but did not try to verify every detail of the derivation.

Pros:
- Formulating the online learning process via a value function is new.
- Both meta learning and continual learning are considered in the proposed formulation.
- The derivation would look good to me if the problems raised in the cons were resolved by the authors.
- Analyzing the online learning process under this framework provides a new theoretical angle on meta-learning and continual learning.

Cons:
- It confuses me to write Eq. (3) with the lhs a partial derivative with respect to $t$. I think the real goal is to perform the minimization. A suggestion would be to write it in an objective-function-centric way, namely keep the lhs on the rhs as in Supp (7), and remove the terms that do not depend on $\theta$.
- In Supp (4), the sentence before (4) does not convince me that $V(t+\Delta t, \hat{\theta}(t+\Delta t))$ can be replaced with $V^*(t+\Delta t, \hat{\theta}(t+\Delta t))$. As the minimization is over $(J+V)$, it is not necessarily true that $\arg\min_\theta (J+V)$ equals $\arg\min_\theta (V)$. (I write (3) as $J+V$ and (4) as $J+V^*$ for short.)
- Lemma 1 seems not useful. As I understand it, $\gamma(\tau)$ is used in the range $0 < \tau < t$; to penalize forgetting on more recent data, we need $\gamma(\tau) \to 1$ as $\tau \to t$, with $\gamma(\tau)$ increasing on $[0, t]$. I wonder why it is assumed in Lemma 1 that $t$ goes to infinity for $\gamma(t)$? A quick look at the hyperparameters in the supplementary and the algorithm pseudocode does not tell me how $\gamma$ is actually set.
- In the discretized case, it is very different whether we make the prediction before or after the parameter update. If we predict before the update, it measures forward transfer / generalization; if we predict after the update, it measures how well we learn on this new data. How would you distinguish these two cases in your continuous-time derivation?
- In the paragraph below Theorem 1, it is claimed that $|J_x| = 0$ means the learning process will stagnate; I don't agree with this. Because $J$ measures the integral from $0$ to $t$, it should increase when new $x$ arrives; if its derivative with respect to $x$ is $0$, that is even better. Even if we consider $\gamma$, $|J_x| = 0$ could also result from all $\ell(\tau)$, $\tau \in [0, t]$, changing but canceling out. Am I misunderstanding something at this point?
- The link with MAML is not explained enough, i.e., what are the support/query sets mapped to in the time-continuous setting? Maybe the answer to point 4 would also address this question.
- The theory part does not separate $\theta_1$ and $\theta_2$, but in the implementation this separation comes out a bit abruptly. The final paragraph on page 5 gives a reason, but I think it does not justify why the derivation cannot be used as it is.

Typos:
- Before Section 3: "optimal using tools" -> "optimize using tools"
- Supp B4, Eq. ??